23andMe Case Study _ Life Sciences _ AWS.txt
23andMe Innovates Drug and Therapeutic Discovery with HPC on AWS

Genomics and biotechnology company 23andMe provides direct-to-customer genetic testing, giving customers valuable insights into their genetics. 23andMe needed more scalability and flexibility in its high-performance computing (HPC) to manage multiple petabytes of data efficiently. The company had been using an on-premises solution but began using Amazon Web Services (AWS) in 2016 to store important data. In 2021, the company made a full migration to the cloud, a process that took only 4 months. Since adopting AWS HPC services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and AWS Batch, which lets developers, scientists, and engineers easily and efficiently run hundreds of thousands of batch computing jobs on AWS, 23andMe has increased its scalability, flexibility, and cost optimization.

Benefits of AWS
- Migrated smoothly to the cloud within 4 months
- Increased scalability, supporting a compute job running on more than 80,000 virtual CPUs
- Increased efficiency, completing a 3-week production workload 33% ahead of schedule
- Removed compute resource contention among researchers
- Optimized costs

About 23andMe
Headquartered in California, 23andMe is known for its at-home DNA collection kits. The company also uses its database of genetic information to further its understanding of biology and therapeutics and to develop new drugs and therapies. Founded in 2006, 23andMe has collected an enormous amount of data and generated millions of lines of code for its research and therapeutics. It uses this data for regression analysis, genome-wide association studies, and general correlation studies across datasets. The genetic testing market has been gaining momentum because of the increased prevalence of genetic diseases, better public awareness of the benefits of early detection, and falling costs of genetic sequencing over the past 16 years.

Embracing the Cloud for Secure Data Storage
23andMe initially used an on-premises facility, but as its data storage and compute needs grew, the company began looking to the cloud for greater scalability and flexibility. The company also sought to reduce the human operating costs of facility maintenance and to accelerate its adoption of new hardware and technology. In 2016, the company began using Amazon Simple Storage Service (Amazon S3), an object storage service that offers scalability, data availability, security, and performance. "If we care about a piece of data, we store it in Amazon S3," says Arnold de Leon, program manager in charge of cloud spending at 23andMe. "It is an excellent way of securing data with regard to data durability." 23andMe uses the Amazon S3 Intelligent-Tiering storage class to automatically migrate data to the most cost-effective access tier when access patterns change.
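The case study doesn't show any code, but storing an object directly in the S3 Intelligent-Tiering class is a one-parameter choice at upload time. Here is a minimal sketch using the AWS SDK for Python (boto3); the bucket and key names are invented for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object straight into the Intelligent-Tiering storage class;
# S3 then moves it between access tiers automatically as access patterns change.
with open("results.parquet", "rb") as body:
    s3.put_object(
        Bucket="example-genomics-archive",            # hypothetical bucket
        Key="studies/gwas/run-0421/results.parquet",  # hypothetical key
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )
```

A lifecycle rule can apply the same transition to objects that already exist in the bucket.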
As it started using cloud services, 23andMe tried a hybrid solution, running workloads in its data center and on AWS concurrently. This solution provided some scalability but came with the associated costs of migrating data back and forth between the on-premises data center and the cloud. To achieve better cost optimization while also gaining more flexibility and scalability, 23andMe decided to migrate fully to AWS in 2021.

Running HPC on AWS
23andMe used the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud migration program based on the experience that AWS has in migrating thousands of enterprise customers to the cloud. Using AWS MAP, 23andMe could achieve a smooth migration in only 4 months. "What AWS MAP was offering us was the ability to do a fast, massive shift," says de Leon. "Usually when you do that, it's very expensive, but AWS MAP solved that problem." 23andMe migrated everything out of its data center and into the cloud on AWS. The company could migrate its existing environment with virtually no changes, and over time it started incorporating more AWS services into its solution. One year after migrating to AWS, as the AWS MAP engagement ends for 23andMe, it is achieving equal or better price performance because of the team's diligence in adopting AWS services.

Managing scientists' file-based home directories presented another challenge. To solve this issue, 23andMe turned to Weka, an AWS Partner. The WekaIO parallel file system is functional, cost-effective, and compatible with Amazon S3, which helped 23andMe's internal team implement changes with no disruption to the user experience. When the migration was complete, 23andMe started taking advantage of AWS services for HPC like Amazon EC2 C5 Instances, which deliver cost-effective high performance for running advanced compute-intensive workloads. It chose this type of Amazon EC2 instance because it was the closest analog to its previous computing resources.

23andMe quickly discovered the benefits of having a variety of Amazon EC2 instance types available for its use. "We have the entire menu of Amazon EC2 offerings available to us, and one way to achieve efficiency is finding an optimal fit for resource use," says Justin Graham, manager of an infrastructure engineering group at 23andMe. As of 2022, the company uses many instance types flexibly, including Amazon EC2 X2i Instances, the next generation of memory-optimized instances for memory-intensive workloads. 23andMe also uses AWS Batch to rightsize and match resources when determining which instance types to use, which helps with price-performance optimization.
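The article doesn't detail 23andMe's job definitions, but the rightsizing idea is visible in the AWS Batch API: a job declares only the vCPUs and memory it needs, and Batch schedules it onto whatever instance types the compute environment permits. A hedged boto3 sketch, with all names invented:

```python
import boto3

batch = boto3.client("batch")

# Submit a job with explicit resource requirements; AWS Batch picks
# suitable instances from the compute environment, which is what makes
# instance-type matching and rightsizing possible.
response = batch.submit_job(
    jobName="gwas-chr21",             # hypothetical job name
    jobQueue="hpc-ondemand-queue",    # hypothetical queue
    jobDefinition="gwas-analysis:3",  # hypothetical job definition
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "16"},
            {"type": "MEMORY", "value": "65536"},  # MiB
        ]
    },
)
print("Submitted job:", response["jobId"])
```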
Optimizing Value
23andMe can scale on demand to match compute capacity to actual workloads and then scale back down. "To give a sense of scale, we had a peak compute job running with over 80,000 virtual CPUs operating at once," says de Leon. In addition, using Amazon EC2 instances has removed resource contention for 23andMe's researchers. "Recently, we had a 3-week production workload finish 33 percent ahead of schedule. Since migrating to AWS, our ability to deliver compute resources to our researchers is now unmatched," says Graham.

While enjoying these benefits of using HPC services on AWS, 23andMe has not had to compromise on its initial spending goals. "Our goal was to keep our costs the same but gain flexibility, capability, and value. Savings is less about the bottom line and more about what we gain for what we spend," says de Leon. 23andMe has achieved increases in cost optimization by using a variety of AWS services, including Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, as well as Amazon EC2.

Exploring Future Possibilities with Flexibility on AWS
23andMe is all-in on AWS and aims to continue pursuing price-performance optimization for its workloads. The company is looking for further ways to optimize costs, exploring services like the AWS Graviton processor, which delivers excellent price performance for cloud workloads running in Amazon EC2. The company is finding opportunities to be cost optimal while retaining the resources it needs for on-demand computing. "We're about 10 months past migration, and the eventual goal is to drive a faster process from idea to validation. Our researchers are faster and more efficient, and our hope is to see a big research breakthrough," says de Leon.
36 new or updated datasets on the Registry of Open Data_ AI analysis-ready datasets and more _ AWS Public Sector Blog.txt
AWS Public Sector Blog

36 new or updated datasets on the Registry of Open Data: AI analysis-ready datasets and more
by Erin Chu | on 13 JUL 2023 | in Analytics, Announcements, Artificial Intelligence, AWS Data Exchange, Education, Open Source, Public Sector, Research

The AWS Open Data Sponsorship Program makes high-value, cloud-optimized datasets publicly available on Amazon Web Services (AWS). AWS works with data providers to democratize access to data by making it available to the public for analysis on AWS; to develop new cloud-native techniques, formats, and tools that lower the cost of working with data; and to encourage the development of communities that benefit from access to shared datasets. Through this program, customers are making over 100 PB of high-value, cloud-optimized data available for public use. The full list of publicly available datasets is on the Registry of Open Data on AWS, and the datasets are now also discoverable on AWS Data Exchange. This quarter, AWS released 36 new or updated datasets. As July 16 is Artificial Intelligence (AI) Appreciation Day, the AWS Open Data team is highlighting three unique datasets that are analysis-ready for AI. What will you build with these datasets?

Three AI analysis-ready datasets on the Registry of Open Data

NYUMets Brain Dataset from the NYU Langone Medical Center is one of the largest cranial-imaging datasets in existence and the largest dataset of metastatic cancer, containing over 8,000 brain MRI studies, clinical data, and treatment records from cancer patients. Over 2,300 images have been annotated for metastatic tumor segmentations, making NYUMets: Brain a valuable source of segmented medical imaging. An AI model for segmentation tasks, as well as a longitudinal tracking tool, are available for NYUMets through MONAI.

RACECAR Dataset from the University of Virginia is the first open dataset for full-scale and high-speed autonomous racing. RACECAR is suitable for exploring issues in localization, object detection and tracking (lidar, radar, and camera), and mapping that arise at the limits of operation of an autonomous vehicle. You can get started with RACECAR using the SageMaker Studio Lab notebook linked from the registry.

Aurora Multi-Sensor Dataset from Aurora Operations, Inc. is a large-scale multi-sensor dataset with highly accurate localization ground truth, captured between January 2017 and February 2018 in the metropolitan area of Pittsburgh, PA, USA. The de-identified dataset contains rich metadata, such as weather and semantic segmentation, and spans all four seasons; rain, snow, overcast, and sunny days; different times of day; and a variety of traffic conditions. This data can be used to develop and evaluate large-scale, long-term approaches to autonomous vehicle localization, and it is applicable to many research areas, including 3D reconstruction, virtual tourism, HD map construction, and map compression.
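Datasets on the Registry of Open Data live in public Amazon S3 buckets, so they can usually be browsed without an AWS account by disabling request signing. A minimal boto3 sketch; the bucket name and prefix below are placeholders, since each dataset's registry page documents its real location and layout:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client for reading public open-data buckets.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(
    Bucket="example-open-dataset",  # placeholder; see the dataset's registry page
    Prefix="v1/",
    MaxKeys=10,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```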
Full list of new or updated datasets

These three datasets join 33 other new or updated datasets on the Registry of Open Data in the following categories.

Climate and weather:
- ECMWF real-time forecasts from the European Centre for Medium-Range Weather Forecasts
- NOAA Wang Sheeley Arge (WSA) Enlil from the National Oceanic and Atmospheric Administration (NOAA)
- ONS Open Data Portal from the National Electric System Operator of Brazil
- Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters from the Mobile Robotics & Intelligence Laboratory (MORIN Lab)
- Sup3rCC from the National Renewable Energy Laboratory
- EURO-CORDEX – European component of the Coordinated Regional Downscaling Experiment from Helmholtz Centre Hereon / GERICS

Geospatial:
- Astrophysics Division Galaxy Segmentation Benchmark Dataset from the National Aeronautics and Space Administration (NASA)
- Astrophysics Division Galaxy Morphology Benchmark Dataset from NASA
- ESA WorldCover Sentinel-1 and Sentinel-2 10m Annual Composites from the European Space Agency
- Korean Meteorological Agency (KMA) GK-2A Satellite Data from the Korean Meteorological Agency
- NASA / USGS Controlled Europa DTMs from NASA
- NASA / USGS Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Targeted DTMs from NASA
- Nighttime-Fire-Flare from Universities Space Research Association (USRA) and NASA Black Marble
- PALSAR-2 ScanSAR Tropical Cyclone Mocha (L2.1) from the Japan Aerospace Exploration Agency (JAXA)
- PALSAR-2 ScanSAR Flooding in Rwanda (L2.1) from JAXA
- Solar Dynamics Observatory (SDO) Machine Learning Dataset from NASA

Life sciences:
- Extracellular Electrophysiology Compression Benchmark from the Allen Institute for Neural Dynamics
- Long Read Sequencing Benchmark Data from the Garvan Institute
- Genomic Characterization of Metastatic Castration Resistant Prostate Cancer from the University of Chicago
- Harvard Electroencephalography Database from the Brain Data Science Platform
- The Human Sleep Project from the Brain Data Science Platform
- Integrative Analysis of Lung Adenocarcinoma in Environment and Genetics Lung cancer Etiology (Phase 2) from the University of Chicago
- National Cancer Institute Imaging Data Commons (IDC) Collections from the Imaging Data Commons
- Indexes for Kaiju from the University of Copenhagen Bioinformatics Center
- Molecular Profiling to Predict Response to Treatment (phs001965) from the University of Chicago
- NYUMets Brain Dataset from the NYU Langone Medical Center
- SPaRCNet data: Seizures, Rhythmic and Periodic Patterns in ICU Electroencephalography from the Brain Data Science Platform
- The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset from the University of California San Francisco
- UK Biobank Linkage Disequilibrium Matrices from the Broad Institute
- VirtualFlow Ligand Libraries from Harvard Medical School

Machine learning:
- Aurora Multi-Sensor Dataset from Aurora Operations, Inc.
- RACECAR Dataset from the University of Virginia
- Exceptional Responders Initiative from Amazon
- Amazon Seller Contact Intent Sequence from Amazon
- Open Food Facts Images from Open Food Facts
- Product Comparison Dataset for Online Shopping from Amazon

What are people doing with open data?

Amazon Location Service launched Open Data Maps for Amazon Location Service, a data provider option for the Maps feature based on OpenStreetMap. Oxford Nanopore Technologies benchmarked their genomic basecalling algorithms, which decode DNA or RNA sequences for analysis, on 20 different Amazon Elastic Compute Cloud (Amazon EC2) instances.
HuggingFace hosted a Bio x ML Hackathon that challenged teams to use AI tools, open data, and cloud resources to solve problems at the intersection of the life sciences and artificial intelligence.

How can you make your data available?

Looking to make your data available? The AWS Open Data Sponsorship Program covers the cost of storage for publicly available, high-value, cloud-optimized datasets. We work with data providers who seek to:

- Democratize access to data by making it available for analysis on AWS
- Develop new cloud-native techniques, formats, and tools that lower the cost of working with data
- Encourage the development of communities that benefit from access to shared datasets

Learn how to propose your dataset to the AWS Open Data Sponsorship Program, and learn more about open data on AWS.

Read more about open data on AWS:
- Largest metastatic cancer dataset now available at no cost to researchers worldwide
- Creating access control mechanisms for highly distributed datasets
- 33 new or updated datasets on the Registry of Open Data for Earth Day and more
- How researchers can meet new open data policies for federally-funded research with AWS
- Accelerating and democratizing research with the AWS Cloud
- Introducing 10 minute cloud tutorials for research

Erin Chu
Erin Chu is the life sciences lead on the Amazon Web Services (AWS) open data team. Trained to bridge the gap between the clinic and the lab, Erin is a veterinarian and a molecular geneticist, and she spent the last four years in the companion animal genomics space. She is dedicated to helping speed time to science through interdisciplinary collaboration, communication, and learning.
54gene _ Case Study _ AWS.txt
54gene Equalizes Precision Medicine by Increasing Diversity in Genetics Research Using AWS

Learn how 54gene, a life sciences startup, is curating diverse datasets to unlock genetic insights in Africa and globally using AWS.

About 54gene
Based in Nigeria, 54gene is a genomics startup that works with pharmaceutical and research partners to study genetic diseases and identify treatments. It is focused on addressing the need for diverse datasets from underrepresented African populations.

Benefits of AWS
- Analyzed datasets of 30–40 TB in a few days
- Reduced costs
- Facilitated experimentation
- Achieved flexible, scalable, and reliable cloud infrastructure
- Curated datasets that increase diversity in global genetic research

Genomics research studying the global population is crucial for learning how genomic variation impacts diseases and how data can be used to improve the well-being of all populations. Despite the diverse genetic makeup of people in Africa, the continent is vastly underrepresented in global genetic research, with less than 3 percent of genomic data coming from African populations. The mission of health technology startup 54gene is to bridge this gap to deliver precision medicine to Africa and the global population. The company built a proprietary solution called GENIISYS on Amazon Web Services (AWS) to curate genetic, clinical, and phenotypic data from Africa and other diverse populations and to generate insights that can lead to new treatments and diagnostics. Using multiple AWS services, including AWS ParallelCluster, an open-source cluster management tool that makes it simple to deploy and manage high performance computing (HPC) clusters on AWS, GENIISYS can scale to cost-effectively support massive datasets and power precision medicine for historically underserved demographics.

Opportunity | Using AWS ParallelCluster to Build a Scalable, Cost-Effective Genomics Research Solution for 54gene
Nigeria-based 54gene collaborates with local research institutions and global pharmaceutical partners to study the many ethnolinguistic groups within Nigeria, better understand the diversity present on the continent, and uncover new biological insights. Its GENIISYS solution includes a state-of-the-art biorepository that stores highly curated clinical, phenotypic, and genetic data from the African population to facilitate research for a new wave of therapeutics. "Through GENIISYS, we wanted to create a gateway between genomics insights from Africa and research in other countries," says Ji He, senior vice president of technology at 54gene.

To effectively collect and store genomic data and connect it to phenotypic information (such as clinical and demographic data), the startup needed a flexible cloud-based solution that could scale while still optimizing costs. "When we're performing genotyping or whole genome sequencing, we generate huge amounts of data, and we have to process it at a high rate of throughput," says Esha Joshi, bioinformatics engineer at 54gene. "We chose AWS because of its reliability and scalability and the fact that we have to pay only for what we use. That's important for a startup because it can be difficult to anticipate computing and storage needs."

Solution | Analyzing Datasets as Large as 30–40 TB in a Few Days
54gene's integrative digital solution has three major components: the clinical operations to enroll patients for collecting clinical and phenotypic data, the biobank that stores biospecimens, and the downstream genomic analysis, which uses technologies like genotyping and whole genome sequencing to generate insights. This large-scale genomic analysis needs access to robust HPC solutions to process a high throughput of data. "Our current architecture, which is exclusively on AWS, strikes a good balance between cost effectiveness and flexibility," says Joshi. "We have varying sizes and designs of computing architecture to make our processes cost effective, and it has been really nice." Using AWS ParallelCluster, 54gene can customize the kind of HPC that it wants to use depending on the type and size of the data coming in. The startup has one queue with compute-optimized nodes for handling terabytes of data and a separate queue for smaller tasks, like running short Python scripts. The AWS team provided support throughout the migration and design of GENIISYS. "AWS listens carefully to our questions and needs and works diligently to provide additional resources," says He.

To store and visualize its datasets, 54gene uses Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale databases in the cloud. "On Amazon RDS, we're able to store metadata from our three major components of research and query our datasets efficiently," says Joshi.
The startup also uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, to power its data analytics workflows. Using different HPC configurations, 54gene can analyze datasets as large as 30–40 TB in just a few days. And even while it's achieving a throughput of more than 5 TB per week, the startup is reducing its costs on AWS. "Another factor that made us choose AWS is that AWS has a great presence on the African continent, including the close physical proximity of its data centers to our business units there," says He.

54gene is using its data analytics infrastructure on AWS to drive research into specific diseases. For example, the startup is working to identify what genetic factors might lead to more serious cases of sickle cell disease in Nigeria and to tailor treatments to patients based on disease severity.

54gene stores all its genomic data using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. "Another great aspect of working on AWS is that we can configure data storage to be cost effective," says Joshi. The company uses Amazon S3 Lifecycle policies to automatically migrate data to Amazon S3 Glacier storage classes, which are purpose-built for data archiving, to minimize storage costs.
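As a rough illustration of that pattern, the following boto3 sketch attaches a lifecycle rule that transitions objects under a prefix into Glacier storage classes; the bucket name, prefix, and day counts are invented:

```python
import boto3

s3 = boto3.client("s3")

# Archive raw sequencing data to cut storage costs: Glacier after 90 days,
# Deep Archive after a year. All names and periods are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-genomics-raw-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-raw-reads",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```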
To conveniently access data stored in Amazon S3 for processing on its HPC clusters, the startup uses Amazon FSx for Lustre, which provides fully managed shared storage built on a popular high-performance file system. And 54gene's computational scientists, many of whom had trained on traditional on-premises setups, adjusted easily to AWS. "What's nice about AWS is that we are able to replicate a familiar environment for our computational scientists with minimal cloud training," says Joshi. "AWS ParallelCluster is a great example of that."

54gene is already seeing the benefits of AWS as it develops and scales new features of GENIISYS. "We are doing a lot of trial and error," says Joshi. "On AWS, we can start small with novel ideas and deploy a lot of small applications, and the AWS team helps us determine which particular interface best suits us."

Outcome | Continuing to Increase Representation for African Genetic Data in Global Health Research
With the flexibility and cost effectiveness of the cloud, 54gene is better able to study the effects of diseases on previously underrepresented African genetic data. The startup can also seamlessly integrate its highly curated clinical, phenotypic, and genetic data within one solution and build capacity for further research initiatives focused on targeted populations in Africa or specific disease areas. "We have the flexibility to do almost anything on AWS," says Joshi. "From running quick scripts to genotyping in a matter of hours to analyzing terabytes of data efficiently, this flexibility has been really beneficial."
6sense Case Study.txt
6sense Insights Inc. Improves Scalability and Accelerates Speed to Market by Migrating to Amazon EKS

About 6sense Insights Inc.
6sense Insights Inc.'s Revenue AI reinvents the way companies create, manage, and convert pipelines to revenue by capturing anonymous buying signals, targeting the right accounts, and recommending channels and messages to boost performance. Headquartered in San Francisco, California, 6sense delivers data analytics, sales insights, and other predictions so that business-to-business revenue teams can better understand their buyers and customers.

Benefits of AWS
- Improved workload throughput by 400%
- Processes 1–2 TB of data per day
- Delivers insights to customers 65% faster
- Improved speed to market for new applications and features
- Improved developer productivity
- Frees employees' time to focus on high-value tasks and innovation
- Facilitates a fully managed solution

Searching for Scalable Pipeline Orchestration
6sense Insights Inc. (6sense) needed to effectively scale and manage its data pipelines so that it could better support its growth. With 6sense Revenue AI, a leading platform for predictable revenue growth, the company generates actionable insights for business-to-business sales and marketing teams. This service relies on artificial intelligence, machine learning, and big data processing, requiring 6sense to run complex workloads and process terabytes of data per day. When its open-source pipeline orchestration solution could no longer support these workloads, 6sense began exploring alternative solutions and chose to implement fully managed services from Amazon Web Services (AWS).

In 2014, the company began using Apache Mesos, an open-source solution that manages compute clusters, to orchestrate its data pipeline frameworks. "As we grew, we encountered several limitations on Apache Mesos," says George Liaw, director of infrastructure engineering at 6sense. "We could only offer compute resources to one framework at a time, which slowed our processes. We also experienced scaling issues."

Searching for a more scalable solution, 6sense began to explore Kubernetes, an open-source container orchestration system, to improve its data pipelines. In 2018, the company migrated its application and API services to two Kubernetes clusters and began using kOps, a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. Although a containerized architecture improved agility for 6sense, kOps was not fully managed, which required the 6sense team to perform significant day-to-day operations and management. "Using kOps, we experienced way too much maintenance overhead," says Liaw. "We realized that if we could reduce these manual tasks, our team could focus its time on serving the customer instead of managing Kubernetes."

In 2019, 6sense chose to invest in AWS Enterprise Support, which provides concierge-like service to support companies in achieving outcomes and finding success in the cloud. The AWS Enterprise Support team helped the company realize that it could alleviate these issues by migrating to Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises. "For 6sense, Amazon EKS was almost a drop-in replacement that magically worked better," says Liaw.
Improving Speed, Agility, and Innovation Using Amazon EKS
In September 2021, 6sense began migrating its remaining workloads from legacy solutions running on Apache Mesos and kOps to Amazon EKS. The company migrated the majority of its application and API service workloads to Amazon EKS within the first week and developed a stable and usable pipeline orchestration solution by the end of 2021. "Once we started running Amazon EKS clusters, we unlocked valuable capabilities," says Liaw. "We could test clusters with more flexible configurations without worrying about their stability." By December 2021, the company was running 7–8 clusters on Amazon EKS and had completed 80 percent of its migration.

Using Amazon EKS, 6sense has seen a 400 percent improvement in workload throughput, giving it the ability to process 1–2 TB of data per day and growing. With this speed, 6sense can support highly complex workloads and deliver valuable insights to its customers 65 percent faster.

6sense's AWS-powered solution is not only extremely fast but also highly scalable. "We can scale a cluster on Amazon EKS almost infinitely to run as many things in parallel as possible," says Premal Shah, senior vice president of engineering and infrastructure at 6sense. "We no longer need to worry about how much we can run per hour." The company also relies on Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which let it take advantage of unused EC2 capacity at up to a 90 percent discount compared to On-Demand prices, to run large workloads at significant cost savings and to accelerate workloads by running parallel tasks. By using Amazon EC2 Spot Instances, 6sense can provision the capacity it needs to support its future expansion while optimizing for costs.
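The case study doesn't say how 6sense provisions its Spot capacity (in a Kubernetes setup this would normally go through node groups or an autoscaler), but the underlying EC2 request is simple. A hedged boto3 sketch with placeholder AMI, subnet, and instance type:

```python
import boto3

ec2 = boto3.client("ec2")

# Request Spot capacity with the same API used for On-Demand instances,
# plus a market option asking for spare capacity at a discount.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="r5.4xlarge",        # placeholder instance type
    MinCount=1,
    MaxCount=10,
    SubnetId="subnet-0abc1234",       # placeholder subnet
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
print([i["InstanceId"] for i in resp["Instances"]])
```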
6sense has also vastly accelerated its development speed by migrating to AWS. On Apache Mesos, the company was limited in its ability to build, test, and deploy new data pipelines because of limits on container throughput. On Amazon EKS, 6sense can run up to 300 percent more containers per hour, and it can run the same number of Docker containers in approximately 50 percent of the time that its previous solution took. By achieving this level of speed and scalability, 6sense has improved developer productivity and accelerated its speed to market for new applications and features.

Because Amazon EKS is a fully managed Kubernetes service, 6sense no longer needs to focus on managing or operating its Kubernetes clusters. With this time savings, its team can dedicate time to improving the customer experience. "On AWS, we are able to increase developer velocity, reduce unnecessary red tape, and serve our customers as best as we can," says Liaw. "We can push out new features, insights, and products to them as quickly as possible. The faster we can innovate to serve our customers, the better the experience is for everybody—including our team."

Continuing to Enhance Scalability on AWS
By migrating to fully managed Amazon EKS clusters, 6sense can effectively scale and manage its data pipeline, which has accelerated its speed in delivering insights to its customers. The company plans to further improve its scaling capabilities using Karpenter, an open-source Kubernetes cluster autoscaler built alongside AWS. On AWS, 6sense has freed its employees to focus on innovation, and the company will continue to use AWS services to develop new, value-generating solutions. "At 6sense, we are able to move quickly and innovate on AWS without being held back," says Liaw.
Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group _ Case Study _ AWS.txt
Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group

Learn how NatWest Group used Amazon SageMaker to create personalized customer journeys with secure machine learning.

About NatWest Group
NatWest Group is a British banking company that offers a wide range of services for personal, business, and corporate customers. It serves 19 million customers throughout the United Kingdom and Ireland.

Opportunity | Using Amazon SageMaker to Reduce Time to Value for NatWest Group
To remain competitive in the fast-paced financial services industry, NatWest Group is under pressure to deliver increasingly personalized and premier services to its 19 million customers. The bank has built a variety of workflows to explore its data and build machine learning (ML) solutions that provide a bespoke experience based on customer demands. However, its legacy processes were slow and inconsistent, and NatWest Group wanted to accelerate its time to business value with ML.

NatWest Group is one of the largest banks in the United Kingdom. Formally established in 1968, the company has origins dating back to 1727. NatWest Group seeks to use its rich legacy data to innovate and personalize its personal, business, and corporate banking and insurance services. To deliver these solutions at a faster pace, the bank needed a standardized ML approach. "We didn't have a consistent way to access our data, generate insights, or build solutions," says Andy McMahon, head of MLOps for data innovation at NatWest Group. "Our customers felt these challenges because it took a much longer time to derive value than we wanted."

The bank turned to Amazon Web Services (AWS) and adopted Amazon SageMaker, a service that data scientists and engineers use to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. The bank also engaged AWS Professional Services, a global team of experts that can help companies realize their desired business outcomes when using AWS, to prepare for the project. During a series of workshops, NatWest Group and AWS Professional Services worked together to identify areas of improvement within the company's ML landscape and created a strategy for development. After crafting a comprehensive plan, the teams began working on the project in July 2021.

Solution | Achieving an Agile DevOps Culture Using AWS ML Solutions
In April 2022, NatWest Group launched an enterprise-wide, centralized ML workflow powered by Amazon SageMaker. And because the bank already had a presence on Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance, this was the service of choice for its data lake migration. With simpler access to data and powerful ML tools, its data science teams have built over 30 ML use cases on Amazon SageMaker in the first 4 months after launch. These use cases include a solution that tailors marketing campaigns to specific customer segments and an application that automates simple fraud detection tasks so that investigators can focus on difficult, higher-value cases.
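The case study stays at the process level, but a typical SageMaker training workflow of the kind described looks roughly like the sketch below, using the SageMaker Python SDK. The training script, S3 path, role ARN, and framework choice are all assumptions for illustration, not NatWest Group's actual stack:

```python
from sagemaker.sklearn.estimator import SKLearn

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Train a model from a script: SageMaker provisions a managed training
# instance, runs train.py against data in S3, then tears the instance down.
estimator = SKLearn(
    entry_point="train.py",     # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    role=role,
)
estimator.fit({"train": "s3://example-bank-datalake/marketing/train/"})  # placeholder path
```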
NatWest Group has adopted a number of features on Amazon SageMaker to streamline its ML workflows with the security and governance required of a major financial institution. In particular, NatWest Group adopted Amazon SageMaker Studio, a single web-based visual interface where it can perform all ML development steps. Because Amazon SageMaker Studio is simple to use and configure, new users can quickly set it up and start building ML models sooner. NatWest Group employees now have fast and simple access to the data and tools that they need to build and train ML models. "We modernized our technology stack, simplified data access, and standardized our governance and operational procedures in a way that maintains the right risk behaviors," says McMahon. "Using Amazon SageMaker, we can go from an idea on a whiteboard to a working ML solution in production in a few months versus 1 year or more." NatWest Group launched its first offerings in November 2022, reducing its time to value from 12–18 months to only 7.

To accelerate its employees' workflows, NatWest Group uses AWS Service Catalog, which organizations use to create, organize, and govern infrastructure-as-code templates. Before the bank adopted this solution, data scientists or engineers would need to contact a centralized team if they wanted to provision an ML environment, and it would take 2–4 weeks before the infrastructure was ready to use. Now, NatWest Group can launch a template from AWS Service Catalog and spin up an ML environment in just a few hours. "If you want to launch an environment for data science work, it could take 2–4 weeks. On AWS, we can spin up that environment within a few hours. At most, it takes 1 day," says Greig Cowan, head of data science for data innovation at NatWest Group.
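In AWS Service Catalog terms, that self-service launch is a provision_product call (or the equivalent console action) against an approved product. A boto3 sketch; the product IDs and parameters are invented:

```python
import boto3

sc = boto3.client("servicecatalog")

# Launch an approved "ML environment" product from the catalog instead of
# filing a ticket with a central platform team.
resp = sc.provision_product(
    ProductId="prod-abcd1234efgh",             # hypothetical product ID
    ProvisioningArtifactId="pa-ijkl5678mnop",  # hypothetical version ID
    ProvisionedProductName="data-science-env-team-a",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "ml.t3.medium"},  # hypothetical parameter
        {"Key": "TeamName", "Value": "team-a"},
    ],
)
print("Launch record:", resp["RecordDetail"]["RecordId"])
```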
Its data teams can begin working on projects much sooner and have more time to focus on building powerful ML models. This self-service environment not only empowers data science teams to derive business value faster, but it also encourages consistency. "As a large organization, we want to make sure anything that we build is scalable and consistent," says McMahon. "On AWS, we have standardized our approach to data using a consistent language and framework, which can be rolled out across different use cases."

To equip its data teams with the skills that they need to use these tools, NatWest Group has encouraged its employees to embark on cloud learning journeys. It has hosted over 720 AWS Training courses for its data science teams to learn new skills, such as applying best practices for DevOps and building a data lake on AWS. Additionally, several employees obtained AWS Certifications, which are industry-recognized credentials that validate technical skills and cloud expertise. By offering these opportunities, NatWest Group has equipped its data science teams to build powerful, predictive ML models on AWS at a faster pace.

Outcome | Deploying Innovative Services at Scale Using Amazon SageMaker
On AWS, NatWest Group can quickly launch personalized products and services to meet customer demands, boost satisfaction, and anticipate future needs. The bank's data science teams are empowered to deliver significant business value with streamlined workflows and a self-service environment. In fact, NatWest Group is on track to double its number of use cases to 60 and achieve a 3-month time to value. "There's so much that we've gained from using our data intelligently," says Cowan. "On AWS, we have opened up many new avenues and opportunities for us to detect fraud, tailor our marketing, and understand our customers and their needs."

The bank will continue to explore and create new, innovative solutions on AWS. For example, NatWest Group will soon introduce an ML offering that automatically sets prices for its products, improving the intelligence and efficiency of the pricing process.

Benefits of AWS
- Reduced time to value from 12–18 months to 7
- Reduced time to provision an environment from 2–4 weeks to hours
- 30+ ML use cases built in 4 months
- 720+ AWS courses completed by data science teams
- Promotes a self-service environment

To learn more, visit aws.amazon.com/financial-services/machine-learning/.
Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform _ AWS Partner Network (APN) Blog.txt
AWS Partner Network (APN) Blog

Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform
by Dhiraj Thakur and Murali Gowda | on 27 JUN 2023 | in Analytics, Artificial Intelligence, AWS Partner Network, Customer Solutions, Intermediate (200), Thought Leadership

By Dhiraj Thakur, Solutions Architect – AWS
By Murali Gowda, Advisor Architect – DXC Technology

Analytics are an essential tool that helps companies accelerate their business outcomes, but the current approach to analytics taken by most companies limits their effectiveness. Rapid changes in business intelligence and analytics solutions mean companies are continually over-investing in solutions that rapidly age. They're spending more time reevaluating, redesigning, and redeploying technologies than applying them to the business. They're also making new commitments to expand their IT footprint at a time when most want to reduce their total estate.

Analytics can unlock new value from data, and customers want to make faster decisions and gain greater competitive advantage. To benefit from the full power of analytics, customers need a solution they can deploy quickly and use to improve the effectiveness of their existing business intelligence over time, avoiding investment in tools that become obsolete before they're deployed. With DXC Technology's Analytics and AI Platform (AAIP), an analytics platform as a service built on Amazon Web Services (AWS), you can develop and deploy new analytics applications in weeks. In this post, we walk through the features and benefits of AAIP, which helps you look further and deeper, gaining business insights from data you could not previously access or manage. DXC Technology is an AWS Premier Tier Services Partner and Managed Service Provider (MSP) that understands the complexities of migrating workloads to AWS in large-scale environments and the skills needed for success.

Platform Overview

Historically, several challenges held customers back from adopting advanced analytics:

- Siloed data and operational data stores hindered data access and discovery, limiting insight generation.
- Data duplicated across multiple systems led to data quality issues.
- Data ingestion, data integration, and data quality all had to be managed from a single, centralized location.
- Gaining approval on enterprise data models and entity relationship models from multiple business units was slow.
- Regulatory and compliance issues.
- Complexity, upfront costs, and heavy development effort, compounded by skills shortages.
- Limited on-premises options.
- Administrative overhead.

DXC Analytics and AI Platform is an analytics solution that rapidly improves the effectiveness and impact of your existing business intelligence landscape. AAIP addresses these challenges and eliminates the need to make continuous investments that expand the IT footprint and increase maintenance and upgrade costs.

Figure 1 – DXC Analytics and AI Platform (AAIP).

The bottom layer of the graphic above is DXC's managed service offering, where DXC manages the platform. The next layer shows DXC's flexible deployment options, including hybrid cloud, on-premises, and AWS deployments. Bundled with DXC's managed service, AAIP takes the guesswork and complexity out of analytics with a fully managed, industrialized solution that incorporates the latest technologies.
DXC follows AWS best practices for policies, architecture, and operational processes built to satisfy the requirements of enterprise-grade security and to protect data and IT infrastructure hosted on AWS. DXC provides the core industrialized platform, complemented by AWS products and platform extensions from a rich services catalog; custom options are also available. Customers can take advantage of rapid advances in artificial intelligence (AI), automation, and core analytics technologies offered by AWS. DXC's solution accelerators, design patterns, and reference architectures speed up implementation, allowing you to quickly access the right data and develop solutions that target the most critical needs. Using AAIP, customers can develop and deploy analytics apps that are more user-friendly and self-service oriented, on a pay-as-you-go basis.

Solution Features and Benefits

AAIP is a hardened, software-defined architecture that combines standard security and compliance controls with best-of-breed tooling to provide platform as a service (PaaS). The following diagram shows the benefits AAIP offers as a service.

Figure 2 – AAIP solution features and benefits.

Benefits of AAIP include:

- Scale: A platform that scales as you grow. Seamlessly works with on-premises or cloud vendors, with multi- and hybrid-cloud deployment options.
- Support and maintenance: Leverages pre-built monitoring and infrastructure configuration.
- Security: The enterprise-grade platform is built with high standards in security, including protection against the most frequently occurring infrastructure (layer 3 and 4) attacks, such as distributed denial of service (DDoS) and reflection attacks. The platform is HITRUST certified and uses AWS Shield, a managed service that protects applications running on AWS against DDoS attacks.
- Patching and scanning: Managed services functions include analytics workloads, service management, data backup/recovery, software patches/upgrades, continuous vulnerability management, and incident management. Operating system and security patches are reviewed and applied periodically. New instances are scanned prior to implementation, and anti-virus scanning is implemented.
- Data visualization tools: Robust data visualization tools and algorithms for advanced analytics and ML.
- Logging and monitoring: Provisioned resource tracking for continuous monitoring of account-related activity across AWS infrastructure.
- Standard and selectable AWS and third-party tooling: Preconfigured ServiceNow for incident management and simplified workload monitoring. When an incident occurs, Amazon Simple Notification Service (Amazon SNS) notifies users and triggers the corresponding ServiceNow incident (see the sketch below).
- Data pipelines: Batch, event-driven, and API-driven data pipeline and workflow engines.

The following diagram shows how AAIP features support end-to-end cloud analytics adoption.

Figure 3 – AAIP offering overview.

The black box in Figure 3 shows DXC's offerings in the data analytics platform, including decades of extensive industry experience, an enterprise-grade security and platform, and accelerators. The grey box shows DXC's best-practice guidance to customers for rapidly building the platform for their analytics needs. The purple box shows benefits to customers.
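The post doesn't show AAIP's internals, but the SNS-to-ServiceNow handoff noted in the tooling item above typically starts with a publish to a topic whose subscriber opens the incident. A hedged boto3 sketch; the topic ARN and payload are invented:

```python
import boto3
import json

sns = boto3.client("sns")

# Publish a monitoring alert to an SNS topic; a subscriber on the topic
# (for example, an HTTPS endpoint or a Lambda function) can then create
# the corresponding ServiceNow incident.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:aaip-incidents",  # placeholder
    Subject="AAIP pipeline failure",
    Message=json.dumps(
        {"severity": "P2", "source": "ingest-job-42", "detail": "batch load failed"}
    ),
)
```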
AAIP provides distinct advantages to customers, including:

- Accelerated time to business value: DXC solution accelerators offer a T-shirt-sizing-based platform, ingestion of the right data, and rapid execution of targeted business use cases.
- End-to-end managed services: DXC's managed services draw on a deep pool of technical, business, and industry experts with field-tested methodologies, processes, and tools delivered per an agreed service-level agreement (SLA). This includes monitoring, incident management, centralized logging, endpoint security, cloud security posture management, compliance, scanning, and threat detection.
- Solution accelerators: DXC offers accelerators such as reference architectures, design patterns, deployment automation, blueprints, and runbooks that cover initial setup, onboarding, and ongoing operation with adherence to SLAs.
- Full-service suite: A full set of analytics services to assist in achieving analytics insight goals, supporting delivery of advanced analytics (AI/ML, natural language processing) and actionable insights to business stakeholders.

Conclusion

In this post, you learned about the features and benefits of using DXC Technology's Analytics and AI Platform (AAIP) on AWS. In an environment of competitive pressure emerging from AI and analytics, AAIP enables companies to unleash the potential of data in real-world, practical applications. AAIP is a proven analytics platform built from AWS-native services that enables users to scale their business seamlessly and significantly reduce go-to-market time. DXC offers standardized services to advise and coach people, change organizational structures, and implement and run analytics platforms at scale.

DXC Technology – AWS Partner Spotlight
DXC Technology is an AWS Premier Tier Services Partner that understands the complexities of migrating workloads to AWS in large-scale environments, and the skills needed for success.
Accelerating customer onboarding using Amazon Connect _ NCS Case Study _ AWS.txt
NCS Accelerates Customer Onboarding by Moving its Contact Center to Amazon Connect

NCS Group (NCS) is a multinational information technology company that serves governments and enterprises across Asia Pacific. To improve agility and onboard customers faster, NCS migrated its on-premises Service Desk call center to Amazon Connect, including Contact Lens for Amazon Connect for call analytics. Using Amazon Web Services (AWS), NCS onboards new customers twice as fast, has reduced operations costs, and has gained the agility to innovate new features with native artificial intelligence (AI) and machine learning (ML) capabilities.

About NCS Group
NCS Group, a subsidiary of Singtel Group, is a leading IT consulting firm that partners with governments and enterprises in the Asia Pacific region to advance communities through technology. It was established in 1981 and has 12,000 employees across the region.

Benefits of AWS
- 3 weeks customer onboarding time
- 30% reduction in system operations costs
- On-demand scaling supports variable, volatile workloads
- Data sovereignty complies with strict data residency requirements

Opportunity | Transforming NCS Service Desk to be More Agile
Since 1981, NCS has been providing technology solutions and consulting services to government agencies and enterprises across the Asia Pacific region. The group employs 12,000 people, many of them working with the NCS Service Desk. "Through NCS Service Desk, we support our customers' application, infrastructure, and end-user desktop needs," explains Jessica Cheung, practice lead for EUC and Service Desk at NCS Group.

As part of an ongoing digital transformation, NCS sought to onboard new Service Desk customers faster by moving away from the solution's on-premises IT environment. "The deployment time for new customers could take eight weeks because of software implementation and hardware procurement, and that was too long. We wanted technology that was agile, modular, cost effective, and easy to scale as we grew," says Sivabalan Murugaya, lead consultant for EUC and Service Desk at NCS Group. On-demand scaling was a key point, as Service Desk call volumes are highly dynamic; from one day to the next, the group might need 100 additional service center agents. NCS Service Desk serves healthcare organizations and local governments, making data sovereignty another critical consideration for a new Service Desk IT environment. NCS was also looking to implement technology that would facilitate efficient innovation with native AI capabilities.

NCS, an AWS Partner, had been using AWS services to support various applications and IT environments for several years. The NCS Service Desk team wanted to expand its use of AWS by migrating to Amazon Connect, a pay-as-you-go contact center offering with virtually unlimited scalability. "Amazon Connect met all our requirements, and we knew it would allow us to add innovative features on top of it in the future to meet our customers' needs," Cheung says.
Onboarding new customers to Amazon Connect is likewise quicker and easier: instead of six to eight weeks, onboarding now takes just three weeks.

Outcome | Investing in New Features and AI Innovation

Recently, NCS has started using AI and ML technologies such as Contact Lens for Amazon Connect, which the company now deploys for contact center analytics. “With Contact Lens for Amazon Connect, we can measure the quality of our customer calls by generating analytical reports within hours of a call,” says Sivabalan Murugaya. To further improve its customer experience, NCS has integrated a survey in Amazon Connect to gauge customer sentiment after each call; one way to record such survey results is sketched below. “Our customer satisfaction scores have been very high, which is encouraging,” says Cheung.

NCS has accelerated onboarding time, improved customer communications, and reduced costs by migrating its Service Desk contact center to Amazon Connect. The group is funneling savings back into the business and can more efficiently deploy staff to value-added projects. “We can invest more in our development efforts now,” Cheung says. “As a result, our team is spending more time exploring new features and innovations to serve our customers.” NCS is also evaluating Amazon Comprehend to derive new insights from text within its knowledge base. Cheung concludes, “We are confident that with Amazon Connect and other AWS services, we can keep providing a better contact center solution for our global customers.”
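Post-call survey responses can be attached to the original contact record as contact attributes, where they sit alongside Contact Lens analytics. The snippet below uses the UpdateContactAttributes API; the instance ID and attribute keys are illustrative assumptions, not NCS’s actual schema.

    import boto3

    connect = boto3.client("connect", region_name="ap-southeast-1")

    def record_survey_score(initial_contact_id: str, score: int, comment: str) -> None:
        """Attach a post-call survey result to the original contact record."""
        connect.update_contact_attributes(
            InstanceId="11111111-2222-3333-4444-555555555555",  # placeholder instance ID
            InitialContactId=initial_contact_id,
            Attributes={                      # hypothetical survey attribute names
                "surveyScore": str(score),    # attribute values must be strings
                "surveyComment": comment[:512],
            },
        )

    record_survey_score("99999999-8888-7777-6666-555555555555", 5, "Agent resolved my issue fast")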
The group can scale its Service Desk solution up or down on demand and has reduced system operations costs by 30 percent. By using various data centers within the AWS Asia Pacific Region, it also complies with customers’ stringent data sovereignty requirements.

Benefits
3 weeks: customer onboarding time, down from six to eight weeks
30% reduction in system operations costs
On-demand scaling supports variable, volatile workloads
Data sovereignty: complies with strict data residency requirements

About NCS Group
NCS Group, a subsidiary of Singtel Group, is a leading IT consulting firm that partners with governments and enterprises in the Asia Pacific region to advance communities through technology. It was established in 1981 and has 12,000 employees across the region.

AWS Services Used
Amazon Connect is an omnichannel cloud contact center that allows you to set up a contact center in minutes and scale it to support millions of customers.
Contact Lens for Amazon Connect, a feature of Amazon Connect, provides conversational analytics and quality management capabilities, powered by machine learning, that help you understand and classify the sentiment, trends, and compliance of your conversations.
Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text, helping businesses simplify document processing, classify documents, redact personally identifying information, and more.
Accelerating Migration at Scale Using AWS Application Migration Service with 3M Company _ Case Study _ AWS.txt
Accelerating Migration at Scale Using AWS Application Migration Service with 3M Company (2023)

Overview

Global manufacturer 3M Company (3M) migrated 2,200 applications to AWS in 24 months with minimal downtime, improving its scalability and resiliency and optimizing costs to save millions of dollars.

Opportunity | Working Alongside AWS Professional Services to Get to Migration at Scale for 3M Company

3M is a global manufacturing company, producing products from adhesives to medical supplies to industrial abrasives, all with the mission of using science to improve lives and solve tough customer challenges. With corporate operations in 70 countries and sales in over 200, 3M needed greater scalability than was available from its on-premises data centers. Long lead times for procuring and deploying hardware made it difficult for 3M to meet the demands of existing workloads and slowed down new projects, and the company required greater stability and sustainability, neither of which the aging data centers could provide.

3M began looking for a cloud-hosting solution to run its applications, including 11 different enterprise resource planning (ERP) environments. 3M Enterprise IT selected AWS as its preferred cloud services provider and kicked off its 3M Cloud Transformation Program in 2020 to complete a migration at scale. “The promise of the cloud—and what we achieved after we migrated to AWS—was the ability to flexibly scale and deploy with a very short lead time,” says Kyle Hammer, director of cloud transformation at 3M. To complete the migration, 3M began working alongside AWS Professional Services, a global team of experts that helps organizations realize desired business outcomes using AWS. “Working alongside AWS Professional Services went very well,” says Hammer. “This migration would not have been successful in the time that we had allotted without the strong collaboration from AWS and AWS Professional Services.”
Solution | Migrating 2,200 Applications in 24 Months Using AWS Application Migration Service

The 3M Cloud Transformation Program began with 8 months of designing and planning, followed by 24 months of migration at scale. 3M completed the program with minimal downtime in 51 waves, delivering 2,200 existing enterprise applications to AWS in addition to hundreds of other new instances and applications that were in development in that time frame. “We worked alongside AWS Professional Services to develop a solid plan that had the appropriate governance and controls in place so that we could review, flex, build, and scale to meet the migration needs,” says Hammer. “Through that methodology, we could adjust the technical processes and react quickly to keep the program on track and continue to deliver our migration at scale.” The end state of the migration included over 6,200 instances on Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure and resizable compute capacity for virtually any workload, and petabytes of data migrated to other AWS services.

To perform the migration, 3M used tools such as AWS Application Migration Service, which minimizes time-intensive, error-prone manual processes by automating the conversion of source servers to run natively on AWS, and AWS DataSync, a secure, online service that automates and accelerates moving data between on-premises storage and AWS storage services. Using these tools, 3M could replicate its workloads from on premises to AWS with minimal changes. Some workloads required more creative, flexible workarounds, and using AWS tools, the team could address those challenges as they arose. “We were able to maintain the pace that we needed even with those diverse workloads across many different systems,” says Hammer. After each wave of the migration, the company also took time to thoroughly evaluate how the migration was going. “We captured data in each wave, and that data would help remediate challenges in subsequent migrations,” says Hammer. “That process was helpful for us to mitigate risk and improve the delivery.”

The migration at scale moved at significant speed. At one point, the team moved 500 applications in around 12 hours. Perhaps even more impressively, 3M’s largest and most critical workload, its ERP solution comprising hundreds of terabytes of data and hundreds of applications, was cut over in under 20 hours. That solution was migrated to SAP on AWS, which offers proven approaches backed by expert experience supporting SAP customers in the cloud. “The speed and consistency in delivering our workloads to the cloud was truly a benefit of 3M working alongside AWS in our migration at scale,” says Hammer. “When we looked at the challenge that was presented to us—30 months or fewer to migrate nearly all our enterprise workloads from our aging data center to the cloud—the combined effort between 3M, AWS Professional Services, and other AWS engineering teams made that possible. We were able to hit our milestones and migrate our workloads; we reduced risks and, in many cases, introduced better capabilities using AWS, which provided the scalability and flexibility and resiliency that we didn’t have in the data center.” A sketch of how a cutover wave can be driven programmatically follows.
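AWS Application Migration Service exposes its replication and cutover lifecycle through an API, so waves like 3M’s can be scripted rather than clicked through in the console. The sketch below uses the boto3 mgn client (DescribeSourceServers and StartCutover); the wave tag and the readiness check reflect the service’s documented data model, but the wave orchestration itself is an illustrative assumption, not 3M’s tooling.

    import boto3

    mgn = boto3.client("mgn", region_name="us-east-1")

    def cutover_wave(wave_tag: str) -> None:
        """Launch cutover instances for every caught-up source server in a wave."""
        ready, token = [], None
        while True:
            kwargs = {"filters": {}}
            if token:
                kwargs["nextToken"] = token
            page = mgn.describe_source_servers(**kwargs)
            for server in page["items"]:
                tags = server.get("tags", {})
                replication = server.get("dataReplicationInfo", {})
                # Only cut over servers in this wave whose replication is caught up.
                if tags.get("Wave") == wave_tag and \
                   replication.get("dataReplicationState") == "CONTINUOUS":
                    ready.append(server["sourceServerID"])
            token = page.get("nextToken")
            if not token:
                break
        if ready:
            mgn.start_cutover(sourceServerIDs=ready)
            print(f"Cutover started for {len(ready)} servers in wave {wave_tag}")

    cutover_wave("wave-17")  # hypothetical wave label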
Outcome | Developing Modern, Cloud-First Applications

Now that 3M has completed its migration at scale, the company is delivering new applications with a cloud-first, serverless focus. 3M is planning to move its databases into AWS-native database services, such as Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. 3M is automating server builds in the cloud using the AWS interface; users within 3M can now build and deploy resources on AWS in minutes, compared to weeks or even months on premises. 3M is also using automation to correctly size compute instances for workloads and to schedule compute only when needed. “On AWS, we no longer need to run many of our systems 24 hours a day, like we used to do in our data center,” says Hammer. “That’s resulted in millions of dollars in compute savings from what we initially migrated to the cloud.” 3M is also optimizing its storage and backups, saving hundreds of thousands of dollars in its storage rightsizing efforts alone. “3M is driving to increase our presence with digital products and enterprise. We’re continuing to develop products that are supporting and solving challenges for our customers, and those will be developed in the cloud on AWS,” says Hammer.

Benefits
2,200 applications across thousands of servers migrated in 24 months
500 applications cut over in 12 hours
Saved millions of dollars by cost optimizing compute
Improved scalability, flexibility, and resiliency
Reduced resource deployment time from weeks to minutes

About 3M Company
3M Company is a manufacturing company that uses science to improve lives and solve some of the world’s toughest challenges. 3M has corporate operations in 70 countries and sales in over 200.

AWS Services Used
AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS.
AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises and AWS storage services.
AWS Professional Services offerings help you achieve specific outcomes related to enterprise cloud adoption, with activities, best practices, and documentation reflecting experience supporting hundreds of customers in their journey to the AWS Cloud.
SAP on AWS helps you get more flexibility and value out of your SAP investments with secure, reliable, and extensive cloud infrastructure, over 200 AWS services to innovate with, and purpose-built SAP automation tooling to reduce risk and simplify operations.
Accelerating Time to Market Using AWS and AWS Partner AccelByte _ Omeda Studios Case Study _ AWS.txt
Omeda Studios Accelerates Time to Market Using AWS and AWS Partner AccelByte (2022)

Overview

Omeda Studios (Omeda) needed a scalable, reliable backend to bring its game, Predecessor, to market quickly and support hundreds of thousands of players. With 50,000 fans in the game’s Discord server and 140,000 players signed up to playtest, Predecessor is Omeda’s first game, and the studio wanted its small team to concentrate on making the best player experience possible rather than spending its energy building a game backend. Omeda turned to Amazon Web Services (AWS) and AccelByte, an AWS Partner and game technology company that provides game backend as a service. Using AccelByte services built on AWS, Omeda accelerated the time to market for Predecessor and improved the reliability and elasticity of the game. “Our aim is to release the game to players as soon as we can, and AccelByte helped us with this,” says Tom Miles, vice president of engineering at Omeda.

Opportunity | Building a Reliable Backend for Predecessor

Omeda Studios was founded in 2020 with the mission of building community-driven games. Omeda’s founders began the Predecessor project in 2018, seeking to rebuild a defunct multiplayer online battle arena game they had enjoyed and make it available for PC and console. The studio had built a backend but found the architecture was not designed to scale to the expected number of players, and the company knew it would need another solution. “We needed a reliable, resilient, and scalable backend that would handle hundreds of thousands of players,” says Miles.

Solution | Accelerating Production Using AccelByte and AWS

Omeda researched the options and found AccelByte, whose game solutions fit most closely with the experience Omeda wanted to offer. Using AWS, AccelByte provides account services; cloud game storage to track and save player progression and configurations; social services for players to make friends and establish groups; dedicated server fleet management; monetization services; and engagement tools such as stats, leaderboards, and achievements. AccelByte has been an AWS Partner since 2019. “We wanted to serve our customers better by investing in running our technology on AWS as efficiently and reliably as possible,” says Train Chiou, vice president of customer success at AccelByte. “Our goal is to help our clients get to market quicker and not have to worry about reinventing the wheel. You don’t have to spend the first year of creating your game investing in technologies that have already been well established, and you can focus on making the game better.” Omeda began working alongside AccelByte in August 2021 to integrate the game with AccelByte’s backend, which helped the studio accelerate the launch of Predecessor by four to six months. The studio also saves time by using managed services.
Using AccelByte’s services on AWS, Omeda can scale the backend of its game to meet demand from hundreds of thousands of players. Compute for the game runs on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AccelByte has deployed its services on AWS to meet Omeda’s load and usage requirements, using different-sized disk queues and deployment methodologies to accommodate Omeda’s target player concurrency and setting up the architecture to scale up or down automatically.

For persistent storage, the game backend services use Amazon DocumentDB (with MongoDB compatibility), a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads, and Amazon Relational Database Service (Amazon RDS) for PostgreSQL, a managed service that makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. By using fully managed services, Omeda can focus its time on creating a great player experience. “Game studios take a long time to grow, so it’s pivotal for us to use resources where they are most needed: in developing the game,” says Miles. “Using AWS, we can spend more time on developing game features.” A sketch of what player-progression storage against Amazon DocumentDB can look like follows below.

In addition to offering the services and features the studio needed, AccelByte provided strong customer support. “The ease of integration with AccelByte was much simpler than anything else we tried,” says Miles. “Instead of struggling to integrate with an unfamiliar backend, the AccelByte team implemented it for us.” In April 2022, the studio ran a playtest, the third for the game and the first using AccelByte’s backend. Over 68,000 players logged in during the test weekend, playing 11 million total minutes, and Omeda received overwhelmingly positive feedback on social media, including about the game’s latency. “There was no downtime for the infrastructure during the playtest,” says Steven Meilleur, founder and chief technology officer at Omeda. “It went off without a hitch, and we were able to accommodate all the players that wanted to gain access. It was impressive to see how AccelByte’s solutions on AWS held up with that kind of load.”
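Amazon DocumentDB speaks the MongoDB wire protocol, so player-progression storage of the kind AccelByte’s cloud game storage provides can be written with a standard MongoDB driver. The sketch below is illustrative only: the cluster endpoint, credentials, document shape, and hero names are placeholder assumptions, not AccelByte’s actual schema. DocumentDB requires TLS and does not support retryable writes, hence the connection options.

    from pymongo import MongoClient

    # Placeholder endpoint and credentials; global-bundle.pem is the Amazon-provided CA file.
    client = MongoClient(
        "mongodb://gameuser:secret@my-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/"
        "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&retryWrites=false"
    )
    progression = client["predecessor"]["player_progression"]

    def save_progress(player_id: str, level: int, unlocked_heroes: list[str]) -> None:
        """Upsert a player's progression document after a match."""
        progression.update_one(
            {"_id": player_id},
            {"$set": {"level": level, "unlockedHeroes": unlocked_heroes}},
            upsert=True,
        )

    save_progress("player-123", 7, ["Grux", "Sparrow"])  # hypothetical hero names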
Additionally, because AWS offers high service-level agreements, the reliability and uptime of the game service are high, with AccelByte targeting 99.9 percent uptime for its clients. “High uptime is key for a good player experience, and that’s one of the things we trust AWS to deliver,” says Miles. “You can make the best game in the world, but if players can’t play it because it’s down, it doesn’t even matter.”

Outcome | Launching Predecessor for PC and Console

Omeda plans to release Predecessor by the end of 2022. “It’s a very short time scale for a game in general, let alone a game that’s going to be online,” says Miles. “Using AWS and AccelByte and having the cooperation from their teams facilitated our meeting those aggressive deadlines.” The studio is growing quickly, doubling its employee base in the two years since it was founded. After the PC release, the studio will also work on releasing the game for consoles. “We’ve succeeded in rebuilding most of what we set out to build,” says Meilleur. “AWS has delivered what we needed in a time when we really needed it.”

Benefits
Accelerated the game launch by 4–6 months
68,000 players: ran a successful playtest with no downtime
Scalable solution supports hundreds of thousands of concurrent players
Time for creativity: focused on improving player experience rather than rebuilding the backend

About Omeda Studios
Founded in 2020, Omeda Studios is a London-based game studio that builds community-driven games. Its first game, Predecessor, is a multiplayer online battle arena game launching in 2022.

AWS Services Used
Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to match the needs of your workload.
Amazon DocumentDB is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads.
Amazon RDS for PostgreSQL makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud with cost-efficient and resizable hardware capacity.
Achieving Burstable Scalability and Consistent Uptime Using AWS Lambda with TiVo _ Case Study _ AWS.txt
Achieving Burstable Scalability and Consistent Uptime Using AWS Lambda with TiVo (2023)

Overview

TiVo Brands LLC (TiVo), a wholly owned subsidiary of entertainment technology company Xperi Inc., is migrating hundreds of APIs to the cloud to achieve burstable scalability, expand globally, and maintain consistent uptime of its video services. Instead of continuing to invest in an on-premises network infrastructure, TiVo engineering decided to adopt serverless technologies and managed services to power core features and critical use cases. TiVo chose Amazon Web Services (AWS) to modernize its on-premises solution by going serverless. In doing so, TiVo improved global scalability, reduced its technical debt, and made room for innovation and engineering efforts without straining its budget.

Opportunity | Using Amazon API Gateway to Improve Scalability for TiVo

TiVo makes it easy for people to find, watch, and enjoy what they love in one integrated experience, driving loyalty and engagement. In 2017 TiVo began developing microservices for better scalability and time to market, but the continued investment in its own infrastructure impeded the desired benefits. “We have a lot of technology that’s interconnected, with dependencies across our services, data stores, and deployment models,” says Taram Devitt-Carolan, vice president of engineering at Xperi. After carefully reviewing the factors slowing its transformation, TiVo engineering selected AWS to host all new services so that its teams could focus on bringing value to the customer with the ease and elasticity of serverless technologies. “Adopting more AWS-managed services facilitated better connectivity and synchronization across the tech stack,” says Devitt-Carolan. One of the primary managed services TiVo uses is Amazon API Gateway, which it uses to create, maintain, and secure APIs at virtually any scale. By modernizing its tech stack, TiVo achieves a separation of concerns and predictability at scale.
Solution | Modernizing Hundreds of APIs Using AWS Lambda

Adding new devices and accounts to TiVo’s solution, managing content and entitlement, and managing the arrival of guide and programming data are all powered by hundreds of APIs that interface with those datasets. Modernizing these APIs to improve scalability and connectivity was important to the company. TiVo interacts with its clients through Amazon API Gateway. “Our use of Amazon API Gateway is tightly coupled with our authentication and authorization strategy,” says Devitt-Carolan. Using Amazon API Gateway, TiVo drives connectivity and forwards API calls to its microservices, legacy APIs, and serverless functions built on AWS Lambda, a serverless, event-driven compute service that runs code for virtually any type of application or backend service without provisioning or managing servers. All data processing from APIs runs at scale using AWS Lambda.

TiVo uses AWS Lambda functions across a variety of use cases, both external and internal, ranging from calling services within its system to read and write operations. Alongside AWS Lambda, the company uses Amazon DynamoDB, a fast, flexible NoSQL database service delivering single-digit-millisecond performance at virtually any scale, to make its APIs lightweight and to query and respond to clients. “We have a good, immediate, and burstable scale strategy using Amazon DynamoDB and AWS Lambda, which empowers us to simplify our multiregion approach,” says Devitt-Carolan. A sketch of this Lambda-plus-DynamoDB pattern appears below.

To run its microservices, TiVo uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed service for running Kubernetes in the AWS Cloud and on-premises data centers. When the company develops a microservice, it runs on an Amazon EKS cluster that has been assimilated into the modernized tech stack to be more compatible with its use cases. TiVo similarly uses Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it simple to ingest and process streaming data in near real time with fully managed Apache Kafka, following a distributed strategy that fits the company’s needs. “Using Amazon MSK and our infrastructure as code, we can make smaller clusters to support sets of APIs that are related to specific data,” says Devitt-Carolan. The interconnectedness of services also has performance and cost benefits for TiVo. “Our goal is to treat APIs as a commodity,” says Devitt-Carolan. “If we need to call an API and load a particular piece of data, it costs only 30 ms at load, whether there is a concurrency of 1 or a concurrency of 1,000, which is excellent.”
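The lightweight API pattern described in that quote, Amazon API Gateway invoking a Lambda function that reads Amazon DynamoDB, fits in a few lines. This is a generic sketch, not TiVo’s code: the table name, key schema, and route are placeholder assumptions.

    import json
    import boto3

    # Hypothetical table holding device records keyed by deviceId.
    table = boto3.resource("dynamodb").Table("devices")

    def handler(event, context):
        """API Gateway proxy integration: GET /devices/{deviceId}."""
        device_id = event["pathParameters"]["deviceId"]
        result = table.get_item(Key={"deviceId": device_id})
        if "Item" not in result:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(result["Item"], default=str),  # default=str handles Decimal values
        }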
By using these serverless services in tandem and modernizing its tech stack, the company improves scalability from a global perspective and can support hundreds of millions of calls per day.

Outcome | Improving Innovation Using Serverless Solutions

By using AWS-managed and serverless solutions, TiVo has a better understanding of cost limits and can use that understanding to guide its architecture decisions and innovation. “Deploying the tech stack and architecture is cheap and simple, so that’s a clear benefit for us,” says Devitt-Carolan. “Because of the pricing tiers of some of the managed services that we’re using and the pay-as-you-go pricing model, it costs almost nothing to innovate.” Pairing low costs for early development testing with an understanding of cost and usage patterns fits TiVo’s incubation process for innovation; building on managed services costs the company at most dollars per day.

TiVo plans to continue migrating the rest of its APIs to the cloud using AWS and is looking for ways to innovate further. With more investment in AWS solutions, the company has improved integration and connectivity, and it benefits from managed services, like data sharing and data migration, because it is not egressing data. “We get a lot of benefits from using AWS at a very good pricing model. It is enticing to continue migrating to AWS,” says Devitt-Carolan.

Benefits
Higher scalability to support streaming globally
Improved performance, taking only 30 ms to load an API call
Reduced hosting cost with the pay-as-you-go pricing model
Increased innovation prompted by low development costs

About TiVo
TiVo creates DVR technology and provides television, on-demand, and streaming services to customers. The company has a solution designed to provide businesses with audience analytics and drive viewership.

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers; you can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and pay only for what you use.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.
Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.
Acrobits Uses Amazon Chime SDK to Easily Create Video Conferencing Application Boosting Collaboration for Global Users _ Acrobits Case Study _ AWS.txt
Acrobits Uses Amazon Chime SDK to Easily Create a Video Conferencing Application and Boost Collaboration for Global Users (2023)

Overview

Acrobits provides white-label communication and collaboration applications to customers worldwide. To simplify development of a new video conferencing tool, the company chose to build on Amazon Web Services (AWS) and used the Amazon Chime SDK to create LinkUp, a new video collaboration platform. By relying on the Amazon Chime SDK, Acrobits developed and launched LinkUp in months, gaining on-demand scale to support thousands of new customers while improving collaboration for global users.

Opportunity | Responding to Customer Demands for Better Collaboration

Acrobits is a rapidly growing provider of white-label communication and collaboration solutions delivered through a low-code platform. Owned by Sinch, which provides software development kits (SDKs) and application programming interfaces (APIs) for developers, Acrobits helps companies create customizable, brandable, enterprise-grade collaboration solutions in a variety of industries. “We serve 500 businesses in 74 countries and manage around 140 million endpoints,” says Rafael Torreblanca, managing director at Acrobits.

Recently, Acrobits needed to respond to customers asking for a new video conferencing tool. “The pandemic really initiated that, because many of our customers were caught by surprise and suddenly had people working from home. They needed to give their employees a remote solution for collaborating over video,” says Torreblanca. “Building a video collaboration solution from the ground up wasn’t something we were ready for or had the time and available resources to do on our own.” The company also needed technology that could scale as customers adopted the solution. “To meet demand, we knew we had to scale from 10,000 to 100,000 to even 1 million endpoints based on what we were forecasting,” says Torreblanca. “The cloud was the only way to make that possible.”
Because Acrobits’ parent company Sinch, an AWS Partner, runs the majority of its business on AWS, Acrobits sought an AWS-based development solution. That search led the company to the Amazon Chime SDK, a set of developer tools that helps builders easily integrate real-time voice, video, and messaging into applications. “Amazon Chime SDK is scalable and very robust,” says Torreblanca. “It is also purely an SDK solution without a defined UI, allowing us to develop a brandable user interface for our customers while also supporting our core white-label business.”

Solution | Building a New Video Conferencing Solution with the Amazon Chime SDK

Acrobits worked alongside the Amazon Chime SDK team to create LinkUp, a new video conferencing solution that features audio, video, screen sharing, and chat functionality for desktop and mobile environments. The application uses AWS services, including Amazon Elastic Compute Cloud (Amazon EC2) instances for compute. “The Amazon Chime SDK team was a great help. Each time we had an issue, they responded right away,” adds Torreblanca. LinkUp also provides user authentication, moderator controls, call recording, and calendar integration, as well as noise suppression through Amazon Voice Focus. Additionally, Acrobits developers used WebRTC media, integrated into the Amazon Chime SDK, for high-quality audio and video on WebRTC-enabled browsers and mobile systems. “WebRTC also uses encryption for the entire media element, which gave us confidence in the overall security of the environment,” says Torreblanca.

Because the Amazon Chime SDK simplifies feature integration, Acrobits streamlined the development and management of LinkUp. “Amazon Chime SDK gives us a lot of flexibility in terms of tools we can use, and it has native interfaces for iOS and Android. This really simplified development,” says Torreblanca. “It was easy for us to integrate video, audio, chat, and noise suppression into the application.” By using the Amazon Chime SDK and additional AWS services, Acrobits can scale LinkUp to meet the video conferencing needs of thousands of customers without limitations. “CPU and memory requirements are intensive for any application, and video conferencing is even more so,” explains Torreblanca. “The moment we need to scale as the application grows, we must ensure we have the power to add thousands of new users immediately. AWS helps us do that. Our developers don’t need to worry about managing compute capacity and servers as the platform continues expanding.” A sketch of the server-side meeting setup the SDK enables follows below.
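In a Chime SDK application, a small backend creates the meeting and attendee records, and the client SDKs use the returned join data to connect media. The sketch below uses the boto3 chime-sdk-meetings client; the region choice and the way IDs are generated are illustrative assumptions, not Acrobits’ implementation.

    import uuid
    import boto3

    meetings = boto3.client("chime-sdk-meetings", region_name="us-east-1")

    def create_linkup_meeting(room_name: str, user_id: str) -> dict:
        """Create a meeting and one attendee; the client joins with this data."""
        meeting = meetings.create_meeting(
            ClientRequestToken=str(uuid.uuid4()),  # idempotency token
            MediaRegion="us-east-1",               # where the media session is hosted
            ExternalMeetingId=room_name[:64],
        )["Meeting"]
        attendee = meetings.create_attendee(
            MeetingId=meeting["MeetingId"],
            ExternalUserId=user_id[:64],
        )["Attendee"]
        # The Meeting and Attendee objects go back to the mobile or web client,
        # which passes them to the Chime SDK client library to join audio and video.
        return {"Meeting": meeting, "Attendee": attendee}

    join_info = create_linkup_meeting("team-standup", "user-42")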
Outcome | Easing Development and Creating a Simple, Unified Application Experience

With LinkUp, Acrobits customers across the globe have improved collaboration via desktop or mobile application. “Our customers simply open the application and press a button for comprehensive video and audio conferencing and chat capabilities, helping them communicate and collaborate more easily,” says Torreblanca. “Also, with features such as noise suppression in Amazon Chime SDK, we can drastically improve communication in call centers or even in noisy home environments.”

Video conferencing may help increase businesses’ productivity while working from home, but with the world reopening, a new trend has emerged: video conferencing fatigue, driven largely by complex user interfaces. Acrobits designed LinkUp to offer a seamless experience. “LinkUp is not a complicated tool. It’s a unified video collaboration platform with simple ways to create and start a meeting and invite people to attend,” says Torreblanca. “Using LinkUp, it’s very easy for people to set up meetings, connect their calendars, present, and record calls from within the UI while adding a powerful collaboration component to our softphone apps.”

Acrobits is also considering integrating Amazon Chime SDK features such as speech-to-text and machine learning (ML) capabilities to analyze customer sentiment. “I can see us using machine learning in our call centers to track customers’ moods during calls,” Torreblanca says. “Amazon Chime SDK makes it easy for us to add new features that differentiate our application, and we plan to do that to make our customers even more comfortable using LinkUp.”

Benefits
Simplifies application development
Scales to support thousands of new customers
Improves collaboration in the hybrid workplace

About Acrobits
Acrobits is a technology leader in mobile and desktop communication and collaboration solutions, providing white-label solutions to customers worldwide. The company’s solutions enable HD voice, video, and multi-messaging mobile and desktop products for system integrators, content service providers, and telecom companies across the communications industry.

AWS Services Used
With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications.
Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to match the needs of your workload.
Actuate AI Case study.txt
Actuate AI Powers Its Real-Time Threat-Detection Security Tech Using Amazon EC2 (2020)

Computer vision startup Actuate AI had a novel idea for identifying threats in security footage. Instead of relying on facial recognition, which can be expensive, biased, and unreliable, the company set out to use artificial intelligence (AI) object recognition to detect weapons in security camera footage. The result of its efforts is a system that identifies weapons and intruders in real time and notifies stakeholders of immediate threats. However, Actuate AI didn’t want to impose expensive hardware costs on its customers’ security systems, so it knew it would need substantial cloud compute power for offsite inferencing and for scaling as the company grew. Actuate AI found an effective solution in Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, and a number of other Amazon Web Services (AWS) offerings. This solution enabled Actuate AI to offer an affordable, high-level security layer on top of existing systems for schools, businesses, and the US military. “We run on the cloud using AWS,” says Actuate AI cofounder and chief technology officer Ben Ziomek, “which lets us offer solutions that are more flexible, faster to install, and less expensive than those from almost anyone else on the market.”

Overcoming the Shortcomings of Facial Recognition

When Ziomek and Actuate AI cofounder and CEO Sonny Tai decided to develop a computer vision AI security system, they knew that improving on the status quo meant changing some of the basics of traditional AI security solutions. Instead of relying on facial recognition, Actuate AI would use object recognition as the backbone of its inference engine. And rather than the expensive, on-premises hardware typically built into other AI security suites, the company would use accelerated cloud computing. “Most security decision makers are concerned with being able to identify where people are in a building at any given time, being able to understand anomalous behaviors, and trying to identify violent situations before they happen,” says Ziomek. “Unless you know exactly the people who are going to be doing these acts, facial recognition doesn’t help. By focusing on object recognition, we can give our clients all of the security information they need in an instantaneous, easy-to-digest format that respects privacy.”

By focusing the AI inference engine on weapons and intruders rather than faces, Actuate AI provides its clients actionable information with fewer false positives and without the racial bias inherent in many facial recognition–based AI models. Focusing on objects also lets Actuate AI apply its technology to other relevant security and compliance tasks, including mask compliance, social distancing detection, intruder detection, people counting, and pedestrian traffic analysis.
Getting Powerful, Cost-Effective Compute Using Amazon EC2

Actuate AI runs all actions in the AWS Cloud, using everything from Amazon EC2 P3 Instances powered by NVIDIA V100 Tensor Core GPUs to Amazon EC2 G4 Instances powered by NVIDIA T4 Tensor Core GPUs, along with AWS Lambda, Amazon API Gateway, and Amazon DynamoDB serverless tools. The company stores security images in Amazon Simple Storage Service (Amazon S3), which offers industry-leading scalability, data availability, security, and performance. The cloud architecture lets the company avoid the cost, time, and liability involved in installing and maintaining expensive onsite servers and pass the savings on to its clients. “With AI, generally you need accelerated processing, or graphics processing units [GPUs], and those get expensive fast,” says Ziomek. “We save our customers money while still making everything work without having to do anything onsite, and that’s enabled by the fact that we’re a cloud-first solution.”

Actuate AI’s inference engine relies on what may be the world’s largest database of labeled security camera footage, a library of more than 500,000 images that helps the company’s AI scour live video to detect very small objects in highly complex scenes with greater than 99 percent accuracy and an industry-leading false positive rate. Much like a graphically demanding video game, image-reliant AI inferencing requires access to powerful GPUs that can quickly analyze high-resolution images and video concurrently. Actuate AI’s models run only when motion is detected, so the number of camera feeds analyzed by the AI increases as motion is detected by more cameras connected to the system.

Actuate AI uses an in-house AI system that combines best practices from many industry-leading convolutional neural network–based models, but many of the system’s core functions operate using AWS. The AI uses the processing power of an Amazon EC2 C5 Instance to monitor cameras for movement at all times; when there is movement, Amazon EC2 G4 Instances help identify relevant objects in less than half a second. Once the AI decides an event is a threat, the metadata is stored in Amazon DynamoDB, a key-value and document database that delivers single-digit-millisecond performance at any scale, and the images themselves are stored in Amazon S3. Then, depending on the client’s preferences, Actuate AI uses Amazon API Gateway, a fully managed service for creating, publishing, maintaining, monitoring, and securing APIs at any scale, to send the client push notifications about the threat. These notifications can reach monitoring stations in under a second, dramatically increasing the client’s ability to respond to threats. A sketch of this persistence step appears below.

Amazon EC2 G4 Instances give Actuate AI a highly responsive, scalable solution with enough power to run image processing and AI inference for eight jobs concurrently, but only when needed. This flexibility enables Actuate AI to scale as necessary while reducing its accelerated computing costs by as much as 66 percent, a large competitive advantage over AI security firms using on-premises GPUs. “Even a really active camera is going to only have motion on it maybe 40 percent of the time during the day and less than 1 percent of the time at night,” says Ziomek. “On AWS, I only have to pay for the time I’m actually using it, which makes the cloud extremely beneficial to our business model. We have never had an issue with GPU instance availability on AWS.”

Historically, a lot of building-monitoring security and defense tasks required expensive, specialized hardware, but Actuate AI is taking a software approach and moving those tasks to the cloud. “We can turn any camera into a smart camera and basically displace a lot of sensor suites by using off-the-shelf cameras that can gather almost-as-good data for a far cheaper price,” says Ziomek. “We’re able to do this with minimal bandwidth usage, often lower than 50 kilobits per second per camera.”
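The persistence step the article describes, detection metadata to Amazon DynamoDB and the frame itself to Amazon S3, maps directly onto two boto3 calls. This is a generic sketch under assumed names: the table, bucket, and item attributes are placeholders, not Actuate AI’s schema.

    import time
    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")

    EVENTS_TABLE = dynamodb.Table("threat-events")    # hypothetical table
    IMAGE_BUCKET = "actuate-detections-example"       # hypothetical bucket

    def persist_detection(camera_id: str, label: str, confidence: float, jpeg_bytes: bytes) -> str:
        """Store a detection frame in S3 and its metadata in DynamoDB."""
        ts = int(time.time() * 1000)
        key = f"{camera_id}/{ts}.jpg"
        s3.put_object(Bucket=IMAGE_BUCKET, Key=key, Body=jpeg_bytes, ContentType="image/jpeg")
        EVENTS_TABLE.put_item(Item={
            "cameraId": camera_id,         # partition key (assumed schema)
            "timestamp": ts,               # sort key (assumed schema)
            "label": label,                # e.g., "firearm"
            "confidence": str(confidence), # stored as string to avoid float-to-Decimal issues
            "imageKey": key,
        })
        return key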
Meeting the Future on AWS

Like many startups, Actuate AI faces the challenge of scale, and it has found a suitable growth environment in the AWS Cloud. “For most applications, you just need raw GPU power,” says Ziomek. “Having access to that has enabled us to cut our costs significantly and win some very large contracts that would have been cost prohibitive had we been running on any other type of virtual machines. We’ve found that the level of granularity we get in monitoring and management on AWS has enabled us to maintain the same level of quality while we scale dramatically.”

The potential applications of the technology are vast. Actuate AI is already working with some customers to track ingress and direct employees to temperature-monitoring stations in the wake of the COVID-19 pandemic, as well as with the US military to help with weapon cataloguing and tracking. Actuate AI currently uses CUDA by NVIDIA, a parallel computing platform and programming model that increases computing performance by harnessing the power of NVIDIA GPUs, and intends to use NVIDIA A100 Tensor Core GPU–based Amazon EC2 instances to further test the limits of its AI.

Benefits of AWS
Detects firearms and intruders with greater than 99% accuracy in less than 0.5 seconds
Reduced accelerated computing cost by 66%
Sends push notifications of suspicious activity in under a second
Added a security layer with minimal bandwidth usage, often lower than 50 kilobits per second per camera
Enabled a fully software-based AI detection system
Facilitated 100% cloud-based data production

About Actuate AI
Actuate AI is a software-based, computer vision AI startup that turns any security camera into a smart camera that monitors threats in real time, accelerating the response times of security firms, schools, corporations, and the US military.

AWS Services Used
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers.
Amazon EC2 C5 Instances deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads.
Amazon EC2 G4 Instances deliver the industry’s most cost-effective and versatile GPU instances for deploying machine learning models in production and for graphics-intensive applications.
ADP Developed an Innovative and Secure Digital Wallet in a Few Months Using AWS Services _ Case Study _ AWS.txt
ADP Developed an Innovative and Secure Digital Wallet in a Few Months Using AWS Services (2023)

Overview

ADP, a global leader in human capital management (HCM) solutions, wanted to provide workers across North America with unprecedented flexibility through a modern digital wallet. ADP’s vision was to use its robust workforce data and many years of experience to create a product adapted to the modern way people manage their money. To make that vision a reality, ADP needed to build a solution that supported high security and privacy standards, facilitated a fast go-to-market, and offered technology for innovation. ADP worked alongside Amazon Web Services (AWS) and Nuvalence, an AWS Partner, to use modern, cloud-native development practices to build the digital wallet, completing it in a few months and making financial wellness tools more accessible to US workers.

Opportunity | Selecting AWS and Nuvalence to Collaborate on ADP’s Digital Wallet

Founded in 1949, ADP serves one million customers in 140 countries with its human capital management software. As the source of pay for one in six Americans, ADP saw an opportunity to help enhance the employee experience through financial wellness offerings. The company wanted to move quickly to provide a socially responsible option for its existing customers and lead the way with a modern industry solution. The company’s digital wallet includes on-demand access to eligible workers’ earned wages before payday, support for online shopping, and many other cutting-edge features. ADP had been using AWS services since 2015 and had worked with Nuvalence on other business initiatives since 2019, so it enlisted both companies for this strategic initiative. “The AWS team has been with us through thick and thin and is always responsive. By using AWS, we have incorporated best practices while building resilient systems that can handle our global scale,” says Lohit Sarma, senior vice president of product development at ADP. “Nuvalence has been a strategic partner of ours, delivering high-quality work. Its expertise in building large-scale digital solutions was an ideal fit for our needs.” ADP also needed flexibility and extensibility to offer a dynamic solution for a fast-moving market with many changing variables.
Solution | Launching Multiple Features Quickly Using Serverless Technology from AWS Lambda

Development of the digital wallet started in early 2022. Teams from ADP, Nuvalence, and AWS first aligned on the architecture and security requirements. AWS then made service recommendations based on the use case and the existing architecture, and Nuvalence paired with ADP engineers to design and build the solution, maximizing the effectiveness of AWS service features and providing the glue to connect to ADP’s infrastructure and existing services. Although similar projects often take several years, ADP released the first version of its digital wallet in a few months.

[Figure: ADP Digital Wallet Architecture Diagram]

ADP met its speed goal by using AWS Lambda, a serverless, event-driven compute service that runs code without the need to manage servers or clusters. The digital wallet uses AWS Lambda for a variety of functions, minimizing the compute footprint of the service. “The team used AWS Lambda to provide an efficient and scalable approach to handling authentication, authorization, and other key functions for the wallet,” says Abe Sultan, partner at Nuvalence and executive sponsor of the Nuvalence team working with ADP. Using serverless technology, ADP could go to market quickly while leaving room to scale as the needs of the solution evolve. A sketch of the kind of Lambda-based authorization the quote describes follows below.

Because ADP manages employee and financial services, the solution had to meet rigorous compliance standards, including the Payment Card Industry Data Security Standard (PCI DSS). To bolster the security of its digital wallet, ADP uses services like Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve virtually any amount of data from anywhere; using Amazon S3, ADP can securely store the flat text files involved in money movement. The solution also tokenizes card numbers to keep transactions secure. Because payment credentials are loaded securely into the digital wallet, customers can use the digital card for purchases and payments immediately, without waiting for a physical card to arrive in the mail. “Data security and privacy are critical to everything we develop,” says Sarma. “Using AWS services, we could uphold our company’s existing standards while innovating on the implementation.”
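A common way to centralize authentication in front of API Gateway–backed services is a Lambda authorizer that validates the caller’s token and returns an IAM policy. The sketch below shows that generic pattern, not ADP’s implementation: the token check is stubbed out and all names are placeholders.

    def validate_token(token: str) -> str | None:
        """Stub: verify the bearer token (e.g., a JWT) and return the user ID, or None."""
        # In a real system this would check the signature, issuer, expiry, and scopes.
        return "user-123" if token == "Bearer demo-token" else None

    def handler(event, context):
        """API Gateway TOKEN authorizer: allow or deny the incoming request."""
        user_id = validate_token(event.get("authorizationToken", ""))
        effect = "Allow" if user_id else "Deny"
        return {
            "principalId": user_id or "anonymous",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }],
            },
            "context": {"userId": user_id or ""},  # passed through to backend integrations
        }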
Outcome | Investing in the Digital Wallet for Future Growth Using AWS Services

With its digital wallet, ADP accomplished its mission of making financial wellness tools more accessible to US workers. The digital wallet is a safe, simple option through which employees without a traditional bank account can access their pay, giving them freedom in spending their wages. The Earned Wage Access feature gives eligible members access to their earned wages before payday, creating a viable alternative for customers who urgently need funds and eliminating the need to take out high-interest-rate loans. ADP provides education for companies as they roll out the feature; with this support, companies can help eligible members make informed decisions while getting valuable access to earned wages when needed. “ADP takes great pride in being a company with high morals that is always there for its clients and their people,” says Sarma. “Using AWS services, we can give people tools to manage their finances and give them access to funds when they potentially need them the most.”

ADP has seen a positive response to its digital wallet in the United States, processing nearly $1 billion of transactions in customer savings envelopes in the seven months since launching the product. As of 2022, ADP supports approximately 1.7 million Wisely card members across the United States and plans to keep investing in the digital wallet while rolling out additional features using AWS services. “ADP pays one in six workers and moves close to $100 billion in payroll per day in the United States,” says Sarma. “We have to be working 24/7 with high quality, resiliency, and reliability. We brought AWS and Nuvalence together because of these requirements.”

Benefits
Increased development speed, creating a digital wallet in a few months
Supported $1 billion of transactions in customer savings envelopes in 7 months
Fortifies security using tokens and oversight
Provides eligible members valuable flexibility with the Earned Wage Access feature

About ADP
Human capital management company ADP serves one million customers in 140 countries. In the United States, ADP released its innovative digital wallet, which features tools to help card members with financial wellness.

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon S3 is an object storage service offering industry-leading scalability, data availability, security, and performance.
Adzuna doubles its email open rates using Amazon SES _ Adzuna Case Study _ AWS.txt
Adzuna Doubles Email Open Rates Using Amazon SES

Customer Stories / Software & Internet

2022

About Adzuna
Adzuna is a smart, transparent job search engine used by tens of millions of visitors per month across 20 countries globally. It uses the power of technology to match people to better, more fulfilling jobs and keep the world working.

Opportunity | Seeking Reliability, Scalability, and Cost Effectiveness for Large Volumes of Email
For a job search engine to differentiate itself in a crowded market, it must be able to match job seekers to relevant jobs more swiftly and reliably than its competitors. Adzuna, a United Kingdom–based job aggregator that serves 20 countries, aims to achieve that goal by using smart technology to match people to the right jobs and sending personalized emails to users. To handle this substantial task, Adzuna required an email service that was reliable, simple to use, and able to scale as the company grew. The company turned to Amazon Web Services (AWS) and found Amazon Simple Email Service (Amazon SES), a high-scale inbound and outbound cloud email service, to be the solution for its requirements. Using Amazon SES, Adzuna can efficiently send billions of emails to its users across the globe.

Adzuna launched in 2011 as a job search site based in the United Kingdom, and it now operates in 20 countries, including the United States, Singapore, Australia, and India. Users can search the website by type of job and location and can sign up with their email address for job alerts. When users sign up, Adzuna sends an initial welcome email and, after that, sends regular alerts when relevant jobs are posted to the site. With tens of millions of visitors every month, Adzuna sends around two billion personalized emails every year.

Because its users rely on the accuracy and timeliness of Adzuna’s emailed job alerts, Adzuna required an email service that was, above all, reliable. “It’s important that there’s no downtime and that there are no deliverability issues, or at least no server issues where emails just completely fail to send,” says Bilal Ikram, email marketing manager at Adzuna. Amazon SES turned out to be the most reliable tool for the company’s needs. The Adzuna team initially tested a few other email tools, but they weren’t scalable to the degree the company needed. Using the automation capabilities of Amazon SES, the company has been able to handle its burgeoning volume of email since it began using the service in 2011, almost from the company’s start. Without these capabilities, Adzuna would be unable to deliver a key service feature.

Solution | Supporting Company Goals through Simplicity and Scalability
To support its goal of sending personalized emails to users, Adzuna needed an easy-to-use email service that could handle increasingly large volumes of email as the company grew. Amazon SES proved to be a simple, scalable solution. First, it integrated seamlessly with Adzuna’s existing AWS infrastructure. Second, because Amazon SES exposes a Simple Mail Transfer Protocol (SMTP) interface, the Adzuna developers were able to automate the entire sending process. The team never had to log on to the service or worry about its inner workings, which meant that it could focus its energy on more important tasks, like making necessary edits and updates to emails.

“We can simply create commands that constantly send out the emails connected to Amazon SES without us having to worry about volumes,” Ikram says. Further, Adzuna set up Amazon SES to run across multiple AWS Regions, helping to manage the workload and providing a backup option for sending emails if needed. “If we were to have an outage, we would have a fallback, which makes the network more reliable,” Ikram says. “It would be impossible for us to send volumes of emails with dynamic content to the same extent without using Amazon SES. It’s very important that we automate that process and send out emails that are relevant to our users.”

At first, Adzuna relied on standard Amazon SES features while staff focused on content and deliverability. In recent years, Adzuna has shifted to using dedicated IP addresses and tools like Amazon CloudWatch, a service that provides observability of users’ AWS resources and applications on AWS and on premises.
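The study doesn't show Adzuna's sending code; as a minimal sketch of the SMTP-based automation described above, the following sends one alert through the Amazon SES SMTP endpoint using Python's standard library. The Region, credentials, and addresses are illustrative assumptions.

# A hedged sketch of sending a job alert via the SES SMTP interface.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "New jobs matching your search"
msg["From"] = "alerts@example.com"
msg["To"] = "jobseeker@example.com"
msg.set_content("Hi! 3 new jobs match your saved search...")

# Regional SES SMTP endpoint; port 587 with STARTTLS is the documented setup.
with smtplib.SMTP("email-smtp.eu-west-1.amazonaws.com", 587) as server:
    server.starttls()                               # SES SMTP requires TLS
    server.login("SMTP_USERNAME", "SMTP_PASSWORD")  # SES SMTP credentials, not IAM keys
    server.send_message(msg)

Because this is plain SMTP, it slots into any existing mail pipeline, which is what let the team automate sending without ever logging on to the service.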
Outcome | Relying on an Integrated Suite of Solutions
Since Adzuna’s migration to dedicated internet protocol addresses, the company has seen a significant improvement in email open rates, which have almost doubled. It has also seen improvements in click-through rates. Adzuna has continued to benefit from the scalability of Amazon SES and its additional features: in 2022, the company expanded into an additional four countries, and it has used Amazon SES to meet the needs of its growing user base throughout the expansion. Overall, Adzuna has benefited from using multiple AWS services for different purposes while keeping everything under the same umbrella. “Using Amazon SES, I can focus more on improving the quality and content of the emails and our underlying metrics rather than having to worry about just sending the emails out on a daily basis,” Ikram says. “So that means we have more time to focus on the things that really matter—connecting our users to better, more fulfilling jobs.”

Benefits of AWS
Doubled email open rates
Improved email click-through rates
Handles large volumes of email as the company grows
Supports needs of a growing user base
Achieved a simple, seamless setup using AWS infrastructure

AWS Services Used
Amazon SES lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system.
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
AEON Case Study.txt
AEON Scales Card Processing System, Achieves 40% Market Growth Using AWS

Customer Stories / Financial Services / Cyprus

2023

About AEON
Cyprus-based AEON Payment Technologies is a third-party card processing software provider that delivers value-added services to support the payment processing needs of the commercial banking industry. This includes card issuing, transaction management, authorization, reconciliation, and infrastructure services.

Overview
Based in Cyprus, AEON Payment Technologies wanted to move to the cloud to scale its card processing system for banking customers and expand into new markets in Europe and Africa. It migrated in just 3 months using the AWS Migration Acceleration Program with the help of AWS Partner Cloud Nomads. With its infrastructure running on AWS, AEON has increased the number of credit and debit cards it handles by 40 percent over 2 years. The business has also saved 33 percent of planned expenditure on IT and can scale to handle traffic peaks within minutes. Critically, it can easily comply with Visa and Mastercard’s regulations and local data laws, and support Payment Card Industry Data Security Standard (PCI DSS) requirements for card processing.

Opportunity: A Streamlined, Scalable Card Processing Software System
AEON turned to AWS Partner Cloud Nomads when it realized its on-premises system was hampering growth. Its existing infrastructure couldn’t scale without significant investment in IT equipment. Its main challenge was to ensure its banking clients could meet customer usage peaks at the end and the beginning of each month, when employee wages are typically paid in.

Opportunity: Faster Cloud Migration and Modernization Using AWS Migration Acceleration Program
The company completed its migration in just 3 months using the AWS Migration Acceleration Program (AWS MAP), which helps businesses speed up their cloud migration and modernization journey with an outcome-driven methodology. Using AWS MAP gave AEON assurance over the migration process, providing its IT team with confidence that the project would deliver the successful outcome it needed.
Solution: Delivering Full Compliance with Banking Protocols and Privacy Laws
AEON began by migrating its card processing software and databases to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AEON uses Amazon EC2 instances for Windows and Linux to support the card processing system’s databases. AEON’s systems on AWS are certified to meet the regulations of its payment associates, Visa and Mastercard, including compliance with those companies’ card issuing and transaction acquisition regulations. With its systems built on AWS, AEON can also comply with PCI DSS requirements and the European Union (EU) General Data Protection Regulation (GDPR) for data privacy. “We have to have multiple levels of security in place to meet industry regulations—otherwise, we would not be able to operate,” says John Abraham, CEO at AEON. “Because AWS is PCI DSS compliant, we could move to the cloud, easily meet these industry standards, and benefit from much faster card processing.”

AEON’s next challenge was to ensure its card processing system was market ready and able to serve new territories in Europe and Africa. This expansion meant the company needed support for PCI DSS compliance in new regions. Critically, it also meant that AEON had to comply with EU GDPR data privacy laws and, in some of its target markets, keep sensitive data within country borders to meet local regulations. AEON is now able to comply with GDPR requirements using AWS Regions and Availability Zones. The company also set up its own data center close to the AWS EU (Frankfurt) Region data center to support personal identification number (PIN) encryption and decryption, and to meet local privacy requirements in the region.

The company is now able to scale to meet traffic peaks within minutes. “During peak card usage times, we’re seeing 100 card transactions per second with a large number of people checking their accounts online,” says Abraham. “Traffic surges can stifle our business. Thanks to Cloud Nomads and using AWS, we can scale easily, and guarantee our customers a reliable service.”
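The case study doesn't detail AEON's scaling mechanism; purely as an illustration of what "scaling within minutes" can look like on Amazon EC2, here is a hedged boto3 sketch that attaches a target-tracking policy to a hypothetical Auto Scaling group. The group name and target value are assumptions, not AEON's configuration.

# A hedged sketch: add capacity automatically as month-end peaks push average
# CPU above a target. All names and thresholds are illustrative.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="card-processing-asg",  # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # launch instances when fleet CPU exceeds ~60%
    },
)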
Outcome: Building a Growth-Ready Infrastructure to Support New Markets
Over the past 2 years, AEON has increased the number of credit and debit cards it handles by 40 percent. “Using AWS, we now support 11.5 million cards and 30,000 merchant card terminals,” says Abraham. “We can also guarantee the 99.999 percent uptime we need so that our banking clients limit downtime and manage reputational risk.” AEON has reduced its reliance on on-premises equipment and cut its planned infrastructure budget to one-third of its previous budget using cloud services. “The sales cycle in the card processing industry is long,” says Abraham. “Also, it’s essential to have infrastructure in place so new customers have confidence that we can support them right away. Using AWS, we have the flexibility to serve new customers instantly in our new markets without having to invest in expensive IT equipment and having it sit idle.”

AEON is now evaluating AWS Outposts—which businesses can use to run AWS infrastructure and services on premises for a truly consistent hybrid experience—to support PIN encryption and decryption in the future. The AEON team has worked closely with AWS to create a scalable and reliable cloud-based system. “In our business, technology can hinder progress—now, the opposite is true for AEON,” says Abraham. “Technology is aiding our growth. The fact that we handle traffic peaks without incident is a great achievement for both our IT team and AWS.”

Benefits of AWS
40% growth in cards handled over 2 years
Reduced costs, saving 33% of planned IT expenditure
Scales automatically to handle traffic peaks within minutes

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Access reliable, scalable infrastructure on demand, and scale capacity within minutes with an SLA commitment of 99.99% availability.
Amazon EC2 running Microsoft Windows Server is a secure, reliable, and high-performance environment for deploying Windows-based applications and workloads.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, activates your directory-aware workloads and AWS resources to use managed AD on AWS.
The AWS Migration Acceleration Program (MAP) is a comprehensive and proven cloud migration program based upon AWS’s experience migrating thousands of enterprise customers to the cloud.
ALTBalaji _ Amazon Web Services.txt
ALTBalaji Develops Live Streaming Capabilities and Delivers Reality Show in Real Time to Millions

Customer Success / Media

2022

About ALTBalaji
Launched in April 2017, ALTBalaji, a subsidiary of Balaji Telefilms Limited, is the group’s foray into the digital entertainment space. ALTBalaji is a subscription-based video on demand (SVOD) platform aiming to provide 34 million subscribers with original over-the-top (OTT) Indian media content right at their fingertips: fresh, original, exclusive stories tailored for Indian audiences across the world. Subscribers can log in to ALTBalaji and access content—such as shows, movies, and music videos—via desktops, tablets, smartphones, and internet-connected TVs.

Opportunity | Delivering a Live Streaming Solution in One Month
ALTBalaji launched its platform on the AWS Cloud, using Amazon CloudFront to securely deliver media content to millions of customers every day, Amazon Elastic Compute Cloud (Amazon EC2) instances to run applications, and Amazon Redshift as a data warehouse for analytics. In December 2021, ALTBalaji began production on an Indian reality competition series called Lock Upp. Local celebrities, including renowned Indian film stars, comedians, and sports stars, would be locked inside actor and show host Kangana Ranaut’s “jail” for 72 days and voted out by viewers until there was a winner. The company set a February 2022 launch date for Lock Upp and wanted to broadcast live streams of the show for its duration. That left just over a month to deliver a live streaming solution in time for the start of the series. “Aside from meeting the deadline, we were also concerned about infrastructure downtime and service lags during the live streams, which would negatively impact the viewer experience,” says Shahabuddin Sheikh, chief technology officer at ALTBalaji.
Solution | Building Live Streaming Capabilities from Scratch
To broadcast live streams of Lock Upp, ALTBalaji built its live streaming infrastructure on AWS Elemental MediaLive, a broadcast-grade live video processing service that encodes and transcodes real-time video for broadcast and streaming delivery. Results from a proof of concept (POC) showed the company could easily add live streaming with advanced broadcasting capabilities to its platform and meet its challenging timeline. The team worked with its AWS Technical Account Manager (TAM) and a Subject Matter Expert (SME) to conduct an AWS Infrastructure Event Management (IEM) analysis to right-size the live streaming infrastructure for load handling. In addition, it used AWS Elemental MediaTailor to set up server-side ad insertion for live streams under free subscription accounts. Many viewers would be streaming from smaller towns in India, where internet speeds are slower than in major urban cities. To ensure an uninterrupted and enjoyable viewing experience from any location, ALTBalaji fine-tuned AWS Elemental MediaLive to minimize the lags that could cause streams to fail.
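The case study doesn't include ALTBalaji's workflow code; as a hedged sketch of how a team might automate a broadcast like this with boto3, the following starts a pre-provisioned AWS Elemental MediaLive channel ahead of a live episode and waits for it to come up. The channel ID is hypothetical.

# A hedged operational sketch: bring a MediaLive channel live before an episode.
import time
import boto3

medialive = boto3.client("medialive")
CHANNEL_ID = "1234567"  # hypothetical pre-created channel for the show

medialive.start_channel(ChannelId=CHANNEL_ID)
# Poll until the broadcast-grade encoder reports RUNNING.
while medialive.describe_channel(ChannelId=CHANNEL_ID)["State"] != "RUNNING":
    time.sleep(10)
print("Channel running; the live stream is on air")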
Outcome | Ensuring Uninterrupted Live Streams for Millions of Viewers
By using AWS Elemental MediaLive, ALTBalaji delivered its live streaming solution in weeks and ensured uninterrupted live streams of Lock Upp for millions of viewers across India during its 72-day run. Just 19 days after its premiere, Lock Upp garnered more than 100 million views, becoming the most-watched reality show in the Indian OTT space. The live streaming solution easily managed a tenfold increase in viewership during highly anticipated episodes showing nominations and evictions from Kangana Ranaut’s “jail.” During the airing of the series, ALTBalaji reported a tenfold increase in viewer data compared to its historical average; thanks to optimized workflows in its Amazon Redshift data warehouse, it handled the surge seamlessly. The company also gained valuable insights into how often viewers paused and played streams, viewer behavior during live streaming ads, and the activities that influenced video view counts. It plans to use this information to improve product development and user experience.

Sheikh describes the assistance from AWS as “hyper support.” “Without AWS Elemental MediaLive, it would’ve taken several months to deliver our streaming solution. From the start, AWS understood the criticality of everything we were doing and stayed the course with the team even after the go-live date,” he says. ALTBalaji is now preparing for Lock Upp’s second season knowing it can deliver a reliable live streaming experience. It also plans to test Amazon Transcribe to allow viewers to use voice commands over typing to search for series content, and Amazon Personalize for targeted content recommendations to viewers. Furthermore, ALTBalaji wants to assess Amazon Rekognition to reduce the cost of video ad integration and other content operations. “AWS Elemental MediaLive removed the complexity of developing and operating our live streaming infrastructure, allowing us to focus on providing better user experience and producing unique, compelling content. We’re now exploring new ways to enhance our customers’ experience, and voice search is just the next step in our journey of constant improvement,” Sheikh concludes.

Benefits of AWS
Zero downtime, live streaming the reality series for 72 days
100 million live-stream views of Lock Upp
10x scaling to meet a tenfold surge in viewership

AWS Services Used
AWS Elemental MediaLive is a broadcast-grade live video processing service that creates high-quality streams for delivery to broadcast TVs and internet-connected devices.
AWS Elemental MediaTailor is a channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content, and to monetize those channels—or other live streams—with personalized advertising.
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.
Amanotes Stays on Beat by Delivering Simple Music Games to Millions Worldwide on AWS.txt
Amanotes Stays on Beat by Delivering ‘Simple Music Games’ to Millions Worldwide on AWS

Customer Stories / Games

2022

About Amanotes
Founded in 2014 and headquartered in Ho Chi Minh City, Vietnam, Amanotes oversees a portfolio of music games and apps, including Magic Tiles 3, Tiles Hop, and Dancing Road. Since its founding, users across the globe have downloaded Amanotes music games and apps more than 2.5 billion times.

Opportunity | Delivering Music Games with Speed and Scale
In 2014, Nguyen Tuan Cuong and Vo Tuan Binh co-founded Amanotes to give users the ability to extend their interactions with music beyond listening. This meant using technology to create personalized experiences tailored to each user’s taste, consumption, and musical ability. Amanotes’ founders decided to focus on a niche the business describes as ‘Simple Music Games’: games that are intuitive and easy for users to interact with. In 2016, Amanotes developed Magic Tiles 3, a game requiring users to tap digital musical notes on their smartphone screens in sync with songs from selected genres. Amanotes launched its business on the AWS Cloud for scalability, low latency, and stability. “We analyzed cloud providers and determined AWS had the extensive reach we required: 27 AWS Regions worldwide, each featuring multiple Availability Zones and hundreds of edge locations,” says Nguyen Nghi, Head of Technology at Amanotes.

Solution | Running Music Games and Apps Seamlessly on Amazon CloudFront
Amanotes runs its application services, core database, and backend API services on the AWS Cloud, and uses Amazon CloudFront to deliver game content reliably and with low latency to its global user base. “With Amazon CloudFront, we’re delivering content that includes five leading music games to more than 120 million monthly active users who, collectively, make more than 90 million download requests per day,” says Nghi. “We can also secure the content from cyberattacks that could compromise our reputation and slow our expansion into new markets.” To stay ahead of competitors, Amanotes needs to innovate continuously to deliver more immersive game experiences while managing costs effectively. With Amazon Elastic Container Service (Amazon ECS) and AWS Fargate, the business easily deploys applications across a scalable, multi-Region infrastructure and minimizes its technology team’s management and maintenance workload. Amanotes also leverages Amazon Elastic Kubernetes Service (Amazon EKS) to run some of its services. “By leveraging managed services capabilities from Amazon EKS, our team can focus purely on application development without worrying about infrastructure,” says Nghi. The business delivers its content files in 1.5 seconds or less, with smaller files delivered in just 0.1 seconds, and average request processing time for the Amanotes API is around 100 milliseconds. This low latency leads to repeat gamers and attracts advertisers, which in turn increases revenue generation from in-game and reward-based advertisements, pay-to-play, and subscriptions.
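Amanotes' publishing pipeline isn't shown in the story; this hedged sketch illustrates one common pattern for a CloudFront-fronted content catalog: upload a new content file to the origin S3 bucket, then invalidate its cached path so edge locations fetch the fresh copy. Bucket, file, and distribution names are assumptions.

# A hedged sketch of publishing new game content behind CloudFront.
import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Push the new song pack to the origin bucket.
s3.upload_file("magic_tiles_songpack_42.bin", "amanotes-content", "songpacks/42.bin")

# Evict any stale cached copy at the edge.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/songpacks/42.bin"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)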
Outcome | Innovating with New Services and Connecting Global Users Through Music
With AWS, Amanotes has built on the success of Magic Tiles 3 to develop another four major music games: Tiles Hop, Dancing Road, Beat Blader 3D, and Dancing Race, growing into a global app publisher. It’s now one of the leading mobile game publishers in Southeast Asia and one of the top music game publishers worldwide. Personalizing user experiences is key to Amanotes’ growth strategy: the business plans to use machine learning through Amazon Personalize to generate more relevant music recommendations to gamers, increasing engagement and growing revenue by attracting more customers. The business is also executing plans to complement its existing music ‘Play’ pillar with a ‘Learn’ pillar delivered through an educational music app, and a ‘Simulation’ pillar that gives users the ability to learn musical instruments through digital simulations. This strategy is designed to realize Amanotes’ vision of becoming the number one ecosystem for everyone to play, learn, create, and connect through music.

Amanotes plans to further leverage AWS Global Infrastructure and innovative solutions to grow its business in markets such as Japan and China. The business also believes new AWS edge locations in Hanoi and Ho Chi Minh City present opportunities to acquire new customers in its domestic market. “We aim to grow our business as much as possible, and AWS provides the speed and scale we need to do this,” says Nghi.

Benefits of AWS
120 million monthly active users of Amanotes’ games
90 million content file download requests met daily
1.5 seconds average time to deliver downloads
100 milliseconds average API request processing time
Expansion, pursuing growth in China and Japan

AWS Services Used
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.
Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.

To learn more, visit aws.amazon.com/cloudfront.
Amazon OpenSearch Services vector database capabilities explained _ AWS Big Data Blog.txt
AWS Big Data Blog

Amazon OpenSearch Service’s vector database capabilities explained
by Jon Handler, Dylan Tong, Jianwei Li, and Vamshi Vijay Nakkirtha | on 21 JUN 2023 | in Amazon OpenSearch Service, Amazon SageMaker, Artificial Intelligence, Customer Solutions, Foundational (100), Intermediate (200), Thought Leadership

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. It comprises a search engine, OpenSearch, which delivers low-latency search and aggregations; OpenSearch Dashboards, a visualization and dashboarding tool; and a suite of plugins that provide advanced capabilities like alerting, fine-grained access control, observability, security monitoring, and vector storage and processing. Amazon OpenSearch Service is a fully managed service that makes it simple to deploy, scale, and operate OpenSearch in the AWS Cloud.

As an end user, when you use OpenSearch’s search capabilities, you generally have a goal in mind—something you want to accomplish. Along the way, you use OpenSearch to gather information in support of achieving that goal (or maybe the information is the original goal). We’ve all become used to the “search box” interface, where you type some words and the search engine brings back results based on word-to-word matching. Let’s say you want to buy a couch in order to spend cozy evenings with your family around the fire. You go to Amazon.com and type “a cozy place to sit by the fire.” Unfortunately, if you run that search on Amazon.com, you get items like fire pits, heating fans, and home decorations—not what you intended. The problem is that couch manufacturers probably didn’t use the words “cozy,” “place,” “sit,” and “fire” in their product titles or descriptions.

In recent years, machine learning (ML) techniques have become increasingly popular for enhancing search. Among them is the use of embedding models, a type of model that can encode a large body of data into an n-dimensional space where each entity is encoded into a vector, a data point in that space, and organized such that similar entities are closer together. An embedding model, for instance, could encode the semantics of a corpus. By searching for the vectors nearest to an encoded document—k-nearest neighbor (k-NN) search—you can find the most semantically similar documents. Sophisticated embedding models can support multiple modalities, for instance, encoding the image and text of a product catalog and enabling similarity matching on both modalities.

A vector database provides efficient vector similarity search by providing specialized indexes like k-NN indexes. It also provides other database functionality like managing vector data alongside other data types, workload management, access control, and more. OpenSearch’s k-NN plugin provides core vector database functionality for OpenSearch, so when your customer searches for “a cozy place to sit by the fire” in your catalog, you can encode that prompt and use OpenSearch to perform a nearest neighbor query to surface that 8-foot, blue couch with designer-arranged photographs in front of fireplaces.

Using OpenSearch Service as a vector database
With OpenSearch Service’s vector database capabilities, you can implement semantic search, Retrieval Augmented Generation (RAG) with LLMs, recommendation engines, and search over rich media.
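Before walking through the use cases, it helps to see the core building block. The following sketch creates an index with a knn_vector field using the opensearch-py client; the domain endpoint, credentials, index name, and dimension are illustrative assumptions rather than code from this post.

# A minimal sketch: create a k-NN-enabled index with opensearch-py.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),  # placeholder auth
    use_ssl=True,
)

index_body = {
    "settings": {"index": {"knn": True}},  # turn on the k-NN plugin for this index
    "mappings": {
        "properties": {
            "product_description": {"type": "text"},
            "description_vector": {
                "type": "knn_vector",
                "dimension": 384,  # must match the embedding model's output size
                "method": {
                    "name": "hnsw",        # graph-based ANN algorithm, discussed later
                    "engine": "nmslib",    # one of nmslib, faiss, or lucene
                    "space_type": "cosinesimil",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            },
        }
    },
}
client.indices.create(index="products", body=index_body)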
Semantic search
With semantic search, you improve the relevance of retrieved results using language-based embeddings of search documents. You enable your search customers to use natural language queries, like “a cozy place to sit by the fire,” to find their 8-foot-long blue couch. For more information, refer to Building a semantic search engine in OpenSearch to learn how semantic search can deliver a 15% relevance improvement, as measured by normalized discounted cumulative gain (nDCG) metrics, compared with keyword search. For a concrete example, our Improve search relevance with ML in Amazon OpenSearch Service workshop explores the difference between keyword and semantic search, based on a Bidirectional Encoder Representations from Transformers (BERT) model hosted by Amazon SageMaker to generate vectors and store them in OpenSearch. The workshop uses product question answers as an example to show how keyword search using the keywords/phrases of the query leads to some irrelevant results, while semantic search is able to retrieve more relevant documents by matching the context and semantics of the query. The following diagram shows an example architecture for a semantic search application with OpenSearch Service as the vector database.

Retrieval Augmented Generation with LLMs
RAG is a method for building trustworthy generative AI chatbots using generative LLMs like OpenAI’s ChatGPT or Amazon Titan Text. With the rise of generative LLMs, application developers are looking for ways to take advantage of this innovative technology. One popular use case involves delivering conversational experiences through intelligent agents. Perhaps you’re a software provider with knowledge bases for product information, customer self-service, or industry domain knowledge like tax reporting rules or medical information about diseases and treatments. A conversational search experience provides an intuitive interface for users to sift through information through dialog and Q&A. Generative LLMs on their own are prone to hallucinations—situations where the model generates a believable but factually incorrect response. RAG solves this problem by complementing generative LLMs with an external knowledge base that is typically built using a vector database hydrated with vector-encoded knowledge articles.

As illustrated in the following diagram, the query workflow starts with a question that is encoded and used to retrieve relevant knowledge articles from the vector database. Those results are sent to the generative LLM, whose job is to augment those results, typically by summarizing them as a conversational response. By complementing the generative model with a knowledge base, RAG grounds the model on facts to minimize hallucinations. You can learn more about building a RAG solution in the Retrieval Augmented Generation module of our semantic search workshop.
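As a hedged sketch of the query side (the workshop hosts a BERT model on SageMaker; a local sentence-transformers model stands in here), you encode the shopper's natural-language prompt and run an approximate k-NN query against the index created in the earlier sketch, reusing its client:

# Encode a natural-language query and retrieve the nearest product vectors.
from sentence_transformers import SentenceTransformer

# Stand-in encoder; in the workshop architecture this call would go to a
# SageMaker endpoint. all-MiniLM-L6-v2 outputs 384-dim vectors, matching
# the mapping in the index sketch above.
model = SentenceTransformer("all-MiniLM-L6-v2")
query_vector = model.encode("a cozy place to sit by the fire").tolist()

response = client.search(
    index="products",
    body={
        "size": 5,
        "query": {"knn": {"description_vector": {"vector": query_vector, "k": 5}}},
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["product_description"])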
Recommendation engine
Recommendations are a common component of the search experience, especially for ecommerce applications. Adding a user experience feature like “more like this” or “customers who bought this also bought that” can drive additional revenue by getting customers what they want. Search architects employ many techniques and technologies to build recommendations, including Deep Neural Network (DNN) based recommendation algorithms such as the two-tower neural net model and YoutubeDNN. A trained embedding model encodes products, for example, into an embedding space where products that are frequently bought together are considered more similar, and therefore are represented as data points that are closer together in the embedding space. Another possibility is that product embeddings are based on co-rating similarity instead of purchase activity. You can employ this affinity data by calculating the vector similarity between a particular user’s embedding and the vectors in the database to return recommended items. The following diagram shows an example architecture for building a recommendation engine with OpenSearch as a vector store.

Media search
Media search enables users to query the search engine with rich media like images, audio, and video. Its implementation is similar to semantic search: you create vector embeddings for your search documents and then query OpenSearch Service with a vector. The difference is that you use a computer vision deep neural network (for example, a Convolutional Neural Network (CNN) such as ResNet) to convert images into vectors. The following diagram shows an example architecture for building an image search with OpenSearch as the vector store.

Understanding the technology
OpenSearch uses approximate nearest neighbor (ANN) algorithms from the NMSLIB, FAISS, and Lucene libraries to power k-NN search. These algorithms trade exactness for improved search latency on large datasets; of the three search methods the k-NN plugin provides, approximate k-NN offers the best search scalability for large datasets. The engine details are as follows:

Non-Metric Space Library (NMSLIB) – NMSLIB implements the HNSW ANN algorithm
Facebook AI Similarity Search (FAISS) – FAISS implements both HNSW and IVF ANN algorithms
Lucene – Lucene implements the HNSW algorithm

Each of the three engines used for approximate k-NN search has its own attributes that make one more sensible to use than the others in a given situation. You can follow the general information in this section to help determine which engine will best meet your requirements. In general, NMSLIB and FAISS should be selected for large-scale use cases. Lucene is a good option for smaller deployments, but offers benefits like smart filtering, where the optimal filtering strategy—pre-filtering, post-filtering, or exact k-NN—is automatically applied depending on the situation. The following table summarizes the differences between each option.

                        | NMSLIB-HNSW                             | FAISS-HNSW                 | FAISS-IVF                  | Lucene-HNSW
Max dimension           | 16,000                                  | 16,000                     | 16,000                     | 1,024
Filter                  | Post filter                             | Post filter                | Post filter                | Filter while search
Training required       | No                                      | No                         | Yes                        | No
Similarity metrics      | l2, innerproduct, cosinesimil, l1, linf | l2, innerproduct           | l2, innerproduct           | l2, cosinesimil
Vector volume           | Tens of billions                        | Tens of billions           | Tens of billions           | < Ten million
Indexing latency        | Low                                     | Low                        | Lowest                     | Low
Query latency & quality | Low latency & high quality              | Low latency & high quality | Low latency & low quality  | High latency & high quality
Vector compression      | Flat                                    | Flat, product quantization | Flat, product quantization | Flat
Memory consumption      | High                                    | High (low with PQ)         | Medium (low with PQ)       | High

Approximate and exact nearest-neighbor search
The OpenSearch Service k-NN plugin supports three different methods for obtaining the k-nearest neighbors from an index of vectors: approximate k-NN, score script (exact k-NN), and Painless extensions (exact k-NN).
Approximate k-NN
The first method takes an approximate nearest neighbor approach: it uses one of several algorithms to return the approximate k-nearest neighbors to a query vector. Usually, these algorithms sacrifice indexing speed and search accuracy in return for performance benefits such as lower latency, smaller memory footprints, and more scalable search. Approximate k-NN is the best choice for searches over large indexes (that is, hundreds of thousands of vectors or more) that require low latency. You should not use approximate k-NN if you want to apply a filter on the index before the k-NN search, which greatly reduces the number of vectors to be searched. In this case, you should use either the score script method or Painless extensions.

Score script
The second method extends the OpenSearch Service score script functionality to run a brute force, exact k-NN search over knn_vector fields or fields that can represent binary objects. With this approach, you can run k-NN search on a subset of vectors in your index (sometimes referred to as a pre-filter search). This approach is preferred for searches over smaller bodies of documents or when a pre-filter is needed. Using this approach on large indexes may lead to high latencies.

Painless extensions
The third method adds the distance functions as Painless extensions that you can use in more complex combinations. Similar to the k-NN score script, you can use this method to perform a brute force, exact k-NN search across an index, which also supports pre-filtering. This approach has slightly slower query performance compared to the k-NN score script. If your use case requires more customization over the final score, you should use this approach over score script k-NN.
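For instance, a pre-filtered exact search with the score script method looks like the following sketch; the field names, the category filter, and the query_vector variable are assumptions carried over from the earlier examples.

# A hedged sketch of the score script (exact k-NN) method with a pre-filter.
query = {
    "size": 5,
    "query": {
        "script_score": {
            # The filter runs first, so only matching documents are scored.
            "query": {"bool": {"filter": {"term": {"category": "furniture"}}}},
            "script": {
                "source": "knn_score",  # provided by the k-NN plugin
                "lang": "knn",
                "params": {
                    "field": "description_vector",
                    "query_value": query_vector,  # the encoded query from earlier
                    "space_type": "cosinesimil",
                },
            },
        }
    },
}
response = client.search(index="products", body=query)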
Vector search algorithms
The simple way to find similar vectors is to use k-nearest neighbors (k-NN) algorithms, which compute the distance between a query vector and the other vectors in the vector database. As mentioned earlier, the score script k-NN and Painless extensions search methods use exact k-NN algorithms under the hood. However, in the case of extremely large datasets with high dimensionality, this creates a scaling problem that reduces the efficiency of the search. Approximate nearest neighbor (ANN) search methods can overcome this by employing tools that restructure indexes more efficiently and reduce the dimensionality of searchable vectors. There are different ANN search algorithms—for example, locality-sensitive hashing, tree-based, cluster-based, and graph-based. OpenSearch implements two ANN algorithms: Hierarchical Navigable Small Worlds (HNSW) and Inverted File System (IVF). For a more detailed explanation of how the HNSW and IVF algorithms work in OpenSearch, see the blog post “Choose the k-NN algorithm for your billion-scale use case with OpenSearch.”

Hierarchical Navigable Small Worlds
The HNSW algorithm is one of the most popular algorithms for ANN search. The core idea of the algorithm is to build a graph with edges connecting index vectors that are close to each other. Then, on search, this graph is partially traversed to find the approximate nearest neighbors to the query vector. To steer the traversal towards the query’s nearest neighbors, the algorithm always visits the closest candidate to the query vector next.

Inverted File
The IVF algorithm separates your index vectors into a set of buckets, then, to reduce your search time, only searches through a subset of these buckets. However, if the algorithm just randomly split up your vectors into different buckets and only searched a subset of them, it would yield a poor approximation. The IVF algorithm uses a more elegant approach. First, before indexing begins, it assigns each bucket a representative vector. When a vector is indexed, it gets added to the bucket that has the closest representative vector. This way, vectors that are closer to each other are placed roughly in the same or nearby buckets.

Vector similarity metrics
All search engines use a similarity metric to rank and sort results and bring the most relevant results to the top. When you use a plain text query, the similarity metric is called TF-IDF, which measures the importance of the terms in the query and generates a score based on the number of textual matches. When your query includes a vector, the similarity metrics are spatial in nature, taking advantage of proximity in the vector space. OpenSearch supports several similarity or distance measures:

Euclidean distance – The straight-line distance between points.
L1 (Manhattan) distance – The sum of the differences of all of the vector components. L1 distance measures how many orthogonal city blocks you need to traverse from point A to point B.
L-infinity (chessboard) distance – The number of moves a King would make on an n-dimensional chessboard. It’s different from Euclidean distance on the diagonals: a diagonal step on a 2-dimensional chessboard is 1.41 Euclidean units away, but only 1 L-infinity unit away, because the King reaches it in a single move.
Inner product – The product of the magnitudes of two vectors and the cosine of the angle between them. Usually used for natural language processing (NLP) vector similarity.
Cosine similarity – The cosine of the angle between two vectors in a vector space.
Hamming distance – For binary-coded vectors, the number of bits that differ between the two vectors.

Advantages of OpenSearch as a vector database
When you use OpenSearch Service as a vector database, you can take advantage of the service’s usability, scalability, availability, interoperability, and security. More importantly, you can use OpenSearch’s search features to enhance the search experience. For example, you can use Learning to Rank in OpenSearch to integrate user clickthrough behavior data into your search application and improve search relevance. You can also combine OpenSearch’s text search and vector search capabilities to search documents with both keyword and semantic similarity, and use other fields in the index to filter documents and improve relevance. For advanced users, a hybrid scoring model can combine OpenSearch’s text-based relevance score, computed with the Okapi BM25 function, with its vector search score to improve the ranking of search results.
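As a simple hedged illustration of combining lexical and vector signals (a plain bool query rather than a tuned hybrid scoring model, with assumed field names and boost values, reusing the client and query_vector from earlier sketches):

# Combine a BM25 text clause and a k-NN clause in one query.
query = {
    "size": 10,
    "query": {
        "bool": {
            "should": [
                # Lexical BM25 signal over the text field.
                {"match": {"product_description": {"query": "cozy fireside couch", "boost": 0.3}}},
                # Vector signal over the k-NN field. Note that with the nmslib
                # and faiss engines, filters are applied post-search (see table).
                {"knn": {"description_vector": {"vector": query_vector, "k": 10, "boost": 0.7}}},
            ],
            "filter": [{"term": {"category": "furniture"}}],  # structured filter
        }
    },
}
response = client.search(index="products", body=query)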
Scale and limits
OpenSearch as a vector database supports billions of vector records. Keep the following guidance on the number of vectors and dimensions in mind when sizing your cluster.

Number of vectors
OpenSearch as a vector database takes advantage of OpenSearch’s sharding capabilities: it can scale to billions of vectors at single-digit-millisecond latencies by sharding vectors and scaling horizontally with more nodes. The number of vectors that can fit on a single machine is a function of the off-heap memory availability on that machine, and the number of nodes required depends on the amount of memory that can be used for the algorithm per node and the total amount of memory the algorithm requires. The more nodes, the more memory and the better the performance. The amount of memory available per node is computed as:

memory_available = (node_memory - jvm_size) * circuit_breaker_limit

with the following parameters:

node_memory – The total memory of the instance.
jvm_size – The OpenSearch JVM heap size. This is set to half of the instance’s RAM, capped at approximately 32 GB.
circuit_breaker_limit – The native memory usage threshold for the circuit breaker. This is set to 0.5.

Total cluster memory estimation depends on the total number of vector records and the algorithm. HNSW and IVF have different memory requirements; refer to Memory Estimation for more details.

Number of dimensions
OpenSearch’s current dimension limit for the vector field knn_vector is 16,000 dimensions, with each dimension represented as a 32-bit float. The more dimensions, the more memory you’ll need to index and search. The number of dimensions is usually determined by the embedding model that translates the entity to a vector. There are a lot of options to choose from when building your knn_vector field; to determine the correct methods and parameters, refer to Choosing the right method.
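As a back-of-the-envelope sizing sketch based on the formula above, combined with the HNSW per-vector estimate (1.1 * (4 * dimension + 8 * m) bytes) from the OpenSearch k-NN memory estimation guidance; the instance size and workload numbers are illustrative assumptions.

# Rough cluster sizing for an HNSW workload; numbers are illustrative.
import math

def memory_available_gb(node_memory_gb: float, circuit_breaker_limit: float = 0.5) -> float:
    jvm_gb = min(node_memory_gb / 2, 32)  # half of RAM, capped at ~32 GB
    return (node_memory_gb - jvm_gb) * circuit_breaker_limit

def hnsw_memory_gb(num_vectors: int, dimension: int, m: int = 16) -> float:
    # Documented HNSW estimate: 1.1 * (4 * d + 8 * m) bytes per vector.
    return 1.1 * (4 * dimension + 8 * m) * num_vectors / 1024**3

need = hnsw_memory_gb(num_vectors=1_000_000_000, dimension=384)
per_node = memory_available_gb(node_memory_gb=384)  # e.g., a 384 GiB RAM node
print(f"~{need:,.0f} GiB required; ~{per_node:.0f} GiB usable per node; "
      f"at least {math.ceil(need / per_node)} data nodes before replicas")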
Customer stories: Amazon Music
Amazon Music is always innovating to provide customers with unique and personalized experiences. One of Amazon Music’s approaches to music recommendations is a remix of a classic Amazon innovation, item-to-item collaborative filtering, and vector databases. Using data aggregated based on user listening behavior, Amazon Music has created an embedding model that encodes music tracks and customer representations into a vector space where neighboring vectors represent tracks that are similar. 100 million songs are encoded into vectors, indexed into OpenSearch, and served across multiple geographies to power real-time recommendations. OpenSearch currently manages 1.05 billion vectors and supports a peak load of 7,100 vector queries per second to power Amazon Music recommendations. The item-to-item collaborative filter continues to be among the most popular methods for online product recommendations because of its effectiveness at scaling to large customer bases and product catalogs. OpenSearch makes it easier to operationalize and further the scalability of the recommender by providing scale-out infrastructure and k-NN indexes that grow linearly with respect to the number of tracks, with similarity search running in logarithmic time. The following figure visualizes the high-dimensional space created by the vector embedding.

Brand protection at Amazon
Amazon strives to deliver the world’s most trustworthy shopping experience, offering customers the widest possible selection of authentic products. To earn and maintain our customers’ trust, we strictly prohibit the sale of counterfeit products, and we continue to invest in innovations that ensure only authentic products reach our customers. Amazon’s brand protection programs build trust with brands by accurately representing and completely protecting their brands. We strive to ensure that public perception mirrors the trustworthy experience we deliver. Our brand protection strategy focuses on four pillars: (1) proactive controls, (2) powerful tools to protect brands, (3) holding bad actors accountable, and (4) protecting and educating customers. Amazon OpenSearch Service is a key part of Amazon’s proactive controls.

In 2022, Amazon’s automated technology scanned more than 8 billion attempted changes daily to product detail pages for signs of potential abuse. Our proactive controls found more than 99% of blocked or removed listings before a brand ever had to find and report them. These listings were suspected of being fraudulent, infringing, counterfeit, or at risk of other forms of abuse. To perform these scans, Amazon created tooling that uses advanced and innovative techniques, including advanced machine learning models, to automate the detection of intellectual property infringements in listings across Amazon’s stores globally. A key technical challenge in implementing such an automated system is the ability to search for protected intellectual property within a vast billion-vector corpus in a fast, scalable, and cost-effective manner. Leveraging Amazon OpenSearch Service’s scalable vector database capabilities and distributed architecture, we successfully developed an ingestion pipeline that has indexed a total of 68 billion 128- and 1,024-dimension vectors into OpenSearch Service, enabling brands and automated systems to conduct infringement detection in real time through a highly available and fast (subsecond) search API.

Conclusion
Whether you’re building a generative AI solution, searching rich media and audio, or bringing more semantic search to your existing search-based application, OpenSearch is a capable vector database. OpenSearch supports a variety of engines, algorithms, and distance measures that you can employ to build the right solution. It provides a scalable engine that can support vector search at low latency and up to billions of vectors. With OpenSearch and its vector database capabilities, your users can find that 8-foot blue couch easily, and relax by a cozy fire.

About the Authors
Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale ecommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.
Jianwei Li is a Principal Analytics Specialist TAM at Amazon Web Services. Jianwei provides consulting services that help customers design and build modern data platforms. Jianwei has worked in the big data domain as a software developer, consultant, and tech lead.
Dylan Tong is a Senior Product Manager at Amazon Web Services. He leads the product initiatives for AI and machine learning (ML) on OpenSearch, including OpenSearch’s vector database capabilities. Dylan has decades of experience working directly with customers and creating products and solutions in the database, analytics, and AI/ML domains. Dylan holds a BSc and an MEng degree in Computer Science from Cornell University.
Vamshi Vijay Nakkirtha is a Software Engineering Manager working on the OpenSearch Project and Amazon OpenSearch Service. His primary interests include distributed systems. He is an active contributor to various plugins, like k-NN, GeoSpatial, and dashboard-maps.
Anghami Case Study.txt
Anghami Personalizes Music Recommendations Using Amazon OpenSearch Service

Customer Stories / Media Entertainment / MENA

2023

About Anghami
Founded in 2012 in Beirut, Anghami offers free and paid audio-streaming services across the Middle East and North Africa (MENA), Europe, and the US. Its premium service provides features such as the ability to download tracks and play them offline, rewind or fast-forward music, and view lyrics. The company has offices in Abu Dhabi, Beirut, Cairo, Dubai, and Riyadh, and employs more than 160 people.

Overview
Anghami is a music-streaming service based in Abu Dhabi. It serves approximately 70 million users in Europe, MENA, and the US, giving them access to more than 72 million songs and podcasts. Over the past 10 years, it grew from a homegrown start-up into the first Arab technology company to be listed on the Nasdaq stock exchange, in February 2022. Anghami sets itself apart from competitors by helping customers find suitable audio content through personalized recommendations. When its previous technology platform proved difficult to maintain and develop new features for, it turned to Amazon Web Services (AWS). The company built a new platform on AWS that uses machine learning (ML) to generate recommendations. It can now quickly surface relevant content for users, attract top tech talent, rapidly develop new features that enrich customer experience, and support future product innovation.

Opportunity: Reducing Technology Risk and Building a Platform for Innovation
With the recent rise of rival music services, Anghami recognized the growing significance of guiding customers toward the artists and content that align with their preferences. This became even more crucial given the extensive and expanding collection of Arabic and international music available on the platform. These music-recommendation features attract new customers and foster greater user loyalty; the company has observed that users spend more time on the site when presented with additional song recommendations. Anghami’s previous solution for generating recommendations used legacy code that made it difficult for its team to expand its functionality, so Anghami decided to create a new, cloud-native solution on AWS. The new platform eliminated the liability of maintaining old code and freed up more time for engineers to build new features and capabilities for customers. It also meant they could take advantage of versatile tools such as Amazon OpenSearch Service, which makes it easy to perform interactive log analytics, real-time application monitoring, and website searches. The company aimed to develop a cutting-edge recommendations platform that could scale to handle its expanding user base while facilitating the creation of novel features and services for its customers.
To train the machine learning models that produce music recommendations, Anghami uses Amazon SageMaker, which helps to build, train, and deploy ML models. 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Learn how »  Kevin Williams Vice President (VP) of Machine Learning, Anghami Anghami plans to continue growing its audio catalog and expanding its user base in the Middle East and beyond. “We want to own audio in the regions we operate, for podcasts, audiobooks, and music,” adds Williams. “Using AWS, we have everything we need to accomplish that. Our platform is flexible, reliable, scalable, and easy to maintain, so we can spend our efforts on valuable tasks that benefit customers instead of maintenance.” Get Started Organizations of all sizes use AWS to increase agility, lower costs, and accelerate innovation in the cloud. Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. Learn more » Our platform is flexible, reliable, scalable, and easy to maintain, so we can spend our efforts on valuable tasks that benefit customers instead of maintenance.” AWS Services Used Overview 中文 (繁體) Bahasa Indonesia 10x Anghami Personalizes Music Recommendations Using Amazon OpenSearch Service Solution: Attracting Top Tech Talent and Developing Prototypes in Days on AWS Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي 中文 (简体) 72+ million Customer Stories / Media Entertainment / MENA Amazon EMR About Company Founded in 2012 in Beirut, Anghami offers free and paid audio-streaming services. Its premium service provides features such as the ability to download tracks and play them offline, rewind or fast-forward music, and view lyrics. AWS Customer Success Stories Türkçe Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks. English Anghami now has a technology foundation it can build on for years to come. “I'm excited about running development sprints and discovering the best customer experiences in a timely manner,” says Williams. 6 months songs and podcasts served seamlessly Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Deutsch Tiếng Việt Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. Anghami can also release new music to fans almost immediately. When new tracks drop, typically on Fridays, fans can access them within a minute of the official release. With the previous solution, the tech team couldn’t quickly add a single track to the catalog. However, using OpenSearch, the team can insert and serve songs with its machine learning algorithm within moments of the song’s release. “This is an essential feature that really makes us stand out compared to our rivals,” says Williams. “It’s satisfying to build on fans’ excitement about new releases.” Italiano ไทย Founded in 2010, Anghami provides a music-streaming service in the Middle East and North Africa (MENA), Europe and the US. 
The company has offices in Abu Dhabi, Beirut, Cairo, Dubai, and Riyadh, and employs more than 160 people. Anghami developers can now rapidly prototype new feature ideas from product teams and quickly develop queries to recommend content for users. Writing a search query and creating a prototype takes 1–2 days on AWS, as opposed to around 2 weeks on the previous system. Since launching on AWS, the team has created new functions on the service landing page that suggest artists and relevant playlists for customers to listen to, instead of just suggesting tracks. Building its platform on AWS has reduced the company’s technology risk because it is now easier to find talented engineers and DevOps staff. “As a tech company, you’re only as good as your talent,” says Kevin Williams, Vice President (VP) of Machine Learning at Anghami. “We can quickly find candidates with OpenSearch skills and others who are motivated to learn OpenSearch because it’s a widely used technology. It's also quicker to train up technical staff, because they can access existing documentation on AWS services.” Learn more » to migrate entire song database faster to develop music search queries Português Contact Sales
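As an illustration of why OpenSearch makes same-day releases straightforward, the sketch below indexes a freshly released track with its embedding and immediately queries for similar tracks. It uses the opensearch-py client with made-up endpoint, index, and field names and an assumed vector dimension; Anghami's actual schema and models are not public.

from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "search-domain.example.com", "port": 443}], use_ssl=True)

# A new Friday release: the embedding would come from a model trained in
# SageMaker. Assumes "embedding" is mapped as a knn_vector field.
client.index(
    index="tracks",
    id="track-42",
    body={"title": "New Single", "artist": "Example Artist", "embedding": [0.3] * 256},
    refresh=True,  # make the track searchable immediately after release
)

# Recommend the 20 tracks nearest to a listener's taste vector.
results = client.search(
    index="tracks",
    body={"size": 20, "query": {"knn": {"embedding": {"vector": [0.25] * 256, "k": 20}}}},
)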
Announcing enhanced table extractions with Amazon Textract _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Announcing enhanced table extractions with Amazon Textract

by Raj Pathak, Anjan Biswas, and Lalita Reddi | on 07 JUN 2023 | in Amazon Machine Learning, Amazon Textract, Artificial Intelligence

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from any document or image. Amazon Textract has a Tables feature within the AnalyzeDocument API that offers the ability to automatically extract tabular structures from any document. In this post, we discuss the improvements made to the Tables feature and how it makes it easier to extract information in tabular structures from a wide variety of documents.

Tabular structures in documents such as financial reports, paystubs, and certificate of analysis files are often formatted in a way that enables easy interpretation of information. They often also include information such as table title, table footer, section title, and summary rows within the tabular structure for better readability and organization. Prior to this enhancement, the Tables feature within AnalyzeDocument would have identified those elements as cells, and it didn't extract titles and footers that are present outside the bounds of the table. In such cases, custom postprocessing logic to identify such information, or to extract it separately from the API's JSON output, was necessary. With this announcement of enhancements to the Tables feature, the extraction of various aspects of tabular data becomes much simpler.

In April 2023, Amazon Textract introduced the ability to automatically detect titles, footers, section titles, and summary rows present in documents via the Tables feature. In this post, we discuss these enhancements and give examples to help you understand and use them in your document processing workflows. We walk through how to use these improvements through code examples to use the API and process the response with the Amazon Textract Textractor library.

Overview of solution

The following image shows that the updated model not only identifies the table in the document but all corresponding table headers and footers. This sample financial report document contains table title, footer, section title, and summary rows.

The Tables feature enhancement adds support for four new elements in the API response that allow you to extract each of these table elements with ease, and adds the ability to distinguish the type of table.

Table elements

Amazon Textract can identify several components of a table, such as table cells and merged cells. These components, known as Block objects, encapsulate the details related to the component, such as the bounding geometry, relationships, and confidence score. A Block represents items that are recognized in a document within a group of pixels close to each other. The following are the new Table Blocks introduced in this enhancement:

Table title – A new Block type called TABLE_TITLE that enables you to identify the title of a given table. Titles can be one or more lines, which are typically above a table or embedded as a cell within the table.

Table footers – A new Block type called TABLE_FOOTER that enables you to identify the footers associated with a given table. Footers can be one or more lines that are typically below the table or embedded as a cell within the table.

Section title – A new Block type called TABLE_SECTION_TITLE that enables you to identify if the cell detected is a section title.
Summary cells – A new Block type called TABLE_SUMMARY that enables you to identify if the cell is a summary cell, such as a cell for totals on a paystub.

Types of tables

When Amazon Textract identifies a table in a document, it extracts all the details of the table into a top-level Block type of TABLE. Tables can come in various shapes and sizes. For example, documents often contain tables that may or may not have a discernible table header. To help distinguish these types of tables, we added two new entity types for a TABLE Block: SEMI_STRUCTURED_TABLE and STRUCTURED_TABLE. These entity types help you distinguish between a structured and a semi-structured table. Structured tables are tables that have clearly defined column headers. With semi-structured tables, data might not follow a strict structure; for example, data may appear in a tabular structure that isn't a table with defined headers. The new entity types offer the flexibility to choose which tables to keep or remove during postprocessing (a short sketch of this filtering appears later in the post). The following image shows an example of STRUCTURED_TABLE and SEMI_STRUCTURED_TABLE.

Analyzing the API output

In this section, we explore how you can use the Amazon Textract Textractor library to postprocess the API output of AnalyzeDocument with the Tables feature enhancements. This allows you to extract relevant information from tables. Textractor is a library created to work seamlessly with Amazon Textract APIs and utilities to subsequently convert the JSON responses returned by the APIs into programmable objects. You can also use it to visualize entities on the document and export the data in formats such as comma-separated values (CSV) files. It's intended to aid Amazon Textract customers in setting up their postprocessing pipelines.

In our examples, we use the following sample page from a 10-K SEC filing document. The following code can be found within our GitHub repository. To process this document, we make use of the Textractor library and import it for us to postprocess the API outputs and visualize the data:

pip install amazon-textract-textractor

The first step is to call Amazon Textract AnalyzeDocument with the Tables feature, denoted by the features=[TextractFeatures.TABLES] parameter, to extract the table information. Note that this method invokes the real-time (or synchronous) AnalyzeDocument API, which supports single-page documents. However, you can use the asynchronous StartDocumentAnalysis API to process multi-page documents (with up to 3,000 pages).

from PIL import Image
from textractor import Textractor
from textractor.visualizers.entitylist import EntityList
from textractor.data.constants import TextractFeatures, Direction, DirectionalFinderType

image = Image.open("sec_filing.png")  # loads the document image with Pillow
extractor = Textractor(region_name="us-east-1")  # initialize the Textractor client; modify region if required
document = extractor.analyze_document(
    file_source=image,
    features=[TextractFeatures.TABLES],
    save_image=True
)

The document object contains metadata about the document that can be reviewed.
Notice that it recognizes one table in the document along with other entities in the document:

This document holds the following data:
Pages - 1
Words - 658
Lines - 122
Key-values - 0
Checkboxes - 0
Tables - 1
Queries - 0
Signatures - 0
Identity Documents - 0
Expense Documents - 0

Now that we have the API output containing the table information, we visualize the different elements of the table using the response structure discussed previously:

table = EntityList(document.tables[0])
document.tables[0].visualize()

The Textractor library highlights the various entities within the detected table with a different color code for each table element. Let's dive deeper into how we can extract each element. The following code snippet demonstrates extracting the title of the table:

table_title = table[0].title.text
table_title

'The following table summarizes, by major security type, our cash, cash equivalents, restricted cash, and marketable securities that are measured at fair value on a recurring basis and are categorized using the fair value hierarchy (in millions):'

Similarly, we can use the following code to extract the footers of the table. Notice that table_footers is a list, which means that there can be one or more footers associated with the table. We can iterate over this list to see all the footers present, and as shown in the following code snippet, the output displays three footers:

table_footers = table[0].footers
for footers in table_footers:
    print(footers.text)

(1) The related unrealized gain (loss) recorded in "Other income (expense), net" was $(116) million and $1.0 billion in Q3 2021 and Q3 2022, and $6 million and $(11.3) billion for the nine months ended September 30, 2021 and 2022.

(2) We are required to pledge or otherwise restrict a portion of our cash, cash equivalents, and marketable fixed income securities primarily as collateral for real estate, amounts due to third-party sellers in certain jurisdictions, debt, and standby and trade letters of credit. We classify cash, cash equivalents, and marketable fixed income securities with use restrictions of less than twelve months as "Accounts receivable, net and other" and of twelve months or longer as non-current "Other assets" on our consolidated balance sheets. See "Note 4 - Commitments and Contingencies."

(3) Our equity investment in Rivian had a fair value of $15.6 billion and $5.2 billion as of December 31, 2021 and September 30, 2022, respectively. The investment was subject to regulatory sales restrictions resulting in a discount for lack of marketability of approximately $800 million as of December 31, 2021, which expired in Q1 2022.
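As promised earlier, here is a short sketch of filtering tables by type during postprocessing. It inspects the raw AnalyzeDocument JSON via boto3 rather than the Textractor objects; the assumption that the entity types appear in each TABLE block's EntityTypes field is ours, so verify it against your own API responses.

import boto3

textract = boto3.client("textract", region_name="us-east-1")
with open("sec_filing.png", "rb") as f:
    response = textract.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES"],
    )

# Keep only tables tagged as structured; drop semi-structured ones.
structured_tables = [
    block
    for block in response["Blocks"]
    if block["BlockType"] == "TABLE"
    and "STRUCTURED_TABLE" in block.get("EntityTypes", [])
]
print(f"Kept {len(structured_tables)} structured table(s)")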
Generating data for downstream ingestion

The Textractor library also helps you simplify the ingestion of table data into downstream systems or other workflows. For example, you can export the extracted table data into a human-readable Microsoft Excel file. At the time of this writing, this is the only format that supports merged tables.

table[0].to_excel(filepath="sec_filing.xlsx")

We can also convert it to a Pandas DataFrame. DataFrame is a popular choice for data manipulation, analysis, and visualization in programming languages such as Python and R. In Python, DataFrame is a primary data structure in the Pandas library. It's flexible and powerful, and is often the first choice of data analysis professionals for various data analysis and ML tasks. The following code snippet shows how to convert the extracted table information into a DataFrame with a single line of code:

df = table[0].to_pandas()
df

Lastly, we can convert the table data into a CSV file. CSV files are often used to ingest data into relational databases or data warehouses. See the following code:

table[0].to_csv()

',0,1,2,3,4,5\n0,,"December 31, 2021",,September,"30, 2022",\n1,,Total Estimated Fair Value,Cost or Amortized Cost,Gross Unrealized Gains,Gross Unrealized Losses,Total Estimated Fair Value\n2,Cash,"$ 10,942","$ 10,720",$ -,$ -,"$ 10,720"\n3,Level 1 securities:,,,,,\n4,Money market funds,"20,312","16,697",-,-,"16,697"\n5,Equity securities (1)(3),"1,646",,,,"5,988"\n6,Level 2 securities:,,,,,\n7,Foreign government and agency securities,181,141,-,(2),139\n8,U.S. government and agency securities,"4,300","2,301",-,(169),"2,132"\n9,Corporate debt securities,"35,764","20,229",-,(799),"19,430"\n10,Asset-backed securities,"6,738","3,578",-,(191),"3,387"\n11,Other fixed income securities,686,403,-,(22),381\n12,Equity securities (1)(3),"15,740",,,,19\n13,,"$ 96,309","$ 54,069",$ -,"$ (1,183)","$ 58,893"\n14,"Less: Restricted cash, cash equivalents, and marketable securities (2)",(260),,,,(231)\n15,"Total cash, cash equivalents, and marketable securities","$ 96,049",,,,"$ 58,662"\n'

Conclusion

The introduction of these new block and entity types (TABLE_TITLE, TABLE_FOOTER, STRUCTURED_TABLE, SEMI_STRUCTURED_TABLE, TABLE_SECTION_TITLE, and TABLE_SUMMARY) marks a significant advancement in the extraction of tabular structures from documents with Amazon Textract. These tools provide a more nuanced and flexible approach, catering to both structured and semi-structured tables and making sure that no important data is overlooked, regardless of its location in a document. This means we can now handle diverse data types and table structures with enhanced efficiency and accuracy. As we continue to embrace the power of automation in document processing workflows, these enhancements will no doubt pave the way for more streamlined workflows, higher productivity, and more insightful data analysis. For more information on AnalyzeDocument and the Tables feature, refer to AnalyzeDocument.

About the authors

Raj Pathak is a Senior Solutions Architect and Technologist specializing in Financial Services (Insurance, Banking, Capital Markets) and Machine Learning. He specializes in Natural Language Processing (NLP), Large Language Models (LLM), and Machine Learning infrastructure and operations projects (MLOps).

Anjan Biswas is a Senior AI Services Solutions Architect with a focus on AI/ML and Data Analytics. Anjan is part of the worldwide AI services team and works with customers to help them understand and develop solutions to business problems with AI and ML. Anjan has over 14 years of experience working with global supply chain, manufacturing, and retail organizations and is actively helping customers get started and scale on AWS AI services.

Lalita Reddi is a Senior Technical Product Manager with the Amazon Textract team. She is focused on building machine learning-based services for AWS customers. In her spare time, Lalita likes to play board games and go on hikes.
AppsFlyer Amazon EKS Case Study _ Advertising _ AWS.txt
AppsFlyer Runs Near-Real-Time, Ultra-Low Latency, High-Throughput Workloads at Scale Using Amazon EKS

Discover how mobile and measurement attribution company AppsFlyer is running high-throughput advertising workloads in the cloud using Amazon EKS, reducing latency by 30–90 percent.

Industry Challenge

Running billions of workloads a day is no simple task. Traditional databases involve several moving parts, from continuous integration and continuous deployment pipelines to domain name services. As a result, day-to-day operations can become complex and time consuming; developers often need to focus their efforts on managing the infrastructure rather than developing new features and capabilities.

AppsFlyer's Solution

AppsFlyer saw an opportunity to optimize its advertising workloads and run them at scale on Amazon Web Services (AWS). The company migrated to a scalable, cloud-native architecture based on Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers.

AppsFlyer also sought to simplify its offering on AWS. "We wanted to decrease the tooling and overall management and centralize our infrastructure," says Victor Gershkovich, data platform team lead, real-time infrastructure at AppsFlyer. "Amazon EKS gives us the ability to do so with all the needed elements to run and control the Kubernetes cluster and use its services. We can deploy the application, control its lifecycle, and develop controllers and operators that fit our needs."

AppsFlyer runs over 1,000 microservices every day on Amazon EKS using Kafka clusters, with each cluster bound to specific business logic. This architecture is also powered by AWS Graviton processors, which deliver optimal price performance for cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute platform.

Benefits of Using AWS

Daily, AppsFlyer's ingress and internal service communication generates around eight hundred billion events. At peak hours, this traffic exceeds 12 million events per second. By adopting a scalable architecture based on Amazon EKS, AppsFlyer can scale infrastructure up and down based on load, paying for only what is used. The company has reduced latency by 30–90 percent, depending on the workload. It now performs version upgrades, configurations, and many other tasks in days or even hours instead of weeks. By binding each Kafka cluster to different business logic, the company can avoid a single point of failure and tune each cluster for an optimal cost-performance ratio (a rough sketch of this routing pattern follows this case study). Using this architecture, AppsFlyer has improved its performance and stability while reducing security risks.

AppsFlyer also enjoys maximum resource efficiency. Using AWS Graviton processors, the company can choose different CPU and storage types based on its needs. In fact, AppsFlyer has reduced its costs by an average of 65 percent thanks to this flexibility. "We improved performance, reduced costs, and did not harm our offering for our customers," says Gershkovich. "We only improved it."

About AppsFlyer

AppsFlyer is a mobile and measurement attribution company that helps its customers measure user activities across channels. Using its cloud-based solution, customers can access detailed analytics and make decisions that guide their campaign efforts.

AWS Services Used

Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.
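As promised above, here is a rough illustration of the cluster-per-business-logic pattern: events are routed to separate Kafka clusters by domain, so a failure or hot spot in one domain cannot affect the others. The kafka-python client, topic names, and bootstrap addresses are illustrative assumptions; AppsFlyer's actual implementation is not public.

import json
from kafka import KafkaProducer

# One producer per business domain, each pointing at its own cluster.
producers = {
    "attribution": KafkaProducer(
        bootstrap_servers="attribution-cluster.internal:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    ),
    "analytics": KafkaProducer(
        bootstrap_servers="analytics-cluster.internal:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    ),
}

def publish(domain: str, topic: str, event: dict) -> None:
    # Route the event to the cluster that owns this business logic.
    producers[domain].send(topic, value=event)

publish("attribution", "install-events", {"app_id": "com.example", "event": "install"})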
Arm Case Study.txt
Arm Reduces Characterization Turnaround Time and Costs by Using AWS Arm-Based Graviton Instances

About Arm

Arm is a leading technology provider of silicon intellectual property (IP) for intelligent systems-on-chip that power billions of devices. Arm creates IP used by technology partners to develop integrated semiconductor circuits. The company estimates that 70 percent of the world's population uses its technology in their smart devices and electronics. Based in Cambridge, United Kingdom, Arm designs and manufactures silicon IP for intelligent systems-on-chip. The company's processors have enabled intelligent computing in more than 190 billion chips, powering products from sensors to smartphones to supercomputers.

Moving EDA Workloads to the AWS Cloud

For many years, Arm relied on an on-premises environment to support electronic design automation (EDA) workloads, resulting in forecast challenges on compute capacity. "The nature of our Physical Design Group business demands a highly dynamic compute environment, and the flexibility to make changes on short notice," says Philippe Moyer, vice president of design enablement for the Arm Physical Design Group. "In the past, the on-premises compute was sometimes sitting idle until the need arose, which is why the scalability and agility of the cloud is a good solution for our business."

Arm was looking for agility improvements to keep development on schedule. "With our on-premises environment, our data center was constrained in terms of scalability, and deployment of additional compute capacity would typically take one month for approvals and at least three months to procure and install hardware," says Vicki Mitchell, vice president of systems engineering for Arm. "We have aggressive deadlines, and waiting that long could make or break a project for us."

To gain the agility and scalability needed, in 2017 Arm chose to move part of its EDA workload to Amazon Web Services (AWS). "Selecting AWS made sense to us. AWS is a market leader, and it really understands the semiconductor space," says Mitchell. "We were also very impressed with the EDA knowledge of the AWS solution architects we worked with."

Initially, the Arm Physical Design Group ran its EDA workloads on Amazon Elastic Compute Cloud (Amazon EC2) Intel processor–based instances. It also used Amazon Simple Storage Service (Amazon S3), in combination with Amazon Elastic File System (Amazon EFS), for EDA data storage. When AWS announced the availability of Amazon EC2 A1 instances powered by Arm-based Graviton processors, the Arm Physical Design IP team began to run portions of its EDA workloads on A1 instances. "Taking advantage of Graviton instances gives us the opportunity to contribute to the development of the EDA ecosystem on Arm architecture," says Moyer. In addition, Arm uses Amazon EC2 Spot Instances for all workloads. Spot Instances are spare compute capacity available at up to 90 percent less than On-Demand Instances.

Reducing Characterization Turnaround Time from Months to Weeks

By using AWS, the Arm Physical Design IP team can scale its EDA environment up or down quickly—from 5,000 cores to 30,000 cores—on demand. "This scalability and flexibility brought by AWS translates to a faster turnaround time," says Moyer. "Using AWS, our EDA workload characterization turnaround time was reduced from a few months to a few weeks."

Decreasing AWS Costs by 30%

Running its EDA workloads on Arm-based Graviton instances, Arm is lowering its AWS operational costs. "The Graviton processor family enables us to reduce the AWS costs for our logic characterization workload by 30 percent per physical core versus using Intel-powered instances for the same throughput," says Moyer.

Enabling Experimentation and Innovation

With the company's on-premises environment, Arm engineers sometimes had to wait for compute resources to begin working on projects. By using on-demand compute capacity, engineers are now free to innovate. "It's much easier for our engineers to prototype and experiment in the cloud," Mitchell says. "If they're trying to validate a piece of logic or create a new feature, they can take advantage of Amazon EC2 Spot Instances to submit a job and get instantaneous scheduling without disrupting the project flow. They can move faster as a result."

Arm now plans to use the next generation of Amazon EC2 Arm instances, powered by Graviton2 processors with 64-bit Arm Neoverse cores. "The Graviton2 offers even better performance and scalability and caters to a larger number of different EDA workloads," Moyer says. "We are looking forward to using these AWS processors for better performance and additional cost savings."

Benefits of AWS

Can scale EDA environment quickly—from 5,000 cores to 30,000 cores—on demand
Reduces characterization turnaround time from months to weeks
Cuts logic characterization workload costs by 30% with Arm-based Graviton instances
Enables experimentation and innovation for developers
Gains flexibility to avoid the extra cost of approximate evaluation

AWS Services Used

Amazon EC2 A1 instances deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores that are supported by the extensive Arm ecosystem.
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads.
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Arm Limited Case Study.txt
Arm Accelerates Speed to Market by Migrating EDA Workflows to AWS Batch

About Arm Limited

Founded in 1990, Arm Limited is a semiconductor and software design company based in the United Kingdom. It designs energy-efficient CPU and GPU processors and system-on-a-chip infrastructure and software.

Modernizing Its Solution to Accommodate Future Growth

Arm Limited (Arm) is a global leader in the development of licensable compute technology for semiconductor companies. As of February 2022, over 200 billion chips based on Arm's architecture have been shipped by its partners over the last 3 decades. However, the company's on-premises data centers could not grow with the pace of engineering requirements, and in 2016, Arm decided it needed to make significant changes to achieve its projected growth target for the next 5–10 years.

Arm wanted to modernize its engineering solution because its on-premises data centers didn't position the company for future growth. "We couldn't do any of the customization or optimization that we needed to do," says Zhifeng Yun, technical director at Arm. "We didn't have a sustainable plan to drive efficiency or to reduce the total cost of ownership given the growing engineering requirements." The company also wanted to advance its business intelligence and create a delivery engineering road map. In 2016, Arm evaluated different cloud providers and ultimately decided to use AWS. "We chose AWS because it has highly sophisticated infrastructure and services," says Yun. "It offers a lot in terms of the variety of instance types as well as the customer focus and support we need to get things moving more quickly."

Arm evaluated its internal workloads, weighing the technical difficulty of migrating each one against the benefits it would bring to the business. "Our number one concern is about the quality of the product, and number two is about the time to market," says Yun. "If we delay bringing our product to market, the impact to the entire industry could be huge. And that means a big cost not only in terms of revenue but also in terms of Arm's reputation." After its evaluation was complete, Arm decided to prioritize its most compute-heavy verification workloads for the migration. These workloads involve running millions of jobs—such as those that help verify the design of the CPU core—in parallel. Rather than using a lift-and-shift approach to the migration, Arm opted to modernize immediately to take advantage of cloud-native technology and managed services.

By migrating from on-premises data centers to Amazon Web Services (AWS), Arm created a scalable and reliable cloud-based solution for running EDA workloads. Using this solution, the company has optimized its compute costs, increased its engineering productivity, accelerated speed to market for its products, and enhanced its product quality. Additionally, using CPUs on AWS that are based on Arm architecture for the design and verification of new Arm chips has helped it to drive business success.

Scaling Up Verification Workloads to over 350,000 Virtual CPUs

The company built its solution around AWS Batch, which lets developers, scientists, and engineers easily and efficiently run hundreds of thousands of batch and machine learning computing jobs on AWS. Arm uses Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. A core part of the company's solution is the use of Amazon EC2 Spot Instances, which let users take advantage of unused Amazon EC2 capacity on AWS. Because Arm's EDA workloads have varying compute and memory requirements, Arm uses a variety of instance families and types. "Using AWS Batch facilitates selecting different instance types and mixing them together," says Yun. "That helps us to achieve the scalability that we need." (A sketch of a mixed-instance Batch compute environment appears after this case study.)

Using the high scalability of AWS Batch, Arm can now run more than 53 million jobs per week and up to 9 million jobs per day. The company has scaled up to 350,000 virtual CPUs across more than 25,800 instances and is working on scaling up to 600,000, all using Spot Instances.

Arm's ability to select instance types to fit different jobs provides additional benefits. "Having the instance fit the job makes a huge difference in the usage of CPU and memory," says Yun. "If you have a limited selection of instance types and try to force the job to fit in, naturally, you'll have a lot of wasted resources." Because the company can use a large variety of Spot Instance types, Arm has been able to optimize its compute costs. "Using the AWS Graviton2 instance types provides 32 percent lower runtime for our simulation workloads," Yun says. "That performance is quite attractive in EDA workloads."

Another benefit of using AWS is improved productivity for Arm's engineering team. Before migrating to AWS, engineers had to submit jobs to a queue and wait for a resource to become available. Now, those verification jobs can be run with less waiting time, resulting in a much shorter turnaround time. This gives engineers more time to debug and tweak designs, if needed, meaning that products can be released on time or even earlier. "Because engineers can run as many necessary cycles as needed during the different design phases, we've been able to release product ahead of schedule, which doesn't happen often in the EDA industry," says Yun.

Arm is both a consumer of and a supplier to AWS. The company supplied intellectual property for AWS Graviton processors, which are designed by AWS to deliver the best price performance for cloud workloads running in Amazon EC2. Using CPUs based on the Arm Neoverse N1 processor to support the design and verification of future Arm chips is helping to drive Arm's business success thanks to the CPUs' delivery of higher performance at a lower cost.

Using AWS is also helping Arm to achieve its sustainability goals. By continuing to migrate away from its on-site data center, optimizing its compute using Spot Instances, and taking advantage of the efficiencies of AWS Graviton processors, Arm is reducing its carbon footprint. The company has committed to being net-zero carbon certified by 2030.

Completing the Migration for a Fully Modernized Solution

Arm will continue evaluating and prioritizing its workloads for migration. "We've been successful in migrating the most compute-intensive workloads to AWS," says Yun. "But our goal was never limited to that." The company will continue scaling workloads and hopes to run the complete design-verification process on AWS. "Our choice of using AWS was driven by the business. It's driven by our understanding of the cloud," Yun says. "It's also driven by how we're able to use what AWS has already created so we can build on top of that."

Arm hopes that its success in migrating and modernizing its EDA workloads will inspire other companies to change the way that they run workloads. "I would like to think that our experience using AWS not only benefits Arm but also benefits the EDA industry as a whole," says Yun. "We want to demonstrate to the EDA industry not only the benefits of using AWS Graviton processors but also what a modernized cloud solution can do. Using AWS services has helped us realize the deep benefit of migrating to the cloud."

Benefits of AWS

Can run more than 53 million jobs per week
Scaled up to 350,000 virtual CPUs
Achieved 32% lower runtime for simulation workloads
Decreased turnaround time for verification jobs
Increased engineer productivity
Optimized compute costs through managed services
Accelerated speed to market for products
Decreased carbon footprint

AWS Services Used

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.
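As referenced above, here is a minimal boto3 sketch of a managed AWS Batch compute environment that mixes several instance families on Spot capacity. All names, ARNs, subnets, and security groups are placeholders, and this is not Arm's actual configuration.

import boto3

batch = boto3.client("batch", region_name="us-east-1")

batch.create_compute_environment(
    computeEnvironmentName="eda-verification-spot",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "SPOT",  # use spare EC2 capacity for cost savings
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,
        "maxvCpus": 350000,  # illustrative ceiling, echoing the scale in the case study
        # Mix instance families so jobs with different CPU/memory profiles fit well.
        "instanceTypes": ["c6g", "m6g", "r6g", "c5", "r5"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)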
Armitage Technologies case study.txt
Armitage Technologies Uses Computer Vision Application at the Edge with AWS Panorama to Improve Crowd Management at Public Venues

Overview

IT solution provider Armitage Technologies Ltd. needed to design and deploy a computer vision application for public event organizers to capture and record the number of people gathered at events. Armitage built the application using Amazon SageMaker and AWS Panorama, an AWS-managed edge computing device that brings computer vision to on-premises camera networks. With the computer vision application on AWS, organizers improved crowd control by accurately recording 10,000 people daily, reduced security forces by 30 percent, and ensured protection of video data.

Opportunity: Helping a Public Organization Provide Crowd Control at Events

Armitage Technologies Ltd. (Armitage)—a Hong Kong-based technology services company founded in 1972—has delivered more than 10,000 IT projects to enterprises across Hong Kong and Mainland China. Increasingly, Armitage is focusing on emerging technologies such as Internet of Things (IoT), machine learning, computer vision, and artificial intelligence (AI). As part of this strategy, the company specializes in providing computer vision solutions—AI-based applications that use digital images from cameras and deep learning models to identify and classify objects quickly and accurately.

In late 2021, a public organization approached Armitage to help manage the number of people attending and leaving large public events. Norman Lam, head of innovation at Armitage Technologies Ltd., says, "Hong Kong has venue capacity limits because of COVID-19. We needed to develop a computer vision solution that connected seamlessly with IP cameras, so the organization can accurately record and control human traffic."

Armitage needed to quickly deploy the solution. However, connecting traditional on-premises camera management systems with existing IP cameras is a complicated, time-consuming process. Furthermore, streaming and processing on-premises video streams in the cloud for applications often requires high network bandwidth and infrastructure provisioning. "To support our computer vision application, we needed reliable technology that's highly available, even during weather disruptions," Lam says. Additionally, because of strict security requirements, Armitage needed to ensure video data remained in a local network while still being monitored remotely.

Solution: Bringing Computer Vision to Existing IP Cameras at the Edge on AWS

An AWS Partner, Armitage leveraged AWS Cloud technology to support its computer vision solution. "AWS manages security concerns like external access and provides data encryption. Plus, AWS offers seamless scalability, which was key in supporting our expansion plans in the broader Asia-Pacific market," says Lam.

Armitage uses AWS Panorama, an edge computing device that brings computer vision to on-premises camera networks via the AWS Panorama Appliance. The appliance can run computer vision models on a local area network, which is key for organizations with bandwidth constraints and data residency requirements. AWS Panorama also includes an IP62 (international protection) rating to protect video capture from dust and water in outdoor environments. In addition, Armitage implemented AWS Identity and Access Management (IAM) for enhanced security. With AWS Panorama, Armitage processes video feeds at the edge to control where data is stored and makes highly accurate predictions from a single management interface. Additionally, the provider limits application access with local storage encryption.

Armitage implemented its computer vision application on AWS Panorama to count human and vehicle traffic at two large outdoor public events in August and September 2022. The solution connected the company's application with 10 IP cameras mounted at park entrances and exits, providing parallel multi-model, multi-stream support with one Panorama appliance. Armitage also used Amazon SageMaker to reduce costs and development time for training custom AI models to count traffic. Amazon SageMaker Neo also helps developers optimize machine learning models for inference on supported edge devices to run faster with no loss in accuracy.

By deploying multiple camera sources with an AI model and application in one appliance, Armitage implemented the computer vision solution in under two days, which is 50 percent faster than with an on-premises device. "It was very simple to deploy the solution, train the models, and connect to the IP cameras on site," Lam says. "Instead of having to purchase multiple devices to manage the cameras, we only needed one device to connect and manage the entire solution." Also, video inference at the edge does not require video to be streamed to the cloud, and only results without personal data are sent to AWS for analytics. "AWS Panorama provides accuracy, reliability, and security, which were the three elements we needed for our solution," says Lam.

Benefits

• 50 percent – Halves the time to deploy computer vision solution
• Highly available – Automated surveillance around the clock
• 30 percent – Reduction in event security team
• Zero downtime – Reliably captures video streams despite weather disruptions
• Highly secure – Restricts access to video data

Outcome: Quickly Deploying an AI/ML Solution with Scalability and Accuracy

Using the Armitage computer vision solution with AWS Panorama, the organization accurately counted more than 10,000 people each day during the two public events, with personnel analyzing video feeds within one second. This aided in the reporting of real-time human traffic numbers to organizers, who closed the entrance to the event location immediately upon reaching full capacity. "The public organization could easily comply with COVID-19 regulations on capacity restrictions because of the accuracy of our computer vision solution on AWS Panorama," says Lam.

Furthermore, the public organization needed fewer event management employees, reducing its security team by 30 percent for both events. It also benefited from the reliability of the Armitage solution, which experienced no downtime throughout the two outdoor events, despite weather disruptions.

Next, Armitage plans to expand its computer vision solution on AWS Panorama to include transportation, logistics, and construction use cases. Lam concludes, "We're having conversations with potential customers and are confident we can expand our solution because of the scalability, reliability, and security of AWS."

About Company

Armitage Technologies Ltd. is a full-service IT company founded in Hong Kong in 1972, specializing in providing 21st century solutions and building technologies. Armitage serves international clients from different industries and delivers projects reliably and punctually, including, but not limited to, project development, IT support and maintenance, and AI solutions.

AWS Services Used

AWS Panorama is a collection of machine learning (ML) devices and a software development kit (SDK) that brings CV to on-premises internet protocol (IP) cameras.
Amazon SageMaker: Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
With AWS Identity and Access Management (IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
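The pattern of keeping video local and sending only aggregate results to the cloud can be illustrated with a short sketch in which a counting application publishes per-minute entry counts as Amazon CloudWatch metrics. The namespace, metric names, region, and counting function are our assumptions for illustration; this is not Armitage's code.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-east-1")

def report_counts(entrance_id: str, people_in: int, people_out: int) -> None:
    # Only aggregate counts leave the venue network; raw video stays local.
    cloudwatch.put_metric_data(
        Namespace="CrowdControl",
        MetricData=[
            {"MetricName": "PeopleIn", "Value": people_in, "Unit": "Count",
             "Dimensions": [{"Name": "Entrance", "Value": entrance_id}]},
            {"MetricName": "PeopleOut", "Value": people_out, "Unit": "Count",
             "Dimensions": [{"Name": "Entrance", "Value": entrance_id}]},
        ],
    )

# Called once per minute by the edge application with counts inferred
# from the local video streams.
report_counts("north-gate", people_in=37, people_out=12)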
Armut Case Study.txt
Armut Teknoloji Improves Customer Experience with Scalable Notification System Using AWS

Turkey-based Armut connects consumers with professionals offering a wide variety of services, including home improvement, tuition, home moving, and health and wellness. To manage jobs effectively, its service uses a matching algorithm and digital notifications. As the company grew, its legacy technology no longer met its needs, so Armut developed a new system using AWS, which increased notification reliability and the volume of jobs it could handle. The business is now generating more income and supporting a greater number of customers without any additional staff.

Using Machine Learning to Match Consumers and Professionals

Armut helps consumers find and arrange a wide variety of services such as home improvement, lessons, moving, and health and wellness. One of the features it offers to customers is the ability to connect them with the right professionals for their needs. To do this, Armut needs an accurate matching algorithm and a fast, reliable digital notification system. Armut—which also operates under the HomeRun brand—aims to offer the best experience possible for both service providers and customers using the latest technologies available.

The company runs most of its infrastructure on AWS. This includes the machine learning services that power its matching algorithm, which links customers and professionals through the Armut website and mobile app. Armut uses Amazon SageMaker, which helps data scientists and developers prepare, build, train, and deploy high-quality machine learning models. It also uses Amazon Kinesis Data Streams to easily stream data at any scale, and Amazon Managed Streaming for Apache Kafka to securely stream data with a fully managed, highly available Apache Kafka service.

After the customer and professional are matched, Armut's platform provides a way for service providers to send quotes to the customer, and for both parties to agree to the work being carried out. The system then manages the entire workflow, through to job completion. The notifications sent to customers and professionals via email, SMS, or push notifications are central to the customer experience. They communicate the various steps needed for the work to be completed, such as confirming the job and setting up a time. They also notify customers if a professional arrives late, a job is cancelled, or payments are due.

However, as the company grew by more than 1,000 percent over the last 5 years, its existing notification system no longer met its needs, with notifications often failing. It also didn't scale well, while its day-to-day maintenance requirements were becoming challenging and time-consuming for the IT team. Critically, Armut couldn't track whether notifications had been sent, delivered, or read. It was clear from users' feedback that notifications sometimes failed to send, which was often due to issues with local mobile network providers. In addition, the service was difficult to scale—with the tech team adjusting resources manually—and it struggled to cope with sudden increases in demand. Armut decided to develop a new notification infrastructure built using Amazon Web Services (AWS) that could offer better performance and scale as its customer base expanded.

Improving Notifications with Serverless Technology

Armut developed and implemented its new notification system in just 6 months using AWS Lambda, a serverless, event-driven compute service that lets it run code without thinking about servers or clusters. During the design and implementation phase, no customer data or service requests were lost—a key goal for Armut. "We were able to facilitate reliable communications with our customers throughout this transition," says Deniz Ozgen, associate director of engineering at Armut. "It's so important that they always know we're here for them, helping them to take care of their to-do list."

The system uses Amazon Simple Notification Service (Amazon SNS) to send notifications to customers. Through this, Armut's technical and customer teams can monitor and track notifications more closely than they could before. Accurate notification tracking was a key benefit of the new system. "Traceability was the primary concern for this project," says Ozgen. "Previously, we didn't have this much visibility into our notification system."

Armut uses Amazon MQ, a managed message broker service, to automatically resend failed notifications or reroute them to other channels. "With this setup, whenever a notification channel fails, it falls back to another channel, so all of the messages are delivered," says Ozgen. With the old approach, local mobile network operators caused bottlenecks that prevented the timely delivery of the messages. (A rough sketch of this fallback pattern follows this case study.)

Delivering 1,000 Emails a Second and 1.5 Million a Day

Armut can now send many more notifications in a given time period than it could previously—in one trial, the system sent 1,000 emails per second. It delivers more than 1.5 million emails a day over Amazon Simple Email Service (Amazon SES), a high-scale inbound and outbound cloud email service. Thanks to the use of AWS best practices, Armut also handles the millions of requests sent each month more efficiently. Around 20 million push notifications and 3 million SMS notifications are sent a month, with this expected to grow as more customers use the service. The ability to send more notifications also has a direct impact on income, as Armut charges professionals to provide quotes.

Building on Success

Armut is looking to use AWS machine learning to analyze customer behavior to determine the most effective channels for reaching consumers and professionals. For example, if data shows that customers don't regularly check emails, it could send notifications through SMS instead. The company also plans to implement the notification system for other brands to support growth. "We're launching in new countries and many of our internal services are going to use the notification system," says Ozgen. "Using AWS, we can grow with confidence."

Benefits of AWS

Managed millions of notifications and requests, and thousands of emails
Rerouted messages as push notifications or emails if SMS requests failed
Improved accuracy of request matching process
Supported customer growth and international expansion

About Armut Teknoloji

Armut is a major local services marketplace in Turkey. It operates in seven other EMEA countries under the HomeRun brand. The company helps consumers find and arrange a wide variety of services including home improvement, lessons, moving, and health and wellness.

AWS Services Used

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
Amazon Simple Email Service (SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS.
Serverless on AWS: Build and run applications without thinking about servers.
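As referenced above, here is a rough sketch of the channel-fallback pattern Ozgen describes: try SMS first, then fall back to a push topic, then to email. It uses boto3 with made-up topic ARNs and addresses; Armut's actual routing runs through Amazon MQ and its own services, so treat this purely as an illustration of the idea.

import boto3

sns = boto3.client("sns", region_name="eu-west-1")
ses = boto3.client("ses", region_name="eu-west-1")

def notify(phone: str, email: str, topic_arn: str, message: str) -> str:
    # Channel 1: SMS. Mobile network issues were the original bottleneck.
    try:
        sns.publish(PhoneNumber=phone, Message=message)
        return "sms"
    except Exception:
        pass
    # Channel 2: push notification via an SNS topic.
    try:
        sns.publish(TopicArn=topic_arn, Message=message)
        return "push"
    except Exception:
        pass
    # Channel 3: email via SES, so every message is eventually delivered.
    ses.send_email(
        Source="notifications@example.com",
        Destination={"ToAddresses": [email]},
        Message={"Subject": {"Data": "Job update"},
                 "Body": {"Text": {"Data": message}}},
    )
    return "email"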
Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS

by Gopi Krishnamurthy and Shreyas Subramanian | on 03 JUL 2023 | in Amazon SageMaker, Amazon SageMaker Ground Truth, Artificial Intelligence, Intermediate (200)

In computer vision (CV), adding tags to identify objects of interest or bounding boxes to locate the objects is called labeling. It's one of the prerequisite tasks to prepare training data to train a deep learning model. Hundreds of thousands of work hours are spent generating high-quality labels from images and videos for various CV use cases. You can use Amazon SageMaker Data Labeling in two ways to create these labels:

Amazon SageMaker Ground Truth Plus – This service provides an expert workforce that is trained on ML tasks and can help meet your data security, privacy, and compliance requirements. You upload your data, and the Ground Truth Plus team creates and manages data labeling workflows and the workforce on your behalf.

Amazon SageMaker Ground Truth – Alternatively, you can manage your own data labeling workflows and workforce to label data.

Specifically, for deep learning-based autonomous vehicle (AV) and Advanced Driver Assistance Systems (ADAS), there is a need to label complex multi-modal data from scratch, including synchronized LiDAR, RADAR, and multi-camera streams. For example, the following figure shows a 3D bounding box around a car in the Point Cloud view for LiDAR data, aligned orthogonal LiDAR views on the side, and seven different camera streams with projected labels of the bounding box.

AV/ADAS teams need to label several thousand frames from scratch, and rely on techniques like label consolidation, automatic calibration, frame selection, frame sequence interpolation, and active learning to get a single labeled dataset. Ground Truth supports these features. For a full list of features, refer to Amazon SageMaker Data Labeling Features. However, it can be challenging, expensive, and time-consuming to label tens of thousands of miles of recorded video and LiDAR data for companies that are in the business of creating AV/ADAS systems. One technique used to solve this problem today is auto-labeling, which is highlighted in the following diagram for a modular functions design for ADAS on AWS.

In this post, we demonstrate how to use SageMaker features such as Amazon SageMaker JumpStart models and asynchronous inference capabilities along with Ground Truth's functionality to perform auto-labeling.

Auto-labeling overview

Auto-labeling (sometimes referred to as pre-labeling) occurs before or alongside manual labeling tasks. In this module, the best-so-far model trained for a particular task (for example, pedestrian detection or lane segmentation) is used to generate high-quality labels. Manual labelers simply verify or adjust the automatically created labels from the resulting dataset. This is easier, faster, and cheaper than labeling these large datasets from scratch. Downstream modules such as the training or validation modules can use these labels as is.

Active learning is another concept that is closely related to auto-labeling. It's a machine learning (ML) technique that identifies data that should be labeled by your workers. Ground Truth's automated data labeling functionality is an example of active learning. When Ground Truth starts an automated data labeling job, it selects a random sample of input data objects and sends them to human workers.
When the labeled data is returned, it’s used to create a training set and a validation set. Ground Truth uses these datasets to train and validate the model used for auto-labeling. Ground Truth then runs a batch transform job to generate labels for unlabeled data, along with confidence scores for new data. Labeled data with low confidence scores is sent to human labelers. This process of training, validating, and running batch transforms is repeated until the full dataset is labeled.

In contrast, auto-labeling assumes that a high-quality, pre-trained model exists (either privately within the company, or publicly in a hub). This model is used to generate labels that can be trusted and used for downstream tasks such as label verification tasks, training, or simulation. In the case of AV/ADAS systems, this pre-trained model is deployed onto the car at the edge, and can also be used within large-scale, batch inference jobs on the cloud to generate high-quality labels. JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can use JumpStart to share models within your organization.

Let’s get started!

Solution overview

For this post, we outline the major steps without going over every cell in our example notebook. To follow along or try it on your own, you can run the Jupyter notebook in Amazon SageMaker Studio. The following diagram provides a solution overview.

Set up the role and session

For this example, we used a Data Science 3.0 kernel in Studio on an ml.m5.large instance type. First, we do some basic imports and set up the role and session for use later in the notebook:

import sagemaker, boto3, json
from sagemaker import get_execution_role
from utils import *

# Role and session are used throughout the rest of the notebook
aws_role = get_execution_role()
sess = sagemaker.Session()

Create your model using SageMaker

In this step, we create a model for the auto-labeling task. You can choose from three options to create a model:

Create a model from JumpStart – With JumpStart, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset.
Use a model shared via JumpStart with your team or organization – You can use this option if you want to use a model developed by one of the teams within your organization.
Use an existing endpoint – You can use this option if you have an existing model already deployed in your account.

To use the first option, we select a model from JumpStart (here, we use mxnet-is-mask-rcnn-fpn-resnet101-v1d-coco). A list of models is available in the models_manifest.json file provided by JumpStart. We use this JumpStart model that is publicly available and trained on the instance segmentation task, but you are free to use a private model as well.
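Before retrieving the model artifacts, model_id and model_version need to be set. As a quick, illustrative way to browse the manifest, you could do something like the following; this is a sketch that assumes the manifest is a JSON list of objects with a model_id field (as in the JumpStart example notebooks; the exact layout may differ):

import json

# Load the manifest (assumed: a JSON list of {"model_id": ..., ...} objects)
with open("models_manifest.json") as f:
    manifest = json.load(f)

# JumpStart model IDs encode the task; "-is-" marks instance segmentation models
is_models = sorted({m["model_id"] for m in manifest if "-is-" in m["model_id"]})
print(f"Found {len(is_models)} instance segmentation models")

# The model used in this post; "*" selects the latest available version
model_id, model_version = "mxnet-is-mask-rcnn-fpn-resnet101-v1d-coco", "*"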
In the following code, we use image_uris, model_uris, and script_uris to retrieve the right parameter values to use this MXNet model in the sagemaker.model.Model API to create the model:

from sagemaker import image_uris, model_uris, script_uris, hyperparameters
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base

endpoint_name = name_from_base(f"jumpstart-example-infer-{model_id}")
inference_instance_type = "ml.p3.2xlarge"

# Retrieve the inference docker container uri
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,  # automatically inferred from model_id
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)

# Retrieve the inference script uri. This includes scripts for model loading, inference handling, etc.
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

# Retrieve the base model uri
base_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)

# Create the SageMaker model instance
model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=base_model_uri,
    entry_point="inference.py",  # entry point file in source_dir and present in deploy_source_uri
    role=aws_role,
    predictor_cls=Predictor,
    name=endpoint_name,
)

Set up asynchronous inference and scaling

Here we set up an asynchronous inference config before deploying the model. We chose asynchronous inference because it can handle large payload sizes and can meet near-real-time latency requirements. In addition, you can configure the endpoint to auto scale and apply a scaling policy to set the instance count to zero when there are no requests to process. In the following code, we set max_concurrent_invocations_per_instance to 4. We also set up auto scaling such that the endpoint scales up when needed and scales down to zero after the auto-labeling job is complete.

from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig

async_config = AsyncInferenceConfig(
    output_path=f"s3://{sess.default_bucket()}/asyncinference/output",
    max_concurrent_invocations_per_instance=4,
)

# The model deployment (which yields base_model_predictor, used below) and the
# scaling-target registration are elided here; refer to the notebook.
. . .

response = client.put_scaling_policy(
    PolicyName="Invocations-ScalingPolicy",
    ServiceNamespace="sagemaker",  # The namespace of the AWS service that provides the resource
    ResourceId=resource_id,  # Endpoint name
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",  # SageMaker supports only instance count
    PolicyType="TargetTrackingScaling",  # 'StepScaling'|'TargetTrackingScaling'
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,  # Target value for the metric (SageMakerVariantInvocationsPerInstance)
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": endpoint_name}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)

Download data and perform inference

We use the Ford Multi-AV Seasonal dataset from the AWS Open Data Catalog. First, we download and prepare the data for inference. We have provided preprocessing steps to process the dataset in the notebook; you can change them to process your own dataset. Then, using the SageMaker API, we can start the asynchronous inference job as follows:

import glob
import time

max_images = 10
input_locations, output_locations = [], []

for i, file in enumerate(glob.glob("data/processedimages/*.png")):
    if i >= max_images:
        break
    input_1_s3_location = upload_image(sess, file, sess.default_bucket())
    input_locations.append(input_1_s3_location)
    async_response = base_model_predictor.predict_async(input_path=input_1_s3_location)
    output_locations.append(async_response.output_path)

This may take up to 30 minutes or more depending on how much data you have uploaded for asynchronous inference. You can visualize one of these inferences as follows:

plot_response('data/single.out')

Convert the asynchronous inference output to a Ground Truth input manifest

In this step, we create an input manifest for a bounding box verification job on Ground Truth. We upload the Ground Truth UI template and label categories file, and create the verification job; a minimal sketch of the manifest conversion follows.
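The notebook uses its own helper utilities for this conversion; as a rough sketch of the idea (not the notebook’s actual code), each asynchronous inference output is parsed into bounding boxes and written as one JSON line of an augmented manifest. The parse_detections helper and the image dimensions below are hypothetical placeholders.

import json
import boto3

s3 = boto3.client("s3")

def to_manifest_line(image_s3_uri, boxes, width, height, label_attr="auto-labels"):
    # One line of a Ground Truth input manifest for a bounding box
    # verification/adjustment job. `boxes` is a list of
    # (class_id, left, top, box_width, box_height) tuples.
    return {
        "source-ref": image_s3_uri,
        label_attr: {
            "image_size": [{"width": width, "height": height, "depth": 3}],
            "annotations": [
                {"class_id": c, "left": l, "top": t, "width": w, "height": h}
                for (c, l, t, w, h) in boxes
            ],
        },
        f"{label_attr}-metadata": {
            "type": "groundtruth/object-detection",
            "human-annotated": "no",
        },
    }

# parse_detections is a hypothetical helper that maps the model's output
# format to (class_id, left, top, width, height) tuples.
lines = [
    to_manifest_line(in_loc, parse_detections(out_loc), 1024, 768)
    for in_loc, out_loc in zip(input_locations, output_locations)
]
s3.put_object(
    Bucket=sess.default_bucket(),
    Key="gt-input/auto-labels.manifest",
    Body="\n".join(json.dumps(line) for line in lines),
)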
The notebook linked to this post uses a private workforce to perform the labeling; you can change this if you’re using other types of workforces. For more details, refer to the full code in the notebook.

Verify labels from the auto-labeling process in Ground Truth

In this step, we complete the verification by accessing the labeling portal (see the linked documentation for more details). When you access the portal as a workforce member, you will be able to see the bounding boxes created by the JumpStart model and make adjustments as required. You can use this template to repeat auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks.

Clean up

In this step, we clean up by deleting the endpoint and the model created in previous steps:

# Delete the SageMaker model and endpoint
base_model_predictor.delete_model()
base_model_predictor.delete_endpoint()

Conclusion

In this post, we walked through an auto-labeling process involving JumpStart and asynchronous inference. We used the results of the auto-labeling process to convert and visualize labeled data on a real-world dataset. You can use the solution to perform auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. You can also explore using tools like the Segment Anything Model for generating segment masks as part of the auto-labeling process. In future posts in this series, we will cover the perception module and segmentation. For more information on JumpStart and asynchronous inference, refer to SageMaker JumpStart and Asynchronous inference, respectively. We encourage you to reuse this content for use cases beyond AV/ADAS, and to reach out to AWS for any help.

About the authors

Gopi Krishnamurthy is a Senior AI/ML Solutions Architect at Amazon Web Services based in New York City. He works with large automotive customers as their trusted advisor to transform their machine learning workloads and migrate to the cloud. His core interests include deep learning and serverless technologies. Outside of work, he likes to spend time with his family and explore a wide range of music.

Shreyas Subramanian is a Principal AI/ML Specialist Solutions Architect who helps customers solve their business challenges using machine learning on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning to accelerate optimization tasks.
AWS announces 21 startups selected for the AWS generative AI accelerator _ AWS Startups Blog.txt
AWS Startups Blog

AWS announces 21 startups selected for the AWS generative AI accelerator

by Kathryn Van Nuys | on 24 MAY 2023 | in Announcements, Generative AI, Startup

AWS is excited to announce the cohort of startups accepted into the global AWS Generative AI Accelerator. The program kicks off May 24th at our San Francisco AWS Startup Loft and closes on July 27th. Over the course of the 10-week program, participants will receive tailored technical advice, dedicated mentorship, an opportunity to pitch their demos to venture capitalists (VCs) in the AWS network, and up to $300,000 in AWS credits. Critically, they will also have the opportunity to foster lifelong connections with their fellow founders and within AWS.

Our finalists come from various industries, backgrounds, and geographic regions, but they all have one thing in common: they are using generative artificial intelligence (AI) technology to drive unprecedented innovation in their space. They’re exploring practical solutions to problems such as illiteracy and healthcare burnout, and designing tools that drastically reduce time spent on costly, tedious tasks. No matter their vision, all of these startups are proving what’s possible with generative AI and boldly reinventing applications, data touchpoints, and customer experiences, to name a few.

Backing the upcoming leaders of the generative AI landscape

Startups are the lifeblood of innovation, and AWS is eager to support them in developing incredible generative AI solutions. Many of the AWS Startups team are former founders or VCs, and we embrace this chance to give back to these startups in meaningful, actionable ways.

“Generative AI holds tremendous potential to revolutionize how humans interact with technology and with each other, while democratizing access to new and existing technology in a way that is unprecedented,” says Jon Jones, vice president of compute and AI/ML services at AWS. “Customers are already seeing value in streamlining processes, accelerating product development, and using AI as a trusted companion to increase productivity and better serve their clients. We are excited to partner with these innovators on their journey to solve some of the world’s biggest challenges.”

Drumroll, please

Please join us in extending a warm welcome to the 21 AWS Generative AI Accelerator program finalists.

Education

Ello
Ello leverages large language models (LLMs) and AI solutions to perfectly tailor literacy lessons to each young student they reach. Through interactive reading sessions from real books, Ello becomes a motivational learning companion that transforms children into curious, enthusiastic readers.

Marketing, social, and advertising

Crate
On a mission to create an open internet with no boundaries, Crate invites users to curate a personal, shareable artifact made up of their favorite pieces from anywhere on the web. The team puts AI in the hands of users to help them tell better stories with auto-generated images, text, and instant summaries.

qlip
qlip is an AI-powered video highlights generator that helps users grow their social media presence by automatically repurposing long-form videos into short highlights primed for today’s audiences.

OpenAds
OpenAds solves advertising challenges for publishers, consumers, and advertisers by identifying and suggesting ads that match a business’s user experience (UX), are tailored to customer advertising and privacy preferences, and keep creative control in the hands of advertisers.
Entertainment and gaming

Leonardo Ai
Leonardo Ai is an AI-driven content production suite tailored for creators across diverse sectors, with a core focus on game development artists. Through the platform, developers can utilize generative AI solutions that integrate with their workflows to unlock their creativity and accelerate content production from months to minutes.

Storia
Built by leading AI researchers and engineers, Storia operates as a creative assistant for rapid film previsualization and production. Story producers can experiment with AI-generated videos, visualize what their product would look like shot in different styles, and build collaborative and comprehensive storyboards in minutes.

Krikey
Krikey uses generative AI to make it easier for creators to breathe life into animations, helping them automate character motion with a variety of 3D avatars, augmented reality (AR) gaming toolkits, and 3D animations. Animations can be seamlessly integrated and exported into the creator’s platform of choice, significantly shortening production time and enhancing the creative process.

Poly
Poly is an AI-enabled infinite design asset marketplace (offering seamless physically based rendering [PBR] textures, illustrations, icons, sounds, and more) that lets anyone use or generate stunning, 8K high-definition (HD) professional design assets in seconds with AI.

Flawless
To counteract rising on-set production costs and time constraints, Flawless gives artists a suite of cinematic-quality, AI-powered tools that allow them to rapidly and affordably iterate, experiment, and refine their content.

Healthcare and life sciences

Knowtex
Knowtex empowers clinicians with voice-AI-automated note-taking and coding from natural conversation, to combat burnout and allow focus on patient care.

Vevo
Vevo is building the world’s first atlas of how drugs interact with patient cells in living organisms at single-cell resolution. Vevo’s foundation models trained on this atlas faithfully capture disease biology, enabling generative design of drugs that are more likely to treat disease in humans.

Ordaōs
Ordaōs is a human-enabled, machine-driven drug design company. Their miniPRO proteins help drug hunters deliver treatments that are safer and more effective than traditional discovery methods.

Nosis Bio
Nosis Bio is enabling the future of targeted drug delivery by integrating deep expertise in generative AI and high-throughput biochemistry.

Finance

Theia Insights
Theia Insights leverages the power of AI to synthesize and distill financial data, generating real-time insights beyond human research capability to inform the investment management community, helping individual and institutional investors make better decisions.

Data and knowledge management

Unwrap
Powered by AI and ML, Unwrap analyzes data from multiple customer feedback channels at scale, providing auto-labeling, semantic search, and automatic alerts that strengthen the feedback loop between companies and their customers.

Stack AI
Stack is a no-code interface that helps businesses of all sizes build and deploy AI applications, including chatbots, document processing, content creation, and automated customer support, in minutes.

Nixtla
Nixtla is building a state-of-the-art, disruptive, open-source ecosystem that uses AI to unlock scalable, lightning-fast, and user-friendly time series forecasting and anomaly detection.
Wand
Wand enables businesses to sync data from multiple sources to rapidly build collaborative, measurable, and scalable AI solutions. From predictive models to customized LLMs, teams have the power to solve business problems and create value faster than ever before.

Griptape
Griptape’s open-source framework and managed service enable developers to enhance LLMs with chain-of-thought capabilities, creating context-aware conversational, copilot, and autonomous agents.

AI ethics, safety, and security

Bunked
Bunked distinguishes AI-generated content from real content using blockchain technology.

Protopia AI
Protopia AI provides data protection and privacy-preserving AI/ML technologies that specialize in enabling AI algorithms and software platforms to operate without the need to access plain-text information. The company works with enterprises and generative AI/LLM providers to enable maintaining ownership and confidentiality of enterprise data while using AI/ML solutions.

AWS is excited to act as a catalyst for these forward-thinking startups. We continue to build upon the legacy of our previous accelerator programs—such as the AWS Impact Accelerator—to provide founders with the resources, guidance, and networking opportunities they need to scale and succeed. In the same way AWS democratized the cloud by expanding access to industry-leading technology, we look forward to offering our scale, expertise, and relationships to the next generation of companies at the forefront of generative AI innovation.

TAGS: Accelerators

Kathryn Van Nuys
Kathryn Van Nuys is the Head of North America Startup Business Development at Amazon Web Services (AWS). Kathryn spent the earlier part of her career in financial services, working in capital markets as well as sales and trading at Citigroup and Lehman Brothers. She later joined a number of early-stage startups, building their capital markets and partnership teams, before moving to AWS to scale her efforts in helping startups achieve growth.
AWS Case Study - Ineos Team UK.txt
INEOS TEAM UK Accelerates Boat Design for America’s Cup Using AWS

2020

Using AWS, INEOS TEAM UK can process thousands of design simulations for its America’s Cup boat in one week, versus more than a month using an on-premises environment. INEOS TEAM UK will compete in the 36th edition of the America’s Cup in 2021. The team is using an HPC environment running on Amazon EC2 Spot Instances to help design its boat for the competition.

About INEOS TEAM UK

Formed in 2018, INEOS TEAM UK aims to bring the America’s Cup—the oldest international sporting trophy in the world—to Great Britain. Based in Portsmouth, INEOS TEAM UK is led and backed by Sir Jim Ratcliffe, the founder and chairman of INEOS, a global chemical producer. The team also includes Sir Ben Ainslie, a previous America’s Cup winner, as principal and skipper, and four-time America’s Cup winner Grant Simmer as CEO.

The America’s Cup Dream

INEOS TEAM UK was formed in 2018 to bring the America’s Cup to Great Britain in 2021, when the 36th edition of the race takes place in Auckland, New Zealand. Like all the teams, INEOS TEAM UK will compete in a boat whose design will have followed guidelines set by race organizers to ensure the crew’s sailing skills are fully tested. The aim of the restrictions—which limit on-water design trials, too—is also to control the cost of entering the race and to attract as many entrants as possible.

A Technology Boat Race

Despite the restrictions, teams still have control over features such as the shape of the boat’s monohull and foils, but with limited on-water testing, engineers must turn to computer-based simulations to optimize their designs. They depend on the computational power available to process thousands of simulations, exploring possible boat shapes and positions on the water. In the case of INEOS TEAM UK, for example, it needs 2,000–3,000 computational fluid dynamics (CFD) simulations to design the dimensions of just a single boat foil.

To run these simulations using the team’s on-premises high performance computing (HPC) resources could take more than a month. Nick Holroyd, head of design at INEOS TEAM UK, says, “With so many design decisions to make before the competition, a month was too long. It reduced the time our engineers had to consider the results, limiting the freedom they needed to be innovative and make the right choices.”

Greater HPC Scale, Lower Cost

The team turned to Amazon Web Services (AWS) to migrate its CFD simulations to the AWS Cloud. The team chose AWS because of the scale of its HPC resources as well as its cost-effectiveness. INEOS TEAM UK could keep its costs low by using Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which allow customers to access unused Amazon EC2 capacity.

To get the performance INEOS TEAM UK required, and on budget, the team worked with AWS Solutions Architects and AWS Professional Services consultants, who helped design an HPC environment based on multiple Availability Zones in multiple regions and Amazon EC2 Spot Instances, which provided a 65 percent cost saving compared to on-demand capacity. For the hull, whose design needed hundreds of compute cores for every simulation, the team used Amazon EC2 C5 instances in addition to the latest Amazon EC2 C5n Nitro-powered instances with Elastic Fabric Adapter (EFA) network interfaces. To ensure fast disk performance for the thousands of simulations completed each week, the team also used Amazon FSx for Lustre to provide a fast, scalable, and secure high-performance file system based on Amazon Simple Storage Service (Amazon S3).

“The speed combined with the low cost of the Amazon EC2 Spot Instances means we can do many thousands more simulations within our design budget,” says Holroyd. “One question I constantly ask myself is whether we’re spending our money wisely. Using AWS, I have no doubts because it massively compresses the computational turnaround, maximizing design time.”

Driving Better Innovation

By running its CFD workloads on AWS, INEOS TEAM UK engineers have more time to innovate. They can wake up on a Monday morning with an idea and test it, knowing that by the end of the day, they’ll have a set of results to look at and build on.

Holroyd says, “Heading towards a design deadline is always a frantic time. You have to make decisions fast. Using AWS, we have more time to think about what makes a design successful or not. We can then use this knowledge in our next design iteration. AWS allows us to take bigger design steps, simply because we have more time to understand our results.”

Sir Ben Ainslie, skipper and team principal at INEOS TEAM UK, and Max Star, CFD engineer, explain how using an HPC environment on AWS helped the team design the INEOS TEAM UK boat.

Benefits of AWS

Gains large-scale HPC capacity
Supports thousands of simulations each week
Reduces HPC costs using Amazon EC2 Spot Instances
Enables engineers to be more innovative

AWS Services Used

Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Amazon FSx for Lustre – Amazon FSx for Lustre makes it easy and cost effective to launch and run the world’s most popular high-performance file system. Use it for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.

AWS Professional Services – Adopting the AWS Cloud can provide you with sustainable business advantages. Supplementing your team with specialized skills and experience can help you achieve those results.

“AWS allows us to take bigger design steps, simply because we have more time to understand our results.”
Nick Holroyd, Head of Design, INEOS TEAM UK
AWS Case Study - StreamAMG.txt
StreamAMG Scores Record Viewership and Uninterrupted Delivery

2020

About StreamAMG

StreamAMG enables organisations across sports, media, and betting to deliver video content at scale and offer exceptional streaming experiences.

A New Environment, a New Infrastructure

Live sports streaming provider StreamAMG quickly realized early in the year that the 2020 English football calendar would be radically different from what had gone before. With COVID-19 disruption growing and matches played behind closed doors, the company began planning for a very different season – one where more users than ever would rely on its over-the-top (OTT) platforms to support their club, and clubs would increasingly rely on streamed matches as a revenue source. To support the unprecedented load the new season was likely to bring, StreamAMG began to reexamine its platform architecture to cope with the challenges ahead. “We started working internally to formulate a plan which would deliver a technical solution that could scale above and beyond our requirements,” says Andrew De Bono, StreamAMG’s CTO.

The company needed a set-up agile enough to deal with the uncertainties of the new situation, while still managing potentially millions of hits per minute with zero failures. While the streaming part of the business had to operate with the highest levels of availability, the company also needed to ensure its user membership, payment, and entitlement management systems could easily handle the predicted jump in demand. And both elements needed to be able to scale to traffic levels that could be 400 to 500 percent of what the company might see in a normal season.

“Being in the live sports business, failure is not really an option at all. Even going down for 10 seconds is going to impact tens of thousands or hundreds of thousands of users simultaneously. Scale and resiliency were definitely the two most important elements for us,” De Bono says.

To achieve that, the teams undertook a comprehensive application transformation, replacing the most important components of the previous application with a cloud-native system that underpinned the load-bearing parts of StreamAMG’s products with microservices and serverless technologies based on AWS. Services include Amazon API Gateway, AWS Lambda, Amazon CloudFront, Amazon DynamoDB, and Amazon ElastiCache for Memcached. StreamAMG also adopted Amazon Kinesis Data Firehose to collect and process actions and user activity in real time, and stream the data for storage later on.

A project of similar scale and significance might be expected to take several months, even without the disruption caused by COVID-19. But working to a hard deadline of the new season kickoff, the project was delivered in just 12 weeks, thanks to the close collaboration between the AWS and StreamAMG teams.

A Great Time to Score

To accommodate the uncertain demands of the season, the team wanted to create an infrastructure that could cope with the heaviest loads and still scale with demand. When the season began, they proved they had done just that: despite the massive spike in usage, StreamAMG delivered all matches with near zero downtime or interruption. The company delivered 2.9 million streams, watched by hundreds of thousands of fans, and an overall data uplift of 500 percent – all without a hitch and with no updates to the architecture needed.

In the first minutes and hours of the season, the StreamAMG team was able to monitor how the system was dealing with the matches through Amazon CloudWatch, which provided visibility on both the platform and the traffic in real time, allowing the company to be fully aware and in control of the application.

The flawless start to the season was greatly appreciated by StreamAMG’s customers, according to De Bono, and raised the company’s profile across the industry: “In the OTT industry reputation is key and our ability to consistently deliver scalable and resilient platforms has afforded us such a dependable reputation,” he says.

Agility and Performance

As well as coping with unexpected demand, the scalability of the new system made a significant difference to StreamAMG’s cost optimization, raising its performance ceiling without raising running costs. Due to the nature of live sports, StreamAMG’s system might receive only light usage the majority of the time when no live matches are being played, and then see a huge spike in demand on matchday. The previous system had to be primed to deal with maximum usage 24/7, even when the company knew 90 percent of the time that capacity wouldn’t be required. That all changed with AWS.

“We really are paying for every single user on our platform, nothing less and nothing more, so we really could align the cost with the actual usage, rather than taking on massive capex hits to support the increased capacity on our application,” says De Bono.

Benefits of AWS

Availability
Agility and performance
Scalability and elasticity
Cost optimization and cost savings

AWS Services Used

Amazon CloudFront – Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

AWS Lambda – AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration.

Amazon DynamoDB – Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

Amazon Kinesis Data Firehose – Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk.

“In the OTT industry reputation is key and our ability to consistently deliver scalable and resilient platforms has afforded us such a dependable reputation.”
Andrew De Bono, CTO, StreamAMG
AWS Case Study_ Creditsafe.txt
Creditsafe Cuts Technology Administration Burden, Builds for Future Success Using AWS

2022

About Creditsafe

Creditsafe, headquartered in Dublin, Ireland, with 23 offices across 13 countries worldwide, specializes in business credit checking. Its database contains insights on more than 320 million businesses, with data coming from over 70 different countries and provided to over 200,000 subscribers globally. It is one of the world’s most-used providers of online business credit reports and, each month, it predicts more than 70 percent of all business insolvencies.

Eliminating Risks and Gaining Flexibility and Resilience on AWS

Creditsafe, founded in Oslo, Norway, and now headquartered in Dublin, Ireland, discovered early success providing data analysis to business customers. The company specializes in business credit checking and has the biggest wholly owned database in the industry, containing insights on more than 320 million businesses. Over more than two decades, it had gradually built large and complex on-premises systems. The company migrated to Amazon Web Services (AWS) to optimize how it worked and to prepare for future success.

As the business grew, its on-premises systems needed to expand to accommodate increasing amounts of data. The overall system was built piece by piece, with more and more resources going into keeping that setup running. “We wanted to put our efforts into our core business, not running servers,” says Ryland Marsh, director of technical engineering at Creditsafe. “Migrating to AWS was a great opportunity to plan a new, optimized approach.” Improving the collection, storage, and analysis of data while the business remained operational was key.

Creditsafe chose AWS as the platform for its data and all of its customer-facing services and products, such as business credit reports, international credit reports, and company monitoring. “The whole goal of moving to the cloud was to tick the three main boxes around scalability, reliability, and availability,” says Brian McGeough, director of production at Creditsafe. “Cloud eliminates risks like storage area network failures and other things that could be catastrophic. We can focus on delivering to our customers, not maintaining an on-premises system.”

A Quick and Successful Migration

Migrating its terabytes of data and related tools to AWS was just the beginning, though. The migration was an opportunity to improve the accessibility and sharing of data across many regions and countries, strategize, and plan for the future. “For us, this wasn’t just lift and shift, but actually a way to improve our ways of working as an organization,” says Marsh.

Working with AWS Partner Cognizant, Creditsafe identified its needs and worked out a timeline for the migration. Cognizant has years of experience on many migrations, meaning Creditsafe was able to implement real-world best practices. It began by migrating its UK data acquisition operations. Data from all Creditsafe’s providers now feeds natively into Amazon Redshift, which uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes. The data then moves into the company’s data vaults, where it is ready for use.

For its migration to AWS, Creditsafe used the AWS Migration Acceleration Program (MAP), a comprehensive and proven cloud migration program based on AWS experience in migrating thousands of enterprise customers to the cloud. Enterprise migrations can be complex and time-consuming, but MAP can help organizations accelerate their cloud migration and modernization journeys with an outcome-driven methodology.

Participation in MAP really did accelerate Creditsafe’s migration. “Working with Cognizant, we were able to scale much more quickly,” says McGeough. “We have technical expertise in house, but using MAP let us plan and execute the migration with confidence—and Cognizant’s experience helped us direct that where it needed to go and fill in any gaps.”

The first phase of migration has seen about 20 percent of Creditsafe’s systems successfully migrated. “We’re happy with how it’s going and we’re now running parallel workstreams for other jurisdictions,” says Marsh. “The knowledge gained from the first migration will make it easier and faster. Using the AWS Migration Acceleration Program has definitely been the right choice for us.”

Rather than investing in building out and maintaining infrastructure, Creditsafe is reallocating staff and resources to expanding its data analysis skills, with plans to use artificial intelligence (AI) and machine learning (ML). “We found it very difficult to cross-reference data across regions and jurisdictions previously, because they were effectively independent systems and services,” says Marsh. “Now we can get more value out of our data and focus on innovating. That’s exciting.”

Benefits of AWS

Eliminated burden of multiple on-premises servers
Increased reliability for terabytes of data with cloud storage
Improved transparency of data within the company
Achieved successful phase-one migration of UK data acquisition operations

AWS Services Used

Amazon Redshift – Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

AWS Glue – AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.

Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.

“Cloud eliminates risks like storage area network failures and other things that could be catastrophic. We can focus on delivering to our customers, not maintaining an on-premises system.”
Brian McGeough, Director of Production, Creditsafe
AWS Case Study_ Immowelt.txt
immowelt Modernizes Real Estate Portal, Controls Costs, and Boosts Innovation Using AWS

2022

About immowelt Group

The immowelt Group (IWG) runs property-finding portals for German-speaking businesses and individuals. The company has more than 500 employees and is headquartered in Nuremberg, Germany. IWG is part of the AVIV Group, one of the world’s largest digital real estate tech companies, which is in turn part of German publishing giant Axel Springer SE. Since 1991, immowelt has run real estate portals that help German-speaking businesses and individuals find their dream property.

The immowelt Group runs popular real estate portals that help German-speaking businesses and individuals find their dream property. When its on-premises data centers threatened its ability to innovate and provide a responsive service to customers, it turned to AWS. The company completed a successful lift-and-shift migration to AWS and simultaneously re-architected its core infrastructure. Using AWS, immowelt has achieved greater visibility of IT costs, lowered maintenance overheads, and created a more efficient, flexible development process for future growth.

Migrating to AWS and Re-Architecting Core Systems

immowelt wanted to improve its development team’s ability to create new features and solutions as well as modernize the organization’s infrastructure to support business growth. To do this, it needed to make its systems more reliable. “We wanted a world where we didn’t have to think about maintaining the underlying hardware and its limitations while working on scale,” says Cemal Acar, group leader of DevOps and infrastructure at immowelt. “We wanted to focus on innovation, expanding the business, and reducing time to market.”

immowelt’s infrastructure consisted of two data centers, some of which regularly suffered outages, so that customers could not access the immowelt real estate portals. The existing applications were complex, and changes to code or systems in one area often caused problems or failures elsewhere. Keeping the system up and running required significant time and specialized skills held by only a few team members, which left the business vulnerable in the event of employees leaving their roles.

immowelt was finding its existing IT estate expensive and cumbersome to maintain, hindering the business and its ability to innovate. The company looked to modernize its infrastructure using Amazon Web Services (AWS). It ran multiple projects simultaneously, with one stream focused on re-architecting workloads that were hosted on premises or already migrated to the cloud. It also aimed to migrate its remaining workloads to AWS as a straight lift-and-shift project.

Re-architecting workloads during a lift-and-shift migration is a major undertaking, but the leadership and technical teams believed that the long-term benefits outweighed any risks. “It would have been too expensive to move our whole setup to AWS in its previous state,” says Acar. “By lifting and shifting some legacy systems, while re-architecting others, we had an opportunity to create a platform to support further modernization in the future.”

Responsive Support Eases a Complex Project

immowelt received funding and expertise from AWS throughout the migration. The immowelt team used the AWS Migration Acceleration Program (MAP), which provides companies with guidance and help to identify gaps in skills ahead of migration. The program also awards credits and assesses how prepared the wider organization is for change through a Migration Readiness Assessment, which covers people and organizational design aspects as well as technology. By using AWS Well-Architected reviews, immowelt received support from AWS solutions architects on a regular basis. “Through the AWS Migration Acceleration Program, we could progress faster with the changes we wanted. It also helped with our expenses,” says Acar.

When the migration team ran into challenges, it turned to AWS for assistance. AWS Professional Services provided advice on architectural issues for immowelt, while AWS Enterprise Support responded quickly to urgent issues. “We’d open a ticket and our AWS support team would help us to resolve the problem through online chats or phone conversations,” says Acar. “It would get more hands-on, too, if we needed it. We appreciated the high levels of responsiveness, professionalism, and expertise.”

Faster Innovation with Updates Several Times a Day

With the help of AWS, immowelt’s development teams were able to enhance the “you build it, you run it” approach, allowing them to roll out new features and fixes more frequently and quickly. The company publishes updates several times a day, compared to every 2–4 weeks on the on-premises architecture. This means customers on its real estate portals experience a responsive service with the latest capabilities. “It’s another world for us now—and for our customers,” says Acar.

Since the migration, the teams have increased their use of APIs and infrastructure as code from 50 percent to 99 percent. This makes it easier to reuse development work and gives developers more time for innovation. Engineers are empowered to take ownership of their work, too, with opportunities to gain new cloud skills that boost their motivation and productivity.

Greater Visibility and Cost Control

immowelt business and IT leaders now have greater visibility of IT costs, which makes budgeting and planning easier and more effective. Previously, the complexity of systems made it difficult to track where budget was being spent. The company has also cut the cost of IT maintenance and hardware purchases compared to its on-premises systems.

And, by being all-in on AWS, immowelt now has access to a wide array of services that it can deploy easily as the business evolves. “We benefit from AWS expertise and the possibilities it gives us as we move forward,” says Acar. “Our AWS team is like an additional department supporting the business.”

Benefits of AWS

Gains visibility of IT costs and lowers maintenance overheads
Improves availability of web portals, resulting in better customer experiences
Increases frequency of software release cycles to several times a day

AWS Services Used

AWS Lambda – AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon CloudFront – Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

AWS WAF – AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.

“We benefit from AWS expertise and the possibilities it gives us as we move forward. Our AWS team is like an additional department supporting the business.”
Cemal Acar, Group Leader of DevOps and Infrastructure, immowelt
AWS Customer Case Study _ Kepler Provides Effective Monitoring of Elderly Care Home Residents Using AWS _ AWS.txt
Kepler Provides Effective Observation of Elderly Care Home Residents Using AWS

2022

About Kepler Vision Technologies

Kepler Vision Technologies, based in the Netherlands, uses computer vision and deep learning to assist caregivers in looking after the elderly in care homes. The Kepler Night Nurse monitors care home residents and alerts staff to any issues so they can provide high-quality care.

Kepler Vision Technologies created Kepler Night Nurse, a monitoring solution that looks after the well-being and safety of elderly residents in care homes using automated video analysis. The company built this hybrid solution on Amazon Web Services (AWS) and uses edge devices managed by Amazon Elastic Container Service (Amazon ECS). Using AWS, Kepler can easily scale to accommodate demand and increase the speed of connecting new sensor devices from 50 to 500 a week. The company also improved its development speed by reducing the time required for neural network training on Amazon Elastic Compute Cloud (Amazon EC2) from several weeks to just a few hours.

Addressing the Challenges in Caring for the Elderly

The world’s population is aging. By 2030, it’s estimated that 1 in 6 people will be aged 60 and over, and a predicted shortfall of global healthcare workers will reach 18 million. Kepler Vision Technologies’ solutions address the challenges that these trends present in caring for the elderly.

Kepler is a deep learning startup founded in 2018. Its Kepler Night Nurse software uses artificial intelligence (AI) to look after elderly residents in their rooms. When the fully automatic video analysis detects that a resident has fallen or needs attention, it sends a text message to care home staff within 30 seconds.

Since its launch, Kepler has worked with AWS to develop its hybrid solution. AWS Activate, a program that offers startups no-cost tools and resources—including credits—was particularly beneficial to Kepler. “AWS was the best choice to help us develop our product because its services are easy to use and well-documented,” says Dr Harro Stokman, chief executive officer (CEO) and founder of Kepler Vision Technologies. “We also got a lot of support from our AWS team throughout the development process on which services to choose and how to best design our solution to work with them.”

Developing a Hybrid Solution to Address Privacy Using Amazon ECS Anywhere

While developing Kepler Night Nurse, the company faced a challenge: care homes do not have the computing resources required to process and analyze video images, but processing images must occur on premises to protect residents’ privacy. Kepler found a solution by taking a hybrid approach and using edge devices. The Kepler Night Nurse Edge Box, managed using Amazon ECS Anywhere, allows it to easily run containers on customer-managed infrastructure.

Caregivers can now respond quickly to elderly residents without having to constantly monitor multiple video screens. Using Kepler Night Nurse, care homes can provide better quality care. “Now, residents aren’t disturbed unnecessarily by nightly rounds and can sleep through the night,” says Stokman. “If there are issues, caregivers can be there to help within minutes. Our solution also reduces false alarms so caregivers can provide care only when actually needed.”

Kepler is also now able to remotely monitor and configure all edge devices installed at care homes and deploy its solution to new customers in minutes. “Using AWS, we can easily manage the hybrid setup with a single control panel view of all services, which gives us total visibility of product performance at every customer site,” says Stokman.

Delivering a Stable System to Provide High-Quality Care Using Amazon CloudWatch

Kepler monitors its software to ensure its solution remains reliable by using Amazon CloudWatch, which provides on-premises edge device monitoring. CloudWatch automatically manages and restarts edge devices if they fail, meaning Kepler’s IT team only needs to intervene for complex issues. “Building on AWS means we have a highly available system that warns us when something isn’t working,” says Stokman. “This allows us to immediately address issues so no lives are put at risk.”

Improving Neural Network Training for Accurate Video Analysis at Lower Cost

The company also uses Amazon EC2 to train its neural networks and improve video recognition accuracy. The training takes only a few hours, compared to several weeks using an on-premises approach. “We have on-demand scalability for GPU workloads to train our neural network models when we need to,” says Stokman. “It’s fast and extremely cost effective, and we only pay for what we use. We’ve saved 70 percent in IT costs, giving us the cashflow we need to grow fast.” The training data is anonymized, encrypted, and stored in Amazon S3; sensitive data is only accessible to approved Kepler staff.

Supporting Care Homes Looking After Elderly Residents Using AWS

Kepler plans to use AWS services to continue to develop Kepler Night Nurse. It is working to improve the efficiency of its neural network training and to add new functionality, such as faster video processing and even more accurate video recognition.

Using AWS, Kepler can quickly and efficiently monitor residents in need with low-latency communication and easily scale to accommodate new residents. Today, Kepler Vision is growing very rapidly in Europe and is on track to achieve its mission: to look after the well-being of 1 million patients by 2030.

Kepler Night Nurse is helping to address the challenges of caring for an aging population. “We’ve improved the quality of care in elder care homes,” says Stokman. “Built on AWS, our solution helps staff provide attentive care while affording residents the privacy and dignity they deserve.”

Benefits of AWS

Improves residents’ safety and well-being, notifying caregivers within 30 seconds of detected need
Monitors, manages, and restarts edge devices automatically with Amazon CloudWatch
Substantially reduces time for neural-network algorithm training
Scales to meet immediate increases in demand

AWS Services Used

Amazon ECS – Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

Amazon ECS Anywhere – Amazon Elastic Container Service (ECS) Anywhere is a feature of Amazon ECS that enables you to easily run and manage container workloads on customer-managed infrastructure.

Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners.

“We’ve improved the quality of care in elder care homes. Built on AWS, our solution helps staff provide attentive care while affording residents the privacy and dignity they deserve.”
Dr Harro Stokman, Chief Executive Officer and Founder, Kepler Vision Technologies
AWS releases smart meter data analytics _ AWS for Industries.txt
AWS for Industries

AWS releases smart meter data analytics

by Sascha Janssen and Juan Yu | on 03 NOV 2020 | in Amazon Athena, Amazon Redshift, Amazon SageMaker, Industries, Power & Utilities, Sustainability, Technical How-to

Introduction

Utilities have deployed meter data management systems (MDMS) since the late 1990s, and MDMS deployments have accelerated alongside the deployment of smart metering and advanced metering infrastructure (AMI) at utilities worldwide. MDMSs collect energy consumption data from smart meter devices and send it to utility customer information systems (CIS) for billing and further processing. The most common MDMS use case for utilities is the performance of basic data validation, verification, and editing (VEE) functions, and the creation of billing determinants from vast amounts of meter data.

Nonetheless, petabytes of valuable energy consumption data remain trapped in legacy utility MDMSs. Utilities confronting the need for transition driven by decarbonization and decentralization can benefit from unlocking the power of metering data and enriching it with other information sources like geographic information systems (GIS), CIS, and weather data. This provides compelling insights for various use cases such as forecasting energy usage, detecting system anomalies, and analyzing momentary service outages. Collectively, these use cases present utilities with opportunities to improve customer satisfaction while increasing operational efficiency.

An AWS Quick Start, which deploys a Smart Meter Data Analytics (MDA) platform on the AWS Cloud, helps utilities tap the unrealized value of energy consumption data while removing undifferentiated heavy lifting. This allows utilities to provide new services such as:

Load prediction on the household, circuit, and distribution system level
Deeper customer engagement through proactive notifications of high consumption or power outage status
Predictive maintenance on distribution assets, circuit quality analytics, and much more

This blog reviews the architecture of the AWS MDA Quick Start and its design, aimed at providing utilities with a cost-effective data platform to work with petabytes of energy consumption data.

What does the MDA Quick Start include?

AWS MDA uses a data lake and machine learning capabilities to store the incoming meter reads, analyze them, and provide valuable insights. The Quick Start comes with three built-in algorithms to:

Predict future energy consumption based on historical reads
Detect unusual energy usage
Provide details on meter outages

The MDA platform is capable of processing up to 250 TB of meter reads each day in batches. It also handles late-arriving data and prepares the data for different consumption endpoints like a data warehouse (Amazon Redshift), a machine learning pipeline (Amazon SageMaker), or APIs to make the data consumable for third-party applications.

MDA architecture

The core of the MDA is built on serverless components. Serverless ensures that the utility doesn’t have to manage or provision infrastructure, and scaling is done automatically based on the load or the amount of delivered meter reads. This approach minimizes utility cost. The following AWS services are included:

A data lake formed by Amazon S3 buckets to store raw, clean, and partitioned business data.
An extract, transform, load (ETL) process built with AWS Glue and AWS Glue workflows. Since AWS Glue only runs on demand, provisioning infrastructure or managing nodes is not necessary.
Since AWS Glue runs only on demand, no infrastructure provisioning or node management is necessary.
An Amazon Redshift cluster that serves as a data warehouse for the business data.
AWS Step Functions to orchestrate the machine learning pipelines.
Amazon SageMaker to support model training and inferencing.
A Jupyter notebook with sample code to perform data science tasks and data visualization.
Amazon API Gateway to expose the data, energy forecasts, outages, and anomalies via HTTP.

Data ingestion

Utilities ingest meter data into the MDA from an MDMS. An MDMS performs basic, but important, validations on the data before the data is shipped to other systems. One advantage of this is that all data delivered to the MDA from the MDMS should be clean and can be processed directly. Furthermore, the MDMS delivers the meter reads in batches, generally once a day, so the MDA must process each batch when it arrives and finish before the next batch arrives. Given their legacy architectures, the most commonly used interface for transferring data from an MDMS is plain files over (S)FTP. Utilities can connect their MDMS to the data platform via AWS Storage Gateway for files, AWS DataSync, or AWS Transfer for SFTP, and store the meter read information directly in an S3 bucket called the “landing zone.” From there, the ETL pipeline picks up the new meter reads and transforms them into a business-valuable format.

Data lake

The heart of the MDA platform is the data lake. It is composed of three primary S3 buckets and an ETL pipeline that transforms the incoming data in batches and stores the results in different stages. The batch run can be either time- or event-based, depending on the delivery mechanism of the MDMS. The data lake handles late-arriving data and takes care of some basic aggregations (and re-aggregations). The workflow actively pushes the curated meter reads from the business zone to Amazon Redshift.

The core ETL pipeline and its bucket layout

The landing zone contains the raw data, which is a simple copy of the MDMS source data. On a periodic or event basis, the first AWS Glue job takes the raw data, cleans it, and transforms it to an internal schema before it is stored in the “clean zone” bucket. The clean zone contains the original data converted into a standardized internal data schema. On top of that, dates are harmonized and unused fields are omitted. This optimizes the meter data for all subsequent steps. Another advantage of the standardized data schema is that different input formats can be adopted easily: only the first step of the pipeline needs to be adjusted to map a different input format to the internal schema, which allows all subsequent processes to work transparently with no further adjustment.

A second AWS Glue job moves the data from the clean zone to the “business zone.” The business zone is the single point of truth for further aggregations and all downstream systems. Data is transformed into the correct format and granularity for users. Data is stored in Parquet and partitioned by reading date and reading type. The column-based file format (Parquet) and the data partitioning enable efficient queries; it is therefore best practice to choose partition keys that correspond to the query patterns in use. To prevent data from being transformed twice, Job Bookmarks are used on each job. Job Bookmarks are a feature for processing data incrementally that lets AWS Glue keep track of data that has already been processed. For that, the ETL job persists state information from its previous run so it can pick up where it finished. This approach follows the modern data platform pattern, and more detailed descriptions can be found in this presentation.
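To make the bookmarked, partitioned write concrete, here is a minimal sketch of such a Glue job; the table name (“cleanzone”) and the output path are illustrative assumptions, not the Quick Start’s actual resource names:

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args['JOB_NAME'], args)  # required for job bookmarks to take effect

# The bookmark key (transformation_ctx) lets AWS Glue skip already-processed data.
clean_reads = glue_context.create_dynamic_frame.from_catalog(
    database='meter-data',
    table_name='cleanzone',          # assumed table name
    transformation_ctx='clean_reads',
)

# Write Parquet partitioned by reading date and type for efficient partition pruning.
glue_context.write_dynamic_frame.from_options(
    frame=clean_reads,
    connection_type='s3',
    connection_options={
        'path': 's3://mda-business-zone/meter-reads/',   # assumed bucket
        'partitionKeys': ['reading_date', 'reading_type'],
    },
    format='parquet',
    transformation_ctx='business_sink',
)

job.commit()  # persists bookmark state for the next run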
Handling late data

In the meter world, late data is a common situation. Late data means that a certain meter didn’t deliver its consumption at the expected point in time due to issues with the network connection or the meter itself. Once the meter is connected and working again, these reads are delivered in addition to the current reads. An example could be the following:

Day 1 – both meters deliver their consumption reads:
{ meter_id: meter_1, reading_date: 2020/08/01, reading_value: 0.53, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/01, reading_value: 0.41, reading_type: INT }

Day 2 – only meter_1 sends its consumption reads:
{ meter_id: meter_1, reading_date: 2020/08/02, reading_value: 0.32, reading_type: INT }

Day 3 – reads from both meter_1 and meter_2 are sent; the second meter also sends its outstanding read from the previous day:
{ meter_id: meter_1, reading_date: 2020/08/03, reading_value: 0.49, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/03, reading_value: 0.48, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/02, reading_value: 0.56, reading_type: INT }

The data lake needs to handle the additional delivery on the third day. The ETL pipeline solves this automatically by sorting the additional read into the correct partition, making sure that each upstream system can find the late data and act on it. To make all following ETL steps aware of the late-arriving data (that is, to re-aggregate monthly or daily datasets), a distinct list of all reading dates arriving in the current batch is stored in a temporary file, which is only valid for the current pipeline run:

distinct_dates = mapped_meter_readings \
    .select('reading_date') \
    .distinct() \
    .collect()

distinct_dates_str_list = ','.join(value['reading_date'] for value in distinct_dates)

This list can be consumed by any downstream job that is interested in the arrival of late data. The list defines which reading dates were delivered during the last batch. In this particular example, the list with the distinct values for each day would look like this:

Day 1: {dates=[2020/08/01], …}
Day 2: {dates=[2020/08/02], …}
Day 3: {dates=[2020/08/03,2020/08/02], …} // day 3 has the late read from Aug 2nd

Based on these results, an aggregation job that aggregates meter reads on a daily basis can derive which dates need to be re-aggregated. For days one and two, only the aggregation for the first and second day is expected. But on day three, the job needs to aggregate the data for the third day and re-aggregate the consumption reads for the second. Because re-aggregation is handled like a normal aggregation, the whole day is recalculated and previous results are overwritten, so no UPSERT is needed.
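For illustration, a daily aggregation job that consumes this list could look like the following sketch; the paths, the single-line file layout, and the column names are assumptions rather than the Quick Start’s actual code:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read the comma-separated list of reading dates delivered in the current batch.
delivered_dates = (
    spark.read.text('s3://mda-temp/current-batch-dates.txt')  # assumed location
    .first()[0]
    .split(',')
)

reads = spark.read.parquet('s3://mda-business-zone/meter-reads/')

# Recompute the daily totals for every delivered date; recalculating the whole
# day replaces any previous result, so no UPSERT logic is required.
daily = (
    reads.filter(F.col('reading_date').isin(delivered_dates))
    .groupBy('meter_id', 'reading_date')
    .agg(F.sum('reading_value').alias('daily_consumption'))
)

# Overwrite only the partitions touched by this batch.
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
daily.write.mode('overwrite').partitionBy('reading_date') \
    .parquet('s3://mda-business-zone/daily-aggregates/')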
Adopting a different input schema

Different MDM systems deliver different file formats. Data input to the MDA is adaptable with minimal effort using a standardized internal data schema. The first step in the ETL pipeline transfers the input data from the landing zone to this internal schema. The schema is designed to hold all important information, and it can be used as input for different business zone representations.

A closer look at the corresponding section of the AWS Glue jobs shows that it is fairly easy to adopt a different data schema by just changing the input mapping. The ApplyMapping class is used to apply a mapping to the loaded DynamicFrame:

datasource = glueContext.create_dynamic_frame.from_catalog(
    database='meter-data',
    table_name='landingzone',
    transformation_ctx='datasource'
)

mapped_reads = ApplyMapping.apply(frame=datasource, mappings=[
    ('col0', 'long', 'meter_id', 'string'),
    ('col1', 'string', 'obis_code', 'string'),
    ('col2', 'long', 'reading_time', 'string'),
    ('col3', 'long', 'reading_value', 'double'),
    ('col4', 'string', 'reading_type', 'string')
], transformation_ctx='mapped_reads')

The left side of each mapping tuple shows the input format with five columns (col0–col4) and their respective data types. The right side shows the mapping to the internal data schema. The incoming data format is discovered automatically by an AWS Glue Crawler. The Crawler checks the input file, detects its format, and writes the metadata to an AWS Glue Data Catalog. The DynamicFrame is then created from the information in the Data Catalog and used by the AWS Glue job.

Triggering the machine learning (ML) pipeline

After the ETL has finished, the machine learning pipeline is triggered. Each ETL job publishes its state to Amazon CloudWatch Events, which publishes each state change of the AWS Glue ETL job to an Amazon SNS topic. One subscriber of this topic is an AWS Lambda function. As soon as the business data has been written to the Amazon S3 bucket, this Lambda function checks whether the ML pipeline is already running or whether the state machine that orchestrates the preparation and model training needs to be triggered.

Machine learning architecture

The machine learning pipelines are designed to meet both online and offline prediction needs. Online prediction allows users to run predictions against the latest data on a single meter upon request at any time of the day. Batch prediction allows users to generate predictions for many meters on a recurring schedule, such as weekly or monthly. Batch predictions are stored in the data lake and can be published via an API or used directly in any BI tool to feed dashboards and gain rapid insights.

Meter readings are time series data, and there are many algorithms that can be used for time series forecasting. Since some algorithms are designed for a single set of time series data, the model would need to be trained individually for each meter before it can generate predictions. This approach does not scale well when used for even thousands of meters. The DeepAR algorithm can train a single model jointly over many similar time series entries, and it outperforms other popular forecasting algorithms. It can also be used to generate forecasts for new meters the model hasn’t been trained on. DeepAR allows up to 400 values for the prediction_length, depending on the needed prediction granularity: DeepAR can generate hourly forecasts for up to two weeks, or daily forecasts for up to a year.

There are also many models that can be used for time series anomaly detection. The MDA Quick Start uses the Prophet library because it is easy to use and provides good results right out of the box. Prophet combines trend, seasonality, and holiday effects, which suits meter consumption data well. The Quick Start uses hourly granularity for meter consumption forecasting and daily granularity for anomaly detection. The data preparation step can be modified to support different granularities.
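Returning to the trigger step described above: a minimal sketch of the Lambda function’s logic might look like the following, where the state machine ARN comes from an assumed environment variable rather than the Quick Start’s actual configuration:

import os
import boto3

sfn = boto3.client('stepfunctions')
STATE_MACHINE_ARN = os.environ['ML_PIPELINE_STATE_MACHINE_ARN']  # assumed variable

def handler(event, context):
    # Start the ML pipeline only if no execution is currently running.
    running = sfn.list_executions(
        stateMachineArn=STATE_MACHINE_ARN,
        statusFilter='RUNNING',
        maxResults=1,
    )['executions']
    if running:
        return {'started': False, 'reason': 'pipeline already running'}
    execution = sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN)
    return {'started': True, 'executionArn': execution['executionArn']}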
Preparing and training the model

The input time series data for the model training should contain timestamps and the corresponding meter consumption collected since the last measurement. The data in the business zone, which acts as the single point of truth, is prepared accordingly. DeepAR also supports dynamic features, such as weather data, that can be integrated into the ML pipeline as part of the training data to improve model accuracy. The weather data needs to be at the same frequency as the meter data. If the model is trained with weather data, the weather data also needs to be provided for both online inference and batch prediction. By default, weather data is not used, but utilities can enable it as described in the deployment documentation.

The training pipeline can be run with a different set of hyperparameters, with or without the weather data, or even with another set of meter data, until the results of the model are acceptable. After the model has been trained, the training pipeline deploys it to a SageMaker endpoint, which is immediately ready for online inferences. The endpoint can be scaled by choosing a larger instance type to serve more concurrent online inference requests. To keep the model up to date, the training pipeline can be re-run daily to include new meter consumption data and learn changes in customer consumption patterns.

Machine learning batch pipeline

For energy consumption forecasting and anomaly detection, the latency requirements are typically on the order of hours or days, so predictions can be generated periodically. By leveraging a serverless architecture incorporating AWS Lambda functions and Amazon SageMaker batch transform jobs, batch jobs can be parallelized to increase prediction speed. Each batch job includes an anomaly detection step, a forecast data preparation step, a forecasting step, and a step to store the results to Amazon S3. Step Functions are used to orchestrate those steps, with the Map state supporting custom batch sizes and meter ranges. This enables the MDA to scale and support millions of meters.

The input of the batch pipeline includes the date range of meter data and the ML model. By default, it uses the latest model trained by the training pipeline, but a custom DeepAR model can also be specified. In general, the training jobs have to be run many times with different parameters and features before the model satisfies expectations. Once the appropriate parameters and features are selected, the model training still needs to be re-run on a regular basis with the latest data to learn new patterns. In the MDA, the training and batch pipelines are managed as separate state machines, which allows all pipelines to run as one workflow or each pipeline to run individually on a different schedule, as requirements dictate.

How to get started and go build!

To get started, the Quick Start can be deployed directly. Additional documentation explains step by step how to set up the MDA platform and use sample data to experiment with the components. This blog describes release one of the AWS smart meter data analytics (MDA) platform Quick Start. AWS plans to continue extending the MDA based on customer feedback to unlock more possibilities and deliver value from smart meter data.
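As a final illustration, an online inference request against the deployed DeepAR endpoint described earlier could look like the sketch below; the endpoint name and the sample values are assumptions:

import json
import boto3

runtime = boto3.client('sagemaker-runtime')

# DeepAR expects a JSON payload with the recent history of the time series.
payload = {
    'instances': [{
        'start': '2020-08-01 00:00:00',
        'target': [0.53, 0.41, 0.32, 0.49],   # recent hourly reads for one meter
    }],
    'configuration': {
        'num_samples': 50,
        'output_types': ['quantiles'],
        'quantiles': ['0.1', '0.5', '0.9'],
    },
}

response = runtime.invoke_endpoint(
    EndpointName='meter-forecast-endpoint',   # assumed endpoint name
    ContentType='application/json',
    Body=json.dumps(payload),
)
forecast = json.loads(response['Body'].read())
print(forecast['predictions'][0]['quantiles']['0.5'])  # median forecast path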
TAGS: AWS MDA, AWS Meter Data Analytics, meter analytics, Meter Data Management Systems, Smart Meter Data, utility MDMS

Sascha Janssen
Sascha Janssen is a Senior Solutions Architect at AWS, helping power and utility customers become digital utilities. He enjoys connecting “things,” building serverless solutions, and using data to deliver deeper insights.

Juan Yu
Juan Yu is a Data Warehouse Specialist Solutions Architect at Amazon Web Services, where she helps customers adopt cloud data warehouses and solve analytic challenges at scale. Prior to AWS, she had fun building and enhancing an MPP query engine to improve the customer experience on big data workloads.
Bank of Montreal Case Study _ AWS.txt
AWS also supports BMO’s Digital First strategy, using increased speed, scale, and the elimination of complexity to ensure customer experiences evolve continuously. Summing up the bank’s goals, Carl Gomes states, “BMO is working continuously to meet our initiative to modernize and simplify platforms, and we are in the process of migrating all components and capabilities to modern, cloud-native technologies. The bank is also implementing DevOps methodologies to automate the integration and development needed to respond agilely to fast-moving global markets. With the help of AWS, our focus now is training our staff on the latest cloud technologies so that we can build an elastic, scalable and modern risk platform that will meet the bank’s needs and ambitions for years to come.”

BMO is a leading North American bank with a strong global reputation for disciplined risk management. After the 2007–2009 financial crisis, regulatory demands for disclosing market risk increased, requiring BMO to scale its risk platforms.

Supported by the BMO Technology and Operations team, BMO’s three primary operating groups, Personal and Commercial Banking, BMO Capital Markets, and BMO Wealth Management, serve customers in Canada and the United States, with BMO Capital Markets operating in select global markets internationally.

AWS Increases Flexibility and Drives Innovation

AWS has worked closely with BMO through the process. Teams across BMO’s business lines say the experience supports the bank’s ambition to digitize, increase flexibility, and drive product innovation for customers. “The real challenge is not the new services themselves. It’s adapting legacy processes and skillsets to get the full potential from cloud adoption,” notes Managing Director, Market Risk Technology, Harsh Katoch. “This requires a new and more simplified operating model that supports DevSecOps and product ownership, consistent Cloud governance, embracing Cloud Economics, and having the right skills across all our teams to make the most of AWS services.”

This new platform gives BMO the flexibility to meet future regulatory challenges. “If in the future we have new regulatory requirements which need another 200 million or more calculations, we still need to complete them in the same fixed window,” notes Head of Market Risk and Chief Risk Officer for BMO Capital Markets, Jason Rachlin. “This will only happen if our platform is elastic and scalable.” The Amazon Web Services (AWS) solution has the flexibility and elasticity to scale when needed. Market Risk Oversight took advantage of the increased computational capacity to add over 500 more stress scenarios, improving the accuracy of the stress test results. The new platform can also run Value at Risk (VaR) and daily stress test batches in parallel, so detailed and aggregated risk numbers are delivered well before 7:30 am ET, saving the risk team five hours each day. BMO’s North American trading desks can then manage risk in a timely and effective manner.

“We’ve now reached the point where all of our lines of business are using a broad array of cloud services and driving increasingly detailed cloud adoption roadmaps to meet those objectives,” says Chief Information Officer for Market Risk Technology and Corporate Treasury Technology at BMO, Carl Gomes. “For example, in Market Risk Technology, we are spinning off 8,000 to 10,000 on-demand and spot elastic compute cloud (EC2) instances nightly on AWS.
These machines are also joining our Market Risk compute grid to perform various risk calculations.”

More Data, Faster

The BMO Market Risk Technology team builds and maintains the bank’s risk platform. First developed in 2015, BMO’s Market-Risk Next-Generation (MRNG) platform calculates market risk for all capital market positions in various asset classes such as Fixed Income, Commodity, FX, Interest Rate, Equity, and Structured products.

BMO had to run far more complex risk models to predict the bank’s ability to withstand hypothetical future adverse events. The BMO Market Risk Technology team also faced time challenges—all calculations and aggregations had to run after the close of business (10 pm ET) and be ready for the opening of markets (7:30 am ET).

Navigating a Changing Regulatory Landscape

With increased demand for disclosing market risk post-financial crisis, banks needed to perform regular stress tests on a variety of data, including revenues, expenses, losses, pre-tax net income, and capital ratios, plus distinguish between the trading book (assets intended for active trading) and the banking book (assets expected to be held to maturity, such as customer loans). Banks also had to calculate the risk of market illiquidity and assess the use of expected shortfall rather than value at risk when measuring risk under stress. The introduction of the Basel Reforms (2018) and implementation of the Fundamental Review of the Trading Book (2019) significantly increased the volume of risk calculations needed.

Delivering Business Objectives with Cloud Services

The BMO Market Risk Technology team uses Amazon Elastic Compute Cloud (Amazon EC2), grid computing, and Amazon CloudWatch to continue innovating and optimizing computational resources. A scaled-up, elastic cloud platform helps BMO run multiple risk metrics and regulatory stress calculations in parallel and scale computational capacity for future regulatory requirements.

AWS Services Used

Amazon EC2: Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable compute capacity for virtually any workload.

Amazon CloudWatch: a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.

Grid Computing for Financial Services: by building and running grids with AWS, companies are able to execute a larger number of parallel tasks, which leads to increased speed of analysis and reduced time to results.
BMO Market Risk Uses AWS to Optimize Computational Capacity

2023

Leading North American bank BMO used AWS to build a more elastic platform for calculating risk metrics, scaling the bank’s computational capacity to comply with future regulatory requirements. Benefits include:

Added 500+ additional stress test scenarios
Able to run ~10,000 on-demand and spot EC2 instances nightly
Increased computing capacity to one billion+ nightly calculations
Saved five hours daily processing detailed and aggregated risk numbers

About BMO

BMO is a leading North American bank driven by a single purpose: to Boldly Grow the Good in business and life. Our Purpose informs our strategy, drives our ambition, and reinforces our commitments to progress: for a thriving economy, a sustainable future and an inclusive society.

Solving with Scalability

To meet the regulatory market risk demands, the team needed a highly scalable compute platform to calculate complex models in similar or less time and allow for simultaneous calculation of multiple sets of test results. This new solution delivers on both fronts. It performs more than one billion calculations each night and maintains terabytes of data with significant daily growth.

BMO’s Market Risk Technology team had already spent many years using AWS and had the foundational skills and capabilities to meet the needs of the bank’s business partners. Now, with Amazon EC2, grid computing, and CloudWatch as the foundation for BMO’s cloud platform, the team is better positioned to support business needs across the enterprise.
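To give a flavor of how such elastic grid capacity can be requested programmatically, here is a minimal sketch using the EC2 API; the AMI, instance type, counts, and tags are placeholders, not BMO’s actual configuration:

import boto3

ec2 = boto3.client('ec2')

# Request Spot capacity for a nightly batch of risk-calculation workers.
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',          # placeholder grid-worker image
    InstanceType='c5.4xlarge',                # placeholder instance type
    MinCount=100,
    MaxCount=100,
    InstanceMarketOptions={
        'MarketType': 'spot',
        'SpotOptions': {'SpotInstanceType': 'one-time'},
    },
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'workload', 'Value': 'market-risk-nightly-batch'}],
    }],
)
print(f"Launched {len(response['Instances'])} Spot workers")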
Bazaarvoice Case Study _ AWS.txt
Bazaarvoice Reduces Machine Learning Inference Costs by 82% Using Amazon SageMaker Serverless Inference

2022

Bazaarvoice, a leading provider of product reviews and user-generated content solutions, helps brands and retailers enrich their product pages with product ratings, reviews, and customer photos and videos. It uses machine learning (ML) to moderate and augment content quickly and to expedite the delivery of content to clients’ websites.

Bazaarvoice desired an improved ML architecture to accelerate model deployment, to reduce its costs and its engineers’ workload, and to accelerate innovation for its clients. Having some of its infrastructure already on Amazon Web Services (AWS), Bazaarvoice migrated its ML workloads to Amazon SageMaker, which data scientists and developers use to prepare, build, train, and deploy high-quality ML models with fully managed infrastructure, tools, and workflows. In doing so, the company accelerated model deployment, improved scalability, and reduced costs by 82 percent. And it’s reinvesting those cost savings to improve its service further. Benefits include:

82% reduction in ML inference costs
From 30 to 5 minutes deployment time for new models
Instantaneously sends data to existing models
Eliminates error-prone manual work
Accelerates innovation

With headquarters in Austin, Texas, and offices across the globe, Bazaarvoice uses ML to automate content moderation for enterprise retailers and brands. The company collects, syndicates, and moderates reviews, social content, photos, and videos, which customers can use to enhance their product pages and drive sales. Bazaarvoice also uses ML to augment this content with semantic information to help clients categorize the content and glean insights.

Solution | Achieving Simpler, More Scalable ML Deployments

Using Serverless Inference made it simple for Bazaarvoice to deploy a model and move it to a dedicated endpoint if the model experienced high traffic. As a result, the company has improved its throughput while reducing costs. It saved 82 percent on its ML inference costs by migrating all models across 12,000 clients to Serverless Inference. Bazaarvoice analyzes and augments millions of pieces of content per month, which results in tens of millions of monthly calls to SageMaker, or about 30 inference calls per second. But most of its ML models get called by clients only once every few minutes, so it doesn’t make sense for Bazaarvoice to allocate dedicated resources. “We needed the flexibility to change between dedicated hosts for large, expensive models and low-cost options for models used less frequently,” says Lou Kratz, principal research engineer at Bazaarvoice. Using Serverless Inference, the company can scale up or down seamlessly to match demand, increasing efficiency and saving costs. “The big win for us is that we don’t have to manage servers or pay for compute time that we’re not using,” says Kratz. “And we can keep up with all the content coming in so that the client sees it moderated and augmented in a timely fashion.”
As Bazaarvoice delivers content more quickly, its customers can display that content much sooner for new end users. Using SageMaker, it takes only 5 minutes. “Sending new client data to an existing model used to take 15–20 minutes,” says Kratz. “Now, it happens right away.” And deploying a brand-new model takes only 5 minutes instead of 20–30 minutes. On AWS, Bazaarvoice has seen an increase in model delivery throughput. The company can build a model, ship it, and run it on Serverless Inference to evaluate its performance before sending any content to it, reducing the risks of using live content. And there’s no need to redeploy when it’s time to send content to the model because the model is already running on SageMaker. Instead, it can deploy new models as soon as validation is complete. “Using Amazon SageMaker has vastly improved our ability to experiment and get new models to production quickly and inexpensively,” says Dave Anderson, technical fellow at Bazaarvoice. “We have the flexibility to drive our value proposition forward, and that’s exciting.” The company has helped its data scientists move faster and has added more value for customers.

Outcome | Continuing to Improve the Customer Experience

Bazaarvoice has unlocked significant cost savings while improving the ML development experience for its team and enhancing what it offers to its customers. The company plans to bring even more benefits to customers by using the SageMaker Serverless Inference API to power quick access. “ML is becoming the norm in this industry—you can’t compete without it,” says Kratz. “By using SageMaker Serverless Inference, we can do ML efficiently at scale, quickly getting out a lot of models at a reasonable cost and with low operational overhead.”

Opportunity | Accelerating ML Innovation on AWS

Bazaarvoice considered building its own serverless hosting solution, but such a project would have been expensive and labor intensive. Instead, it adopted Amazon SageMaker Serverless Inference—a purpose-built inference option that makes it simple for businesses to deploy and scale ML models—to reduce the operational burden for its teams. “This project was the start of the unification of our model deployment,” says Edgar Trujillo, senior ML engineer at Bazaarvoice. The company began sending traffic to its new system in December 2021, and by February 2022, it was handling all production traffic.

When Bazaarvoice feeds content into one of its ML models, the model outputs a confidence value, and that value is used to decide on the content. On the company’s previous architecture, Bazaarvoice had to ship a new model anytime it wanted to change the decision logic. Bazaarvoice began using Amazon Elastic Container Service (Amazon ECS)—a fully managed container orchestration service that makes it easy for businesses to deploy, manage, and scale containerized applications—to handle decision logic outside the ML model. “Separating the decision logic was hugely beneficial because the content operations team can now get the results and make decisions virtually instantaneously,” says Kratz.
“They don’t have to ship a new model and wait for it to deploy and update.”

Bazaarvoice wanted to improve its scalability, speed, and efficiency, but it was facing challenges with its older and slower ML solution. For example, every time the company needed to onboard a new client or train new models, it had to manually edit multiple model files, upload them, and wait for the system to register the change. The process took about 20 minutes and was prone to errors. Further, the architecture hadn’t been designed to support the company’s growing scale efficiently: each machine that supported its nearly 1,600 models needed 1 TB of RAM. “The cost was quite high, and because the architecture was built as a monolith, it couldn’t automatically scale, which was one of our key goals,” says Kratz. Agility was also crucial to supporting Bazaarvoice’s growing number of clients and to experimenting on ML models. “We wanted to be able to increase the number of models in production by 10 times without running into memory limits,” says Kratz.

About Bazaarvoice

With headquarters in Austin, Texas, and offices around the world, Bazaarvoice provides tools for brands and retailers to create smart shopper experiences across the entire customer journey through a global retail, social, and search syndication network.

AWS Services Used

Amazon SageMaker: built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.

Amazon ECS: a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

Amazon SageMaker Serverless Inference: a purpose-built inference option that makes it easy for you to deploy and scale ML models.
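To illustrate the deployment pattern behind Serverless Inference, here is a minimal sketch; the model name, endpoint names, and sizing are illustrative assumptions, not Bazaarvoice’s actual configuration:

import boto3

sm = boto3.client('sagemaker')

# A serverless endpoint config: no instances to manage, pay only per inference.
sm.create_endpoint_config(
    EndpointConfigName='content-moderation-serverless',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'content-moderation-model',  # an already-registered model
        'ServerlessConfig': {
            'MemorySizeInMB': 2048,
            'MaxConcurrency': 20,
        },
    }],
)

sm.create_endpoint(
    EndpointName='content-moderation',
    EndpointConfigName='content-moderation-serverless',
)

Moving a hot model to a dedicated endpoint is then a matter of creating a new endpoint config with instance-based variants and updating the endpoint, which matches the flexibility the company describes.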
Better Mortgage using Amazon Elastic Kubernetes _ Better Mortgage Video _ AWS.txt
Better Mortgage Builds Innovative Mortgage Solutions for its Customers on AWS

2023

Vishal Garg, founder and chief executive officer (CEO), discusses how Better Mortgage (NMLS #330511) uses Amazon Web Services (AWS) to grow its business and launch innovative solutions such as Equity Unlocker and One Day Mortgage. Equity Unlocker has revolutionized the concept of what qualifies for a down payment to purchase a home by enabling tech employees to pledge vested equity toward a down payment. Historically, in the traditional homebuying process, buyers would wait for weeks to receive a decision from their lenders. One Day Mortgage delivers a Commitment Letter in 24 hours and was built entirely on AWS. Better chose AWS because of its Amazon Elastic Kubernetes Service (Amazon EKS) and machine learning and artificial intelligence capabilities. Watch the video to learn more about Better’s journey of innovation.

AWS Services Used

Amazon EKS: Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
BIPO Improves Customer Experience on its HR Management System Using Machine Learning on AWS _ Case Study _ AWS.txt
BIPO Improves Customer Experience on its HR Management System Using Machine Learning on AWS

2023

BIPO is a Singapore-based software company that provides cloud and mobile-based human resource management solutions for 3,300 customers worldwide, including those in the retail, food and beverage, and logistics industries. Its Human Resource Management System (HRMS) platform manages HR-related processes for more than 400,000 employees. Benefits of the work described here include:

Reduced the cost of implementing facial recognition devices by 80 percent
Reduced claims submission times by 50 percent for employees
Integrated ML-powered OCR features with its HR Management System
Integrated ML-powered facial recognition features for more efficient access control and security capabilities
More efficient HR workflow

Solution | Expanding the HRMS Platform’s Capabilities Using Machine Learning

In 2020, BIPO integrated Amazon Textract with its HRMS mobile app and cut claims submission times by up to 50 percent for each receipt. Amazon Textract automatically extracts and uploads printed text from physical receipts using the cameras on employees’ mobile devices. The new feature also minimized erroneous claims entries by up to 70 percent. BIPO has introduced this feature on its own internal HRMS, saving its employees up to 100 hours a month on claims submissions.

In 2020, BIPO saw a growing trend among its customers for a facial recognition-powered employee attendance-taking tool. It explored integrating existing facial recognition-based clocking systems on the market with the attendance-taking function on its HRMS. However, the costs were too high for its customers: BIPO would have needed to help its customers purchase devices and on-premises servers, which cost at least US$50,000.

In late 2021, BIPO used Amazon Rekognition to implement a facial recognition-based attendance-taking feature at 20 percent of its initially estimated costs. Using Amazon Rekognition, BIPO eliminated the need for pricey, proprietary hardware and dedicated servers. Companies with the HRMS can use existing devices, such as employees’ own mobile phones or company tablets, to take attendance, which reduces time spent on manual clock-ins by 80 percent. The facial recognition tool also incorporates liveness detection, which prevents fraudulent attendance-taking through pre-recorded videos.

BIPO also plans to introduce these facial recognition-based access controls at meetings, conferences, exhibitions, and other similarly sized events. Looking ahead, BIPO will roll out the image-to-text claims processing and facial recognition-based attendance-taking feature to more customers.
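To illustrate how a face-based clock-in of this kind can work, here is a minimal sketch, assuming employee faces were previously indexed into an Amazon Rekognition collection; the collection name and threshold are illustrative, not BIPO’s actual implementation:

import boto3

rekognition = boto3.client('rekognition')

def clock_in(image_bytes):
    # Match a camera frame against a collection built earlier with index_faces().
    result = rekognition.search_faces_by_image(
        CollectionId='employee-faces',    # assumed collection name
        Image={'Bytes': image_bytes},
        FaceMatchThreshold=95,
        MaxFaces=1,
    )
    matches = result['FaceMatches']
    if not matches:
        return None  # no confident match; reject the clock-in attempt
    # ExternalImageId can carry the employee ID assigned at indexing time.
    return matches[0]['Face']['ExternalImageId']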
The features have been welcomed by customers, with an adoption rate of 20 percent since their introduction.

Opportunity | Reducing Cost and Inefficiencies on the HRMS Platform

BIPO’s HRMS platform allows employees to perform everyday HR tasks, such as payroll, leave applications, claims submissions, and attendance taking, via a web- or mobile-based portal. To enhance the user experience, BIPO sought to improve its claims submission process, which was one of the most time-consuming tasks. Employees typically have multiple claims to file each month, and each claim can take an average of up to 20 minutes to upload. When combined, this resulted in up to 50 hours of lost productivity each month. Furthermore, finance departments spent up to 100 hours each month rectifying any errors that resulted from the highly manual process.

Aside from attendance-taking, BIPO is also looking to utilize its facial recognition feature for other use cases, such as granting access control for authorized personnel. Most access control systems on the market use fingerprints or identification cards as their primary input methods. However, such methods are unreliable because fingerprints change over time and identification cards are easily misplaced by users. A facial recognition-based access control eliminates these problems while allowing for more seamless and secure entries.

Outcome | Seamless Integration of AI/ML Features

By using AWS AI/ML services, BIPO has successfully reduced cost- and productivity-related inefficiencies on the HRMS platform. In 2020, BIPO expanded the capabilities of its HRMS platform using artificial intelligence and machine learning (AI/ML). The company integrated Amazon Textract with the platform, which reduced claims submission times by 50 percent. In 2021, BIPO introduced a new attendance-taking function based on facial recognition using Amazon Rekognition.

“We must constantly introduce cutting-edge features and solutions to serve our customers better. With AWS, we have halved the time it takes to innovate, build, and implement these features from four months to two months,” said Derick Teo, director of enterprise go-digital solutions at BIPO.

“Our employees generate a large number of claims monthly. The OCR technology on BIPO’s HRMS not only allows them to upload claims on an accurate and timely basis, but the time savings can also be redirected to other higher-value work within the company, and that has been truly invaluable to us,” said Derick Teo.

Learn how BIPO introduced new features for its HR Management System within weeks using AWS machine learning services. To learn more, visit aws.amazon.com/machine-learning/.

About BIPO

Established in 2010 and headquartered in Singapore, BIPO is a global payroll and people solutions provider. Our enterprise-ready Human Capital Management (HCM) solution automates HR processes, simplifies workflows, and delivers actionable insights. Complemented by our global payroll outsourcing and Employer of Record (EOR) services, we support your global workforce needs through a network of 40+ offices, four R&D centres, and business partners in 100+ countries.

AWS Services Used

Amazon Rekognition: offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.

Amazon Textract: a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
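As an illustration of the receipt-OCR flow described in this study, here is a sketch using Amazon Textract’s expense analysis API; the helper and its field handling are simplified assumptions, not BIPO’s actual code:

import boto3

textract = boto3.client('textract')

def extract_receipt_fields(image_bytes):
    # AnalyzeExpense is purpose-built for receipts and invoices.
    response = textract.analyze_expense(Document={'Bytes': image_bytes})
    fields = {}
    for doc in response['ExpenseDocuments']:
        for field in doc['SummaryFields']:
            # Standard types include TOTAL, VENDOR_NAME, INVOICE_RECEIPT_DATE.
            label = field['Type']['Text']
            value = field.get('ValueDetection', {}).get('Text')
            if value:
                fields[label] = value
    return fields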
BNS Group Case Study _ Amazon Web Services.txt
Reducing Virtual Machines from 40 to 12

The founders of BNS had been contemplating a migration from the company’s on-premises data center to the public cloud and observed a growing demand for cloud-based operations among current and potential BNS customers.

Clive Pereira, R&D director at BNS Group, explains, “The database that records Praisal’s SMS traffic resides in Praisal’s AWS environment. Praisal can now run complete analytics across its data and gain insights into what’s happening with its SMS traffic, which is a real game-changer for the organization.”

BNS is an Australian software provider focused on secure enterprise SMS and fax messaging. Its software runs on the Windows platform and is licensed to public sector organizations such as the Australian Taxation Office and to private firms like Suncorp. For Suncorp, BNS software handles between 2 million and 3 million monthly SMS messages.

About BNS Group

BNS Group is an Australian independent software vendor providing enterprise SMS and fax messaging solutions. Its customers include public sector organizations such as the Australian Taxation Office and private clients like Suncorp, for which it handles up to 3 million SMS messages monthly.

Pursuing a Cloud-Based Deployment Model

After its migration, BNS began developing a custom SMS solution for Praisal on AWS. Developers decided to use Microsoft SQL as a front-end application programming interface (API). Within two days, BNS developed an SQL API that could send and receive SMS from Praisal’s clients without its team having to learn any REST API calls or other technical complexities.

Over the course of five months, BNS performed an AWS Foundational Technical Review with the support of the AWS ISV team and completed its cloud migration in June 2022. “AWS has been very responsive throughout our migration journey and guided us in setting up the right cloud foundation from day one. The review process really helped us understand the AWS security paradigm,” adds Buchanan.

Receiving Strategic, Foundational Support from ISV Specialists

BNS founders gravitated to AWS because of its high availability and the AWS ISV Accelerate Program. “We really liked that AWS has an ISV competency in its partner program,” says Buchanan. “It was important for us to have our enterprise SMS software verified for use on AWS. The value that AWS places on the ISV stream sealed the deal in our choice of cloud provider.”

One of the areas BNS Group is focusing on with clients is tracking the journey of each SMS—those received and not received by target customers—via out-of-the-box analytics models. With enhanced analytics, BNS Group’s clients can drive customer engagement, increase retention, and reduce churn.

AWS Services Used

Amazon Elastic Compute Cloud: Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

AWS Foundational Technical Review: The AWS Foundational Technical Review (FTR) enables you to identify and remediate risks in your software or solutions.

To learn more, visit aws.amazon.com/solutions/migration.
BNS Group Meets Growing Demand for Cloud-Based SMS Solution on AWS

2022

Benefits of AWS

Reduces virtual machines from 40 to 12
Reduces infrastructure costs by 50%
Processes and transmits data faster
Configures security according to cloud best practices
Enables predictive analytics and data insights for clients
Increases productivity with reduced maintenance burden
Receives focused ISV support from AWS specialists
Accesses new client base as an AWS-certified ISV

When harnessed strategically, Short Message Service (SMS) can be an extremely effective marketing tool. According to Gartner, SMS open rates are as high as 98 percent compared to email’s 20 percent average. Companies looking to run scalable SMS applications often rely on commercial software from independent software vendors (ISVs) such as BNS Group to reach their target audience.

The final push for cloud migration came when BNS customer Praisal approached BNS for a cloud-based SMS solution in its Amazon Web Services (AWS) tenancy to connect with its users. BNS then consulted with AWS on how best to build new virtual machines and relicense software development tools securely on the cloud. The business wanted to steer away from the “lift and shift” migration approach to avoid transferring technical debt and “baggage” from the data center into the cloud.

Laurence Buchanan, CEO at BNS Group, says, “Some of our larger customers have started asking about the cloud as they begin their own modernization journey. As an independent software vendor, we knew we had to be on the cloud too. We had to ensure our products work in our customers’ cloud tenancy and build documentation to support a cloud versus an on-premises deployment of our software.”

By strategically starting with a clean slate on the AWS Cloud, BNS decreased its virtual machines from 40 to 12 and reduced infrastructure costs by 50 percent. The business spins up resizable Amazon Elastic Compute Cloud (Amazon EC2) instances for Microsoft Windows servers and uses Amazon RDS for SQL Server for database management. Amazon RDS for SQL Server makes it easy to set up, operate, and scale SQL Server deployments in the cloud.

Onboarding Data Scientists to Enhance Analytics Capabilities

The BNS SMS solution includes user-friendly dashboards that clients such as Praisal can use to understand their data and perform predictive analytics. The business plans to further enhance its analytics capabilities as part of its product development strategy and recently hired two data scientists. According to Buchanan, onboarding new hires on AWS is much faster and easier compared to the BNS data center environment.

To further enhance its analytics ambitions, BNS is now exploring how artificial intelligence and machine learning can benefit its clients. The company is also looking to list its enterprise solutions on AWS Marketplace to increase its customer reach. “If we didn’t migrate to AWS, we wouldn’t be able to engage the wide AWS customer base,” Buchanan says. “It’s been a win-win for BNS and AWS, and we look forward to what the future brings.”

Accelerating Transaction Rates While Increasing Productivity

The company has also experienced faster transaction rates on its SMS platform since the migration to AWS. “I’ve seen big improvements in throughput. We’re able to process and transmit data faster on AWS,” Buchanan says.
BNS has also reduced time spent on backend operations because it no longer carries out server maintenance, firewall updates, and disaster recovery planning and testing—elements that are now automated on AWS. Productivity has risen, and Buchanan can now allocate his time to R&D, quality assurance, creating documentation, and customer engagement.

The AWS ISV Accelerate Program is a co-sell program for organizations that provide software solutions that run on or integrate with AWS.
Boehringer Ingelheim Establishes Data-Driven Foundations Using AWS to Accelerate the Launch of New Medicines _ Boehringer Ingelheim Case Study _ AWS.txt
Boehringer Ingelheim Establishes Data-Driven Foundations Using AWS to Accelerate the Launch of New Medicines

2023

Learn how Boehringer Ingelheim is transforming its ability to develop breakthrough treatments with its Dataland solution built on AWS. Benefits include:

Enabled use of data for 10,000+ employees
Improved compliance with GxP regulations
Cut time to access data from months to hours
Upskilled 3,000+ employees through data academy

In the pharmaceutical industry, massive amounts of data—from clinical trials, biobanks, electronic health records, supply chain, and production—can help uncover origins of disease, cures, and quicker development and delivery of new treatments to patients.

Opportunity | Collaborating alongside AWS in a Company-Wide Data Transformation Initiative for Boehringer Ingelheim

Working alongside Amazon Web Services (AWS), Boehringer Ingelheim is implementing an advanced company-wide initiative called Dataland, which aims to make data findable, accessible, interoperable, and reusable in the cloud. Using Dataland, the company has a structured catalog that accelerates data-driven decision-making across the organization and spreads a company-wide culture focused on data centricity. “Our Dataland initiative, powered by AWS, is establishing a data-driven mindset and working culture at Boehringer Ingelheim and will offer an unprecedented complete data solution for all colleagues,” says Andreas Henrich, vice president of enterprise data and platforms.

Family owned since 1885, Boehringer Ingelheim serves more than 130 markets worldwide and spends €4.1 billion annually on research and development. In 2020, the company turned to AWS for help with an ambitious data transformation initiative to break down data silos and standardize enterprise-wide data solutions in 2 years while maintaining strict governance. “The main challenge is not the size of these huge datasets but knowing how to structure them. We chose AWS because we needed a trustworthy collaborator who fulfilled two main criteria: compliance and flexibility,” says Henrich. “Using AWS, we comply with regulations without too much customization. And its services are flexible enough to incorporate solutions from other third-party vendors to fill gaps in our current requirements.”

Boehringer Ingelheim expects to be more effective in targeting diseases and developing breakthrough treatments, with an ambition to considerably shorten clinical trials. “By centralizing and improving the availability of our data, we accelerate our use case development in the future,” says Henrich. “That results in faster time to market, which translates into better patient outcomes.”

About Boehringer Ingelheim

Founded in 1885, Boehringer Ingelheim works on breakthrough therapies to transform lives. More than 52,000 employees serve over 130 markets in three business areas: human pharma, animal health, and biopharmaceutical contract manufacturing.

AWS Services Used

AWS Professional Services: a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud.

GxP Compliance on AWS: expedites cloud migration by focusing on specific AWS applications which establish the environment needed to maintain compliance.

AWS Glue: a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.

Amazon S3: Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
But insights are often trapped in data silos. Global pharmaceutical company Boehringer Ingelheim is working to unlock the potential of data along its entire value chain by creating the infrastructure and processes to use data effectively.

Solution | Building a Centralized Data Hub that Has Reached 10,000 Employees

With Dataland, the company makes huge amounts of curated data available for the entire workforce through a self-service solution that helps to drive insights. Its centralized data hub optimizes the structure of datasets while accounting for their size, up to multiple petabytes for external data. For its data lake, the company turned to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Using Amazon S3, Boehringer Ingelheim stores structured and unstructured data, such as information from handwritten documents or videos. To extract maximum value, Boehringer Ingelheim uses AWS Glue, a serverless data integration service that makes it simpler to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning, and application development. Using AWS Glue, Boehringer Ingelheim relieves its data scientists of the heavy lifting previously required to maintain an extensive data catalog. Data scientists used to spend 60 percent of their time cleaning data, determining who could access it, and otherwise making it available. Now, users can start working within hours instead of the months they previously needed. “This is what we wanted to turn around,” says Henrich. “Now, as soon as we have a great idea, data is available at our fingertips, and data scientists can start working right away.” More than 10,000 Boehringer Ingelheim employees so far have experienced the new solution through visualization models, dashboards, and other methods.

Using solutions from GxP Compliance on AWS, Boehringer Ingelheim’s highly secure architecture aligns with industry requirements for improved compliance. The company’s data governance structure establishes clear guardrails and data quality rules that facilitate data reusability and improve synergies across an application landscape of roughly 1,000 systems. Boehringer Ingelheim’s infrastructure spans two AWS Availability Zones, which helps the company to meet varying data residency requirements in Europe and the United States.

Boehringer Ingelheim realized that organized datasets would be helpful only if its teams had the right skills to generate insights for their daily work. In October 2021, Boehringer Ingelheim launched its Data Science Academy, with a mission to upskill employees and help them identify how to use data effectively, to build a focus on data culture, and to address the difference in data maturity levels across the organization. More than 3,000 employees across experience levels have participated in the program. “This program is intended to increase our pool of data scientists and engineers through retraining and recruitment and to strengthen our company-wide data literacy,” Henrich says.
“This will increase awareness about the business potential of data and foster a data-driven culture across the company, encouraging an openness to new ways of working.”

To implement Dataland’s technological foundation, Boehringer Ingelheim collaborated closely alongside AWS Professional Services, a global team of experts that can help organizations realize desired business outcomes when using the cloud. “The AWS Professional Services team not only worked with us in the setup but also helped us understand what we needed to do to grow,” says Ferran Urgeles, program manager of Dataland. “It helped us establish clear processes and guardrails so that we had clarity on how to operate based on our organizational needs.”

Outcome | Pioneering Data Transformation on AWS for Clinical Value

The company is already deriving financial, efficiency, and compliance benefits from its 10 initial use cases. The use cases are real-world examples of how the Dataland initiative breaks down data silos, incorporates external real-world data, establishes strong data governance and data quality, and helps the company to better collaborate with external partners. “Most important, it is helping us to focus across the entire value chain, from research and development to commercialization, to enrich the lives of the human patients and animals that we serve,” says Henrich. “Our research pipeline can now make innovative products available sooner for patients.”

The company plans to increase the number of use cases in the pipeline and improve self-service capabilities, with a goal to phase out the initiative by 2025. “We see that data transformation on AWS helps us to create value and to work better and faster,” says Urgeles. “Together, we are progressing and pioneering these topics. It’s a great feeling to finally see the result and how far our impact can go.”
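Returning to the data catalog pattern in the solution section: one common way to keep such a catalog current is to let an AWS Glue crawler register and update tables automatically. The following is a minimal sketch; the names, path, and IAM role are illustrative assumptions, not Boehringer Ingelheim’s actual setup:

import boto3

glue = boto3.client('glue')

# A crawler scans new data in S3 and keeps the Data Catalog tables up to date.
glue.create_crawler(
    Name='dataland-clinical-crawler',                       # assumed name
    Role='arn:aws:iam::123456789012:role/GlueCrawlerRole',  # assumed role
    DatabaseName='dataland_catalog',
    Targets={'S3Targets': [{'Path': 's3://dataland-curated/clinical/'}]},
    SchemaChangePolicy={
        'UpdateBehavior': 'UPDATE_IN_DATABASE',
        'DeleteBehavior': 'LOG',
    },
)
glue.start_crawler(Name='dataland-clinical-crawler')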
Bosch Thermotechnology Accelerates IoT Deployment Using AWS Serverless Computing and AWS IoT Core _ Case Study _ AWS.txt
Bosch Thermotechnology Accelerates IoT Deployment Using AWS Serverless Computing and AWS IoT Core

2023

About Bosch Thermotechnology North America
Bosch Thermotechnology North America is a source of high-quality heating, cooling, and hot water systems. It is a division of Robert Bosch GmbH, a supplier of technology and services. In early 2021, Bosch TTNA began developing its first cloud-connected device, a heat pump system that technicians can remotely monitor, analyze, and troubleshoot. The company wanted to build a solution that could scale to handle highly variable workloads while requiring the least amount of effort to manage infrastructure.

Bosch Thermotechnology North America (Bosch TTNA) built smart heating, ventilating, and air-conditioning (HVAC) systems by modernizing and migrating its business to the cloud to monitor products remotely while removing the undifferentiated heavy lifting of managing the infrastructure. As part of the North American division of Robert Bosch GmbH, Bosch TTNA was new to smart device development and wanted a cost-effective solution to expand its infrastructure capacity and scalability while creating new smart technologies. Bosch TTNA used Amazon Web Services (AWS) to build solutions that connect its devices to AWS Internet of Things (AWS IoT). The solution uses AWS serverless technologies for data processing, application integration, and the scaling required to manage its business. Bosch TTNA can now remotely monitor its new smart energy and building devices with minimal operational overhead, improving customer service.

Bosch TTNA offers hardware solutions for its HVAC business and wants to transform into a software-driven company to better support wholesale, contractor, and homeowner customers. It is committed to offering state-of-the-art energy-efficient and smart systems that help reduce carbon emissions by building a portfolio of smart connected heating and cooling systems. The company saw an opportunity to use real-time device data to inform after-sale HVAC system maintenance and support. With the readiness of technology, a cloud-connected solution that captures, processes, and analyzes real-time device data can benefit customers and service providers. “We want to be smart HVAC champions. Sustainability is at the core of everything we do. The smarter our technologies are, the more efficient they will be,” says Pablo Ferreyra, head of software development for Bosch CI Americas. “We see using AWS as critical to that overall vision.”

4 months reduction in product time to market
First-ever smart product released by Bosch
Reduced total cost of ownership (TCO)
Reduced operational overhead
Increased development team’s agility

“We use AWS to achieve our business goals and to innovate in the technology space. Using AWS, we accelerate the change that we’re driving.”
Pablo Ferreyra, Head of Software Development for Bosch CI Americas, Bosch Thermotechnology North America

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure.
Solution | Creating Cloud Competency while Developing a New Product

Bosch TTNA realized the importance and challenge of hiring and upskilling a new team to develop and maintain its smart products. Although the architecture is now in place, Bosch TTNA initially turned to AWS for these skills as its new team developed the competencies that the company needed to be successful. Eighty percent of its development team received AWS Certification, which validates technical skills and cloud expertise. “Our talent is pretty autonomous at this point, and that is largely from using the support that we received from AWS,” says Ferreyra.

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

Outcome | Expanding Product Capabilities Using AWS Solutions

Given Bosch TTNA’s history selling connected thermostats, it knew that managing IoT infrastructure required significant resources. Bosch TTNA’s goals led it to use AWS services to deliver compelling products and services to customers at an optimized cost with reduced operational complexity. The company used AWS serverless technologies and AWS IoT Core to connect large numbers of IoT devices and route a high volume of messages to AWS services without managing infrastructure. AWS serverless technologies feature automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs. By using AWS to do the heavy lifting, Bosch TTNA’s developers can focus on adding value to the business by bringing new use cases and features to market.

The new architecture that Bosch TTNA develops is reusable across IoT use cases. Bosch TTNA uses AWS CloudFormation—a service to model, provision, and manage AWS and third-party resources by treating infrastructure as code—to standardize its architecture and scale it globally to other teams. This standardization accelerates the workloads for other teams because they do not have to start every IoT project from scratch, and they can build solutions faster than before, which has reduced time to market by an average of 4 months. “We have Bosch’s innovation on top of AWS innovation, which accelerates us further,” says Ferreyra. AWS Lambda—a serverless, event-driven compute service that can run code for virtually any type of application or backend service without provisioning or managing servers—fit this need, and Bosch TTNA decided to use it as the core service for the project. “For us, AWS Lambda was the perfect fit in terms of the burstiness of the workload and the cost considerations that we have for the solution,” says Ferreyra. With this solution, AWS managed the backend and infrastructure provisioning so that Bosch TTNA could focus on application innovation.
For fully managed message queuing, the company incorporated Amazon Simple Queue Service (Amazon SQS), which sends, stores, and receives messages between software components at any volume. Bosch TTNA launched this connected heat pump system in June 2022 in the United States, and its success has led the company to plan multiple future smart products.

Opportunity | Accelerating Product Innovation Using AWS Services to Create Smart HVAC Systems for Bosch TTNA

Bosch TTNA is developing and implementing innovative technologies within the HVAC space, which benefits its products and customers. Using Bosch TTNA’s solution, service partners benefit from near-real-time installation support, remote diagnostics, troubleshooting support, and smart system health alerting. Before going onsite, service partners can use the Bosch TTNA mobile app to remotely determine if there are problems with a system and find the steps and tools required for the repair, reducing service visits and expediting service delivery. The mobile app can also tell onsite installers whether they have performed an installation correctly, a valuable feature because the number one cause of warranty claims is defects introduced during system installation. This increases customer satisfaction and product durability and reduces warranty costs. Additionally, Bosch TTNA now has data from the field that shows how its devices behave and hold up under different external conditions. The company can use this data to quantify the durability of its devices and target the reliability of specific product components.

Bosch TTNA can now focus on making better products for its customers and service partners in less time and at a lower cost. Since the move to smart products and services, it has received better-than-expected sales results, and its successes have led it to explore other uses of AWS services, such as data lakes, data analytics, and machine learning. Bosch TTNA also wants to expand its current environment to extract more value from its data and thereby increase the service level and value to customers. “We use AWS to achieve our business goals and to innovate in the technology space. Using AWS, we accelerate the change that we’re driving,” says Ferreyra. Bosch Thermotechnology North America developed its first cloud-connected device using AWS Lambda and AWS IoT Core, optimizing costs while improving customer experience.
Botprise Reduces Time to Remediation by 86 on Average Using Automation and AWS Security Hub _ Botprise Case Study _ AWS.txt
Botprise Reduces Time to Remediation by 86% on Average Using Automation and AWS Security Hub

2023

Learn how Botprise, in the cloud security automation industry, reduced costs and time to remediation by centralizing security operations using AWS Security Hub.

Founded in October 2019, Botprise provides a security solution that monitors for configuration issues in cloud environments and offers automation of cloud operations. Because automation is complex and expensive to scale, Botprise offers apps and templates that let customers set up automation in a matter of minutes or days. Botprise’s customers don’t need technical expertise, and they gain value right away rather than taking months or years to build tools on their own.

Solution | Cutting Operational Costs by 34% and Saving Time Using AWS Security Hub

By using the infrastructure of AWS services to build its automation, Botprise significantly reduced the time to market for its solution. Time savings early on are particularly important for a startup looking to acquire customers quickly. “Using AWS services and support from the AWS team, we could move much faster,” says Bulusu. “We built our security solution in 1 year, cutting the time to market in half.” As Botprise continues to increase its customer base, the company can scale as needed in a cost-effective way using AWS services. Botprise continues to experience ongoing cost savings as well, reducing its operational costs by 34 percent because of the reduced manpower costs of using AWS services to automate tasks.

Botprise modernized and strengthened its security posture using AWS services. Using insights from services such as AWS Security Hub, Botprise reduced the time it takes from issue identification to remediation by 86 percent on average because many issues no longer require manual remediation. It also bolstered the security of its solution using AWS services, increasing customer confidence and facilitating more growth. With time savings from automation, customer IT teams can focus on complex issues, which is important for Botprise’s customers, which span the energy, financial services, and technology industries and have mission-critical security needs.

Using AWS Security Hub, Botprise can see data from multiple sources, including other AWS services and supported third-party products, on a centralized dashboard. This dashboard gives Botprise complete visibility into its security posture, helping the company better understand challenges and identify areas that need automation. Using AWS Security Hub, Botprise can show findings from Amazon GuardDuty, which protects AWS accounts with intelligent threat detection. “Bringing data from all services into a centralized dashboard makes life a lot easier,” says Bulusu.

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure.
“You can monitor your security posture and see everything you need to keep an eye on.” Findings from Amazon Inspector, an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, also appear in AWS Security Hub.

About Botprise
Founded in October 2019, Botprise is a cloud security automation company. Its solution saves customers time and effort by monitoring for configuration issues in cloud environments and automating cloud security operations tasks.

Overview
Botprise has aggressive growth goals for its no-code automation solution that helps customers reduce the amount of manual intervention needed for managing cloud systems. To scale effectively while meeting stringent requirements for its security operation automation solution, Botprise looked to Amazon Web Services (AWS). Using services such as AWS Security Hub, a cloud security posture management service for automating AWS security checks and centralizing security alerts, Botprise achieved operational cost savings, significantly reduced the time to remediate a security issue, and cut the time to market in half to stay on track with its growth goals of nearly quadrupling its number of customers in the next year.

Achieved 86% average reduction in time to remediation for security issues
34% reduction in operational costs using automation
Cut time to market in half
Achieved rapid customer growth

Opportunity | Using Programs Like AWS MAP to Build Momentum and Facilitate Growth for Botprise

In both 2020 and 2022, Botprise went through the AWS Well-Architected review process, which helps companies learn, measure, and build using architectural best practices and a framework of six pillars. Its security pillar focuses on using cloud technology to protect information and systems, such as managing confidentiality and security controls. “The AWS Well-Architected reviews gave us good guidance about what we can work on and what gaps we need to fill to make our company better,” says Kishan Bulusu, founder and chief executive officer at Botprise. In June 2022, Botprise also went through the AWS Migration Acceleration Program (AWS MAP), a comprehensive cloud migration program that uses outcome-driven methodology developed by migrating thousands of enterprise customers to the cloud.

AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads.

AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.

Outcome | Continuing to Grow Using AWS Services

Botprise plans to continue building more automation around AWS services to maintain its security posture, facilitate growth, and help its customers get the most out of AWS. The company expects to scale rapidly in the next year, growing from 30 customers to over 100 by the end of 2023. “We want to use as many AWS services as we can to drive value to our customers in their automation journey, particularly in the areas of security and cloud operations,” says Bulusu.
Beginning as a startup, Botprise needed a cloud solution that could scale to support its future growth while maintaining high security standards for itself and its customers. From its founding, Botprise used AWS services to improve its security posture. The company started with automation around IT operations, building automation for internal purposes first and then offering it to customers. In 2022, Botprise pivoted to develop more cloud automation solutions with an increasing focus on security operation automation. During this pivot, Botprise received support from AWS, which Botprise used to gain momentum and grow by 400 percent in the security operations automation sector.
Build a powerful question answering bot with Amazon SageMaker Amazon OpenSearch Service Streamlit and LangChain _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain

by Amit Arora, Navneet Tuteja, and Xin Huang | on 25 MAY 2023 | in Advanced (300), Amazon SageMaker, Amazon SageMaker JumpStart, Expert (400), Generative AI, Technical How-to

One of the most common applications of generative AI and large language models (LLMs) in an enterprise environment is answering questions based on the enterprise’s knowledge corpus. Amazon Lex provides the framework for building AI-based chatbots. Pre-trained foundation models (FMs) perform well at natural language understanding (NLU) tasks such as summarization, text generation, and question answering on a broad variety of topics, but they either struggle to provide accurate (without hallucinations) answers or completely fail at answering questions about content that they haven’t seen as part of their training data. Furthermore, FMs are trained with a point-in-time snapshot of data and have no inherent ability to access fresh data at inference time; without this ability, they might provide responses that are potentially incorrect or inadequate.

A commonly used approach to address this problem is a technique called Retrieval Augmented Generation (RAG). In the RAG-based approach, we convert the user question into vector embeddings using an LLM and then do a similarity search for these embeddings in a pre-populated vector database holding the embeddings for the enterprise knowledge corpus. A small number of similar documents (typically three) is added as context along with the user question to the “prompt” provided to another LLM, and then that LLM generates an answer to the user question using the information provided as context in the prompt. RAG models were introduced by Lewis et al. in 2020 as a model where parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. To understand the overall structure of a RAG-based approach, refer to Question answering using Retrieval Augmented Generation with foundation models in Amazon SageMaker JumpStart.

In this post, we provide a step-by-step guide with all the building blocks for creating an enterprise-ready RAG application such as a question answering bot. We use a combination of different AWS services, open-source foundation models (FLAN-T5 XXL for text generation and GPT-J-6B for embeddings), and packages such as LangChain for interfacing with all the components and Streamlit for building the bot frontend. We provide an AWS CloudFormation template to stand up all the resources required for building this solution. We then demonstrate how to use LangChain for tying everything together:

- Interfacing with LLMs hosted on Amazon SageMaker.
- Chunking of knowledge base documents.
- Ingesting document embeddings into Amazon OpenSearch Service.
- Implementing the question answering task.

We can use the same architecture to swap the open-source models with the Amazon Titan models. After Amazon Bedrock launches, we will publish a follow-up post showing how to implement similar generative AI applications using Amazon Bedrock, so stay tuned.

Solution overview

We use the SageMaker docs as the knowledge corpus for this post.
We convert the HTML pages on this site into smaller overlapping chunks (to retain some context continuity between chunks) of information and then convert these chunks into embeddings using the gpt-j-6b model and store the embeddings in OpenSearch Service. We implement the RAG functionality inside an AWS Lambda function with Amazon API Gateway to handle routing all requests to the Lambda function. We implement a chatbot application in Streamlit that invokes the function via the API Gateway, and the function does a similarity search in the OpenSearch Service index for the embeddings of the user question. The matching documents (chunks) are added to the prompt as context by the Lambda function, and then the function uses the flan-t5-xxl model deployed as a SageMaker endpoint to generate an answer to the user question. All the code for this post is available in the GitHub repo.

The following figure represents the high-level architecture of the proposed solution.

Figure 1: Architecture

Step-by-step explanation:

1. The user provides a question via the Streamlit web application.
2. The Streamlit application invokes the API Gateway endpoint REST API.
3. The API Gateway invokes the Lambda function.
4. The function invokes the SageMaker endpoint to convert the user question into embeddings.
5. The function invokes an OpenSearch Service API to find documents similar to the user question.
6. The function creates a “prompt” with the user query and the “similar documents” as context and asks the SageMaker endpoint to generate a response.
7. The response is provided from the function to the API Gateway.
8. The API Gateway provides the response to the Streamlit application.
9. The user views the response in the Streamlit application.

As illustrated in the architecture diagram, we use the following AWS services:

- SageMaker and Amazon SageMaker JumpStart for hosting the two LLMs.
- OpenSearch Service for storing the embeddings of the enterprise knowledge corpus and doing similarity search with user questions.
- Lambda for implementing the RAG functionality and exposing it as a REST endpoint via the API Gateway.
- Amazon SageMaker Processing jobs for large-scale data ingestion into OpenSearch.
- Amazon SageMaker Studio for hosting the Streamlit application.
- AWS Identity and Access Management (IAM) roles and policies for access management.
- AWS CloudFormation for creating the entire solution stack through infrastructure as code.

In terms of open-source packages used in this solution, we use LangChain for interfacing with OpenSearch Service and SageMaker, and FastAPI for implementing the REST API interface in the Lambda function (a minimal sketch of the LangChain-to-SageMaker interface appears after the workflow list below).

The workflow for instantiating the solution presented in this post in your own AWS account is as follows:

1. Run the CloudFormation template provided with this post in your account. This will create all the necessary infrastructure resources needed for this solution: SageMaker endpoints for the LLMs, an OpenSearch Service cluster, the API Gateway, the Lambda function, a SageMaker notebook, and IAM roles.
2. Run the data_ingestion_to_vectordb.ipynb notebook in the SageMaker notebook to ingest data from the SageMaker docs into an OpenSearch Service index.
3. Run the Streamlit application on a terminal in Studio and open the URL for the application in a new browser tab.
4. Ask your questions about SageMaker via the chat interface provided by the Streamlit app and view the responses generated by the LLM.

These steps are discussed in detail in the following sections.
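To make the "interfacing with LLMs hosted on SageMaker" step concrete, here is a minimal, hypothetical sketch of wrapping a SageMaker-hosted text generation endpoint as a LangChain LLM. The endpoint name and the JSON request/response keys are assumptions for illustration (adjust them to your endpoint's actual schema); the handler code actually used by this post is shown later.

import json
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Assumed request schema for a flan-t5-xxl JumpStart endpoint
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Assumed response schema: {"generated_texts": ["..."]}
        return json.loads(output.read().decode("utf-8"))["generated_texts"][0]

sm_llm = SagemakerEndpoint(
    endpoint_name="flan-t5-xxl-endpoint",  # hypothetical; use the endpoint created by the stack
    region_name="us-east-1",
    model_kwargs={"temperature": 0.1, "max_length": 500},
    content_handler=ContentHandler(),
)

print(sm_llm("What is Amazon SageMaker?"))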
Prerequisites

To implement the solution provided in this post, you should have an AWS account and familiarity with LLMs, OpenSearch Service, and SageMaker. We need access to accelerated instances (GPUs) for hosting the LLMs. This solution uses one instance each of ml.g5.12xlarge and ml.g5.24xlarge; you can check the availability of these instances in your AWS account and request them as needed via a Service Quota increase request, as shown in the following screenshot.

Figure 2: Service Quota Increase Request

Use AWS CloudFormation to create the solution stack

We use AWS CloudFormation to create a SageMaker notebook called aws-llm-apps-blog and an IAM role called LLMAppsBlogIAMRole. Choose Launch Stack for the Region you want to deploy resources to. All parameters needed by the CloudFormation template have default values already filled in, except for the OpenSearch Service password, which you have to provide. Make a note of the OpenSearch Service username and password; we use those in subsequent steps. This template takes about 15 minutes to complete. Launch Stack links are provided for the following Regions: us-east-1, us-west-2, eu-west-1, and ap-northeast-1.

After the stack is created successfully, navigate to the stack’s Outputs tab on the AWS CloudFormation console and note the values for OpenSearchDomainEndpoint and LLMAppAPIEndpoint. We use those in the subsequent steps.

Figure 3: CloudFormation Stack Outputs

Ingest the data into OpenSearch Service

To ingest the data, complete the following steps:

1. On the SageMaker console, choose Notebooks in the navigation pane.
2. Select the notebook aws-llm-apps-blog and choose Open JupyterLab.

Figure 4: Open JupyterLab

3. Choose data_ingestion_to_vectordb.ipynb to open it in JupyterLab. This notebook will ingest the SageMaker docs to an OpenSearch Service index called llm_apps_workshop_embeddings.

Figure 5: Open Data Ingestion Notebook

4. When the notebook is open, on the Run menu, choose Run All Cells to run the code in this notebook. This will download the dataset locally into the notebook and then ingest it into the OpenSearch Service index. This notebook takes about 20 minutes to run. The notebook also ingests the data into another vector database called FAISS. The FAISS index files are saved locally and then uploaded to Amazon Simple Storage Service (Amazon S3) so that they can optionally be used by the Lambda function as an illustration of using an alternate vector database.

Figure 6: Notebook Run All Cells

Now we’re ready to split the documents into chunks, which can then be converted into embeddings to be ingested into OpenSearch. We use the LangChain RecursiveCharacterTextSplitter class to chunk the documents and then use the LangChain SagemakerEndpointEmbeddingsJumpStart class to convert these chunks into embeddings using the gpt-j-6b LLM. We store the embeddings in OpenSearch Service via the LangChain OpenSearchVectorSearch class. We package this code into Python scripts that are provided to the SageMaker Processing job via a custom container. See the data_ingestion_to_vectordb.ipynb notebook for the full code; a minimal sketch of this chunk-and-ingest flow follows.
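The sketch below uses stock LangChain classes (the SagemakerEndpointEmbeddingsJumpStart class in the notebook is a custom wrapper; here we assume the base SagemakerEndpointEmbeddings class, with hypothetical endpoint names, file names, credentials, and payload keys):

import json
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs, model_kwargs):
        # Assumed request schema for the gpt-j-6b JumpStart embeddings endpoint
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        # Assumed response schema: {"embedding": [[...], [...], ...]}
        return json.loads(output.read().decode("utf-8"))["embedding"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="gpt-j-6b-embeddings",  # hypothetical; use the endpoint from the stack
    region_name="us-east-1",
    content_handler=ContentHandler(),
)

# Split the docs into overlapping chunks to retain context across chunk boundaries
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=30)
with open("sagemaker_documentation.txt") as f:  # hypothetical input file
    chunks = splitter.create_documents([f.read()])

# Embed the chunks and index them in OpenSearch in one call
OpenSearchVectorSearch.from_documents(
    documents=chunks,
    embedding=embeddings,
    opensearch_url="https://<OpenSearchDomainEndpoint>",  # from the stack outputs
    index_name="llm_apps_workshop_embeddings",
    http_auth=("<username>", "<password>"),
)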
Create a custom container, then install in it the LangChain and opensearch-py Python packages. Upload this container image to Amazon Elastic Container Registry (Amazon ECR). We use the SageMaker ScriptProcessor class to create a SageMaker Processing job that will run on multiple nodes. The data files available in Amazon S3 are automatically distributed across the SageMaker Processing job instances by setting s3_data_distribution_type='ShardedByS3Key' as part of the ProcessingInput provided to the processing job. Each node processes a subset of the files, which brings down the overall time required to ingest the data into OpenSearch Service. Each node also uses Python multiprocessing to internally parallelize the file processing. Therefore, there are two levels of parallelization happening: one at the cluster level, where individual nodes distribute the work (files) among themselves, and another at the node level, where the files in a node are also split between multiple processes running on the node.

# setup the ScriptProcessor with the above parameters
processor = ScriptProcessor(base_job_name=base_job_name,
                            image_uri=image_uri,
                            role=aws_role,
                            instance_type=instance_type,
                            instance_count=instance_count,
                            command=["python3"],
                            tags=tags)

# setup input from S3, note the ShardedByS3Key, this ensures that
# each instance gets a random and equal subset of the files in S3.
inputs = [ProcessingInput(source=f"s3://{bucket}/{app_name}/{DOMAIN}",
                          destination='/opt/ml/processing/input_data',
                          s3_data_distribution_type='ShardedByS3Key',
                          s3_data_type='S3Prefix')]

logger.info(f"creating an opensearch index with name={opensearch_index}")

# ready to run the processing job
st = time.time()
processor.run(code="container/load_data_into_opensearch.py",
              inputs=inputs,
              outputs=[],
              arguments=["--opensearch-cluster-domain", opensearch_domain_endpoint,
                         "--opensearch-secretid", os_creds_secretid_in_secrets_manager,
                         "--opensearch-index-name", opensearch_index,
                         "--aws-region", aws_region,
                         "--embeddings-model-endpoint-name", embeddings_model_endpoint_name,
                         "--chunk-size-for-doc-split", str(CHUNK_SIZE_FOR_DOC_SPLIT),
                         "--chunk-overlap-for-doc-split", str(CHUNK_OVERLAP_FOR_DOC_SPLIT),
                         "--input-data-dir", "/opt/ml/processing/input_data",
                         "--create-index-hint-file", CREATE_OS_INDEX_HINT_FILE,
                         "--process-count", "2"])

Close the notebook after all cells run without any error. Your data is now available in OpenSearch Service. Enter the following URL in your browser’s address bar to get a count of documents in the llm_apps_workshop_embeddings index. Use the OpenSearch Service domain endpoint from the CloudFormation stack outputs in the URL below. You’ll be prompted for the OpenSearch Service username and password; these are available from the CloudFormation stack.

https://your-opensearch-domain-endpoint/llm_apps_workshop_embeddings/_count

The browser window should show an output similar to the following. This output shows that 5,667 documents were ingested into the llm_apps_workshop_embeddings index.

{"count":5667,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0}}

Run the Streamlit application in Studio

Now we’re ready to run the Streamlit web application for our question answering bot. This application allows the user to ask a question and then fetches the answer via the /llm/rag REST API endpoint provided by the Lambda function. Studio provides a convenient platform to host the Streamlit web application. The following steps describe how to run the Streamlit app on Studio. Alternatively, you could follow the same procedure to run the app on your laptop.

1. Open Studio and then open a new terminal.
2. Run the following commands on the terminal to clone the code repository for this post and install the Python packages needed by the application:

git clone https://github.com/aws-samples/llm-apps-workshop
cd llm-apps-workshop/blogs/rag/app
pip install -r requirements.txt

3. The API Gateway endpoint URL that is available from the CloudFormation stack output needs to be set in the webapp.py file. This is done by running the following sed command. Replace replace-with-LLMAppAPIEndpoint-value-from-cloudformation-stack-outputs in the shell commands with the value of the LLMAppAPIEndpoint field from the CloudFormation stack output, and then run the following commands to start a Streamlit app on Studio:

EP=replace-with-LLMAppAPIEndpoint-value-from-cloudformation-stack-outputs
# replace __API_GW_ENDPOINT__ with output from the cloud formation stack
sed -i "s|__API_GW_ENDPOINT__|$EP|g" webapp.py
streamlit run webapp.py

4. When the application runs successfully, you’ll see an output similar to the following (the IP addresses you see will be different from the ones shown in this example). Note the port number (typically 8501) from the output to use as part of the URL for the app in the next step.

sagemaker-user@studio$ streamlit run webapp.py
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
You can now view your Streamlit app in your browser.
Network URL: http://169.255.255.2:8501
External URL: http://52.4.240.77:8501

5. You can access the app in a new browser tab using a URL that is similar to your Studio domain URL. For example, if your Studio URL is https://d-randomidentifier.studio.us-east-1.sagemaker.aws/jupyter/default/lab? then the URL for your Streamlit app will be https://d-randomidentifier.studio.us-east-1.sagemaker.aws/jupyter/default/proxy/8501/webapp (notice that lab is replaced with proxy/8501/webapp). If the port number noted in the previous step is different from 8501, use it instead of 8501 in the URL for the Streamlit app.

The following screenshot shows the app with a couple of user questions.
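For reference, a stripped-down, hypothetical version of what webapp.py does is sketched below: it posts the user's question to the /llm/rag endpoint and renders the answer. The q, max_matching_docs, and verbose request fields and the answer response field match the Lambda handler shown in the next section; the endpoint URL is a placeholder.

import requests
import streamlit as st

API_ENDPOINT = "https://<LLMAppAPIEndpoint>/llm/rag"  # placeholder; set via the sed command above

st.title("Ask questions about Amazon SageMaker")
question = st.text_input("Your question")

if question:
    # Post the question to the Lambda-backed REST API and display the answer
    payload = {"q": question, "max_matching_docs": 3, "verbose": False}
    response = requests.post(API_ENDPOINT, json=payload, timeout=60)
    st.write(response.json()["answer"])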
A closer look at the RAG implementation in the Lambda function

Now that we have the application working end to end, let’s take a closer look at the Lambda function. The Lambda function uses FastAPI to implement the REST API for RAG and the Mangum package to wrap the API with a handler that we package and deploy in the function. We use the API Gateway to route all incoming requests to invoke the function and handle the routing internally within our application. The following code snippet shows how we find documents in the OpenSearch index that are similar to the user question and then create a prompt by combining the question and the similar documents. This prompt is then provided to the LLM for generating an answer to the user question.

@router.post("/rag")
async def rag_handler(req: Request) -> Dict[str, Any]:
    # dump the received request for debugging purposes
    logger.info(f"req={req}")

    # initialize vector db and SageMaker Endpoint
    _init(req)

    # Use the vector db to find similar documents to the query
    # the vector db call would automatically convert the query text
    # into embeddings
    docs = _vector_db.similarity_search(req.q, k=req.max_matching_docs)
    logger.info(f"here are the {req.max_matching_docs} closest matching docs to the query=\"{req.q}\"")
    for d in docs:
        logger.info(f"---------")
        logger.info(d)
        logger.info(f"---------")

    # now that we have the matching docs, lets pack them as a context
    # into the prompt and ask the LLM to generate a response
    prompt_template = """Answer based on context:\n\n{context}\n\n{question}"""
    prompt = PromptTemplate(
        template=prompt_template, input_variables=["context", "question"]
    )
    logger.info(f"prompt sent to llm = \"{prompt}\"")
    chain = load_qa_chain(llm=_sm_llm, prompt=prompt)
    answer = chain({"input_documents": docs, "question": req.q}, return_only_outputs=True)['output_text']
    logger.info(f"answer received from llm,\nquestion: \"{req.q}\"\nanswer: \"{answer}\"")
    resp = {'question': req.q, 'answer': answer}
    if req.verbose is True:
        resp['docs'] = docs

    return resp

Clean up

To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack, as shown in the following screenshot.

Figure 7: Cleaning Up

Conclusion

In this post, we showed how to create an enterprise-ready RAG solution using a combination of AWS services, open-source LLMs, and open-source Python packages. We encourage you to learn more by exploring JumpStart, Amazon Titan models, Amazon Bedrock, and OpenSearch Service and building a solution using the sample implementation provided in this post and a dataset relevant to your business. If you have questions or suggestions, leave a comment.

About the Authors

Amit Arora is an AI and ML Specialist Architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington D.C.

Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.

Navneet Tuteja is a Data Specialist at Amazon Web Services. Before joining AWS, Navneet worked as a facilitator for organizations seeking to modernize their data architectures and implement comprehensive AI/ML solutions. She holds an engineering degree from Thapar University, as well as a master’s degree in statistics from Texas A&M University.
Build a semantic search engine for tabular columns with Transformers and Amazon OpenSearch Service _ AWS Big Data Blog.txt
AWS Big Data Blog

Build a semantic search engine for tabular columns with Transformers and Amazon OpenSearch Service

by Kachi Odoemene, Austin Welch, and Taylor McNally | on 01 MAR 2023 | in Amazon ML Solutions Lab, Amazon OpenSearch Service, Amazon SageMaker, Analytics, AWS Glue, Intermediate (200), Technical How-to

Finding similar columns in a data lake has important applications in data cleaning and annotation, schema matching, data discovery, and analytics across multiple data sources. The inability to accurately find and analyze data from disparate sources represents a potential efficiency killer for everyone from data scientists, medical researchers, and academics to financial and government analysts. Conventional solutions involve lexical keyword search or regular expression matching, which are susceptible to data quality issues such as absent column names or different column naming conventions across diverse datasets (for example, zip_code, zcode, postalcode).

In this post, we demonstrate a solution for searching for similar columns based on column name, column content, or both. The solution uses approximate nearest neighbor algorithms available in Amazon OpenSearch Service to search for semantically similar columns. To facilitate the search, we create feature representations (embeddings) for individual columns in the data lake using pre-trained Transformer models from the sentence-transformers library in Amazon SageMaker. Finally, to interact with and visualize results from our solution, we build an interactive Streamlit web application running on AWS Fargate. We include a code tutorial for you to deploy the resources to run the solution on sample data or your own data.

Solution overview

The following architecture diagram illustrates the two-stage workflow for finding semantically similar columns. The first stage runs an AWS Step Functions workflow that creates embeddings from tabular columns and builds the OpenSearch Service search index. The second stage, or the online inference stage, runs a Streamlit application through Fargate. The web application collects input search queries and retrieves from the OpenSearch Service index the approximate k-most-similar columns to the query.

Figure 1. Solution architecture

The automated workflow proceeds in the following steps:

1. The user uploads tabular datasets into an Amazon Simple Storage Service (Amazon S3) bucket, which invokes an AWS Lambda function that initiates the Step Functions workflow.
2. The workflow begins with an AWS Glue job that converts the CSV files into Apache Parquet data format.
3. A SageMaker Processing job creates embeddings for each column using pre-trained models or custom column embedding models. The SageMaker Processing job saves the column embeddings for each table in Amazon S3.
4. A Lambda function creates the OpenSearch Service domain and cluster to index the column embeddings produced in the previous step.
5. Finally, an interactive Streamlit web application is deployed with Fargate. The web application provides an interface for the user to input queries to search the OpenSearch Service domain for similar columns.

You can download the code tutorial from GitHub to try this solution on sample data or your own data. Instructions on how to deploy the required resources for this tutorial are available on GitHub.

Prerequisites

To implement this solution, you need the following:

An AWS account.
Basic familiarity with AWS services such as the AWS Cloud Development Kit (AWS CDK), Lambda, OpenSearch Service, and SageMaker Processing.
A tabular dataset to create the search index. You can bring your own tabular data or download the sample datasets on GitHub.

Build a search index

The first stage builds the column search engine index. The following figure illustrates the Step Functions workflow that runs this stage.

Figure 2 – Step Functions workflow – multiple embedding models

Datasets

In this post, we build a search index to include over 400 columns from over 25 tabular datasets. The datasets originate from the following public sources:

s3://sagemaker-sample-files/datasets/tabular/
NYC Open Data
Chicago Data Portal

For the full list of the tables included in the index, see the code tutorial on GitHub. You can bring your own tabular dataset to augment the sample data or build your own search index. We include two Lambda functions that initiate the Step Functions workflow to build the search index for individual CSV files or a batch of CSV files, respectively.

Transform CSV to Parquet

Raw CSV files are converted to the Parquet data format with AWS Glue. Parquet is a column-oriented file format preferred in big data analytics that provides efficient compression and encoding. In our experiments, the Parquet data format offered a significant reduction in storage size compared to raw CSV files. We also used Parquet as a common data format to convert other data formats (for example, JSON and NDJSON) because it supports advanced nested data structures.

Create tabular column embeddings

To extract embeddings for individual table columns in the sample tabular datasets in this post, we use the following pre-trained models from the sentence-transformers library. For additional models, see Pretrained Models.

Model name / Dimension / Size (MB):
all-MiniLM-L6-v2 / 384 / 80
all-distilroberta-v1 / 768 / 290
average_word_embeddings_glove.6B.300d / 300 / 420

The SageMaker Processing job runs create_embeddings.py (code) for a single model. For extracting embeddings from multiple models, the workflow runs parallel SageMaker Processing jobs, as shown in the Step Functions workflow. We use the model to create two sets of embeddings:

column_name_embeddings – Embeddings of column names (headers)
column_content_embeddings – Average embedding of all the rows in the column

For more information about the column embedding process, see the code tutorial on GitHub; a minimal sketch follows below. An alternative to the SageMaker Processing step is to create a SageMaker batch transform to get column embeddings on large datasets. This would require deploying the model to a SageMaker endpoint. For more information, see Use Batch Transform.
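The following is a minimal sketch of how the two embedding types can be computed with sentence-transformers, assuming a hypothetical Parquet table and column name (the real create_embeddings.py script handles batching and multiple models):

import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
df = pd.read_parquet("table.parquet")  # hypothetical input table
column = "zip_code"                    # hypothetical column

# Embedding of the column name (header)
column_name_embedding = model.encode(column)

# Average embedding of the (stringified) values in the column
row_embeddings = model.encode(df[column].astype(str).tolist())
column_content_embedding = np.mean(row_embeddings, axis=0)

print(column_name_embedding.shape, column_content_embedding.shape)  # (384,) (384,)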
Index embeddings with OpenSearch Service

In the final step of this stage, a Lambda function adds the column embeddings to an OpenSearch Service approximate k-Nearest Neighbor (kNN) search index. Each model is assigned its own search index. For more information about the approximate kNN search index parameters, see k-NN.

Online inference and semantic search with a web app

The second stage of the workflow runs a Streamlit web application where you can provide inputs and search for semantically similar columns indexed in OpenSearch Service. The application layer uses an Application Load Balancer, Fargate, and Lambda. The application infrastructure is automatically deployed as part of the solution.

The application allows you to provide an input and search for semantically similar column names, column content, or both. Additionally, you can select the embedding model and the number of nearest neighbors to return from the search. The application receives inputs, embeds the input with the specified model, and uses kNN search in OpenSearch Service to search the indexed column embeddings and find the columns most similar to the given input. The search results displayed include the table names, column names, and similarity scores for the columns identified, as well as the locations of the data in Amazon S3 for further exploration.

The following figure shows an example of the web application. In this example, we searched for columns in our data lake that have Column Names (payload type) similar to district (payload). The application used all-MiniLM-L6-v2 as the embedding model and returned 10 (k) nearest neighbors from our OpenSearch Service index. The application returned transit_district, city, borough, and location as the four most similar columns based on the data indexed in OpenSearch Service. This example demonstrates the ability of the search approach to identify semantically similar columns across datasets.

Figure 3: Web application user interface

Clean up

To delete the resources created by the AWS CDK in this tutorial, run the following command:

cdk destroy --all

Conclusion

In this post, we presented an end-to-end workflow for building a semantic search engine for tabular columns. Get started today on your own data with our code tutorial available on GitHub. If you’d like help accelerating your use of ML in your products and processes, please contact the Amazon Machine Learning Solutions Lab.

About the Authors

Kachi Odoemene is an Applied Scientist at AWS AI. He builds AI/ML solutions to solve business problems for AWS customers.

Taylor McNally is a Deep Learning Architect at Amazon Machine Learning Solutions Lab. He helps customers from various industries build solutions leveraging AI/ML on AWS. He enjoys a good cup of coffee, the outdoors, and time with his family and energetic dog.

Austin Welch is a Data Scientist in the Amazon ML Solutions Lab. He develops custom deep learning models to help AWS public sector customers accelerate their AI and cloud adoption. In his spare time, he enjoys reading, traveling, and jiu-jitsu.

TAGS: Data Lake, Embedding, Python, tutorial
Build custom chatbot applications using OpenChatkit models on Amazon SageMaker _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Build custom chatbot applications using OpenChatKit models on Amazon SageMaker

by Vikram Elango, Andrew Smith, and Dhawalkumar Patel | on 12 JUN 2023 | in Amazon SageMaker, Customer Solutions, Expert (400), Technical How-to

Open-source large language models (LLMs) have become popular, allowing researchers, developers, and organizations to access these models to foster innovation and experimentation. This encourages collaboration from the open-source community to contribute to the development and improvement of LLMs. Open-source LLMs provide transparency to the model architecture, training process, and training data, which allows researchers to understand how the model works, identify potential biases, and address ethical concerns. These open-source LLMs are democratizing generative AI by making advanced natural language processing (NLP) technology available to a wide range of users to build mission-critical business applications. GPT-NeoX, LLaMA, Alpaca, GPT4All, Vicuna, Dolly, and OpenAssistant are some of the popular open-source LLMs.

OpenChatKit is an open-source LLM used to build general-purpose and specialized chatbot applications, released by Together Computer in March 2023 under the Apache-2.0 license. This model allows developers to have more control over the chatbot’s behavior and tailor it to their specific applications. OpenChatKit provides a set of tools, a base bot, and building blocks to build fully customized, powerful chatbots. The key components are as follows:

- An instruction-tuned LLM, fine-tuned for chat from EleutherAI’s GPT-NeoX-20B with over 43 million instructions on 100% carbon-negative compute. The GPT-NeoXT-Chat-Base-20B model is based on EleutherAI’s GPT-NeoX model, and is fine-tuned with data focusing on dialog-style interactions.
- Customization recipes to fine-tune the model to achieve high accuracy on your tasks.
- An extensible retrieval system enabling you to augment bot responses with information from a document repository, API, or other live-updating information source at inference time.
- A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to.

The increasing scale and size of deep learning models present obstacles to successfully deploying these models in generative AI applications. To meet the demands for low latency and high throughput, it becomes essential to employ sophisticated methods like model parallelism and quantization. Lacking proficiency in the application of these methods, numerous users encounter difficulties in initiating the hosting of sizable models for generative AI use cases.

In this post, we show how to deploy the OpenChatKit models (GPT-NeoXT-Chat-Base-20B and GPT-JT-Moderation-6B) on Amazon SageMaker using DJL Serving and open-source model parallel libraries like DeepSpeed and Hugging Face Accelerate. We use DJL Serving, which is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. We demonstrate how the Hugging Face Accelerate library simplifies deployment of large models onto multiple GPUs, thereby reducing the burden of running LLMs in a distributed fashion. Let’s get started!

Extensible retrieval system

An extensible retrieval system is one of the key components of OpenChatKit. It enables you to customize the bot response based on a closed domain knowledge base.
Although LLMs are able to retain factual knowledge in their model parameters and can achieve remarkable performance on downstream NLP tasks when fine-tuned, their capacity to access and predict closed domain knowledge accurately remains restricted. Therefore, when they’re presented with knowledge-intensive tasks, their performance suffers compared to that of task-specific architectures. You can use the OpenChatKit retrieval system to augment knowledge in their responses from external knowledge sources such as Wikipedia, document repositories, APIs, and other information sources. The retrieval system enables the chatbot to access current information by obtaining pertinent details in response to a specific query, thereby supplying the necessary context for the model to generate answers. To illustrate the functionality of this retrieval system, we provide support for an index of Wikipedia articles and offer example code demonstrating how to invoke a web search API for information retrieval. By following the provided documentation, you can integrate the retrieval system with any dataset or API during the inference process, allowing the chatbot to incorporate dynamically updated data into its responses.

Moderation model

Moderation models are important in chatbot applications to enforce content filtering, quality control, user safety, and legal and compliance requirements. Moderation is a difficult and subjective task, and depends a lot on the domain of the chatbot application. OpenChatKit provides tools to moderate the chatbot application and monitor input text prompts for any inappropriate content. The moderation model provides a good baseline that can be adapted and customized to various needs.

OpenChatKit has a 6-billion-parameter moderation model, GPT-JT-Moderation-6B, which can moderate the chatbot to limit the inputs to the moderated subjects. Although the model itself does have some moderation built in, TogetherComputer trained a GPT-JT-Moderation-6B model with Ontocord.ai’s OIG-moderation dataset. This model runs alongside the main chatbot to check that both the user input and the answer from the bot don’t contain inappropriate results. You can also use this to detect any out-of-domain questions to the chatbot and override when the question is not part of the chatbot’s domain. A minimal sketch of invoking the moderation model follows.
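Here is a minimal, hypothetical sketch of classifying an input with the moderation model using the transformers library. The prompt format is an assumption for illustration (the label set is the one listed later in this post); it is not necessarily the exact format OpenChatKit uses internally.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-Moderation-6B")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-Moderation-6B")

# Assumed prompt format: ask the model to label the user input
prompt = (
    "Possible labels: casual, needs caution, needs intervention, "
    "possibly needs caution, probably needs caution\n\n"
    "Input: How do I pick a lock?\n"
    "Label:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens, which should contain the label
label = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(label.strip())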
The following diagram illustrates the OpenChatKit workflow.

Extensible retrieval system use cases

Although we can apply this technique in various industries to build generative AI applications, for this post we discuss use cases in the financial industry. Retrieval augmented generation can be employed in financial research to automatically generate research reports on specific companies, industries, or financial products. By retrieving relevant information from internal knowledge bases, financial archives, news articles, and research papers, you can generate comprehensive reports that summarize key insights, financial metrics, market trends, and investment recommendations. You can use this solution to monitor and analyze financial news, market sentiment, and trends.

Solution overview

The following steps are involved in building a chatbot using OpenChatKit models and deploying them on SageMaker:

1. Download the chat base GPT-NeoXT-Chat-Base-20B model and package the model artifacts to be uploaded to Amazon Simple Storage Service (Amazon S3).
2. Use a SageMaker large model inference (LMI) container, configure the properties, and set up custom inference code to deploy this model.
3. Configure model parallel techniques and use inference optimization libraries in DJL Serving properties. We use Hugging Face Accelerate as the engine for DJL Serving. Additionally, we define tensor parallel configurations to partition the model.
4. Create a SageMaker model and endpoint configuration, and deploy the SageMaker endpoint.

You can follow along by running the notebook in the GitHub repo.

Download the OpenChatKit model

First, we download the OpenChatKit base model. We use huggingface_hub and snapshot_download to download the model, which downloads an entire repository at a given revision. Downloads are made concurrently to speed up the process. See the following code:

from huggingface_hub import snapshot_download
from pathlib import Path
import os

# - This will download the model into the current directory where ever the jupyter notebook is running
local_model_path = Path("./openchatkit")
local_model_path.mkdir(exist_ok=True)
model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
# Only download pytorch checkpoint files
allow_patterns = ["*.json", "*.pt", "*.bin", "*.txt", "*.model"]

# - Leverage the snapshot library to download the model since the model is stored in repository using LFS
chat_model_download_path = snapshot_download(
    repo_id=model_name,            # A user or an organization name and a repo name
    cache_dir=local_model_path,    # Path to the folder where cached files are stored.
    allow_patterns=allow_patterns, # only files matching at least one pattern are downloaded.
)

DJL Serving properties

You can use SageMaker LMI containers to host large generative AI models with custom inference code without providing your own inference code. This is extremely useful when there is no custom preprocessing of the input data or postprocessing of the model’s predictions. You can also deploy a model using custom inference code. In this post, we demonstrate how to deploy OpenChatKit models with custom inference code.

SageMaker expects the model artifacts in tar format. We create each OpenChatKit model with the following files: serving.properties and model.py. The serving.properties configuration file indicates to DJL Serving which model parallelization and inference optimization libraries you would like to use. The following is a list of settings we use in this configuration file:

openchatkit/serving.properties
engine = Python
option.tensor_parallel_degree = 4
option.s3url = {{s3url}}

This contains the following parameters:

engine – The engine for DJL to use.
option.entryPoint – The entry point Python file or module. This should align with the engine that is being used.
option.s3url – Set this to the URI of the S3 bucket that contains the model.
option.modelid – If you want to download the model from huggingface.co, you can set option.modelid to the model ID of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models). The container uses this model ID to download the corresponding model repository on huggingface.co.
option.tensor_parallel_degree – Set this to the number of GPU devices over which DeepSpeed needs to partition the model. This parameter also controls the number of workers per model that will be started up when DJL Serving runs. For example, if we have an 8 GPU machine and we are creating eight partitions, then we will have one worker per model to serve the requests. It’s necessary to tune the parallelism degree and identify the optimal value for a given model architecture and hardware platform.
OpenChatKit models

The OpenChatKit base model implementation has the following four files:

model.py – This file implements the handling logic for the main OpenChatKit GPT-NeoX model. It receives the inference input request, loads the model, loads the Wikipedia index, and serves the response. Refer to model.py (created as part of the notebook) for additional details. model.py uses the following key classes:

OpenChatKitService – This handles passing the data between the GPT-NeoX model, Faiss search, and the conversation object. WikipediaIndex and Conversation objects are initialized, and input chat conversations are sent to the index to search for relevant content from Wikipedia. This class also generates a unique ID for each invocation if one is not supplied, for the purpose of storing the prompts in Amazon DynamoDB.

ChatModel – This class loads the model and tokenizer and generates the response. It handles partitioning the model across multiple GPUs using tensor_parallel_degree, and configures the dtypes and device_map. The prompts are passed to the model to generate responses. A stopping criterion, StopWordsCriteria, is configured so that the generation only produces the bot response on inference.

ModerationModel – We use two moderation models in the ModerationModel class: the input model, which indicates to the chat model that the input is inappropriate so the inference result is overridden, and the output model, which overrides the inference result. We classify the input prompt and output response with the following possible labels:

casual
needs caution
needs intervention (this is flagged to be moderated by the model)
possibly needs caution
probably needs caution

wikipedia_prepare.py – This file handles downloading and preparing the Wikipedia index. In this post, we use a Wikipedia index provided on Hugging Face datasets. To search the Wikipedia documents for relevant text, the index needs to be downloaded from Hugging Face because it's not packaged elsewhere. The wikipedia_prepare.py file is responsible for handling the download when imported. Only one of the multiple processes running for inference clones the repository; the rest wait until the files are present in the local file system.

wikipedia.py – This file is used for searching the Wikipedia index for contextually relevant documents. The input query is tokenized, and embeddings are created using mean_pooling. We compute cosine similarity between the query embedding and the Wikipedia index to retrieve contextually relevant Wikipedia sentences. Refer to wikipedia.py for implementation details.

import numpy as np

# Function to create a sentence embedding using mean pooling over token embeddings
def mean_pooling(token_embeddings, mask):
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings

# Function to compute the cosine similarity between two sets of embeddings
def cos_sim_2d(x, y):
    norm_x = x / np.linalg.norm(x, axis=1, keepdims=True)
    norm_y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return np.matmul(norm_x, norm_y.T)

conversation.py – This file is used for storing and retrieving the conversation thread in DynamoDB for passing to the model and user. conversation.py is adapted from the open-source OpenChatKit repository. This file is responsible for defining the object that stores the conversation turns between the human and the model. With this, the model is able to retain a session for the conversation, allowing a user to refer to previous messages. Because SageMaker endpoint invocations are stateless, this conversation needs to be stored in a location external to the endpoint instances. On startup, the instance creates a DynamoDB table if it doesn't exist. All updates to the conversation are then stored in DynamoDB based on the session_id key, which is generated by the endpoint. Any invocation with a session ID retrieves the associated conversation string and updates it as required.
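To make the storage pattern concrete, the following is a minimal sketch of session-keyed reads and writes with boto3. The table name, key, and item layout here are illustrative assumptions; conversation.py in the OpenChatKit repository defines the actual schema:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("openchatkit-conversations")  # hypothetical table name

def save_conversation(session_id, turns):
    # Persist the full conversation thread under its session ID
    table.put_item(Item={"session_id": session_id, "conversation": json.dumps(turns)})

def load_conversation(session_id):
    # Retrieve the stored thread, or start fresh if the session is new
    item = table.get_item(Key={"session_id": session_id}).get("Item")
    return json.loads(item["conversation"]) if item else []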
Build an LMI inference container with custom dependencies

The index search uses Facebook's Faiss library to perform the similarity search. Because this isn't included in the base LMI image, the container needs to be adapted to install this library. The following code defines a Dockerfile that installs Faiss from source alongside the other libraries needed by the bot endpoint. We use the sm-docker utility to build and push the image to Amazon Elastic Container Registry (Amazon ECR) from Amazon SageMaker Studio. Refer to Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks for more details.

The DJL container doesn't have Conda installed, so Faiss needs to be cloned and compiled from source. To install Faiss, the dependencies for using the BLAS APIs and Python support need to be installed. After these packages are installed, Faiss is configured to use AVX2 and CUDA before being compiled with the Python extensions installed. pandas, fastparquet, boto3, and git-lfs are installed afterwards because these are required for downloading and reading the index files.

FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.21.0-deepspeed0.8.0-cu117
ARG FAISS_URL=https://github.com/facebookresearch/faiss.git
RUN apt-get update && apt-get install -y git-lfs wget cmake pkg-config build-essential apt-utils
RUN apt search openblas && apt-get install -y libopenblas-dev swig
RUN git clone $FAISS_URL && \
    cd faiss && \
    cmake -B build . -DFAISS_OPT_LEVEL=avx2 -DCMAKE_CUDA_ARCHITECTURES="86" && \
    make -C build -j faiss && \
    make -C build -j swigfaiss && \
    make -C build -j swigfaiss_avx2 && \
    (cd build/faiss/python && python -m pip install .)
RUN pip install pandas fastparquet boto3 && \
    git lfs install --skip-repo && \
    apt-get clean all

Create the model

Now that we have the Docker image in Amazon ECR, we can proceed with creating the SageMaker model object for the OpenChatKit models. We deploy the GPT-NeoXT-Chat-Base-20B chat model as well as input and output moderation models that use GPT-JT-Moderation-6B. Refer to create_model for more details.

from sagemaker.utils import name_from_base

chat_model_name = name_from_base(f"gpt-neoxt-chatbase-ds")
print(chat_model_name)

create_model_response = sm_client.create_model(
    ModelName=chat_model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": chat_inference_image_uri,
        "ModelDataUrl": s3_code_artifact,
    },
)
chat_model_arn = create_model_response["ModelArn"]
print(f"Created Model: {chat_model_arn}")

Configure the endpoint

Next, we define the endpoint configuration for the OpenChatKit models. We deploy the models using the ml.g5.12xlarge instance type. Refer to create_endpoint_config for more details.
chat_endpoint_config_name = f"{chat_model_name}-config"
chat_endpoint_name = f"{chat_model_name}-endpoint"

chat_endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=chat_endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": chat_model_name,
            "InstanceType": "ml.g5.12xlarge",
            "InitialInstanceCount": 1,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
        },
    ],
)

Deploy the endpoint

Finally, we create an endpoint using the model and endpoint configuration we defined in the previous steps:

chat_create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{chat_endpoint_name}",
    EndpointConfigName=chat_endpoint_config_name
)
print(f"Created Endpoint: {chat_create_endpoint_response['EndpointArn']}")

Run inference from OpenChatKit models

Now it's time to send inference requests to the model and get the responses. We pass the input text prompt and model parameters such as temperature, top_k, and max_new_tokens. The quality of the chatbot responses depends on the parameters specified, so it's recommended to benchmark model performance against these parameters and find the optimal setting for your use case. The input prompt is first sent to the input moderation model, and the output is sent to ChatModel to generate the responses. During this step, the model uses the Wikipedia index to retrieve contextually relevant sections, which are supplied as part of the prompt to elicit domain-specific responses from the model. Finally, the model response is sent to the output moderation model to check for classification, and then the responses are returned. See the following code:

def chat(prompt, session_id=None, **kwargs):
    if session_id:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                        "session_id": session_id,
                        "no_retrieval": True,
                    },
                }
            ),
            ContentType="application/json",
        )
    else:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                    },
                }
            ),
            ContentType="application/json",
        )
    response = chat_response_model["Body"].read().decode("utf8")
    return response

prompts = "What does a data engineer do?"
chat(prompts)
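Because the handler stores conversation state in DynamoDB keyed by session ID, a follow-up question can reuse the context of an earlier turn by passing the same session_id back in. The sketch below assumes the JSON returned by model.py includes the session ID generated on the first call; the exact response shape is defined by the handler in the notebook, so treat the field name as an assumption:

import json

first_response = json.loads(chat("What does a data engineer do?"))
session_id = first_response.get("session_id")  # assumption: the handler echoes the generated ID

# Passing the same session_id lets the model see the stored conversation thread
follow_up = chat("What tools do they typically use?", session_id=session_id)
print(follow_up)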
Clean up

Follow the instructions in the cleanup section of the notebook to delete the resources provisioned as part of this post and avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details about the cost of the inference instances.

Conclusion

In this post, we discussed the importance of open-source LLMs and how to deploy an OpenChatKit model on SageMaker to build next-generation chatbot applications. We discussed the various components of OpenChatKit models, the moderation models, and how to use an external knowledge source like Wikipedia for retrieval augmented generation (RAG) workflows. You can find step-by-step instructions in the GitHub notebook. Let us know about the amazing chatbot applications you're building. Cheers!

About the Authors

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Vikram Elango is a Sr. AI/ML Specialist Solutions Architect at AWS, based in Virginia, US. He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

Andrew Smith is a Cloud Support Engineer in the SageMaker, Vision & Other team at AWS, based in Sydney, Australia. He supports customers using many AI/ML services on AWS with expertise in working with Amazon SageMaker. Outside of work, he enjoys spending time with friends and family as well as learning about different technologies.
Buildigo.txt
From there, Buildigo has big ambitions for the future. “We aim to be the number one player in this market within 5 years,” says Huegli. “Using AWS, we can scale at speed while remaining focused on delivering what our customers want.”

Customer Stories / Software & Internet

Buildigo runs its customer-facing website, databases, data lake, and development pipeline on AWS. It uses AWS Step Functions, a low-code, visual workflow service for building distributed applications and automating IT and business processes, to help developers keep on top of complex application workflows.

Buildigo Gains Competitive Advantage with AWS Technology

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

Buildigo offers an easy way to link homeowners and renters with local craftspeople who can work on their houses and gardens. The Swiss startup's online platform facilitates communication about jobs, the delivery of quotes, and payment for completed work.

The flexibility of Buildigo's platform helped when it had to onboard hundreds of new tradespeople after the acquisition. The 200-year-old insurance company handles tens of thousands of damage claims each year and has contacts with hundreds of local traders. Using AWS, Buildigo could cope with managing this increase in service providers.

Buildigo recognizes that data is an asset. It analyzes customer usage to give employees insight into how to improve its services. For this, it uses AWS Glue, a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. It also uses Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, for its storage.

2022

Noticing Seasonal Peaks Through Smart Data Analysis

The team has noticed seasonal trends, such as rising demand for gardeners during the summer months and electricians during the winter. It also noticed a correlation between rising energy prices and increased demand for installing alternative heating systems such as heat pumps and solar installations.

About Buildigo

Buildigo matches homeowners and renters with the craftspeople they need to work on their properties. Based in Switzerland, its cloud-based system matches by skills and location and provides simple payment solutions. The company is owned by Swiss insurance company La Mobilière, which has 2 million customers.

The company is also able to control costs as it grows. “As a young company, expanding in a cost-effective way is essential to our success,” says Mathieu Meylan, chief technology officer at Buildigo. “Using AWS serverless technology, we only pay for the resources we use. This helps us to manage our overheads and invest any funds saved into mission-critical projects.”

Scales to accommodate rising customer demand, including a 4x rise in demand for solar panels and heat pumps in the last 12 months.

Generates data-driven insights to improve the customer experience.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Scaling to Accommodate Customer Growth Using AWS

Using AWS, Buildigo can instantly scale its compute resources to accommodate rising customer demand, so its users always experience a responsive service. This capability was vital to Buildigo during the COVID-19 pandemic because demand for its services fluctuated wildly. Demand for craftspeople disappeared at first but then increased rapidly as people spent more time at home. With the shift to remote working during lockdowns, Buildigo saw many requests for the creation of home offices.

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development.

Developing Buildigo for Mobile-First Customers

One recently launched feature is mobile device support. “Many of our customers, especially craftspeople, are on the move and prefer to access our services on their phones,” says Huegli. “We're now able to offer Buildigo on any device or operating system.”

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

As a rapidly growing young company, Buildigo needs to quickly scale its IT systems as customer demand increases while minimizing costs and maintenance tasks for its small team. Buildigo can quickly roll out new capabilities to improve its offerings as it learns more about its customers. The company's IT team has a short development cycle and typically deploys a new feature at least once a week using AWS CloudFormation and the AWS Cloud Development Kit (AWS CDK).

Buildigo's service differentiator is that it is not available to all traders. Instead, craftspeople can only join the service by invitation. It aims to provide the best-quality craftspeople and the most suitable individual for a job, as opposed to giving homeowners a long list of unvetted providers.

From Automating Damage Claims to Becoming Number One

The next steps for Buildigo include automating damage claims processing and providing insurance claimants with a quick way to get quotes for repair work. It is using Amazon API Gateway, a fully managed service for monitoring and securing APIs at scale, and AWS Lambda, a serverless, event-driven computing service, to run this automation without worrying about infrastructure.

To support quick development, the company uses AWS CloudFormation to model, provision, and manage its resources by treating infrastructure as code. It also uses Amazon CloudFront, a content delivery network service, which automatically adapts multimedia elements on Buildigo's website to different screen sizes and devices.

Buildigo Scales at Speed While Delivering for Customers with AWS

Buildigo offers an easy way to link homeowners and renters with local craftspeople who can work on their houses and gardens. The Swiss startup needs to quickly scale its IT systems as customer demand increases while minimizing costs and maintenance tasks for its small team. It built its platform on AWS, running its customer-facing website, databases, data lake, and development pipeline in the AWS cloud. This enables Buildigo to focus on developing its core application, provide a responsive service for customers, and release weekly feature updates to meet their changing needs.

Buildigo prioritized cutting-edge, cloud-based technologies from its inception.
The decision proved to be a competitive advantage and was a factor in Swiss insurance company La Mobilière acquiring the company in 2020. “Several companies offer similar services, but we wanted a company using state-of-the-art technology,” says Michael Huegli, managing director at Buildigo, who was previously head of home ecosystem at La Mobilière. “We knew that because Buildigo built its platform on AWS, it would be scalable, reliable, and support fast development times.”

These insights allow Buildigo to make sure it has the right tradespeople in place to meet customer demand at the right times. It also helps it to tailor marketing messages so they're relevant to customer interests, thus increasing job requests.

Buildigo built its platform on Amazon Web Services (AWS) from the start, so it could focus on developing its core services. It chose AWS for its scalability and managed services, which means the team can concentrate on developing new features. Using AWS, it provides a responsive service for customers and releases weekly feature updates to meet their changing needs.

Matched 3,000 job requests with hundreds of tradespeople.

The model of actively selecting tradespeople has proved popular because it offers more than simple directory services or user reviews, relying instead on real recommendations. Since relaunching in February 2021, the company has matched 3,000 job requests with hundreds of tradespeople.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.

Releases at least one new feature per week.
Building a medical image search platform on AWS _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Building a medical image search platform on AWS

by Gang Fu, Erhan Bas, and Ujjwal Ratan | on 14 OCT 2020 | in Amazon Comprehend Medical, Amazon OpenSearch Service, Amazon SageMaker, Analytics, Artificial Intelligence, AWS Amplify, AWS AppSync, AWS Fargate

Improving radiologist efficiency and preventing burnout is a primary goal for healthcare providers. A nationwide study published in Mayo Clinic Proceedings in 2015 showed radiologist burnout at a concerning 61% [1]. In addition, the report concludes that “burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014. More than half of US physicians are now experiencing professional burnout.” [2] As technologists, we're looking for ways to put new and innovative solutions in the hands of physicians to make them more efficient, reduce burnout, and improve care quality.

To reduce burnout and improve value-based care through data-driven decision-making, artificial intelligence (AI) can be used to unlock the information trapped in the vast amount of unstructured data (e.g., images, text, and voice) and create a clinically actionable knowledge base. AWS AI services can derive insights and relationships from free-form medical reports, automate the knowledge-sharing process, and eventually improve the personalized care experience.

In this post, we use a convolutional neural network (CNN) as a feature extractor to convert medical images into one-dimensional feature vectors of size 1024. We call this process medical image embedding. We then index the image feature vectors using the k-nearest neighbors (KNN) algorithm in Amazon OpenSearch Service to build a similarity-based image retrieval system. Additionally, we use the AWS managed natural language processing (NLP) service Amazon Comprehend Medical to perform named entity recognition (NER) against free-text clinical reports. The detected named entities are also linked to a medical ontology, ICD-10-CM, to enable simple aggregation and distribution analysis.

The presented solution also includes a front-end React web application and a backend GraphQL API managed by AWS Amplify and AWS AppSync, and authentication is handled by Amazon Cognito. After deploying this working solution, the end-users (healthcare providers) can search through a repository of unstructured free text and medical images, conduct analytical operations, and use it in medical training and clinical decision support. This eliminates the need to manually analyze all the images and reports to get to the most relevant ones, improving the provider's efficiency. The following graphic shows an example end result of the deployed application.

Dataset and architecture

We use the MIMIC CXR dataset to demonstrate how this working solution can benefit healthcare providers, in particular, radiologists. MIMIC CXR is a publicly available database of chest X-ray images in DICOM format and the associated radiology reports as free-text files [3]. The methods for data collection and the data structures in this dataset are well documented and very detailed [3]. Note that this is a restricted-access resource: to access the files, you must be a registered user and sign the data use agreement.

The following sections provide more details on the components of the architecture. The following diagram illustrates the solution architecture.
The architecture comprises offline data transformation and online query components.

In the offline data transformation step, unstructured data, including free text and image files, is converted into structured data. Electronic Health Record (EHR) radiology reports in free text are processed using Amazon Comprehend Medical, an NLP service that uses machine learning to extract relevant medical information from unstructured text, such as medical conditions, including clinical signs, diagnoses, and symptoms. The named entities are identified and mapped to structured vocabularies, such as the ICD-10 Clinical Modifications (CM) ontology. The unstructured text plus structured named entities are stored in Amazon ES to enable free-text search and term aggregations.

The medical images from the Picture Archiving and Communication System (PACS) are converted into vector representations using a pretrained deep learning model deployed in an Amazon Elastic Container Service (Amazon ECS) AWS Fargate cluster. A similar visual search on AWS has been published previously for online retail product image search. It used the Amazon SageMaker built-in KNN algorithm for similarity search, which supports different index types and distance metrics. We took advantage of KNN for Amazon ES to find the k closest images in a feature space, as demonstrated in the GitHub repo. KNN search is supported in Amazon ES version 7.4+.

The container running on the ECS Fargate cluster reads medical images in DICOM format, carries out image embedding using a pretrained model, and saves a PNG thumbnail in an Amazon Simple Storage Service (Amazon S3) bucket, which serves as the storage for the AWS Amplify React web application. It also parses out the DICOM image metadata and saves it in Amazon DynamoDB. The image vectors are saved in an OpenSearch cluster and are used for the KNN visual search, which is implemented in an AWS Lambda function.

The unstructured data from the EHR and PACS needs to be transferred to Amazon S3 to trigger the serverless data processing pipeline through the Lambda functions. You can achieve this data transfer by using AWS Storage Gateway or AWS DataSync, which is out of the scope of this post.

The online query API, including the GraphQL schemas and resolvers, was developed in AWS AppSync. The front-end web application was developed using the Amplify React framework, which can be deployed using the Amplify CLI. The detailed AWS CloudFormation templates and sample code are available in the GitHub repo.

Solution overview

To deploy the solution, you complete the following steps:

Deploy the Amplify React web application for online search.
Deploy the image-embedding container to AWS Fargate.
Deploy the data-processing pipeline and AWS AppSync API.

Deploying the Amplify React web application

The first step creates the Amplify React web application, as shown in the following diagram.

Install and configure the AWS Command Line Interface (AWS CLI).
Install the AWS Amplify CLI.
Clone the code base with stepwise instructions.
Go to your code base folder and initialize the Amplify app using the command amplify init. You must answer a series of questions, like the name of the Amplify app.
After this step, you have the following changes in your local and cloud environments:

A new folder named amplify is created in your local environment.
A file named aws-exports.js is created in the local src folder.
A new Amplify app is created on the AWS Cloud with the name provided during deployment (for example, medical-image-search).
A CloudFormation stack is created on the AWS Cloud with the prefix amplify-<AppName>.

You then create authentication and storage services for your Amplify app using the following commands:

amplify add auth
amplify add storage
amplify push

When the CloudFormation nested stacks for authentication and storage are successfully deployed, you can see that the new Amazon Cognito user pool (the authentication backend) and S3 bucket (the storage backend) are created. Save the Amazon Cognito user pool ID and S3 bucket name from the Outputs tab of the corresponding CloudFormation nested stack (you use these later). The following screenshot shows the location of the user pool ID on the Outputs tab. The following screenshot shows the location of the bucket name on the Outputs tab.

Deploying the image-embedding container to AWS Fargate

We use the Amazon SageMaker Inference Toolkit to serve the PyTorch inference model, which converts a medical image in DICOM format into a feature vector of size 1024. To create a container with all the dependencies, you can either use pre-built deep learning container images or derive a Dockerfile from the Amazon SageMaker PyTorch inference CPU container, like the one from the GitHub repo, in the container folder. You can build the Docker container and push it to Amazon ECR manually or by running the shell script build_and_push.sh. You use the repository image URI for the Docker container later to deploy the AWS Fargate cluster. The following screenshot shows the sagemaker-pytorch-inference repository on the Amazon ECR console.

We use Multi Model Server (MMS) to serve the inference endpoint. You need to install MMS with pip locally, use the Model archiver CLI to package model artifacts into a single model archive .mar file, and upload it to an S3 bucket to be served by a containerized inference endpoint. The model inference handler is defined in dicom_featurization_service.py in the MMS folder. If you have a domain-specific pretrained PyTorch model, place the model.pth file in the MMS folder; otherwise, the handler uses a pretrained DenseNet121 [4] for image processing. See the following code:

model_file_path = os.path.join(model_dir, "model.pth")
if os.path.isfile(model_file_path):
    model = torch.load(model_file_path)
else:
    model = models.densenet121(pretrained=True)
    model = model._modules.get('features')
    model.add_module("end_relu", nn.ReLU())
    model.add_module("end_globpool", nn.AdaptiveAvgPool2d((1, 1)))
    model.add_module("end_flatten", nn.Flatten())
model = model.to(self.device)
model.eval()

The intermediate result of this CNN-based model represents each image as a feature vector: the convolutional layers before the final classification layer are flattened into a vector representation.
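To make the embedding step concrete, the following standalone sketch extracts a 1024-dimensional vector from a single image using the same feature-extractor construction as the handler above. The preprocessing choices (resize and crop sizes, file name) are illustrative assumptions rather than the handler's actual pipeline, which operates on DICOM inputs:

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Same construction as the handler: DenseNet121 features + ReLU + global pooling + flatten
features = models.densenet121(pretrained=True)._modules.get('features')
extractor = nn.Sequential(features, nn.ReLU(), nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten())
extractor.eval()

# Illustrative preprocessing for a PNG/JPEG input
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image = preprocess(Image.open("chest_xray.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    embedding = extractor(image)
print(embedding.shape)  # torch.Size([1, 1024]): the vector indexed for KNN search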
Run the following command in the MMS folder to package up the model archive file:

model-archiver -f --model-name dicom_featurization_service --model-path ./ --handler dicom_featurization_service:handle --export-path ./

The preceding command generates a package file named dicom_featurization_service.mar. Create a new S3 bucket and upload the package file to that bucket with a public-read access control list (ACL). See the following code:

aws s3 cp ./dicom_featurization_service.mar s3://<S3bucketname>/ --acl public-read --profile <profilename>

You're now ready to deploy the image-embedding inference model to the AWS Fargate cluster using the CloudFormation template ecsfargate.yaml in the CloudFormationTemplates folder. To deploy using the AWS CLI, go to the CloudFormationTemplates folder and run the following command:

aws cloudformation deploy --capabilities CAPABILITY_IAM --template-file ./ecsfargate.yaml --stack-name <stackname> --parameter-overrides ImageUrl=<imageURI> InferenceModelS3Location=https://<S3bucketname>.s3.amazonaws.com/dicom_featurization_service.mar --profile <profilename>

You need to replace the following placeholders:

stackname – A unique name to refer to this CloudFormation stack
imageURI – The image URI for the MMS Docker container uploaded in Amazon ECR
S3bucketname – The MMS package in the S3 bucket, such as https://<S3bucketname>.s3.amazonaws.com/dicom_featurization_service.mar
profilename – Your AWS CLI profile name (default if not named)

Alternatively, you can choose Launch Stack in the us-east-1 or us-west-2 Region.

After the CloudFormation stack creation is complete, go to the stack Outputs tab on the AWS CloudFormation console and copy the InferenceAPIUrl for later deployment. See the following screenshot. You can delete this stack after the offline image-embedding jobs are finished to save costs, because it's not used for online queries.

Deploying the data-processing pipeline and AWS AppSync API

You deploy the image and free-text data-processing pipeline and AWS AppSync API backend through another CloudFormation template named AppSyncBackend.yaml in the CloudFormationTemplates folder, which creates the AWS resources for this solution. See the following solution architecture. To deploy this stack using the AWS CLI, go to the CloudFormationTemplates folder and run the following command:

aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM --template-file ./AppSyncBackend.yaml --stack-name <stackname> --parameter-overrides AuthorizationUserPool=<CFN_output_auth> PNGBucketName=<CFN_output_storage> InferenceEndpointURL=<inferenceAPIUrl> --profile <profilename>

Replace the following placeholders:

stackname – A unique name to refer to this CloudFormation stack
AuthorizationUserPool – The Amazon Cognito user pool
PNGBucketName – The Amazon S3 bucket name
InferenceEndpointURL – The inference API endpoint
profilename – The AWS CLI profile name (use default if not named)

Alternatively, you can choose Launch Stack in the us-east-1 or us-west-2 Region.

You can download the Lambda function for medical image processing, CMprocessLambdaFunction.py, and its dependency layer separately if you deploy this stack in AWS Regions other than us-east-1 and us-west-2. Because their file size exceeds the CloudFormation template limit, you need to upload them to your own S3 bucket (either create a new S3 bucket or use an existing one, like the aforementioned S3 bucket hosting the MMS model package file) and override the LambdaBucket mapping parameter with your own bucket name.

Save the AWS AppSync API URL and AWS Region from the settings on the AWS AppSync console.
Edit the src/aws-exports.js file in your local environment and replace the placeholders with those values:

const awsmobile = {
    "aws_appsync_graphqlEndpoint": "<AppSync API URL>",
    "aws_appsync_region": "<AWS AppSync Region>",
    "aws_appsync_authenticationType": "AMAZON_COGNITO_USER_POOLS"
};

After this stack is successfully deployed, you're ready to use this solution. If you have in-house EHR and PACS databases, you can set up AWS Storage Gateway to transfer data to the S3 bucket to trigger the transformation jobs.

Alternatively, you can use the public MIMIC CXR dataset: download the MIMIC CXR dataset from PhysioNet (to access the files, you must be a credentialed user and sign the data use agreement for the project), then upload the DICOM files to the S3 bucket mimic-cxr-dicom- and the free-text radiology reports to the S3 bucket mimic-cxr-report- . If everything works as expected, you should see new records created in the DynamoDB table medical-image-metadata and the Amazon ES domain medical-image-search.

You can test the Amplify React web application locally by running the following command:

npm install && npm start

Or you can publish the React web app by deploying it in Amazon S3 with an Amazon CloudFront distribution, by first entering the following code:

amplify hosting add

Then, enter the following code:

amplify publish

You can see the hosting endpoint for the Amplify React web application after deployment.

Conclusion

We have demonstrated how to deploy, index, and search medical images on AWS, with the offline data ingestion and online search query functions segregated. You can use AWS AI services to transform unstructured data, such as medical images and radiology reports, into structured data.

By default, the solution uses a general-purpose model trained on ImageNet to extract features from images. However, this default model may not be accurate enough for medical image features, because there are fundamental differences in appearance, size, and features between medical images in their raw form and natural images. Such differences make it hard to train commonly adopted triplet-based learning networks [5], where semantically relevant images or objects can be easily defined or ranked. To improve search relevancy, we performed an experiment using the same MIMIC CXR dataset and the derived diagnosis labels to train a weakly supervised disease classification network similar to Wang et al. [6]. We found this domain-specific pretrained model yielded qualitatively better visual search results, so it's recommended to bring your own model (BYOM) to this search platform for a real-world implementation.

The methods presented here enable you to perform indexing, searching, and aggregation against unstructured images in addition to free text. They set the stage for future work that can combine these features into a multimodal medical image search engine. Information retrieval from unstructured corpora of clinical notes and images is a time-consuming and tedious task. Our solution allows radiologists to become more efficient and helps them reduce potential burnout. To find the latest developments of this solution, check out medical image search on GitHub.

References

1. https://www.radiologybusiness.com/topics/leadership/radiologist-burnout-are-we-done-yet
2. https://www.mayoclinicproceedings.org/article/S0025-6196(15)00716-8/abstract#secsectitle0010
3. Johnson, Alistair EW, et al. “MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports.” Scientific Data 6, 2019.
4. Huang, Gao, et al. “Densely connected convolutional networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
5. Wang, Jiang, et al. “Learning fine-grained image similarity with deep ranking.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
6. Wang, Xiaosong, et al. “ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

About the Authors

Gang Fu is a Healthcare Solutions Architect at AWS. He holds a PhD in Pharmaceutical Science from the University of Mississippi and has over ten years of technology and biomedical research experience. He is passionate about technology and the impact it can make on healthcare.

Ujjwal Ratan is a Principal Machine Learning Specialist Solutions Architect in the Global Healthcare and Life Sciences team at Amazon Web Services. He works on the application of machine learning and deep learning to real-world industry problems like medical imaging, unstructured clinical text, genomics, precision medicine, clinical trials, and quality-of-care improvement. He has expertise in scaling machine learning and deep learning algorithms on the AWS Cloud for accelerated training and inference. In his free time, he enjoys listening to (and playing) music and taking unplanned road trips with his family.

Erhan Bas is a Senior Applied Scientist in the AWS Rekognition team, currently developing deep learning algorithms for computer vision applications. His expertise is in machine learning and large-scale image analysis techniques, especially in biomedical, life sciences, and industrial inspection technologies. He enjoys playing video games, drinking coffee, and traveling with his family.
Building a Scalable Interactive Learning Application for Kids Using AWS Services with Yellow Class _ Case Study _ AWS.txt
Security is of the utmost importance because the company's customers are families with children, so data is encrypted in transit and at rest. To further fortify security, Yellow Class performed an AWS Well-Architected review, a process that helps the company learn, measure, and build using architectural best practices. Yellow Class met with experts at AWS and did exercises to align its security practices with recommendations, such as protecting data integrity and managing user permissions. Another security safeguard for Yellow Class is increasing observability so that the company is the first to know about issues. Yellow Class stays informed with dashboard data and alarms using Amazon CloudWatch, which helps organizations observe and monitor AWS resources and applications in the cloud and on premises.

Working with solutions architects at AWS, Yellow Class optimized its application to improve performance. The company reduced the file and segment sizes of the videos on its application while improving video quality. Yellow Class also transcoded the raw video to an industry-standard format using AWS Elemental MediaConvert, which reduced the time it takes for videos to start playing from 4 seconds to less than 1 second. As a result, Yellow Class could keep kids engaged with the videos, reduce distribution and storage costs, and reach more users who live in low-bandwidth areas. To make its videos accessible from remote areas, Yellow Class also uses Amazon CloudFront, a content delivery network service for securely delivering content with low latency and high transfer speeds. Amazon CloudFront has coverage all over India using AWS edge locations and regional edge cache. “When we optimized our media pipeline using AWS services, core metrics, like average time on the application and conversion, increased,” says Jindal.

2023

Yellow Class launched the first deployment of its application using AWS services in September 2020, and the company has continued to evolve the application to support additional users and features. Although Yellow Class started with a small team of developers, it grew quickly and increased developer productivity with the support of AWS solutions architects. “At the start of a new project, subject matter experts from AWS scheduled a kickoff call with information about how to solve a particular problem using an AWS service, which helped save multiple weeks' worth of research and development,” says Jindal.

Using services like AWS Elemental MediaConvert, a file-based video transcoding service to prepare on-demand content for distribution or archiving, Yellow Class optimized transcoded video file sizes, reduced storage and distribution costs with an enhanced playout experience for users, and scaled to create a secure and reliable application for its customers. Using AWS services, Yellow Class could also experiment with new codecs and product features quickly.

Amazon ElastiCache is a fully managed, in-memory caching service supporting flexible, real-time use cases.

Yellow Class also keeps costs low by using AWS services rather than engaging with multiple vendors. When costs kept rising for a third-party provider that Yellow Class used to serve images on its website and application, the company transitioned to Amazon CloudFront and AWS Lambda, a serverless, event-driven compute service for running code without thinking about servers or clusters.
“Overnight, we were able to save $2,000 per month by replacing the entire third-party service with Amazon CloudFront and AWS Lambda,” says Jindal. “That's the power of AWS. You can replace many third-party tools because of the sheer scale and low cost of AWS services.”

Solution | Increasing Speed and Reliability While Reducing Costs by 50–60% Using AWS Services

AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. Create live stream content for broadcast and multi-screen delivery at scale.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

“When we optimized our media pipeline using AWS services, core metrics, like average time on the application and conversion, increased.”
Mohit Jindal, Head of Engineering, Yellow Class

Reduced distribution and storage costs by reducing file size
Lowered bandwidth required to view videos
$2,000 saved per month by replacing a third-party service
Weeks of research and development saved using AWS support
Reached users in low-bandwidth areas

Customer Stories / Software & Internet

Opportunity | Using AWS Services to Reduce Research and Development Time for Yellow Class

Yellow Class, an educational technology startup, wanted to develop an educational application for kids. Developing the infrastructure from scratch would require significant time and resources for its small team. To focus on customers instead of infrastructure, Yellow Class needed a cost-effective and scalable cloud solution, so the company looked to Amazon Web Services (AWS). Yellow Class engages young children across India with its practice-based learning application for subjects like math, English, and art. Its application provides exercises, information, and concept video streaming to supplement classroom learning.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.

Outcome | Reaching Additional Users without Impacting Performance Using Amazon ElastiCache

Yellow Class plans to continue expanding to reach more users. It also plans to improve the customer experience using artificial intelligence offerings from AWS, which can offer recommendations and make the application more adaptive. “If we had built our entire infrastructure to support video streaming, it would have taken ages and cost a lot in terms of time and people resources,” says Jindal. “Using AWS, we get access to services that are readily available right off the shelf, which has helped us accelerate development.”

As the company grows, Yellow Class can reach a larger audience using the scalability of AWS services. Yellow Class handles the increasing volume of users without impacting performance using Amazon ElastiCache, a fully managed in-memory caching service for unlocking microsecond latency and scale.

About Yellow Class

Based in India, Yellow Class provides an application for children aged 5–10 to learn subjects like math, English, and art through daily practice. Its application offers exercises, information, and concept video streaming to supplement classroom learning in schools.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Yellow Class sought a cloud provider that offered infrastructure and support so that it wouldn't have to pay steep costs upfront or hire additional employees. The company chose AWS services because it could get started with limited funding, gain access to a wide variety of services, and scale up as the company grew without worrying about infrastructure. In August 2020, Yellow Class started developing its application using AWS Activate, which offers tools, resources, content, and expert support to accelerate startup companies. “Using AWS, we can serve videos at scale across different geographies with good reliability, good performance, and a limited amount of latency,” says Mohit Jindal, head of engineering at Yellow Class. “We've also been able to provision different infrastructure to scale and manage traffic.”

Building a Scalable Interactive Learning Application for Kids Using AWS Services with Yellow Class

Learn how Yellow Class, a startup in the educational technology industry, reduced costs, optimized video performance, and scaled its application using AWS Elemental MediaConvert.

The company's optimization efforts reduced costs for Yellow Class and its customers with improved speed and reliability. By significantly reducing video file and segment sizes, Yellow Class reduced its distribution and storage costs by 50–60 percent using the Quality-Defined Variable Bitrate feature of AWS Elemental MediaConvert, which minimizes wasted bits to optimize output file sizes and maintains consistent video quality. Its customers save on expenses by consuming less bandwidth while viewing videos in the application. Yellow Class further reduces costs using features like automatic scaling from Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. By adding or removing compute capacity to meet the application's changing demand, Yellow Class scales to meet traffic needs while optimizing performance and cost.
Building a Scalable Machine Learning Model Monitoring System with DataRobot _ AWS Partner Network (APN) Blog.txt
AWS Partner Network (APN) Blog

Building a Scalable Machine Learning Model Monitoring System with DataRobot

by Shun Mao and Oleksandr Saienko | on 29 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, AWS Marketplace, AWS Partner Network, Customer Solutions, Technical How-to, Thought Leadership

By Shun Mao, Sr. Partner Solutions Architect – AWS
By Oleksandr Saienko, Solutions Consultant – DataRobot

From improving customer experiences to developing products, there is almost no area of the modern business untouched by artificial intelligence (AI) and machine learning (ML). With the rise of generative AI, companies continue to invest more in their AI/ML strategies. However, many organizations struggle to work across the AI lifecycle, especially the MLOps portion. They often find it hard to build an easy-to-manage and scalable machine learning monitoring system that works across different ML frameworks and environments.

Maintaining multiple ML models across different teams can be challenging, and having a centralized platform to monitor and manage them can significantly reduce operational overhead and improve efficiency. DataRobot is an open, complete AI lifecycle platform that leverages machine learning and has broad interoperability with Amazon Web Services (AWS) and end-to-end capabilities for ML experimentation, ML production, and MLOps. DataRobot is an AWS Partner and AWS Marketplace Seller that has achieved Competencies in Machine Learning, Data and Analytics, and Financial Services, and holds the Amazon SageMaker Service Ready specialization.

In this post, we discuss how models trained and deployed in Amazon SageMaker can be monitored in the DataRobot platform in a highly scalable fashion. Together with a previously published AWS blog post, this lets customers monitor both DataRobot-originated models and SageMaker-originated models under a single pane of glass in DataRobot.

Solution Overview

The following diagram illustrates a high-level architecture for monitoring Amazon SageMaker models in DataRobot.

Figure 1 – Solution architecture diagram.

In this diagram, users build their own custom SageMaker containers to train a machine learning model and host it as a SageMaker endpoint. The inference container has the DataRobot MLOps libraries installed and model monitoring code written, so it can collect inference metrics and statistics and send them to an Amazon Simple Queue Service (SQS) spooler channel. The information queued in SQS is pulled by a DataRobot MLOps agent running on Amazon Elastic Container Service (Amazon ECS). Finally, the agent sends the message to the DataRobot environment, and users can see the results in the DataRobot user interface (UI).

This architecture is serverless and highly scalable, and it can be used to monitor a large number of models simultaneously. To monitor multiple models, the inference containers send messages to the SQS queue, and the agent in ECS can be auto-scaled to accommodate the workload depending on the queue length, which reduces operational overhead and increases cost efficiency.

Prerequisites

This post assumes you have access to Amazon SageMaker and a DataRobot account. DataRobot comes with three deployment types: multi-tenant software as a service (SaaS), single-tenant SaaS, and virtual private cloud (VPC), depending on customers' requirements. If you don't have a DataRobot account, follow the instructions to create a trial SaaS account.
Create a DataRobot External Deployment to Monitor Models

To monitor models hosted in Amazon SageMaker, you need to create an external model deployment in DataRobot with the following steps. Each step generates some necessary information to be collected when deploying the endpoint in SageMaker.

1. Register the training data in the DataRobot AI Catalog.
2. Create a DataRobot model package.
3. Create a DataRobot external prediction environment.
4. Create a DataRobot deployment.

These steps can be done manually from the DataRobot UI, or you can use the DataRobot MLOps command line interface (CLI) tool. The example we use here is Iris flower species prediction.

To use the DataRobot MLOps CLI tool, you need to install datarobot-mlops-connected-client and set up the DataRobot API token (which you can find in your DataRobot UI) as environment variables:

! pip install datarobot-mlops-connected-client
%env MLOPS_SERVICE_URL=https://app.datarobot.com
%env MLOPS_API_TOKEN=YOUR_API_TOKEN

DataRobot stores statistics about predictions to monitor how the distributions and values of features change over time. As a baseline for comparing distributions of features, DataRobot uses the distribution of the training data, which needs to be uploaded to the DataRobot AI Catalog.

To register the training data in the DataRobot AI Catalog, you can import a dataset through the AI Catalog drop-down, which generates a dataset ID that is used later. DataRobot supports a wide variety of data sources, including some of the most popular AWS services, to allow easy data importing. For DataRobot multi-tenant SaaS, DataRobot uses an Amazon Simple Storage Service (Amazon S3) bucket, managed by DataRobot, for storing imported data. There is no direct access to this bucket; data is secured at rest using encryption, and all data transferred to and from S3 is encrypted in transit using TLS 1.2.

Figure 2 – DataRobot AI Catalog and data connectors.

After the training dataset is uploaded, you need to create a model package. In the UI, you can create one under Model Registry > Model Packages.

Figure 3 – DataRobot model package UI.

Or, you can run the following CLI code, which returns a MODEL_PACKAGE_ID:

import json

MODEL_PACKAGE_NAME = "SageMaker_MLOps_Demo"
prediction_type = "Multiclass"
model_target = "variety"
class_names = ["setosa", "versicolor", "virginica"]

model_config = {
    "name": MODEL_PACKAGE_NAME,
    "modelDescription": {
        "modelName": "Iris classification model",
        "description": "Classification on iris dataset"
    },
    "target": {
        "type": prediction_type,
        "name": model_target,
        "classNames": class_names
    }
}

with open("demo_model.json", "w") as model_json_file:
    model_json_file.write(json.dumps(model_config, indent=4))

!mlops-cli model create --json-config "demo_model.json" --training-dataset-id $TRAINING_DATASET_ID --json --quiet

Next, we need to create a custom external prediction environment. Details on using the UI can be found in the documentation. To use the CLI tool, run the following code, which generates a PREDICTION_ENVIRONMENT_ID:

demo_pe_config = {
    "name": "MLOps SageMaker Demo",
    "description": "Sagemaker DataRobot MLOps",
    "platform": "aws",
    "supportedModelFormats": ["externalModel"]
}

with open("demo_pe.json", "w") as demo_pe_file:
    demo_pe_file.write(json.dumps(demo_pe_config, indent=4))

!mlops-cli prediction-environment create --json-config "demo_pe.json" --json --quiet

Finally, you can create a DataRobot deployment associated with the SageMaker model. In the UI, this can be done under Model Registry > Model Package > Deployments.
Figure 4 – DataRobot model deployment UI.

To use the CLI, run the following code with the proper environment variables set; it produces a DEPLOYMENT_ID:

!mlops-cli model deploy --model-package-id $MODEL_PACKAGE_ID --prediction-environment-id $PREDICTION_ENVIRONMENT_ID --deployment-label "SageMaker_MLOps_Demo" --json --quiet

At this point, we have finished all the preparations needed inside DataRobot. Next, we train and host a SageMaker model in AWS.

Build a SageMaker Custom Container

To build an Amazon SageMaker custom container for training and inference, we leverage an existing SageMaker workshop on how to build a custom container; the code artifacts can be found in this GitHub repo. We keep the original structure of the code untouched, but make some key changes in the Dockerfile and predictor.py.

In the Dockerfile, we need to add one line to install the datarobot-mlops library, which is key for the SageMaker container to send monitoring data out. Add the following line right after the installation of Python in the original Dockerfile:

RUN pip --no-cache-dir install datarobot-mlops[aws]

For predictor.py, the main changes are in the ScoringService object, where we call the datarobot.mlops library to collect the metrics and send them to the SQS spool channel:

from datarobot.mlops.mlops import MLOps

class ScoringService(object):
    model = None
    mlops = None

    @classmethod
    def get_mlops(cls):
        """MLOPS: initialize the mlops library"""
        # Get environment parameters
        MLOPS_DEPLOYMENT_ID = os.environ.get('MLOPS_DEPLOYMENT_ID')
        MLOPS_MODEL_ID = os.environ.get('MLOPS_MODEL_ID')
        MLOPS_SQS_QUEUE = os.environ.get('MLOPS_SQS_QUEUE')
        if cls.mlops is None:
            cls.mlops = MLOps() \
                .set_async_reporting(False) \
                .set_deployment_id(MLOPS_DEPLOYMENT_ID) \
                .set_model_id(MLOPS_MODEL_ID) \
                .set_sqs_spooler(MLOPS_SQS_QUEUE) \
                .init()
        return cls.mlops

    @classmethod
    def get_model(cls):
        if cls.model is None:
            with open(os.path.join(model_path, "decision-tree-model.pkl"), "rb") as inp:
                cls.model = pickle.load(inp)
        return cls.model

    @classmethod
    def predict(cls, input):
        clf = cls.get_model()
        class_names = json.loads(os.environ.get('CLASS_NAMES'))
        start_time = time.time()
        predictions_array = clf.predict_proba(input.values)
        prediction = np.take(class_names, np.argmax(predictions_array, axis=1))
        execution_time = time.time() - start_time
        # Report service stats and prediction data to DataRobot via the SQS spooler
        ml_ops = cls.get_mlops()
        ml_ops.report_deployment_stats(predictions_array.shape[0], execution_time * 1000)
        ml_ops.report_predictions_data(
            features_df=input,
            predictions=predictions_array.tolist(),
            class_names=class_names,
            association_ids=None
        )
        return prediction

Here, we do not modify the training code, since the monitoring is mainly for inference. With the above changes ready, we build a Docker image and push it to Amazon Elastic Container Registry (Amazon ECR) with the name sagemaker-datarobot-decision-trees:latest.

Deploy Amazon SQS and ECS to Receive Inference Monitoring Info

The main infrastructure we need here is Amazon SQS and Amazon ECS on AWS Fargate. SQS serves as a spool channel to receive monitoring data from the SageMaker inference container, and it's highly scalable and flexible enough to adapt to a variety of scenarios. Create an SQS queue in your AWS account named aws-mlops-agent-demo by following the instructions, and leave everything else as default. The data in SQS is picked up by the DataRobot agent, deployed to ECS as a pre-built Docker image running on AWS Fargate.
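If you prefer to create the queue programmatically rather than through the console, one option is a single boto3 call. The queue name comes from this walkthrough; the Region is an assumption you should adjust to match your deployment:

import boto3

# Region is an assumption; match it to where the endpoint and ECS cluster run
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="aws-mlops-agent-demo")["QueueUrl"]
print(queue_url)  # this URL appears in mlops.agent.conf.yaml and in MLOPS_SQS_QUEUE below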
The steps to build the Docker image with the DataRobot MLOps agent are:

1. Download the DataRobot MLOps package from your DataRobot UI in the Developer Tools tab. Unzip the package and navigate to the folder datarobot_mlops_package-8.2.13/tools/agent_docker. As of this writing, the latest version of this package is 8.2.13.
2. Find the file mlops.agent.conf.yaml in the datarobot_mlops_package-8.2.13/tools/agent_docker/conf folder and edit the following sections:

# URL to the DataRobot MLOps service
mlopsUrl: https://app.datarobot.com

# DataRobot API token
apiToken: "your api token"

channelConfigs:
  # - type: "FS_SPOOL"
  #   details: {name: "filesystem", directory: "/tmp/ta"}
  - type: "SQS_SPOOL"
    details: {name: "sqs", queueUrl: "https://sqs.us-east-1.amazonaws.com/651505238245/aws-mlops-agent-demo", queueName: "aws-mlops-agent-demo"}
  # - type: "RABBITMQ_SPOOL"

As you can see, DataRobot supports several communication channels (spooler channels) for collecting model monitoring statistics; in this example, we use Amazon SQS. With the above edits in place, build the agent Docker image and push it to Amazon ECR. The steps for creating an Amazon ECS cluster with a Fargate deployment can be found in the documentation. When selecting a container image, choose the DataRobot agent image we just built. You can keep everything else as default.

Train the Model and Deploy it as a SageMaker Endpoint

Running the following code in an Amazon SageMaker Studio notebook trains a simple decision tree model in SageMaker:

import sagemaker as sage

sess = sage.Session()
role = sage.get_execution_role()  # execution role used for training and hosting
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = sess.boto_session.region_name
image = "{}.dkr.ecr.{}.amazonaws.com/sagemaker-datarobot-decision-trees:latest".format(account, region)

# Save your input data in the /data folder
WORK_DIRECTORY = "data"
prefix = "sagemaker-datarobot-demo"  # S3 key prefix for the training data (choose any name)
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)

tree = sage.estimator.Estimator(
    image,
    role,
    1,
    "ml.c4.2xlarge",
    output_path="s3://{}/output".format(sess.default_bucket()),
    sagemaker_session=sess,
)
tree.fit(data_location)

The following code deploys the model as an endpoint, passing the DataRobot MLOps information we generated in the previous steps, such as "MLOPS_DEPLOYMENT_ID", "MLOPS_MODEL_ID", "MLOPS_SQS_QUEUE", "prediction_type", and "CLASS_NAMES", to the inference container:

from sagemaker.serializers import CSVSerializer
import json

prediction_type = "Multiclass"
class_names = ["setosa", "versicolor", "virginica"]
MLOPS_SQS_QUEUE = "https://sqs.us-east-1.amazonaws.com/651505238245/aws-mlops-agent-demo"

# Pass all needed environment variables to the SageMaker deployment.
# deployment_id and model_id are the values returned by the earlier CLI steps.
env_vars = {
    "MLOPS_DEPLOYMENT_ID": deployment_id,
    "MLOPS_MODEL_ID": model_id,
    "MLOPS_SQS_QUEUE": MLOPS_SQS_QUEUE,
    "prediction_type": prediction_type,
    "CLASS_NAMES": json.dumps(class_names)}
print(env_vars)

predictor = tree.deploy(1, "ml.m4.xlarge", serializer=CSVSerializer(), env=env_vars)

This completes the deployment, and the endpoint is ready to serve inference requests. Once the endpoint is called, the monitoring information appears in the DataRobot UI. For more details on the code, please refer to this GitHub repo.

Explore DataRobot's Monitoring Capabilities

DataRobot offers a central hub for monitoring model health and accuracy for all deployed models with low latency. For each deployment, DataRobot provides a status banner with model-specific information.

Figure 5 – DataRobot model monitoring main UI.
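To generate monitoring data to explore in these dashboards, you can send a few test requests to the endpoint using the predictor object created earlier. This is a minimal sketch: the feature values are illustrative Iris measurements, and it assumes the container accepts one CSV row per request, as in the workshop code:

# Each request is scored by the container, and the prediction statistics
# are reported to DataRobot through the SQS spool channel.
samples = [
    [5.1, 3.5, 1.4, 0.2],
    [6.7, 3.0, 5.2, 2.3],
    [5.9, 3.0, 4.2, 1.5],
]
for row in samples:
    # CSVSerializer turns the list into a "5.1,3.5,1.4,0.2"-style payload
    print(predictor.predict(row))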
When you select a specific deployment, DataRobot opens an overview page for that deployment. The overview page provides a model- and environment-specific summary that describes the deployment, including the information you supplied when creating the deployment and any model replacement activity.

Figure 6 – DataRobot deployment options.

The Service Health tab tracks metrics about a deployment's ability to respond to prediction requests quickly and reliably. This helps identify bottlenecks and assess capacity, which is critical to proper provisioning. The tab also provides informational tiles and a chart to help monitor the activity level and health of the deployment.

Figure 7 – DataRobot model health monitoring.

As training and production data change over time, a deployed model loses predictive power, and the data surrounding the model is said to be drifting. By leveraging the training data and the prediction data added to your deployment, the Data Drift dashboard helps you analyze a model's performance after it has been deployed.

Figure 8 – DataRobot model drift monitoring.

There are several other deployment-related tabs (such as Accuracy, Challenger Models, Usage, Custom Metrics, and Segmented Analysis) that are out of scope for this post; you can find more details in the DataRobot documentation.

Conclusion

In this post, you learned how to build a highly scalable machine learning model monitoring system using DataRobot for Amazon SageMaker hosted models. DataRobot also has other features, such as automatic feature discovery, autoML, model deployment, and ML notebook development. To get started with DataRobot, visit the website to set up a personalized demo. DataRobot is also available in AWS Marketplace.

DataRobot – AWS Partner Spotlight

DataRobot is an AWS Partner and an open, complete AI lifecycle platform that leverages machine learning, with broad interoperability with AWS and end-to-end capabilities for ML experimentation, ML production, and MLOps.

Contact DataRobot | Partner Overview | AWS Marketplace | Case Studies
Building generative AI applications for your startup part 1 _ AWS Startups Blog.txt
AWS Startups Blog

Building generative AI applications for your startup, part 1

by Hrushikesh Gangur | on 05 JUL 2023 | in Amazon Machine Learning, Artificial Intelligence, AWS for Startups, Generative AI, Startup

This two-part blog series discusses how to build artificial intelligence (AI) systems that can generate new content. The first part gives an introduction, explains various approaches to building generative AI applications, and reviews their key components. The second part maps these components to the right AWS services, which can help startups quickly develop and launch generative AI products or solutions by avoiding time and money spent on undifferentiated heavy lifting.

Recent generative AI advancements are raising the bar on tools that can help startups rapidly build, scale, and innovate. This widespread adoption and democratization of machine learning (ML), specifically with the transformer neural network architecture, is an exciting inflection point in technology. With the right tools, startups can build new ideas or pivot their existing product to harness the benefits of generative AI for their customers.

Are you ready to build a generative AI application for your startup? Let's first review the concepts, core ideas, and common approaches to building generative AI applications.

What are generative AI applications?

Generative AI applications are programs based on a type of AI that can create new content and ideas, including conversations, stories, images, videos, code, and music. Like all AI applications, generative AI applications are powered by ML models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). An example of a generative AI application is Amazon CodeWhisperer, an AI coding companion that helps developers build applications faster and more securely by providing whole-line and full-function code suggestions in your integrated development environment (IDE). CodeWhisperer is trained on billions of lines of code, and can generate code suggestions ranging from snippets to full functions instantly, based on your comments and existing code. Startups can use AWS Activate credits with the CodeWhisperer Professional Tier, or start with the Individual Tier, which is free to use.

Figure 1: Amazon CodeWhisperer writes JavaScript code using comments as the prompt.

The rapidly developing generative AI landscape

There is rapid growth occurring in generative AI startups, and also within startups building tools to simplify the adoption of generative AI. Tools such as LangChain—an open source framework for developing applications powered by language models—are making generative AI more accessible to a wider range of organizations, which will lead to faster adoption. These tools also cover prompt engineering, augmenting services (such as embedding tools or vector databases), model monitoring, model quality measurement, guardrails, data annotation, reinforcement learning from human feedback (RLHF), and much more.

Figure 2: Components of the generative AI landscape.

An introduction to foundation models

At the core of every generative AI application or tool is a foundation model. Foundation models are a class of powerful machine learning models that are differentiated by their ability to be pre-trained on vast amounts of data in order to perform a wide range of downstream tasks, including text generation, summarization, information extraction, Q&A, and chatbots.
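To make this concrete, here is a minimal sketch of what calling a self-hosted text-generation FM can look like. It assumes you have already deployed an instruction-tuned model (for example, through Amazon SageMaker JumpStart) behind an endpoint; the endpoint name and the request/response payload shapes are illustrative and vary by model:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# The prompt describes the task; richer context generally yields better output.
payload = {"text_inputs": "Write a short product description for a travel app."}

response = runtime.invoke_endpoint(
    EndpointName="my-text-fm-endpoint",   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))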
In contrast, traditional ML models are trained to perform a specific task from a single dataset.

Figure 3: The difference between a traditional ML model and a foundation model.

So how does a foundation model "generate" the output that generative AI applications are known for? These capabilities result from learning patterns and relationships that allow the FM to predict the next item or items in a sequence, or to generate a new one:

In text-generating models, FMs output the next word, next phrase, or the answer to a question.
For image-generation models, FMs output an image based on the text. When an image is the input, FMs output the next relevant or upscaled image, animation, or 3D image.

In each case, the model starts with a seed vector derived from a "prompt". Prompts describe the task the model has to perform, and the quality and detail (also known as the "context") of the prompt determine the quality and relevance of the output.

Figure 4: A user inputs a prompt into a foundation model and it generates a response.

The simplest implementation of generative AI applications

The simplest approach for building a generative AI application is to use an instruction-tuned foundation model and provide a meaningful prompt ("prompt engineering") using zero-shot learning or few-shot learning. An instruction-tuned model (such as FLAN T5 XXL, Open-Llama, or Falcon 40B Instruct) uses its understanding of related tasks or concepts to generate predictions from prompts. Here are some prompt examples:

Zero-shot learning

Title: "University has new facility coming up"
Given the above title of an imaginary article, imagine the article.
RESPONSE: <a 500-word article>

Few-shot learning

This is awesome! // Positive
This is bad! // Negative
That movie was hopeless! // Negative
What a horrible show! //
RESPONSE: Negative

Startups, in particular, can benefit from the rapid deployment, minimal data needs, and cost optimization that result from using an instruction-tuned model. To learn more about considerations for selecting a foundation model, check out Selecting the right foundation model for your startup.

Customizing foundation models

Not all use cases can be met by prompt engineering on instruction-tuned models. Reasons for customizing a foundation model for your startup may include:

Adding a specific task (such as code generation) to the foundation model
Generating responses based on your company's proprietary dataset
Seeking responses generated from higher-quality datasets than those that pre-trained the model
Reducing "hallucination," which is output that is not factually correct or reasonable

There are three common techniques to customize a foundation model.

Instruction-based fine-tuning

This technique involves training the foundation model to complete a specific task, based on a task-specific labeled dataset consisting of pairs of prompts and responses. This customization technique benefits startups that want to customize their FM quickly and with a minimal dataset: it takes fewer data and fewer training steps, and the model weights update only for the task or the layers that you are fine-tuning.

Figure 5: The instruction-based fine-tuning workflow.

Domain adaptation (also known as "further pre-training")

This technique involves training the foundation model on a large "corpus"—a body of training materials—of domain-specific unlabeled data (known as "self-supervised learning").
This technique benefits use cases that include domain-specific jargon and statistical data that the existing foundation model hasn't seen before. For example, startups building a generative AI application to work with proprietary data in the financial domain may benefit from further pre-training the FM on custom vocabulary and from "tokenization," a process of breaking down text into smaller units called tokens. To achieve higher quality, some startups implement reinforcement learning from human feedback (RLHF) techniques in this process. On top of this, instruction-based fine-tuning is still required to tune the model for a specific task. Compared to the other techniques, this one is expensive and time-consuming, and the model weights update across all layers.

Figure 6: The domain adaptation workflow.

Information retrieval (also known as "retrieval-augmented generation" or "RAG")

This technique augments the foundation model with an information retrieval system based on dense vector representations. The closed-domain knowledge or proprietary data goes through a text-embedding process to generate a vector representation of the corpus, which is stored in a vector database. A semantic search result based on the user query becomes the context for the prompt, and the foundation model generates a response based on the prompt with that context. In this technique, the foundation model's weights are not updated.

Figure 7: The RAG workflow.

Components of a generative AI application

In the sections above, we reviewed the various approaches startups can take with foundation models when building generative AI applications. Now, let's review how these foundation models fit into the typical components required to build a generative AI application.

Figure 8: Components of a generative AI application.

At the core is a foundation model (center). In the simplest approach discussed earlier in this blog, this requires a web application or mobile app (top left) that accesses the foundation model through an API (top). This API is either a managed service from a model provider or self-hosted using an open source or proprietary model. In the self-hosting case, you may need a machine learning platform backed by accelerated computing instances to host the model.

In the RAG technique, you will also need a text embedding endpoint and a vector database (left and lower left). Both are available either as an API service or self-hosted. The text embedding endpoint is backed by a foundation model, and the choice of foundation model depends on the embedding logic and tokenization support. All of these components are connected using developer tools, which provide the framework for developing generative AI applications.

Lastly, when you choose the customization techniques of fine-tuning or further pre-training a foundation model (right), you need components that help with data pre-processing and annotation (top right), and an ML platform (bottom) to run the training job on specific accelerated computing instances. Some model providers support API-based fine-tuning, in which case you need not worry about the ML platform and underlying hardware. Regardless of the customization approach, you may also want to integrate components that provide monitoring, quality metrics, and security tools (lower right).

Conclusion

In this part of the blog, we covered the approaches and patterns startups can take to build a generative AI application and the key components involved.
In the next part, we will see how these components map to AWS services and walk through an example architecture.

Hrushikesh Gangur is a Principal Solutions Architect for AI/ML startups with expertise in both AWS machine learning and networking services. He helps startups building generative AI, autonomous vehicles, and ML platforms to run their business efficiently and effectively on AWS.
Calgary Airport Authority Enhances Passenger Services and Cybersecurity on the AWS Cloud _ Case Study _ AWS.txt
Calgary Airport Authority Enhances Passenger Services and Cybersecurity on the AWS Cloud

To mitigate passenger-service disruptions in the event of a cybersecurity incident, the Calgary Airport Authority migrated its on-premises data center to the AWS Cloud.

Opportunity | Strengthening Security at a Top-Tier Air Hub

Airports serve travelers all day, every day. To fulfill this mission, they need passenger services that are highly flexible, secure, and available without interruption. For the Calgary Airport Authority (the Authority), security has always been a top priority. In 2022, as post-COVID travel resumed, the Authority took the opportunity to plan ahead and prioritize an agile, highly secure, digital-first travel experience for its passengers. Moving workloads to the cloud became a key part of this road map.

The Calgary International Airport (YYC) is the fourth-busiest airport in Canada and home to Canada's second-largest airline, WestJet, and its global hub. YYC meets the needs of multiple airline partners and approximately 50,000 travelers a day. Until recently, it did this entirely with on-premises equipment.

As part of the Authority's efforts to grow and diversify its services, Ian Turner, general manager of IT enterprise architecture at YYC, recognized the opportunity to strengthen Calgary's critical infrastructure. The Authority honed in on how it could build new capacity to mitigate potential events without service interruptions. To do that, YYC decided to migrate its critical private workloads to the Amazon Web Services (AWS) Cloud and rearchitect its public websites for added security and scalability.

IT business groups from across YYC—airport systems, corporate services, cybersecurity, and technical infrastructure—met in July 2022 to perform an internal-needs assessment and determine requirements. The top priorities were cybersecurity, scalability, resiliency, and cost. YYC needed access to a wide range of leading-edge services. It also wanted ease of integration and hands-on assistance standing up the foundational cloud environment.

As a not-for-profit organization, cost was a key consideration for the Authority. Some of YYC's technical infrastructure was nearing the end of its lifespan, and the cost-efficient option was to retire it and move to the cloud, which offered evergreen infrastructure on a permanent basis. YYC undertook a final risk assessment, surveyed available cloud providers, and made its decision. For Turner, the choice was clear: "AWS was the best fit for us all around."

Solution | A Cloud Solution Built for Performance

The migration challenge was twofold. From a business perspective, the transition needed to be seamless. YYC manages a significant amount of data flowing in and out of its systems, and services needed to continue without disruption. From an IT perspective, YYC needed assurance that the technologies would perform in the same way in the cloud as they did on premises.

The biggest challenge was to ensure that data from public workloads and applications (such as flight information or parking bookings) moved efficiently and securely between the remaining on-premises applications and the AWS Cloud. The solution needed high availability, increased speed, better load distribution, a scalable database, and a high-performing file system.

To achieve this, YYC rearchitected with edge caching from Amazon CloudFront, a content delivery network (CDN) service, and deployed an application load balancer. For database scalability and automatic backup, it used Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. For file-system workloads, the company used Amazon FSx, which makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud.

To address the failover issue, YYC set up an AWS Cloud environment as a "third site" with an independent power source and redundant connections over multiple internet service providers. YYC used AWS Transit Gateway, a distributed service that applies a hub-and-spoke method to public clouds. The new environment and architecture have improved the airport's data-transmission capabilities while enhancing security. "The combination makes us feel comfortable that we're protected," says Turner.

For added security, YYC now uses AWS WAF, which helps to protect against common web exploits and bots, and AWS Shield to protect its on-premises workloads from distributed denial of service (DDoS) attacks. Amazon GuardDuty is a threat-detection service that continuously monitors businesses' AWS accounts and workloads for malicious activity and unauthorized behavior, and AWS Security Hub is a cloud security posture management service that centralizes and automates security checks and alerts.

For a project of its size, the deployment took place rapidly. YYC and AWS Professional Services, a global team of experts that can help businesses realize desired business outcomes when using the AWS Cloud, completed the solution in just 2.5 months, and it was implemented in December 2022.

Outcome | A Resilient Foundation

Today, with its services on AWS, YYC delivers faster, better passenger services. Because of the cloud's increased redundancy and resiliency, the risk of system downtime is negligible. As a key part of YYC's broader digital transformation journey, the migration has had major positive impacts on the organization overall. "AWS is a lot further ahead in its technology, offerings, and capabilities than other cloud providers," says Turner. YYC now has a solid foundation to spin up more cloud-based customer service improvements.

High Scalability at Low Cost

On AWS, YYC benefits from the elasticity of the cloud and the ability to scale its storage on demand. "We don't have to worry about running out of space," says Turner. With its on-premises servers, the airport needed new hardware when capacity limits were reached, and with that came added costs and procurement challenges. On AWS, notes Turner, "there's no procurement. There's no requisitions through the supply chain. It just does what it does."

The automatic capabilities of AWS tools have significantly reduced YYC technician workforce hours and maintenance costs. In parallel, IT teams at YYC gained valuable hands-on experience and knowledge from working closely with the AWS Professional Services team, and now administer many systems themselves.

The airport is seeing other business advantages too. Monitoring and tagging within the AWS Cloud environment indicate where resources are being used, helping business groups manage costs and primary key performance indicators (KPIs). Cloud services have also reduced the need for onsite equipment and cooling. Those gains are reducing YYC's overall carbon footprint.

As the airport moves more services, technologies, and applications to the cloud, it plans to use additional AWS features to innovate service delivery across more customer service areas. Turner is confident in choosing the AWS Cloud: "I would recommend AWS over other providers based on its offerings and capabilities alone."

"The experience we had working with AWS, from the presales calls to sign-off at the end—you couldn't ask for anything better." — Ian Turner, General Manager of IT Enterprise Architecture, Calgary Airport Authority

Benefits: increased security, increased resiliency, and a solution built and completed in 2.5 months.

About Calgary Airport Authority

The Calgary Airport Authority (the Authority) is a not-for-profit, non-share capital corporation, incorporated under the Province of Alberta's Regional Airports Authorities Act (Alberta). Since 1992, it has been responsible for the operation, management, and development of YYC Calgary International Airport (YYC) and, since 1997, Springbank Airport (YBW), under a long-term lease from the Government of Canada.

AWS Services Used

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
AWS Transit Gateway connects your Amazon Virtual Private Clouds (VPCs) and on-premises networks through a central hub. This connection simplifies your network and puts an end to complex peering relationships.
AWS WAF helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.
The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud.
CalvertHealth-case-study.txt
CalvertHealth Improves Electronic Health Records System Resilience and Shortens Recovery Time Using AWS Elastic Disaster Recovery

Disaster recovery, the ability to restore services quickly after any sort of interruption, is important for any organization. But for healthcare organizations, it's critical. An organization's resilience when it comes to disaster recovery is measured by two metrics. The first is the recovery time objective (RTO), which measures the maximum allowable time between interruption and recovery of service. The second is the recovery point objective (RPO), which measures the amount of data that can be lost within a period before significant harm occurs.

As a stand-alone hospital in rural Maryland, CalvertHealth found itself in a trifecta of risk in terms of its RTO. CalvertHealth depends on technology, but because of its rural location, it has no nearby organizations to rely on for backup should disaster strike. At the same time, the hospital's mid-Atlantic location puts it in the path of hurricanes and other natural events. Its trove of valuable patient data increases the risk of ransomware and other cyberattacks. On average, such disasters can cost a midsize hospital nearly $5,600 per minute or over $300,000 per hour, according to a recent Gartner report—a serious and costly risk. "The goal of almost every healthcare organization that has sensitive data is to bring the system back up as quickly as it can to decrease the amount of downtime," says Melissa Hall, chief information officer of CalvertHealth.

Contemporary patient care relies on information exchange with other organizations. CalvertHealth regularly communicates with the Maryland Health Information Exchange and the state about patients' health histories, current prescriptions, and opioid usage, for instance. If the system is down, CalvertHealth can't make appropriate decisions about patient care. This not only potentially harms patients but can also cause damage to the organization's reputation.

CalvertHealth had been using the MEDITECH EHR system to provide access to patient data. Data backups were done on premises in a corporate data center on servers that used third-party software. The RTO for CalvertHealth's EHR system was 48 to 72 hours—an unacceptable amount of time.

Improving CalvertHealth's resilience would help the hospital serve patients more reliably. So, when Amazon Web Services (AWS) approached CalvertHealth with a proposal that would shorten the RTO and RPO for its primary electronic health records (EHR) system, the organization gladly accepted. By using AWS's robust backup and disaster recovery capabilities, CalvertHealth could drastically decrease its RTO and RPO.

Using AWS Solutions for Speedier System Recovery

CalvertHealth's consultant HealthCare Triangle, a subsidiary of AWS Partner SecureKloud, has MEDITECH expertise and recommended that CalvertHealth migrate its EHR recovery site to the AWS cloud. Doing so not only added resilience to CalvertHealth's EHR but also kept the organization's data in a usable interface. In addition, migrating its application recovery system to AWS meant that CalvertHealth would not have to configure and manage all the servers manually in its corporate data center in the event of a disaster, hastening recovery time.

To accomplish its goal, CalvertHealth deployed several solutions, including AWS Elastic Disaster Recovery (CloudEndure Disaster Recovery), which minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. It also used AWS Backup, which organizations can use to centralize and automate data protection across AWS services and hybrid workloads. A CalvertHealth network engineer worked alongside HealthCare Triangle and the AWS team to deploy AWS Elastic Disaster Recovery and AWS Backup on almost 140 servers. They pulled the information through a VPN setup that helped them replicate the data in the AWS environment. The changes reduced CalvertHealth's RTO from 72 hours to under 2 hours—a 97 percent improvement.

Achieving a Secure, Cost-Effective Solution

The new EHR backup and recovery solution has meant an improvement in CalvertHealth's security and compliance. During a recent third-party security audit, the substantial reduction in RTO improved CalvertHealth's overall security rating. The organization also shared this information with its cybersecurity insurance vendor. "They were impressed that a little stand-alone hospital has been able to achieve such a short RTO," Hall says. "That was a big win for us."

Finally, CalvertHealth could get the system up and running with no up-front costs. The AWS team worked with HealthCare Triangle to minimize costs and invest in the project as part of an AWS initiative to help hospitals. The minimal up-front costs meant that Hall didn't need to take it to the board or present it as a cost to anyone other than her supervisor. "We could just do the right thing rather than worrying about how to do it," Hall says.

Improving Resilience Using AWS

Implementing the AWS solutions to shorten the RTO has improved the resilience of the CalvertHealth system, a relief for administrators and staff alike. Because they need to access EHR quickly, CalvertHealth nurses and clinicians benefit from the fact that the new system looks the same, and staff members work faster with a system that looks familiar. "The fact that it's hybrid and in the AWS environment means that staff members don't have to monitor the connection as much as they previously had to," Hall says. "That's a plus because it lets us focus on more important things. We can trust that we have others who are watching the system to keep it working the way it should."

"Using solutions from AWS and HealthCare Triangle, we've achieved something that not a lot of rural stand-alone hospitals can do," says Hall. "It takes stress off me and the other executives knowing that we have AWS tools in place that can help us get things back up and running as soon as we possibly can. That's a win-win for us."

Benefits

Reduced disaster recovery time by 97%, from 72 hours to under 2 hours
Created resilience in the electronic health records system
Improved staff morale and confidence in the system
Reduced potential revenue losses caused by reputation damage

About CalvertHealth

Based in Calvert County, Maryland, CalvertHealth is a not-for-profit, community-owned hospital with over 200 active and consulting physicians on staff. It provides primary care and other services in its offices in several other locations around the county.

AWS Services Used

AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
AWS Backup is a cost-effective, fully managed, policy-based service that centralizes and automates data protection across AWS services and hybrid workloads, further simplifying data protection at scale.

To learn more, visit https://aws.amazon.com/disaster-recovery/.
Capital One Saves Developer Time and Reduces Costs Going Serverless Using AWS Lambda and Amazon ECS _ Case Study _ AWS.txt
Capital One Saves Developer Time and Reduces Costs by Going Serverless on AWS

Overview

Capital One Financial Corporation (Capital One) exited its last legacy, on-premises data centers in 2020 to go all in on the cloud. Capital One has strict timelines for code patches, machine refreshes, and bug remediation, and its engineers, who would prefer to be building applications, were spending significant time working on infrastructure. Capital One improved its cost efficiency, speed to market, and developer quality of life by using Amazon Web Services (AWS) offerings such as AWS Lambda, a serverless, event-driven compute service that businesses use to run code for virtually any type of application or backend service without provisioning or managing servers. The company is now achieving significant time savings for its developers in applications that are migrated to serverless compute, while remaining well governed.

Opportunity | Using AWS Lambda to Save Developer Time for Capital One

Many Capital One applications run once a day, and others run once a month, which makes leaving instances up all the time inefficient. "When we migrate to AWS Lambda, our teams don't have to worry about whether to scale instances up or down," says George Mao, senior distinguished engineer at Capital One. "The same batch process that runs 1 or 100 times a day runs on AWS Lambda." Developers can spend their time and effort making better products for customers rather than worrying about managing or operating the infrastructure. The company is building better applications and delivering more features with a quicker time to market. "All the things that make the cloud great are enhanced by going serverless, which is a win-win for us and our customers," says Mao.

Another benefit of going serverless is improved cost efficiency. By migrating to AWS Lambda, Capital One hopes to improve its costs, in part by saving developer time. "If we can save developers' time by reducing infrastructure-related work, that savings is enormous," says Mao. The other cost-efficiency factor is AWS Lambda's pay-as-you-use model: the company pays for compute at per-millisecond intervals. "The cost efficiency is awesome. It changes the way that we think about building applications," says Mao. "Using AWS Lambda, our engineers learn to build small and think about performance." One application achieved 90 percent cost savings by migrating to AWS Lambda.

Outcome | Continuing to Modernize and Improve Using AWS

Capital One is still in the process of modernizing its applications, and going serverless is not where this modernization will end. The company plans to become as cloud native as possible and is potentially looking to shift its extract, transform, and load jobs to AWS Lambda. Capital One recently adopted AWS Glue, a serverless data integration service used to discover, prepare, move, and integrate data from multiple sources, and at the same time evaluated other new serverless options, such as AWS Step Functions, which provides visual workflows for distributed applications, alongside AWS Lambda.
The company is now achieving significant time savings for its developers in applications that are migrated to serverless compute while remaining well governed. Ρусский Any organization that’s committed to its technical transformation should work alongside the AWS team to go in the right direction. عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. This strategy has resulted in a large shift in the developer mindset and tooling process for the company—migrating away from a monolithic infrastructure and toward the building of smaller applications with higher-quality performance. During this digital transformation, the company has benefited from directly communicating with AWS service specialists for near-real-time support when it has production outages and service issues. “We treat the AWS account team as an extension of our internal architecture teams and communicate with the team daily to handle service issues and get updates quickly,” says Mao. Learn how Capital One in financial services saved developer time and reduced cost by going serverless using AWS Lambda and Amazon ECS. Learn more » operational efficency   Up to 90% With its applications running in various states of monolithic and modern architectures, Capital One’s default strategy is to migrate its applications to serverless compute, where it can reduce the overall operational burden for its engineering teams and increase operational efficiency. This migration helped the company ease the challenges that are associated with legacy architectures by reducing idle times and improving local debugging. For use cases when AWS Lambda cannot be used, the company uses Amazon Elastic Container Service (Amazon ECS)—which runs highly secure, reliable, and scalable containers—powered by AWS Fargate, a serverless, pay-as-you-go compute engine that is used to build applications without managing servers. AWS Fargate Customer Stories / Financial Services The company’s engineers use a central pipeline that has been upgraded to adapt to serverless computing to release code. To reduce the idle time that its engineers have to spend waiting for releases to go through this pipeline, Capital One uses the AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless applications that provides shorthand syntax to express functions, APIs, databases, and event source mappings. By using AWS SAM, its engineers can run as much as possible locally before touching the release pipeline. Capital One has adapted its tooling and release process to deploy tens of thousands of AWS Lambda functions. “We can get what we need out of standard tooling like AWS SAM,” says Mao. new applications in days Türkçe English AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications.. Saved About Capital One Financial Corporation Built Deutsch AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. 
By migrating its applications to serverless services like AWS Lambda, Capital One has achieved significant time savings across different developer teams. This saved time translates directly into an improved speed to market. Migrating its old applications to AWS Lambda could take weeks to months, depending on the underlying architecture of the application; for new applications, some teams at the company have put together a working application in days.

Capital One is one of the top 10 largest banks in the United States, providing banking and credit card services to its customers since 1994. The technical organization within the company has more than 12,000 people, the majority of whom are engineers. In 2020, the company closed its last physical data center and migrated everything to AWS. "Since then, we've made the decision to go serverless whenever possible," says Mao. "Most of our technical organization is focused on modernizing our entire offering of applications." As of the end of 2022, more than a third of Capital One's apps use serverless technology.

Benefits

Up to 90% cost savings for applications
Improved operational efficiency
Saved significant time for developers
Built new applications in days

About Capital One Financial Corporation

Capital One Financial Corporation is one of the top 10 largest banks in the United States and has been providing banking and credit card services since its founding in 1994.
Capture public health insights more quickly with no-code machine learning using Amazon SageMaker Canvas _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Capture public health insights more quickly with no-code machine learning using Amazon SageMaker Canvas

by Henrik Balle and Dan Sinnreich | on 28 JUN 2023 | in Amazon SageMaker, Amazon SageMaker Canvas, Artificial Intelligence, Intermediate (200)

Public health organizations have a wealth of data about different types of diseases, health trends, and risk factors. Their staff has long used statistical models and regression analyses to make important decisions such as targeting populations with the highest risk factors for a disease with therapeutics, or forecasting the progression of concerning outbreaks.

When public health threats emerge, data velocity increases, incoming datasets can grow larger, and data management becomes more challenging. This makes it more difficult to analyze data holistically and capture insights from it. And when time is of the essence, a lack of speed and agility in analyzing data and drawing insights from it is a key blocker to forming rapid and robust health responses.

Typical questions public health organizations face during times of stress include:

Will there be sufficient therapeutics in a certain location?
What risk factors are driving health outcomes?
Which populations have a higher risk of reinfection?

Because answering these questions requires understanding complex relationships between many different factors—often changing and dynamic—one powerful tool we have at our disposal is machine learning (ML), which can be deployed to analyze, predict, and solve these complex quantitative problems. We have increasingly seen ML applied to address difficult health-related problems such as classifying brain tumors with image analysis and predicting the need for mental health care in order to deploy early intervention programs.

But what happens if public health organizations are in short supply of the skills required to apply ML to these questions? The application of ML to public health problems is impeded, and public health organizations lose the ability to apply powerful quantitative tools to address their challenges.

So how do we remove these bottlenecks? The answer is to democratize ML and allow a larger number of health professionals with deep domain expertise to use it and apply it to the questions they want to solve. Amazon SageMaker Canvas is a no-code ML tool that empowers public health professionals such as epidemiologists, informaticians, and bio-statisticians to apply ML to their questions, without requiring a data science background or ML expertise. They can spend their time on the data, apply their domain expertise, quickly test hypotheses, and quantify insights. Canvas helps make public health more equitable by democratizing ML, allowing health experts to evaluate large datasets and empowering them with advanced insights using ML.

In this post, we show how public health experts can forecast on-hand demand for a certain therapeutic for the next 30 days using Canvas. Canvas provides you with a visual interface that allows you to generate accurate ML predictions on your own, without requiring any ML experience or having to write a single line of code.

Solution overview

Let's say we are working on data that we collected from states across the US. We may form a hypothesis that a certain municipality or location doesn't have enough therapeutics in the coming weeks. How can we test this quickly and with a high degree of accuracy?
For this post, we use a publicly available dataset from the US Department of Health and Human Services, which contains state-aggregated time series data related to COVID-19, including hospital utilization, availability of certain therapeutics, and much more. The dataset (COVID-19 Reported Patient Impact and Hospital Capacity by State Timeseries (RAW)) is downloadable from healthdata.gov, and has 135 columns and over 60,000 rows. The dataset is updated periodically.

In the following sections, we demonstrate how to perform exploratory data analysis and preparation, build the ML forecasting model, and generate predictions using Canvas.

Perform exploratory data analysis and preparation

When doing a time series forecast in Canvas, we need to reduce the number of features or columns according to the service quotas. Initially, we reduce the number of columns to the 12 that are likely to be the most relevant. For example, we dropped the age-specific columns because we're looking to forecast total demand. We also dropped columns whose data was similar to other columns we kept. In future iterations, it is reasonable to experiment with retaining other columns and using feature explainability in Canvas to quantify the importance of these features and decide which to keep. We also rename the state column to location.

Looking at the dataset, we also decide to remove all the rows for 2020, because there were limited therapeutics available at that time. This reduces the noise and improves the quality of the data for the ML model to learn from.

Reducing the number of columns can be done in different ways. You can edit the dataset in a spreadsheet, or directly inside Canvas using the user interface. You can import data into Canvas from various sources, including local files from your computer, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Athena, Snowflake (see Prepare training and validation dataset for facies classification using Snowflake integration and train using Amazon SageMaker Canvas), and over 40 additional data sources.

After our data has been imported, we can explore and visualize it to get additional insights, such as with scatterplots or bar charts. We also look at the correlation between different features to ensure that we have selected what we think are the best ones. The following screenshot shows an example visualization.

Build the ML forecasting model

Now we're ready to create our model, which we can do with just a few clicks. We choose the column identifying on-hand therapeutics as our target. Canvas automatically identifies our problem as a time series forecast based on the target column we just selected, and we can configure the parameters needed.

We configure the item_id, the unique identifier, as location, because our dataset is provided by location (US states). Because we're creating a time series forecast, we need to select a time stamp, which is date in our dataset. Finally, we specify how many days into the future we want to forecast (for this example, we choose 30 days). Canvas also offers the ability to include a holiday schedule to improve accuracy; in this case, we use US holidays because this is a US-based dataset.

With Canvas, you can get insights from your data before you build a model by choosing Preview model. This saves you time and cost by not building a model if the results are unlikely to be satisfactory.
By previewing our model, we realize that the impact of some columns is low, meaning they are expected to contribute little value to the model. We remove columns by deselecting them in Canvas (red arrows in the following screenshot) and see an improvement in an estimated quality metric (green arrow).

Moving on to building our model, we have two options, Quick build and Standard build. Quick build produces a trained model in less than 20 minutes, prioritizing speed over accuracy. This is great for experimentation, and produces a more thorough model than the preview model. Standard build produces a trained model in under 4 hours, prioritizing accuracy over latency, iterating through a number of model configurations to automatically select the best model.

First, we experiment with Quick build to validate our model preview. Then, because we're happy with the model, we choose Standard build to have Canvas help build the best possible model for our dataset. If the Quick build model had produced unsatisfactory results, we would go back and adjust the input data to capture a higher level of accuracy, for instance by adding or removing columns or rows in our original dataset. The Quick build model supports rapid experimentation without having to rely on scarce data science resources or wait for a full model to be completed.

Generate predictions

Now that the model has been built, we can predict the availability of therapeutics by location. Let's look at what our estimated on-hand inventory looks like for the next 30 days, in this case for Washington, DC.

Canvas outputs probabilistic forecasts for therapeutic demand, allowing us to understand both the median value as well as upper and lower bounds. In the following screenshot, you can see the tail end of the historical data (the data from the original dataset). You can then see three new lines: the median (50th quantile) forecast in purple, the lower bound (10th quantile) in light blue, and the upper bound (90th quantile) in dark blue.

Examining upper and lower bounds provides insight into the probability distribution of the forecast and allows us to make informed decisions about desired levels of local inventory for this therapeutic. We can add this insight to other data (for example, disease progression forecasts, or therapeutic efficacy and uptake) to make informed decisions about future orders and inventory levels.

Conclusion

No-code ML tools empower public health experts to quickly and effectively apply ML to public health threats. This democratization of ML makes public health organizations more agile and more efficient in their mission of protecting public health. Ad hoc analyses that can identify important trends or inflection points in public health concerns can now be performed directly by specialists, without having to compete for limited ML expert resources, which slows down response times and decision-making.

In this post, we showed how someone without any knowledge of ML can use Canvas to forecast the on-hand inventory of a certain therapeutic. This analysis can be performed by any analyst in the field, through the power of cloud technologies and no-code ML. Doing so distributes capabilities broadly and allows public health agencies to be more responsive, and to more efficiently use centralized and field office resources to deliver better public health outcomes.

What are some of the questions you might be asking, and how might low-code/no-code tools be able to help you answer them?
If you are interested in learning more about Canvas, refer to Amazon SageMaker Canvas and start applying ML to your own quantitative health questions.

About the authors

Henrik Balle is a Sr. Solutions Architect at AWS supporting the US Public Sector. He works closely with customers on a range of topics from machine learning to security and governance at scale. In his spare time, he loves road biking and motorcycling, or you might find him working on yet another home improvement project.

Dan Sinnreich leads go-to-market product management for Amazon SageMaker Canvas and Amazon Forecast. He is focused on democratizing low-code/no-code machine learning and applying it to improve business outcomes. Prior to AWS, Dan built enterprise SaaS platforms and time-series risk models used by institutional investors to manage risk and construct portfolios. Outside of work, he can be found playing hockey, scuba diving, traveling, and reading science fiction.
CaratLane Case Study - Amazon Web Services (AWS).txt
CaratLane Scales To Meet Seasonal Peaks and Deliver Seamless Customer Experience With AWS Français About CaratLane To learn more, visit https://aws.amazon.com/retail/   Español 日本語 Contact Sales CaratLane is a leading player in jewelry ecommerce in India and is one of the country’s largest omnichannel jewelry retailer with over 140 physical stores in more than 40 cities. Over the years, CaratLane has focused on delivering a great unified customer experience across its digital and physical channels. As a result, over 70 percent of its sales today originates on its website and mobile app and concludes in the store. It has millions of active users every month and hundreds of thousands of sessions daily.  Get Started 한국어 Amazon RDS To Learn More Working with AWS gives us the confidence and peace of mind that our cloud infrastructure will scale to meet seasonal demand spikes. In addition, AWS is constantly introducing ways to optimize operational costs. This gives our teams the freedom to explore new innovations that improve the customer experience and differentiate us from the competition.” CaratLane uses Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS) to automatically scale its capacity and instances based on load patterns, traffic patterns, and seasonal demands without over-provisioning or experiencing any downtimes. CaratLane also uses Amazon ElastiCache for Redis to reduce the latency of its applications and maintain high performance during peak seasonal loads.  Amazon EC2 Reduced the cost of server maintenance by up to 20% AWS Services Used A scalable, secure infrastructure 中文 (繁體) Bahasa Indonesia Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Ρусский عربي 中文 (简体) CaratLane is also building a data lake using AWS Glue, Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), Amazon Elastic MapReduce (Amazon EMR), and Amazon Redshift. Once completed, the data lake will consolidate disparate data sources onto a single location, allowing developers and business users to tap a larger pool of data to generate deep customer insights and personalized user interventions.  CaratLane has been an early adopter of ML to improve customer experience. For instance, they use ML models to measure customer sentiment by analysing customer queries and feedback collected in-store, through email, phone, website, and the mobile app. These ML models, deployed on Amazon EC2, have helped shrink the number of customer complaint escalations by around 10%.  Learn more » Amazon ECS Benefits of AWS Provided infrastructure to deploy Machine Learning models for customer sentiment analysis, which reduced complaints by 10% per month Gurukeerthi Gurunathan Co-founder and Chief Technology Officer, CaratLane Machine Learning (ML) to improve the customer experience Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Türkçe Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. 
In 2021, CaratLane migrated its applications to AWS Fargate, a serverless, pay-as-you-go compute engine for containers, and reduced the cost of its server operations by 10–20 percent. CaratLane also uses Amazon Relational Database Service (Amazon RDS) to operate its database. As a managed service, Amazon RDS automates and simplifies much of the manual, time-consuming administrative work associated with database management.

Purchasing jewelry is a deeply entrenched cultural tradition in India, and demand spikes exponentially during festivals like Akshaya Tritiya, Diwali, and Dhanteras. Special occasions like Valentine’s Day and Women’s Day also contribute to spikes in traffic. Having moved its infrastructure completely to the cloud in 2012, CaratLane is able to scale effortlessly to handle such seasonal peaks while optimizing for cost and performance. Using managed services has freed up time for the IT team to focus on innovative projects that improve the customer experience.

For security purposes, CaratLane uses AWS WAF and Amazon GuardDuty to secure and protect its customers’ information. Specifically, AWS WAF protects CaratLane’s web applications against common web exploits and bots, allowing CaratLane to build a secure and scalable infrastructure that in turn supports its growth strategy.

Building a data lake for greater data visibility and accessibility

CaratLane is also building a data lake using AWS Glue, Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), Amazon EMR, and Amazon Redshift. Once completed, the data lake will consolidate disparate data sources into a single location, allowing developers and business users to tap a larger pool of data to generate deep customer insights and personalized user interventions.

Machine learning (ML) to improve the customer experience

CaratLane has been an early adopter of ML for improving the customer experience. For instance, it uses ML models to measure customer sentiment by analyzing customer queries and feedback collected in-store and through email, phone, the website, and the mobile app. These ML models, deployed on Amazon EC2, have helped shrink the number of customer complaint escalations by around 10 percent. CaratLane is exploring several other ML use cases and plans to adopt Amazon SageMaker to increase the velocity of ML development.

Innovative use cases for customer engagement

CaratLane is constantly on the lookout for new technologies to enhance customer engagement. It is currently building a video calling solution using the Amazon Chime SDK that will allow sales agents to showcase its jewelry collection to customers in live video call sessions. CaratLane is also exploring blockchain-related use cases.

“Working with AWS gives us the confidence and peace of mind that our cloud infrastructure will scale to meet seasonal demand spikes. In addition, AWS is constantly introducing ways to optimize operational costs. This gives our teams the freedom to explore new innovations that improve the customer experience and differentiate us from the competition,” said Gurukeerthi Gurunathan, co-founder and chief technology officer at CaratLane.

Benefits of AWS

- Helped CaratLane scale its storage and computational capacity during seasonal traffic peaks
- Reduced the cost of server maintenance by up to 20%
- Provided infrastructure to deploy machine learning models for customer sentiment analysis, which reduced complaints by around 10% per month

About CaratLane

CaratLane is one of India’s largest omnichannel jewelry retailers, with over 140 physical stores in more than 40 cities across the country.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.

To learn more, visit https://aws.amazon.com/retail/
CarTrade Tech Drives a Seamless Car Buying and Selling Experience with Improved Website Performance and Analytics _ Case Study _ AWS.txt
CarTrade Tech Drives a Seamless Car Buying and Selling Experience with Improved Website Performance and Analytics

CarTrade Tech Ltd. uses Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon CloudFront to seamlessly manage and scale its website environment, improving the user experience and reducing costs. The company uses Amazon EKS to manage containerized applications, Amazon CloudFront to manage and scale its websites and services, and Amazon QuickSight to analyze and understand customers. As a result, it offers a better car buying and selling experience by improving its website performance and deriving new insights from customer behavior data.

Opportunity | Seeking to Better Serve a Growing Market of Car Buyers and Sellers

Since its founding in 2010, CarTrade Tech had hosted its web platforms in a colocated data center, which caused management challenges and limited the company’s ability to scale easily as traffic grew by 400 percent over 5 years. To address this, the company migrated its application platform to Amazon Web Services (AWS), running primarily on Amazon Elastic Compute Cloud (Amazon EC2) instances. CarTrade Tech also sought to simplify the management of its containerized applications, which were running on the Kubernetes container orchestration system. “It was time-consuming to manage containers on our own, and we wanted to put more resources into feature development,” says Pratik Vasa, vice president, technology at CarTrade Tech Ltd.

Solution | Deploying AWS for Container Management and BI

CarTrade Tech implemented Amazon EKS to automatically manage the availability and scalability of Kubernetes containers on AWS, as well as application security. By using Amazon EKS to simplify container management, the company can launch Amazon EC2 Spot Instances easily. If Spot Instances are unavailable, Amazon EKS alerts the business and automatically moves to On-Demand Instances.

Next, CarTrade Tech migrated its business intelligence (BI) technology stack from a third-party solution to Amazon QuickSight, a serverless BI service offering interactive dashboards and natural language querying to help companies better understand their data. “We found that Amazon QuickSight provides the balanced feature set we require and integrates with other AWS services such as Amazon Athena and Amazon S3,” says Vasa. By moving its BI stack to Amazon QuickSight, CarTrade Tech can use dashboards to visualize data, gaining a more detailed view of how customers use its website. “Improved data and reporting help us make more informed business decisions and guide feature development. We can analyze customer behavior to determine how customers use our site features and identify those requiring further focus,” says Vasa.
More than 31 million people in India conduct research on what vehicle to purchase on CarTrade Tech Ltd.—a multi-channel automobile platform with the portals CarWale, CarTrade, and BikeWale—every month, and these platforms garner 1.2 million car listings for sale annually. As web traffic increased further, the business sought a new content delivery network to improve website performance. “Customer experience on our websites is of utmost importance, and lower latency can improve that,” says Vasa. The company therefore migrated its CarTrade, CarWale, and BikeWale application environments to Amazon CloudFront, a content delivery network (CDN) designed for low latency and high data transfer speeds. “With Amazon CloudFront, we knew we could improve performance and scalability for our websites,” Vasa says. By running its key websites on Amazon CloudFront, CarTrade Tech has reduced website latency by 10–15 percent and outgoing data transfer costs by 70 percent.

Outcome | Improving the Customer Experience through Better Website Performance and Behavioral Analysis

With its new capabilities, CarTrade Tech has created a faster web experience for customers. “We’re able to provide a seamless experience for anyone looking to buy or sell a vehicle with Amazon QuickSight and Amazon CloudFront,” says Vasa. Furthermore, CarTrade Tech is exploring AWS machine learning services, such as Amazon SageMaker, to gain further insights from customer data. Concludes Vasa, “Using AWS, we know we can find new ways to continue improving the online buying and selling experience for our customers.”

Additionally, CarTrade Tech now runs 70 percent of its Amazon EKS instances on Amazon EC2 Spot Instances, compared with 25 percent previously.
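As an illustration of this capacity mix, the sketch below uses the AWS SDK for Python (boto3) to create two Amazon EKS managed node groups: one drawing on Spot capacity and one On-Demand group as a fallback pool. The cluster name, subnets, role ARN, instance types, and sizes are placeholders, not CarTrade Tech's actual setup.

import boto3

eks = boto3.client("eks")

# Placeholder identifiers; substitute real values.
CLUSTER = "cartrade-web"
SUBNETS = ["subnet-aaa111", "subnet-bbb222"]
NODE_ROLE = "arn:aws:iam::123456789012:role/eksNodeRole"

def create_nodegroup(name, capacity_type, min_size, desired, max_size):
    # capacityType="SPOT" draws on spare EC2 capacity at a discount;
    # "ON_DEMAND" provides the guaranteed fallback pool.
    return eks.create_nodegroup(
        clusterName=CLUSTER,
        nodegroupName=name,
        capacityType=capacity_type,
        # Listing several instance types improves Spot availability.
        instanceTypes=["m5.large", "m5a.large", "m4.large"],
        scalingConfig={"minSize": min_size, "desiredSize": desired, "maxSize": max_size},
        subnets=SUBNETS,
        nodeRole=NODE_ROLE,
    )

# Roughly a 70/30 split between Spot and On-Demand capacity.
create_nodegroup("web-spot", "SPOT", 3, 7, 20)
create_nodegroup("web-on-demand", "ON_DEMAND", 1, 3, 10)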
As a result, the business has reduced its compute costs by 20 percent, investing the savings back into the business and into more AWS services.

Benefits of AWS

- 70% reduction in data transfer costs
- 10–15% reduction in website latency
- 20% compute cost savings
- Data-powered insights that improve the website experience

About CarTrade Tech Ltd.

CarTrade Tech Ltd. is a multi-channel automobile platform offering various vehicle types and value-added services, with several brands in its portfolio: CarWale, CarTrade, Shriram Automall, BikeWale, CarTradeExchange, Adroit Auto, and AutoBiz. The company’s goal is to enable new and used automobile customers, vehicle dealerships, vehicle OEMs, and other businesses to buy and sell vehicles in a simple and efficient manner. The company migrated its websites and applications to AWS to simplify server management, security, and scaling.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS Cloud or on premises. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization.

To learn more, visit aws.amazon.com/cloudfront.
Central East Ontario Hospital Partnership Launches a Clinical Information System in the AWS Cloud _ Case Study _ AWS.txt
Central East Ontario Hospital Partnership Launches a Clinical Information System in the AWS Cloud

A regional partnership of seven acute care hospital organizations located in Central East Ontario, Central East Healthcare (CEHC) covers 16,673 km² of urban and rural geography and serves over 1.5 million patients. CEHC deployed a clinical information system (CIS) with an alternate production, or disaster recovery (DR), system in the Amazon Web Services (AWS) Cloud that successfully serves the entire regional partnership. The implementation of the new CIS helped CEHC focus on clinical transformation, because it supports the delivery of the highest-quality patient care and improves healthcare services in the region.

CEHC shared similarities with many other hospitals in the province, but it changed course by pursuing a move to the cloud. To better use clinical information, CEHC collaborated with AWS, the CIS platform vendor, and Deloitte, an AWS Partner, to implement a CIS with assets and DR in the AWS Cloud. Choosing to build the alternate production/DR environment on AWS let CEHC avoid equipment procurement, saving both time and money. AWS provides redundancy at every layer of the architecture, so there is no single point of failure in the environment.
Opportunity | Integrating Medical Records to Improve Patient Care

The region’s facilities were not set up to talk to each other and share medical records easily. The lack of a regional CIS caused chart fragmentation, creating barriers for clinicians providing care within an organization and across the region. Patients’ referrals to other hospitals for specialized treatment, such as cancer services, mental health treatment, or emergency cardiac care, created friction for medical practitioners at partner sites because they had difficulty accessing information about the referred patients. Safety mechanisms relied on antiquated technology and on staff performing multiple checks, which increased labor and, with it, the probability of error. Furthermore, the region’s healthcare organizations lacked the right technical infrastructure for a CIS. Could a CIS run in the cloud and meet the stringent Canadian regulatory requirements?

Facilitating a single healthcare record across the region was the highest priority for CEHC. Using a “Think Big” approach, CEHC used design ideas to apply data and secure processes so that patients could walk in the door of any CEHC hospital and providers would already have their information ready to go. The CEHC hospitals’ primary motivation was to provide the safest, highest quality of care to patients across the region.
Solution | Collaborating to Pave the Way for Innovation

In 2017, seven organizations came together to form the CEHC and a Regional Executive Forum (REF) committee to guide the procurement of a CIS for the region. “We wanted that utopian state, where a patient comes to the hospital and you know everything about them, to ensure the safest and the highest quality care,” says Ilan Lenga, REF member and chief medical information officer for Lakeridge Health. A regional health records infrastructure would generate critical clinical information and operational insights to make a profound difference in the lives of patients and providers, and partnership would make the grand ideas of each member organization real.

To put the plan into motion, CEHC needed partners. “When you have a CIS ready to implement, you need a scalable, reliable data center to support it,” says Andrew Kelly, chief digital officer of the Central East Regional Operations team, established after go-live as a regional IT service for the seven-hospital partnership. CEHC’s executive committee found a collaborator in AWS, which offered a solution that met CEHC’s internal benchmarks and merged well with the Epic-prescribed technology stack. With the support of implementation partner Deloitte, the teams landed on an innovative hybrid solution.

CEHC selected Wisconsin-based industry leader Epic as the preferred solution for its new electronic health records (EHR) environment and the CIS platform. It met CEHC’s clinical, performance, data security, and cost-benefit needs. “It was the best of all the possibilities,” says Lenga. “Providers can just pick up the patient’s chart and keep moving with the diagnosis, as if the data were originally on their site.”

Although the selected EHR installation runs in a primary, traditional data center, AWS hosts the alternate production/DR environment, plus other clinical systems ancillary to the EHR, applications, and regionally shared assets that form the CIS. This option innovates on and improves the traditional alternate production/DR approach, with AWS working closely with the EHR vendor to validate and continually optimize the environment. The AWS Cloud solution matched the flexibility and scalability that CEHC needed for a medical records management solution, with the CEHC CIS running in alternate production in the AWS Cloud instead of in a secondary data center with potentially millions of dollars in operating costs. By deploying on AWS with the assistance of Deloitte, CEHC was able to build, test, and deploy the CIS rapidly, under an aggressive timeframe of 9 months. “We all wanted to take a quantum leap forward in terms of the quality and safety of tools that existed in the marketplace today,” says Lenga. By the looks of things, CEHC did just that. “AWS delivered cloud services and experience for CEHC using automated tools and processes. AWS delivered both quickly and cost effectively. When compared to the brick-and-mortar production build, the alternate production/DR environment was built in days rather than months, and at a fraction of the cost,” says Eric Foote, Deloitte’s managing director of Healthcare Cloud Engineering.

CEHC went live in December 2021, after completing three successful tests on AWS. These successful tests of both the alternate production/DR and production systems gave CEHC full confidence in the solution and earned its Epic Good Install certification, which is designed to help healthcare organizations that use the company’s EHR achieve implementation best practices in patient outcomes, quality of care, workflow efficiency, and financial performance. “CEHC had limited experience building and supporting solutions in the cloud. Deloitte and AWS were our sherpas,” says Kelly of the collaboration. “They led us up the mountain the proper way.”

Using AWS, CEHC operates less equipment in the disaster recovery environment than in the primary data center. The scalability and automation of AWS help CEHC manage smaller, on-demand environments on a regular basis and reduce costs; in the event of a disaster, the environment in AWS scales up to support the full region. “We’re paying for only what we’re using,” says Kelly, “versus paying overhead for equipment that’s needed only in the event of a disaster.” Building in the cloud translated to significant cost savings, estimated at more than $10 million over 10 years. CEHC builds on AWS with services such as Amazon Elastic Compute Cloud (Amazon EC2), specifically Amazon EC2 R5b instances, a set of next-generation, memory-optimized instances used to host the database. CEHC also uses Amazon Elastic Block Store (Amazon EBS) and Amazon FSx for Windows File Server for easy-to-use, scalable, and high-performance storage.
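To illustrate the pay-for-what-you-use DR pattern described above, here is a minimal Python (boto3) sketch in which the full alternate production footprint is launched from pre-defined EC2 launch templates only when a failover is declared. The launch template names, tiers, and counts are hypothetical assumptions for illustration, not CEHC's actual tooling.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical launch templates prepared in advance; each captures the
# AMI, instance type (for example, memory-optimized r5b for the database
# tier), EBS volumes, and networking for one tier of the DR environment.
DR_TIERS = {
    "cis-database": {"template": "dr-db-r5b", "count": 2},
    "cis-app": {"template": "dr-app", "count": 6},
}

def scale_up_dr():
    """Launch the full DR footprint; run only when failover is declared."""
    for tier, cfg in DR_TIERS.items():
        resp = ec2.run_instances(
            LaunchTemplate={
                "LaunchTemplateName": cfg["template"],
                "Version": "$Latest",
            },
            MinCount=cfg["count"],
            MaxCount=cfg["count"],
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "dr-tier", "Value": tier}],
            }],
        )
        print(tier, [i["InstanceId"] for i in resp["Instances"]])

Between failovers, only a small standing footprint runs, which is what keeps the alternate production environment a fraction of the cost of a second data center.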
Outcome | Building for the Future

Passing the 1-year anniversary of its regional go-live in December 2022, the collaboration has delivered many benefits, including improved clinical workflows, information exchange, and data security among regional healthcare providers, resulting in improved services and higher-quality patient care. These early successes are a product of the work the team did together to build not only a compliant but also a cost-effective solution. By choosing AWS, CEHC can make use of cloud-native services while driving increased innovation and improved uptime and performance. Alongside AWS, CEHC can use and surface data for clinical use, reporting, and operational improvement, which helps increase efficiency, patient safety, and quality of patient care.

Security was paramount on the project, irrespective of cost. “It was incumbent on us to ensure that we were raising the bar on security, and that we have done,” says Kelly. CEHC’s migration to AWS met heightened security demands.

CEHC is trailblazing innovation in the Canadian healthcare industry. Its EHR environment is now live and serving healthcare providers and patients at Campbellford Memorial Hospital, Haliburton Highlands Health Services, Lakeridge Health, Northumberland Hills Hospital, Peterborough Regional Health Centre, Ross Memorial Hospital, and Scarborough Health Network (SHN). “Despite the challenges of the journey, constrained by budget and timing, our collaborators met us where we were and helped us rethink what was possible. Looking forward, we’re set up for success and know that further advancement is on the horizon to deliver better care for patients,” says David Graham, president and CEO of SHN.

As the AWS environment transitioned from Deloitte to internal CEHC staff, AWS Enterprise Support began working directly with the team to provide enhanced technical support, billing and account management, and concierge services. A dedicated technical account manager (TAM) supports the entire CEHC AWS environment, providing consultative architectural guidance, knowledge, and reporting to help implement proactive and preventative programs and, when needed, bringing in AWS subject matter experts. Looking ahead, CEHC will evaluate AWS as an option for future migrations and use the CEHC team’s growing AWS skill set.

Benefits of AWS

- Built, tested, and deployed the CIS in 9 months
- Saved $10 million over 10 years
- Supports 7 hospitals across the partnership
- Improved innovation while also increasing uptime and performance

About Central East Ontario Hospital Partnership

Central East Healthcare (CEHC), a partnership of seven acute care hospital organizations located in Central East Ontario, covers 16,673 km² of urban and rural geography and serves over 1.5 million patients.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon EC2. Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server and delivers a wide range of data access, data management, and administrative capabilities.
Circle of Life _ Amazon Web Services.txt
Circle of Life Migrates Mission-Critical Healthcare App to AWS to Eliminate Downtime

Healthcare analytics is an emerging area of data science that aims to make sense of the enormous volume of data, often unstructured and analog, generated in hospitals and clinics every day. The 2020 pandemic, however, highlighted many of the obstacles faced when sharing health data across organizations, as well as the data silos within them.

Circle of Life offers health institutions cloud-based analytics tools to facilitate data-driven decision-making. Its main product, ZEVAC, accesses more than 7 million patient records each day to analyze how medications—primarily antibiotics—are used. ZEVAC is a software-as-a-service (SaaS) product currently used to process data from multiple hospitals across India.

Improving Uptime with Kubernetes on AWS

ZEVAC is a containerized application that runs in Kubernetes clusters in the cloud. In 2020, the company’s cloud provider experienced several bouts of downtime that interrupted customers’ ability to interact with ZEVAC, a round-the-clock, high-availability system. That same year, Circle of Life decided to migrate the SaaS to Amazon Web Services (AWS) to improve uptime. Circle of Life worked with AWS Partner PC Solutions to migrate ZEVAC and other peripheral applications to AWS. In the year since migration, the company and its customers have experienced zero downtime on the ZEVAC platform, with 99.999 percent availability.

Before the migration, Circle of Life’s engineers had to manually monitor and check whether its container orchestration tool was updated when Kubernetes configurations changed. It now uses Amazon Elastic Kubernetes Service (Amazon EKS) for container orchestration and Amazon Relational Database Service (Amazon RDS) to manage its PostgreSQL and MySQL databases. With Amazon EKS, the company benefits from automatic updates and version control.

Supporting Kubernetes Workloads with 200 Windows Virtual Machines

ZEVAC deploys in Docker containers running on Amazon Elastic Compute Cloud (Amazon EC2) instances for Microsoft Windows Server. Currently, Circle of Life runs several hundred Windows virtual machines to support its Amazon EKS nodes. PC Solutions right-sized the instances for optimal compute versus cost and integrated the Amazon CloudWatch stack for monitoring. When traffic exceeds a 60 percent threshold, autoscaling provisions additional resources, which ensures ZEVAC remains highly available and durable regardless of data processing volumes.
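As a rough sketch of the threshold described above, a target-tracking policy in Python (boto3) can hold the average CPU utilization of an EC2 Auto Scaling group of worker nodes at 60 percent. The Auto Scaling group name is a hypothetical placeholder, not Circle of Life's actual configuration.

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking adds instances when average CPU rises above 60 percent
# and removes them when it falls back, so capacity follows the workload.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="zevac-eks-windows-nodes",  # hypothetical name
    PolicyName="cpu-60-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)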
Automating the CI/CD Pipeline Reduces Costs by 15%

In addition to the Kubernetes migration, PC Solutions worked with Circle of Life to automate the company’s continuous integration/continuous delivery (CI/CD) pipeline. Circle of Life is now using AWS CodePipeline as a fully managed continuous delivery service and Jenkins as an open-source automation server. By integrating native AWS and open-source tools, Circle of Life has reduced its costs by 15 percent. Dhananjay Yogi, head of cloud services at PC Solutions, explains, “We successfully integrated Jenkins with Amazon Resource Names, which automatically spins up Amazon EC2 instances on demand to run Amazon EKS clusters. All updates and patches are performed automatically without downtime or manual effort, so performance has improved while lowering cost.”

Gaining Intuitive Dashboards and 25% Faster Deployment

Since migrating to AWS, Circle of Life has received positive feedback from its external and internal customers on improved application performance. While PC Solutions initially managed its AWS environment, Circle of Life’s IT team has since taken over and finds the AWS console simple to work with. “AWS dashboards are intuitive, which allows smooth performance of any task,” says Pundarikaksha Mishra, lead DevOps at Circle of Life. “The team at PC Solutions helped with the transition to AWS, which was extremely valuable as our team was new to the platform. The support we’ve received directly from AWS has also been amazing. Within minutes of raising a query, we get a response.” Speed has likewise improved on AWS, as deploying new instances is faster: in Circle of Life’s previous cloud environment, it took at least an hour to deploy a new instance, whereas on AWS engineers can deploy new Amazon EKS nodes in 40 minutes. “We’ve experienced greater processing power and faster computing on AWS,” Mishra adds.

Supporting Prescription Decisions with Artificial Intelligence

Circle of Life’s roadmap for ZEVAC includes enhancing artificial intelligence to help guide physicians’ decisions when prescribing medication. The company continues to consult with PC Solutions and AWS to support evolving and potential use cases in the cloud. Mishra says, “We’re now thinking of ways to intelligently recommend the course of antibiotics for each patient based on empirical data and the patient’s profile.”

Benefits of AWS

- Achieves 99.999% uptime for a mission-critical application
- Updates and renews Kubernetes configuration automatically
- Deploys Kubernetes clusters in 40 minutes instead of 1 hour
- Autoscales instances when the 60% threshold is exceeded
- Right-sizes instances for optimal compute versus cost
- Reduces development costs by 15%
- Receives technical support within minutes of raising a query

About Circle of Life

Circle of Life is a software company with a mission to improve data-based decision-making in the healthcare sector. Its main product, ZEVAC, analyzes 7 million patient records daily to show how antibiotics are being prescribed in hospitals.

AWS Services Used

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

To learn more, visit aws.amazon.com/eks.
Claro Embratel Credits AWS Training and Certification as Key Driver in Fourfold Growth of Sales Opportunities _ Claro Embratel Case Study _ AWS.txt
Claro Embratel Credits AWS Training and Certification as Key Driver in Fourfold Growth of Sales Opportunities

Learn how telecommunications provider Claro Embratel empowered its cloud sales teams with AWS Training and Certification.

Opportunity | Improving the Sales Team’s Cloud Expertise with AWS Partner Training

Since 1965, Claro Embratel has kept pace with technological innovations and invests in its infrastructure and its people to meet new marketplace requirements. The company has been an AWS Partner since 2017 and has engaged in over 300 customer launches on AWS. It holds 261 AWS Certifications, demonstrating knowledge and skills in AWS technology across a wide range of AWS services. “Selling in the cloud requires a deeper understanding of the customer’s business challenges,” says Fabiana Couto Falcone de Melo, cloud business lead at Claro Embratel. “It is difficult to find skilled cloud-certified workers. By training our sales teams, we can build trust between our sales teams and potential customers.”

Solution | Leaning into Foundational AWS Partner Training Courses

To build its AWS practice, Claro Embratel established a Cloud Center of Excellence to train its employees on the latest cloud technologies and promote cloud adoption among its clients. The company engaged AWS Training and Certification, which equips organizations with the practical skills and industry-recognized credentials necessary to succeed in the cloud, to support this initiative.
Overview

For over 50 years, Claro Embratel has been a major telecommunications provider in Brazil. With the rapid advancement of technology, the company identified the need to modernize and upgrade its solutions to keep pace with changing customer demands, and it entered a multiyear strategic collaboration with the Amazon Web Services (AWS) team to support customers moving to the cloud. As part of this collaboration, Claro Embratel prioritized the upskilling of its team through AWS Partner Accreditation, which equips AWS Partners with foundational AWS knowledge, and AWS Certification, which validates technical skills and cloud expertise. The company mobilized over 500 professionals to earn these industry-recognized credentials and immediately improved its sales pipeline, with year-over-year sales opportunities quadrupling in just 5 months. Through this engagement, Claro Embratel has established itself as a trusted provider of AWS-based solutions.

Claro Embratel began its collaboration with AWS Training and Certification by helping its sales teams learn more about cloud economics and migration. First, employees participated in AWS Partner: Sales Accreditation (Business), which teaches basic cloud concepts and the communication skills needed to effectively articulate the value of AWS and engage in successful sales conversations with customers. Then, they took the AWS Partner: Cloud Economics Accreditation course, which teaches the benefits of migrating customers to AWS, including cost savings, better performance, and improved agility.

Through AWS Partner Training, Claro Embratel mobilized its sales representatives to earn as many industry-recognized credentials as possible. Within 4 months, these representatives achieved 456 Sales Accreditations, and they earned 293 Cloud Economics Accreditations in 6 months. The representatives were noticeably more proficient at identifying sales prospects within 1 month of completing the training. “With the support of AWS Training and Certification, our sales professionals are better able to articulate the connection between AWS capabilities and our customers’ business needs,” says José Eduardo Aires Carneiro Braga, alliance lead at Claro Embratel.

Outcome | Quadrupling Sales Opportunities and Driving Growth

By equipping its employees with cloud knowledge, Claro Embratel has experienced significant business growth. “Within 5 months, we have quadrupled the number of sales opportunities generated by our sales professionals year over year,” says de Melo. The company also achieved its 4-year target for the number of professionals with AWS expertise in a matter of months. “Our sales professionals and business developers have a broader repertoire about the cloud and its benefits and challenges,” says de Melo. “They can apply this knowledge in more productive conversations with customers, better qualify business opportunities, and drive new products and services to the market.”

In 2023, Claro Embratel will build on the success of its initial AWS Training and Certification program. It plans to expand course offerings to support sales and technical teams working on AWS migration and data analytics solutions, and it projects that it will have more than 600 accredited professionals through this engagement.
Claro Embratel also wanted its presales and technical teams to earn AWS Certifications. By earning these industry-recognized credentials, the company could further demonstrate its AWS expertise and build trust with clients. From 2022 to April 2023, presales and technical team members earned a total of 54 AWS Certifications, including AWS Certified Cloud Practitioner, which demonstrates a foundational understanding of AWS Cloud concepts, services, and terminology; AWS Certified Solutions Architect – Associate, which showcases knowledge and skills in AWS technology; and AWS Certified Security – Specialty, which validates expertise in the creation and implementation of security solutions in the AWS Cloud.

“The sales accreditation course in particular had a great deal of engagement and impact on a daily basis,” says Fátima A. de Sousa, human resources specialist for corporate education at Claro Embratel. “This is due to the knowledge acquired, the opportunity for personal and professional development, and the digital badges that can be shared with colleagues and social networks.” Adds de Sousa, “Our strategic alliance with the AWS team is a key pillar in building capabilities that contribute to our relevance in the IT solutions market.”

“Through AWS Training and Certification, we were able to transform our culture and market discourse to position the AWS Cloud.”

Benefits of AWS

- 4x increase in year-over-year sales opportunities in 5 months
- 456 Sales Accreditations achieved in 4 months
- 293 Cloud Economics Accreditations achieved in 6 months

About Claro Embratel

Claro Embratel is a Brazilian telecommunications company and member of Grupo América Móvil. It provides a diverse range of offerings to meet customer needs, including security, data center, cloud, customer experience, and connectivity solutions.

AWS Training and Certification

Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud. Earned AWS Partner Accreditations can help you have more prescriptive conversations with customers in the field and provide prospective customers with proof of your AWS Cloud skills and expertise; they are also a simple way to contribute to knowledge requirements and progress through the APN Consulting Partner tiers. Earning AWS Certified Cloud Practitioner validates cloud fluency and foundational AWS knowledge, helping organizations identify and develop talent with critical knowledge related to implementing cloud initiatives. Earning AWS Certified Solutions Architect – Associate validates the ability to design and implement distributed systems on AWS.
Climedo Case Study.txt
Climedo Health Captures Patient-Centric, Compliant, and Secure Clinical Data Using AWS

German EDC (electronic data capture) software provider Climedo Health used AWS to create secure, cloud-native, and scalable solutions to better capture and manage clinical data used by pharmaceutical companies, medical device manufacturers, hospitals, and around 150 public health offices. The fast-growing company accelerated its customers’ clinical trials and onboarded hundreds of thousands of patients in a short period of time.

Having seen that many medical researchers used spreadsheets and paper-based systems to capture and manage clinical data for their trials, Germany’s Climedo Health saw an opportunity to create a more efficient digital solution. Using AWS, Climedo Health created a secure, cloud-native, and scalable electronic data capture (EDC) system for conducting clinical trials. The solution is fully data compliant and continuously updated to meet regulatory requirements.

Building a Secure Foundation

With easy-to-build dashboards and modular features, Climedo Health allows its customers to conduct high-quality and efficient clinical research, including product registries, patient diaries, and feedback surveys. Smart dashboards reveal real-time insights into the live status of a study, meaning that customers can view results at a glance and react quickly. “To ensure security, our main goal was to enforce complete isolation between customers’ data,” says Benjamin Sauer, head of backend engineering at Climedo Health. “We chose AWS because it helps us meet data protection standards and provides the scalability we need.”

Climedo Health’s data protection and security architecture, based on AWS Key Management Service (AWS KMS), has been successfully audited by multiple private and German government data protection and security institutions for compliance with all legal requirements. “AWS was a great help,” says Sauer. “We have regular calls to discuss our goals. AWS helps us to problem solve on everything from encryption and architecture to growing and scaling our company.”
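As a simplified illustration of tenant isolation with AWS KMS, each customer can be assigned a dedicated KMS key, and every encrypt call can bind the ciphertext to that tenant through an encryption context. The key alias scheme and context field below are hypothetical, not Climedo Health's actual design; for payloads larger than the 4 KB direct-encrypt limit of KMS, envelope encryption with generate_data_key would be used instead.

import boto3

kms = boto3.client("kms")

def encrypt_for_tenant(tenant_id: str, record: bytes) -> bytes:
    # One KMS key per tenant (the alias naming is illustrative); each
    # key's policy restricts which roles may use it.
    resp = kms.encrypt(
        KeyId=f"alias/tenant-{tenant_id}",
        Plaintext=record,
        # The encryption context must match again on decrypt, so data
        # encrypted for one tenant cannot be decrypted as another.
        EncryptionContext={"tenant": tenant_id},
    )
    return resp["CiphertextBlob"]

def decrypt_for_tenant(tenant_id: str, blob: bytes) -> bytes:
    resp = kms.decrypt(
        CiphertextBlob=blob,
        EncryptionContext={"tenant": tenant_id},
    )
    return resp["Plaintext"]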
Supporting Public Health Officials during Difficult Times

In early 2020, Climedo Health re-architected its eDiary for Public Health Offices using AWS. eDiaries help healthcare professionals capture and manage data about the experience of trial participants. The new Symptoms eDiary solution was immediately put into use when the COVID-19 pandemic began and public health officials across Germany struggled with the volume of manual work generated by tracking symptoms of possible cases. Within 12 months of beginning the project, approximately 140 offices were using eDiaries to keep an up-to-date view of potential COVID-19 cases. Thanks to the ease of onboarding new customers with the AWS architecture, public health offices could quickly use the eDiary solutions to ease the load.

Without the eDiary, this process would have required thousands of manual phone calls from public servants to collect the data. “We’ve made life much easier for public health officers, who were previously relying on fax machines and phone calls for tracking cases,” says Catherine Higginson, marketing manager at Climedo Health. “Officials reduced the time spent on tracking symptoms by 80 percent, because, with eDiary, it’s fully automated. We’ve revolutionized their systems.”

Migrating its patient diary solutions to AWS also increased the number of study subjects that Climedo Health could support, from 500 participants per study to hundreds of thousands of individuals. At the height of the pandemic, the solution allowed Climedo Health to process more than 30,000 SMS messages sent per day from public health offices to suspected COVID-19 patients.
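The case study does not name the messaging service behind those SMS notifications; purely as an illustration, Amazon SNS can send a transactional SMS in a few lines of Python (boto3). The phone number and message text are placeholders.

import boto3

sns = boto3.client("sns")

# Illustrative only: prompt a suspected case to fill in today's eDiary.
sns.publish(
    PhoneNumber="+491701234567",  # placeholder E.164 number
    Message="Climedo eDiary: please record today's symptoms via your link.",
    MessageAttributes={
        # Transactional delivery prioritizes reliability over cost.
        "AWS.SNS.SMS.SMSType": {
            "DataType": "String",
            "StringValue": "Transactional",
        }
    },
)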
Phone calls and visits were replaced with patients inputting symptom data directly into the eDiary from their own mobile devices.

Decentralized Clinical Trials Boost Participation

Clinical trials need good data to produce valuable outcomes that study managers can use. One way to ensure this is to provide study participants with a convenient and user-friendly way to share the data with those conducting the study, such as medical device manufacturers, pharmaceutical companies, hospitals, and public health offices. Another Climedo patient diary solution, ePRO (electronic Patient-Reported Outcome), proved useful when social distancing restrictions limited hospital access during the COVID-19 pandemic. The ability to provide data remotely meant that more patients could participate in research, and this meant that Climedo Health’s customers could complete more trials; the current patient completion rate is around 90 percent. “This decentralized approach puts patients at the center of the clinical trial process,” says Higginson. “The hospitals and other healthcare providers then benefit from a larger, more diverse group of trial participants, which leads to better clinical results.”

AWS Facilitates Rapid Growth

Climedo Health’s ability to securely scale its services for customers at pace, as demonstrated by its work with public health offices across Germany during the COVID-19 pandemic, has led to more customers and rapid growth. The Climedo Health team has quadrupled in size in the last 18 months. The scalability has also made it possible for the team to meet this rising demand, and it has given the company confidence that it can continue to grow. “Using AWS has made it a lot easier for us to win new customers, and our successes will hopefully help us to win even more future customers too,” says Sauer.

Benefits of AWS

- Solution meets rigorous data protection, encryption, and security standards
- Reduced compliance challenges for customers by providing updates on relevant regulations
- Scalable system can quickly and securely pivot to meet MedTech demands

About Climedo Health

Climedo Health’s mission is to offer patients the best medical treatment through intelligent software solutions. Its powerful, modular, and secure solutions for decentralized clinical trials facilitate faster implementation, higher data quality, and better patient engagement. MedTech and pharmaceutical companies use the cloud-based platform for cutting-edge clinical validation and post-market surveillance of their products.

AWS Services Used

AWS Key Management Service (AWS KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Serverless on AWS: build and run applications without thinking about servers.
CloudCall Invests in AWS Skill Builder Pivots to a SaaS Model _ CloudCall Case Study _ AWS.txt
CloudCall Invests in AWS Skill Builder, Pivots to a SaaS Model

Learn how SaaS provider CloudCall upskilled its engineers with AWS Training and Certification.

Overview

With its digital telephony software, CloudCall helps businesses unlock the full potential of their customer relationship management (CRM) solutions. As part of a mission to enhance digital capabilities, CloudCall is transitioning from a traditional telecommunications company to a software-as-a-service (SaaS) model, powered by Amazon Web Services (AWS). For this cloud transformation to work, its product and engineering group needed a baseline knowledge of AWS services. To strengthen its internal cloud skills, CloudCall engaged in a strategic training initiative with AWS Training and Certification, which helps organizations make the best of cloud capabilities. Now, the company can provide better technical support and advanced solutions to help its customers get the most from their CRM data.

Opportunity | Transforming the Business Model for CloudCall with AWS Training and Certification

CloudCall’s software integrates directly with CRM systems to provide businesses with a 360-degree view of their customers. Beginning as a traditional provider of voice-over-internet-protocol telephony, it is evolving to offer more advanced features, such as automatic call distribution and near-real-time coaching for new hires. “We aim to use machine learning and artificial intelligence to provide valuable insights to our customers based on call data,” says Klaas Ardinois, chief technology officer of CloudCall. “Choosing a SaaS approach gave us greater data control, which facilitated capturing more intelligence during calls to provide additional information to end users.”

CloudCall chose AWS to drive its transformation to a SaaS model. It had previously built architecture components on AWS but soon discovered that its engineering team had varying levels of AWS experience. “Some people had never heard of AWS because they came from a pure on-premises world, and others were definitely on their way to learning more on AWS but were not advanced,” says Ardinois. “Our first step was to get everyone on the same baseline.”

In the summer of 2022, CloudCall engaged AWS Training and Certification to upskill its product and engineering group. “AWS Training and Certification aligned well with our goals, one of which was to provide the product and engineering group with a structured learning path on our cloud journey,” says Ardinois. To drive the program, CloudCall required its engineers to earn the AWS Certified Cloud Practitioner Certification by the end of the year. This sought-after industry credential validates a foundational understanding of AWS Cloud concepts, services, and terminology; earning it would build employees’ cloud expertise and improve their employability.

Solution | Upskilling Employees on AWS with 100% Workforce Engagement

To get started, CloudCall performed an AWS Learning Needs Analysis, which helps identify an organization’s cloud skills gaps. Using the results of the assessment, it identified the disparities in team members’ AWS knowledge and built a data-driven plan to accelerate learning.

At the core of CloudCall’s training program is AWS Skill Builder, an online learning center. CloudCall relies on the AWS Skill Builder Team subscription to gain visibility across its entire learning community, using its administrative tools to assign identical courses to all participants and establish a base level of knowledge across teams. Participants can also launch self-paced learning experiences on AWS Skill Builder, where they can practice different cloud skills based on their project needs and interests. With on-demand training, participants can schedule learning time around normal work activities, making it simple to learn on the job. “Having a mix of on-demand and in-person training meant that we could support different learning styles seamlessly,” says Alan Churley, director of software engineering at CloudCall. “Participants could take the courses as they needed to, as many times as required to feel comfortable.”

CloudCall employees prepared for their exams using the preparation materials included with their AWS Skill Builder subscription; these resources include 6–8 hours of practice materials such as videos, hands-on labs, additional practice questions, and access to the Official Practice Exam. Employees then practiced their AWS skills using AWS Cloud Quest, a digital training option through which employees can develop in-demand cloud skills in an interactive role-playing game. “AWS Cloud Quest is an exciting environment because it provides a gamified role-based learning experience, which works best for some learners,” says Ardinois.

In November 2022, CloudCall hosted its first AWS Immersion Day, an event that educates companies about AWS products and services, to teach its employees about serverless architecture and practices using AWS Lambda, a serverless, event-driven compute service. Participants attended lectures by AWS solutions architects during the first half of the day and participated in hands-on activities during the second half.
Outcome | Empowering Organizational Cloud Skills with Specialized Training

CloudCall’s entire product and engineering group is engaged in the training initiative, and 95 percent have achieved the AWS Certified Cloud Practitioner Certification. With their improved AWS expertise, CloudCall’s employees are empowered to implement new features and projects, which fosters innovation toward the company’s goal of providing better customer insights. For example, CloudCall has enhanced its capability to scale services and accelerated the time taken to release products from development to production. To support its SaaS transformation, it seamlessly adopted new AWS services like Amazon OpenSearch Service, which unlocks near-real-time search, monitoring, and analysis of business and operational data.

Following the AWS Training initiative, CloudCall built a solution that accelerates the process of synchronizing contacts from a customer’s CRM into its system, making it 15 times faster. This process used to take 5–6 hours; with the new solution, it can take less than 20 minutes. This fully serverless solution is powered by several AWS services, including AWS Lambda and Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database.
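A minimal sketch of what one step of such a serverless sync could look like, assuming a hypothetical DynamoDB table name and event shape rather than CloudCall's actual implementation, is a Lambda handler that batch-writes a page of CRM contacts into DynamoDB:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("crm-contacts")  # hypothetical table name

def handler(event, context):
    """Write one page of CRM contacts; invoked in parallel per page."""
    contacts = event["contacts"]  # assumed event shape
    # batch_writer buffers writes and retries unprocessed items.
    with table.batch_writer() as batch:
        for contact in contacts:
            batch.put_item(Item={
                "tenant_id": contact["tenant_id"],   # partition key
                "contact_id": contact["id"],         # sort key
                "name": contact["name"],
                "phone": contact.get("phone"),
            })
    return {"written": len(contacts)}

Fanning pages out across parallel Lambda invocations, rather than looping through one long-running job, is the kind of design that turns an hours-long sync into minutes.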
Now, CloudCall is encouraging employees to explore advanced paths. Employees are targeting many AWS Certifications, such as AWS Certified Security – Specialty, which validates expertise in securing data and workloads in the AWS Cloud, and AWS Certified Developer – Associate, which showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices. CloudCall also plans to host two AWS Immersion Days per year. “AWS Training and Certification helped us set our program up and make this happen,” says Ardinois. “If I had to figure this out myself, I’d still be struggling. It’s been great to work with the AWS team and see them push this initiative forward for us.”

Benefits of AWS

- Accelerated the transition to a SaaS model and the time taken to release products
- Scaled services to match customer data
- Improved troubleshooting and technical support
- Synchronizes CRM contacts 15x faster

About CloudCall

CloudCall is a provider of communication software designed for businesses that use customer relationship management solutions. CloudCall aims to unify communications across organizations.

AWS Services Used

AWS Training and Certification: Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud. The AWS Skill Builder Team subscription grants unlimited access to expert-led AWS Digital Training, self-paced labs, learning plans, practice exams, and more; team challenges and role-playing games make learning fun, and administrative features enable you to assign goals and track progress. AWS Cloud Quest is a role-playing game that helps you build practical AWS Cloud skills in an interactive, engaging way, whether you are starting your cloud learning journey or diving into specialized skills. AWS Immersion Days are a series of events designed to educate you about AWS products and services and help you develop the skills needed to build, deploy, and operate your infrastructure and applications in the cloud, with hands-on labs that provide an immersive experience in the AWS console.
CloudWave Modernizes EHR Disaster Recovery and Provides Fast Secure Access to Archived Imaging Data on AWS _ Case Study _ AWS.txt
CloudWave Modernizes EHR Disaster Recovery and Provides Fast, Secure Access to Archived Imaging Data on AWS

CloudWave understands the importance of protecting patient data. Over 280 hospitals and healthcare organizations rely on the software company for mission-critical services, including secure electronic health record (EHR) applications. Without secure and reliable access to patient data, caregivers cannot perform their jobs and patients’ lives could be at risk.

Opportunity | Breaking Free from an On-Premises Backup Environment

Founded in 1991, CloudWave is a provider of cloud and managed services for healthcare organizations, supporting over 125 EHR, clinical, and enterprise applications. The company previously hosted the environments for customers’ EHR systems and disaster recovery services in two separate data centers. “To provide the disaster recovery service, we had to keep a fully redundant set of infrastructure and hardware at each of our facilities,” says Matt Donahue, chief technical officer and vice president for product development at CloudWave. Hardware and infrastructure costs made this setup expensive, and it required significant manual effort to maintain.

To reduce the cost to customers and improve the efficiency of the disaster recovery environment, CloudWave decided to use the cloud. After evaluating potential vendors, the company chose AWS. “The business support that AWS provided, as well as the functionality of the services, was much better than the competitors that we evaluated,” says Donahue. “Due to the maturity of AWS services and the ease at which our operations team adopted them, we were able to deploy faster than we would have if we had gone with another vendor.” The AWS team also supported CloudWave in identifying pain points that other healthcare customers had experienced, helping the company avoid common mistakes.

Searching for a cost-efficient and high-performing solution, CloudWave migrated its EHR and disaster recovery systems from its private cloud platform to AWS. Through this initiative, the company effectively scaled its EHR and disaster recovery environments, reducing return-to-operations time for its healthcare customers by approximately 83 percent without increasing service fees. Now, CloudWave offers customers a reliable, cost-optimized disaster recovery solution with reduced return-to-operations and recovery point objectives for MEDITECH EHR and enterprise applications, powered by AWS.
Solution | Improving EHR System Resilience on AWS
CloudWave's customers require fast and reliable data access so that they can provide patients with the medical care that they need when they need it. To improve data storage capacity and retrieval speed, the company adopted Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Using Amazon S3, CloudWave improved its data storage capacity by 150 percent, exceeding 5 PB of stored data, all while reducing its costs and strengthening its agility.

CloudWave configured its EHR backups to target Amazon S3 Intelligent-Tiering, an Amazon S3 storage class that delivers automatic storage cost savings when data access patterns change, without operational overhead or performance impact. If a disaster occurs, CloudWave can rapidly deploy all its customers' environments from an Amazon S3 bucket, facilitating business continuity. Using this solution, CloudWave reduced its return-to-operation time from 12 hours to 2 hours, effectively improving the resilience of its disaster recovery environment. "Patients don't realize that their lives might depend on an EHR system being up or down. Outages also prevent providers from performing their jobs," says Donahue. "On AWS, our return to operation is much faster, and the patient's medical record can be available to a caregiver within a 2-hour time frame."

To store large picture archiving and communication system files, CloudWave relies on Amazon S3 Glacier Instant Retrieval, an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. "We were able to dramatically reduce costs by tiering our backups to Amazon S3 Glacier Instant Retrieval," says Donahue. "We are now able to provide medical image archiving as a service for our customers at a price that fits their budget while offering the security, resiliency, and redundancy required for healthcare compliance." By migrating its backups from on-premises storage systems to Amazon S3 Glacier Instant Retrieval, CloudWave reduced its storage costs by 25 percent. This cost reduction, combined with infrastructure and hardware savings, has led CloudWave to unlock $1 million in annual storage cost savings.

Previously, deploying the disaster recovery environment was a people-heavy operation for CloudWave. To streamline this process, the company adopted AWS CloudFormation, a service that lets customers model, provision, and manage AWS and third-party resources by treating infrastructure as code. "Previously, our team followed a paper runbook for configuration standards and conducted a monthly audit to catch any gaps," says Donahue. "Now, we have everything built into an AWS CloudFormation template that we can audit and validate ahead of time. We know that every deployment looks the same, feels the same, and has the exact same security apparatus, which has been very beneficial." With the automation of security and compliance processes, CloudWave has improved its security posture and significantly reduced manual labor for its employees.
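The case study doesn't show CloudWave's tooling, but as an illustrative sketch, targeting the S3 Intelligent-Tiering storage class when writing a backup object takes a single parameter; the bucket, key, and file names below are hypothetical:

import boto3

s3 = boto3.client("s3")

# Upload a backup artifact directly into the S3 Intelligent-Tiering storage
# class, which moves objects between access tiers as access patterns change.
with open("ehr-backup.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="example-dr-backups",  # hypothetical bucket name
        Key="meditech/ehr-backup.tar.gz",
        Body=backup,
        StorageClass="INTELLIGENT_TIERING",
    )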
Outcome | Continuing to Transform Healthcare Together
On AWS, CloudWave provides the clinicians that it serves with fast, secure access to patient data, supporting patient care quality and business continuity. The company will continue to use AWS services to improve its applications and deliver new services to customers.

CloudWave appreciates the collaborative and proactive nature of the AWS team and looks forward to continuing to build on AWS in the future. "AWS wants to help us improve our services and bring new offerings to market rather than relying on us to say what we want to do," says Donahue. "The team has been phenomenal to work with."

Benefits of AWS
83% reduction in return-to-operation time
150% improvement in data storage
25% reduction in storage costs using Amazon S3 Glacier Instant Retrieval
$1 million unlocked in annual savings
Enhanced security and compliance through automation

AWS Services Used
Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon S3 Intelligent-Tiering – S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead.
Amazon S3 Glacier Instant Retrieval – An archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds.
AWS CloudFormation – AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.
CMD Solutions Case Study _ AWS.txt
CMD Solutions Bridges Skills Gaps to Grow Revenue by 30%

About CMD Solutions
Based in Australia, CMD Solutions, part of Mantel Group, helps organizations to transform IT operations using specialized AWS automation expertise. Founded in 2015 and acquired by the Mantel Group in 2019, CMD Solutions creates fully automated, customized AWS environment deployments using DevOps continuous integration and delivery tool sets. The company not only delivers quality services to its customers but also works to empower, educate, and prioritize employees as valued consultants within the company. "Within CMD Solutions, we are extremely focused on AWS," says Bryan Becker, CMD Solutions cloud excellence practice manager. "We work with customers to provide additional skill sets in digital advisory and data security areas. We are a one-stop shop for solutions that our customers need."

Opportunity | Using AWS Training & Certification to Upskill Employees and Meet Customer Demand for CMD Solutions
CMD Solutions had been experiencing an uptick in demand from customers seeking to migrate to the cloud using Amazon Web Services (AWS), especially during the COVID-19 pandemic. The company needed to hire more skilled AWS consultants internally to meet customer needs. Plus, the external market had an extreme skills shortage, making it expensive and impractical to hire the necessary talent. The corresponding increase in demand for engineers with cloud expertise exacerbated a lack of highly skilled AWS consultants in Australia and New Zealand. CMD Solutions realized that it needed to satisfy increased customer demand through more AWS training for its employees.
Solution | Growing Revenue and Accelerating Customer Cloud Migrations
Collaborating with AWS Training and Certification and with funded support, CMD Solutions created a unique deep-dive program that features its own field consultants teaching the practical use of AWS alongside publicly available digital AWS courses in cloud theory. The specialized training program, called LearnCMD, is an AWS boot camp designed to upskill IT professionals with no AWS experience. Starting in November 2020, the company ran a 4-week LearnCMD program once per quarter. About 30 percent of CMD Solutions consultants engaged in the program, with 85 percent of internal recruits earning their AWS Certified Solutions Architect – Associate certification within 30 days of the training. The average experience of the employees going through the training program was 14.9 years. These IT professionals have, in some cases, decades of industry experience with servers, scripting skills, and understanding of DevOps, with little experience on AWS until participating in LearnCMD.

As the internal program grew, CMD Solutions saw an opportunity to support its customers by helping them to address the skills shortage through similar types of training programs. It developed an external training offering for LearnCMD to upskill customers who desired the same training to fill AWS skills gaps in their own teams. CMD Solutions held its first customer-facing training sessions in January 2022, ultimately training 37 attendees from 10 customers on AWS. Each training featured 15 days of classes, including five AWS Solutions-Focused Immersion Days events, which are designed to educate businesses about AWS products and services and help them develop the skills needed to build, deploy, and operate infrastructure and applications in the cloud. The company itself became an authorized AWS Training Reseller, resulting in a new revenue stream.

CMD Solutions also is integrating LearnCMD into its diversity and inclusion initiatives. For example, participants in its future associate program, Women Who Code, have the opportunity to opt into LearnCMD during the 6-month Women Who Code program. That way, they can additionally focus on AWS skills and eventually contribute to diversity within CMD Solutions' workforce. Recruiting and retaining diverse employees and promoting a culture of loyalty also have positive effects on the company's return on investment for the training program.

Approximately 34 percent of the current CMD Solutions workforce graduated from LearnCMD, and 20 percent of participants are running LearnCMD courses on their own. Through the training programs, CMD Solutions grew from 72 skilled consultants to 170 in less than 1.5 years, a 136 percent increase. Since implementing these training programs, CMD Solutions has seen a 30 percent growth in revenue, and it has helped customers accelerate their cloud migrations by five times. The increase in skilled consultants also helps meet increasing demand for CMD Solutions' services.

Additionally, Mantel Group as a company grew from 300 to 800 employees in about 18 months and has significantly accelerated its onboarding process to keep up with customer demand. Rather than taking about 6 months to start working with customers, CMD Solutions employees can now begin doing billable work within 2 weeks. Plus, the company's designation as an AWS Training Partner has created a new revenue stream, driving a return on investment of more than 130 percent in 2022 that includes $18 million in potential annual recurring revenue. "Through the training, we've been able to bring people in, upskill them, and add to our culture," says Becker. "It's also shown that we're willing to invest in our employees, a value which can be difficult to quantify." The investment in training contributed to back-to-back top rankings for Mantel Group in Australia's "Best Workplaces" List for 2021 and 2022, compiled by an Australian workplace research group.
Outcome | Looking to the Future with AWS Training Programs
As an authorized AWS Training Partner, CMD Solutions will continue expanding its training program to more customers. As customers complete the LearnCMD program, CMD Solutions plans to offer AWS Skill Builder, a digital learning center to build in-demand cloud skills. Through AWS Skill Builder, CMD Solutions provides customers with a path to train their employees further with deep subject matter knowledge that they can then bring in house. "We're investing in our employees to meet the demand of our customers and help us to scale and grow," says Becker. "The robust training program that we built with AWS Training and Certification was a central part of achieving that."

Benefits of AWS
136% increase in employment of skilled consultants in 1.5 years
30% revenue growth in 1 year
More than 130% return on investment
5x acceleration of cloud migration speed for customers
Improved onboarding time from 6 months to 2 weeks

AWS Services Used
AWS Partner Training and Certification – Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud.
AWS Skill Builder – Build in-demand cloud skills—your way—with our online learning center.
Cognitran Deploys Customized CDN Solution in under 12 Weeks Using Amazon CloudFront.txt
Cognitran Deploys Customized CDN Solution in under 12 Weeks Using Amazon CloudFront

Automotive SaaS provider Cognitran used Amazon CloudFront and AWS Shield to deploy a custom-built content delivery network solution for one of its customers in under 12 weeks, helping it deliver content in near real time.

About Cognitran
Automotive software-as-a-service (SaaS) provider Cognitran offers technical information software and systems around after-sales, diagnostic services, data analytics, content management, and multilingual publications. The company serves over 200,000 active users across original equipment manufacturers (OEMs).

Opportunity | Distributing Complex Calibration Files to OEMs
Automotive internal software has become more advanced over time, and many of Cognitran's OEM customers require complex calibration files so that they can perform the necessary maintenance and repairs. "Cars often have new technologies, like autonomous driving, electrification, infotainment, and telematics," says David Butterworth, director and business leader at Cognitran. "The amount of software content and technical information required for one car has grown exponentially."

Cognitran Limited (Cognitran) was looking to build and deliver a customized content delivery network (CDN) solution in under 3 months so that it could quickly disperse technical information and meet the requests of one of its customers. Previously, Cognitran and this customer had collaborated to build a custom technical information distribution system that ran on a CDN from a third-party vendor. However, the customer was looking for an optimized CDN solution that would balance performance and cost-effectiveness. "We have users all around the world, and they want the best possible experience in terms of responsiveness," says Butterworth. Cognitran's customer was also under pressure to come up with a new solution because it was facing an automatic contract renewal with its incumbent vendor in 3 months.
Solution | Deploying a Custom-Built CDN System in 3 Months
To meet this request, Cognitran engaged Amazon Web Services (AWS), and the company worked on developing a scalable solution that could deliver technical files and service information with low latency and baked-in security provisions. Given its history of using AWS, Cognitran decided to engage AWS Professional Services, which helps companies achieve their desired business outcomes using AWS solutions. Cognitran relied on technical advice from the AWS Professional Services team to accelerate its creation of a secure solution that would receive authorization from its customer's internal IT team. "It was critical to get this system implemented in the timescale we were given," says Butterworth. "We built a proof of concept alongside the AWS Professional Services team that included some augmented security aspects."

Cognitran developed the new solution using Amazon CloudFront, which securely delivers content with low latency and high transfer speeds, as the backbone for delivering content in milliseconds. To meet its customer's security requirements, the company also implemented AWS Shield, a managed distributed-denial-of-service protection solution, along with AWS Firewall Manager, which gives companies the ability to centrally configure and manage firewall rules across accounts and applications. "Using out-of-the-box solutions like AWS Shield and AWS Firewall Manager was very attractive to us," says Butterworth.

By April 2022, Cognitran had completed the proof of concept and received approval from its customer's IT team to deploy the new CDN system. From there, Cognitran worked on the implementation so that it would not affect its customer's production environment. "We had zero downtime or service interruption during the switchover," says Butterworth. "It was an incredible achievement for us, especially considering the time constraints." Using this custom-built system, Cognitran's customer can quickly deliver content anywhere with 99.99 percent uptime, without having any physical infrastructure in place. "Using Amazon CloudFront means that we can deliver content to our customer very quickly," says Butterworth. "That reliability is key to speeding up the technician experience." Cognitran and its customer also have greater visibility into the performance of the new solution compared with the previous CDN, which helps Cognitran troubleshoot errors and develop relevant new features as needed.
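The case study doesn't publish Cognitran's configuration; as a rough sketch under assumed names, creating a minimal CloudFront distribution in front of an S3 origin with boto3 looks like the following (Shield Advanced and Firewall Manager policies would be layered on separately):

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution with a single S3 origin; all names are hypothetical.
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Technical-content CDN (illustrative)",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "content-origin",
                "DomainName": "example-content.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "content-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
print(response["Distribution"]["DomainName"])  # the distribution's edge domain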
Outcome | Joining the AWS Partner Network
Based on the results of this project, Cognitran has decided to add this new system to its SaaS offering. "We can secure a new revenue stream by offering this solution," says Butterworth. Cognitran has also joined the AWS Partner Network, which will help it grow its business on AWS. The company has already enrolled in several AWS training opportunities to deepen its understanding of CloudFront and upskill its teams. "We want to expand into different areas, such as connected vehicles, remote diagnostics, and vehicle monitoring," says Butterworth. "Becoming an AWS Partner will help us target a specific market share and attract more OEMs to use our SaaS solutions."

Benefits of AWS
Deployed a customized CDN solution in 12 weeks
No downtime during switchover
Experienced 99.99% uptime
Scaled and distributed content globally
Expanded its SaaS offering

AWS Services Used
Amazon CloudFront – A content delivery network (CDN) service built for high performance, security, and developer convenience.
AWS Shield – A managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Firewall Manager – A security management service which allows you to centrally configure and manage firewall rules across your accounts and applications.
AWS Professional Services – A global team of experts that can help you realize your desired business outcomes when using the AWS Cloud.
Comscore Maintains Privacy While Cross-Analyzing Data using AWS Clean Rooms _ Case Study _ AWS.txt
Comscore Maintains Privacy While Cross-Analyzing Data Using AWS Clean Rooms

About Comscore
Analytics and insights provider Comscore provides a wide range of data-driven solutions that support planning, transacting, and measuring media across channels. It serves media companies and advertisers, promoting transparency and trust within the industry.

Industry Challenge
Comscore, a global media ratings company, provides its advertising customers with rich, accurate insights about their audiences and campaign effectiveness by ingesting and cross-analyzing its panel data with multiple other sources, a process that generally involves migrating data from server to server.
Comscore wanted to provide customers with a simpler option: an interoperable environment that collaborators can access to analyze datasets without revealing their raw data.

Comscore's Solution
Comscore turned to Amazon Web Services (AWS) and chose AWS Clean Rooms to uphold privacy-enhanced collaborations with its partners. AWS Clean Rooms helps Comscore's customers and partners to securely match, analyze, and collaborate on their combined datasets with ease and without sharing or revealing underlying data. Using this solution, Comscore can invite up to five collaborators into an AWS Clean Room and pull pre-encrypted data into a configured data table from Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere.

Then, Comscore can set up its own privacy controls, including a mutually agreed upon join key that gives collaborators the ability to match data tables and perform analyses using a double-blind method. This method means that all parties can protect sensitive data, such as cookies, first-party IDs, and IP addresses, and run queries on combined data to gain richer, more comprehensive insights. "Instead of ingesting all that information and doing the analysis behind our firewall, we can join those things in AWS Clean Rooms and get back what we need," says Brian Pugh, chief information officer at Comscore. Additionally, Comscore can organize its analytics by demographics or other categories so that it can identify trends in how groups of people interact with certain media. Comscore can also connect AWS Clean Rooms with Amazon QuickSight, a solution that provides unified business intelligence at hyperscale, so that it can visualize its data in one place using interactive, customizable dashboards.

Benefits of Using AWS
With its underlying infrastructure built on AWS, Comscore can scale to ingest data from thousands of data sources and standardize its processes for data collaboration with other enterprises by using AWS Clean Rooms. Further, Comscore can avoid the costs and risks associated with the physical migration of data from one environment to another, as well as the development costs involved in standing up an environment with the necessary security and governance provisions. As a result, Comscore can maintain its competitive edge and improve the accuracy of its analytics for customers as it continues to ingest and cross-analyze new information from different sources. "AWS Clean Rooms...helps Comscore to provide the best possible measurement and support to our data partners to trust that the data that they're providing is safe and protected," says Pugh.

AWS Services Used
AWS Clean Rooms – Helps customers and their partners more easily and securely collaborate and analyze their collective datasets, without sharing or copying one another's underlying data.
Amazon S3 – An object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon QuickSight – Powers data-driven organizations with unified business intelligence (BI) at hyperscale.
Concert.ua Manages 1000 Traffic Spikes Using AWS Serverless _ AWS EC2.txt
Concert.ua Manages 1000% Traffic Spikes Using AWS Serverless

About Concert.ua
Concert.ua is one of Ukraine's largest ticketing companies in terms of revenue, customers, and ticket sales. Its ticketing site receives almost 2 million visitors every month.

Ukrainian event ticketing company Concert.ua experienced unexpected spikes in traffic that overwhelmed its website, leaving customers unable to complete transactions and affecting the company's revenue and reputation. Its reliance on manual server provisioning also made it difficult to quickly scale to meet demand. Using fully automated scaling and a serverless architecture built on Amazon Web Services (AWS), the company has increased the reliability and availability of its systems and reduced infrastructure costs. Its customers are able to reliably purchase tickets for popular events, even when traffic is high.

Dealing with 1,000% Traffic Spikes
Concert.ua is Ukraine's largest ticketing agency and handles almost half of the country's online ticket sales. To win over customers, it needs to provide fast and reliable services so event-goers don't choose to purchase tickets from competitors. Concert.ua had migrated to a small cloud provider in 2017, but the arrangement was frustrating the company. Although the cloud was more efficient and flexible than managing its own on-premises servers, it had to provision servers manually, a process that could occupy several staff for many hours.

Before using AWS, technical staff estimated how many servers were needed but often ended up overprovisioning and paying for unused resources. "Even when a traffic spike was expected, it was always a guess as to how many servers we'd need," says Yevgen Lysenko, founder and chief technology officer (CTO) at Concert.ua. "But there was no other option with the resources and technologies we had at the time."

The announcement of a popular event, or a mention in social media, results in a sudden influx of visitors to the Concert.ua site. This causes traffic increases of anywhere between 400 and 1,000 percent within minutes. Concert.ua wanted to find a solution that would allow its staff to focus on improving its ticketing application and working on innovative marketing strategies instead of spending time troubleshooting its infrastructure. "Looking after the infrastructure was a never-ending story," says Lysenko. "Something was always wrong and we never had enough people to do all the work."
Migrating to a Serverless Architecture
Concert.ua turned to AWS for out-of-the-box services that would automatically scale fast enough to deal with unexpected traffic spikes. It transitioned from its traditional approach of service provisioning to infrastructure as code, using the open-source Terraform tool. The company migrated to AWS Lambda, a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Concert.ua also used Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud.

To automatically scale up or down to handle traffic spikes, Concert.ua initially chose to use open-source Docker containers to package its SQL database. It then uploaded them to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity. Some initial provisioning experiments reduced the time to spin up a server, but the solution was still too slow to deal with sudden large spikes in traffic. So Concert.ua tried AWS Fargate, a serverless, pay-as-you-go compute engine. Lysenko admits he was surprised by the results. "We didn't think Fargate would be useful, but we quickly changed our minds once we tested the service. We discovered that it not only scales much faster, it's also cost efficient. Fargate containers are twice the capacity of our previous containers, and so we use fewer containers than expected," he says. "We were looking for a magic button that we could press to make our transaction processing run faster. Instead, we found AWS Fargate."

Delivering Real-Time Transactions Using AWS Lambda
Before using AWS, when a customer purchased a ticket during a busy period, they had to wait for the database to work through a queue of requests before receiving a confirmation. Now Concert.ua uses AWS Lambda to process multiple transactions simultaneously, so customers no longer have to wait. As soon as they complete their transaction, Concert.ua generates and dispatches the ticket. "Using AWS Lambda and AWS Fargate, we can have simultaneous transactions running in real time," says Lysenko. "Everything just works and it's all automated, which is fantastic."

Concert.ua developers have also reduced the time it takes to implement APIs using Amazon API Gateway and AWS Lambda. Instead of spending time coding, the developers send high-level instructions to AWS Lambda and can manipulate backend services to access data, business logic, and application functionality. "We couldn't launch APIs as quickly as we can now," says Lysenko. "Previously, we had to do a lot of coding but now it's 300–500 percent faster. Using AWS, our software development cycle takes less time and effort, by fewer people. And it costs less than our previous setup."
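As a purely illustrative sketch of the pattern described above, each ticket purchase can be handled by its own Lambda invocation, so transactions run concurrently instead of queuing; the event shape and field names are assumptions:

import json
import uuid

def handler(event, context):
    """Hypothetical handler for one ticket purchase. Lambda scales out
    automatically, so many purchases are processed at the same time."""
    order = json.loads(event["body"])  # assumes an API Gateway proxy event
    ticket_id = str(uuid.uuid4())
    # ... payment capture and order persistence would happen here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"ticketId": ticket_id, "eventId": order.get("eventId")}),
    }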
Getting 99.9% Uptime for Less Cost
Concert.ua's ticketing site can handle large, unexpected spikes in traffic and reports 99.9 percent uptime. In addition, its technical staff can focus on higher-value projects that help the business grow its market share and further improve customer experience. The migration has improved system reliability while also reducing the cost of operating the ticketing infrastructure. "When we used the AWS calculators we were unsure how much the services might cost us, but most of the time our bill has been less than we estimated," says Lysenko. "The bill is always relative to our business activity, so when the bills are high it means that we have been earning more."

Benefits of AWS
Automated scaling to handle 1,000% traffic spikes
Improved website reliability to 99.9% uptime
Launched APIs 300–500% faster
Reduced total infrastructure costs

AWS Services Used
AWS Lambda – A serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
AWS Fargate – A serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
Amazon Aurora Serverless – An on-demand, autoscaling configuration for Amazon Aurora that automatically starts up, shuts down, and scales capacity up or down based on your application's needs.
Amazon RDS – Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Cost Savings of 20 and 8 Hours of Data Processing Saved across 500 Spark Jobs Using AWS Graviton2 Processors _ Wealthfront Case Study _ AWS.txt
Cost Savings of 20% and 8 Hours of Data Processing Saved across 500 Spark Jobs Using AWS Graviton2 Processors with Wealthfront

Learn how Wealthfront, an industry-leading automated wealth manager, saved 20 percent on costs and reduced runtime by 5 percent using AWS Graviton2-based instances.

About Wealthfront
Wealthfront integrates smart investing and saving products to help young professionals build long-term wealth in all market conditions. For more information about Wealthfront, including full disclosures, visit the company's website.

To provide automated financial investment services to young professionals who want to build long-term wealth, Wealthfront decided to upgrade its infrastructure to improve automation while lowering business costs. The company wanted to reduce data processing workload runtime and save on costs while providing a better product for its customers. To achieve these goals, Wealthfront uses Amazon Web Services (AWS) for its data processing and compute workloads. The company runs its data processing on Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure and resizable compute capacity for virtually any workload. By upgrading its infrastructure, the company has saved 20 percent on costs, reduced runtime by 5 percent, and lowered its carbon footprint.
Opportunity | Using AWS Graviton2 Processors Saved 20% on Costs for Wealthfront
Founded in Palo Alto, California, in 2008, the startup has grown to manage more than $30 billion in assets for over 500,000 clients, and it has been using AWS from the beginning. Now, Wealthfront manages over 500 data pipelines, running some of its preinvesting jobs. The large financial data processing workloads run on a combination of transient and persistent clusters that operate continuously using Amazon EMR, an industry-leading cloud big data solution for petabyte-scale processing, interactive analytics, and machine learning. By using Amazon EMR to support its compute workloads, Wealthfront generates derived datasets for marketing needs, clickstream data, client financial data, and tax-related data.

Saving time is critical for Wealthfront because its customers depend on fast and efficient data pipelines to make financial investment trades. On busy trading days, all the available financial data needs to be processed before the following day's trading can be computed. "Any saved time increases our ability to start trading at the right moment," says Arup Ray, head of data engineering at Wealthfront. "Accelerating our data processing is critical from a business perspective."

Solution | Running Amazon EMR to Provide Automated Investment Services
Wealthfront has been improving its Amazon EMR infrastructure every year and wanted to take these improvements a step further by using AWS Graviton processors, which are designed to deliver the best price performance for cloud workloads running on Amazon EC2. To better support its workloads and accelerate data processing on Amazon EMR, Wealthfront migrated to AWS Graviton2 processors. As part of the upgrade, the company also did some prerequisite work that included Scala and Spark version upgrades compatible with Amazon EMR 6.2. From 2019, Wealthfront made several upgrades before migrating to Amazon EMR 6.2 in February 2022. Once EMR 6.2 was implemented, the implementation of AWS Graviton2 processors took less than a month, and the rollout was completed in March 2022. "Because of the way the code is structured to launch Amazon EMR infrastructure, the upgrade went smoothly," says Nithin Bandaru, data infrastructure engineer at Wealthfront. "We needed to make sure critical pipelines were functional and do some runtime analysis, and the entire upgrade went well."
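The case study doesn't show Wealthfront's launch code; as an illustrative sketch, running Spark on Graviton2-based (m6g) instances with the Amazon EMR release mentioned above could look like the following, with all names, sizes, and roles being assumptions:

import boto3

emr = boto3.client("emr")

# Launch a transient EMR cluster on Graviton2-based m6g instances.
response = emr.run_job_flow(
    Name="spark-graviton2-example",
    ReleaseLabel="emr-6.2.0",  # the release described in the upgrade above
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m6g.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m6g.2xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default example roles
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])

Because Graviton2 instance types are drop-in options for EMR, a change like this is often just a different InstanceType value in existing launch code, which matches the smooth rollout described above.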
"Using AWS Graviton2 processors, our pipelines run faster and cheaper, providing us with important benefits," says Bandaru. "Running our data workloads faster means downstream jobs run faster. And because Amazon EMR is one of our main expenses, the profitability of the service was important to us." Saving runtime was the main motivator for Wealthfront in using AWS Graviton2 processors. Each of the company's 500 data ingestion pipelines ingests data every day. Across all pipelines, the company has saved 8 hours of data processing a day, amounting to a runtime reduction of 5 percent. Using AWS Graviton2 processors, the data ingestion pipelines run automatically in the background while engineers work on other tasks. When these pipelines run faster, the output for the day is improved and the whole operation is completed more quickly. This faster runtime translates to more time for Wealthfront's automated investing algorithms to better manage clients' investments, and with the same instances running for a shorter duration, lower power consumption translates to a lower carbon footprint for the company.

Another major benefit of upgrading to AWS Graviton2 processors is the cost savings. "Using AWS Graviton2 processors provides, at a minimum, a 20 percent discount for the same jobs in the same amount of time compared with the old system," says Bandaru. The company has seen performance reports of higher discounts as well. Each month, the company saves 20 percent by using AWS Graviton2 processors, and implementing the service on more pipelines will offer even more savings. "The main impact of using AWS Graviton2 processors is the cost savings," says Bandaru. "As the underlying architecture of the processors changes, we will reap more benefits."

Outcome | Expanding AWS Graviton2 Processor Use for Future Growth
Wealthfront currently runs around 95 percent of its data workloads using AWS Graviton2 processors. The company serves more than 500,000 clients, and this solution can scale to support over a million clients while still producing faster runtime. "We are able to serve more clients without incurring large additional data processing costs," says Ray. "Using AWS, we've optimized our infrastructure to scale along with our company's growth. And running AWS Graviton2 processors is a cost-efficient way of improving our elasticity."

Each year during re:Invent, an AWS conference for the global cloud community, the company produces innovative ideas to help improve infrastructure and efficiency and to further reduce costs. "AWS is awesome," says Bandaru. "It has been a really nice experience working on AWS."

Benefits of AWS
20% reduction in costs
5% reduction in runtime across 500 data ingestion Spark jobs
Lowered carbon footprint

AWS Services Used
Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
AWS Graviton processors – AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. Graviton2-based instances provide the best price performance for workloads in Amazon EC2.
Amazon EMR – The industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.
Coventry University Group Empowers Next Generation of IT Professionals Using AWS Educate and AWS Academy _ Case Study _ AWS.txt
Coventry University Group Empowers Next Generation of IT Professionals Using AWS Educate and AWS Academy

About Coventry University Group
Coventry University Group is based in the United Kingdom with more than 30,000 students and more than 200 undergraduate and postgraduate degrees across its schools, faculties, and campuses.

Opportunity | Addressing the Need for Specialized Skills in an Adaptable Format
Coventry University Group is based in the United Kingdom, which is quickly establishing itself as a global tech powerhouse. In the first 6 months of 2021, $18 billion in tech funding was raised, three times the amount raised in 2020. The tech boom has led to a surge in hiring, with IT-related jobs now making up 13 percent of all vacancies in the UK. Cloud-related skills are valuable assets in today's marketplace, with available positions ranging from cloud engineering and analysis to administration and security.

Despite this demand, students pursuing careers in the IT industry face challenges in gaining the hands-on experience and résumé-boosting certifications necessary to overcome IT access hurdles. Coventry University Group saw an opportunity to help students get hands-on experience to meet UK employers' needs for trained workers with IT experience and digital skills, particularly with the cloud and cloud-based services. To meet this high demand, Coventry University Group chose Amazon Web Services (AWS) and worked with AWS Educate to design a bachelor of science degree in cloud computing.

To address student and industry needs and offer a strong foundation for future IT careers, CU Coventry, a wholly owned subsidiary of Coventry University Group, began to build bachelor of science (BSc) programs dedicated to cloud computing: a 3-year BSc degree in cloud computing and a 2-year accelerated version of the same degree. The cloud computing BSc was designed with core skills and technical knowledge components in mind, incorporating a contemporary approach to meet the digital workplace's growing and varied needs. "The ability to use cloud tools without additional cost to the students is an amazing value and helps them develop more advanced skills," says Daniel Flood, lecturer in cloud computing at CU Coventry. Working with various AWS Training and Certification features, the program helps graduates learn the skills and functions needed to keep pace with the industry.

Solution | Creating a Tech-Driven Solution
Both degrees were developed in collaboration with AWS by working backwards from the cloud skills employers are currently seeking in the UK and across the global labor market. "The approach gave us insights into what skill gaps were lacking in the industry. From there, we designed the courses, with the AWS team providing helpful inputs," says Flood. "For example, the AWS team pointed out that there was an industry need for serverless computing skills, and we integrated that into our curriculum."
In early 2019, Coventry University Group subsidiary CU Coventry piloted this approach by introducing students to cloud computing using resources from AWS Educate, which offers hundreds of hours of self-paced training and resources for new-to-cloud learners. CU Coventry's bachelor of science in cloud computing course officially began in September 2020 and has already seen success from the program's industry-driven framework.

Students successfully engaging in the program graduate with in-demand skills for careers in the cloud, including valuable experience with AWS services through AWS Academy Learner Labs. AWS Academy provides higher education institutions with ready-to-teach cloud computing curriculum to prepare students for AWS Certifications, which validate technical skills and cloud expertise for in-demand cloud jobs. "The most important thing is for the modules to reflect what the industry needs. We want students to add value to the global workforce," says Flood. Taking advantage of AWS Education Programs, CU Coventry's BSc degree in cloud computing innovates on AWS to track the IT industry's rapid pace.

Outcome | Looking to the Future of Coventry University Group's Cloud Computing Program
Looking ahead, Coventry University Group plans to expand its BSc degree in cloud computing courses to its campuses in London and Wroclaw. "The ability to have hands-on experience with AWS services—the same ones that companies use in the real world—is invaluable," said Tomasz, a student of the cloud computing course. "Once we join the workforce, we can apply our skill sets and hit the ground running."

Benefits of AWS
Hands-on learning on AWS services using AWS Academy Learner Labs
On-demand cloud skills to equip students for careers in the cloud
Increases employability by preparing students for industry-recognized AWS Certifications

AWS Services Used
AWS Educate – Build your cloud skills at your own pace, on your own time, and completely for free.
AWS Academy – Empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.
AWS Training and Certification – Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud.
AWS Certification – Validate technical skills and cloud expertise to grow your career and business.
Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker
by Simon Zamarin, Vikram Elango, Joao Moura, and Saurabh Trikande | on 26 MAY 2023 | in Amazon Machine Learning, Amazon SageMaker, Artificial Intelligence, Expert (400), Technical How-to

Text-to-image generation is a task in which a machine learning (ML) model generates an image from a textual description. The goal is to generate an image that closely matches the description, capturing the details and nuances of the text. This task is challenging because it requires the model to understand the semantics and syntax of the text and to generate photorealistic images. There are many practical applications of text-to-image generation in AI photography, concept art, building architecture, fashion, video games, graphic design, and much more.

Stable Diffusion is a text-to-image model that empowers you to create high-quality images within seconds. When real-time interaction with this type of model is the goal, ensuring a smooth user experience depends on the use of accelerated hardware for inference, such as GPUs or AWS Inferentia2, Amazon's own ML inference accelerator. The steep costs involved in using GPUs typically require optimizing the utilization of the underlying compute, even more so when you need to deploy different architectures or personalized (fine-tuned) models. Amazon SageMaker multi-model endpoints (MMEs) help you address this problem by helping you scale thousands of models into one endpoint. By using a shared serving container, you can host multiple models in a cost-effective, scalable manner within the same endpoint, and even on the same GPU.

In this post, you will learn about Stable Diffusion model architectures, different types of Stable Diffusion models, and techniques to enhance image quality. We also show you how to deploy Stable Diffusion models cost-effectively using SageMaker MMEs and NVIDIA Triton Inference Server.

[Example images and their prompts:]
Prompt: portrait of a cute bernese dog, art by elke Vogelsang, 8k ultra realistic, trending on artstation, 4 k
Prompt: architecture design of living room, 8 k ultra-realistic, 4 k, hyperrealistic, focused, extreme details
Prompt: New York skyline at night, 8k, long shot photography, unreal engine 5, cinematic, masterpiece

Stable Diffusion architecture

Stable Diffusion is a text-to-image open-source model that you can use to create images of different styles and content simply by providing a text prompt. In the context of text-to-image generation, a diffusion model is a generative model that you can use to generate high-quality images from textual descriptions. Diffusion models are a type of generative model that can capture the complex dependencies between the input and output modalities, text and images.

The original post includes a high-level architecture diagram of a Stable Diffusion model; its key elements are the following:

Text encoder – CLIP is a transformers-based text encoder model that takes input prompt text and converts it into token embeddings that represent each word in the text. CLIP is trained on a dataset of images and their captions, a combination of image encoder and text encoder.
U-Net – A U-Net model takes token embeddings from CLIP along with an array of noisy inputs and produces a denoised output. This happens through a series of iterative steps, where each step processes an input latent tensor and produces a new latent space tensor that better represents the input text.
Auto encoder-decoder – This model creates the final images. It takes the final denoised latent output from the U-Net model and converts it into an image that represents the text input.
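These three components map directly onto attributes of the diffusers pipeline object. As a quick illustrative check (not part of the original post's code), you can inspect them after loading a model:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")
print(type(pipe.text_encoder).__name__)  # CLIPTextModel: the CLIP text encoder
print(type(pipe.unet).__name__)          # UNet2DConditionModel: the denoising U-Net
print(type(pipe.vae).__name__)           # AutoencoderKL: decodes latents into images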
Types of Stable Diffusion models

In this post, we explore the following pre-trained Stable Diffusion models by Stability AI from the Hugging Face model hub.

stable-diffusion-2-1-base
Use this model to generate images based on a text prompt. This base version of the model was trained on a subset of the large-scale dataset LAION-5B, mainly with English captions. We use StableDiffusionPipeline from the diffusers library to generate images from text prompts. This model can create images of dimension 512 x 512. It uses the following parameters:

prompt – A prompt can be a text word, phrase, sentence, or paragraph.
negative_prompt – You can also pass a negative prompt to exclude specified elements from the image generation process and to enhance the quality of the generated images.
guidance_scale – A higher guidance scale results in an image more closely related to the prompt, at the expense of image quality. If specified, it must be a float.

stable-diffusion-2-depth
This model is used to generate new images from existing ones while preserving the shape and depth of the objects in the original image. The stable-diffusion-2-depth model is fine-tuned from stable-diffusion-2-base with an extra input channel to process the (relative) depth prediction. We use StableDiffusionDepth2ImgPipeline from the diffusers library to load the pipeline and generate depth images. The following are the additional parameters specific to the depth model:

image – The initial image to condition the generation of new images.
num_inference_steps (optional) – The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference. This parameter is modulated by strength.
strength (optional) – Conceptually, this indicates how much to transform the reference image. The value must be between 0–1. image is used as a starting point, with more noise added to it the larger the strength is. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. For more details, refer to the code in the GitHub repo.

stable-diffusion-2-inpainting
You can use this model for AI image restoration use cases. You can also use it to create novel designs and images from the prompts and additional arguments. This model is also derived from the base model and has a mask generation strategy: a mask of the original image specifies the segments to be changed and the segments to leave unchanged. We use StableDiffusionInpaintPipeline from the diffusers library (the original post names StableDiffusionUpscalePipeline here, which appears to be a mix-up with the upscaler model) to apply inpaint changes on an original image. The following additional parameter is specific to the inpainting model:

mask_image – An image where the blacked-out portion remains unchanged during image generation and the white portion is replaced.

stable-diffusion-x4-upscaler
This model is also derived from the base model, additionally trained on the 10M subset of LAION containing 2048 x 2048 images. As the name implies, it can be used to upscale lower-resolution images to higher resolutions.
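Before moving to the MME deployment, here is a brief illustrative example (assuming a GPU machine with diffusers installed) of generating an image locally with the base model and the parameters described above:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# guidance_scale trades prompt adherence against image quality.
image = pipe(
    prompt="New York skyline at night, 8k, long shot photography, cinematic",
    negative_prompt="blurry, low quality",
    guidance_scale=7.5,
).images[0]
image.save("skyline.png")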
Use case overview

For this post, we deploy an AI image service with multiple capabilities, including generating novel images from text, changing the styles of existing images, removing unwanted objects from images, and upscaling low-resolution images to higher resolutions. Using several variations of Stable Diffusion models, you can address all of these use cases within a single SageMaker endpoint. This means that you'll need to host a large number of models in a performant, scalable, and cost-efficient way. In this post, we show how to deploy multiple Stable Diffusion models cost-effectively using SageMaker MMEs and NVIDIA Triton Inference Server. You will learn about the implementation details, optimization techniques, and best practices to work with text-to-image models.

The following table summarizes the Stable Diffusion models that we deploy to a SageMaker MME (model size in GB):

stabilityai/stable-diffusion-2-1-base – 2.5
stabilityai/stable-diffusion-2-depth – 2.7
stabilityai/stable-diffusion-2-inpainting – 2.5
stabilityai/stable-diffusion-x4-upscaler – 7

Solution overview

The following steps are involved in deploying Stable Diffusion models to SageMaker MMEs:

1. Use the Hugging Face hub to download the Stable Diffusion models to a local directory. This will download scheduler, text_encoder, tokenizer, unet, and vae for each Stable Diffusion model into its corresponding local directory. We use the revision="fp16" version of the model.
2. Set up the NVIDIA Triton model repository, model configurations, and model serving logic model.py. Triton uses these artifacts to serve predictions.
3. Package the conda environment with additional dependencies and package the model repository to be deployed to the SageMaker MME.
4. Package the model artifacts in an NVIDIA Triton-specific format and upload model.tar.gz to Amazon Simple Storage Service (Amazon S3). The models will be used for generating images.
5. Configure a SageMaker model, endpoint configuration, and deploy the SageMaker MME.
6. Run inference and send prompts to the SageMaker endpoint to generate images using the Stable Diffusion models. We specify the TargetModel variable and invoke different Stable Diffusion models to compare the results visually.

We have published the code to implement this solution architecture in the GitHub repo. Follow the README instructions to get started.

Serve models with an NVIDIA Triton Inference Server Python backend

We use a Triton Python backend to deploy the Stable Diffusion pipeline model to a SageMaker MME. The Python backend lets you serve models written in Python by Triton Inference Server. To use the Python backend, you need to create a Python file model.py that has the following structure:

import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    """Your Python model must use the same class name. Every Python model
    that is created must have "TritonPythonModel" as the class name.
    """

    def auto_complete_config(auto_complete_model_config):

    def initialize(self, args):

    def execute(self, requests):

    def finalize(self):

Every Python backend can implement four main functions in the TritonPythonModel class: auto_complete_config, initialize, execute, and finalize. initialize is called when the model is being loaded. Implementing initialize is optional.
The execute function is called whenever an inference request is made. Every Python model must implement the execute function. In the execute function, you are given a list of InferenceRequest objects. We pass the input text prompt to the pipeline to get an image from the model. Images are decoded, and the generated image is returned from this function call.

We get the input tensors using the names defined in the model configuration file (config.pbtxt). From the inference request, we get prompt, negative_prompt, and gen_args, and decode them. We pass all the arguments to the model pipeline object, then encode the resulting image to return the generated image predictions. You can refer to the config.pbtxt file of each model, found in the model repository, to understand the different configuration details. Finally, we wrap the generated image in an InferenceResponse and return the response.

Implementing finalize is optional. This function allows you to do any cleanup necessary before the model is unloaded from Triton Inference Server.

When working with the Python backend, it’s the user’s responsibility to ensure that the inputs are processed in a batched manner and that responses are sent back accordingly. To achieve this, we recommend following these steps (sketched in the example after this list):

1. Loop through all requests in the requests object to form a batched_input.
2. Run inference on the batched_input.
3. Split the results into multiple InferenceResponse objects and concatenate them as the responses.

Refer to the Triton Python backend documentation or Host ML models on Amazon SageMaker using Triton: Python backend for more details.
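The following sketch shows that request/response pattern for a text-to-image model, assuming a single prompt per request and the tensor names used in the configuration file below; encode_image is a placeholder helper, and self.pipe is the pipeline created in initialize:

import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        # Step 1: loop through all requests to form one batched input.
        batched_prompts = []
        for request in requests:
            prompt = pb_utils.get_input_tensor_by_name(request, "prompt")
            batched_prompts.append(prompt.as_numpy().flatten()[0].decode("utf8"))

        # Step 2: run inference once on the batched input.
        images = self.pipe(prompt=batched_prompts).images

        # Step 3: split the results into one InferenceResponse per request.
        responses = []
        for image in images:
            encoded = np.array([self.encode_image(image)], dtype=object)  # placeholder helper
            out_tensor = pb_utils.Tensor("generated_image", encoded)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses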
NVIDIA Triton model repository and configuration

The model repository contains the model serving script, model and tokenizer artifacts, a packaged conda environment (with the dependencies needed for inference), the Triton config file, and the Python script used for inference. The latter is mandatory when you use the Python backend, and you should name the Python file model.py.

Let’s explore the configuration file of the inpainting Stable Diffusion model and understand the different options specified:

name: "sd_inpaint"
backend: "python"
max_batch_size: 8
input [
  {
    name: "prompt"
    data_type: TYPE_STRING
    dims: [ -1 ]
  },
  {
    name: "negative_prompt"
    data_type: TYPE_STRING
    dims: [ -1 ]
    optional: true
  },
  {
    name: "image"
    data_type: TYPE_STRING
    dims: [ -1 ]
  },
  {
    name: "mask_image"
    data_type: TYPE_STRING
    dims: [ -1 ]
  },
  {
    name: "gen_args"
    data_type: TYPE_STRING
    dims: [ -1 ]
    optional: true
  }
]
output [
  {
    name: "generated_image"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
instance_group [
  {
    kind: KIND_GPU
  }
]
parameters: {
  key: "EXECUTION_ENV_PATH",
  value: {string_value: "/tmp/conda/sd_env.tar.gz"}
}

The following table explains the various parameters and values:

Key | Details
name | It’s not required to include the name property in the model configuration. If the configuration doesn’t specify a name, it’s presumed to be identical to the name of the model repository directory where the model is stored. If a name is provided, it must match that directory name. sd_inpaint is the value of the name property here.
backend | This specifies the Triton framework used to serve model predictions. This is a mandatory parameter. We specify python, because we use the Triton Python backend to host the Stable Diffusion models.
max_batch_size | This indicates the maximum batch size that the model supports for the types of batching that can be exploited by Triton.
input → prompt | Text prompt of type string. Specify -1 to accept dynamic tensor shapes.
input → negative_prompt | Negative text prompt of type string. Specify -1 to accept dynamic tensor shapes.
input → image | Base64-encoded image of type string. Specify -1 to accept dynamic tensor shapes.
input → mask_image | Base64-encoded mask image of type string. Specify -1 to accept dynamic tensor shapes.
input → gen_args | JSON-encoded additional arguments of type string. Specify -1 to accept dynamic tensor shapes.
output → generated_image | Generated image of type string. Specify -1 to accept dynamic tensor shapes.
instance_group | You can use this setting to place multiple run instances of a model on every GPU, or on only certain GPUs. We specify KIND_GPU to make copies of the model on the available GPUs.
parameters | We set the conda environment path in EXECUTION_ENV_PATH.

For details about the model repository and configurations of the other Stable Diffusion models, refer to the code in the GitHub repo. Each directory contains artifacts for a specific Stable Diffusion model.

Package a conda environment and extend the SageMaker Triton container

The SageMaker NVIDIA Triton container images don’t include libraries like transformers, accelerate, and diffusers, which are needed to deploy and serve Stable Diffusion models. However, Triton allows you to bring additional dependencies using conda-pack. We start by creating a conda environment with the necessary dependencies outlined in the environment.yml file, and we produce a tar artifact sd_env.tar.gz containing the conda environment with the dependencies installed in it. The resulting conda-pack artifact is copied to the local directory from which it is uploaded to Amazon S3. Note that we upload the conda artifact as one of the models in the MME and invoke this model to set up the conda environment on the SageMaker hosting ML instance.

%%writefile environment.yml
name: mme_env
dependencies:
  - python=3.8
  - pip
  - pip:
    - numpy
    - torch --extra-index-url https://download.pytorch.org/whl/cu118
    - accelerate
    - transformers
    - diffusers
    - xformers
    - conda-pack

!conda env create -f environment.yml --force

Upload model artifacts to Amazon S3

SageMaker expects the .tar.gz file containing each Triton model repository to be hosted on the multi-model endpoint. Therefore, we create a tar artifact with the content of each Triton model repository. We can use this S3 bucket to host thousands of model artifacts, and the SageMaker MME uses the models from this location to dynamically load and serve a large number of models. We store all the Stable Diffusion models in this Amazon S3 location.
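A rough sketch of that packaging step, using placeholder bucket and directory names, might look like the following; all of the model tarballs must land under the same S3 prefix, which later becomes the MME’s ModelDataUrl:

import tarfile
import boto3

def package_and_upload(model_dir, bucket, prefix):
    # Tar one Triton model repository (e.g., sd_inpaint/) and upload it to S3.
    tar_path = f"{model_dir}.tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:
        # arcname keeps the repository directory as the top-level entry in the tar.
        tar.add(model_dir, arcname=model_dir)
    boto3.client("s3").upload_file(tar_path, bucket, f"{prefix}/{tar_path}")
    return f"s3://{bucket}/{prefix}/{tar_path}"

# Example with placeholder names:
# package_and_upload("sd_inpaint", "my-mme-bucket", "stable-diffusion-mme")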
Deploy the SageMaker MME

In this section, we walk through the steps to deploy the SageMaker MME by defining the container specification, SageMaker model, and endpoint configuration.

Define the serving container

In the container definition, define the ModelDataUrl to specify the S3 directory that contains all the models that the SageMaker MME will use to load and serve predictions. Set Mode to MultiModel to indicate that SageMaker will create the endpoint with the MME container specifications. We set the container with an image that supports deploying MMEs with GPU. See Supported algorithms, frameworks, and instances for more details. All of the packaged model artifacts are hosted under the Amazon S3 ModelDataUrl location:

container = {"Image": mme_triton_image_uri, "ModelDataUrl": model_data_url, "Mode": "MultiModel"}

Create an MME object

We use the SageMaker Boto3 client to create the model using the create_model API. We pass the container definition to the create_model API along with ModelName and ExecutionRoleArn:

create_model_response = sm_client.create_model(
    ModelName=sm_model_name, ExecutionRoleArn=role, PrimaryContainer=container
)

Define configurations for the MME

Create an MME configuration using the create_endpoint_config Boto3 API. Specify an accelerated GPU computing instance in InstanceType (we use the same instance type that we use to host our SageMaker notebook). For real-life use cases, we recommend configuring your endpoints with at least two instances. This allows SageMaker to provide a highly available set of predictions across multiple Availability Zones for the models.

create_endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "InstanceType": instance_type,
            "InitialVariantWeight": 1,
            "InitialInstanceCount": 1,
            "ModelName": sm_model_name,
            "VariantName": "AllTraffic",
        }
    ],
)

Create an MME

Use the preceding endpoint configuration to create a new SageMaker endpoint and wait for the deployment to finish:

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)

The status changes to InService when the deployment is successful.

Generate images using different versions of Stable Diffusion models

Let’s start by invoking the base model with a prompt and getting the generated image. We pass the inputs to the base model with prompt, negative_prompt, and gen_args as a dictionary. We set the data type and shape of each input item in the dictionary and pass it as input to the model.

inputs = dict(
    prompt="Infinity pool on top of a high rise overlooking Central Park",
    negative_prompt="blur, low detail, low quality",
    gen_args=json.dumps(dict(num_inference_steps=50, guidance_scale=8)),
)
payload = {
    "inputs": [
        {"name": name, "shape": [1, 1], "datatype": "BYTES", "data": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="sd_base.tar.gz",
)
output = json.loads(response["Body"].read().decode("utf8"))["outputs"]
decode_image(output[0]["data"][0])

Prompt: Infinity pool on top of a high rise overlooking Central Park
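The decode_image call above (and the encode_image calls that follow) are small helpers whose full definitions live in the GitHub repo; a plausible minimal implementation, assuming images travel as base64-encoded PNG strings, looks like this:

import base64
from io import BytesIO
from PIL import Image

def encode_image(img):
    # Serialize a PIL image to base64-encoded PNG bytes for the request payload.
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue())

def decode_image(data):
    # Rebuild a PIL image from the base64 string returned by the endpoint.
    return Image.open(BytesIO(base64.b64decode(data)))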
Working with this image, we can modify it with the versatile Stable Diffusion depth model. For example, we can change the style of the image to an oil painting, or change the setting from Central Park to Yellowstone National Park, simply by passing the original image along with a prompt describing the changes we would like to see. We invoke the depth model by specifying sd_depth.tar.gz in the TargetModel of the invoke_endpoint function call. In the outputs, notice how the orientation of the original image is preserved; in one example, the NYC buildings have been transformed into rock formations of the same shape.

inputs = dict(
    prompt="highly detailed oil painting of an infinity pool overlooking central park",
    image=image,
    gen_args=json.dumps(dict(num_inference_steps=50, strength=0.9)),
)
payload = {
    "inputs": [
        {"name": name, "shape": [1, 1], "datatype": "BYTES", "data": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="sd_depth.tar.gz",
)
output = json.loads(response["Body"].read().decode("utf8"))["outputs"]
print("original image")
display(original_image)
print("generated image")
display(decode_image(output[0]["data"][0]))

Original image | Oil painting | Yellowstone Park

Another useful model is Stable Diffusion inpainting, which we can use to remove certain parts of the image. Let’s say you want to remove the tree in the following example image. We can do so by invoking the inpainting model sd_inpaint.tar.gz. To remove the tree, we need to pass a mask_image, which indicates the regions of the image that should be retained and those that should be filled in. The black pixels of the mask image indicate the regions that should remain unchanged, and the white pixels indicate what should be replaced.

image = encode_image(original_image).decode("utf8")
mask_image = encode_image(Image.open("sample_images/bertrand-gabioud-mask.png")).decode("utf8")
inputs = dict(
    prompt="building, facade, paint, windows",
    image=image,
    mask_image=mask_image,
    negative_prompt="tree, obstruction, sky, clouds",
    gen_args=json.dumps(dict(num_inference_steps=50, guidance_scale=10)),
)
payload = {
    "inputs": [
        {"name": name, "shape": [1, 1], "datatype": "BYTES", "data": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="sd_inpaint.tar.gz",
)
output = json.loads(response["Body"].read().decode("utf8"))["outputs"]
decode_image(output[0]["data"][0])

Original image | Mask image | Inpainted image

In our final example, we downsize the original image generated earlier from its 512 x 512 resolution to 128 x 128. We then invoke the Stable Diffusion upscaler model to upscale the image back to 512 x 512. We use the same prompt to upscale the image as the one we used to generate the initial image. While not necessary, providing a prompt that describes the image helps guide the upscaling process and should lead to better results.

low_res_image = output_image.resize((128, 128))
inputs = dict(
    prompt="Infinity pool on top of a high rise overlooking Central Park",
    image=encode_image(low_res_image).decode("utf8"),
)
payload = {
    "inputs": [
        {"name": name, "shape": [1, 1], "datatype": "BYTES", "data": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="sd_upscale.tar.gz",
)
output = json.loads(response["Body"].read().decode("utf8"))["outputs"]
upscaled_image = decode_image(output[0]["data"][0])

Low-resolution image | Upscaled image

Although the upscaled image is not as detailed as the original, it’s a marked improvement over the low-resolution one.

Optimize for memory and speed

The xformers library is a way to speed up image generation. This optimization is only available for NVIDIA GPUs. It speeds up image generation and lowers VRAM usage. We have used the xformers library for memory-efficient attention and speed. When the enable_xformers_memory_efficient_attention option is enabled, you should observe lower GPU memory usage and a potential speedup at inference time.
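For reference, turning this optimization on in a diffusers pipeline is a single call on an already-loaded pipeline object (shown here on the pipe from the earlier sketches):

# Requires the xformers package (included in the conda environment above)
# and an NVIDIA GPU.
pipe.enable_xformers_memory_efficient_attention()

# To revert to the default attention implementation:
# pipe.disable_xformers_memory_efficient_attention()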
Clean up

Follow the instructions in the clean-up section of the notebook to delete the resources provisioned as part of this post to avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details on the cost of the inference instances.

Conclusion

In this post, we discussed Stable Diffusion models and how you can deploy different versions of Stable Diffusion models cost-effectively using SageMaker multi-model endpoints. You can use this approach to build a creator image generation and editing tool. Check out the code samples in the GitHub repo to get started, and let us know about the cool generative AI tool that you build.

About the Authors

Simon Zamarin is an AI/ML Solutions Architect whose main focus is helping customers extract value from their data assets. In his spare time, Simon enjoys spending time with family, reading sci-fi, and working on various DIY house projects.

Vikram Elango is a Sr. AI/ML Specialist Solutions Architect at AWS, based in Virginia, US. He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and architecture to build and deploy ML applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

João Moura is an AI/ML Specialist Solutions Architect at AWS, based in Spain. He helps customers with deep learning model training and inference optimization, and more broadly building large-scale ML platforms on AWS. He is also an active proponent of ML-specialized hardware and low-code ML solutions.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.
Creating Air Taxi Simulations Using Amazon EC2 with Wisk Aero _ Wisk Aero Case Study _ AWS.txt
Creating Air Taxi Simulations Using Amazon EC2 with Wisk Aero (2022)

Learn how Wisk Aero in the aerospace industry built HPC clusters and improved performance using Amazon EC2.

Wisk Aero has developed the first-ever autonomous electrical vertical take-off and landing (eVTOL) aircraft and is using Amazon Web Services (AWS) to build high performance computing (HPC) clusters to run simulations. The company relies on HPC to run computationally intensive and complex simulations, each of which uses thousands of CPU cores. Purchasing on-premises computers for its HPC workload presented challenges, such as cost and managing enough CPU cores for peak runs. Wisk Aero migrated its HPC clusters to AWS to improve job runtime, achieve scalable storage, and drive improved economics.

10–20% improvement in job runtime
Achieves high-performance, scalable storage
Satisfies NASA software requirements
Drives improved economics

Opportunity | Using Amazon EC2 to Improve Job Runtime for Wisk Aero
Wisk Aero is an aviation company focused on developing eVTOL aircraft and revolutionizing mobility through quiet, fast, and clean air travel. The company has over 10 years of experience, has locations around the world, and is backed by the Boeing Company and Kitty Hawk Corporation. To study in-flight airflow, Wisk Aero engineers perform computational fluid dynamics (CFD) simulations using in-house and NASA CFD applications, such as OVERFLOW and FUN3D. Wisk Aero relies more on CFD than traditional aircraft builders because CFD supports rapid design iteration as the team explores different aircraft designs and architectures, especially in the early phase of the design process.

The use of CFD simulations gives engineers a clear understanding of the aircraft’s expected performance under various loading and boundary conditions. Because of the novel design of Wisk Aero’s sixth-generation four-seat self-flying eVTOL, it is not possible to reuse previous simulations or design models. Wisk Aero engineers rely on HPC to run these computationally intensive and complex CFD simulations, each using thousands of CPU cores. To purchase on-premises computers for these HPC workloads, Wisk Aero would need to spend more on hardware that might go entirely unused when not running peak jobs. Wisk Aero also had to address the increased operational overhead of managing physical hardware as the size of the on-premises cluster increased. To solve these challenges, Wisk Aero turned to the AWS HPC team and Converge Technology Solutions (Converge), an AWS Advanced Consulting Partner, to assist in migrating the company’s HPC simulations to Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload.

Solution | Choosing AWS for Agility, Elasticity, Storage, and Security

The Converge client executive supporting Wisk Aero’s on-premises infrastructure introduced Converge’s Cloud Platforms team and its AWS offerings to the engineering manager of core infrastructure at Wisk Aero. Converge shared a similar use case in which Converge—through the AWS Competency Program, which highlights AWS technical expertise and specialization—had helped a client successfully migrate its HPC workload to AWS.

Converge, alongside the AWS HPC team, created a pilot environment on AWS for the Wisk Aero team. The fully funded environment helped Wisk Aero benchmark the performance of Amazon EC2 Hpc6a Instances—HPC instances powered by 3rd generation AMD EPYC processors—and run the necessary software to simulate a smooth transition to AWS. In addition to meeting technical and performance requirements, Wisk Aero worked with Converge to make sure the financial model for using AWS was also part of the pilot deliverables. Wisk Aero can benefit from cloud elasticity to help drive better economics, instead of expanding its physical footprint in its colocated data center.

Wisk Aero uses Amazon FSx for Lustre—fully managed shared storage built on the world’s most popular high-performance file system—for high-performance, scalable storage for HPC compute workloads. The company runs these workloads on AWS GovCloud (US), designed to host sensitive data and regulated workloads and address the most stringent US government security and compliance requirements. AWS GovCloud (US) satisfies the compliance requirements for the NASA software that Wisk Aero uses. In addition, test models on AWS GovCloud (US) showed a 10–20 percent improvement in runtime compared with the on-premises solution.

After the successful pilot, Wisk Aero chose to use AWS for another round of CFD simulations for its eVTOL aircraft. Now, Wisk Aero can build HPC clusters on the fly and achieve a significant performance increase over running simulations on premises. It uses purpose-built Amazon EC2 Hpc6a Instances to achieve the desired scalability alongside AWS ParallelCluster, which helps users quickly build HPC compute environments on AWS.
Outcome | Creating Innovative Technologies Using AWS

Wisk Aero’s autonomous eVTOL aircraft is the first-ever candidate for type certification by the Federal Aviation Administration and aims to make it possible for passengers to skip traffic and get to their destinations faster. By migrating its HPC to AWS, the company can run simulations more efficiently and at a lower cost. “Using AWS, we quickly scaled and added the needed on-demand compute power for the CFD team, compared with the months required and significant capital to build and scale an on-premises HPC cluster,” says Colin Haubrich, head of IT at Wisk Aero.

About Wisk Aero

Wisk Aero is an advanced air mobility company dedicated to delivering safe, everyday flight for everyone. The company is backed by the Boeing Company and Kitty Hawk Corporation.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) Hpc6a Instances – Amazon EC2 Hpc6a Instances offer the best price performance for compute-intensive high performance computing (HPC) workloads in Amazon EC2.

AWS ParallelCluster – An open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.

Amazon FSx for Lustre – Provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

AWS GovCloud (US) – Gives government customers and their partners the flexibility to architect secure cloud solutions that comply with the FedRAMP High baseline.
Creating an App for 12000 Game Show Viewers Using Amazon CloudFront with TUI _ TUI Case Study _ AWS.txt
Creating an App for 12,000 Game Show Viewers Using Amazon CloudFront with TUI (2023)

Learn how TUI in the travel industry used AWS to build a game show voting application quickly and cost effectively.

TUI Group (TUI), a leading leisure, travel, and tourism company, was seeking a way to maximize its brand exposure by creating a voting application for use on the popular Belgian television show De Mol (The Mole). The format of the game show, which pits contestants against a secret saboteur in the pursuit of cash prizes, encourages audience members to participate by voting on which contestant they believe to be the mole. By developing a branded voting application, TUI—a sponsor of the game show—would be able to put its logo in front of an in-studio audience of 12,000 people. The challenge was completing the app in just 2 weeks, in time for the show’s season finale. TUI had historically used on-premises hardware and didn’t have the agility needed to respond quickly to short-term business requirements, such as building the voting app. The company had been in the process of migrating its backend travel bookings infrastructure to the cloud for increased agility and decided to use Amazon Web Services (AWS)—namely, Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience—to create its interactive voting app.

12,000 audience members used TUI’s voting app
90% faster development time
Reduced cost of development
Achieved scalability and elasticity

Opportunity | Using Amazon CloudFront to Build a Voting Application for TUI

With roots dating back to the 1800s, TUI is one of the world’s leading travel companies and has served 27 million customers and counting. Through its 1,600 travel agencies across Europe, its line of hotels and cruise ships, and its fleet of planes, TUI helps travelers enjoy experiences in 180 destinations around the world. Sponsoring the popular game show De Mol would be an exciting way for the organization to increase brand awareness. However, when it came to building a custom, branded voting application within a tight timeframe, TUI was challenged by the limitations of its on-premises hardware, which was managed by regional teams. “We needed to build and host an application that would be used by 12,000 people for one night only, all at the same time, during each commercial break,” says Peter Timmermans, head of technology at TUI. “When you’re building for that sort of scenario using fixed, on-premises infrastructure, you have to carefully manage the limited resources that you have.”

The company decided to use AWS because of the increased agility that it could achieve using services such as Amazon CloudFront. “In the past, this sort of request would have required considerable upfront planning, design, and development work,” says Timmermans. Using AWS, TUI built its voting application quickly and cost effectively, without having to worry about resource scaling.

Solution | Creating a Positive User Experience for 12,000 Audience Members

The development team at TUI began working on the voting app just a few weeks before the season finale of De Mol. Within a matter of hours, the team had created a working prototype of the application: a static website with an embedded iFrame element containing the interactive game content. To host the site, TUI used Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. “We opted for a static website hosted on Amazon S3 for the simplicity of the solution,” says Jeroen Daemers, cloud architect at TUI. “Fronting our Amazon S3 bucket with Amazon CloudFront offered a scalable, secure delivery method for the website.”
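The case study doesn’t include TUI’s deployment scripts, but the basic shape of this architecture can be sketched with boto3; the bucket name, file, and configuration values below are placeholders, and a production setup would typically add an origin access control, TLS certificate, and error handling:

import boto3

bucket = "example-voting-app-bucket"  # placeholder name
s3 = boto3.client("s3")

# Host the static site content in S3 (assumes the default us-east-1 region).
s3.create_bucket(Bucket=bucket)
s3.upload_file("index.html", bucket, "index.html",
               ExtraArgs={"ContentType": "text/html"})

# Front the bucket with a CloudFront distribution for scalable, secure delivery.
cloudfront = boto3.client("cloudfront")
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "voting-app-example",
        "Comment": "Static voting app",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": f"{bucket}.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
print(response["Distribution"]["DomainName"])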
TUI would be placing its logo prominently within the voting application interface, so creating a great audience member experience was of paramount importance. For instance, the company wanted to give audience members the opportunity to share their game experiences on social media platforms and used Amazon CloudFront to achieve the elasticity necessary to handle the increased data load. “With our old, fixed infrastructure, that scenario would have been potentially concerning because we might not have had the resources to support additional load,” says Daemers. “We knew that Amazon CloudFront could handle any additional load and that the outcome for the business would be positive, with more individuals engaging with the brand.”

Outcome | Accelerating the Journey to the Cloud

TUI completed its voting application on time, and the app was successfully used by the 12,000 audience members in attendance at the series finale of De Mol. The company delivered a positive experience at an exciting moment for the show’s viewers, leading to positive brand impressions. “Due to the one-night-only nature of the application, we would have historically struggled to justify the expense of this project,” says Timmermans. “Using Amazon S3 and Amazon CloudFront, we could build the app in hours, at a fraction of the cost of any on-premises solution.”

“The significance of this project is how much faster we were able to respond to a business requirement,” says Timmermans. “Building this application on AWS, with the solution that we opted for, took us roughly one-tenth of the time that it would have taken with our legacy on-premises infrastructure.”

Using AWS to build its voting application quickly and cost effectively, with the elasticity necessary to support a high level of user interaction, helped TUI to demonstrate the value of increased agility. The company has used its learnings to accelerate its cloud migration for other systems, including its reservation and booking infrastructure.
TUI was able to work quickly to build and deliver its solution in time for the season finale of De Mol, making the most of its opportunity to drive brand awareness.

About TUI

TUI is a global tourism group consisting of tour operators, 1,600 travel agencies and online portals, 5 airlines, over 400 hotels, 16 cruise liners, and incoming agencies in all major holiday destinations around the world.

AWS Services Used

Amazon CloudFront – A content delivery network (CDN) service built for high performance, security, and developer experience.

Amazon Simple Storage Service (Amazon S3) – An object storage service offering industry-leading scalability, data availability, security, and performance.
Creating an Optimized Solution for Smart Buildings Using Amazon EC2 G5g Instances with Mircoms OpenGN _ Case Study _ AWS.txt
Creating an Optimized Solution for Smart Buildings Using Amazon EC2 G5g Instances with Mircom’s OpenGN (2023)

Learn how Mircom modernized OpenGN’s single pane of glass and reduced infrastructure costs 30–40 percent using Amazon EC2 G5g Instances.

Mircom, a global designer, manufacturer, and distributor of intelligent building solutions, wanted to modernize its Open Graphic Navigator (OpenGN)—a single-site digital twin and on-premises Internet of Things (IoT) software platform. Looking for a solution that managed cost while supporting and extending this graphics-intensive application for smart building monitoring, Mircom decided to use Amazon Web Services (AWS). As a result, Mircom can now use the cloud to deliver its fire alarm control panels and mission-critical building technologies, making buildings safer, smarter, and more livable. Mircom has also reduced third-party licensing costs by over 90 percent and infrastructure costs by 30–40 percent.

30–40% reduction in infrastructure costs
Over 90% reduction in third-party licensing costs
4 to 10 times increased building monitoring capability

Opportunity | Using AWS Services to Modernize OpenGN’s Graphics-Intensive Single Pane of Glass

Mircom developed OpenGN as a single-site fire alarm control management system providing monitoring of its regulatory agency-approved fire and life safety products. OpenGN displays various building experiences (single, complex, and campus) in both 2D and 3D representations. In addition, OpenGN graphically displays fire and life safety events from corresponding fire and life safety products, such as pull stations and smoke detectors. Mircom later expanded OpenGN to include other mission-critical building technologies from its product line, including building automation, communication and security, and smart technologies. As a result, OpenGN evolved into a single-site digital twin and Internet of Things software platform for on-premises building experiences.

OpenGN’s graphics-intensive workloads mandate a dedicated graphics card to accommodate all its customers’ building experiences. Although Mircom’s on-premises hardware infrastructure could support most of its customers, its largest deployments pushed OpenGN’s performance limits. The hardware infrastructure could handle approximately 250 buildings, but some current and future deployments had two to four times that number. Additionally, multiple-site deployments, requiring distributed building experiences, led Mircom to explore the feasibility of migrating its on-premises hardware infrastructure to the cloud, which ultimately increased the company’s building monitoring capability by 4 to 10 times.
Solution | Using Amazon EC2 G5g Instances with GPU Acceleration

During its search for the right cloud solution provider, Mircom discovered that AWS offered a cost-saving, high-performance solution that worked well for OpenGN’s application modernization. In early 2021, Mircom decided to use several AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for graphics-intensive workloads. After testing a few different solutions, Mircom decided to embark on refactoring and replatforming OpenGN with Amazon EC2 G5g Instances. “The can-do attitude from AWS gave us the confidence to move forward with our application modernization,” says Brian Leung, senior manager of engineering at Mircom.

Mircom chose the AWS Graviton processor, designed by AWS to deliver optimal price performance for cloud workloads running in Amazon EC2. The company selected AWS Graviton2 processors in particular, which deliver a major leap in performance and capabilities. Mircom uses the AWS Graviton2 processors to power Amazon EC2 G5g Instances, the first Arm-based instances in a major cloud to feature GPU acceleration, to further manage costs while gaining the processing power associated with GPUs to handle some of the functions that its software performs. Mircom can also move to a subscription pricing model, an option that the onsite hardware did not support as seamlessly as the cloud. This flexibility could help Mircom increase revenue while controlling its cost structure.

To mitigate the costs associated with migrating from an onsite to a cloud-hosted solution, Mircom moved from licensed to open-source software, which it could do because of the flexibility of AWS services. This shift helped the company reduce its licensing costs and prevented it from needing to repurchase licenses for cloud use. The essential open-source software used by Mircom included Ubuntu Server 18.04, an operating system; MATE Desktop Environment; MySQL Community Server 8.0, a relational database management system; and OpenVPN Access Server, a virtual private network system.

Mircom also needed a mechanism for viewing its mission-critical building infrastructure. To render a browser that lets Mircom monitor continuous connectivity between buildings and the cloud, the company used NICE DCV, a high-performance remote display protocol that provides customers with a way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions. Using NICE DCV and Amazon EC2, customers can run graphics-intensive applications remotely on Amazon EC2 instances and stream their user interface to simpler client machines, reducing the need for expensive dedicated workstations.

Using AWS, Mircom has modernized OpenGN from an on-premises single pane of glass, or single-site building experience, to a cloud-based unified pane of glass, or multiple-site cloud experience.
Outcome | Optimizing OpenGN’s Unified Pane of Glass for Price and Performance

As Mircom’s move to AWS progresses, the company scales while managing costs, gaining cost-structure flexibility, improving monitoring capability, and achieving reliable performance. Modernizing OpenGN to the cloud has helped Mircom to monitor mission-critical building technologies, such as fire detection and alarm, building automation, communication and security, and smart technologies from anywhere in the world. Mircom’s multiple-site cloud experience provides opportunities to significantly increase the breadth and depth of its customer base. “The sky’s the limit,” says Tony Falbo, founder and CEO of Mircom.

The strong cloud foundation provided by AWS gives Mircom the confidence to continue its application modernization. In the future, Mircom hopes to rearchitect and rebuild OpenGN on a serverless architecture. In the long run, Mircom is better prepared to achieve its company vision, which is “to make safer, smarter, more livable buildings in order to save lives. Working alongside AWS is helping us accomplish that,” says Leung.

About Mircom

Headquartered in Toronto, Canada, Mircom was founded in 1991 and carries requisite regulatory agency approvals from Underwriters Laboratories (UL/ULC) and Factory Mutual (FM) for all its fire and life safety products. The company is the largest independent fire alarm manufacturer and distributor in North America. Its product line spans fire detection and alarm, communications and security, mass notification, building automation, and smart technologies.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) – Offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Amazon EC2 G5g Instances – Powered by AWS Graviton2 processors and featuring NVIDIA T4G Tensor Core GPUs to provide the best price performance in Amazon EC2 for graphics workloads such as Android game streaming.

AWS Graviton Processors – Designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.

NICE DCV – A high-performance remote display protocol that provides customers with a secure way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions.
Dallara Uses HPC on AWS to Off-Load Peak CFD Workloads for Race Car Simulations _ Case Study _ AWS.txt
Dallara Uses HPC on AWS to Off-Load Peak CFD Workloads for Race Car Simulations (2022)

In April 2021, Italian race car manufacturer Dallara Automobili (Dallara) needed more high-performance computing (HPC) for simulation and testing than what was available in its on-premises environment. The company’s computational power was over-requested, leading to difficulties meeting the demands of its customers during peak season. As a major provider of commercial racing cars for prestigious championships, Dallara uses HPC to power the tests of its car designs, making HPC fundamental to its operations.

Dallara landed on Amazon Web Services (AWS) for the HPC that it needed. Using AWS, Dallara built an HPC system that met its benchmarks for performance and cost, leading the company to continue designing some of the world’s fastest and most aerodynamic vehicles. Dallara not only found the solution to its business-critical issue quickly on AWS but also benefited from its scalability and flexibility.

3x increased HPC capacity from on premises
2x scaled AWS cluster
1 month from request to going into production on AWS
5 months to build a stable infrastructure
Met a 6-month burst in customer demand

Opportunity | Encountering a Business-Critical Issue On Premises

Founded in 1972, Dallara manufactures racing cars for the IndyCar, Indy Lights, Formula 2, Formula 3, and Super Formula Championships. It produces cars for endurance races such as the 24 Hours of Le Mans and for electric car races such as the Formula E. Today, Dallara even develops road cars, drawing interest from luxury car manufacturers. Every vehicle design undergoes rigorous testing in structure, aerodynamics, and vehicle dynamics. For that, Dallara relies on more than 15 simulation and testing tools that require massive amounts of HPC, including ones that assess computational fluid dynamics (CFD). “We use CFD tools because it’s mandatory to investigate the flow fields around our cars with all the details needed to achieve our target,” says Elisa Serioli, CFD methodology team leader at Dallara.

Due to an influx of customer projects in February 2021, Dallara reached 100 percent usage of its HPC capacity on premises. Serioli and the Dallara HPC team were tasked with upgrading the company’s HPC infrastructure and outsourcing its management to a cloud provider. “Our first goal was to have a ready-to-use industrial infrastructure that would support our specific applications, huge models, and high demand for HPC,” says Serioli. “The second goal was to integrate our workflows into an external environment like the cloud.”
Solution | Launching a Scalable HPC Solution in Less Than 5 Months

Dallara sought proofs of concept from various cloud providers, yet AWS was the most responsive and supportive. Within a month of its request, Dallara was in production on AWS and running CFD simulations at scale. “The support from AWS was there every day,” says Serioli. “The flexibility and engagement from AWS were key for us.” Additionally, Dallara already used software from Ansys, an AWS Partner, as its main CFD solution, particularly Ansys Fluent, a fluid simulation software. Another reason Dallara chose AWS is that it appreciated the ability to choose the right instance for each workflow using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. For example, Dallara began using Amazon EC2 C5n Instances, which are designed for compute-intensive workloads and use the fourth generation of custom Nitro card and Elastic Network Adapter device to deliver 100 Gbps of network throughput to a single instance.

In April 2021, 2 months after beginning the build, Dallara had created an industrial infrastructure on AWS, united it with its existing workloads, and allocated resources to it. The solution was stable and operating well within 5 months of intensive use. First, Dallara linked its on-premises workloads to AWS using Amazon Virtual Private Cloud (Amazon VPC), which gives the company full control over its virtual networking environment, including Amazon EC2 resource placement, and AWS Virtual Private Network (AWS VPN) solutions that establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network.

With its cloud and on-premises environments connected, Dallara decided to migrate 80 percent of its CFD workflow to the cloud and download the least amount of data possible in order to delegate several tasks of each workflow to the cloud. “We use several software applications that each perform a different task for our complex CFD workflow, and the output of one job is the input for another,” says Serioli. The connection between the systems on AWS and on premises facilitates a transparent user experience for Dallara’s aerodynamicists, who can choose where to run each task or overall workflow. When a task runs on the cloud, the needed files are copied automatically to Amazon FSx for Lustre, which provides fully managed shared storage with the scalability and performance of the popular Lustre file system. Then an orchestrator makes all the workflows run. After every task completes, the data is downloaded to the on-premises solution and shared with aerodynamicists. Using FSx for Lustre, Dallara can scale up its file storage as needed within half an hour without any particular support. On average, Dallara can run 15 complete workflows per day.
Outcome | Meeting High-Demand HPC Needs, Now and in the Future

On AWS, Dallara could quickly put in place the HPC resources required to deliver quality racing cars to its customers during a period of high demand. The company can innovate and update its HPC by selecting the best Amazon EC2 instance for each workload. “In terms of supporting our HPC, the cloud is ready with the instances and infrastructure we need for industrial racing and motor sporting workflows, which is not easy,” says Serioli. “It was crucial to let us support our customers and do their projects.”

Dallara takes advantage of AWS ParallelCluster, an open-source cluster management tool that makes it easy for companies to deploy and manage HPC clusters on AWS. Using it, Dallara can access additional HPC resources immediately, scaling up instances almost instantaneously and adding new instance types in just 1 day. The company increased HPC capacity more than three times from on premises and has scaled the AWS cluster by two times, supporting the company in meeting a 6-month burst in customer demand. “AWS ParallelCluster is a smart, flexible tool,” says Serioli. “It helps manage the HPC, so our information technology team is not dedicated to hardware problems. We can scale on more nodes than we thought possible, sometimes scaling to more than 80 nodes.”

On AWS, Dallara found the flexibility and availability it needed. “We get resources when we need them, and we release them when we don’t, so we’re not wasting the resources or paying for what we don’t use,” says Serioli. Whereas Dallara couldn’t acquire every new release of hardware for its on-premises system, the company can access the latest technology on AWS. “The innovation is immediate and comes from the availability of new instances, which raises new ideas of how we can use the hardware to improve our workflow,” says Serioli.

About Dallara Automobili

Founded in 1972, Dallara manufactures racing cars for the IndyCar, Indy Lights, Formula 2, Formula 3, and Super Formula Championships, and it also produces road cars. Its specialties are composite materials, aerodynamics, and vehicle dynamics.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) – Offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Amazon FSx for Lustre – Provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

Amazon Virtual Private Cloud (Amazon VPC) – Gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

AWS ParallelCluster – An open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.
Dataminr Achieves up to Nine Times Better Throughput per Dollar Using AWS Inferentia _ Dataminr Case Study _ AWS.txt
Dataminr Achieves up to 9x Better Throughput per Dollar Using AWS Inferentia (2023)

Learn how Dataminr increased throughput per dollar by up to nine times using AWS Inferentia.

Dataminr, which detects high-impact events and emerging risks for corporate and government customers, wanted to increase the scale of its artificial intelligence (AI) models to provide more comprehensive event coverage by processing more data. The company uses AI to detect the earliest signals of high-impact events and emerging risks from within publicly available data in near real time. Because Dataminr employs a complex mix of machine learning (ML) models to process petabytes of data each day, scaling efficiently was a difficult task. “We wanted to continue to scale our deployment of AI models in production, but at the same time, we wanted to bend the cost curve,” says Matt Hill, director of AI engineering at Dataminr.

Up to 9x increase in data throughput per dollar
Up to 5x increase in data volume processed
Enhanced accuracy by using more complex models
Enthused development teams

Opportunity | Using Amazon EC2 to Run Highly Complex ML and AI Models

Founded in 2009, Dataminr employs over 850 people across eight global offices. Dataminr’s AI platform detects early signs of high-impact events and emerging risks in near real time from more than 500,000 publicly available data sources. The company’s alerts help customers know critical information first, mobilize for quick response, and manage crises effectively.

Speed and coverage are the key values that Dataminr strives to provide its customers. “We cover many types of events all over the world in many languages, in different formats (images, video, audio, text sensors, combinations of all these types) from hundreds of thousands of sources,” says Alex Jaimes, chief scientist and senior vice president of AI at Dataminr. “Optimizing for speed and cost given that scale is absolutely critical for our business.”

Dataminr needs to continually improve its services and features because emergency responders depend on its event alerts. The company was running its models on a mix of CPUs and GPUs, and there was no clear path toward improving its processing throughput while reducing costs. “Speed is critical for our customers because they need our services for emergency response, so our near-real-time alerts save lives,” says Jaimes. “Our corporate customers also rely on the speed of our alerting to reduce risk from events that might impact them.” Due to the size and scope of Dataminr systems, the company strives to optimize everywhere that it can. However, it’s not enough to reduce costs; each project that the company undertakes must also help it increase scale, whether in the speed of compute or the number of data sources. Dataminr uses Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute solution, to host its models at scale. “For any organization, time and money are constraints, but we wanted to continue efficiently scaling our coverage to generate additional types of alerts,” says Hill. The company started searching for a way to optimize for both speed and cost simultaneously to scale on Amazon EC2.

Solution | Increasing Data Volume Processing 5x to Enhance Crisis Response Using AWS Inferentia

Dataminr was in communication with Amazon Web Services (AWS) when it discovered AWS Inferentia, purpose-built accelerators that deliver high performance while reducing inference costs. The company then used AWS Inferentia to accomplish both its performance and cost-efficiency goals: improving data throughput and covering more data sources for first responders and corporate customers. In 2021, the company started to experiment with AWS Inferentia to optimize its Amazon EC2 spend while scaling its models. “We built on our early experiments to develop a pattern by which many common model types can be dropped into an optimization workflow,” says Hill. “Then, we used AWS Inferentia to produce and benchmark a compiled model so that we could select an optimal way to deploy it.”
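The case study doesn’t publish Dataminr’s compilation code, but on first-generation Inferentia (Inf1 instances) the usual pattern compiles a PyTorch model with the AWS Neuron SDK’s tracing API; the model, tokenizer, and input shape below are illustrative placeholders:

import torch
import torch.neuron  # AWS Neuron SDK for Inf1 (pip install torch-neuron)
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder model; any traceable PyTorch model follows the same pattern.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True).eval()

inputs = tokenizer("Example alert text", return_tensors="pt",
                   padding="max_length", max_length=128)
example = (inputs["input_ids"], inputs["attention_mask"])

# Compile for Inferentia; unsupported operators fall back to CPU automatically.
neuron_model = torch.neuron.trace(model, example_inputs=example)
neuron_model.save("model_neuron.pt")

# The saved artifact can then be benchmarked against a GPU baseline to pick
# the optimal deployment target, as described above.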
The first models produced using AWS Inferentia were deployed in spring of 2022, and the implementation process went as smoothly as possible. When there was an issue, Dataminr reached out to AWS Inferentia experts, who provided quick guidance to develop a solution. “We were able to call in an AWS expert to diagnose memory-usage patterns and optimize our approach,” says Hill. The early results were promising. “On one of our early efforts, we increased speed by five times compared to GPU-based instances on a natural-language processing task,” says Hill. “That translated into a nine-times improvement in throughput per dollar spent for our natural-language processing models.” Those initial results inspired Dataminr to move forward with the effort, which is delivering five times increased throughput per dollar or more across all the models that it optimized, including computer vision and natural-language processing. In all, Dataminr improved data throughput per dollar by five times or more on the AI models that it optimized for AWS Inferentia and realized up to nine times better throughput per dollar.

Dataminr is realizing three distinct business benefits from the project: increased scale, increased speed, and lower costs. Moreover, Dataminr is seeing increased accuracy in cases where AWS Inferentia has facilitated the use of more complex models or covers more data sources, which are vital to effective crisis-response efforts.

Developers are also enthused. Dataminr emphasizes innovation, and the engineers are excited to have a new, cost-effective way to deploy AI models beyond CPUs and GPUs. The company’s commitment to innovation is now driving an internal optimization push to automate model compilation and benchmarking. “We really like working on AWS Inferentia,” says Jaimes. “We need only a few people to get this up and running, which is great.”

Outcome | Scaling Global Alerts Using AWS Services

Operating at a global scale, Dataminr has used AWS Inferentia to both reduce costs and expand its AI capabilities. The company is confident that it can continue to increase the value that it provides its worldwide corporate and government customers with fast and accurate event alerts. “To sum up the AWS Inferentia deployment: it was an innovative way to scale our platform’s scope efficiently,” says Hill. “We’re happy to say that it produced all the promised benefits.”

Moving forward, the company is targeting improvements across corporate risk, cyber risk, and social good. Though Dataminr has access to greater scale with less spend, there are plenty of opportunities to be addressed. The company is considering using some new AWS services to help it continue improving. Among them is AWS Trainium, a high-performance ML training accelerator. “We’ll continue to explore ways to make our compute faster, cheaper, and more scalable using AWS services,” says Jaimes.
The first models produced using AWS Inferentia were deployed in spring 2022, and the implementation went smoothly. When an issue arose, Dataminr reached out to AWS Inferentia experts, who provided quick guidance toward a solution. “We were able to call in an AWS expert to diagnose memory-usage patterns and optimize our approach,” says Hill. The early results were promising. “On one of our early efforts, we increased speed by five times compared to GPU-based instances on a natural-language processing task,” says Hill. “That translated into a nine-times improvement in throughput per dollar spent for our natural-language processing models.” Those initial results inspired Dataminr to move forward with the effort, which is delivering a fivefold or greater increase in throughput per dollar across all the models it has optimized, including computer vision and natural-language processing models.

Dataminr is realizing three distinct business benefits from the project: increased scale, increased speed, and lower costs. Moreover, Dataminr is seeing increased accuracy in cases where AWS Inferentia has facilitated the use of more complex models or coverage of more data sources, which are vital to effective crisis-response efforts.

Developers are also enthused. Dataminr emphasizes innovation, and its engineers are excited to have a new, cost-effective way to deploy AI models beyond CPUs and GPUs. The company’s commitment to innovation is now driving an internal optimization push to automate model compilation and benchmarking. “We really like working on AWS Inferentia,” says Jaimes. “We need only a few people to get this up and running, which is great.”

Outcome | Scaling Global Alerts Using AWS Services

Operating at a global scale, Dataminr has used AWS Inferentia to both reduce costs and expand its AI capabilities, and it is confident that it can continue to increase the value it provides its worldwide corporate and government customers with fast and accurate event alerts. “To sum up the AWS Inferentia deployment: it was an innovative way to scale our platform’s scope efficiently,” says Hill. “We’re happy to say that it produced all the promised benefits.”

Moving forward, the company is targeting improvements across corporate risk, cyber risk, and social good. Though Dataminr now has access to greater scale with less spend, there are plenty of opportunities still to address, and the company is considering new AWS services to help it continue improving. Among them is AWS Trainium, a high-performance ML training accelerator. “We’ll continue to explore ways to make our compute faster, cheaper, and more scalable using AWS services,” says Jaimes.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications.
DB Energie Case Study.txt
DB Energie Uses Machine Learning to Enhance Sustainability and Reliability of Its Power Grid Operations

As part of the German national railway Deutsche Bahn (DB), DB Energie GmbH (DB Energie) wanted to use machine learning (ML) to help meet sustainability and electricity supply reliability goals. Its data scientists sought a cost-effective, scalable solution that would free them to focus on training models they could launch quickly into production. DB Energie turned to Amazon Web Services (AWS) and used Amazon SageMaker, which data scientists and ML engineers use to build, train, and deploy ML models with managed infrastructure, tools, and workflows. Within 1 year, DB Energie built a scalable ML pipeline that empowers fast deployment, helping to deliver agile and customer-centric data products.

Bridging the Gap between Experimentation and ML in Production

DB Energie is the main electricity provider and exclusive operator of the power grid for Deutsche Bahn. It faced strict enterprise compliance regulations as it sought to reduce the operational burden of its ML process. Initially, data scientists wrote code in their own notebooks, which limited their ability to demonstrate the practical value of their models. For example, they had developed a demand forecasting model that uses historical data to predict future energy demand but lacked a way to operationalize the insights.

With data engineers from DB Systel GmbH, the main IT provider of Deutsche Bahn and an AWS Partner, DB Energie was converting the company’s data warehouse to a data lake on AWS. DB Energie wanted to connect its ML pipeline to the data lake, which stores large volumes of raw structured and unstructured data. “We wanted to standardize how we did studies,” says Dr. Florian Senzel, lead data scientist for ML at DB Energie. “But we were puzzled by establishing the technical infrastructure.”

Building a Fully Managed ML Operations Pipeline Using Amazon SageMaker

In February 2021, three DB Energie data scientists joined two engineers from the data lake team to build an ML pipeline, with the goal of producing solutions in less than a year. They elected not to use a Kubernetes infrastructure, which might have required three full-time engineers to manage. Instead, they built a continuous integration and delivery pipeline for ML operations that activates the deployment of Amazon SageMaker services such as model training and inference. The team accesses a set of purpose-built ML tools through a web-based interface, Amazon SageMaker Studio, a fully integrated development environment for ML. “Using Amazon SageMaker Studio, we take really fast actions and provide better consulting to our clients,” says Dimitrios Avramidis, a data scientist at DB Energie. Data scientists manage their models centrally using Amazon SageMaker Model Registry, which simplifies the process of managing model versions.
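Central model management of this kind can be sketched in a few lines with the SageMaker Python SDK. The example below registers a model version in a model package group; the role ARN, container image, artifact path, and group name are hypothetical placeholders, not DB Energie’s actual pipeline.

import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Package a trained model artifact with a serving container.
model = Model(
    image_uri=sagemaker.image_uris.retrieve(
        framework="xgboost", region=session.boto_region_name, version="1.5-1"
    ),
    model_data="s3://example-bucket/demand-forecast/model.tar.gz",  # hypothetical artifact
    role=role,
    sagemaker_session=session,
)

# Register the version centrally so it can be reviewed, approved, and deployed.
model_package = model.register(
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.large"],
    transform_instances=["ml.m5.large"],
    model_package_group_name="demand-forecasting",  # hypothetical group
    approval_status="PendingManualApproval",
)
print(model_package.model_package_arn)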
Alongside units from product development, grid operations, and IT, DB Energie successfully deployed two use cases within 10 months: the demand forecasting model and a model to decrease peak energy load from train operations. “Without Amazon SageMaker, it would have been hard to deploy any of these models in such a short period,” says Senzel. Currently, the team is training models for three to four additional use cases, such as predictive maintenance and renewable energy forecasting.

Driving a Future of Sustainability through ML

DB Energie’s commitment to ML helps fulfill Deutsche Bahn’s Strong Rail initiative to improve rail travel efficiency and drive sustainability. “Using AWS, we’re establishing a data-driven culture within our company,” says Senzel. “We are showing what ML and data science can offer, answering business questions, and establishing trust in the magic of ML and artificial intelligence.”

“AWS services have empowered us to collect data and produce value for our clients with our analysis and machine learning solutions,” says Avramidis.
DBS Bank Uses Amazon ElastiCache for Redis to Run Its Pricing Models at Real-Time Speed _ DBS Bank Case Study _ AWS.txt
DBS Bank Uses Amazon ElastiCache for Redis to Run Its Pricing Models at Near Real-Time Speed (2023)

Learn how DBS Bank built its innovative Quant Pricing Engine using Amazon ElastiCache for Redis.

About DBS Bank Ltd.

Headquartered and listed in Singapore, DBS is a leading financial services group with a presence in 19 markets and over S$744 billion in assets. Named World’s Best Bank by Global Finance and Euromoney and Global Bank of the Year by The Banker, DBS provides a full range of services in consumer, small and medium enterprise (SME), and corporate banking.

Benefits

100x improvement in customer pricing query response time
Scales to support hundreds of thousands of data read/write processes per second
Significantly reduced computing costs with Amazon EC2 Spot Instances
Achieved significant cost savings in fintech vendor licensing fees
Improved revaluation and risk performance of risk engines by a few times

Opportunity | Using Amazon ElastiCache for Redis to Process Data at a Massive Scale for DBS

As one of the largest banks in Asia, DBS Bank Ltd. (DBS) offers innovative financial services to support a wide range of customers, including trading companies. Over the decades, the bank’s quantitative pricing engines have helped trading customers identify the most profitable opportunities using algorithms built in house. These engines were hosted on legacy on-premises infrastructure powered by various Windows and Linux systems with traditional databases, which were costly to maintain and difficult to scale. “In the past, what we used for our pricing models was hosted on premises, from the hardware to the software, and that limited our agility,” says Gengpu Liu, executive director of quant and tech modeling for DBS’s Treasury and Markets business. “We didn’t have the capacity to scale up whenever we needed to.”

Along with rapid market movement and the need for dynamic trading, the workload for pricing engines also varies dramatically. The on-premises infrastructure could not be scaled efficiently to meet traders’ needs, and millions of dollars were spent every year on fintech vendor licensing. DBS chose to build a cloud-based solution on Amazon Web Services (AWS), using Amazon ElastiCache for Redis, an ultrafast in-memory data store with microsecond response times, to achieve near real-time performance. With this solution, the Quant Pricing Engine (QPE), DBS processes data on a massive scale on demand and generates responses from its pricing models at high speed, improving its customers’ price discovery journeys and helping traders better manage their market risks.

Solution | Reducing Pricing Query Response Time by 100x with Amazon ElastiCache for Redis

With support from the AWS team, DBS began to build the QPE in 2018. After launching the first subsystem in September 2019, DBS built nine subsystems covering different trading activities in just 3 years. “On AWS, we took advantage of the capacity, reliability, technology, and support that we needed to build QPE,” says Liu. “With all these capabilities, we were able to deliver a powerful and reliable system in a short period of time.”

DBS uses Amazon ElastiCache for Redis as a near real-time cache to handle complicated job queues for its QPE. As a result, it has improved its pricing query response time from up to 1 minute to as fast as 0.5 seconds, a 100-times improvement in performance. “Our customers have access to prices from different banks,” says Liu. “They indicate that we’re among the fastest in the industry to provide them a price, which lets us capture more business opportunities and increase customer satisfaction.”

On AWS, DBS can access the latest technologies and seamlessly incorporate them into its solution stack. For example, it can set up ElastiCache clusters that partition data across multiple shards. At the scale of DBS’s databases, data read/write processes can happen hundreds of thousands of times per second; that load would overwhelm a traditional database immediately, but the flexible ElastiCache clusters scale to meet DBS’s demands without interruption.
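The caching pattern at the core of this design can be illustrated with a short sketch using the redis-py client against an ElastiCache for Redis endpoint. The endpoint, key scheme, TTL, and pricing function below are illustrative assumptions, not DBS’s implementation.

import json

import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="qpe-cache.example.cache.amazonaws.com", port=6379)

def compute_price(instrument_id: str) -> dict:
    """Placeholder for an expensive quant pricing model run."""
    return {"instrument": instrument_id, "price": 101.25}

def get_price(instrument_id: str) -> dict:
    key = f"price:{instrument_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # sub-millisecond cache hit
    price = compute_price(instrument_id)    # fall back to the pricing engine
    cache.setex(key, 5, json.dumps(price))  # short TTL keeps quotes fresh
    return price

print(get_price("FX-USDSGD-1M"))

A short time to live keeps cached quotes consistent with fast-moving markets while still absorbing the bulk of repeated queries.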
Powered by ElastiCache for Redis and other services, DBS has achieved virtually infinite scalability for its pricing engines, which is key to fulfilling fluctuating computing needs in its trading business. In the cloud, DBS quickly provisions capacity as needed using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service, in conjunction with ElastiCache for Redis. “Previously, setting up an on-premises infrastructure was a painful task that involved tedious resource acquisition and lengthy provisioning activities,” says Liu. “It would take months for the infrastructure to be ready to use. On AWS, it can be done in 1 minute.”

DBS can effectively scale its QPE to meet customers’ pricing requests. Hundreds of millions of tasks are processed daily, amounting to an estimated 10 TB of data per day, and the company has scaled up to 5,000 CPUs on Amazon ECS, with room to scale further if needed. “The best benefit of the cloud is on-demand capacity,” says Liu. “We can provision resources from AWS for whatever we need, whenever we need them. For the nature of our job, AWS is a perfect fit.”

In addition to scalability and performance benefits, DBS has reduced its pricing engine costs. The bank no longer pays millions of dollars in annual licensing fees, and it achieved further savings by adopting Amazon EC2 Spot Instances, which run fault-tolerant workloads at up to a 90 percent discount compared to On-Demand Instances.

DBS can also access a variety of services, capacities, and capabilities on AWS, such as CPU and GPU instances, and can thus adopt the most efficient solution for each workload. This agility is a major advantage for the bank, which powers many different use cases. “We can choose AWS services based on our job nature,” says Liu. “With its suite of services, there is always something that suits our purpose, which is good.”

Outcome | Continuing to Develop Cutting-Edge Financial Models for QPE on AWS

Harnessing ultrafast performance and agility, DBS will continue to expand its QPE with even more cutting-edge solutions. Next on DBS’s road map is to build machine learning and artificial intelligence solutions on AWS and incorporate advanced analytics into its QPE. “We’re always looking for new ways to boost efficiency, improve performance, reduce costs, and explore opportunities,” says Liu. “On AWS, we can always find new solutions to help achieve our goals.”

AWS Services Used

Amazon ElastiCache is a fully managed, Redis- and Memcached-compatible service delivering real-time, cost-optimized performance for modern applications.

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications.

Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud.
DCI Saves 27 on Cloud Costs Gains Support for Long-Term Growth Using AWS _ Amazon EC2.txt
DCI Saves 27% on Cloud Costs, Gains Support for Long-Term Growth Using AWS (2022)

About DCI

Digital Commerce Intelligence (DCI) provides intelligence about online market trends, competitors, and brand performance, allowing its customers to plan corporate strategy based on data. It was founded in 2018 and is based in Singapore, with offices in Singapore and Greece.

Benefits of AWS

Reduced monthly cloud costs by 27%
Migrated services in 6 months
Improved customer support and guidance
Gained insight into use of cloud services

Founded in 2018 in Singapore, DCI saw that ecommerce businesses in Southeast Asia were operating blind and making decisions on intuition rather than data. DCI makes timely ecommerce market intelligence available to businesses to help them make better commercial decisions, providing insights on market sizing, trends, competition, and brand performance to customers throughout Southeast Asia.

To provide that market intelligence, DCI uses a proprietary algorithm that acquires publicly available real-time data from top ecommerce platforms and converts it into ready-to-use sales performance insights that customers can view on interactive dashboards. This allows customers to plan ecommerce strategy based on data, not guesswork. “If you’re selling products online, you need to know if you’re doing it as fast as your competitors, or if you’re a leader, in last place, or in the middle,” says Kyriakos Zannikos, founder and chief executive officer (CEO) at DCI. “Our solutions give you that critical information.”

Migrating to AWS in 6 Months and Gaining a Cloud Guide

DCI was a little more than two years old when an investor suggested the company migrate to Amazon Web Services (AWS) to avoid the kind of billing issues it had with its previous cloud provider. On multiple occasions, DCI received higher-than-expected charges for routine usage, which meant Zannikos had to spend time trying to resolve billing with the provider. “We are a startup—we cannot have a resource dedicated to managing the cloud service charges,” says Zannikos. “That’s not our focus. We’re trying to build our product.” As DCI grew, its provider also lacked the flexibility the company needed, resulting in unpredictable compute and database costs.

The company migrated to AWS in 6 months using the AWS Startup Program, which offers a broad range of events to support startups as they launch, grow, and scale. DCI’s participation in AWS Activate, which offers free tools, resources, and more to help startups quickly begin using AWS, meant that it could move fast, using guidance from its account team and AWS support engineers.

DCI had built proprietary tools optimized for its previous provider, so in addition to migrating compute and data to AWS, it needed to update and test those tools. DCI migrated its data collection tools, SQL Server, messaging queue, Kubernetes clusters, image registry, and compute to AWS. It is now running about 65 percent of its systems on AWS and intends to move the rest after its remaining tools are updated. “In contrast to our previous provider, AWS provides a feature-rich and configurable cloud experience,” says Cavan David, software development lead at DCI. “With the help of the AWS team, we were able to migrate our systems from the previous cloud service to AWS in a couple of months with just a team of two engineers and without a lot of DevOps know-how.”

DCI uses Amazon RDS for SQL Server, which makes it easy to set up, operate, and scale SQL Server deployments in the cloud, to ingest and process data. It also uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. Amazon CloudWatch was added to gain observability of DCI’s AWS resources and applications.
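As a sketch of the kind of observability DCI gained, the following publishes and reads back a custom metric with Amazon CloudWatch via boto3. The namespace, metric name, and dimensions are hypothetical, not DCI’s actual monitoring setup.

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-1")

# Publish a custom metric, e.g. rows ingested by a data-collection job.
cloudwatch.put_metric_data(
    Namespace="DCI/Ingestion",  # hypothetical namespace
    MetricData=[{
        "MetricName": "RowsIngested",
        "Dimensions": [{"Name": "Pipeline", "Value": "ecommerce-crawler"}],
        "Value": 125000,
        "Unit": "Count",
    }],
)

# Read the metric back to spot ingestion slowdowns over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="DCI/Ingestion",
    MetricName="RowsIngested",
    Dimensions=[{"Name": "Pipeline", "Value": "ecommerce-crawler"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])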
DCI also found that the support at AWS helped it make better choices for the company overall. “We wanted to have an account manager from our cloud services provider who could guide us,” says Konstantinos Kitsaras, chief technology officer (CTO) at DCI. “We wanted someone to help us select the right services, evaluate our architecture, and evaluate workloads. Someone who would share knowledge with us. We got that from AWS.”

Monthly Cloud Costs Cut by 27% Using AWS

So far, the migration to AWS has reduced monthly IT costs by 27 percent. Those savings matter because, to run its algorithms and deliver results to its customers, DCI needs to ingest and process a lot of data. These results give DCI customers the market insights they need to run their businesses more intelligently.

The AWS team has also provided better cost control and support for DCI. “Lower costs mean we can spend more on people and on product development—things that make the business more competitive,” says Kitsaras. “We now have a deeper understanding of how we use our cloud services. The insights we get from CloudWatch, for example, help us react quickly to any infrastructure issues that may affect our customers. We also have responsive support to help us if we ever have a question. As a market intelligence company, we see the value of what we’ve gained by using AWS.”
Deep Pool Optimizes Software Quality Control Using Amazon QuickSight _ Deep Pool Case Study _ AWS.txt
Deep Pool Optimizes Software Quality Control Using Amazon QuickSight (2023)

Learn how Deep Pool Financial Solutions democratized access to business intelligence using Amazon QuickSight.

About Deep Pool Financial Solutions

Deep Pool Financial Solutions is an investor servicing and compliance solutions supplier, providing software and consulting services to the world’s leading fund administrators and asset managers.

Benefits

154% increase in software testing
57% decrease in software issues logged
Improved software quality control
Analyzed previously inaccessible data
Increased development efficiency

High-quality software is paramount in the financial services industry, and Deep Pool Financial Solutions (Deep Pool) constantly seeks ways to deliver optimal solutions to its clients. The company, which builds digital solutions for fund administrators and asset managers, collected large amounts of data from its project-management software. This data could be used to increase operational efficiency and thereby improve the quality of Deep Pool’s solutions, but siloed systems made the business intelligence difficult to access.

During a larger migration to Amazon Web Services (AWS), Deep Pool discovered Amazon QuickSight, a cloud-native service that powers data-driven organizations with unified business intelligence at hyperscale. Using this service, the company can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural-language queries. Since adopting QuickSight, Deep Pool has democratized access to previously unused data, unlocking key insights that improve the overall quality of its software.

Opportunity | Using Amazon QuickSight to Improve Software Development

In the highly regulated financial services industry, reports need to be as accurate as possible. Companies need software solutions they can trust, and Deep Pool incorporates rigorous quality controls into its workflows to meet and exceed its clients’ standards. During its lift-and-shift migration to the AWS Cloud, the AWS team introduced Deep Pool to QuickSight, and the company quickly realized that, by using the service on top of its project-management system, it could identify areas for improvement and deploy key strategies to improve the quality of its solutions. “Amazon QuickSight would be an excellent foray into managing the data that we were collecting,” says Brett Promisel, chief operating officer at Deep Pool. “This solution provided the means to use previously inaccessible data and track key performance indicators involving software tests, failures, and successful fixes.”

Solution | Unlocking Previously Inaccessible Data

Using QuickSight, Deep Pool can analyze software development data at a granular level and provide business intelligence to its entire organization. The company has seven development squads that work independently to build components of its software. With QuickSight, Deep Pool can track, for each squad, data such as the number of software tests performed, the number of tests failed, whether any bugs were found, and when those issues were addressed. It can even trace software bugs to their source, which makes it simple to locate areas for improvement.

By unlocking these critical insights, Deep Pool can take targeted actions to streamline the development cycle and improve software quality. Since the move to AWS, Deep Pool has increased software testing by 154 percent, while the number of issues it has discovered and logged has dropped by 57 percent. “We’re just getting into the value of using Amazon QuickSight,” says Promisel. “But we’ve already proven that we can use it to measure our goals of quality control and improvement, which helps our customers as well as our internal efficiencies.”
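Dashboards like these can also be surfaced inside internal tools. As one illustration, the sketch below generates an embeddable dashboard URL with boto3; the account ID, user ARN, and dashboard ID are hypothetical placeholders, and the case study does not describe Deep Pool’s setup at this level of detail.

import boto3

quicksight = boto3.client("quicksight", region_name="eu-west-1")

# Generate a time-limited URL that renders a dashboard for a registered user.
response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="123456789012",  # hypothetical account
    UserArn="arn:aws:quicksight:eu-west-1:123456789012:user/default/qa-lead",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "software-quality-dashboard"}
    },
    SessionLifetimeInMinutes=60,
)
print(response["EmbedUrl"])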
Outcome | Improving Client Satisfaction with High-Quality Digital Solutions

Because Deep Pool’s project-management tool also tracks customer support requests, the company can use QuickSight to make sure that each ticket is resolved promptly and to the customer’s satisfaction. It can also identify trends, such as multiple customers encountering the same roadblock, and take corrective action when necessary. “On Amazon QuickSight, we have a log of every customer’s request, the age of that request, how it’s being resolved, and so forth,” says Promisel. “We can use this solution to not only optimize our internal approach to development but also to track how the client perceives our service.” Since it began using Amazon QuickSight, Deep Pool has improved client satisfaction by 16 percent.

AWS services such as QuickSight will continue to be critical tools for Deep Pool. The company is currently exploring ways to implement QuickSight in other workflows and unlock powerful insights about software performance, sales data, and client assets and holdings. “The options are seemingly infinite in terms of what we can do using AWS, and I know we’re just starting down that path,” says Promisel. “Amazon QuickSight is bringing the full picture of our business intelligence together.”

Going forward, Deep Pool plans to invest in AWS Training and Certification, which organizations can use to be more effective in the cloud, to continually improve its internal cloud skills and software quality. It has already participated in courses such as AWS Cloud Practitioner Essentials, which gives individuals, independent of their specific technical roles, an overall understanding of the AWS Cloud, and AWS Technical Essentials, which teaches about AWS products, services, and common solutions. “As we think about the future of our products, we want our staff to be innovative,” says Promisel. “To keep up, we want to continue to invest in our employees to make sure that they can perform at the highest level.”
Delivering a Seamless Gaming Experience to 25 Million Players Using AWS with Travian Games _ Travian Games Case Study _ AWS.txt
Delivering a Seamless Gaming Experience to 25 Million Players Using AWS with Travian Games (2023)

Learn how Travian Games achieved scalability and reliability by migrating to AWS.

About Travian Games

Founded in 2005, Travian Games is a strategy game studio known for titles including Travian: Legends and Rail Nation. The company, which has a community of 25 million players, makes both turn-based and near-real-time titles.

Benefits

Improved game reliability
Optimized to accommodate player needs
Unlocked cost-saving opportunities
Scaled migration scope
Liberated developers from code reviews

When Travian Games (Travian) wanted to achieve high reliability for its titles, the strategy game studio needed a new solution to support its 25 million registered players. The studio focused on eliminating stability issues to make it simpler for developers to concentrate on creating new features. As Travian continued to release near-real-time games, reliability would be essential for players to have consistent access to their game worlds.

To deliver a seamless experience to its loyal player base, Travian migrated to Amazon Web Services (AWS). “We were searching for someone who really understands our business, someone who’s there to help us make our games better,” says Joerg Strathaus, chief executive officer (CEO) of Travian Games. “Collaborating with the AWS team has been amazing.” The studio used AWS for Games, purpose-built game development capabilities, to implement its initiative. Now, Travian players are enjoying greater game stability, its developers don’t have to spend weeks troubleshooting reliability issues, and its leaders are using data to drive business intelligence.

Opportunity | Using AWS to Deliver a Reliable Gaming Experience for Travian

Founded in Germany in 2005, Travian creates strategy games such as Travian: Legends, Crowfall, and Rail Nation. Its titles are 4X games, which means that players explore, expand, exploit, and exterminate within the game world. “When we’re talking about a game like this, stability is crucial because the games take place in near real time,” says Strathaus.

In 2015, Travian migrated to a private cloud, and in 2020 it changed its architectural approach and began using a managed Kubernetes service from a different cloud provider. However, the studio still lacked the stability it needed. “We had outages pretty much every day,” says Daniel Thoma, head of technical operations at Travian Games. “Our developers would spend weeks combing through code trying to find the fault, but they never found anything.” On several occasions, the studio had to implement rollbacks that restored a game to 48-hour-old backups, a frustration point for both Travian and its customers.

Travian needed a more stable service that could handle Kubernetes. The studio was initially hesitant to use AWS because the offerings are so vast that it worried it would be overwhelmed. However, as the need for reliability became paramount, Travian decided to give AWS a try. “We spoke with people at AWS and had the feeling that they want to help us grow,” says Strathaus. “That is exactly what we were looking for.” Travian realized that AWS was willing to collaborate to help the studio learn how to use AWS services to improve its games. The studio scheduled six special workshops, AWS Immersion Days, to learn how to get the most out of AWS services, and it started using AWS in 2021. Within 1 year, Travian’s two biggest games were running completely on AWS.

Solution | Collaborating with the AWS Team to Create a Resilient Infrastructure

When Travian migrated its first game to AWS, the results were immediate. “We have no issues,” says Thoma. “The result is what counts, and it’s running. The players don’t see time-out errors. It’s stable. It’s reliable.” On AWS, Travian uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed service to run Kubernetes in the cloud. Using the managed worker nodes within Amazon EKS, Travian has a deeper level of control over its container deployments than it did before. Additionally, Travian gained greater scalability by using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, so the studio can react more quickly to changes in player demand.

Migrating to AWS also saved time for Travian developers, who no longer need to spend days or weeks combing the code for errors; instead, they can develop the code further and add value to the games. Travian also increased reliability and reduced the burden on developers by using Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. “Before we migrated to AWS, the database was not corresponding with the web server fast enough,” says Thoma. “Now, our teams use Amazon RDS easily without doing any of the configuration work that used to be necessary.”

After migrating to Amazon RDS, Travian collaborated with the AWS team to optimize its spending. AWS recommended next-generation Amazon RDS General Purpose gp3 storage volumes for Rail Nation, and using gp3 volumes, Travian reduced the size of its databases by 50 percent while increasing the rate of input/output operations per second (IOPS).
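A storage change of this kind can be sketched with boto3; because gp3 decouples provisioned IOPS from volume size, a database can shrink its allocated storage while raising throughput. The instance identifier and sizing values below are illustrative assumptions, not Travian’s actual configuration.

import boto3

rds = boto3.client("rds", region_name="eu-central-1")

# Move an RDS instance to gp3 storage, setting size and IOPS independently.
rds.modify_db_instance(
    DBInstanceIdentifier="rail-nation-db",  # hypothetical instance
    StorageType="gp3",
    AllocatedStorage=400,  # gp3 allows provisioned IOPS independent of size
    Iops=12000,
    ApplyImmediately=True,
)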
While the increased reliability and new tools have been crucial for Travian, collaborating with the team at AWS has been a major benefit as well. “The most important part of choosing a service provider for me was to find a ‘partner in crime,’ a collaborator who really understands our business and who is there to help us,” says Strathaus. “I’m really happy that we made this move to AWS for Games.”

Outcome | Engaging Gamers Using AWS

Travian is now migrating its business intelligence systems to AWS using Amazon Redshift, which uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes. Using data analytics, Travian will be able to analyze player behavior in its games based on the 11 TB of data it collects each month and make improvements. “It used to be impossible for us to do this at this scale,” says Strathaus. “We’re looking forward to using analytics to improve our games further on AWS.”

Equipped with its new tools, Travian feels confident that it can continue improving and expanding its game worlds on AWS, and the studio is now working to enhance its browser games. “We know that we can call AWS whenever we have a question, and the team will be there to support us,” says Strathaus. “We’re happy to have found a team that will collaborate with us into the future.”

AWS Services Used

AWS for Games aligns purpose-built game development capabilities, including AWS services, AWS solutions, and AWS Partners, to help developers build, run, and grow their games.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model.

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Delivering Engaging Games at Scale Using AWS with Whatwapp _ Case Study _ AWS.txt
Delivering Engaging Games at Scale Using AWS with Whatwapp (2023)

Learn how gaming company Whatwapp achieved scalability, availability, and control of its data using AWS solutions.

About Whatwapp

Founded by university students in 2013, Whatwapp is a gaming company that provides social video-game versions of classic cultural games. As of 2023, Whatwapp averages 900,000 monthly active users, playing as individuals and clubs.

Benefits

66% reduction in time to share game features
Increased visibility into game and code performance
900,000 monthly and 300,000 daily users

Gaming company Whatwapp needed to standardize its infrastructure to save engineering time, support player retention, and avoid ever-increasing technical debt. The company wanted to streamline its backend infrastructure to provide a consistent, optimized player experience, but rewriting feature implementations to share among its games was time consuming and led to inconsistencies, complexity, and incompatibility. Because Whatwapp had been using Amazon Web Services (AWS) for its internal operations since its inception, it decided to migrate its games’ backend and unify implementations on AWS through Nakama, an open-source distributed social and near-real-time server for games and apps provided by Heroic Labs, an AWS Partner.

Opportunity | Using AWS to Create Standardized Gaming Infrastructure for Whatwapp

Whatwapp was founded in Milan in 2013 by a small team of university students who wanted to reinvent classic cultural card games as video games. A decade later, the app had 29 million downloads, with averages of 900,000 monthly and 300,000 daily users. As it grew, Whatwapp needed to improve scalability and backend management for its games. “At the beginning, we explored different technologies, people were coming and going, and we were changing very quickly,” says Ricardo Gonzalez, technical lead at Whatwapp. The company needed a way to more easily share and manage common components, such as databases and authentication, and features, such as leaderboards and player-to-player challenge matchmaking. Implementing new features took up too much valuable engineering time, and difficulties maintaining compatibility among game clients led to ever-increasing technical debt and forced updates that threatened to harm user retention.

To solve these problems, Whatwapp looked to standardize its game infrastructure. “We’re now trying to put down common standards among games, with best practices and a common core, automating as much as possible,” says Gonzalez. Whatwapp turned to AWS in its effort to standardize its backend operations, avoid constant rewriting, and maintain compatibility with older versions. “We already had an AWS account, so migrating our games to AWS was the best choice for us,” says Gonzalez. One of the services Whatwapp was already using for its backend operations was Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service, so it elected to host the Nakama solution on its own Kubernetes clusters using Amazon EKS.

Solution | Accommodating 40,000 Simultaneous Players Using Nakama on Amazon EKS

In 2022, Whatwapp conducted a smooth migration, with limited disruption to its live games, moving its backend operations to Nakama running on its own Kubernetes clusters on Amazon EKS. By pairing its use of AWS services with Nakama, Whatwapp now has a scalable game server that can accommodate 40,000 simultaneous players, and it gains visibility, time savings, and feature improvements. “Nakama was the game service provider that had all the features that we needed out of the box,” says Giovanni Piumatti, technical lead at Whatwapp. “Our games were already live, and we had a large number of active users. It also let us run code in JavaScript, which allowed us to start from our existing codebase, and that made the migration a lot easier.”

[Figure 1: Whatwapp Architecture Diagram]

Managing Nakama on Amazon EKS gives Whatwapp greater visibility, meaning the company can alleviate gaming bottlenecks and identify underperforming code. “Now we can see bottlenecks and improve our code. We know how to improve our code base to get the best out of both Nakama and AWS,” says Gonzalez. Sharing features among games now takes approximately one-third of the time it used to, so developers no longer need to rewrite code for each individual technical stack or push out critical updates to players, and the time saved can be spent creating new features that engage players and drive retention.

Because Whatwapp’s games are social multiplayer games, matchmaking, which pairs individuals and teams at comparable challenge levels, is particularly critical to user experience and, ultimately, retention. Whatwapp developed its own asynchronous matchmaking feature, which it manages using Nakama, and it runs a number of other social and competitive APIs on Nakama as well, including logins, authentication, chat, near-real-time parties, tournaments, and leaderboards.

Behind the Nakama solution running on Amazon EKS, Whatwapp uses a suite of AWS services to run its internal operations and improve the gaming experience for its players. For cost-effective storage, Whatwapp uses Amazon Simple Storage Service (Amazon S3), an object storage service offering scalability, data availability, security, and performance. For data ingestion, Whatwapp migrated to Amazon Kinesis Data Streams, a serverless streaming data service that makes it simple to capture, process, and store data streams at virtually any scale. To deliver content for its games, Whatwapp uses Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. Developing its infrastructure on AWS has the added benefit of making Whatwapp more attractive to new DevOps talent, who prefer to work with updated, agile technology.
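The ingestion path through Kinesis Data Streams can be sketched with boto3. The stream name, Region, and event shape below are illustrative assumptions, not Whatwapp’s actual schema.

import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-south-1")

# A hypothetical gameplay event emitted by the game backend.
event = {
    "event_type": "match_completed",
    "player_id": "player-42",
    "club_id": "club-7",
    "score": 320,
}

# Write the event to the stream; the partition key keeps one player's events ordered.
kinesis.put_record(
    StreamName="gameplay-events",  # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["player_id"],
)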
Outcome | Attracting Players and New Talent with Improved User Experience and Faster Delivery

Whatwapp is now focused on using Nakama to perfect its original games, building consistency across versions and laying the groundwork for innovation and expansion. Better social and competitive game features make competitions more compelling, and the modernized infrastructure makes it easier for Whatwapp’s engineers to create and share features. Most importantly, the improvements are passed along to players. “Using AWS for our new infrastructure, we deliver content to players faster, without forcing them to download any updates,” says Piumatti. “They can use it almost as quickly as we can deploy it.”
Delivering Innovative Visual Search Capabilities Using AWS with Syte _ Syte Case Study _ AWS.txt
Delivering Innovative Visual Search Capabilities Using AWS with Syte (2023)

Learn how Syte is driving innovation and ecommerce performance with its visual discovery service using AWS.

About Syte

Syte drives ecommerce performance for fashion, jewelry, and home decor retailers with intuitive search experiences powered by visual artificial intelligence. Its solutions include visual search, artificial intelligence product tagging, and personalized recommendations.

Benefits

42% reduction in cost per transaction
200% increase in traffic
177% average increase in customers’ conversion rates
Improves scalability without growing head count

Having developed one of the first product discovery solutions, Syte constantly looks for ways to optimize. Using machine learning (ML) models and artificial intelligence, the startup makes it possible for retailers to integrate advanced visual search capabilities and personalized product recommendations into their ecommerce storefronts, helping them boost key performance indicators such as average order value, average revenue per user, and conversion rate. To better support its customers, the startup chose to optimize by migrating to Amazon Web Services (AWS). It adopted services such as Amazon OpenSearch Service, an open-source, distributed search and analytics suite, to power its features, and it has since reduced cost per transaction by 42 percent, improved its response times, and increased traffic by 200 percent, positioning itself to grow alongside its customers.

Opportunity | Using AWS Services to Drive Innovation and Optimize Costs for Syte

Founded in 2015, Syte helps fashion, home, and jewelry brands make every product visually discoverable, helping shoppers find what they’re looking for. Using Syte’s visual discovery service, retailers can recommend products to shoppers, improve the searchability of their product catalogs, and increase revenue and conversion rates. “Our solutions meet the customers at every point in their ecommerce journeys to deliver a seamless experience,” says Gina Yuter, partnership manager at Syte. “These features include image search, visual search, automated product tagging, several recommendation engines, advanced personalization, omnichannel solutions, and many more.”

While using another cloud provider, Syte was responsible for managing its own infrastructure. This was time consuming for the startup, which sought to reduce manual effort and improve its scalability so that it could grow with its customers; it also wanted to reduce costs so that it could pass the savings along to retailers. Syte realized that it could optimize its business by migrating to AWS and adopting fully managed cloud services. “Most of our team has strong knowledge of AWS services, and we felt comfortable running our services on AWS,” says Yair Green, vice president of research and development at Syte. “We also believed that AWS services would be more cost effective compared with our previous solution.”

Solution | Boosting Customer Conversion Rates by 177% with Innovative Capabilities on AWS

Over a period of 3 to 4 months, Syte migrated its technology stack to AWS and then optimized its solution for cost, performance, and availability, performing critical upgrades to its application and infrastructure. To host critical databases, Syte adopted Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and it containerized its features using Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that runs Kubernetes in the AWS Cloud and on-premises data centers. By adopting these managed services, Syte has improved its scalability, reduced its applications’ response times, and cut its cost per transaction by 42 percent. “Using AWS managed services, we can maintain our infrastructure without needing to increase headcount,” says Green. “We can keep our team the same size but still grow with our customers.”

Syte’s ML models are the foundation of its customer offerings. To host them, the startup adopted Amazon SageMaker, a service used to build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. Using these models, Syte can automatically extract data from an image or from its customers’ product catalogs to support various services. “At Syte, our innovation is in data science,” says Green. “We use Amazon SageMaker to serve and run our ML models. We can build more and more algorithms that we can use for different products.” For example, Syte’s camera search feature can analyze an image uploaded by a shopper and display products similar to the ones in the picture. The startup also uses ML models to display dynamic product recommendations based on predictive AI models, and its discovery icon helps shoppers explore similar items if their desired product is out of stock.

Syte relies on Amazon OpenSearch Service as the core database for its visual search data to reduce complexity, response times, and cost. Using this fully managed service, Syte can support complex search queries in 10 different languages and deliver faster results for users. “Before adopting Amazon OpenSearch Service, we had to manage the search database by ourselves,” says Green. “Now, we do not need to worry about maintenance, upgrades, or backups. Using Amazon OpenSearch Service saves us a lot of time and effort.”
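Visual search over image embeddings is commonly implemented with a k-NN (nearest-neighbor) query, a capability of OpenSearch. The sketch below runs such a query against an Amazon OpenSearch Service domain using the opensearch-py client; the endpoint, index, field name, and query vector are hypothetical, and Syte’s actual query design is not described in the case study.

from opensearchpy import OpenSearch

# Hypothetical Amazon OpenSearch Service domain endpoint.
client = OpenSearch(
    hosts=[{"host": "search-visual-search.example.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# Placeholder embedding; real image embeddings have hundreds of dimensions.
query_embedding = [0.12, -0.03, 0.56]

# Find the ten products whose stored embeddings are closest to the query image.
response = client.search(
    index="product-embeddings",  # hypothetical index with a knn_vector field
    body={
        "size": 10,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": 10}}},
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_id"], hit["_score"])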
Since optimizing on AWS, Syte has seen a 200 percent increase in traffic and has boosted its revenue, and it continues to deliver innovative search capabilities that drive powerful ecommerce results. “On average, our customers are seeing average order value increases of 11.5 percent, average conversion rate increases of 259 percent, and average revenue per user increases of 300 percent for shoppers exposed to Syte solutions,” says Yuter. Using its visual discovery solution, Syte helped Signet Jewelers, a major luxury jewelry retailer in the United Kingdom, increase its conversion rate by 580 percent and average revenue per user by 584.5 percent for website shoppers exposed to the product recommendations. The startup also increased conversions for furniture retailer Coleman Furniture by a factor of 7.1 and helped fashion company Tally Weijl increase average revenue per user by 375 percent.

Outcome | Continuing to Build on AWS and Deliver Advanced Search Services to Retailers

Now that Syte has migrated its technology stack to AWS, it plans to expand its footprint. The startup has become an independent software vendor and has completed its listing in AWS Marketplace, where customers can find, test, buy, and deploy software that runs on AWS. Syte has also become an AWS Retail Competency Partner, an AWS Partner recognized for providing innovative technology offerings that accelerate retailers’ modernization and cloud journeys. As it continues to grow, Syte plans to use AWS services and resources to enhance its visual search capabilities and support its customers. “In the AWS community, we all want to help and advance our projects,” says Yuter. “We have felt very supported.”
Delivering Travel Deals across 110 Markets Using Amazon CloudFront with Skyscanner _ Case Study _ AWS.txt
Delivering Travel Deals across 110 Markets Using Amazon CloudFront with Skyscanner

2023

Learn how Skyscanner in the travel industry scales to three billion monthly API requests using Amazon CloudFront.

Overview

As a global leader in travel, Skyscanner Ltd. (Skyscanner) made the strategic decision to operate in one cloud environment as a means to future-proof its environment and identify opportunities for cost savings. Because the company serves 100 million people each month through its travel marketplace, fault tolerance was a high priority for Skyscanner while consolidating its technology stack. Skyscanner had already migrated its front-facing applications from its data center to Amazon Web Services (AWS) in 2017. Based on that experience, the company wanted to standardize its content delivery network (CDN) on AWS, so the Skyscanner team adopted Amazon CloudFront, which securely delivers dynamic and static content with low latency and high transfer speeds. The team also built a serverless image handler that compresses static content using Amazon CloudFront, helping the company achieve 50 percent cost savings across its total CDN usage.

Opportunity | Using Amazon CloudFront to Optimize the Technology Stack for Skyscanner

As Skyscanner grew to serve over 110 market domains, the company wanted to support engineering efficiency and productivity while optimizing its cloud spend. Although Skyscanner had invested in AWS technologies, it used a fully managed CDN solution from another provider. “One of the major challenges of this project was that we were untangling almost a decade’s worth of root configurations that our team had not implemented,” says Stuart Ross, senior engineering manager at Skyscanner.

Another challenge that the Skyscanner team faced was migrating its CDN to AWS without degrading the customer experience. On any given day, Skyscanner can receive up to 1.5 billion API requests, representing about 24 TB of data. With such high demand, it was essential to avoid global incidents and downtime.

Solution | Configuring a Serverless Image Handler and Multiregion Deployment Using AWS CDK

Skyscanner engaged the AWS team to create a proof of concept (POC) for Amazon CloudFront. “The AWS team was amazing,” says Andrew Aylett, senior software engineer at Skyscanner. “We had the opportunity to talk to subject-matter experts to determine which AWS services would be the best fit for our road map.” During the 3-month POC phase, the Skyscanner team built customized configurations, including a serverless image-management handler that automatically compresses static images into the most cost-effective format. “That aspect was previously managed by our CDN provider, and we wanted Amazon CloudFront to have the same capabilities,” says Rory McCann, senior software engineer at Skyscanner.

To set up these configurations, Skyscanner used the AWS Cloud Development Kit (AWS CDK), giving its team the ability to define its cloud application resources using familiar programming languages. “AWS CDK was key to this project,” says Aylett. “Our teams could write code rather than writing infrastructure.” Skyscanner sourced code for its configurations from the AWS Solutions Library, which provides vetted solutions and guidance for business and technical use cases. By making these resources available to its engineering teams, Skyscanner configured Amazon CloudFront with 1,000 lines of code, a significant reduction from its previous solution, which had over 26,000 lines. A minimal sketch of what a CDK-defined distribution can look like follows.
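The case study doesn’t publish Skyscanner’s actual CDK code, but the pattern it describes, defining a CloudFront distribution in a familiar programming language, looks roughly like this hedged Python sketch. All names (stack, bucket, distribution IDs) are illustrative assumptions, not Skyscanner’s.

```python
# Hypothetical AWS CDK (v2) sketch: a CloudFront distribution in front of an
# S3 origin for static assets. Resource names are illustrative only.
from aws_cdk import App, Stack
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins
from aws_cdk import aws_s3 as s3
from constructs import Construct


class CdnStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket holding static content such as images; name is illustrative.
        assets_bucket = s3.Bucket(self, "AssetsBucket")

        # The distribution serves the bucket over HTTPS with optimized caching.
        cloudfront.Distribution(
            self,
            "SiteDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(assets_bucket),
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
                cache_policy=cloudfront.CachePolicy.CACHING_OPTIMIZED,
            ),
        )


app = App()
CdnStack(app, "CdnStack")
app.synth()
```

Because the whole configuration is ordinary code, it can be reviewed, tested, and reused like any other source file, which is what makes the drop from over 26,000 lines of vendor configuration to about 1,000 lines of CDK code plausible.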
Skyscanner also configured Amazon CloudFront for multiregion deployment, increasing its fault tolerance. “Our team can sleep at night knowing that if something happened, there would be another AWS Region where we could automatically direct our web traffic,” says Ross. Protecting its front-facing applications and website from distributed denial of service (DDoS) attacks was a priority, too, so the Skyscanner team implemented AWS Shield, a managed DDoS protection service that safeguards applications running on AWS. The team activated AWS Shield Advanced so that it has near-real-time visibility into DDoS events and 24/7 support from the AWS Shield Response Team.

After completing the POC, the Skyscanner team migrated its front-facing applications and website to Amazon CloudFront in increments, starting with its less-trafficked market domains. “It built up our confidence to start pushing the rest of our traffic from our consumer-facing sites to Amazon CloudFront,” says Aylett. The migration took a total of 3 months to complete, during which the Skyscanner team experienced zero global downtime. Since then, the team has scaled its serverless image handler to three billion monthly API requests while maintaining an average cache-hit rate of 99.99 percent. And by running its image handler on a serverless architecture, the Skyscanner team reduced its CDN costs by 50 percent.

Outcome | Future-Proofing Its Architecture for Blue-Green Deployments

To continue innovating, the Skyscanner team plans to adopt a blue-green deployment strategy, which will help its team reduce deployment risk and quickly roll back changes by creating two identical, independent environments for routing web traffic. The Skyscanner team can accelerate its efforts toward this goal with a streamlined, standardized stack on AWS. “The migration to Amazon CloudFront has simplified the management of our infrastructure footprint,” says Ross. “There are far fewer moving parts, and it’s largely driven by AWS-managed services, which is great.”

Benefits of AWS

50% cost savings for CDN usage
99.99% average cache-hit rate for images
3 billion monthly API requests handled
26,000 lines of code reduced to 1,000 lines
Zero downtime experienced globally during the migration

About Skyscanner Ltd.

Founded in 2003, Skyscanner is a global leader in travel, helping 100 million travelers plan and book their trips with ease and confidence by providing an all-in-one place for the best flight, hotel, or car-hire options from more than 1,200 trusted travel partners. Skyscanner has offices worldwide, in Europe, Asia-Pacific, and North America, where traveler-first innovations are developed and powered by data and insights. The company is committed to helping shape a more responsible future for travel in collaboration with its partners and by making use of the latest technology so that every traveler can explore the world effortlessly for generations to come.

AWS Services Used

Amazon CloudFront is a content delivery network service built for high performance, security, and developer convenience.
AWS Shield is a managed DDoS protection service that safeguards applications running on AWS.
AWS Cloud Development Kit (AWS CDK) accelerates cloud development using common programming languages to model your applications.
The AWS Solutions Library provides vetted solutions and guidance for business and technical use cases.
Democratize Access to HPC for Computer-Aided Materials Design Using Amazon EC2 Spot Instances with Good Chemistry _ Good Chemistry Case Study _ AWS.txt
Democratize Access to HPC for Computer-Aided Materials Design Using Amazon EC2 Spot Instances with Good Chemistry

2023

Learn how Good Chemistry is helping scientists run HPC workloads at scale with QEMIST Cloud on AWS.

Overview

Good Chemistry has a mission to make the world healthier, cleaner, and more sustainable using QEMIST Cloud, a cloud-native solution that accelerates materials design by facilitating high-throughput, high-accuracy computational chemistry simulations. QEMIST Cloud runs these simulations for billions of chemical combinations, powered by Amazon Web Services (AWS) infrastructure. Using this solution, Good Chemistry is driving the development of economical ways to remove PFAS from the world’s water supply, helping solve one of the most pressing environmental challenges that humans currently face.

Opportunity | Using AWS to Achieve Massive Scale for Workloads at Low Cost

Per- and polyfluoroalkyl substances (PFAS), often called forever chemicals, pose a significant risk to human health and the environment. The remediation of PFAS pollution is a huge global challenge, estimated to cost billions of dollars and involve years of research. But now, Good Chemistry has developed a powerful solution to accelerate the process and further the development of a circular economy.

Finding affordable, scalable ways to break the chemical bonds in PFAS is a major priority for scientists around the world. These artificial chemicals are found in everything from nonstick cookware to firefighting equipment but are known to cause significant health problems, including harm to the reproductive and immune systems and an increased risk of cancer. “Because PFAS are not biodegradable, they accumulate in the environment and find their way into underground water reservoirs,” says Arman Zaribafiyan, founder of Good Chemistry. “In the United States alone, more than 200 million people have PFAS in their drinking water. That’s two-thirds of the population.”

Founded in 2021, Good Chemistry has a mission to create a more sustainable, circular economy by solving tough material science problems, like the removal of PFAS from the environment. Its product, QEMIST Cloud, uses high performance computing (HPC) clusters on AWS to push the boundaries of what is possible with quantum chemistry simulations. Using these simulations, scientists can accelerate the discovery and development of new materials. “The number of potential synthesizable molecules dwarfs the number of particles in the observable universe,” says Zaribafiyan. “Our mission is to use modern computing on the cloud to search uncharted chemical space and bring new materials and new drugs to market faster.”

“The accurate understanding of chemical reactions is the key to finding the best solution to break PFAS apart and remove them from the environment,” says Zaribafiyan. “We can now interrogate chemical reactions at a tremendous volume because of the unprecedented scale of the cloud and accuracy of our algorithms.”

Solution | Scaling Past One Million Cores and Democratizing Access to Powerful Supercomputer Capabilities on AWS

To speed up chemistry simulations in the cloud, Good Chemistry joined forces with the AWS HPC team and Intel as part of the Amazon Global Impact Computing team’s initiative on Digital Technologies for a Circular Economy. Together, the teams developed highly scalable infrastructure for QEMIST Cloud powered by AWS services like Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which run hyperscale workloads at significant cost savings. Through this engagement, Good Chemistry massively increased the scaling capabilities of QEMIST Cloud to run a chemistry simulation using more than one million CPU cores.

Using its highly scalable AWS infrastructure, Good Chemistry accurately calculated the bond-breaking energy for PFOA, one of the largest and most notorious PFAS molecules, in 37 hours, with only 4 hours at the one-million-core peak. Had the company tried to run these simulations sequentially, the process would have taken several years. “We dynamically scaled QEMIST Cloud to one million cores, and by the next day, we were able to create a new solution that was out of reach before,” says Zaribafiyan. “All it took was the on-demand scalability of the cloud. It’s a game changer for HPC in material science and chemistry.”

On AWS, Good Chemistry can run high-throughput, high-accuracy HPC workloads at scale. QEMIST Cloud’s infrastructure is containerized and uses Amazon Elastic Kubernetes Service (Amazon EKS) to start, run, and scale Kubernetes clusters, each of which runs chemistry algorithms. Using Karpenter, an open-source node provisioning solution, each HPC cluster can scale across multiple instance types and Availability Zones, providing optimal scale and availability. “Using this approach, we can take advantage of all Availability Zones in an AWS Region and circumvent any scaling issues that Kubernetes might encounter,” says Rudi Plesch, head of software development at Good Chemistry. “Periodically, we rebalance the clusters to make sure that none of them run out of work.” The immediate results of each simulation are then stored in Amazon Aurora, a relational database management system built for the cloud with full MySQL and PostgreSQL compatibility. A sketch of how a single simulation task could be submitted to such a cluster follows.
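Good Chemistry’s orchestration code isn’t published in this case study, so the following is only a hedged illustration of the general pattern: submitting one containerized simulation task as a Kubernetes Job on an EKS cluster, where a node provisioner such as Karpenter would see the pending pod and launch Spot capacity for it. The image name, namespace, and resource requests are hypothetical.

```python
# Hypothetical sketch: one simulation task as a Kubernetes Job on EKS.
# The container image, namespace, and resource requests are illustrative,
# not Good Chemistry's actual values.
from kubernetes import client, config


def submit_simulation_job(job_name: str, molecule_id: str) -> None:
    config.load_kube_config()  # use load_incluster_config() inside the cluster

    container = client.V1Container(
        name="chem-sim",
        image="example.com/chem-worker:latest",  # illustrative image
        args=["--molecule", molecule_id],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "8Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )
    # A pending pod created by this Job is what triggers node provisioning.
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)


submit_simulation_job("pfoa-fragment-0001", "pfoa-frag-0001")
```

Submitting many such jobs at once is what lets a cluster fan out across instance types and Availability Zones; the provisioner, not the application code, decides which Spot capacity to launch.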
With QEMIST Cloud, Good Chemistry has democratized access to supercomputer capabilities for research organizations, regardless of size or resources. “You don’t have to spend millions of dollars in infrastructure to get computing capability at this scale,” says Philip Ifrah, head of product at Good Chemistry. “Our solution on AWS orchestrates millions of computing resources on demand to perform experiments that push the boundaries of what’s possible.”

Outcome | Applying Cloud-Native HPC Technology to Accelerate New Use Cases

On AWS, Good Chemistry empowers researchers worldwide to simulate chemical combinations and drive sustainable innovations. This project marks an essential step forward for the remediation of PFAS from the environment and will likely play a major role in the discovery of new pathways for PFAS destruction. “Through this PFAS project, we demonstrated that we could run very high-accuracy calculations on AWS,” says Takeshi Yamazaki, director of research and development at Good Chemistry. “We are creating lots of high-quality data that will, in turn, help us offer differentiated machine learning models for material discovery.”

Good Chemistry is already expanding QEMIST Cloud to support more industries, like pharmaceuticals, advanced chemicals, energy, and automotive. Use cases in progress, like crystal structure prediction, virtual screening, and reaction pathway prediction, will significantly reduce the cost, time, and risk associated with new drug development. Other use cases will lead to the development of better batteries, more effective carbon capture, and better solar panels. Good Chemistry is also one of the few AWS Partners selected for the third cohort of the AWS Clean Energy Accelerator (CEA), where it will work with leading energy organizations to solve pressing clean energy and decarbonization challenges. “Right now, we’ve only scratched the surface,” says Ifrah. “We’re excited to extend our capabilities in computational chemistry, machine learning, and quantum computing to bring many new use cases to life.”

Benefits of AWS

Scales to one million virtual CPU cores
Democratizes access to supercomputer capabilities
Accelerates design and discovery of new materials and drugs
Runs high-throughput, high-accuracy workloads at scale
Maintains high availability of compute resources

Architecture Diagram: QEMIST Cloud architecture

About Good Chemistry

Good Chemistry has a mission to make the world healthier, cleaner, and more sustainable using QEMIST Cloud, a cloud-native solution that accelerates materials design by facilitating high-throughput, high-accuracy computational chemistry simulations.

AWS Services Used

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Democratize computer vision defect detection for manufacturing quality using no-code machine learning with Amazon SageMaker Canvas _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Democratize computer vision defect detection for manufacturing quality using no-code machine learning with Amazon SageMaker Canvas

by Brajendra Singh, Davide Gallitelli, and Danny Smith | on 30 JUN 2023 | in Advanced (300), Amazon SageMaker, Amazon SageMaker Canvas, Artificial Intelligence

Cost of poor quality is top of mind for manufacturers. Quality defects increase scrap and rework costs, decrease throughput, and can impact customers and company reputation. Quality inspection on the production line is crucial for maintaining quality standards. In many cases, human visual inspection is used to assess the quality and detect defects, which can limit the throughput of the line due to the limitations of human inspectors.

The advent of machine learning (ML) and artificial intelligence (AI) brings additional visual inspection capabilities using computer vision (CV) ML models. Complementing human inspection with CV-based ML can reduce detection errors, speed up production, reduce the cost of quality, and positively impact customers. Building CV ML models typically requires expertise in data science and coding, which are often rare resources in manufacturing organizations. Now, quality engineers and others on the shop floor can build and evaluate these models using no-code ML services, which can accelerate exploration and adoption of these models more broadly in manufacturing operations.

Amazon SageMaker Canvas is a visual interface that enables quality, process, and production engineers to generate accurate ML predictions on their own, without requiring any ML experience or having to write a single line of code. You can use SageMaker Canvas to create single-label image classification models for identifying common manufacturing defects using your own image datasets. In this post, you will learn how to use SageMaker Canvas to build a single-label image classification model to identify defects in manufactured magnetic tiles based on their image.

Solution overview

This post assumes the viewpoint of a quality engineer exploring CV ML inspection, and you will work with sample data of magnetic tile images to build an image classification ML model to predict defects in the tiles for the quality check. The dataset contains more than 1,200 images of magnetic tiles, which have defects such as blowhole, break, crack, fray, and uneven surface. The following images provide an example of single-label defect classification, with a cracked tile on the left and a tile free of defects on the right. In a real-world example, you can collect such images from the finished products in the production line.

In this post, you use SageMaker Canvas to build a single-label image classification model that will predict and classify defects for a given magnetic tile image. SageMaker Canvas can import image data from a local disk file or Amazon Simple Storage Service (Amazon S3). For this post, multiple folders have been created (one per defect type such as blowhole, break, or crack) in an S3 bucket, and magnetic tile images are uploaded to their respective folders. The folder called Free contains defect-free images. This folder layout is what allows SageMaker Canvas to label the images automatically on import; a sketch of it appears after the following list of steps.

There are four steps involved in building the ML model using SageMaker Canvas:

Import the dataset of the images.
Build and train the model.
Analyze the model insights, such as accuracy.
Make predictions.
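The post itself is no-code, but preparing the S3 dataset is easiest to see as a short script. The following is a hedged boto3 sketch of the one-prefix-per-label layout described above; the bucket name and local paths are hypothetical.

```python
# Hypothetical sketch: upload tile images to S3 with one prefix per label
# (Blowhole, Break, Crack, Fray, Uneven, Free) so SageMaker Canvas can
# auto-label them on import. Bucket and local paths are illustrative.
import pathlib

import boto3

s3 = boto3.client("s3")
bucket = "my-canvas-datasets"  # illustrative bucket name

for label in ["Blowhole", "Break", "Crack", "Fray", "Uneven", "Free"]:
    for image in pathlib.Path("magnetic-tiles", label).glob("*.jpg"):
        # Key layout: magnetic-tiles/<label>/<file>; Canvas reads the
        # label from the folder (prefix) name.
        s3.upload_file(str(image), bucket, f"magnetic-tiles/{label}/{image.name}")
```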
Prerequisites

Before starting, you need to set up and launch SageMaker Canvas. This setup is performed by an IT administrator and involves three steps:

Set up an Amazon SageMaker domain.
Set up the users.
Set up permissions to use specific features in SageMaker Canvas.

Refer to Getting started with using Amazon SageMaker Canvas and Setting Up and Managing Amazon SageMaker Canvas (for IT Administrators) to configure SageMaker Canvas for your organization. When SageMaker Canvas is set up, the user can navigate to the SageMaker console, choose Canvas in the navigation pane, and choose Open Canvas to launch SageMaker Canvas. The SageMaker Canvas application is launched in a new browser window. After the SageMaker Canvas application is launched, you start the steps of building the ML model.

Import the dataset

Importing the dataset is the first step when building an ML model with SageMaker Canvas.

In the SageMaker Canvas application, choose Datasets in the navigation pane. On the Create menu, choose Image.
For Dataset name, enter a name, such as Magnetic-Tiles-Dataset.
Choose Create to create the dataset.

After the dataset is created, you need to import images into the dataset.

On the Import page, choose Amazon S3 (the magnetic tile images are in an S3 bucket). You have the choice to upload the images from your local computer as well.
Select the folder in the S3 bucket where the magnetic tile images are stored and choose Import Data.

SageMaker Canvas starts importing the images into the dataset. When the import is complete, you can see the image dataset created with 1,266 images. You can choose the dataset to check the details, such as a preview of the images and their label for the defect type. Because the images were organized in folders and each folder was named with the defect type, SageMaker Canvas automatically completed the labeling of the images based on the folder names. As an alternative, you can import unlabeled images, add labels, and perform labeling of the individual images at a later point in time. You can also modify the labels of the existing labeled images.

The image import is complete, and you now have an image dataset created in SageMaker Canvas. You can move to the next step to build an ML model to predict defects in the magnetic tiles.

Build and train the model

You train the model using the imported dataset.

Choose the dataset (Magnetic-Tiles-Dataset) and choose Create a model.
For Model name, enter a name, such as Magnetic-Tiles-Defect-Model.
Select Image analysis for the problem type and choose Create to configure the model build.

On the model’s Build tab, you can see various details about the dataset, such as the label distribution, the count of labeled vs. unlabeled images, and the model type, which is single-label image prediction in this case. If you have imported unlabeled images or you want to modify or correct the labels of certain images, you can choose Edit dataset to modify the labels.

You can build the model in two ways: Quick build and Standard build. The Quick build option prioritizes speed over accuracy. It trains the model in 15–30 minutes. The model can be used for prediction, but it can’t be shared. It’s a good option to quickly check the feasibility and accuracy of training a model with a given dataset. The Standard build chooses accuracy over speed, and model training can take between 2–4 hours. For this post, you train the model using the Standard build option.

Choose Standard build on the Build tab to start training the model.

The model training starts instantly.
You can see the expected build time and training progress on the Analyze tab. Wait until the model training is complete, then you can analyze the model’s performance for accuracy.

Analyze the model

In this case, it took less than an hour to complete the model training. When the model training is complete, you can check the model’s accuracy on the Analyze tab to determine whether it can accurately predict defects. You see that the overall model accuracy is 97.7% in this case. You can also check the model accuracy for each individual label or defect type: for instance, 100% for Fray and Uneven but approximately 95% for Blowhole. This level of accuracy is encouraging, so we can continue the evaluation.

To better understand and trust the model, enable Heatmap to see the areas of interest in the image that the model uses to differentiate the labels. It’s based on the class activation map (CAM) technique. You can use the heatmap to identify patterns from your incorrectly predicted images, which can help improve the quality of your model.

On the Scoring tab, you can check precision and recall for the model for each of the labels (or class or defect type). Precision and recall are evaluation metrics used to measure the performance of a binary or multiclass classification model. Precision tells how good the model is at predicting a specific class (defect type, in this example). Recall tells how many times the model was able to detect a specific class. Model analysis helps you understand the accuracy of the model before you use it for prediction. The small sketch that follows makes these two metrics concrete.
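Canvas computes these metrics for you, but as a hedged, self-contained illustration (with made-up labels, not the magnetic tile results), this is how per-class precision and recall could be computed with scikit-learn:

```python
# Illustrative only: per-class precision and recall on made-up labels.
from sklearn.metrics import precision_score, recall_score

y_true = ["Crack", "Free", "Crack", "Blowhole", "Crack", "Free"]
y_pred = ["Crack", "Free", "Free", "Blowhole", "Crack", "Crack"]

for label in ["Crack", "Free", "Blowhole"]:
    # average=None with a single label returns that label's score.
    p = precision_score(y_true, y_pred, labels=[label], average=None, zero_division=0)[0]
    r = recall_score(y_true, y_pred, labels=[label], average=None, zero_division=0)[0]
    print(f"{label}: precision={p:.2f} recall={r:.2f}")
```

For the Crack label above, two of the three predicted Cracks are real (precision 0.67) and two of the three actual Cracks are found (recall 0.67); Canvas’s Scoring tab reports the same kind of per-label breakdown for the tile model.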
Make predictions

After the model analysis, you can now make predictions using this model to identify defects in the magnetic tiles. On the Predict tab, you can choose Single prediction or Batch prediction. In a single prediction, you import a single image from your local computer or an S3 bucket to make a prediction about the defect. In batch prediction, you can make predictions for multiple images that are stored in a SageMaker Canvas dataset. You can create a separate dataset in SageMaker Canvas with the test or inference images for the batch prediction. For this post, we use both single and batch prediction.

For single prediction, on the Predict tab, choose Single prediction, then choose Import image to upload the test or inference image from your local computer. After the image is imported, the model makes a prediction about the defect. For the first inference, it might take a few minutes because the model is loading for the first time. But after the model is loaded, it makes instant predictions about the images. You can see the image and the confidence level of the prediction for each label type. For instance, in this case, the magnetic tile image is predicted to have an uneven surface defect (the Uneven label) and the model is 94% confident about it.

Similarly, you can use other images or a dataset of images to make predictions about the defect. For batch prediction, we use a dataset of unlabeled images called Magnetic-Tiles-Test-Dataset, created by uploading 12 test images from your local computer. On the Predict tab, choose Batch prediction and choose Select dataset. Select the Magnetic-Tiles-Test-Dataset dataset and choose Generate predictions. It will take some time to generate the predictions for all the images. When the status is Ready, choose the dataset link to see the predictions. You can see predictions for all the images with confidence levels, and you can choose any of the individual images to see image-level prediction details.

You can download the predictions in CSV or .zip file format to work offline. You can also verify the predicted labels and add them to your training dataset. To verify the predicted labels, choose Verify prediction. In the prediction dataset, you can update the labels of individual images if you don’t find the predicted label correct. When you have updated the labels as required, choose Add to trained dataset to merge the images into your training dataset (in this example, Magnetic-Tiles-Dataset). This updates the training dataset, which includes both your existing training images and the new images with predicted labels. You can train a new model version with the updated dataset and potentially improve the model’s performance. The new model version won’t be an incremental training, but a new training from scratch with the updated dataset. This helps keep the model refreshed with new sources of data.

Clean up

After you have completed your work with SageMaker Canvas, choose Log out to close the session and avoid any further cost. When you log out, your work such as datasets and models remains saved, and you can launch a SageMaker Canvas session again to continue the work later. SageMaker Canvas creates an asynchronous SageMaker endpoint for generating the predictions. To delete the endpoint, endpoint configuration, and model created by SageMaker Canvas, refer to Delete Endpoints and Resources.

Conclusion

In this post, you learned how to use SageMaker Canvas to build an image classification model to predict defects in manufactured products, to complement and improve the visual inspection quality process. You can use SageMaker Canvas with different image datasets from your manufacturing environment to build models for use cases like predictive maintenance, package inspection, worker safety, goods tracking, and more. SageMaker Canvas gives you the ability to use ML to generate predictions without needing to write any code, accelerating the evaluation and adoption of CV ML capabilities.

To get started and learn more about SageMaker Canvas, refer to the following resources:

Amazon SageMaker Canvas Developer Guide
Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts

About the authors

Brajendra Singh is a solution architect at Amazon Web Services working with enterprise customers. He has a strong developer background and is a keen enthusiast for data and machine learning solutions.

Danny Smith is Principal, ML Strategist for Automotive and Manufacturing Industries, serving as a strategic advisor for customers. His career focus has been on helping key decision-makers leverage data, technology, and mathematics to make better decisions, from the board room to the shop floor. Lately most of his conversations are on democratizing machine learning and generative AI.

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.
Deploy a serverless ML inference endpoint of large language models using FastAPI AWS Lambda and AWS CDK _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Deploy a serverless ML inference endpoint of large language models using FastAPI, AWS Lambda, and AWS CDK

by Tingyi Li and Demir Catovic | on 23 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, AWS Lambda, Generative AI, Technical How-to

For data scientists, moving machine learning (ML) models from proof of concept to production often presents a significant challenge. One of the main challenges can be deploying a well-performing, locally trained model to the cloud for inference and use in other applications. It can be cumbersome to manage the process, but with the right tool, you can significantly reduce the required effort.

Amazon SageMaker inference, which was made generally available in April 2022, makes it easy for you to deploy ML models into production to make predictions at scale, providing a broad selection of ML infrastructure and model deployment options to help meet all kinds of ML inference needs. You can use SageMaker Serverless Inference endpoints for workloads that have idle periods between traffic spurts and can tolerate cold starts. The endpoints scale out automatically based on traffic and take away the undifferentiated heavy lifting of selecting and managing servers. Additionally, you can use AWS Lambda directly to expose your models and deploy your ML applications using your preferred open-source framework, which can prove to be more flexible and cost-effective.

FastAPI is a modern, high-performance web framework for building APIs with Python. It stands out when it comes to developing serverless applications with RESTful microservices and use cases requiring ML inference at scale across multiple industries. Its ease of use and built-in functionalities like automatic API documentation make it a popular choice among ML engineers for deploying high-performance inference APIs. You can define and organize your routes using out-of-the-box functionalities from FastAPI to scale out and handle growing business logic as needed, test locally, host it on Lambda, and then expose it through a single API gateway, which allows you to bring an open-source web framework to Lambda without any heavy lifting or refactoring of your code.

This post shows you how to easily deploy and run serverless ML inference by exposing your ML model as an endpoint using FastAPI, Docker, Lambda, and Amazon API Gateway. We also show you how to automate the deployment using the AWS Cloud Development Kit (AWS CDK).

Solution overview

The following diagram shows the architecture of the solution we deploy in this post. Before walking through the repository, the short sketch below shows the core pattern of running FastAPI inside a Lambda function.
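The sample repository contains the real handler; the following is only a hedged, minimal illustration of the FastAPI-on-Lambda pattern. Mangum is one common ASGI adapter for Lambda (the repository’s actual wiring may differ), and the /question stub below does not run a real model.

```python
# Hedged sketch of the FastAPI-on-Lambda pattern; not the sample repo's code.
from fastapi import FastAPI
from mangum import Mangum  # one common ASGI-to-Lambda adapter

app = FastAPI()


@app.get("/")
def hello() -> dict:
    return {"message": "hello world"}


@app.get("/question")
def question(question: str, context: str) -> dict:
    # In the real solution, a preloaded question answering model would run
    # inference here; this stub only echoes its inputs.
    return {"question": question, "context": context}


# Lambda entry point: API Gateway events are translated into ASGI calls.
handler = Mangum(app)
```

Because the handler is plain FastAPI code, you can run it locally with uvicorn during development and hand the same application object to Lambda in production.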
Prerequisites

You must have the following prerequisites:

Python3 installed, along with virtualenv for creating and managing virtual environments in Python
aws-cdk v2 installed on your system in order to be able to use the AWS CDK CLI
Docker installed and running on your local machine

Test if all the necessary software is installed:

The AWS Command Line Interface (AWS CLI) is needed. Log in to your account and choose the Region where you want to deploy the solution.

Use the following code to check your Python version:

python3 --version

Check if virtualenv is installed for creating and managing virtual environments in Python. Strictly speaking, this is not a hard requirement, but it will make your life easier and helps follow along with this post more easily. Use the following code:

python3 -m virtualenv --version

Check if cdk is installed. This will be used to deploy our solution:

cdk --version

Check if Docker is installed. Our solution will make your model accessible through a Docker image to Lambda. To build this image locally, we need Docker:

docker --version

Make sure Docker is up and running with the following code:

docker ps

How to structure your FastAPI project using AWS CDK

We use the following directory structure for our project (ignoring some boilerplate AWS CDK code that is immaterial in the context of this post):

```
fastapi_model_serving
│
└───.venv
│
└───fastapi_model_serving
│   │   __init__.py
│   │   fastapi_model_serving_stack.py
│   │
│   └───model_endpoint
│       └───docker
│       │      Dockerfile
│       │      serving_api.tar.gz
│       │
│       └───runtime
│            └───serving_api
│                    requirements.txt
│                    serving_api.py
│                └───custom_lambda_utils
│                     └───model_artifacts
│                            ...
│                     └───scripts
│                            inference.py
│
└───templates
│   └───api
│   │     api.py
│   └───dummy
│         dummy.py
│
│   app.py
│   cdk.json
│   README.md
│   requirements.txt
│   init-lambda-code.sh
```

The directory follows the recommended structure of AWS CDK projects for Python.

The most important part of this repository is the fastapi_model_serving directory. It contains the code that will define the AWS CDK stack and the resources that are going to be used for model serving.

The fastapi_model_serving directory contains the model_endpoint subdirectory, which contains all the assets necessary to make up our serverless endpoint, namely the Dockerfile to build the Docker image that Lambda will use, the Lambda function code that uses FastAPI to handle inference requests and route them to the correct endpoint, and the model artifacts of the model that we want to deploy.

model_endpoint contains the following:

docker – This subdirectory contains the following:
Dockerfile – This is used to build the image for the Lambda function with all the artifacts (Lambda function code, model artifacts, and so on) in the right place so that they can be used without issues.
serving_api.tar.gz – This is a tarball that contains all the assets from the runtime folder that are necessary for building the Docker image. We discuss how to create the .tar.gz file later in this post.
runtime – This subdirectory contains the following:
serving_api – The code for the Lambda function and its dependencies specified in the requirements.txt file.
custom_lambda_utils – This includes an inference script that loads the necessary model artifacts so that the model can be passed to the serving_api that will then expose it as an endpoint.

Additionally, we have the templates directory, which provides a template of folder structures and files where you can define your customized code and APIs following the sample we went through earlier. The templates directory contains dummy code that you can use to create new Lambda functions:

dummy – Contains the code that implements the structure of an ordinary Lambda function using the Python runtime
api – Contains the code that implements a Lambda function that wraps a FastAPI endpoint around an existing API gateway

Deploy the solution

By default, the code is deployed inside the eu-west-1 region. If you want to change the Region, you can change the DEPLOYMENT_REGION context variable in the cdk.json file.
Keep in mind, however, that the solution tries to deploy a Lambda function on top of the arm64 architecture, and that this feature might not be available in all Regions. In this case, you need to change the architecture parameter in the fastapi_model_serving_stack.py file, as well as the first line of the Dockerfile inside the docker directory, to host this solution on the x86 architecture.

To deploy the solution, complete the following steps:

Run the following command to clone the GitHub repository: git clone https://github.com/aws-samples/lambda-serverless-inference-fastapi Because we want to showcase that the solution can work with model artifacts that you train locally, we include a sample model artifact of a pretrained DistilBERT model on the Hugging Face model hub for a question answering task in the serving_api.tar.gz file. The download time can take around 3–5 minutes.

Now, let’s set up the environment. Download the pretrained model that will be deployed from the Hugging Face model hub into the ./model_endpoint/runtime/serving_api/custom_lambda_utils/model_artifacts directory. This also creates a virtual environment and installs all dependencies that are needed. You only need to run this command once: make prep. This command can take around 5 minutes (depending on your internet bandwidth) because it needs to download the model artifacts.

Package the model artifacts inside a .tar.gz archive that will be used inside the Docker image that is built in the AWS CDK stack. You need to run this code whenever you make changes to the model artifacts or the API itself to always have the most up-to-date version of your serving endpoint packaged: make package_model.

The artifacts are all in place. Now we can deploy the AWS CDK stack to your AWS account. Run cdk bootstrap if it’s your first time deploying an AWS CDK app into an environment (account + Region combination): make cdk_bootstrap. This stack includes resources that are needed for the toolkit’s operation. For example, the stack includes an Amazon Simple Storage Service (Amazon S3) bucket that is used to store templates and assets during the deployment process.

Because we’re building Docker images locally in this AWS CDK deployment, we need to ensure that the Docker daemon is running before we can deploy this stack via the AWS CDK CLI. To check whether or not the Docker daemon is running on your system, use the following command: docker ps. If you don’t get an error message, you should be ready to deploy the solution.

Deploy the solution with the following command: make deploy. This step can take around 5–10 minutes due to building and pushing the Docker image.

Troubleshooting

If you’re a Mac user, you may encounter an error when logging in to Amazon Elastic Container Registry (Amazon ECR) with the Docker login, such as Error saving credentials ... not implemented. For example:

exited with error code 1: Error saving credentials: error storing credentials - err: exit status 1,...dial unix backend.sock: connect: connection refused

Before you can use Lambda on top of Docker containers inside the AWS CDK, you may need to change the ~/.docker/config.json file. More specifically, you might have to change the credsStore parameter in ~/.docker/config.json to osxkeychain. That solves Amazon ECR login issues on a Mac.

Run real-time inference

After your AWS CloudFormation stack is deployed successfully, go to the Outputs tab for your stack on the AWS CloudFormation console and open the endpoint URL.
Now our model is accessible via the endpoint URL and we’re ready to run real-time inference. Navigate to the URL to see if you can see the “hello world” message, and add /docs to the address to see if you can load the interactive Swagger UI page successfully. There might be some cold start time, so you may need to wait or refresh a few times.

After you log in to the landing page of the FastAPI Swagger UI page, you can run requests via the root / or via /question. From /, you can run the API and get the “hello world” message. From /question, you can run the API and run ML inference on the model we deployed for a question answering case. For example, we use the question What is the color of my car now? and the context My car used to be blue but I painted red. When you choose Execute, based on the given context, the model will answer the question with a response, as shown in the following screenshot. In the response body, you can see the answer with the confidence score from the model. You could also experiment with other examples or embed the API in your existing application.

Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library (the original snippet referenced undefined headers and payload variables; a plain GET is all that is needed):

import requests

url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>.execute-api.<YOUR_ENDPOINT_REGION>.amazonaws.com/prod/question?question=\"What is the color of my car now?\"&context=\"My car used to be blue but I painted red\""

response = requests.get(url)
print(response.text)

The code outputs a string similar to the following:

'{"score":0.6947233080863953,"start":38,"end":41,"answer":"red"}'

If you are interested in knowing more about deploying generative AI and large language models on AWS, check out the following:

Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa
Deploy large language models on AWS Inferentia2 using large model inference containers

Clean up

Inside the root directory of your repository, run the following code to clean up your resources:

make destroy

Conclusion

In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model.

You are welcome to try it out yourself, and we’re excited to hear your feedback!

About the Authors

Tingyi Li is an Enterprise Solutions Architect from AWS based out of Stockholm, Sweden, supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano.

Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore new trends and cutting-edge technologies in the AI/ML world.
TAGS: Generative AI, Natural Language Processing
Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker

by James Park, Abhi Shivaditya, Evandro Franco, Frank Liu, Qing Lan, and Robert Van Dusen | on 13 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence

Last week, Technology Innovation Institute (TII) launched TII Falcon LLM, an open-source foundational large language model (LLM). Trained on 1 trillion tokens with Amazon SageMaker, Falcon boasts top-notch performance (#1 on the Hugging Face leaderboard at the time of writing) while being comparatively lightweight and less expensive to host than other LLMs such as LLaMA-65B. In this post, we demonstrate how to deploy Falcon for applications like language understanding and automated writing assistance using large model inference deep learning containers on SageMaker.

The Falcon has landed on SageMaker

TII is the applied research organization within Abu Dhabi’s Advanced Technology Research Council; its team of scientists, researchers, and engineers is dedicated to the discovery of transformative technologies and the development of scientific breakthroughs that will future-proof our society. Earlier this year, TII set out to train a state-of-the-art, open-source LLM and used the infrastructure, tooling, and expertise of SageMaker to get the job done (to learn more about how this model was trained on SageMaker, refer to Technology Innovation Institute trains the state-of-the-art Falcon LLM 40B foundation model on Amazon SageMaker). The result of this effort is TII Falcon LLM.

Trained on 1 trillion tokens, Falcon boasts top-notch performance against the Eleuther AI Language Model Evaluation Harness and is currently #1 on the Hugging Face leaderboard for accuracy. The model is available in two different sizes, Falcon-40B and Falcon-7B, and can be used for state-of-the-art performance in applications such as language understanding, conversational experiences, and automated writing assistance. This post will help you get started with deploying Falcon on SageMaker for high-accuracy inference in these types of domains.

SageMaker large model inference DLCs simplify LLM hosting

Hosting LLMs such as Falcon-40B and Falcon-7B can be challenging. Larger models are often more accurate because they include billions of parameters, but their size can also result in slower inference latency or worse throughput. Hosting an LLM can require more GPU memory and optimized kernels to achieve acceptable performance. To further complicate things, although smaller models such as Falcon-7B can generally fit on a single GPU, such as the NVIDIA A10G that powers AWS G5 instance types, larger models like Falcon-40B cannot. When this happens, strategies such as tensor parallelism must be used to shard the larger model into multiple pieces and take advantage of the memory of multiple GPUs. Legacy hosting solutions used for smaller models typically don’t offer this type of functionality, adding to the difficulty.

SageMaker large model inference (LMI) deep learning containers (DLCs) can help. LMI DLCs are a complete end-to-end solution for hosting LLMs like Falcon-40B. At the front end, they include a high-performance model server (DJL Serving) designed for large model inference with features such as token streaming and automatic model replication within an instance to increase throughput.
On the backend, LMI DLCs also include several high-performance model parallel engines, such as DeepSpeed and FasterTransformer, that can shard and manage model parameters across multiple GPUs. These engines also include optimized kernels for popular transformer models, which can accelerate inference by up to three times. With LMI DLCs, you simply need to create a configuration file to get started with LLM hosting on SageMaker. To learn more about SageMaker LMI DLCs, refer to Model parallelism and large model inference and our list of available images. You can also check out our previous post about hosting Bloom-175B on SageMaker using LMI DLCs.

Solution overview

This post walks you through how to host Falcon-40B using DeepSpeed on SageMaker using LMI DLCs. Falcon-40B requires that we use multiple A10 GPUs, whereas Falcon-7B only requires a single GPU. We have also prepared examples you can reference to host Falcon-40B and Falcon-7B using both DeepSpeed and Accelerate. You can find our code examples on GitHub.

This example can be run in SageMaker notebook instances or Amazon SageMaker Studio notebooks. For hosting Falcon-40B using LMI and DeepSpeed, we need to use an ml.g5.24xlarge instance. These instances provide four NVIDIA A10G GPUs with 24 GiB of memory each, for 96 GiB of total GPU memory. In addition, the host provides 96 vCPUs and 384 GiB of host memory. The LMI container will help address much of the undifferentiated heavy lifting associated with hosting LLMs, including downloading the model and partitioning the model artifact so that its constituent parameters can be spread across multiple GPUs.

Quotas for SageMaker machine learning (ML) instances can vary between accounts. If you receive an error indicating you’ve exceeded your quota for g5.24xlarge instances while following this post, you can increase the limit through the Service Quotas console.

Notebook walkthrough

To begin, we start by installing and importing the necessary dependencies for our example. We use the Boto3 SDK as well as the SageMaker SDK. Note that we use Amazon Simple Storage Service (Amazon S3) to store the model artifacts that we need for SageMaker and LMI to use, so we set up an S3 prefix variable accordingly. See the following code:

import sagemaker
import jinja2
from sagemaker import image_uris
import boto3
import os
import time
import json
from pathlib import Path
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()  # execution role for the endpoint
sess = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs
bucket = sess.default_bucket()  # bucket to house artifacts
model_bucket = sess.default_bucket()  # bucket to house artifacts
s3_code_prefix_deepspeed = "hf-large-model-djl-/code_falcon40b/deepspeed"  # folder within bucket where code artifact will go
region = sess._region_name
account_id = sess.account_id()
s3_client = boto3.client("s3")
sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")
jinja_env = jinja2.Environment()

We then create a local folder for our workspace to store our model artifacts:

!mkdir -p code_falcon40b_deepspeed

We first create a serving.properties configuration file in the local directory we created. This serving.properties file indicates to the LMI container and the front-end DJL Serving library which model parallelization and inference optimization engine we want to use. You can find the configuration options for both DeepSpeed and Hugging Face Accelerate in Configurations and settings.
Here, note that we set the option.model_id parameter to define which Hugging Face model to pull from. SageMaker makes working with Hugging Face models simple, and this one line is all you need. In addition, we set option.tensor_parallel_degree to a value of 4 because we have four GPUs on our ml.g5.24xlarge instance. This parameter defines how many partitions of the model to create and distribute. Note that if we had used a larger instance with eight GPUs, such as ml.g5.48xlarge, and still set a value of 4, then LMI would automatically create two replicas of the model (two replicas spread across four GPUs each). See the following code:

%%writefile ./code_falcon40b_deepspeed/serving.properties
engine=Python
#to deploy falcon-40b-instruct set the model_id value to 'tiiuae/falcon-40b-instruct'
option.model_id=tiiuae/falcon-40b
option.tensor_parallel_degree=4
#option.s3url = {{s3url}}

You can also swap out tiiuae/falcon-40b with tiiuae/falcon-40b-instruct if it suits your needs better.

We also include a requirements.txt file that you can use to specify packages that you require:

%%writefile ./code_falcon40b_deepspeed/requirements.txt
einops
torch==2.0.1

The last thing we need is the model.py file that will be used with your model:

%%writefile ./code_falcon40b_deepspeed/model.py
from djl_python import Input, Output
import os
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from typing import Any, Dict, Tuple
import warnings

predictor = None


def get_model(properties):
    # Load the model and tokenizer named in serving.properties and build a
    # text-generation pipeline spread across the available GPUs.
    model_name = properties["model_id"]
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    generator = pipeline(
        task="text-generation", model=model, tokenizer=tokenizer, device_map="auto"
    )
    return generator


def handle(inputs: Input) -> None:
    # DJL Serving calls this entry point for every request.
    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())
    if inputs.is_empty():
        # Model server makes an empty call to warm up the model on startup
        return None
    data = inputs.get_as_json()
    text = data["text"]
    text_length = data["text_length"]
    outputs = predictor(text, do_sample=True, min_length=text_length, max_length=text_length)
    result = {"outputs": outputs}
    return Output().add_as_json(result)

That’s it! At this point, we have created all the artifacts you need to deploy Falcon-40B with DeepSpeed! We package the directory into a *.tar.gz file and upload it to Amazon S3. Note that the actual model has not been downloaded or packaged into this file. The LMI container will download the model for you from Hugging Face directly. You also have the option to target an S3 bucket if you would like your own copy of the model in a location that will be more performant to download. LMI also includes optimization for downloading from Amazon S3 with high performance.
See the following code:

s3_code_artifact_deepspeed = sess.upload_data("model.tar.gz", bucket, s3_code_prefix_deepspeed)
print(f"S3 Code or Model tar for deepspeed uploaded to --- > {s3_code_artifact_deepspeed}")

All that is left to do at this point is to define the container we want to use and create a model object:

inference_image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.22.1-deepspeed0.8.3-cu118"
)
model_name_acc = name_from_base(f"falcon40b-model-ds")
create_model_response = sm_client.create_model(
    ModelName=model_name_acc,
    ExecutionRoleArn=role,
    PrimaryContainer={"Image": inference_image_uri, "ModelDataUrl": s3_code_artifact_deepspeed},
)
model_arn = create_model_response["ModelArn"]

Then we create an endpoint configuration and create the endpoint:

endpoint_config_name = f"{model_name}-config"
endpoint_name = f"{model_name}-endpoint"
endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name,
            "InstanceType": "ml.g5.24xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 3600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
            # "VolumeSizeInGB": 512
        },
    ],
)
endpoint_config_response

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}", EndpointConfigName=endpoint_config_name
)
print(f"Created Endpoint: {create_endpoint_response['EndpointArn']}")

Configuration items to keep in mind for successful hosting

An important consideration for large model hosting is ensuring there is adequate time for the model download from Hugging Face. In our tests, Falcon-40B took about 90 minutes to download onto the instance. A key set of configurations to allow for this are ContainerStartupHealthCheckTimeoutInSeconds and ModelDataDownloadTimeoutInSeconds. Make sure the SageMaker endpoint configuration has a value of 3600 for each of these. Additionally, it’s much easier to download from Amazon S3 than from the original model zoo: the LMI containers are specially designed for LLMs and use the s5cmd utility, which cuts the model download time to around 10 minutes.

You can monitor the status of the endpoint by calling DescribeEndpoint, which will tell you when everything is complete.
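The post doesn’t include a polling snippet, but as a hedged sketch (reusing the sm_client and endpoint_name defined above), waiting for the endpoint to come up could look like this:

```python
# Optional sketch: poll until the endpoint leaves the Creating state.
import time

while True:
    status = sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
    print(f"Endpoint status: {status}")
    if status != "Creating":
        break  # InService on success, Failed otherwise
    time.sleep(60)
```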
Your endpoint is now ready to respond to inference requests! Because LMI handles the model partitioning and orchestration for you, each request will be processed using all four GPUs available on our ml.g5.24xlarge instance. This allows us to host LLMs and increase performance by scaling GPU accelerators horizontally. See the following code:

response_model = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps({"text": "What is the purpose of life?", "text_length": 150}),
    ContentType="application/json",
)
response_model["Body"].read().decode("utf8")

If you are done and would like to delete the endpoint configuration, endpoint, and model object, you can run the following commands:

sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)

The code we referenced in this post can be found in the complete notebook on GitHub.

Conclusion

SageMaker Hosting and the LMI DLC make it easy for you to host LLMs like Falcon-40B. They take on the undifferentiated heavy lifting of orchestrating what is required to host models across multiple GPUs and provide configurable options to suit your needs. In addition, using Hugging Face models becomes very straightforward, with built-in support for these models.

In this post, we showed how you can use SageMaker to host the Falcon-40B model using DeepSpeed. In addition, we provided examples in GitHub to host Falcon-40B using Accelerate, and the smaller Falcon-7B model. We encourage you to give this a try on SageMaker with LMI and get hands-on with the best-performing publicly available LLM to date!

About the authors

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Abhi Shivaditya is a Senior Solutions Architect at AWS, working with strategic global enterprise organizations to facilitate the adoption of AWS services in areas such as Artificial Intelligence, distributed computing, networking, and storage. His expertise lies in Deep Learning in the domains of Natural Language Processing (NLP) and Computer Vision. Abhi assists customers in deploying high-performance machine learning models efficiently within the AWS ecosystem.

Robert Van Dusen is a Senior Product Manager with Amazon SageMaker. He leads deep learning model optimization for applications such as large model inference.

Evandro Franco is an AI/ML Specialist Solutions Architect working on Amazon Web Services. He helps AWS customers overcome business challenges related to AI/ML on top of AWS. He has more than 15 years working with technology, from software development, infrastructure, serverless, to machine learning.

Qing Lan is a Software Development Engineer in AWS. He has been working on several challenging products in Amazon, including high performance ML inference solutions and high performance logging systems. Qing’s team successfully launched the first billion-parameter model in Amazon Advertising with very low latency required. Qing has in-depth knowledge on infrastructure optimization and Deep Learning acceleration.

Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.
Deploying and benchmarking YOLOv8 on GPU-based edge devices using AWS IoT Greengrass _ The Internet of Things on AWS Official Blog.txt
The Internet of Things on AWS – Official Blog

Deploying and benchmarking YOLOv8 on GPU-based edge devices using AWS IoT Greengrass
by Romil Shah and Kevin Song | on 29 JUN 2023 | in Amazon Machine Learning, Artificial Intelligence, AWS IoT Greengrass, Technical How-to

Introduction
Customers in the manufacturing, logistics, and energy sectors often have stringent requirements for running machine learning (ML) models at the edge, including low-latency processing, poor or no connectivity to the internet, and data security. For these customers, running ML workloads at the edge offers many advantages over running them in the cloud, because the data can be processed quickly, locally, and privately. For deep learning–based ML models, GPU-based edge devices make running models at the edge practical, and AWS IoT Greengrass can help with managing edge devices and deploying ML models to them.

In this post, we demonstrate how to deploy and run YOLOv8 models, distributed under the GPLv3 license, from Ultralytics on NVIDIA-based edge devices. In particular, we use Seeed Studio's reComputer J4012, based on the NVIDIA Jetson Orin™ NX 16GB module, to test and run benchmarks with YOLOv8 models compiled with various ML libraries such as PyTorch and TensorRT, and we showcase the performance of these different YOLOv8 model formats on the reComputer J4012. AWS IoT Greengrass components provide an efficient way to deploy models and inference code to edge devices. Inference is invoked using MQTT messages, and the inference output is obtained by subscribing to MQTT topics. For customers interested in hosting YOLOv8 in the cloud, we have a blog demonstrating how to host YOLOv8 on Amazon SageMaker endpoints.

Solution overview
The following diagram shows the overall AWS architecture of the solution. Seeed Studio's reComputer J4012 is provisioned as an AWS IoT thing using AWS IoT Core and connected to a camera. A developer can build and publish the com.aws.yolov8.inference Greengrass component from their environment to AWS IoT Core. Once the component is published, it can be deployed to the identified edge device, and the messaging for the component is managed through MQTT using the AWS IoT console. Once deployed, the edge device runs inference and publishes the outputs back to AWS IoT Core using MQTT.

Prerequisites
- An AWS account with permissions for AWS IoT Core, AWS IoT Greengrass, and Amazon Simple Storage Service (Amazon S3)
- A Seeed Studio reComputer J4012 edge device
- (optional) An edge device connected to a camera or RTSP stream

Walkthrough
Step 1: Set up the edge device
Here, we describe the steps to correctly configure the reComputer J4012 edge device: installing the necessary library dependencies, setting the device to maximum power mode, and configuring the device with AWS IoT Greengrass. Currently, the reComputer J4012 comes pre-installed with JetPack 5.1 and CUDA 11.4, and by default the JetPack 5.1 system on the reComputer J4012 is not configured to run in maximum power mode. In Steps 1.1 and 1.2, we install the other necessary dependencies and switch the device into maximum power mode. Finally, in Step 1.3, we provision the device in AWS IoT Greengrass, so the edge device can securely connect to AWS IoT Core and communicate with other AWS services.
Step 1.1: Install dependencies
From the terminal on the edge device, clone the GitHub repo using the following command:

$ git clone https://github.com/aws-samples/deploy-yolov8-on-edge-using-aws-iot-greengrass

Move to the utils directory and run the install_dependencies.sh script as shown below:

$ cd deploy-yolov8-on-edge-using-aws-iot-greengrass/utils/
$ chmod u+x install_dependencies.sh
$ ./install_dependencies.sh

Step 1.2: Set the edge device to max power mode
From the terminal of the edge device, run the following commands to switch to max power mode:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

To apply the above changes, restart the device by typing 'yes' when prompted after executing the above commands.

Step 1.3: Set up the edge device with AWS IoT Greengrass
For automatic provisioning of the device, run the following commands from the reComputer J4012 terminal:

$ cd deploy-yolov8-on-edge-using-aws-iot-greengrass/utils/
$ chmod u+x provisioning.sh
$ ./provisioning.sh

(optional) For manual provisioning of the device, follow the procedures described in the AWS public documentation. The documentation walks through processes such as device registration, authentication and security setup, secure communication configuration, IoT thing creation, and policy and permission setup. When prompted for an IoT thing and IoT thing group, enter unique names for your devices; otherwise, they will be named with default values (GreengrassThing and GreengrassThingGroup). Once configured, these items will be visible in the AWS IoT Core console, as shown in the figures below.

Step 2: Download/Convert models on the edge device
Here, we will focus on the 3 major categories of YOLOv8 PyTorch models: Detection, Segmentation, and Classification. Each model task further subdivides into 5 types based on performance and complexity, summarized in the table below. Each model type ranges from 'Nano' (low latency, low accuracy) to 'Extra Large' (high latency, high accuracy) based on the size of the model.

Model size    Detection   Segmentation   Classification
Nano          yolov8n     yolov8n-seg    yolov8n-cls
Small         yolov8s     yolov8s-seg    yolov8s-cls
Medium        yolov8m     yolov8m-seg    yolov8m-cls
Large         yolov8l     yolov8l-seg    yolov8l-cls
Extra Large   yolov8x     yolov8x-seg    yolov8x-cls

We will demonstrate how to download the default PyTorch models on the edge device and convert them to the ONNX and TensorRT formats using the yolo CLI and trtexec.
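As an aside (this is not part of the original walkthrough), the same downloads and conversions performed by the CLI commands in Steps 2.1 and 2.2 below can also be scripted from Python with the ultralytics package. The following is a minimal sketch; it assumes the ultralytics package is installed on the device and, for the TensorRT export, that TensorRT is available:

# Hypothetical Python alternative to the yolo CLI commands in Steps 2.1 and 2.2.
from ultralytics import YOLO

MODEL_HEIGHT, MODEL_WIDTH = 480, 640

# Loading a named model downloads the PyTorch weights if they are not already present.
model = YOLO("yolov8n.pt")

# Export to ONNX with the same input shape used in the walkthrough.
model.export(format="onnx", imgsz=(MODEL_HEIGHT, MODEL_WIDTH))

# Export directly to a TensorRT engine (an alternative to invoking trtexec by hand).
# Note the output may be named yolov8n.engine rather than yolov8n.trt, so adjust
# model_loc in Step 3.1 accordingly.
model.export(format="engine", imgsz=(MODEL_HEIGHT, MODEL_WIDTH))

Either route should produce comparable artifacts; the CLI steps below are what the repository's scripts assume.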
Step 2.1: Download PyTorch base models
From the reComputer J4012 terminal, change to the path where you would like to download the models (referred to below as edge/device/path/to/models) and run the following commands to configure the environment:

$ echo 'export PATH="/home/$USER/.local/bin:$PATH"' >> ~/.bashrc
$ source ~/.bashrc
$ cd {edge/device/path/to/models}
$ MODEL_HEIGHT=480
$ MODEL_WIDTH=640

Run the following command on the reComputer J4012 terminal to download the PyTorch base models:

$ yolo export model=[yolov8n.pt OR yolov8n-seg.pt OR yolov8n-cls.pt] imgsz=$MODEL_HEIGHT,$MODEL_WIDTH

Step 2.2: Convert models to ONNX and TensorRT
Convert PyTorch models to ONNX models using the following command:

$ yolo export model=[yolov8n.pt OR yolov8n-seg.pt OR yolov8n-cls.pt] format=onnx imgsz=$MODEL_HEIGHT,$MODEL_WIDTH

Convert ONNX models to TensorRT models using the following commands:

$ echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/targets/aarch64-linux/lib' >> ~/.bashrc
$ echo 'alias trtexec="/usr/src/tensorrt/bin/trtexec"' >> ~/.bashrc
$ source ~/.bashrc
$ trtexec --onnx={absolute/path/edge/device/path/to/models}/yolov8n.onnx --saveEngine={absolute/path/edge/device/path/to/models}/yolov8n.trt

Step 3: Set up a local machine or EC2 instance and run inference on the edge device
Here, we will demonstrate how to use the Greengrass Development Kit (GDK) to build the component on a local machine, publish it to AWS IoT Core, deploy it to the edge device, and run inference using the AWS IoT console. The component is responsible for loading the ML model, running inference, and publishing the output to AWS IoT Core using MQTT. For the inference component to be deployed on the edge device, the inference code needs to be converted into a Greengrass component. This can be done on a local machine or an Amazon Elastic Compute Cloud (Amazon EC2) instance configured with AWS credentials and IAM policies that grant permissions to Amazon Simple Storage Service (Amazon S3).

Step 3.1: Build/Publish/Deploy the component to the edge device from a local machine or EC2 instance
From the local machine or EC2 instance terminal, clone the GitHub repository and configure the environment:

$ git clone https://github.com/aws-samples/deploy-yolov8-on-edge-using-aws-iot-greengrass
$ export AWS_ACCOUNT_NUM="ADD_ACCOUNT_NUMBER"
$ export AWS_REGION="ADD_REGION"
$ export DEV_IOT_THING="NAME_OF_IOT_THING"
$ export DEV_IOT_THING_GROUP="NAME_OF_IOT_THING_GROUP"

Open recipe.json under the components/com.aws.yolov8.inference directory and modify the items in Configuration. Here, model_loc is the location of the model on the edge device defined in Step 2.1:

"Configuration": {
    "event_topic": "inference/input",
    "output_topic": "inference/output",
    "camera_id": "0",
    "model_loc": "edge/device/path/to/models/yolov8n.pt" OR "edge/device/path/to/models/yolov8n.trt"
}

Install the GDK on the local machine or EC2 instance by running the following commands in the terminal:

$ python3 -m pip install -U git+https://github.com/aws-greengrass/aws-greengrass-gdk-cli.git@v1.2.0
$ [For Linux] apt-get install jq
$ [For MacOS] brew install jq

Build, publish, and deploy the component automatically by running the deploy-gdk-build.sh script in the utils directory on the local machine or EC2 instance:

$ cd utils/
$ chmod u+x deploy-gdk-build.sh
$ ./deploy-gdk-build.sh

Step 3.2: Run inference using AWS IoT Core
Here, we will demonstrate how to use the AWS IoT Core console to run the models and retrieve outputs.
The model selection is made in recipe.json on your local machine or EC2 instance, and the component must be re-deployed using the deploy-gdk-build.sh script after any change. Once inference starts, the edge device identifies the model framework and runs the workload accordingly. The output generated on the edge device is pushed to the cloud using MQTT and can be viewed by subscribing to the topic. The figure below shows the inference timestamp, model type, runtime, frames per second, and model format.

To view MQTT messages in the AWS Console, do the following:
- In the AWS IoT Core console, in the left menu, under Test, choose MQTT test client.
- In the Subscribe to a topic tab, enter the topic inference/output and then choose Subscribe.
- In the Publish to a topic tab, enter the topic inference/input and then enter the below JSON as the message payload. Modify the status to start, pause, or stop in order to start, pause, or stop inference:

{ "status": "start" }

Once the inference starts, you can see the output returning to the console. If you prefer to script this instead of using the console, see the sketch that follows.
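The following is a minimal sketch (not part of the repository) of publishing the same control message with the AWS SDK for Python; it assumes your AWS credentials are configured and that the component subscribes to the inference/input topic configured in recipe.json:

import json
import boto3

# Publish a control message to the topic the Greengrass component listens on.
iot_data = boto3.client("iot-data", region_name="us-east-1")  # use your region

iot_data.publish(
    topic="inference/input",                   # "event_topic" from recipe.json
    qos=1,
    payload=json.dumps({"status": "start"}),   # or "pause" / "stop"
)

Inference results still arrive on the inference/output topic, which you can watch in the MQTT test client as described above.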
Benchmarking YOLOv8 on Seeed Studio reComputer J4012
We compared ML runtimes of different YOLOv8 models on the reComputer J4012, and the results are summarized below. The models were run on a test video, and latency metrics were obtained for different model formats and input shapes. Interestingly, PyTorch model runtimes barely changed across different model input sizes, while TensorRT showed a marked improvement in runtime with reduced input shape. The reason PyTorch runtimes do not change is that the PyTorch model does not resize its own input shape; instead, the image is resized to match the model input shape, which is 640×640, so the model always processes the same tensor size.

Depending on the input size and type of model, TensorRT-compiled models performed better than PyTorch models. PyTorch models can even show worse latency when the model input shape is decreased, because of the extra padding involved. When compiling to TensorRT, the model input shape is already taken into account, which removes the padding, and hence TensorRT models perform better with reduced input shapes. The following table summarizes the latency benchmarks (pre-processing, inference, and post-processing) for different input shapes using PyTorch and TensorRT models running Detection and Segmentation. The results show the runtime in milliseconds for different model formats and input shapes. For results on raw inference runtimes, please refer to the benchmark results published in Seeed Studio's blog post.

Model input   Detection – YOLOv8n (ms)    Segmentation – YOLOv8n-seg (ms)
[H x W]       PyTorch     TensorRT        PyTorch     TensorRT
[640 x 640]   27.54       25.65           32.05       29.25
[480 x 640]   23.16       19.86           24.65       23.07
[320 x 320]   29.77       8.68            34.28       10.83
[224 x 224]   29.45       5.73            31.73       7.43

Cleaning up
While unused Greengrass components and deployments do not add to the overall cost, it is good practice to turn off the inference code on the edge device using MQTT messages, as described above. The GitHub repository also provides an automated script to cancel the deployment. The same script also helps to delete any unused deployments and components, as shown below.

From the local machine or EC2 instance, configure the environment variables again using the same values used in Step 3.1:

$ export AWS_ACCOUNT_NUM="ADD_ACCOUNT_NUMBER"
$ export AWS_REGION="ADD_REGION"
$ export DEV_IOT_THING="NAME_OF_IOT_THING"
$ export DEV_IOT_THING_GROUP="NAME_OF_IOT_THING_GROUP"

From the local machine or EC2 instance, go to the utils directory and run the cleanup_gg.py script:

$ cd utils/
$ python3 cleanup_gg.py

Conclusion
In this post, we demonstrated how to deploy YOLOv8 models to Seeed Studio's reComputer J4012 device and run inference using AWS IoT Greengrass components. In addition, we benchmarked the performance of the reComputer J4012 device with various model configurations, such as model size, type, and image size. We demonstrated the near real-time performance of the models when running at the edge, which allows you to monitor and track what's happening inside your facilities. We also shared how AWS IoT Greengrass alleviates many pain points around managing IoT edge devices, deploying ML models, and running inference at the edge. For any inquiries about how our team at AWS Professional Services can help with configuring and deploying computer vision models at the edge, please visit our website.

About Seeed Studio
We would first like to acknowledge our partners at Seeed Studio for providing us with the AWS Greengrass certified reComputer J4012 device for testing. Seeed Studio is an AWS Partner and has been serving the global developer community since 2008 by providing open technology and agile manufacturing services, with the mission to make hardware more accessible and lower the threshold for hardware innovation. Seeed Studio is NVIDIA's Elite Partner and offers a one-stop experience to simplify embedded solution integration, including custom image flashing service, fleet management, and hardware customization. Seeed Studio speeds time to market for customers by handling integration, manufacturing, fulfillment, and distribution. Learn more about their NVIDIA Jetson ecosystem.

Romil Shah
Romil Shah is a Sr. Data Scientist at AWS Professional Services. Romil has more than six years of industry experience in computer vision, machine learning, and IoT edge devices. He is involved in helping customers optimize and deploy their machine learning workloads for edge devices.

Kevin Song
Kevin Song is a Data Scientist at AWS Professional Services. He holds a PhD in biophysics and has more than five years of industry experience in building computer vision and machine learning solutions.

TAGS: machine learning at the edge, NVIDIA, object detection
Deputy Case Study _ Amazon Web Services.txt
Amazon Aurora Helps Deputy Improve Performance by 30% and Expand Customer Base to Large Organisations

Overview
Deputy provides cloud-based workforce management and scheduling solutions that enable companies to schedule complex shift work. With Amazon Aurora, Deputy took advantage of high throughput and variable scaling to speed query processing times, improve reliability, and boost performance by nearly 30 percent.

About Deputy
Deputy is on a mission to Simplify Shift Work™ for millions of shift workers and businesses globally. The company streamlines scheduling, timesheets, tasks, and communication for business owners and their workers. Deputy is available on AWS Marketplace.

Benefits
- Up to 30% improvement in query speed and latency
- Data recovery: can recover deleted records in minutes
- Rapid deployment: implemented a massive migration in 8 weeks

Opportunity | Scheduling Millions of Shift Workers on Deputy's Platform
Deputy is a cloud-based workforce scheduling platform designed to automate the complex calculations required to optimally schedule shift work. More than 330,000 workplaces and 1.4 million shift workers around the world rely on Deputy software to automate scheduling and facilitate workforce management.

Born in the cloud, Deputy's workforce scheduling platform has been powered by Amazon Web Services (AWS) since the very beginning. The original platform was built using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) with self-managed MySQL databases on Amazon EC2. As the company grew, it committed to minimizing operational burdens for its engineers, which meant moving to the fully managed Amazon Relational Database Service (Amazon RDS). "Amazon RDS allowed us to focus on our product, while leaning on Amazon scaling up and down to effortlessly serve our fast-growing, largest customers," said Deepesh Banerji, chief product officer at Deputy.

While Amazon RDS freed Deputy's engineering team from infrastructure management, the company had its sights set on other scaling solutions that would supercharge its growth by resonating with large-scale enterprises with even more complex scheduling needs. In particular, the team wanted a solution capable of handling Deputy's read-heavy application without replication lag. The company also wanted to be sure the platform could handle the larger volumes of data and queries coming from its growing stable of midmarket and enterprise customers. After working with the AWS team on a proof of concept, Deputy chose to move its workforce scheduling platform to Amazon Aurora and run on Aurora MySQL version 3, which is wire-compatible with MySQL 8.0.

Solution | Delivering High Performance for Massive Clusters
Amazon Aurora is a fully managed relational database that delivers faster queries, decreased latency, high performance, and reliability. Its high throughput rate makes it particularly well suited for computationally heavy workloads like Deputy's. "Our data stores are massive—each cluster has up to 10,000 databases, and each database can have as many as 200 tables," explained Rajini Carpenter, vice president of engineering at Deputy. "That's close to 2 million tables in a single cluster, and just watching how Amazon Aurora handles that is amazing."

Amazon Aurora also natively integrates with other critical components of Deputy's infrastructure. For example, Deputy uses Amazon OpenSearch Service for data-powered business insights and has built a data pipeline using Amazon Kinesis Data Firehose and AWS Lambda to load streaming data into OpenSearch clusters. In addition, Deputy offers a touch-free facial-analysis feature with biometric validation for employees to clock in and out, built using Amazon Rekognition. "We've received a tremendous amount of support from AWS to fuel us to go upmarket and serve larger, more complex businesses," said Qamal Kosim-Satyaputra, senior director of engineering at Deputy. "We wouldn't be here without their support."
Outcome | Boosting Performance by 30% and Improving Reliability
Since moving to Amazon Aurora, Deputy has seen an improvement in query speed and latency of up to 30 percent. The platform is also more reliable, with faster failovers and the ability to easily recover lost data. "The reliability improvements have been extremely helpful in our day-to-day operations," said Deepika Rao, engineering manager at Deputy. "In situations where our customers accidentally delete their records, we've been able to backtrack and spin up a new cluster in a matter of minutes, rather than having to restore them manually from terabytes of data."

Kosim-Satyaputra added, "Since we can lean on Amazon Aurora for scaling and maintaining our databases, we can focus on building world-class software."

AWS Services Used
Amazon Aurora: Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To learn more, visit aws.amazon.com/rds/aurora.
Amazon Relational Database Service: Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Amazon Elastic Compute Cloud: Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon Simple Storage Service: Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Design considerations for cost-effective video surveillance platforms with AWS IoT for Smart Homes _ The Internet of Things on AWS Official Blog.txt
The Internet of Things on AWS – Official Blog

Design considerations for cost-effective video surveillance platforms with AWS IoT for Smart Homes
by Thorben Sanktjohanser | on 14 JUL 2023 | in Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, Amazon Kinesis, AWS IoT Core, Intermediate (200), Internet of Things, Kinesis Video Streams, Technical How-to

Introduction
Designing and developing a cost-efficient, cloud-connected video platform for surveillance cameras and smart home devices requires developers to architect and integrate a streaming service capable of ingesting, storing, and processing unstructured media data at scale. The infrastructure behind such a platform needs to handle large volumes of predicted data load along with the flexibility to support sudden, non-forecasted demand spikes. From buffering and latency to dropped connections and data storage issues, video streaming from smart home devices can be fraught with difficulties. Therefore, one of the key objectives for a smart camera solution must be the flexibility and scalability to support millions of devices, trillions of messages, and petabytes of data. Serverless computing eliminates the need to provision servers, enables automatic scaling, optimizes costs by charging only for actual usage, and provides built-in fault tolerance and high availability. Serverless architectures promote agility, reduce operational complexity, and accelerate time-to-market for businesses.

Considerations
To deliver a smart camera solution capable of providing a scalable, reliable, and efficient video streaming service, you need to consider the costs associated with managing the servers, storage, and network hardware responsible for providing high-bandwidth, low-latency network performance. Procuring, installing, and maintaining that hardware can lower your staff's focus on creating differentiated applications and delivering a better user experience. Amazon Kinesis Video Streams is a fully managed AWS service that enables you to securely stream media for storage, analytics, and playback without provisioning servers. You do not have to build, operate, or scale any WebRTC (Web Real-Time Communication) related cloud infrastructure, such as signaling servers or media relay servers, to securely stream media across applications and devices. This makes it an ideal service to combine with AWS IoT for connected products.

HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) are two streaming protocols used to deliver pre-recorded, on-demand, and live video content from a server. WebRTC is an open-source project and set of technologies that enables real-time, low-latency peer-to-peer communication directly between web browsers or mobile applications. With Amazon Kinesis Video Streams, you can choose from two options to provide live video streaming: playing back videos from streams with HLS and DASH, or low-latency two-way media streaming with WebRTC. The option to stream with HLS and DASH leads to data transfer charges from the Kinesis Video Streams service to the internet. The Kinesis Video Streams service charges you per GB for data ingested and data consumed. There is no additional fee for data transferred from the internet to AWS. Data transferred out to the internet is free for the first 100GB of each month, as of December 1, 2021; an additional fee per GB applies to data transfer after that.
Further cost improvements can be achieved by lowering data rates using compression, or by dynamically adjusting the bitrate and frame rate of a video stream. In a 24×7 streaming scenario, I recommend lowering the bitrate to an acceptable minimum, because the bitrate used in your product is a major contributing factor to the overall Kinesis Video Streams service cost. Amazon Kinesis Video Streams supports different video codecs, such as H.264 (Advanced Video Coding, or AVC) and H.265 (High Efficiency Video Coding, or HEVC). You can read more about the differences and their trade-offs in this blog post. Consider the overall video and audio quality, the effective bitrate, the resulting data volume, and the capabilities of your hardware when selecting a codec for your product.

Data egress costs scale with the number of cameras and users on your platform when streaming live with HLS and DASH. Data egress can be avoided by using Kinesis Video Streams with WebRTC and peer-to-peer connections. Kinesis Video Streams with WebRTC uses a signaling channel to exchange connection information between peers; afterwards, the peers connect directly with each other for live streaming, instead of sending or receiving data from the AWS Cloud. Charges apply for each signaling channel active in a given month and for the number of signaling messages sent and received. There are no charges for streaming video content directly, peer-to-peer, without a relay server. In cases where direct connections are not feasible due to restrictive network conditions, a relay server (TURN) provided by Kinesis Video Streams is used; this server relays the media traffic between peers to ensure connectivity. Relaying media traffic via the TURN server is charged in streaming minutes, with an additional fee per GB for data transfer out after the first 100GB.

Architecture Overview
Figure 1. Surveillance camera platform architectural diagram.

Because Amazon Kinesis Video Streams is fully managed, you do not have to run any WebRTC infrastructure yourself; you use the Kinesis Video Streams with WebRTC SDK on the camera and the client. Up to this point, I have discussed how you can stream video from a smart camera to a client over a peer-to-peer connection and shared considerations on costs. The other part of this architecture is administering and controlling the smart camera itself—provisioning, configuration, security, and maintenance—to ensure the smart device functions properly. You can onboard your smart cameras to AWS by using AWS IoT Core to implement a secure connection between the device and AWS and to manage the devices. The service includes a device gateway and a message broker. Communication from the camera to AWS IoT Core is based on MQTT, a lightweight publish-subscribe network protocol. The recommended way of securing the management connection between smart home devices and the AWS Cloud is X.509 certificates, which allow you to authorize cameras to access services on AWS. AWS IoT Core can generate and register an individual certificate for each device at scale. In this architecture, the fleet provisioning by claim method is used: a bootstrap claim certificate is saved to the camera and automatically exchanged for a unique device certificate upon provisioning.
During the provisioning process, an AWS Lambda function reads a database table that holds information, such as serial numbers, for all manufactured surveillance cameras, in order to verify the cameras accessing the services. In this architecture, the serverless key-value database service Amazon DynamoDB is used to verify identities and to store user and device data. DynamoDB integrates seamlessly with AWS IoT services, delivering consistent single-digit-millisecond latency at any scale and enabling real-time processing and analysis of IoT data.

For communication on the client side, you can implement the serverless authenticate-and-authorize pattern to control access to your backend services. Amazon Cognito provides a user directory storing users' profile attributes, such as usernames, email addresses, and phone numbers. The client receives access tokens from Cognito to verify users and to authorize access to backend services and surveillance cameras. Amazon API Gateway handles the verification of access tokens by providing a REST API that integrates with Amazon Cognito, authorizing authenticated users' requests to be proxied from the client to the backend services. The backend services receiving and returning requests in this architecture are built with AWS Lambda, which allows you to run code on demand. You can use a Lambda function to read from the manufacturer database to verify devices and to bind user accounts to cameras. Lambda requests session credentials on demand from AWS Identity and Access Management (IAM) to access the camera's signaling channel on Kinesis Video Streams. With the generated credentials, you can isolate clients from each other.
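The post does not include code for this credential-vending step, so the following is only an illustrative sketch of how such a Lambda function might mint short-lived, per-camera credentials with a scoped-down session policy. The role name, table name, event shape, and policy contents are hypothetical, and the sample repository may implement this differently:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
sts = boto3.client("sts")

# Hypothetical table mapping user IDs to the signaling channels (cameras) they own.
DEVICE_TABLE = "user-device-bindings"
# Hypothetical pre-created role that this function is allowed to assume.
VIEWER_ROLE_ARN = "arn:aws:iam::123456789012:role/kvs-webrtc-viewer"

def handler(event, context):
    # Cognito identity forwarded by API Gateway (shape is illustrative).
    user_id = event["requestContext"]["authorizer"]["claims"]["sub"]
    channel_arn = event["queryStringParameters"]["channelArn"]

    # Confirm the authenticated user is bound to this camera before issuing credentials.
    item = dynamodb.Table(DEVICE_TABLE).get_item(Key={"userId": user_id}).get("Item", {})
    if channel_arn not in item.get("channels", []):
        return {"statusCode": 403, "body": "not your camera"}

    # The inline session policy narrows the assumed role to this one signaling
    # channel, which is what isolates clients from each other.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:DescribeSignalingChannel",
                "kinesisvideo:GetSignalingChannelEndpoint",
                "kinesisvideo:GetIceServerConfig",
                "kinesisvideo:ConnectAsViewer",
            ],
            "Resource": channel_arn,
        }],
    }
    creds = sts.assume_role(
        RoleArn=VIEWER_ROLE_ARN,
        RoleSessionName=f"viewer-{user_id[:32]}",
        Policy=json.dumps(session_policy),
        DurationSeconds=900,
    )["Credentials"]
    return {"statusCode": 200, "body": json.dumps(creds, default=str)}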
Walkthrough
You will incur costs when deploying the Amazon Kinesis Video Streams Serverless Surveillance Platform in your account. When you are finished examining the example, follow the steps in the Cleanup section to delete the infrastructure and stop incurring charges. Have a look at the README file in the repository to understand the building blocks of the platform example in detail.

You can use AWS Cloud9 to deploy the code sample. Cloud9 provides a cloud-based platform for developers to write, debug, and collaborate on code using a web browser, making it convenient and accessible from anywhere. The code sample was tested using Cloud9, which reduces the need for local setup and configuration.

Step 1: Create a Cloud9 environment
- Open Cloud9 in the AWS Management Console
- Click on Create environment
- Name your environment surveillance-camera-ide
- Click on Create and wait until the environment is created
- Choose surveillance-camera-ide and Open in Cloud9
- Open a terminal in Cloud9
- Clone the Amazon Kinesis Video Streams Serverless Surveillance Platform repository:

git clone https://github.com/aws-samples/amazon-kinesis-video-streams-serverless-surveillance-platform.git

Step 2: Deploy the surveillance camera platform
- Copy the Cloud9 ID from the address bar in your browser, i.e. <REGION>.console.aws.amazon.com/cloud9/ide/59f5e14c6cdb4fbb95f61f107b5ad86d
- Install the infrastructure from the root directory with the Cloud9 ID as follows:

cd infrastructure
sh ./install-infrastructure.sh 59f5e14c6cdb4fbb95f61f107b5ad86d

- Deploy the camera mock from the root directory as follows (the deployment of the camera takes up to 10 minutes):

cd camera
sh ./install-mock.sh

- Deploy the web client from the root directory as follows:

cd web-client
yarn install --silent
yarn start

- Open https://59f5e14c6cdb4fbb95f61f107b5ad86d.vfs.cloud9.<REGION>.amazonaws.com
- (Alternatively) Click on Preview in the top bar in Cloud9, select Preview Running Application, and select Pop Out Into New Window in the preview window

Step 3: Log in and bind the camera mock to your account
- Copy the Username and Password and select Login
- Enter the credentials and select a new password
- Set up a software MFA in the Cognito Hosted UI
- Enter the provided Serial number and Secret and select Submit
- Once the camera mock provision status is true, select BCM2835-00000000b211cf11 in the table. Refresh the page to request a status update or if an error occurs
- You will see the test stream from the camera mock as below

Figure 2. Web client sample stream from camera mock

Cleanup
Remove the infrastructure, camera mock, and Cloud9 environment:
- Remove the infrastructure from the root directory within Cloud9 as follows:

cd infrastructure
sh ./uninstall-infrastructure.sh

- Remove the camera mock from the root directory within Cloud9 as follows:

cd camera
sh ./uninstall-mock.sh

- Navigate to Cloud9 in the AWS Management Console, choose surveillance-camera-ide, and click Delete

Conclusion
The architecture covered above shows an approach for building a cloud-connected surveillance camera. With these considerations in mind, you can determine a pricing model and build a cost-efficient, cloud-connected video surveillance platform with AWS IoT. Follow the next steps and read the following resources to provide your consumers with state-of-the-art functionality and use cases:
- Integrate real-time alerts on the live video stream with Amazon Rekognition. Follow this blog post here.
- Provide your own machine learning models to cameras performing inference without a connection to the cloud. Read more about it here.
- Stream and process data from video streams locally with a machine learning appliance like AWS Panorama. Read this blog post to see how other customers leverage IoT services.
- Build a machine learning pipeline to save images from your Kinesis Video Streams stream to S3 for further processing. See this blog post to implement this feature.

About the author
Thorben Sanktjohanser is a Solutions Architect at Amazon Web Services, supporting small- and medium-sized businesses on their cloud journey with his expertise. Thorben has an information systems and management background and has gathered knowledge across different business verticals, innovating together with his customers on modern data strategies and migrations. He is passionate about IoT and building smart home devices. Almost every part of his home is automated, from light bulbs and blinds to vacuuming and mopping.
Designing a hybrid AI_ML data access strategy with Amazon SageMaker _ AWS Architecture Blog.txt
AWS Architecture Blog

Designing a hybrid AI/ML data access strategy with Amazon SageMaker
by Franklin Aguinaldo, Ananta Khanal, Sid Misra, and Tony Chen | on 10 JUL 2023 | in Amazon Elastic File System (EFS), Amazon File Cache, Amazon FSx for Lustre, Amazon SageMaker, Architecture, AWS DataSync, AWS Direct Connect, AWS Storage Gateway

Investment in artificial intelligence (AI) is at a different stage in every business organization. Many enterprises begin their machine learning (ML) journey by experimenting locally on laptops and, over time, build an on-premises cluster of servers, accumulating data and then procuring more servers and storage. Some remain completely on-premises, others are hybrid (both on-premises and cloud), and the rest have moved completely into the cloud for their AI and ML workloads. These enterprises are also researching, or have started using, the cloud to augment their on-premises systems for several reasons.

As technology improves, both the size and quantity of data increase over time. The amount of data captured and the number of datapoints continue to expand, which presents a challenge to manage on-premises. Many enterprises are distributed, with offices in different geographic regions, continents, and time zones. While it is possible to increase the on-premises footprint and network pipes, there are still hidden costs to consider for maintenance and upkeep. These organizations are looking to the cloud to shift some of that effort and enable them to burst and use the rich AI and ML features in the cloud.

Defining a hybrid data access strategy
Moving ML workloads into the cloud calls for a robust hybrid data strategy describing how and when you will connect your on-premises data stores to the cloud. For most, it makes sense to make the cloud the source of truth, while still permitting your teams to use and curate datasets on-premises. Defining the cloud as the source of truth for your datasets means the primary copy will be in the cloud, and any dataset generated will be stored in the same location in the cloud. This ensures that requests for data are served from the primary copy and any derived copies.

A hybrid data access strategy should address the following:
- Understand your current and future storage footprint for ML on-premises. Create a map of your ML workloads, along with performance and access requirements for testing and training.
- Define connectivity across on-premises locations and the cloud. This includes east-west and north-south traffic to support interconnectivity between sites, and the required bandwidth and throughput for the data movement workload requirements.
- Define your single source of truth (SSOT)[1] and where the ML datasets will primarily live. Consider how dated, new, hot, and cold data will be stored.
- Define your storage performance requirements, mapping them to the appropriate cloud storage services. This will give you the ability to take advantage of cloud-native ML with Amazon SageMaker.

Hybrid data access strategy architecture
To help address these challenges, we outline an end-to-end system architecture in Figure 1 that defines: 1) connectivity between on-premises data centers and AWS Regions; 2) mappings for on-premises data to the cloud; and 3) aligning Amazon SageMaker to appropriate storage, based on ML requirements.

Figure 1. AI/ML hybrid data access strategy reference architecture

Let's explore this architecture step by step.
1. On-premises connectivity to the AWS Cloud runs through AWS Direct Connect for high transfer speeds.
2. AWS DataSync is used for migrating large datasets into Amazon Simple Storage Service (Amazon S3). The AWS DataSync agent is installed on-premises.
3. On-premises network file system (NFS) or server message block (SMB) data is bridged to the cloud through Amazon S3 File Gateway, using either a virtual machine (VM) or a hardware appliance.
4. AWS Storage Gateway uploads data into Amazon S3 and caches it on-premises.
5. Amazon S3 is the source of truth for ML assets stored in the cloud.
6. Download S3 data for experimentation to Amazon SageMaker Studio.
7. Amazon SageMaker notebook instances can access data through S3, Amazon FSx for Lustre, and Amazon Elastic File System (EFS). Use Amazon File Cache for high-speed caching of on-premises data, and Amazon FSx for NetApp ONTAP for cloud bursting.
8. SageMaker training jobs can use data in Amazon S3, EFS, and FSx for Lustre. S3 data is accessed via File, Fast File, or Pipe mode, and data is pre-loaded or lazy-loaded when using FSx for Lustre as training job input. Any existing data on EFS can be made available to training jobs as well.
9. Leverage Amazon S3 Glacier for archiving data and reducing storage costs.

ML workloads using Amazon SageMaker
Let's go deeper into how SageMaker can help you with your ML workloads. To start mapping ML workloads to the cloud, consider which AWS storage services work with Amazon SageMaker. Amazon S3 typically serves as the central storage location for both structured and unstructured data that is used for ML. This includes raw data coming from upstream applications, as well as curated datasets that are organized and stored as part of a feature store.

In the initial phases of development, a SageMaker Studio user will use S3 APIs to download data from S3 to their private home directory. This home directory is backed by a SageMaker-managed EFS file system. Studio users then point their notebook code (also stored in the home directory) at the local dataset and begin their development tasks.

To scale up and automate model training, SageMaker users can launch training jobs that run outside of the SageMaker Studio notebook environment. There are several options for making data available to a SageMaker training job (a short code sketch of the input-mode choice follows this list):
- Amazon S3. Users can specify the S3 location of the training dataset. When using S3 as a data source, there are three input modes to choose from:
  - File mode. This is the default input mode, where SageMaker copies the data from S3 to the training instance storage. This storage is either a SageMaker-provisioned Amazon Elastic Block Store (Amazon EBS) volume or an NVMe SSD that is included with specific instance types. Training only starts after the dataset has been downloaded to the storage, and there must be enough storage space to fit the entire dataset.
  - Fast file mode. Fast file mode exposes S3 objects as a POSIX file system on the training instance. Dataset files are streamed from S3 on demand, as the training script reads them. This means that training can start sooner and requires less disk space. Fast file mode also does not require changes to the training code.
  - Pipe mode. Pipe input also streams data from S3 as the training script reads it, but requires code changes. Pipe input mode is largely replaced by the newer and easier-to-use fast file mode.
- FSx for Lustre. Users can specify an FSx for Lustre file system, which SageMaker will mount to the training instance before running the training code. When the FSx for Lustre file system is linked to an S3 bucket, the data can be lazily loaded from S3 during the first training job. Subsequent training jobs on the same dataset can then access it with low latency. Users can also choose to pre-load the file system with S3 data using hsm_restore commands.
- Amazon EFS. Users can specify an EFS file system that already contains their training data. SageMaker will mount the file system on the training instance and run the training code.

Find out how to choose the best data source for your SageMaker training job.
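To make the input-mode choice concrete, here is a minimal sketch of launching a training job with fast file mode using the SageMaker Python SDK. The image URI, role, instance type, and bucket below are placeholders rather than values from this post:

from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Placeholders -- substitute your own container image, execution role, and data location.
estimator = Estimator(
    image_uri="<your-training-image-uri>",
    role="<your-sagemaker-execution-role-arn>",
    instance_count=1,
    instance_type="ml.g5.xlarge",
)

# "FastFile" streams S3 objects on demand as the training script reads them;
# "File" would download the entire dataset to instance storage before training
# starts, and "Pipe" also streams but requires training-code changes.
train_input = TrainingInput(
    s3_data="s3://<your-bucket>/training-data/",
    input_mode="FastFile",
)

estimator.fit({"training": train_input})

For an FSx for Lustre or EFS data source, you would instead pass a FileSystemInput (also in sagemaker.inputs) pointing at the file system ID and directory path.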
Conclusion
With this reference architecture, you can develop and deliver ML workloads that run either on-premises or in the cloud. Your enterprise can continue using its on-premises storage and compute for particular ML workloads, while also taking advantage of the cloud using Amazon SageMaker. The scale available in the cloud allows your enterprise to conduct experiments without worrying about capacity. Start defining your hybrid data strategy on AWS today!

Additional resources:
- Choose the best data source for your Amazon SageMaker training job
- Hybrid Machine Learning Whitepaper
- Access training data with Amazon SageMaker
- Learn more about how to migrate data into the AWS Cloud
- Learn more about different AWS storage offerings

[1] The practice of aggregating data from many sources to a single source or location.

Franklin Aguinaldo
Franklin is a Senior Solutions Architect at Amazon Web Services. He has over 20 years of experience in development and architecture. Franklin is an App Modernization SME and an expert on serverless and containers.

Ananta Khanal
Ananta Khanal is a Solutions Architect focused on cloud storage solutions at AWS. He has worked in IT for over 15 years and has held various roles in different companies. He is passionate about cloud technology, infrastructure management, IT strategy, and data management.

Sid Misra
Sid Misra is a Senior Product Manager on the Amazon File Storage team. Sid has 15+ years of experience leading product and engineering teams focused on enterprise software, machine learning, computer vision, and wireless communications.

Tony Chen
Tony Chen is a Machine Learning Solutions Architect at Amazon Web Services, helping customers design scalable and robust machine learning capabilities in the cloud. As a former data scientist and data engineer, he leverages his experience to help tackle some of the most challenging problems organizations face with operationalizing machine learning.
Developing a Pioneering Multicancer Early Detection Test _ GRAIL Case Study _ AWS.txt
GRAIL Develops a Pioneering Multicancer Early Detection Test Using AWS

Overview
Learn how biotechnology company GRAIL used Amazon EC2 and 60 other scalable AWS services to pioneer new technologies for early cancer detection.

Aiming to shift the paradigm from screening for individual cancers to screening individuals for cancer and to detect cancers earlier, biotechnology innovator GRAIL created a multicancer early detection test, Galleri. It detects a cancer signal shared by over 50 types of cancer—over 45 of which currently lack recommended screening—through a blood draw. Combining next-generation genomics sequencing, population-scale clinical studies, state-of-the-art data science, and machine learning, GRAIL used a range of offerings from Amazon Web Services (AWS) to test and commercially scale its platform while achieving significant cost savings, scalability, reliability, and architecture optimization. In a clinical study, GRAIL's test demonstrated high overall sensitivity, less than 1 percent false positive rates based on 99.5 percent specificity, and high accuracy in participants with a positive cancer signal.

Benefits
- Scaled to ingest data from participants in a 140,000-person trial
- 40% savings per gigabyte of storage cost
- Supports secure data encryption
- Optimized architecture

About GRAIL
Headquartered in Menlo Park, California, GRAIL is a healthcare company working on innovative cancer-detection technologies.

Opportunity | Developing a Cancer Detection Test in 5 Years with Robust Clinical Validation
The earlier cancer is diagnosed, the higher the chance of successful treatment and survival. In the United States today, around 70 percent of all cancer-related deaths are from cancers with no recommended screening. GRAIL's mission is to detect cancers earlier, when they have a higher probability of being cured. Its pioneering Galleri test analyzes a single blood draw to detect multiple types of cancers—most of which cannot be detected with current screening paradigms. It also predicts with high accuracy where the cancer originated in those diagnosed with cancer. "No one knew if an assay would be able to detect multiple cancers at the same time through a blood test," says Satnam Alag, senior vice president for software development and chief security officer of GRAIL. "With Galleri, we met success and results complementary to traditional standard-of-care screening."

To make sure that Galleri met its required clinical validation, the team embarked on one of the largest clinical development programs in genomic medicine: a pivotal clinical trial across 142 sites in the United States and Canada, tracking over 15,000 participants over 5 years. It involved collecting genomic sequencing data at a massive scale and using it to build model-training classifiers. Once the models were ready, bioinformaticians could run and develop pipelines at scale. Using AWS, GRAIL built a scalable infrastructure to handle large amounts of genomic data so that bioinformaticians could focus on applying their expertise in building pipelines instead of worrying about scaling infrastructure. "Using AWS provided us with reliable, cost-effective services to build Galleri," says Olga Ignatova, director of software development at GRAIL.
Solution | Achieving Scalability, Cost Savings, and Security Using AWS
Launched in 2021, the Galleri test takes genetic data from a single blood draw and screens for a cancer signal by analyzing DNA methylation patterns. The team uses AWS to support the commercial scaling of the infrastructure to meet high demand and to fuel the software that runs its labs. The infrastructure uses over 60 AWS services.

Because GRAIL deals with sensitive health-related information, having a strong networking and security program is imperative. To make sure that its data is secure and complies with data privacy laws, GRAIL uses Amazon Virtual Private Cloud (Amazon VPC), which lets organizations define and launch AWS instances in a logically isolated virtual network, with guardrails in place to control access to sensitive data. "AWS provides really good infrastructure and capabilities that we use for data protection and encryption at rest and in transit," says Alag. "We're making use of the controls on AWS to restrict access to our sensitive data." GRAIL expands into different AWS Regions and scales globally while meeting data residency requirements by using the 87 Availability Zones on AWS.

For the compute resources to run Galleri tests at scale, GRAIL uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. "One of the biggest values of using AWS is that we can concentrate up the stack without needing to worry about scale associated with storage or compute," says Alag. To cost-efficiently run its computational workloads, the company uses Amazon EC2 Spot Instances, which let users take advantage of unused Amazon EC2 capacity. For its databases, GRAIL uses Reserved DB instances for Aurora, which provide a significant discount compared to On-Demand database instance pricing.

To address its storage needs, GRAIL uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. The company has achieved cost savings using Amazon S3 Intelligent-Tiering, which automates storage cost savings by migrating data to the most cost-effective access tier when access patterns change. "We transitioned most of our data to S3 Intelligent-Tiering, which led to 40 percent savings per gigabyte of storage cost," says Ignatova.

The GRAIL team developed Reflow to manage its bioinformatics workloads on AWS. The Reflow language helps bioinformaticians compose existing tools—packaged in Docker images—using ordinary programming constructs. The Reflow runtime is deployed in Amazon Elastic Kubernetes Service (Amazon EKS) clusters; Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers. The runtime evaluates Reflow programs and parallelizes workloads onto Spot Instances, further reducing costs. It also improved performance through incremental data processing and memoization of results. "We are constantly looking for opportunities to optimize our architecture and to get the boost of using AWS services that we haven't used before and changing our architecture to take advantage of those," says Alag.
Outcome | Improving Testing Over Time Using AWS
In 2021 GRAIL partnered with the National Health Service (NHS) of England to implement Galleri in the largest multiyear, multicancer early detection trial to date, including 140,000 participants at mobile clinics operating in 150 locations around England. Those participating were recruited in a record 10 months. The enrollment ended in July 2022, and screenings are scheduled to continue for participants annually for 3 years. The NHS might eventually roll out the Galleri test to an additional one million people and has a long-term goal of detecting 75 percent of cancers while they are less advanced.

Adding Galleri to the five US-recommended cancer screenings could potentially reduce 5-year cancer mortality by 39 percent in those intercepted. GRAIL is working on more clinical trials to add more data to prove the efficacy of the Galleri test and is looking for ways to further improve the performance and cost of the test as it scales to a larger population. "We wouldn't have been able to scale, perform the huge number of computations, and store the large amounts of data that we deal with daily as easily without AWS infrastructure," says Alag. "Using AWS will be key for us as we scale the system across the world."

AWS Services Used
Amazon Elastic Compute Cloud: Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable compute capacity for virtually any workload.
Amazon Simple Storage Service: Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon EKS: Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
Amazon VPC: Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.
Dexatek Optimizes Its IoT Platform and Boosts Spend on Innovation by 30 with AWS _ Dexatek Case Study _ AWS.txt
Dexatek Optimizes Its IoT Platform and Boosts Spend on Innovation by 30% with AWS

Overview
Dexatek Technology, based in Taiwan, gives electronic consumer products smart capabilities using its IoT solutions. To optimize its IoT platform for processing data from smart devices, Dexatek migrated to AWS IoT Core and AWS Lambda along with the Amazon DynamoDB database service. By doing so, Dexatek increased the performance of its Internet of Things (IoT) platform, enhanced security, and lowered management time.

About Dexatek Technology
Dexatek Technology, headquartered in New Taipei City, designs, manufactures, and promotes Internet of Things (IoT) consumer electronic products. Founded in 2003, the company provides solutions for a range of smart appliances, covering home security, wellbeing, and more.

Benefits
- 30% more available resources for innovation
- 10x increase in processing performance
- Lowered coding and testing times from months to under 5 days
- Greater security through automated encryption and authentication

Opportunity | Making Smart Devices Easier to Scale and Less Management Intensive
Dexatek Technology helps consumer electronics companies incorporate smart technology into products like light switches, thermostats, and air-conditioning units. It equips businesses with IoT capabilities so that customers can remotely monitor and control their devices, such as adjusting the temperatures of their homes or scheduling when their lights come on. The company is taking advantage of the growing market for smart home products, which is expected to attract $173 billion in consumer spending worldwide by 2025.

To drive growth in this expanding market, Dexatek wanted to optimize its Amazon Web Services (AWS) infrastructure that supported the processing of smart-device data. The infrastructure was based on a combination of Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Simple Storage Service (Amazon S3) to handle the transfer of information to and from devices via the MQTT protocol.

Dexatek hoped to create a more scalable IoT platform that reduced management time while maintaining a high level of security. Jerry Chen, chief executive officer at Dexatek Technology, explains, "We had to scale our instances manually and schedule regular maintenance to update servers as well as the security certification for our MQTT connections. We wanted to eliminate these administrative activities so we could focus on development and growing the company."

With optimization as its goal, Dexatek looked at moving from Amazon EC2 to the AWS Lambda serverless service. In addition, it began investigating AWS IoT Core to join, manage, and scale its smart device connections without having to think about security. Says Chen, "We decided to engage with AWS Solutions Architects to make sure we proceeded correctly. We wanted them to double-check everything we did to avoid any delays in the optimization process."
We wanted to eliminate these administrative activities so we could focus on development and growing the company.”

With optimization as its goal, Dexatek looked at moving from Amazon EC2 to the AWS Lambda serverless service. In addition, it began investigating AWS IoT Core to join, manage, and scale its smart device connections without having to think about security. Says Chen, “We decided to engage with AWS Solutions Architects to make sure we proceeded correctly. We wanted them to double-check everything we did to avoid any delays in the optimization process.”

Solution | Freeing Up Resources for Innovation with AWS IoT Core

Working closely with AWS, Dexatek successfully migrated to AWS Lambda with AWS IoT Core to securely connect smart devices, and Amazon DynamoDB to easily store and query device data. The strong working relationship with AWS helped the Dexatek team save a lot of work. “We completed development, including all APIs and basic testing, in under three months instead of six to eight months as expected for a project like this.”

With development finished, Dexatek Technology is completing a final technical review before fully adopting the AWS IoT Core–based serverless architecture. Chen expects it to significantly reduce the amount of infrastructure management that IT personnel will need to perform.
“We estimate that by moving to AWS IoT Core along with AWS Lambda, we can shift 30 percent more IT resources to product development,” he says.

In addition to being simpler to administer, the platform scales automatically as more IoT connections are added, and data travels between the platform and devices 10 times faster. “With AWS IoT Core, we can drive growth without worrying about platform workloads and offer businesses a level of performance that exceeds many of our competitors,” comments Chen.

By optimizing its platform with AWS IoT Core and going serverless, Dexatek has tightened the security of device connections through mutual authentication and end-to-end encryption. “I think the overall stability of the platform is also greater,” adds Chen, “which means I can go to bed at night and not think about problems such as a server causing the platform to go down.”

Dexatek can also onboard businesses quicker, launching IoT platform demos for new customers in less than a week—a process that could previously take three months. This is because AWS IoT Core makes coding easier and testing periods shorter. Chen explains, “We give the engineers a heads up on what we need them to do, and after three to five days, they’re saying, ‘It’s done.’”

Thanks to this optimization, the company has been able to dedicate 30 percent more resources to innovation and speed up coding and testing times, while scaling platform performance tenfold. In addition, Dexatek is now looking to new markets and has launched a product on the AWS Marketplace.

Outcome | Creating Opportunities for New Markets with AWS

With AWS, Dexatek can continue pursuing expansion, using the platform’s scalability to seize new business opportunities. As a first step, the company has launched its Dexatek IoT Core solution on the AWS Marketplace to offer businesses an out-of-the-box solution, complete with mobile apps, that provides their products with smart capabilities.

The ability to easily expand the IoT platform is helping Dexatek focus on new markets. Chen, for one, is already looking to a near future where the company goes beyond smart homes. “We have the expertise and the capabilities to support the transfer of IoT data from devices and sensors on cars just as well as in the home, which means fleet management could be an area of interest for the future,” he says.

The company is currently experimenting with Amazon SageMaker to help train machine learning models and AWS IoT Greengrass to leverage pre-built software components that would speed up delivery of the IoT device software. “If your goals are to reduce costs and make IoT devices smarter, then AWS has what you need,” Chen concludes.

About Dexatek Technology

Dexatek Technology, headquartered in New Taipei City, designs, manufactures, and promotes Internet of Things (IoT) consumer electronic products. Founded in 2003, the company provides solutions for a range of smart appliances, covering home security, wellbeing, and more.

AWS Services Used

AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
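The serverless pattern described above (devices publishing over MQTT to AWS IoT Core, with a rule invoking AWS Lambda to persist state in Amazon DynamoDB) can be sketched briefly. The following minimal Python Lambda handler is illustrative only, not Dexatek's code; the table name and payload schema are assumptions:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DeviceTelemetry")  # hypothetical table name

def handler(event, context):
    # An AWS IoT Core rule (for example, SELECT * FROM 'home/+/telemetry')
    # can invoke this function with the device's MQTT payload as the event.
    table.put_item(
        Item={
            "deviceId": event["deviceId"],    # assumed partition key
            "timestamp": event["timestamp"],  # assumed sort key
            "state": event.get("state", {}),  # remaining device readings
        }
    )
    return {"status": "stored"}

Because the rule engine, Lambda, and DynamoDB all scale on demand, this design avoids the manual instance scaling and server maintenance that Dexatek describes.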
Directing ML-powered Operational Insights from Amazon DevOps Guru to your Datadog event stream _ AWS DevOps Blog.txt
AWS DevOps Blog

Directing ML-powered Operational Insights from Amazon DevOps Guru to your Datadog event stream

by Bineesh Ravindran and David Ernst | on 13 JUL 2023 | in Amazon DevOps Guru, Amazon Machine Learning, Artificial Intelligence, AWS CLI, DevOps, Integration & Automation, Technical How-to | Permalink

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies in business mission-critical workloads. If you already use Datadog to automate infrastructure monitoring, application performance monitoring, and log management for real-time observability of your entire technology stack, then this blog is for you.

You might already be using the Datadog Events interface for a consolidated view that lets you search, analyze, and filter events from many different sources in one place. Datadog Events are records of notable changes relevant for managing and troubleshooting IT operations, such as code deployments, service health, configuration changes, and monitoring alerts.

Whenever DevOps Guru detects operational events in your AWS environment that could lead to outages, it generates insights and recommendations. These insights and recommendations are then pushed to a user-specific Datadog endpoint using the Datadog Events API. You can then create dashboards, incidents, or alarms, or take corrective automated actions based on these insights and recommendations in Datadog.

Datadog collects and unifies all of the data streaming from these complex environments, with a 1-click integration for pulling in metrics and tags from over 90 AWS services. Companies can deploy the Datadog Agent directly on their hosts and compute instances to collect metrics with greater granularity—down to one-second resolution. And with Datadog’s out-of-the-box integration dashboards, companies get not only a high-level view into the health of their infrastructure and applications but also deeper visibility into individual services such as AWS Lambda and Amazon EKS.

This blog post will show you how to use Amazon DevOps Guru with Datadog to get real-time insights and recommendations on your AWS infrastructure. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically be pushed to Datadog’s event stream, which can then be used to create dashboards, alarms, and alerts to take corrective actions.

Solution Overview

When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule is used to capture the insight as an event and route it to an AWS Lambda function target. The Lambda function interacts with Datadog using a REST API to push the corresponding DevOps Guru events captured by Amazon EventBridge. The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog, we will be capturing all DevOps Guru insights and will be performing actions on Datadog for the following DevOps Guru events:

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed

Figure 1: Amazon DevOps Guru integration with Datadog using Amazon EventBridge and AWS Lambda.
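The SAM application deployed later in this post creates this EventBridge rule for you. For readers who want to see the shape of such a rule, here is a minimal boto3 sketch; the rule name, target ID, and function ARN are illustrative placeholders:

import boto3

events = boto3.client("events")

# Match all Amazon DevOps Guru notifications in this account and Region.
# The pattern can be narrowed (for example, by "detail-type") to capture
# only specific insight events from the list above.
events.put_rule(
    Name="devops-guru-insights-to-datadog",
    EventPattern='{"source": ["aws.devops-guru"]}',
    State="ENABLED",
)

events.put_targets(
    Rule="devops-guru-insights-to-datadog",
    Targets=[
        {
            "Id": "datadog-connector-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:Functions",  # placeholder ARN
        }
    ],
)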
Solution Implementation Steps

Pre-requisites

Before you deploy the solution, complete the following steps.

Datadog Account Setup: We will be connecting your AWS account with Datadog. If you do not have a Datadog account, you can request a free trial developer instance through Datadog.

Datadog Credentials: Gather the Datadog keys that will be used to connect with AWS. Follow the steps below to create an API key and an application key.

Add an API key or client token

To add a Datadog API key or client token:
Navigate to Organization Settings, then click API Keys or Client Tokens.
Click the New Key or New Client Token button, depending on which you’re creating.
Enter a name for your key or token.
Click Create API key or Create Client Token.
Note down the newly generated API key value. We will need this in later steps.

Figure 2: Create new API Key.

Add application keys

To add a Datadog application key, navigate to Organization Settings > Application Keys. If you have the permission to create application keys, click New Key. Note down the newly generated application key. We will need this in later steps.

Add the Application Key and API Key to AWS Secrets Manager: Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can’t be compromised by someone examining your code, because the secret no longer exists in the code. Follow the steps below to create a new secret in AWS Secrets Manager.

Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/
Choose Store a new secret.
On the Choose secret type page, do the following: For Secret type, choose Other type of secret. In Key/value pairs, enter your secret as key/value pairs.

Figure 3: Create new secret in Secrets Manager.

Click Next and enter “DatadogSecretManager” as the secret name, followed by Review and Finish.

Figure 4: Configure secret in Secrets Manager.

Enable DevOps Guru for your applications by following these steps, or you can follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application. AWS Cloud9 is recommended to create an environment, as the AWS Serverless Application Model (SAM) CLI and AWS Command Line Interface (CLI) are pre-installed and can be accessed from a bash terminal.

Install and set up the SAM CLI – Install the SAM CLI.
Download and set up Java – The version should match the runtime that you defined in the SAM template.yaml serverless function configuration. Install the Java SE Development Kit 11.
Maven – Install Maven.

Option 1: Deploy Datadog Connector App from AWS Serverless Application Repository

The DevOps Guru Datadog Connector application is available on the AWS Serverless Application Repository, which is a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code. Follow the steps below to quickly deploy this serverless application in your AWS account.

Log in to the AWS Management Console of the account to which you plan to deploy this solution.
Go to the DevOps Guru Datadog Connector application in the AWS Serverless Application Repository and click “Deploy”.
The Lambda application deployment screen will be displayed, where you can enter the Datadog application name.

Figure 5: DevOps Guru Datadog connector.

Figure 6: Serverless Application DevOps Guru Datadog connector.
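While the deployment completes, it is worth seeing how the connector uses the credentials stored earlier. The actual connector is implemented in Java; the following is a minimal Python sketch of the same pattern, where the secret name matches the one created above and the JSON key names inside the secret are assumptions that should match whatever key/value pairs you stored:

import json
import boto3
import urllib3

secrets_client = boto3.client("secretsmanager")
http = urllib3.PoolManager()

def push_insight_to_datadog(title, text):
    # Fetch the Datadog keys stored in AWS Secrets Manager earlier.
    secret = secrets_client.get_secret_value(SecretId="DatadogSecretManager")
    keys = json.loads(secret["SecretString"])

    # Post the DevOps Guru insight as an event to the Datadog Events API.
    response = http.request(
        "POST",
        "https://api.datadoghq.com/api/v1/events",
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": keys["api-key"],                  # assumed key name in your secret
            "DD-APPLICATION-KEY": keys["application-key"],  # assumed key name in your secret
        },
        body=json.dumps({
            "title": title,
            "text": text,
            "source_type_name": "amazon devops guru",  # free-form source label
        }),
    )
    return response.status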
After successful deployment, the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-Datadog-Connector application. The CloudFormation template creates four resources:

A Lambda function which has the logic to integrate with Datadog
An EventBridge rule for the DevOps Guru insights
A Lambda permission
An IAM role

Now skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights/recommendations and validate that the events are created and updated in Datadog.

Option 2: Build and Deploy sample Datadog Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Application Repository with one-click deployment. Alternatively, you can choose to clone the GitHub source repository and deploy using the SAM CLI from your terminal.

The AWS Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the pre-requisites section at the beginning, which should set up the AWS SAM CLI, Maven, and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Clone the source code from the GitHub repo:

git clone https://github.com/aws-samples/amazon-devops-guru-connector-datadog.git

Build the sample application using the SAM CLI:

$ cd DatadogFunctions
$ sam build
Building codeuri: $\amazon-devops-guru-connector-datadog\DatadogFunctions\Functions runtime: java11 metadata: {} architecture: x86_64 functions: Functions
Running JavaMavenWorkflow:CopySource
Running JavaMavenWorkflow:MavenBuild
Running JavaMavenWorkflow:MavenCopyDependency
Running JavaMavenWorkflow:MavenCopyArtifacts

Build Succeeded

Built Artifacts  : .aws-sam\build
Built Template   : .aws-sam\build\template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

This command will build the source of your application by installing the dependencies defined in Functions/pom.xml, create a deployment package, and save it in the .aws-sam/build folder.

Deploy the sample application using the SAM CLI:

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name.
AWS Region: The AWS region you want to deploy your application to.
Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.
Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn’t provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.
Disable rollback [Y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.
Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see something like the following if you have provided Y to view and confirm change sets. Proceed by providing Y to deploy the resources.

Initiating deployment
=====================
Uploading to sam-app-datadog/0c2b93e71210af97a8c57710d0463c8b.template 1797 / 1797 (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
---------------------------------------------------------------------------------------
Operation   LogicalResourceId               ResourceType              Replacement
---------------------------------------------------------------------------------------
+ Add       FunctionsDevOpsGuruPermission   AWS::Lambda::Permission   N/A
+ Add       FunctionsDevOpsGuru             AWS::Events::Rule         N/A
+ Add       FunctionsRole                   AWS::IAM::Role            N/A
+ Add       Functions                       AWS::Lambda::Function     N/A
---------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:us-east-1:867001007349:changeSet/samcli-deploy1680640852/bdc3039b-cdb7-4d7a-a3a0-ed9372f3cf9a

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset?
[y/N]: y

2023-04-04 15:41:06 - Waiting for stack create/update to complete

CloudFormation events from stack operations (refresh every 5.0 seconds)
---------------------------------------------------------------------------------------
ResourceStatus       ResourceType                LogicalResourceId               ResourceStatusReason
---------------------------------------------------------------------------------------
CREATE_IN_PROGRESS   AWS::IAM::Role              FunctionsRole                   -
CREATE_IN_PROGRESS   AWS::IAM::Role              FunctionsRole                   Resource creation Initiated
CREATE_COMPLETE      AWS::IAM::Role              FunctionsRole                   -
CREATE_IN_PROGRESS   AWS::Lambda::Function       Functions                       -
CREATE_IN_PROGRESS   AWS::Lambda::Function       Functions                       Resource creation Initiated
CREATE_COMPLETE      AWS::Lambda::Function       Functions                       -
CREATE_IN_PROGRESS   AWS::Events::Rule           FunctionsDevOpsGuru             -
CREATE_IN_PROGRESS   AWS::Events::Rule           FunctionsDevOpsGuru             Resource creation Initiated
CREATE_COMPLETE      AWS::Events::Rule           FunctionsDevOpsGuru             -
CREATE_IN_PROGRESS   AWS::Lambda::Permission     FunctionsDevOpsGuruPermission   -
CREATE_IN_PROGRESS   AWS::Lambda::Permission     FunctionsDevOpsGuruPermission   Resource creation Initiated
CREATE_COMPLETE      AWS::Lambda::Permission     FunctionsDevOpsGuruPermission   -
CREATE_COMPLETE      AWS::CloudFormation::Stack  sam-app-datadog                 -
---------------------------------------------------------------------------------------

Successfully created/updated stack - sam-app-datadog in us-east-1

Once the deployment succeeds, you should be able to see the successful creation of your resources. You can also find your Lambda function, IAM role, and EventBridge rule in the CloudFormation stack output values.

You can also choose to test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Refer to Invoking Lambda functions locally – AWS Serverless Application Model for more details.

$ sam local invoke Functions -e event/event.json

Once you are done with the above steps, move on to the “Test the Solution” section below to trigger some DevOps Guru insights and validate that the events are created and pushed to Datadog.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight. You can generate an insight by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as shown below.

Figure 7: DevOps Guru insight for DynamoDB.

For the DevOps Guru insight shown above, a corresponding event is automatically created and pushed to Datadog, as shown below. In addition to the event creation, any new anomalies and recommendations from DevOps Guru are also associated with the events.

Figure 8: DevOps Guru insight pushed to Datadog event stream.

Cleaning Up

To delete the sample application that you created, open a new terminal in your AWS Cloud9 environment. Then run the AWS CLI command below, passing the stack name you provided in the deploy step:

aws cloudformation delete-stack --stack-name <Stack Name>

Alternatively, you could also use the AWS CloudFormation console to delete the stack.
Conclusion

This article highlights how Amazon DevOps Guru monitors resources within a specific region of your AWS account, automatically detecting operational issues, predicting potential resource exhaustion, identifying probable causes, and recommending remediation actions. It describes a bespoke solution enabling integration of DevOps Guru insights with Datadog, enhancing management and oversight of AWS services. This solution helps customers who use Datadog to bolster operational efficiency, delivering customized insights, real-time alerts, and management capabilities directly from DevOps Guru, offering a unified interface to swiftly restore services and systems.

To start gaining operational insights on your AWS infrastructure with Datadog, head over to the Amazon DevOps Guru documentation page.

About the authors:

Bineesh Ravindran

Bineesh is a Solutions Architect at Amazon Web Services (AWS) who is passionate about technology and loves to help customers solve problems. Bineesh has over 20 years of experience in designing and implementing enterprise applications. He works with AWS partners and customers to provide them with architectural guidance for building scalable architectures and executing strategies to drive adoption of AWS services. When he’s not working, he enjoys biking, aquascaping, and playing badminton.

David Ernst

David is a Sr. Specialist Solution Architect – DevOps, with 20+ years of experience in designing and implementing software solutions for various industries. David is an automation enthusiast and works with AWS customers to design, deploy, and manage their AWS workloads/architectures.

TAGS: AI/ML, AIOps, Amazon DevOps Guru, AWS Serverless Application Model (SAM), DevOps, Observability
DTN Case Study _ HPC _ AWS.txt
DTN Doubles Weather Forecasting Performance Using Amazon EC2 Hpc6a Instances

2022

Benefits of AWS

Increased high-resolution model frequency from two to four runs per day
Rendered 1 hour of forecast data in under 1 minute in a test scenario
Supports faster results and more-timely insights to customers

Organizations in weather-sensitive industries need highly accurate and near-real-time weather intelligence to make adept business decisions. Many companies in these industries rely on information from DTN, a global data, analytics, and technology company, for that information. To deliver high-level operational intelligence for weather-dependent industries, DTN deploys a suite of proprietary and supplementary weather data and models that deliver sophisticated, high-resolution outputs and require continual processing of vast amounts of data from inputs across the globe. This complexity has historically limited how often forecast engines can update. To optimize its solutions for customers worldwide, DTN sought innovative ways to efficiently increase the frequency and accuracy of its weather forecasting models.

Helping Critical Organizations Make Data-Driven Decisions

DTN engaged the AWS team in fall 2020 to explore how to efficiently increase the frequency of forecast outputs. Starting with existing data from Hurricane Laura as a benchmark, DTN developed and tested HPC infrastructures alongside the AWS team over 18 months to optimize the throughput potential of its forecast models. “We found a lot of value in collaborating with the AWS team,” says Brent Shaw, chief weather architect and director of core content services at DTN. “As our engineers optimized our weather science workflows, AWS provided support in optimizing the HPC infrastructure. These changes led to improvements across our weather modeling technology stack.”

DTN began testing the high-performance computing (HPC) capabilities of AWS and running data processing and modeling workloads on Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure, resizable compute capacity in the cloud. As a proof of concept, DTN used historical data from Hurricane Laura, a category 4 hurricane that made landfall in Louisiana in August 2020. Using HPC on AWS, the company could reliably, accurately, and consistently double the frequency with which it could generate high-resolution weather forecasts. With faster model output, DTN can generate more-timely and valuable insights for organizations that depend on them for safe and sustainable operations. For example, DTN weather data feeds Storm Impact Analytics, a machine learning application that helps electric utilities more accurately predict the power outages a given weather event might create. “We go beyond the data to give our customers timely, actionable insights for specific storms,” says Doug Chenevert, director of the forecast platform at DTN. “We help them understand how to prepare for potential outages, estimate time to restore power, and plan for restoration response efficiently.”

Further testing with the Amazon EC2 Hpc6a Instances has shown the potential to further compress the rendering time to under 1 hour. “Our team celebrated when a test configuration showed that we could run our global model and generate 1 hour of forecast data in less than 1 minute on AWS,” says Chenevert.

“Working on AWS brings agility to HPC. We can go from idea to production rapidly and scale in a way that’s beneficial to us and our customers.”
Brent Shaw, Chief Weather Architect and Director of Core Content Services, DTN

Achieving Agile HPC and Improving Performance in the Cloud

“Working on AWS brings agility to HPC,” says Shaw. “We can go from idea to production rapidly and scale in a way that’s beneficial to us and our customers.” Part of that agility is the result of using Amazon FSx for Lustre, which provides businesses with fully managed shared storage built on the world’s most popular high-performance file system, and Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. DTN uses these services to store the data that it pulls in from around the world and make it highly available to other parts of its technology infrastructure.

With the combination of AWS services and technical collaboration, DTN has been able to innovate more quickly, improve insights during rapidly evolving weather events, and offer the best operational intelligence possible for its customers. In January 2022 DTN began using Amazon EC2 Hpc6a Instances—which are designed specifically for compute-intensive HPC workloads in Amazon EC2—and effectively doubled its high-resolution global weather modeling capacity to four times daily. The company needed a flexible and powerful management tool to increase throughput for its range of HPC workloads, such as simultaneously running atmospheric- and oceanic wave-modeling spaces as well as handling rapid-refresh updates.
It started using AWS ParallelCluster, an open-source cluster management tool that makes it easier to deploy and manage HPC clusters on AWS.

DTN specializes in the analysis and delivery of timely weather, agricultural, energy, and commodity market information. While most global weather forecasting organizations run models twice daily, DTN wanted to increase the frequency of forecast modeling to provide customers with intelligence that better reflects how changing weather could impact their operations. “In weather forecasting, we need highly elastic and scalable HPC systems to analyze huge amounts of data globally,” says Chenevert.
“Because weather changes rapidly, a system that can ingest data quickly and run our models frequently is critical for delivering near-real-time insights.” DTN chose to use AWS for the capacity, flexibility, and maturity of its HPC capabilities and services. “Ideally, we want to render high-resolution global forecasts hourly,” says Chenevert. “That kind of output is uncharted territory for weather forecasting, but we’re getting closer by using AWS.”

Delivering More Timely Weather Forecasts Using AWS

Since DTN’s successful proof of concept, the company has moved most of its weather data infrastructure to AWS. “The entire global forecasting solution currently runs on AWS,” says Chenevert. This infrastructure supports a massive amount of data input, storage, and processing; the company estimates that it processes petabytes of data per day. Running tightly coupled HPC workloads presents a challenge with intensive parallel processes running across many instances that must communicate with each other at high speeds. “Weather is the original big data problem,” says Shaw. “Each part needs to know what’s happening in the other parts of the system as it’s happening.” DTN is running HPC workloads in the cloud using Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that customers can use to run applications requiring high levels of internode communications at scale.

DTN has a long history of innovation and continues to develop infrastructures that deliver improved, more-timely intelligence for customers. The company is currently exploring using the artificial intelligence (AI) features of AWS while making further improvements to its forecast model processing. By collaborating with AWS and using its services, DTN has made improvements that further differentiate it from other data providers. “We view the accomplishments we’ve made to our global forecast engine on AWS as groundbreaking,” says Chenevert. “It is truly innovative and extremely beneficial to the weather-dependent organizations that we serve.”

About DTN

DTN is a global data, analytics, and technology company that delivers unparalleled operational intelligence to help businesses prosper and organizations improve service delivery in the agriculture, energy, and other weather-dependent industries.

AWS Services Used

Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS.

AWS ParallelCluster is an open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.

Amazon EC2 Hpc6a Instances offer the best price performance for compute-intensive, high performance computing (HPC) workloads in Amazon EC2. Hpc6a instances deliver up to 65% better price performance over comparable, compute-optimized, x86-based instances.

Amazon S3 is an object storage service offering industry-leading scalability, data availability, security, and performance.
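For readers curious what such a cluster definition looks like, AWS ParallelCluster 3 describes clusters in a YAML configuration file. The following is a minimal, illustrative sketch (not DTN's actual configuration) of a Slurm queue of Hpc6a instances with EFA enabled; the Region, subnet IDs, and instance counts are placeholders:

Region: us-east-2
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0example
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: forecast
      ComputeResources:
        - Name: hpc6a
          InstanceType: hpc6a.48xlarge
          MinCount: 0            # scale to zero when no forecasts are running
          MaxCount: 64
          Efa:
            Enabled: true        # fast inter-node communication for tightly coupled jobs
      Networking:
        SubnetIds:
          - subnet-0example
        PlacementGroup:
          Enabled: true          # keep nodes close together to reduce latency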
e-banner Streamlines Its Contact Center Operations and Facilitates a Fully Remote Workforce with Amazon Connect _ e-banner Case Study _ AWS.txt
e-banner Streamlines Its Contact Center Operations and Facilitates a Fully Remote Workforce with Amazon Connect

2023

e-banner is one of Hong Kong’s largest digital printing companies. To optimize its contact center and facilitate its transition to a work-from-home model, the company adopted Amazon Connect as its contact center solution.

Overview | Opportunity | Solution | Outcome | AWS Services Used

100% of customer service staff empowered to work from home
40% cost savings
80% less time to update IVR call flow
Zero downtime, ensuring a reliable contact center experience

Opportunity | Overcoming the Challenges of an On-Premises Contact Center Solution

e-banner, one of Hong Kong’s largest digital printing companies, is committed to providing quick, convenient, high-quality services to its customers. To make the digital printing process even easier, the company offers real-time quotations and a self-service order platform, as well as access to order history and status, 24/7 via its website.

To ensure a seamless online shopping experience, e-banner operates a contact center where customers can get help with any enquiries or issues they encounter. Its contact center solution was hosted on premises, which required customer service staff to work on site. However, the 2020 global pandemic rendered this approach completely unsustainable, as the contact center had to be shut down entirely, leaving customers’ inquiries unattended.

e-banner’s on-premises contact center solution also had a range of additional challenges. Maintaining e-banner’s legacy contact center system required the assistance of a third-party provider, and some features took several months to update. Furthermore, the business faced an annual increase in maintenance costs of 15–20 percent. In addition, e-banner’s existing interactive voice response (IVR) system, used to automate simple customer requests by phone, was tedious to customize. As a result, the business had to allocate 30 customer service team members to attend to basic customer requests that could easily have been automated with the right IVR system in place. Team members had to spend 30 minutes to an hour searching for customer information in e-banner’s client relationship management (CRM) software, which impacted the overall customer experience.

Kenny Lui, head of operations at e-banner, explains, “We sought a cloud-based solution that would empower more than 30 customer service team members to work from home during the pandemic without compromising our reputation for responsiveness and quality customer service.”

During a period of internal research, e-banner discovered Amazon Web Services (AWS) and learned that implementing a fully remote contact center team was easily achievable through Amazon Connect.

Solution | Implementing a Customized Cloud-Based Contact Center

AWS worked closely with AWS Partner Megazone Cloud to transform e-banner’s contact center into a modern cloud-based platform. They met with e-banner’s leadership team to assess the company’s needs. e-banner’s top priority was to ensure customer satisfaction through uninterrupted service. “AWS and Megazone Cloud collaborated to present the full suite of features offered by Amazon Connect to our leadership team and provided a demonstration. Their dedicated support in the process of migrating our contact center to the new platform gave us the confidence and trust to proceed with the implementation,” says Kenny.
With Amazon Connect, e-banner gained a stable, reliable contact center system for seamless customer service during the pandemic and beyond. Now, the company’s customer service staff can work remotely, and the business has the flexibility to onboard new agents from anywhere. AWS and Megazone Cloud implemented a customized Amazon Connect solution in one month from the initial engagement, automating basic customer service requests with a personalized IVR. The Amazon Connect IVR is fully customizable, flexible, and user friendly, which allows staff to easily modify scripts and design the most effective flows. Kenny says, “With Amazon Connect, we no longer rely on third-party vendors to design our IVR flow, which used to take weeks to implement. Instead, we can make changes to our IVR in just a few hours.” e-banner estimates it now saves 80 percent of the time it previously spent on IVR.

“Amazon Connect’s scalability and pay-as-you-go model makes it an ideal choice for businesses of all sizes. The seamless management of backend technical issues by the AWS account team also ensures that we can focus on delivering the best possible customer experience.”
Kenny Lui, Head of Operations, e-banner

Outcome | Delivering a Seamless Customer Service Experience with Remote Staff

Not only has the implementation of Amazon Connect saved e-banner a significant amount of time, but it’s also led to a significant reduction in maintenance and upgrade costs. Kenny adds, “Our initial cost concerns were alleviated with Amazon Connect’s pay-as-you-go pricing model, which ultimately resulted in 40 percent cost savings for e-banner.”

Furthermore, e-banner integrated its CRM and enterprise resource planning (ERP) software with Amazon Connect, streamlining operations for greater efficiency. Consequently, call agents can effortlessly access and retrieve real-time information on the customers they are assisting from a single platform, resulting in further time savings and increased responsiveness.

e-banner’s management team is currently leveraging Amazon Connect’s performance monitoring features to gain valuable insights and collect data on essential customer service metrics, including call times and agent productivity. This data guides the company’s efforts to continually enhance its customer service. Additionally, e-banner can perform artificial intelligence–based sentiment analysis on calls across multiple languages, providing even more valuable insights into its customers. Through sentiment analysis, management can identify the specific issues that customers often express negative feedback about, and subsequently provide training to agents to enhance their communication and resolve these issues more effectively.

By implementing Amazon Connect, e-banner has ensured a seamless online experience for its customers, improved stability and reliability, reduced costs, and facilitated 100 percent remote work for all of its contact center operations.

e-banner looks forward to extending Amazon Connect to its sister company, e-print. The business also intends to adopt Amazon Connect’s omni-channel contact center solutions, which will allow customers to connect with its contact center team via WhatsApp, Facebook, and more. Kenny concludes, “Amazon Connect’s ability to scale and its pay-as-you-go model makes it an ideal choice for businesses of all sizes. The seamless management of backend technical issues by the AWS account team also ensures that we can focus on delivering the best possible customer experience.”

About e-banner

e-banner is a Hong Kong–based digital printing company that specializes in a variety of printing services, including large-format printing, display stands, event backdrops, outdoor banners, and more. The company has been in operation for over a decade and has served hundreds of clients in Hong Kong and the Asia Pacific region.

About Megazone Cloud

As a leading AWS Premier Consulting Partner in APAC, Megazone Cloud has earned the trust of over 5,000 customers ranging from startups to large enterprises.
Apart from delivering cloud contact center support, Megazone Cloud also offers expertise in artificial intelligence, machine learning, serverless architecture, cloud-based media streaming, and AI chatbot technologies.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

With Amazon Connect, you can set up a contact center in minutes that can scale to support millions of customers.
Effectively solve distributed training convergence issues with Amazon SageMaker Hyperband Automatic Model Tuning _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Effectively solve distributed training convergence issues with Amazon SageMaker Hyperband Automatic Model Tuning

by Uri Rosenberg | on 13 JUL 2023 | in Amazon SageMaker, Best Practices, Expert (400) | Permalink

Recent years have shown amazing growth in deep learning neural networks (DNNs). This growth can be seen in more accurate models and even opening new possibilities with generative AI: large language models (LLMs) that synthesize natural language, text-to-image generators, and more. These increased capabilities of DNNs come with the cost of having massive models that require significant computational resources in order to be trained. Distributed training addresses this problem with two techniques: data parallelism and model parallelism. Data parallelism is used to scale the training process over multiple nodes and workers, and model parallelism splits a model and fits it over the designated infrastructure. Amazon SageMaker distributed training jobs enable you, with one click (or one API call), to set up a distributed compute cluster, train a model, save the result to Amazon Simple Storage Service (Amazon S3), and shut down the cluster when complete. Furthermore, SageMaker has continuously innovated in the distributed training space by launching features like heterogeneous clusters and distributed training libraries for data parallelism and model parallelism.

Efficient training in a distributed environment requires adjusting hyperparameters. A common example of good practice when training on multiple GPUs is to multiply the batch (or mini-batch) size by the number of GPUs in order to keep the same batch size per GPU. However, adjusting hyperparameters often impacts model convergence. Therefore, distributed training needs to balance three factors: distribution, hyperparameters, and model accuracy.

In this post, we explore the effect of distributed training on convergence and how to use Amazon SageMaker Automatic Model Tuning to fine-tune model hyperparameters for distributed training using data parallelism. The source code mentioned in this post can be found on the GitHub repository (an m5.xlarge instance is recommended).

Scale out training from a single to distributed environment

Data parallelism is a way to scale the training process to multiple compute resources and achieve faster training time. With data parallelism, data is partitioned among the compute nodes, and each node computes the gradients based on its partition and updates the model. These updates can be done using one or multiple parameter servers in an asynchronous, one-to-many, or all-to-all fashion. Another way can be to use an AllReduce algorithm. For example, in the ring-allreduce algorithm, each node communicates with only two of its neighboring nodes, thereby reducing the overall data transfers. To learn more about parameter servers and ring-allreduce, see Launching TensorFlow distributed training easily with Horovod or Parameter Servers in Amazon SageMaker. With regards to data partitioning, if there are n compute nodes, then each node should get a subset of the data, approximately 1/n in size.

To demonstrate the effect of scaling out training on model convergence, we run two simple experiments:

Train an image classification model using a fully connected-layer DNN with ReLU activation functions using the MXNet and Gluon frameworks. For training data, we used the MNIST dataset of handwritten digits. We used the source provided in the SageMaker example repository.
Train a binary classification model using the SageMaker built-in XGBoost algorithm. We used the direct marketing dataset to predict bank customers who are likely to respond to a specific offer. The source code and steps to reproduce the experiment can be found on the GitHub repo.

Each model training ran twice: on a single instance and distributed over multiple instances. For the DNN distributed training, in order to fully utilize the distributed processors, we multiplied the mini-batch size by the number of instances (four). The following table summarizes the setup and results (values are shown as single instance / distributed).

|                          | Image classification   | Binary classification |
|--------------------------|------------------------|-----------------------|
| Model                    | DNN                    | XGBoost               |
| Instance                 | ml.c4.xlarge           | ml.m5.2xlarge         |
| Data set                 | MNIST (labeled images) | Direct Marketing (tabular, numeric and vectorized categories) |
| Validation metric        | Accuracy               | AUC                   |
| Epochs/Rounds            | 20                     | 150                   |
| Number of instances      | 1 / 4                  | 1 / 3                 |
| Distribution type        | N/A / Parameter server | N/A / AllReduce       |
| Training time (minutes)  | 8 / 3                  | 3 / 1                 |
| Final validation score   | 0.97 / 0.11            | 0.78 / 0.63           |

For both models, the training time was reduced almost linearly by the distribution factor. However, model convergence suffered a significant drop. This behavior is consistent for the two different models, the different compute instances, the different distribution methods, and different data types. So, why did distributing the training process affect model accuracy?

There are a number of theories that try to explain this effect:

When tensor updates are big in size, traffic between workers and the parameter server can get congested. Therefore, asynchronous parameter servers will suffer significantly worse convergence due to delays in weight updates [1].
Increasing batch size can lead to over-fitting and poor generalization, thereby reducing the validation accuracy [2].
When asynchronously updating model parameters, some DNNs might not be using the most recent updated model weights; therefore, they will be calculating gradients based on weights that are a few iterations behind. This leads to weight staleness [3] and can be caused by a number of reasons.
Some hyperparameters are model or optimizer specific. For example, the XGBoost official documentation says that the exact value for the tree_method hyperparameter doesn’t support distributed training, because XGBoost employs row-splitting data distribution whereas the exact tree method works on a sorted column format.
Some researchers proposed that configuring a larger mini-batch may lead to gradients with less stochasticity. This can happen when the loss function contains local minima and saddle points and no change is made to step size, leading to optimization getting stuck in such local minima or saddle points [4].

Optimize for distributed training

Hyperparameter optimization (HPO) is the process of searching and selecting a set of hyperparameters that are optimal for a learning algorithm. SageMaker Automatic Model Tuning (AMT) provides HPO as a managed service by running multiple training jobs on the provided dataset. SageMaker AMT searches the ranges of hyperparameters that you specify and returns the best values, as measured by a metric that you choose. You can use SageMaker AMT with the built-in algorithms or use your custom algorithms and containers.

However, optimizing for distributed training differs from common HPO because instead of launching a single instance per training job, each job actually launches a cluster of instances. This means a greater impact on cost (especially if you consider costly GPU-accelerated instances, which are typical for DNNs).
In addition to AMT limits, you could possibly hit SageMaker account limits for the number of concurrent training instances. Finally, launching clusters can introduce operational overhead due to longer starting times.

SageMaker AMT has specific features to address these issues. Hyperband with early stopping ensures that well-performing hyperparameter configurations are fine-tuned and those that underperform are automatically stopped. This enables efficient use of training time and reduces unnecessary costs. Also, SageMaker AMT fully supports the use of Amazon EC2 Spot Instances, which can optimize the cost of training up to 90% over on-demand instances. With regards to long start times, SageMaker AMT automatically reuses training instances within each tuning job, thereby reducing the average startup time of each training job by 20 times. Additionally, you should follow AMT best practices, such as choosing the relevant hyperparameters, their appropriate ranges and scales, and the best number of concurrent training jobs, and setting a random seed to reproduce results.

In the next section, we see these features in action as we configure, run, and analyze an AMT job using the XGBoost example we discussed earlier.

Configure, run, and analyze a tuning job

As mentioned earlier, the source code can be found on the GitHub repo. In Steps 1–5, we download and prepare the data, create the xgb3 estimator (the distributed XGBoost estimator is set to use three instances), run the training jobs, and observe the results. In this section, we describe how to set up the tuning job for that estimator, assuming you already went through Steps 1–5.

A tuning job computes optimal hyperparameters for the training jobs it launches by using a metric to evaluate performance. You can configure your own metric, which SageMaker will parse based on a regex you configure and emit to stdout, or use the metrics of SageMaker built-in algorithms. In this example, we use the built-in XGBoost objective metric, so we don’t need to configure a regex. To optimize for model convergence, we optimize based on the validation AUC metric:

objective_metric_name = "validation:auc"

We tune seven hyperparameters:

num_round – Number of rounds for boosting during the training.
eta – Step size shrinkage used in updates to prevent overfitting.
alpha – L1 regularization term on weights.
min_child_weight – Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, the building process gives up further partitioning.
max_depth – Maximum depth of a tree.
colsample_bylevel – Subsample ratio of columns for each split, in each level. This subsampling takes place once for every new depth level reached in a tree.
colsample_bytree – Subsample ratio of columns when constructing each tree. For every tree constructed, the subsampling occurs once.

To learn more about XGBoost hyperparameters, see XGBoost Hyperparameters. The following code shows the seven hyperparameters and their ranges:

hyperparameter_ranges = {
    "num_round": IntegerParameter(100, 200),
    "eta": ContinuousParameter(0, 1),
    "min_child_weight": ContinuousParameter(1, 10),
    "alpha": ContinuousParameter(0, 2),
    "max_depth": IntegerParameter(1, 10),
    "colsample_bylevel": ContinuousParameter(0, 1),
    "colsample_bytree": ContinuousParameter(0, 1),
}

Next, we provide the configuration for the Hyperband strategy and the tuner object configuration using the SageMaker SDK.
HyperbandStrategyConfig can use two parameters: max_resource (optional) for the maximum number of iterations to be used for a training job to achieve the objective, and min_resource – the minimum number of iterations to be used by a training job before stopping the training. We use HyperbandStrategyConfig to configure StrategyConfig, which is later used by the tuning job definition. See the following code:

hsc = HyperbandStrategyConfig(max_resource=30, min_resource=1)
sc = StrategyConfig(hyperband_strategy_config=hsc)

Now we create a HyperparameterTuner object, to which we pass the following information:

The XGBoost estimator, set to run with three instances
The objective metric name and definition
Our hyperparameter ranges
Tuning resource configurations, such as the number of training jobs to run in total and how many training jobs can be run in parallel
Hyperband settings (the strategy and configuration we configured in the last step)
Early stopping (early_stopping_type) set to Off

Why do we set early stopping to Off? Training jobs can be stopped early when they are unlikely to improve the objective metric of the hyperparameter tuning job. This can help reduce compute time and avoid overfitting your model. However, Hyperband uses an advanced built-in mechanism to apply early stopping. Therefore, the parameter early_stopping_type must be set to Off when using the Hyperband internal early stopping feature. See the following code:

tuner = HyperparameterTuner(
    xgb3,
    objective_metric_name,
    hyperparameter_ranges,
    max_jobs=30,
    max_parallel_jobs=4,
    strategy="Hyperband",
    early_stopping_type="Off",
    strategy_config=sc,
)

Finally, we start the automatic model tuning job by calling the fit method. If you want to launch the job in an asynchronous fashion, set wait to False. See the following code:

tuner.fit(
    {"train": s3_input_train, "validation": s3_input_validation},
    include_cls_metadata=False,
    wait=True,
)

You can follow the job progress and summary on the SageMaker console. In the navigation pane, under Training, choose Hyperparameter tuning jobs, then choose the relevant tuning job. The following screenshot shows the tuning job with details on the training jobs’ status and performance.

When the tuning job is complete, we can review the results. In the notebook example, we show how to extract results using the SageMaker SDK. First, we examine how the tuning job improved model convergence. You can attach the HyperparameterTuner object using the job name and call the describe method. The method returns a dictionary containing tuning job metadata and results. In the following code, we retrieve the value of the best-performing training job, as measured by our objective metric (validation AUC):

tuner = HyperparameterTuner.attach(tuning_job_name=tuning_job_name)
tuner.describe()["BestTrainingJob"]["FinalHyperParameterTuningJobObjectiveMetric"]["Value"]

The result is 0.78 in AUC on the validation set. That’s a significant improvement over the initial 0.63!
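The conclusion of this post suggests repeating the tuning job across cluster sizes to find the configuration that best balances speed and accuracy. Here is a minimal sketch of such a loop, reusing the tuning objects defined above; container, role, and output_path stand in for the values set up earlier in the notebook:

results = {}
for instance_count in [1, 3, 5]:
    # Rebuild the XGBoost estimator with a different cluster size.
    xgb = sagemaker.estimator.Estimator(
        container,  # XGBoost image URI from the notebook
        role,
        instance_count=instance_count,
        instance_type="ml.m5.2xlarge",
        output_path=output_path,
        sagemaker_session=sagemaker.Session(),
    )
    xgb.set_hyperparameters(objective="binary:logistic")  # static hyperparameters

    tuner = HyperparameterTuner(
        xgb,
        objective_metric_name,
        hyperparameter_ranges,
        max_jobs=30,
        max_parallel_jobs=4,
        strategy="Hyperband",
        early_stopping_type="Off",
        strategy_config=sc,
    )
    tuner.fit(
        {"train": s3_input_train, "validation": s3_input_validation},
        include_cls_metadata=False,
        wait=True,
    )
    best = tuner.describe()["BestTrainingJob"]["FinalHyperParameterTuningJobObjectiveMetric"]["Value"]
    results[instance_count] = best

print(results)  # best validation AUC per cluster size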
Next, let’s see how fast our training job ran. For that, we use the HyperparameterTuningJobAnalytics method in the SDK to fetch results about the tuning job, and read them into a Pandas data frame for analysis and visualization:

tuner_analytics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)
full_df = tuner_analytics.dataframe()
full_df.sort_values(by=["FinalObjectiveValue"], ascending=False).head()

Let’s see the average time a training job took with the Hyperband strategy:

full_df["TrainingElapsedTimeSeconds"].mean()

The average training job took approximately 1 minute. This is consistent with the Hyperband strategy mechanism that stops underperforming training jobs early. In terms of cost, the tuning job charged us for a total of 30 minutes of training time. Without Hyperband early stopping, the total billable training duration was expected to be 90 minutes (30 jobs * 1 minute per job * 3 instances per job). That is three times better in cost savings! Finally, we see that the tuning job ran 30 training jobs and took a total of 12 minutes. That is almost 50% less than the expected time (30 jobs / 4 jobs in parallel * 3 minutes per job).

Conclusion

In this post, we described some observed convergence issues when training models with distributed environments. We saw that SageMaker AMT using Hyperband addressed the main concerns that optimizing data parallel distributed training introduced: convergence (which improved by more than 10%), operational efficiency (the tuning job took 50% less time than a sequential, non-optimized job would have taken), and cost-efficiency (30 vs. the 90 billable minutes of training job time). The following table summarizes our results:

| Improvement metric | No tuning / naive model tuning implementation | SageMaker Hyperband Automatic Model Tuning | Measured improvement |
|---|---|---|---|
| Model quality (measured by validation AUC) | 0.63 | 0.78 | 15% |
| Cost (measured by billable training minutes) | 90 | 30 | 66% |
| Operational efficiency (measured by total running time, minutes) | 24 | 12 | 50% |

In order to fine-tune with regards to scaling (cluster size), you can repeat the tuning job with multiple cluster configurations and compare the results to find the optimal hyperparameters that satisfy speed and model accuracy. We included the steps to achieve this in the last section of the notebook.

References

[1] Lian, Xiangru, et al. “Asynchronous decentralized parallel stochastic gradient descent.” International Conference on Machine Learning. PMLR, 2018.
[2] Keskar, Nitish Shirish, et al. “On large-batch training for deep learning: Generalization gap and sharp minima.” arXiv preprint arXiv:1609.04836 (2016).
[3] Dai, Wei, et al. “Toward understanding the impact of staleness in distributed machine learning.” arXiv preprint arXiv:1810.03264 (2018).
[4] Dauphin, Yann N., et al. “Identifying and attacking the saddle point problem in high-dimensional non-convex optimization.” Advances in neural information processing systems 27 (2014).

About the Author

Uri Rosenberg is the AI & ML Specialist Technical Manager for Europe, Middle East, and Africa. Based out of Israel, Uri works to empower enterprise customers to design, build, and operate ML workloads at scale. In his spare time, he enjoys cycling, hiking, and complaining about data preparation.

TAGS: AI/ML, Amazon SageMaker
Effortlessly Summarize Phone Conversations with Amazon Chime SDK Call Analytics_ Step-by-Step Guide _ Business Productivity.txt
Business Productivity

Effortlessly Summarize Phone Conversations with Amazon Chime SDK Call Analytics: Step-by-Step Guide

by Jillian Munro, Court Schuett, and Takeshi Kobayashi | on 26 JUN 2023 | in Amazon Chime SDK, Amazon DynamoDB, Amazon EventBridge, Amazon SageMaker, Amazon Simple Storage Service (S3), Amazon Transcribe, AWS Lambda, Business Productivity, Customer Solutions, Kinesis Data Streams, Technical How-to

Introduction

The Amazon Chime SDK Call Analytics Real-Time Summarizer is a demo solution that summarizes, in real time, phone conversations carried over an Amazon Chime SDK Voice Connector. It uses Amazon Chime SDK call analytics to capture conversation transcripts, which are then used to generate a summary of the conversation with Amazon SageMaker. In this blog post, we discuss how to use Amazon Chime SDK call analytics to capture conversation transcriptions and how to use a SageMaker endpoint to generate a summary as soon as the phone conversation is completed. The solution is versatile and can be applied in a variety of scenarios.

Use Cases

Legal Services: Law firms often deal with a high volume of phone calls, and it can be time-consuming for lawyers and legal professionals to manually review and summarize each call. With Amazon Chime SDK Call Analytics, the automatic summarization feature can quickly generate transcripts and summaries of client consultations, court proceedings, or legal negotiations. This enables lawyers to focus on analyzing the content and key points of the calls rather than spending valuable time transcribing them.

Call Centers: Within call centers, customer support representatives can use the Amazon Chime SDK Call Analytics real-time summarizer to analyze support calls as they occur, producing a report of the call within seconds. A summary of the phone call, including a transcript, can be generated for both the representative and the customer.

Healthcare: In the healthcare industry, providers who use telehealth solutions can take advantage of the Amazon Chime SDK Call Analytics Real-Time Summarizer to record SOAP notes for patients during the call.

Financial Services: Financial institutions, including banks, insurance companies, and investment firms, handle numerous client interactions over the phone. Automatic call summarization can assist in compliance monitoring by analyzing and summarizing these calls and flagging potential regulatory or compliance issues. It helps ensure adherence to industry regulations and maintain a high standard of customer service.

Overview

Amazon Chime SDK call analytics is a collection of machine learning (ML) driven capabilities that enable a customer to record, transcribe, and analyze their communication sessions in real time. Amazon Chime SDK call analytics offers different configuration options, such as Amazon Transcribe or Amazon Transcribe Call Analytics, to create call transcripts, detect and redact PII, generate call summaries, and derive insights from sentiment and audio characteristics (non-talk time, talk speed, loudness, interruptions, and voice tone).
Amazon Chime SDK call analytics can record calls and call metadata to Amazon Simple Storage Service (Amazon S3) and send real-time alerts via Amazon EventBridge when a rule matches. This demo offers a webpage that displays real-time transcriptions of phone conversations between agents and customers. Once the conversation is completed, a summarization of the conversation is generated and displayed in the upper section of the page.

Technical Walkthrough

Architecture diagram of the Amazon Chime SDK Call Analytics Real-Time Summarizer solution

Getting the Phone System Set Up

The Amazon Chime SDK Voice Connector is a pay-as-you-go service that provides Session Initiation Protocol (SIP) trunking for your existing phone system. To simplify the phone system setup, this demo deploys an Asterisk PBX server on an EC2 instance. The Amazon Chime SDK Voice Connector is also deployed and assigned a phone number; any incoming calls to this number are directed to the Asterisk PBX server.

Capturing Transcripts

To generate a summary quickly, it is necessary to capture real-time transcriptions from Amazon Transcribe through Amazon Chime SDK call analytics. To achieve this, we take the output of the Amazon Chime SDK call analytics media insights pipeline and write the transcriptions to an Amazon DynamoDB table. This is accomplished by processing the output of the Amazon Kinesis Data Stream with an AWS Lambda function:

try {
  const putCommand = new PutItemCommand({
    TableName: process.env.TRANSCRIBE_TABLE,
    Item: {
      transactionId: { S: metadata.transactionId },
      timestamp: { N: epochTime },
      channelId: { S: postData.TranscriptEvent.ChannelId },
      startTime: { N: postData.TranscriptEvent.StartTime.toString() },
      endTime: { N: postData.TranscriptEvent.EndTime.toString() },
      transcript: {
        S: postData.TranscriptEvent.Alternatives[0].Transcript,
      },
    },
  });
  await dynamoDBClient.send(putCommand);
} catch (error) {
  console.error('Failed to insert record into DynamoDB:', error);
}

Simultaneously, we write this data to a WebSocket API through Amazon API Gateway, allowing for near real-time delivery to the client for the duration of the call.

Post-Call Summarization Processing

Upon completion of the call, a notification event is sent to EventBridge. Upon receipt of this event, we perform the following steps (a minimal sketch of this flow follows below):

- Query the DynamoDB table
- Parse the results
- Create a prompt
- Send the prompt to our SageMaker endpoint
- Send the response to our WebSocket API

Because we have been capturing the transcription results in real time, reading, parsing, and making a request to SageMaker can be completed rapidly. This enables us to generate a summary of the call within seconds, rather than minutes.
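The demo itself implements these steps in TypeScript; the following is a minimal Python (boto3) sketch of the same flow, shown here only to make the sequence concrete. The table and endpoint environment variable names are assumptions, and the exact request body depends on the model deployed behind the endpoint:

import json
import os

import boto3

dynamodb = boto3.client("dynamodb")
sagemaker_runtime = boto3.client("sagemaker-runtime")


def summarize_call(transaction_id: str) -> str:
    # 1. Query the DynamoDB table for all transcript fragments of this call.
    response = dynamodb.query(
        TableName=os.environ["TRANSCRIBE_TABLE"],
        KeyConditionExpression="transactionId = :tid",
        ExpressionAttributeValues={":tid": {"S": transaction_id}},
        ScanIndexForward=True,  # oldest fragment first
    )

    # 2. Parse the results into a single transcript string.
    transcript = " ".join(item["transcript"]["S"] for item in response["Items"])

    # 3. Create a prompt for the summarization model.
    prompt = f"Summarize the following phone conversation:\n\n{transcript}"

    # 4. Send the prompt to the SageMaker endpoint
    #    (payload shape is model-specific; this one is illustrative).
    result = sagemaker_runtime.invoke_endpoint(
        EndpointName=os.environ["SUMMARIZER_ENDPOINT"],
        ContentType="application/json",
        Body=json.dumps({"prompt": prompt}),
    )
    return result["Body"].read().decode("utf-8")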
Prerequisites

To implement the solution outlined in this blog post, you will need:

- yarn – https://yarnpkg.com/getting-started/install
- Docker Desktop – https://www.docker.com/products/docker-desktop/
- An AWS account
- A basic understanding of telephony
- Access to Amazon SageMaker foundation models (approval could take a few days)
- A subscription to the Cohere Generate Model – Command-Light in AWS Marketplace

Deploy

We have provided a sample on GitHub that is easy to deploy and test in your own environment. Once you have confirmed that all prerequisites are met, clone the repository to your local environment and run 'yarn launch' from the command line to get started.

Upon successful deployment, the output will provide you with the DistributionUrl and PhoneNumber information. Alternatively, you can find this information on the CloudFormation page in the AWS Console. This information will be required for testing.

Testing

To test this demo, go to the CloudFront distribution webpage. If 'Endpoint Status' shows as 'Endpoint disabled', click 'Start Endpoint' to enable the SageMaker endpoint. This process may take a few minutes to complete. Once the 'Endpoint Status' shows as 'InService', you are ready to begin testing.

Attention: This deployment includes a SageMaker endpoint, which incurs additional charges while it is running. We recommend stopping the SageMaker endpoint by clicking the 'Stop Endpoint' button once you are finished experimenting, to avoid unexpected charges. See Amazon SageMaker Pricing for relevant costs.

Dial the provided phone number; upon answering, a WAV file will be played, simulating the response of a sample agent.

Clean up

Once you have completed experimenting with the solution, you can clean up your resources by running 'yarn cdk destroy'. This will remove all resources that were created during the deployment of the solution.

Conclusion

This blog post provides a detailed explanation of the deployment steps required to run the Amazon Chime SDK Call Analytics Real-Time Summarizer, as well as the technical implementation of this simple solution. The Amazon Chime SDK Call Analytics Real-Time Summarizer provides an instant summary of phone conversations, opening up new possibilities for post-conversation reporting and analysis. We recommend using this solution as a starting point for your projects and taking further steps to differentiate the features of your service.

Learn More

- Amazon Chime SDK in the AWS Console
- Amazon Chime SDK launches call analytics
- GitHub: amazon-chime-sdk-call-analytics-real-time-summarizer
- Using Amazon Chime SDK call analytics
- Using the call analytics workflows
- Blog: Amazon Chime SDK Call Analytics: Real-Time Voice Tone Analysis and Speaker Search

Jillian Munro is a Program Manager for the Amazon Chime SDK, focused on Amazon Chime SDK education and awareness.

Court Schuett is the Lead Evangelist for the Amazon Chime SDK, with a background in telephony, who loves to build things that build things. Court is focused on teaching developers and non-developers alike how to build with AWS.

Takeshi Kobayashi is a Senior Chime Specialist Solutions Architect at AWS, based in Seattle. He is passionate about building web media applications with AWS services.
Empowering Customers to Take an Active Role in the Energy Transition Using AWS Serverless Services with Iberdrola _ Case Study _ AWS.txt
Empowering Customers to Take an Active Role in the Energy Transition Using AWS Serverless Services with Iberdrola

Customer Stories / Energy - Power & Utilities (2023)

Learn how global energy company Iberdrola in the power and utilities industry supports energy efficiency using AWS serverless services.

Benefits of AWS:
- Projected 10–30% reduction in smart device energy consumption
- Reduces customers' carbon footprint through energy consumption optimization
- Ability to scale to connect millions of devices
- Adjusts customer energy consumption based on energy price or source
- Provides flexibility services and introduces more renewable energy to the grid

About Iberdrola: Based in Spain, Iberdrola is a global electric utility company that connects 40 million customers around the world in countries like Portugal, Italy, France, the United Kingdom, the United States, Brazil, Mexico, and Australia. Iberdrola is one of the largest global producers of renewable energy and manages businesses for network distribution and retail in the energy industry.

Opportunity | Developing a Smart Devices Monitoring Platform to Help Customers Save Energy

Global energy company Iberdrola facilitates its customers' electrification journey with a portfolio of Smart Solutions and understands the sustainable impact of managing energy consumption for devices like electric vehicle chargers, heat pumps, solar panels, and water heaters. To further support sustainability, Iberdrola wanted to develop a scalable, high-performing, and cost-efficient platform for consumers.

In 2021, the company looked to Amazon Web Services (AWS) to build the prototype for its Advanced Smart Assistant (ASA) platform using services like AWS Lambda, a serverless, event-driven compute service for running code without provisioning or managing servers. The ASA platform connects to any of Iberdrola's Smart Solutions and controls them autonomously to reduce a customer's energy bills and carbon footprint while maintaining comfort. The ASA platform also offers advanced insights and recommendations to help customers progress in the energy transition and in their efficiency.

Iberdrola had a vision to create the ASA platform to empower customers to connect remotely to smart home devices, monitor them, and determine whether they need to take action to improve power consumption. To evaluate how the platform would perform and scale using AWS services, Iberdrola conducted a proof of concept for comparison. In the first phase, Iberdrola worked alongside the AWS prototyping team in late 2021, collaborating with teams in multiple countries to connect test devices. When the proof-of-concept testing using AWS services was successful, Iberdrola moved on to the second phase a few months later to do industrial development for the ASA platform. "When we compared AWS to our first tests, it was clear that using AWS was going to be much better, more scalable, and more cost effective," says Carlos Pascual, head of connected energy services at Iberdrola. "It wasn't a difficult decision."
Solution | Using AWS Lambda to Support a Projected 10–30% Reduction in Smart Device Energy Consumption for Iberdrola Customers

Iberdrola's main goal is to help customers save energy and reduce their carbon footprint with the ASA platform. It focuses on large devices that can be flexibly managed to achieve the greatest impact on energy consumption. For example, when solar panels produce a large amount of energy in the middle of the day, Iberdrola's ASA platform can intelligently increase the household energy consumption to perform tasks like charging an electric vehicle instead of routing that energy to the grid, which is less cost effective for customers than consuming the energy they produce. The platform can also increase consumption when the energy source is renewable, delaying nonurgent energy consumption until the greenest hours of the day. This advanced control can also benefit the grid, providing flexibility services and introducing more renewable energy into the system. "Our solution is crucial, especially as the industry is trying to reduce the dependence on fossil fuels," says Pascual. "Using AWS services, our energy management platform helps every customer figure out how to consume energy more efficiently and more sustainably."

Iberdrola Innovation Middle East, the global digital solutions development company of Iberdrola, is in charge of the algorithms, artificial intelligence, machine learning, and logic rules that help the platform make meaningful recommendations and automated actions to minimize energy, cost, and emissions. For this part of the platform, Iberdrola Innovation Middle East uses Amazon SageMaker, which provides fully managed infrastructure, tools, and workflows for building, training, and deploying machine learning models for virtually any use case. "Our solution optimally reschedules loads, such as electrical vehicle chargers, heat pumps, or water heaters, without needing additional hardware in our clients' homes," says Santiago Bañales, managing director at Iberdrola Innovation Middle East. "It's a 100 percent cloud solution."

Another use case for the ASA platform is adjusting energy consumption based on fluctuating energy prices. With recommendation models trained and deployed using Amazon SageMaker, the ASA platform can, for example, heat water in a customer's water tank when energy is cheapest instead of heating it on demand during peak hours. "It's not simple for customers to optimize energy consumption because they need to understand their devices' energy needs as well as changing energy prices, but we're developing different variables in the platform to handle complex energy optimization," says Pascual.
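As a purely illustrative aside (this is not Iberdrola's actual algorithm), the core idea of price-based load shifting can be as simple as picking the cheapest contiguous window in a day-ahead price forecast:

def cheapest_window(prices_by_hour, hours_needed):
    """Pick the start hour of the contiguous window with the lowest total price.

    prices_by_hour: list of (hour, price_eur_per_kwh) tuples, e.g. day-ahead prices.
    hours_needed: number of consecutive hours the appliance must run.
    """
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices_by_hour) - hours_needed + 1):
        cost = sum(price for _, price in prices_by_hour[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return prices_by_hour[best_start][0], best_cost

# Example: run a 2-hour water-heating cycle in the cheapest 2-hour window.
prices = list(enumerate([0.30, 0.28, 0.12, 0.10, 0.25, 0.31]))
print(cheapest_window(prices, hours_needed=2))  # -> (2, ~0.22)

A production system would, of course, also account for device constraints, comfort settings, and the renewable share of the energy mix, as the article describes.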
The company uses AWS Lambda to run code for the platform without provisioning or managing infrastructure, helping Iberdrola run the solution efficiently, scale as needed, and reduce its own carbon footprint. Iberdrola also increases efficiency using serverless and scalable AWS Step Functions, a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning pipelines.

During the prototyping comparison, Iberdrola specifically looked for scalability because the company anticipates needing to manage millions of devices as more customers use the platform over time. To store all the data coming from its ASA platform, Iberdrola uses Amazon DynamoDB, a fast, flexible NoSQL database service that delivers single-digit millisecond performance at virtually any scale.

Iberdrola anticipates that its ASA platform will help customers reduce energy consumption by a projected 10–30 percent, depending on the devices involved. This outcome will lower costs for customers and reduce power consumption across the grid.

Outcome | Expanding the ASA Platform Using Scalable AWS Services

Iberdrola will launch the commercial product that customers can use to manage the ASA platform for their own devices in Spain in 2023. The company plans to make the product available in the rest of its geographies as quickly as possible, using the scalability of the product and the global footprint of AWS to achieve additional cost efficiencies. Iberdrola's platform will support residential customers first, but the company plans to support businesses in the future to help manage resources like buildings and fleets. "Scalability is key," says Pascual. "When we made the prototype using AWS services, we tested to see if we could connect to millions of devices because that's the volume we anticipate in the next few years."
ENGIE Rapidly Migrates Assets and Accounts Easing Divestiture Using AWS _ Engie Case Study _ AWS.txt
ENGIE Rapidly Migrates Assets and Accounts, Easing Divestiture Using AWS

Customer Stories / Energy - Power & Utilities (2022)

Learn how ENGIE in the energy industry seamlessly transferred IT assets using AWS Cloud Operations.

Benefits of AWS:
- Millions of dollars' worth of workloads migrated from 70 AWS accounts in 8 months
- 95% of workloads transferred in 2 months
- Virtually no downtime or service interruptions to its production environment
- Millions of dollars saved annually by maintaining its AWS Savings Plan
- Centralized its financial operations to reduce compute costs

About ENGIE: ENGIE is a global reference in low-carbon energy and services. The group is committed to accelerating the transition toward a carbon-neutral world through reduced energy consumption and more environmentally friendly solutions.

Opportunity | Preparing for a Large-Scale Divestiture

Headquartered in La Défense, France, ENGIE's purpose is to accelerate the transition toward a carbon-neutral economy through reduced energy consumption and environmentally friendly solutions. This purpose brings together the company, its 170,000 employees, its clients, and its shareholders, and builds on its key areas of business—gas, renewable energy, and services—to offer competitive solutions. Globally, the group generated €57.9 billion in 2021.

To support its purpose, ENGIE decided to form a separate division that would absorb the majority of its services-led activities. In July 2021, the company created EQUANS, a global multitechnical services leader. EQUANS employs 74,000 people in 17 countries and generates an annual turnover of over €12 billion. ENGIE announced the sale of its EQUANS division in 2021, a major step forward in the group's strategic plan to focus on accelerating investment in its core activities, notably energy renewables, and to achieve net-zero carbon emissions by 2045.

However, creating this autonomous entity required ENGIE, which had been running on AWS since 2017, to transfer thousands of virtual machines and AWS-managed services into a separate environment without impacting its operations. Originally, the company had started working on AWS to modernize its IT systems, and it had adopted AWS Organizations, which gives companies the ability to centrally manage and govern their environments as they scale their AWS resources, and AWS Service Catalog, which helps organizations create and manage catalogs of IT services that are approved for use on AWS. These services gave its teams more flexibility in their resource management. "Using AWS Organizations and a multi-account strategy, our IT teams can deploy and operate workloads at a local level in a controlled environment," says Frédéric Poncin, head of cloud center of excellence at ENGIE. "We quickly grew from two AWS accounts to five hundred AWS accounts under this model."
Solution | Transferring IT Assets Seamlessly Using AWS Cloud Operations

This divestiture meant that the company needed to efficiently migrate thousands of workloads to a separate and secure environment without impacting its production. ENGIE had already widely adopted Amazon Web Services (AWS), and at the time, there were several large-scale, ongoing cloud migration projects that the company wanted to avoid impacting. To simplify the management of its workloads, ENGIE uses AWS Organizations. In 8 months, the energy group completed a complex divestiture by migrating workloads from 70 AWS accounts, including multiple production systems, with minimal effort compared with a traditional data center migration project.

ENGIE was already operating a secure, multi-account AWS environment with an account factory based on AWS best practices for AWS Organizations, AWS Service Catalog, and AWS Cloud Operations, which helps businesses operate securely and safely in the cloud at scale. Under this model, the company can support its local IT teams in adopting a cloud-first approach, addressing business needs, and centralizing its financial operations to reduce compute costs and align with its security standards.

ENGIE duplicated this setup for EQUANS and, with its baseline environment configured with security, networking, governance, and identity and access management, ENGIE could securely transfer existing accounts to a new, separate environment. First, ENGIE manually reassigned a small batch of its accounts using AWS Organizations to see if that would have an effect on its operations. "It was a new approach," says Frédéric Poncin. "We did not have to migrate workloads. We did not have to migrate data. We just reassigned the ownership of our AWS accounts to the new organization and fixed a few technical dependencies." Throughout the project, ENGIE experienced virtually no downtime or service interruptions to its production workloads.

In November 2021, ENGIE accelerated the project by automating the transfer of its assets using AWS Organizations. By automating this task, the company could complete an AWS account transfer in minutes. Within 2 months, ENGIE migrated over 95 percent of its accounts while keeping its IT team free to focus on other projects. In total, ENGIE migrated several million dollars' worth of workloads across 70 AWS accounts in 8 months and avoided a costly and risky workload migration project that would have required a large-scale mobilization of its IT teams. "It was a smooth ride," says Frédéric Poncin. "We removed the burden from our IT team that was already loaded with other tasks and divestiture activities."
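The case study does not publish ENGIE's automation code, but the basic mechanics of moving an account between organizations use the AWS Organizations handshake APIs. The following is a simplified, illustrative Python (boto3) sketch; account IDs and profile names are placeholders, and real automation must also handle permissions, billing, and dependency cleanup:

import boto3

# From the new organization's management account: invite the member account.
new_org = boto3.client("organizations")
handshake = new_org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"},  # placeholder account ID
    Notes="Transfer to the new organization",
)

# From the member account itself: leave the old organization first
# (an account can belong to only one organization at a time)...
member = boto3.Session(profile_name="member-account").client("organizations")
member.leave_organization()

# ...then accept the pending invitation from the new organization.
member.accept_handshake(HandshakeId=handshake["Handshake"]["Id"])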
To facilitate its divestiture, ENGIE also engaged AWS Enterprise Support, a consultative guidance service whose main focus is helping customers achieve their outcomes and find success in the cloud. Using AWS Enterprise Support, ENGIE received access to a dedicated technical account manager, who verifies technical procedures, advises on automation opportunities, and coordinates efforts between ENGIE and AWS. Through this collaboration, ENGIE aligned the scheduling of its new project with the AWS Enterprise Support team in case it needed technical support along the way. "AWS Enterprise Support helps us sleep better at night," says Frédéric Poncin. "We know that if something happens, we can call them, and they will respond."

ENGIE also worked alongside AWS Enterprise Support to maintain the benefits of using Savings Plans, a flexible pricing model offering lower prices compared with On-Demand pricing in exchange for a specific usage commitment over a 1- or 3-year period. As a longtime user of AWS, ENGIE had committed to an AWS Savings Plan years prior, which has helped it save millions of dollars each year. "We had questions about whether we could keep our commitment and cost savings as we split part of our organization," says Frédéric Poncin. "By collaborating with AWS Enterprise Support, we could reassign part of our long-term commitment to the new organization, which brings in significant cost savings for both ENGIE and EQUANS."

Outcome | Supporting a Greener Future on AWS

Since completing this project, EQUANS has been handed over to a new team and is operated autonomously. As a result, ENGIE can allocate its resources toward its ambitious net-zero carbon strategy, which it plans to fulfill by 2045. This decarbonization strategy includes increasing its renewable hydrogen capacity to 4 GW and its overall renewable energy capacity to 80 GW by 2030.

As the company moves closer to achieving its goals, it will continue to rely on AWS for scalable and cost-effective cloud services. "Our multi-account strategy using AWS Organizations has been key to our success when facing both acquisitions and divestitures," says Frédéric Poncin. "This strategy has given us the agility that we need to accelerate our organizational transformation."
Enhancing customer experience using Amazon CloudFront with Zalando _ Case Study _ AWS.txt
Zalando Enhances Customer Experience Using Amazon CloudFront

Customer Stories / Retail & Wholesale (2022)

Benefits of AWS:
- 5 billion images delivered per day
- 99.5% cache hit ratios achieved
- 100,000 transactions handled per second, on average
- 3x reduction in requests to nonoptimized images
- Increased developer visibility and control

About Zalando: Focused on fashion and lifestyle, Zalando is an online retailer based in Berlin, Germany. Founded in 2008, it connects customers, brands, and partners across 25 European countries.

Opportunity | Increasing Developer Ownership to Support Growth

Zalando, a leading fashion, beauty, and lifestyle-focused online platform based in Berlin, Germany, was looking to optimize its services in the face of rapid growth. Zalando connects customers to brands and products across 25 European markets and serves more than 49 million active customers. A key component of Zalando's online customer experience is the use of rich media content across its web and app properties. The solution Zalando had in place to manage, transform, and deliver images was not providing enough developer visibility or control—both vital factors to support continued growth and a differentiated customer experience.

As a result of significant growth, Zalando outgrew its previous image management solution, which offered limited flexibility in the configuration capabilities available to Zalando's engineering and product teams. Additionally, operational insights were sparse, creating a lack of visibility into how efficiently the service was functioning and what optimizations could be made. This impacted Zalando's ability to adapt and optimize its digital storefronts. The lack of detailed reporting around image transformation presented challenges in delivering a consistent customer experience during peak seasonal events.

In August 2020, Zalando decided to migrate its media management and delivery solution to Amazon Web Services (AWS) using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. Zalando used CloudFront to improve scalability, provide enhanced online shopping experiences, and improve developer observability. "We looked at Amazon CloudFront as an extension of our existing AWS product portfolio," says Przemek Czarnecki, vice president of software engineering at Zalando. "Migrating to AWS simplified the way that we develop and integrate products."

Solution | Migrating to the AWS Edge

Zalando migrated quickly and flexibly. By working alongside the Enterprise Support, Service Specialists, and Service Teams at AWS, Zalando planned the migration timeline to avoid overlaps with customer campaigns and market events. Zalando's migration to CloudFront started in August 2020 and lasted 4 months, pausing in preparation for Cyber Week, a busy time of year for online retailers. The first phases of the migration started with small groups of customers so that the company could detect any migration improvement opportunities without significantly affecting Zalando customers. Zalando migrated over 20 websites and apps during this process, for a combined 26.93 PB of data. The peak traffic handled by CloudFront has regularly exceeded 100,000 requests per second.

Initially, Zalando used Lambda@Edge, a feature of CloudFront that lets customers run code closer to the users of their applications to improve performance and reduce latency. Zalando used Lambda@Edge to run image-width normalization and to rewrite URLs based on the viewer device type (an illustrative sketch of width normalization appears below). Following the release of CloudFront Functions, a complementary edge compute runtime deployed within CloudFront edge locations and built for short-running, latency-sensitive JavaScript code, Zalando switched to CloudFront Functions to further reduce costs and optimize the performance of its solution. Through the direct relationship between Zalando and the CloudFront service team, Zalando customized the behavior of its website and mobile apps. With Zalando's prelaunch hands-on access to CloudFront Functions, the development team further optimized the image-delivery solution. "I was very happy to be supported on multiple levels during multiple stages," says Emil Varga, lead software engineer at Zalando. "Starting very early, when we were investigating proofs of concept, there was regular communication. We were sending code to check for validity and for hurdles in our way."
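The image-width normalization mentioned above is, at its core, a cache-key optimization: snapping arbitrary requested widths to a small set of canonical widths so that many distinct URLs resolve to the same cached object. The following Python sketch is purely illustrative (Zalando's actual edge code runs as JavaScript in CloudFront Functions, and these breakpoints are assumptions):

CANONICAL_WIDTHS = [320, 480, 640, 960, 1280, 1920]  # assumed breakpoints

def normalize_width(requested: int) -> int:
    """Snap a requested image width to the next canonical width up."""
    for width in CANONICAL_WIDTHS:
        if requested <= width:
            return width
    return CANONICAL_WIDTHS[-1]

# Many distinct requested widths now map to one cached rendition:
assert normalize_width(500) == 640
assert normalize_width(2400) == 1920

Because the normalized width becomes part of the cache key, this kind of bucketing directly raises the cache hit ratio and cuts requests for nonoptimized renditions.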
In May 2021, Zalando began to use CloudFront Functions in production. "The big change with CloudFront Functions is smooth configuration," says Varga. "It scales on demand and makes it simpler to deploy and reliably revert tasks on an operational level and for everyday development." As the company began to roll out the new solution across its web properties, Zalando quickly overcame obstacles. "When adjustments were needed, we were able to roll back very quickly, making changes before real downtime could occur, which was key," says Varga. Today, Zalando uses both CloudFront Functions and Lambda@Edge for different use cases. Having multiple layers of edge compute provides more flexibility, visibility, and control for its developers and a better overall experience for customers. This helps Zalando react with agility and better serve both customers and the business.

Zalando used CloudFront for its programmability and flexibility, both essential to scaling operations to match increases in customer demand. After the migration, Zalando has been achieving cache hit ratios of 99.5 percent, and its new image-delivery solution serves around five billion images daily. CloudFront and CloudFront Functions were fully implemented prior to Cyber Week 2021. "I was responsible for engineering aspects of Cyber Week in 2021, and there was not a single issue related to Amazon CloudFront," says Czarnecki. With around 250 million online orders in 2021, the scale and efficiency of Zalando's solution on CloudFront played a key role in delivering an excellent customer experience. Zalando has implemented further optimizations, leading to a three times reduction in requests to nonoptimized images on the home screens of both the company's mobile and web applications.
Teams across Zalando have switched to using the pipeline built on CloudFront for other types of content because of its enhanced performance and flexibility of use.

Zalando migrated to CloudFront to improve the media management and delivery architecture that drives the shopper experience so that it could provide better services for its customers. Support from the AWS team meant Zalando could conduct a smooth migration, resulting in substantial benefits. "The business benefits of using Amazon CloudFront are the operational flexibility as well as the ability to monitor the health of the solution and experiment and reverse changes quickly," says Czarnecki. "We can react to incidents in near real time without waiting for support to be called in. This operational flexibility is a big, big benefit for us."

Outcome | Driving Future Customer Engagement

Zalando wants to continue to innovate the management and manipulation of rich media content using AWS. It is planning to encourage customer engagement by building an interactive ecommerce solution using AWS Elemental MediaConvert, a file-based video transcoding service with broadcast-grade features.
EPAM Systems.txt
EPAM Gains 40% Price-Performance Improvement for a Cloud Management App With AWS Graviton

Customer Stories / Financial Services / EMEA (2023)

Benefits of AWS:
- 40% price-performance improvement
- 10% performance improvement
- Ability to scale
- Better customer experience

About EPAM: EPAM provides digital transformation and product engineering services to help businesses plan, build, and run their IT systems. Headquartered in the US, the company operates globally in more than 50 countries. EPAM's more than 59,000 staff help businesses reimagine themselves with an eye to today's challenges and the digital future. Its software engineering heritage, combined with its strategic business and innovation consulting, generated $4.82 billion in revenues in 2022, according to its annual report.

Opportunity | Choosing AWS Graviton for Scalability and Faster Performance

EPAM was tasked by Maestro Cloud Control (MCC) with migrating its Maestro hybrid cloud management platform to AWS Graviton within a pre-existing enterprise infrastructure. The aim of the project was to reduce Maestro's ongoing R&D cost and improve its performance. This was achieved: processing became 10 percent faster while using fewer resources.

The Maestro platform and its companion app, Maestro Databased (MD), comprise a modern solution designed for effective hybrid and multi-cloud infrastructure management, monitoring, analytics, FinOps enablement, and other business-critical operations. They are designed for use in large enterprises, where top performance is needed from every component. MCC uses EPAM's software engineering expertise to create and evolve the Maestro platform, which provides automated control over virtual resource creation, updates, monitoring, analytics, billing, charge-back, compliance, security threat detection, and usage optimization recommendations. The platform's tasks include provisioning infrastructure, meeting security and compliance requirements in line with regional and industry standards, managing resources and permissions, and auditing events.

One of MCC's enterprise customers needed more processing power for its implementation of Maestro, and EPAM's Cloud Native Research and Development Center was invited to participate in the renovation of the platform's installation on the customer's side. The EPAM team decided to migrate Maestro to a solution based on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, and AWS Graviton, processors designed by AWS to deliver the best price performance for cloud workloads running in Amazon EC2. The migration was one of the first cases of AWS Graviton being adopted at an organization level. "The results of the tests and proof-of-concept (PoC) looked great," says Anton Isaiev, lead systems engineer at EPAM, who was engaged in the migration project. "We migrated applications and cloud-native solutions to AWS Graviton to take advantage of performance and scalability benefits."

To meet user needs and set the product up for future growth, EPAM redeveloped Maestro into a cloud-native application on Amazon Web Services (AWS). The resulting platform relies significantly on AWS Graviton.
This means that the customers’ technical managers, DevOps teams, and engineers experience speedy performance for all data-management tasks. Opportunity | Choosing AWS Graviton for Scalability and Faster Performance Amazon API Gateway Amazon EC2 R6g instances 40% improvement AWS Graviton Processor EPAM Gains 40% Price-Performance Improvement for a Cloud Management App With AWS Graviton improvement EPAM was tasked, by Maestro Cloud Control, to migrate it’s Maestro hybrid cloud management platform to AWS Graviton within a pre-existing enterprise infrastructure. The aim of the project was to reduce Maestro’s ongoing R&D cost and improve its performance. This was achieved by increasing processing time by 10 percent, using less resources to achieve this higher level of performance. Maestro Cloud Control (MCC) uses EPAM’s software engineering expertise to create and evolve the Maestro platform. The platform provides automated control over virtual resource creation, updates, monitoring, analytics, billing, charge-back, compliance, security threat detection, and usage optimization recommendations. ipsum et velit consectetur 中文 (繁體) Bahasa Indonesia Better experience Customer Stories / Financial Services / EMEA Ρусский Enabling the best price performance in Amazon EC2. Learn more » عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.   Overview AWS Lambda To meet user needs and set the product up for future growth, EPAM redeveloped Maestro into a cloud-native application on Amazon Web Services (AWS). The resulting platform relies significantly on AWS Graviton, processors designed by AWS to deliver the best price-performance for cloud workloads running in Amazon EC2. About Company Get Started , a serverless, event-driven compute service, and AWS Customer Success Stories Türkçe English , a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Both of these services provide customers with a high-quality experience for accessing their unique applications. Anton Isaiev Team Lead of Level 3 Support for Applications, EPAM Run code without thinking about servers or clusters. Learn more » Secure and resizable compute capacity for virtually any workload. Maestro’s move to AWS made sense for the customer, both in terms of capabilities and operational benefits. “The main advantage is the ratio of price-to-value,” says Isaiev. “That was a real winner, with an improvement of about 40 percent over our previous setup. The project team now has the necessary capacity to handle large workloads, and extra resources to increase its staff productivity—all for a reasonable price. And the implementation works beautifully. There is a wide range of specialized instances to meet its needs, and the amount of compute power it can use can be scaled without limit.” Deutsch The cloud-native Maestro on AWS performs faster than its previous on-premises version, and delivers greater value to the business. “The customer gets much more for the same price, using AWS Graviton,” says Isaiev. “It has faster performance for all compute tasks and greater access to resources. 
Outcome | Using AWS to Solve Its Customers' Problems

EPAM is now applying the lessons it learned working on the Maestro platform migration to help its other customers do more. "Our customers look to us to solve their problems," says Isaiev. "Working on this project and getting support from AWS, we've improved our cloud skills and know so much more than we used to. It was like an intense training program, and we can now share that knowledge with our customers."