Dataset Preview
Viewer
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'23andMe could migrate its existing environment with virtually no changes, and over time started incorporating more AWS services into its solution. The company is looking for further ways to optimize costs using AWS, exploring services like AWS Graviton processor, which delivers excellent price performance for cloud workloads running in Amazon EC2. The company is finding opportunities to be cost optimal while retaining the resources it needs for on-demand computing. “We’re about 10 months past migration, and the eventual goal is to drive a faster process from idea to validation. Our researchers are faster and more efficient, and our hope is to see a big research breakthrough,” says de Leon.\xa0\nIncreased scalability, supporting a compute job running on more than 80,000 virtual CPUs\n About 23andMe\nEspañol\n\t{font-family:"Cambria Math";\n日本語\n\tmso-font-pitch:variable;\n\tfont-family:"Arial",sans-serif;\n한국어\n\t{font-family:Cambria;\n Amazon MAP\n \n\tmso-bidi-font-size:12.0pt;\n AWS Services Used\nArnold de Leon Sr. Program Manager,\xa023andMe\n\tmargin:0in;\n Optimizing Value Running HPC on AWS\n          \xa0 \n\tmso-pagination:widow-orphan;\nOptimized costs @font-face\n\t{page:WordSection1;}ol\n23andMe can scale on demand to match compute capacity for actual workloads and then scale back down. “To give a sense of scale, we had a peak compute job running with over 80,000 virtual CPUs operating at once,” says de Leon. In addition, using Amazon EC2 ins
...
n more\xa0»\n\tmso-font-signature:3 0 0 0 -2147483647 0;}@font-face\n\t{mso-style-name:Normal0;\n\tfont-size:11.0pt;\n             Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. \nΡусский\nRemoved compute resource contention among researchers\n\tmso-font-charset:77;\n中文 (简体)\n\t{margin-bottom:0in;}\n          23andMe initially used an on-premises facility, but as its data storage and compute needs grew, the company began looking to the cloud for greater scalability and flexibility. Additionally, the company sought to reduce human operating costs for facility maintenance and accelerate its ability to adopt new hardware and tech by transitioning to the cloud. In 2016, the company began using \n\tmso-style-parent:"";\n             AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. \n          As it started using cloud services, 23andMe tried a hybrid solution, running workloads in its data center and on AWS concurrently. This solution provided some scalability but came with associated costs of migrating data back and forth between the on-premises data center and the cloud. To achieve better cost optimization while also gaining more flexibility and scalability, 23andMe decided to migrate fully to AWS in 2021. \n Get Started\n\tmso-generic-font-family:roman;\n  Contact Sales'}) and 2 missing columns ({'Content', 'ID'}).

This happened while the csv dataset builder was generating data using

hf://datasets/shalabh05/Shalabh_Dataset/output_updated.csv (at revision ccdff331387befbe517669379feeed22ee461f93)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
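The first suggested fix (making every file's columns match) can be checked up front before re-uploading. A minimal sketch in Python, assuming the expected schema is exactly `ID` and `Content`; the toy CSV strings below are illustrative, not the real dataset:

```python
# Sketch of a pre-flight schema check for the fix the error message asks for:
# every data file must expose the same columns (here, ID and Content).
import csv
import io

EXPECTED_COLUMNS = ["ID", "Content"]

def has_expected_header(csv_text: str) -> bool:
    """Return True if the first row of the CSV is exactly the expected header."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    return header == EXPECTED_COLUMNS

well_formed = 'ID,Content\n"a.txt","some text"\n'
malformed = '"stray page text that became a column name"\n'

print(has_expected_header(well_formed))  # True
print(has_expected_header(malformed))    # False
```

Running this over each CSV in the repository would flag the file whose header row is actually a stray data row, which is what the cast error above is reporting.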
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              23andMe could migrate its existing environment with virtually no changes, and over time started incorporating more AWS services into its solution. The company is looking for further ways to optimize costs using AWS, exploring services like AWS Graviton processor, which delivers excellent price performance for cloud workloads running in Amazon EC2. The company is finding opportunities to be cost optimal while retaining the resources it needs for on-demand computing. “We’re about 10 months past migration, and the eventual goal is to drive a faster process from idea to validation. Our researchers are faster and more efficient, and our hope is to see a big research breakthrough,” says de Leon. 
              Increased scalability, supporting a compute job running on more than 80,000 virtual CPUs
               About 23andMe
              Español
              	{font-family:"Cambria Math";
              日本語
              	mso-font-pitch:variable;
              	font-family:"Arial",sans-serif;
              한국어
              	{font-family:Cambria;
               Amazon MAP
               
              	mso-bidi-font-size:12.0pt;
               AWS Services Used
              Arnold de Leon Sr. Program Manager, 23andMe
              	margin:0in;
               Optimizing Value Running HPC on AWS
                          
              	mso-pagination:widow-orphan;
              Optimized costs @font-face
              	{page:WordSection1;}ol
              23andMe can scale on demand to match compute capacity for actual workloads and then scale back down. “To give a sense of scale, we had a peak compute job running with over 80,000 virtual CPUs operating at once,” says de Leon. In addition, using Amazon EC2 instances has removed resource contention f
              ...
              l0;
              	font-size:11.0pt;
                           Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. 
              Ρусский
              Removed compute resource contention among researchers
              	mso-font-charset:77;
              中文 (简体)
              	{margin-bottom:0in;}
                        23andMe initially used an on-premises facility, but as its data storage and compute needs grew, the company began looking to the cloud for greater scalability and flexibility. Additionally, the company sought to reduce human operating costs for facility maintenance and accelerate its ability to adopt new hardware and tech by transitioning to the cloud. In 2016, the company began using 
              	mso-style-parent:"";
                           AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. 
                        As it started using cloud services, 23andMe tried a hybrid solution, running workloads in its data center and on AWS concurrently. This solution provided some scalability but came with associated costs of migrating data back and forth between the on-premises data center and the cloud. To achieve better cost optimization while also gaining more flexibility and scalability, 23andMe decided to migrate fully to AWS in 2021. 
               Get Started
              	mso-generic-font-family:roman;
                Contact Sales: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 22231
              to
              {'ID': Value(dtype='string', id=None), 'Content': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1317, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 932, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
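The error message's second suggestion is to separate mismatched files into different configurations. As a sketch (the config names and file patterns here are assumptions, not the repository's actual layout), the dataset's README front matter could declare them explicitly so files with different schemas are never merged:

```yaml
configs:
  - config_name: main
    data_files: "output_updated.csv"
  - config_name: extra
    data_files: "other_files/*.csv"
```

Each `config_name` then becomes a separately loadable configuration with its own schema.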

Need help to make the dataset viewer work? Open a discussion for direct support.

Columns: ID (string), Content (string)
23andMe Case Study _ Life Sciences _ AWS.txt
23andMe Innovates Drug and Therapeutic Discovery with HPC on AWS (2022)

Genomics and biotechnology company 23andMe provides direct-to-customer genetic testing, giving customers valuable insights into their genetics. 23andMe needed more scalability and flexibility in its high-performance computing (HPC) to manage multiple petabytes of data efficiently. The company had been using an on-premises solution but began using Amazon Web Services (AWS) in 2016 to store important data. In 2021, the company made a full migration to the cloud, a process that took only 4 months. Since adopting AWS HPC services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and AWS Batch, which lets developers, scientists, and engineers easily and efficiently run hundreds of thousands of batch computing jobs on AWS, 23andMe has increased its scalability, flexibility, and cost optimization.

Benefits: migrated smoothly to the cloud within 4 months; increased scalability, supporting a compute job running on more than 80,000 virtual CPUs; increased efficiency, completing a 3-week production workload 33% ahead of schedule; removed compute resource contention among researchers; optimized costs.

Embracing the Cloud for Secure Data Storage
Founded in 2006, 23andMe has collected an enormous amount of data and generated millions of lines of code for its research and therapeutics. It uses this data for regression analysis, genome-wide association studies, and general correlation studies across datasets. The genetic testing market has been gaining momentum because of the increased prevalence of genetic diseases, better public awareness of the benefits of early detection, and falling costs of genetic sequencing over the past 16 years.

23andMe initially used an on-premises facility, but as its data storage and compute needs grew, the company began looking to the cloud for greater scalability and flexibility. The company also sought to reduce the human operating costs of facility maintenance and accelerate its ability to adopt new hardware and technology by transitioning to the cloud. In 2016, the company began using AWS to store important data in Amazon Simple Storage Service (Amazon S3), an object storage service that offers scalability, data availability, security, and performance. "If we care about a piece of data, we store it in Amazon S3," says Arnold de Leon, program manager in charge of cloud spending at 23andMe. "It is an excellent way of securing data with regard to data durability." 23andMe uses the Amazon S3 Intelligent-Tiering storage class to automatically migrate data to the most cost-effective access tier when access patterns change.

As it started using cloud services, 23andMe tried a hybrid solution, running workloads in its data center and on AWS concurrently. This solution provided some scalability but came with the associated costs of migrating data back and forth between the on-premises data center and the cloud. To achieve better cost optimization while also gaining more flexibility and scalability, 23andMe decided to migrate fully to AWS in 2021.

Optimizing Value Running HPC on AWS
23andMe used the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud migration program based on the experience that AWS has in migrating thousands of enterprise customers to the cloud. Using AWS MAP, 23andMe achieved a smooth migration in only 4 months. "What AWS MAP was offering us was the ability to do a fast, massive shift," says de Leon. "Usually when you do that, it's very expensive, but AWS MAP solved that problem." 23andMe migrated everything out of its data center and into the cloud on AWS.

Managing scientists' file-based home directories presented another challenge. To solve this issue, 23andMe turned to Weka, an AWS Partner. The WekaIO parallel file system is functional, cost-effective, and compatible with Amazon S3, which helped 23andMe's internal team implement changes with no disruption to the customer experience. When the migration was complete, 23andMe started taking advantage of AWS services for HPC such as Amazon EC2 C5 Instances, which deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads. It chose this instance type because it was the closest analog to its previous computing resources.

23andMe can scale on demand to match compute capacity to actual workloads and then scale back down. "To give a sense of scale, we had a peak compute job running with over 80,000 virtual CPUs operating at once," says de Leon. In addition, using Amazon EC2 instances has removed resource contention for 23andMe's researchers. "Recently, we had a 3-week production workload finish 33 percent ahead of schedule. Since migrating to AWS, our ability to deliver compute resources to our researchers is now unmatched," says Justin Graham, manager of an infrastructure engineering group at 23andMe.

23andMe quickly discovered the benefits of having a variety of Amazon EC2 instance types available. "We have the entire menu of Amazon EC2 offerings available to us, and one way to achieve efficiency is finding an optimal fit for resource use," says Graham. As of 2022, the company uses many instance types flexibly, including Amazon EC2 X2i Instances, the next generation of memory-optimized instances, which deliver improvements in performance, price performance, and cost for memory-intensive workloads. 23andMe also uses AWS Batch to rightsize and match resources when determining which instance types to use, which helps with price-performance optimization.

While enjoying these benefits of using HPC services on AWS, 23andMe has not had to compromise on its initial spending goals. "Our goal was to keep our costs the same but gain flexibility, capability, and value. Savings is less about the bottom line and more about what we gain for what we spend," says de Leon. 23andMe has achieved increases in cost optimization by using a variety of AWS services, including Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, as well as Amazon EC2.

Exploring Future Possibilities with Flexibility on AWS
One year after migrating to AWS, as the AWS MAP program ends for 23andMe, the company is achieving equal or better price performance because of the team's diligence in adopting AWS services. 23andMe could migrate its existing environment with virtually no changes and over time started incorporating more AWS services into its solution. The company is looking for further ways to optimize costs on AWS, exploring services such as AWS Graviton processors, which deliver excellent price performance for cloud workloads running in Amazon EC2. The company is finding opportunities to be cost optimal while retaining the resources it needs for on-demand computing. "We're about 10 months past migration, and the eventual goal is to drive a faster process from idea to validation. Our researchers are faster and more efficient, and our hope is to see a big research breakthrough," says de Leon.

About 23andMe
Headquartered in California, 23andMe is known for its at-home DNA collection kits. The company also uses its database of genetic information to further its understanding of biology and therapeutics to develop new drugs and therapies.
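The S3 Intelligent-Tiering practice this row describes can be sketched with the AWS SDK for Python. This is not 23andMe's actual code; the function name, bucket, and key are hypothetical:

```python
# Sketch: uploading an object under the S3 Intelligent-Tiering storage
# class so S3 moves it to cheaper access tiers as access patterns change.
INTELLIGENT_TIERING_ARGS = {"StorageClass": "INTELLIGENT_TIERING"}

def upload_with_intelligent_tiering(path: str, bucket: str, key: str) -> None:
    """Upload a local file to S3 with the Intelligent-Tiering storage class."""
    import boto3  # imported lazily; needs AWS credentials at call time
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key, ExtraArgs=INTELLIGENT_TIERING_ARGS)

# Example call (hypothetical bucket and key):
# upload_with_intelligent_tiering("results.vcf.gz", "example-genomics-bucket",
#                                 "cohort1/results.vcf.gz")
```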
36 new or updated datasets on the Registry of Open Data_ AI analysis-ready datasets and more _ AWS Public Sector Blog.txt
AWS Public Sector Blog
36 new or updated datasets on the Registry of Open Data: AI analysis-ready datasets and more
by Erin Chu | 13 JUL 2023 | in Analytics, Announcements, Artificial Intelligence, AWS Data Exchange, Education, Open Source, Public Sector, Research

The AWS Open Data Sponsorship Program makes high-value, cloud-optimized datasets publicly available on Amazon Web Services (AWS). AWS works with data providers to democratize access to data by making it available to the public for analysis on AWS; develop new cloud-native techniques, formats, and tools that lower the cost of working with data; and encourage the development of communities that benefit from access to shared datasets. Through this program, customers are making over 100 PB of high-value, cloud-optimized data available for public use. The full list of publicly available datasets is on the Registry of Open Data on AWS and is now also discoverable on AWS Data Exchange. This quarter, AWS released 36 new or updated datasets. As July 16 is Artificial Intelligence (AI) Appreciation Day, the AWS Open Data team is highlighting three unique datasets that are analysis-ready for AI. What will you build with these datasets?

Three AI analysis-ready datasets on the Registry of Open Data
NYUMets Brain Dataset from the NYU Langone Medical Center is one of the largest datasets of cranial imaging in existence, and the largest dataset of metastatic cancer, containing over 8,000 brain MRI studies, clinical data, and treatment records from cancer patients. Over 2,300 images have been annotated for metastatic tumor segmentations, making NYUMets: Brain a valuable source of segmented medical imaging. An AI model for segmentation tasks as well as a longitudinal tracking tool are available for NYUMets through MONAI.

RACECAR Dataset from the University of Virginia is the first open dataset for full-scale and high-speed autonomous racing. RACECAR is suitable for exploring issues of localization, object detection and tracking (LiDAR, radar, and camera), and mapping that arise at the limits of operation of an autonomous vehicle. You can get started with RACECAR with a SageMaker Studio Lab notebook.

Aurora Multi-Sensor Dataset from Aurora Operations, Inc. is a large-scale multi-sensor dataset with highly accurate localization ground truth, captured between January 2017 and February 2018 in the metropolitan area of Pittsburgh, PA, USA. The de-identified dataset contains rich metadata, such as weather and semantic segmentation, and spans all four seasons, rain, snow, overcast and sunny days, different times of day, and a variety of traffic conditions. This data can be used to develop and evaluate large-scale, long-term approaches to autonomous vehicle localization. Aurora is applicable to many research areas, including 3D reconstruction, virtual tourism, HD map construction, and map compression.

Full list of new or updated datasets
These three datasets join 33 other new or updated datasets on the Registry of Open Data in the following categories.
Climate and weather:
- ECMWF real-time forecasts from the European Centre for Medium-Range Weather Forecasts
- NOAA Wang Sheeley Arge (WSA) Enlil from the National Oceanic and Atmospheric Administration (NOAA)
- ONS Open Data Portal from the National Electric System Operator of Brazil
- Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters from the Mobile Robotics & Intelligence Laboratory (MORIN Lab)
- Sup3rCC from the National Renewable Energy Laboratory
- EURO-CORDEX – European component of the Coordinated Regional Downscaling Experiment from Helmholtz Centre Hereon / GERICS

Geospatial:
- Astrophysics Division Galaxy Segmentation Benchmark Dataset from the National Aeronautics and Space Administration (NASA)
- Astrophysics Division Galaxy Morphology Benchmark Dataset from NASA
- ESA WorldCover Sentinel-1 and Sentinel-2 10m Annual Composites from the European Space Agency
- Korean Meteorological Agency (KMA) GK-2A Satellite Data from the Korean Meteorological Agency
- NASA / USGS Controlled Europa DTMs from NASA
- NASA / USGS Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Targeted DTMs from NASA
- Nighttime-Fire-Flare from Universities Space Research Association (USRA) and NASA Black Marble
- PALSAR-2 ScanSAR Tropical Cyclone Mocha (L2.1) from the Japan Aerospace Exploration Agency (JAXA)
- PALSAR-2 ScanSAR Flooding in Rwanda (L2.1) from JAXA
- Solar Dynamics Observatory (SDO) Machine Learning Dataset from NASA

Life sciences:
- Extracellular Electrophysiology Compression Benchmark from the Allen Institute for Neural Dynamics
- Long Read Sequencing Benchmark Data from the Garvan Institute
- Genomic Characterization of Metastatic Castration Resistant Prostate Cancer from the University of Chicago
- Harvard Electroencephalography Database from the Brain Data Science Platform
- The Human Sleep Project from the Brain Data Science Platform
- Integrative Analysis of Lung Adenocarcinoma in Environment and Genetics Lung cancer Etiology (Phase 2) from the University of Chicago
- National Cancer Institute Imaging Data Commons (IDC) Collections from the Imaging Data Commons
- Indexes for Kaiju from the University of Copenhagen Bioinformatics Center
- Molecular Profiling to Predict Response to Treatment (phs001965) from the University of Chicago
- NYUMets Brain Dataset from the NYU Langone Medical Center
- SPaRCNet data: Seizures, Rhythmic and Periodic Patterns in ICU Electroencephalography from the Brain Data Science Platform
- The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset from the University of California San Francisco
- UK Biobank Linkage Disequilibrium Matrices from the Broad Institute
- VirtualFlow Ligand Libraries from Harvard Medical School

Machine learning:
- Aurora Multi-Sensor Dataset from Aurora Operations, Inc.
- RACECAR Dataset from the University of Virginia
- Exceptional Responders Initiative from Amazon
- Amazon Seller Contact Intent Sequence from Amazon
- Open Food Facts Images from Open Food Facts
- Product Comparison Dataset for Online Shopping from Amazon

What are people doing with open data?
Amazon Location Service launched Open Data Maps for Amazon Location Service, a data provider option for the Maps feature based on OpenStreetMap. Oxford Nanopore Technologies benchmarked their genomic basecalling algorithms, which decode DNA or RNA into sequence for analysis, on 20 different Amazon Elastic Compute Cloud (Amazon EC2) instances. HuggingFace hosted a Bio x ML Hackathon that challenged teams to leverage AI tools, open data, and cloud resources to solve problems at the intersection of the life sciences and artificial intelligence.

How can you make your data available?
The AWS Open Data Sponsorship Program covers the cost of storage for publicly available high-value, cloud-optimized datasets.
We work with data providers who seek to:
- Democratize access to data by making it available for analysis on AWS
- Develop new cloud-native techniques, formats, and tools that lower the cost of working with data
- Encourage the development of communities that benefit from access to shared datasets
Learn how to propose your dataset to the AWS Open Data Sponsorship Program, and learn more about open data on AWS.

Read more about open data on AWS:
- Largest metastatic cancer dataset now available at no cost to researchers worldwide
- Creating access control mechanisms for highly distributed datasets
- 33 new or updated datasets on the Registry of Open Data for Earth Day and more
- How researchers can meet new open data policies for federally-funded research with AWS
- Accelerating and democratizing research with the AWS Cloud
- Introducing 10 minute cloud tutorials for research

Erin Chu is the life sciences lead on the Amazon Web Services (AWS) open data team. Trained to bridge the gap between the clinic and the lab, Erin is a veterinarian and a molecular geneticist, and spent the last four years in the companion animal genomics space. She is dedicated to helping speed time to science through interdisciplinary collaboration, communication, and learning.
54gene _ Case Study _ AWS.txt
54gene Equalizes Precision Medicine by Increasing Diversity in Genetics Research Using AWS

Genomics research studying global populations is crucial for learning how genomic variation impacts diseases and how data can be used to improve the well-being of all populations. Despite the diverse genetic makeup of people in Africa, the continent is vastly underrepresented in global genetic research, with less than 3 percent of genomic data coming from African populations. The mission of health technology startup 54gene is to bridge this gap to deliver precision medicine to Africa and the global population.

Solution | Analyzing Datasets as Large as 30–40 TB in a Few Days

54gene’s integrative digital solution has three major components: the clinical operations to enroll patients for collecting clinical and phenotypic data, the biobank that stores biospecimens, and the downstream genomic analysis, which uses technologies like genotyping and whole genome sequencing to generate insights. This large-scale genomic analysis needs access to robust HPC solutions to process a high throughput of data. “Our current architecture, which is exclusively on AWS, strikes a good balance between cost effectiveness and flexibility,” says Joshi. “We have varying sizes and designs of computing architecture to make our processes cost effective, and it has been really nice.” Using AWS ParallelCluster, 54gene can customize the kind of HPC cluster that it wants to use depending on the type and size of the data coming in. The startup has one queue for handling terabytes of data with compute-optimized nodes and a separate queue for smaller tasks, like running short Python scripts. The AWS team provided support throughout the migration and design of GENIISYS.
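A two-queue layout of this kind maps naturally onto the Slurm queues of an AWS ParallelCluster (v3) configuration. The sketch below mirrors that structure as a Python dictionary; the queue names, instance types, and node counts are illustrative assumptions, since 54gene's actual configuration is not described in the case study:

```python
# Illustrative sketch of a two-queue ParallelCluster-style Slurm layout.
# All names, instance types, and counts are hypothetical.
scheduling = {
    "Scheduler": "slurm",
    "SlurmQueues": [
        {
            # Compute-optimized nodes for terabyte-scale genomics jobs
            "Name": "large-data",
            "ComputeResources": [
                {"Name": "compute-opt", "InstanceType": "c5.24xlarge",
                 "MinCount": 0, "MaxCount": 64}
            ],
        },
        {
            # Lightweight queue for short Python scripts and small tasks
            "Name": "small-tasks",
            "ComputeResources": [
                {"Name": "general", "InstanceType": "m5.large",
                 "MinCount": 0, "MaxCount": 8}
            ],
        },
    ],
}

def queue_names(cfg):
    """Return the queue names defined in a scheduling section."""
    return [q["Name"] for q in cfg["SlurmQueues"]]

print(queue_names(scheduling))  # → ['large-data', 'small-tasks']
```

Keeping MinCount at 0 means each queue scales to zero when idle, which is what makes a per-queue design cost effective for bursty genomics workloads.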
“AWS listens carefully to our questions and needs and works diligently to provide additional resources,” says He.

2023

The company built a proprietary solution called GENIISYS on Amazon Web Services (AWS) to curate genetic, clinical, and phenotypic data from Africa and other diverse populations and generate insights that can lead to new treatments and diagnostics. Using multiple AWS services, including AWS ParallelCluster, an open-source cluster management tool that makes it simple to deploy and manage high performance computing (HPC) clusters on AWS, GENIISYS can scale to cost-effectively support massive datasets and power precision medicine for historically underserved demographics.

54gene is already seeing the benefits of AWS as it develops and scales new features of GENIISYS. “We are doing a lot of trial and error,” says Joshi. “On AWS, we can start small with novel ideas and deploy a lot of small applications, and the AWS team helps us determine which particular interface best suits us.”

To store and visualize its datasets, 54gene uses Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale databases in the cloud. “On Amazon RDS, we’re able to store metadata from our three major components of research and query our datasets efficiently,” says Joshi. The startup also uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, to power its data analytics workflows. Using different HPC configurations, 54gene can analyze datasets as large as 30–40 TB in just a few days. And even while it’s achieving a throughput of more than 5 TB per week, the startup is reducing its costs on AWS.
“Another factor that made us choose AWS is that AWS has a great presence in the African continent, including the close physical proximity of its data centers to our business units there,” says He.

54gene is using its data analytics infrastructure on AWS to drive research into specific diseases. For example, the startup is working to identify what genetic factors might lead to more serious cases of sickle cell disease in Nigeria and to tailor treatments to patients based on disease severity. 54gene stores all its genomic data using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. “Another great aspect of working on AWS is that we can configure data storage to be cost effective,” says Joshi. The company uses Amazon S3 Lifecycle policies to automatically migrate data to Amazon S3 Glacier storage classes—which are purpose-built for data archiving—to minimize storage costs.

To conveniently access data stored in Amazon S3 for processing using HPC clusters, the startup uses Amazon FSx for Lustre, which provides fully managed shared storage built on a popular high-performance file system. And 54gene’s computational scientists, many of whom had trained on traditional on-premises setups, adjusted easily to AWS. “What’s nice about AWS is that we are able to replicate a familiar environment for our computational scientists with minimal cloud training,” says Joshi. “AWS ParallelCluster is a great example of that.”

About 54gene
Based in Nigeria, 54gene is a genomics startup that works with pharmaceutical and research partners to study genetic diseases and identify treatments. It’s focused on addressing the need for diverse datasets from underrepresented African populations.
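An S3 Lifecycle policy of the kind described is just a small rules document attached to a bucket. A minimal sketch, with a hypothetical bucket prefix and an assumed 90-day transition window (54gene's real thresholds are not public):

```python
import json

# Sketch of an S3 Lifecycle rule that transitions objects to the Glacier
# storage class to cut archival storage costs. The rule ID, prefix, and
# 90-day threshold are illustrative assumptions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-raw-genomic-data",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))

# With boto3 installed, the policy would be applied along these lines:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-genomics-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_config,
# )
```

Once the rule is in place, S3 moves matching objects automatically; no application code has to track object age.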
AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Reduced costs

Achieved flexible, scalable, and reliable cloud infrastructure

Opportunity | Using AWS ParallelCluster to Build a Scalable, Cost-Effective Genomics Research Solution for 54gene

With the flexibility and cost effectiveness of the cloud, 54gene is better able to study the effects of diseases on previously underrepresented African genetic data. The startup can also seamlessly integrate its highly curated clinical, phenotypic, and genetic data within one solution and build capacity for further research initiatives focused on targeted populations in Africa or specific disease areas. “We have the flexibility to do almost anything on AWS,” says Joshi. “From running quick scripts to genotyping in a matter of hours to analyzing terabytes of data efficiently, this flexibility has been really beneficial.”

Learn how 54gene in life sciences is curating diverse datasets to unlock genetic insights in Africa and globally using AWS.
Outcome | Continuing to Increase Representation for African Genetic Data in Global Health Research

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

Nigeria-based 54gene collaborates with local research institutions and global pharmaceutical partners to study the many ethnolinguistic groups within Nigeria, better understand the diversity present on the continent, and uncover new biological insights. Its GENIISYS solution includes a state-of-the-art biorepository that stores highly curated clinical, phenotypic, and genetic data from the African population to facilitate research for a new wave of therapeutics. “Through GENIISYS, we wanted to create a gateway between genomics insights from Africa and research in other countries,” says Ji He, senior vice president of technology at 54gene.

To effectively collect and store genomic data and connect it to phenotypic information (such as clinical and demographic data), the startup needed a flexible cloud-based solution that could scale while still optimizing costs. “When we’re performing genotyping or whole genome sequencing, we generate huge amounts of data, and we have to process it at a high rate of throughput,” says Esha Joshi, bioinformatics engineer at 54gene. “We chose AWS because of its reliability and scalability and the fact that we have to pay only for what we use. That’s important for a startup because it can be difficult to anticipate computing and storage needs.”
6sense Case Study.txt
Searching for a more scalable solution, 6sense began to explore Kubernetes, an open-source container orchestration system, to improve its data pipelines. In 2018, the company migrated its application and API services to two Kubernetes clusters and began using kOps, a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. Although a containerized architecture improved agility for 6sense, kOps was not fully managed, which required the 6sense team to perform significant day-to-day operations and management. “Using kOps, we experienced way too much maintenance overhead,” says Liaw. “We realized that if we could reduce these manual tasks, our team could focus its time on serving the customer instead of managing Kubernetes.”

Benefits of AWS

By migrating to fully managed Amazon EKS clusters, 6sense can effectively scale and manage its data pipeline, which has accelerated its speed to deliver insights to its customers. The company plans to further improve its scaling capabilities using Karpenter, an open-source Kubernetes cluster autoscaler built alongside AWS.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises.

Searching for Scalable Pipeline Orchestration

Improved speed to market for new applications and features

Using Amazon EKS, 6sense has seen a 400 percent improvement in workload throughput, giving it the ability to process 1–2 TB of data per day and growing. With this speed, 6sense can support highly complex workloads and deliver valuable insights to its customers 65 percent faster.
With Enterprise Support, you get 24x7 technical support to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive and preventative programs and AWS subject matter experts.

Continuing to Enhance Scalability on AWS

6sense’s AWS-powered solution is not only extremely fast but also highly scalable. “We can scale a cluster on Amazon EKS almost infinitely to run as many things in parallel as possible,” says Premal Shah, senior vice president of engineering and infrastructure at 6sense. “We no longer need to worry about how much we can run per hour.” The company also relies on Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which are used to run large workloads at significant cost savings and accelerate workloads by running parallel tasks. By using Amazon EC2 Spot Instances, 6sense can provision the capacity it needs to support its future expansion while optimizing for costs.

About 6sense Insights Inc.
6sense Insights Inc.’s Revenue AI reinvents the way companies create, manage, and convert pipelines to revenue by capturing anonymous buying signals, targeting the right accounts, and recommending channels and messages to boost performance.

Frees employees’ time to focus on high-value tasks and innovation

Delivers insights to customers 65% faster

Because Amazon EKS is a fully managed Kubernetes service, 6sense no longer needs to focus on managing or operating its Kubernetes clusters. Using this time savings, its team can dedicate time to improving the customer experience. “On AWS, we are able to increase developer velocity, reduce unnecessary red tape, and serve our customers as best as we can,” says Liaw. “We can push out new features, insights, and products to them as quickly as possible.
The faster we can innovate to serve our customers, the better the experience is for everybody—including our team.”

Improved developer productivity

Improving Speed, Agility, and Innovation Using Amazon EKS

Improved workload throughput by 400%

Processes 1–2 TB of data per day

6sense has also vastly accelerated its development speeds by migrating to AWS. On Apache Mesos, the company was limited in its ability to build, test, and deploy new data pipelines due to limitations on container throughput. On Amazon EKS, 6sense can run up to 300 percent more containers per hour. It can also run the same number of Docker containers on Amazon EKS in approximately 50 percent of the time that it took under its previous solution. By achieving this level of speed and scalability, 6sense has improved developer productivity and accelerated its speed to market for new applications and features.

In 2019, 6sense chose to invest in AWS Enterprise Support, which provides concierge-like service to support companies in achieving outcomes and finding success in the cloud. The AWS Enterprise Support team helped the company realize that it could alleviate the issues that it was facing by migrating to Amazon EKS, which is fully managed. “For 6sense, Amazon EKS was almost a drop-in replacement that magically worked better,” says Liaw.

6sense migrated to Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises.
Using Amazon EKS, 6sense completes workloads significantly faster while reducing management needs, improving its speed of delivery, and freeing its developers to focus on innovative solutions. 6sense Insights Inc. (6sense) needed to effectively scale and manage its data pipelines so that it could better support its growth. With 6sense Revenue AI, a leading platform for predictable revenue growth, the company generates actionable insights for business-to-business sales and marketing teams. This service relies on artificial intelligence, machine learning, and big data processing, requiring 6sense to run complex workloads and process terabytes of data per day. When its open-source pipeline orchestration solution could no longer support these workloads, 6sense began exploring alternative solutions and chose to implement fully managed services from Amazon Web Services (AWS).

Headquartered in San Francisco, California, 6sense delivers data analytics, sales insights, and other predictions so that business-to-business revenue teams can better understand their buyers and customers. In 2014, the company began using Apache Mesos, an open-source solution that manages compute clusters, to orchestrate its data pipeline frameworks. “As we grew, we encountered several limitations on Apache Mesos,” says George Liaw, director of infrastructure engineering at 6sense. “We could only offer compute resources to one framework at a time, which slowed our processes. We also experienced scaling issues.”

Facilitates a fully managed solution

6sense Insights Inc. Improves Scalability and Accelerates Speed to Market by Migrating to Amazon EKS

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
2022

On AWS, 6sense freed its employees to focus on innovation, and the company will continue to use AWS services to develop new, value-generating solutions. “At 6sense, we are able to move quickly and innovate on AWS without being held back,” says Liaw. In September 2021, 6sense began migrating its remaining workloads from legacy solutions running on Apache Mesos and kOps to Amazon EKS. The company migrated the majority of its application and API service workloads to Amazon EKS within the first week and developed a stable and usable pipeline orchestration solution by the end of 2021. “Once we started running Amazon EKS clusters, we unlocked valuable capabilities,” says Liaw. “We could test clusters with more flexible configurations without worrying about their stability.” By December 2021, the company was running 7–8 clusters on Amazon EKS and had completed 80 percent of its migration.

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
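The cost effect of that up-to-90% Spot discount on a parallel workload is simple to estimate. A quick sketch, using hypothetical instance counts and an assumed On-Demand rate (real Spot prices vary by instance type, Region, and time):

```python
def workload_cost(instance_count, hours, on_demand_rate, spot_discount=0.0):
    """Estimated cost of a parallel EC2 workload, rounded to cents.

    spot_discount is the fraction of the On-Demand price saved,
    e.g. 0.9 for the up-to-90% discount Spot Instances can offer.
    """
    return round(instance_count * hours * on_demand_rate * (1 - spot_discount), 2)

# Hypothetical example: 100 instances for 10 hours at $1.00/hour On-Demand.
on_demand = workload_cost(100, 10, 1.00)        # no discount
spot_best = workload_cost(100, 10, 1.00, 0.9)   # at the full 90% discount
print(on_demand, spot_best)  # → 1000.0 100.0
```

Because Spot capacity can be reclaimed, this kind of saving suits interruptible, parallel batch tasks like the pipeline workloads described above, rather than stateful services.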
Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group _ Case Study _ AWS.txt
On AWS, NatWest Group can quickly launch personalized products and services to meet customer demands, boost satisfaction, and anticipate future needs. The bank’s data science teams are empowered to deliver significant business value with streamlined workflows and a self-service environment. In fact, NatWest Group is on track to double its number of use cases to 60 and achieve a 3-month time to value.

The bank will continue to explore and create new, innovative solutions on AWS. For example, NatWest Group will soon introduce an ML offering that automatically sets prices for its products, improving the intelligence and efficiency of the pricing process.

2023

To equip its data teams with the skills that they need to use these tools, NatWest Group has encouraged its employees to embark on cloud learning journeys. It has hosted over 720 AWS Training courses for its data science teams to learn new skills, such as applying best practices for DevOps and building a data lake on AWS. Additionally, several employees obtained AWS Certifications, which are industry-recognized credentials that validate technical skills and cloud expertise. By offering these opportunities, NatWest Group has equipped its data science teams to build powerful, predictive ML models on AWS at a faster pace.

NatWest Group is one of the largest banks in the United Kingdom. Formally established in 1968, the company has origins dating back to 1727. NatWest Group seeks to use its rich legacy data to innovate and personalize its personal, business, and corporate banking and insurance services. To deliver these solutions at a faster pace, the bank needed a standardized ML approach. “We didn’t have a consistent way to access our data, generate insights, or build solutions,” says Andy McMahon, head of MLOps for data innovation for NatWest Group.
“Our customers felt these challenges because it took a much longer time to derive value than we wanted.”

To deploy personalized solutions at an enterprise scale, NatWest Group chose to adopt Amazon SageMaker as its core ML technology. The bank also engaged AWS Professional Services, a global team of experts that can help companies realize their desired business outcomes when using AWS, to prepare for the project. During a series of workshops, NatWest Group and AWS Professional Services worked together to identify areas of improvement within the company’s ML landscape and created a strategy for development. After crafting a comprehensive plan, the teams began working on the project in July 2021.

Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group

Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.

Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps, improving data science team productivity by up to 10x.

Solution | Achieving an Agile DevOps Culture Using AWS ML Solutions

Opportunity | Using Amazon SageMaker to Reduce Time to Value for NatWest Group

Outcome | Deploying Innovative Services at Scale Using Amazon SageMaker

NatWest Group employees now have fast and simple access to the data and tools that they need to build and train ML models. “We modernized our technology stack, simplified data access, and standardized our governance and operational procedures in a way that maintains the right risk behaviors,” says McMahon.
“Using Amazon SageMaker, we can go from an idea on a whiteboard to a working ML solution in production in a few months versus 1 year or more.” NatWest Group launched its first offerings in November 2022, reducing its time to value from 12–18 months to only 7.

30+ ML use cases built in 4 months

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.

To remain competitive in the fast-paced financial services industry, NatWest Group is under pressure to deliver increasingly personalized and premier services to its 19 million customers. The bank has built a variety of workflows to explore its data and build machine learning (ML) solutions that provide a bespoke experience based on customer demands. However, its legacy processes were slow and inconsistent, and NatWest Group wanted to accelerate its time to business value with ML.

The bank turned to Amazon Web Services (AWS) and adopted Amazon SageMaker, a service that data scientists and engineers use to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. By centralizing its ML processes on AWS, NatWest Group has reduced the time that it takes to launch new products and services by several months and has embraced a more agile culture among its data science teams. In April 2022, NatWest Group launched an enterprise-wide, centralized ML workflow, which it powers by using Amazon SageMaker.
And because the bank already had a presence on Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—this was the service of choice for its data lake migration. With simpler access to data and powerful ML tools, its data science teams have built over 30 ML use cases on Amazon SageMaker in the first 4 months after launch. These use cases include a solution that tailors marketing campaigns to specific customer segments and an application that automates simple fraud detection tasks so that investigators can focus on difficult, higher-value cases.

Reduced time to value

Promotes self-service environment

“There’s so much that we’ve gained from using our data intelligently,” says Greig Cowan, head of data science for data innovation at NatWest Group. “On AWS, we have opened up many new avenues and opportunities for us to detect fraud, tailor our marketing, and understand our customers and their needs.”

About NatWest Group
NatWest Group is a British banking company that offers a wide range of services for personal, business, and corporate customers. It serves 19 million customers throughout the United Kingdom and Ireland.

“If you want to launch an environment for data science work, it could take 2–4 weeks. On AWS, we can spin up that environment within a few hours. At most, it takes 1 day,” says Greig Cowan, head of data science for data innovation at NatWest Group.

To accelerate its employees’ workflows, NatWest Group uses AWS Service Catalog, which organizations use to create, organize, and govern infrastructure-as-code templates.
Before the bank adopted this solution, data scientists or engineers would need to contact a centralized team if they wanted to provision an ML environment. Previously, it would take 2–4 weeks before the infrastructure was ready to use. Now, NatWest Group can launch a template from AWS Service Catalog and spin up an ML environment in just a few hours. Its data teams can begin working on projects much sooner and have more time to focus on building powerful ML models. This self-service environment not only empowers data science teams to derive business value faster, but it also encourages consistency. “As a large organization, we want to make sure anything that we build is scalable and consistent,” says McMahon. “On AWS, we have standardized our approach to data using a consistent language and framework, which can be rolled out across different use cases.”

Reduced time to provision environment

Learn how NatWest Group used Amazon SageMaker to create personalized customer journeys with secure machine learning. To learn more, visit aws.amazon.com/financial-services/machine-learning/.

NatWest Group has adopted a number of features on Amazon SageMaker to streamline its ML workflows with the security and governance required of a major financial institution. In particular, NatWest Group adopted Amazon SageMaker Studio, a single web-based visual interface where it can perform all ML development steps. Because Amazon SageMaker Studio is simple to use and configure, new users can quickly set it up and start building ML models sooner.
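The self-service flow described above amounts to launching an approved Service Catalog template as a provisioned product. A minimal sketch of the request such a launch would carry, where every identifier and parameter is a hypothetical placeholder rather than anything from NatWest Group's actual catalog:

```python
def build_provision_request(product_id, artifact_id, name, parameters):
    """Assemble the arguments for a Service Catalog provision-product call.

    All identifiers are hypothetical placeholders; a real request uses
    the IDs of templates approved in the organization's own catalog.
    """
    return {
        "ProductId": product_id,
        "ProvisioningArtifactId": artifact_id,
        "ProvisionedProductName": name,
        "ProvisioningParameters": [
            {"Key": k, "Value": v} for k, v in parameters.items()
        ],
    }

request = build_provision_request(
    "prod-ml-env-example",   # hypothetical product ID
    "pa-v3-example",         # hypothetical artifact (version) ID
    "fraud-team-ml-env",
    {"InstanceType": "ml.t3.medium", "TeamName": "fraud-analytics"},
)
print(request["ProvisionedProductName"])  # → fraud-team-ml-env

# With boto3 installed, the environment would be launched along these lines:
# import boto3
# boto3.client("servicecatalog").provision_product(**request)
```

Because the template itself is governed centrally, every environment launched this way comes out consistent, which is the standardization benefit McMahon describes.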
Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform _ AWS Partner Network (APN) Blog.txt
AWS Partner Network (APN) Blog

Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform
by Dhiraj Thakur and Murali Gowda | on 27 JUN 2023 | in Analytics, Artificial Intelligence, AWS Partner Network, Customer Solutions, Intermediate (200), Thought Leadership

By Dhiraj Thakur, Solutions Architect – AWS
By Murali Gowda, Advisor Architect – DXC Technology

Analytics is an essential tool that helps companies accelerate their business outcomes, but the current approach to analytics taken by most companies limits its effectiveness. Rapid changes in business intelligence and analytics solutions mean companies are continually over-investing in solutions that rapidly age. They’re spending more time reevaluating, redesigning, and redeploying technologies than applying them to the business. They’re also making new commitments to expand their IT footprint at a time when most want to reduce their total estate.

Analytics can unlock new value from data, helping customers make faster decisions and gain greater competitive advantage. To benefit from the full power of analytics, customers need a solution they can deploy quickly and use to improve the effectiveness of their existing business intelligence over time—and avoid investing in tools that become obsolete before they’re deployed. With DXC Technology’s Analytics and AI Platform (AAIP), an analytics platform as a service built on Amazon Web Services (AWS), you can develop and deploy new analytics applications in weeks. In this post, we walk through the features and benefits of AAIP, which helps you look further and deeper, gaining business insights from data you could not previously access or manage.

DXC Technology is an AWS Premier Tier Services Partner and Managed Service Provider (MSP) that understands the complexities of migrating workloads to AWS in large-scale environments and the skills needed for success.
Platform Overview

Historically, several challenges held customers back from adopting advanced analytics:

Siloed data and operational data stores hindered data access and discovery, limiting insights generation.
Data duplicated across multiple systems led to data quality issues.
Difficulty managing data ingestion, data integration, and data quality from a single, centralized location.
Difficulty gaining approval on enterprise data models and entity relationship models from multiple business units.
Regulatory and compliance issues.
Complex upfront costs and heavy development, marred by skills shortages.
Limitations of on-premises-only options.
Administrative overhead.

DXC Analytics and AI Platform is an analytics solution that rapidly improves the effectiveness and impact of your existing business intelligence landscape. AAIP addresses these challenges and eliminates the need to make continuous investments that expand the IT footprint and increase maintenance and upgrade costs.

Figure 1 – DXC Analytics and AI Platform (AAIP).

The bottom layer of the figure is DXC’s managed service offering, through which DXC manages the platform. The next layer shows DXC’s flexible deployment options, including hybrid cloud, on-premises, and AWS deployments. Bundled with DXC’s managed service, AAIP takes the guesswork and complexity out of analytics with a fully managed, industrialized solution that incorporates the latest technologies. DXC follows AWS best practices for policies, architecture, and operational processes built to satisfy the requirements of enterprise-grade security to protect data and IT infrastructure hosted in AWS. DXC provides the core industrialized platform complemented by AWS products and platform extensions from a rich services catalog, and custom options are also available. Customers can take advantage of rapid advances in artificial intelligence (AI), automation, and core analytics technologies offered from AWS.
DXC’s solution accelerators, design patterns, and reference architecture speed up the implementation, allowing you to quickly access the right data and develop solutions that target the most critical needs. Using AAIP, customers can develop and deploy analytics apps that are more user-friendly and self-service oriented, on a pay-as-you-go model.

Solution Features and Benefits

AAIP is a hardened, software-defined architecture that combines standard security and compliance controls with best-of-breed tooling to provide platform as a service (PaaS). The following diagram shows the benefits offered by AAIP as a service.

Figure 2 – AAIP solution features and benefits.

There are many benefits of AAIP, including:

Scale: A platform that scales as you grow. Seamlessly works with on-premises or cloud vendors, with multi- and hybrid-cloud deployment options.
Support and maintenance: Leverages pre-built monitoring and infrastructure configuration.
Security: The enterprise-grade platform is built with high standards in security, including protection against the most frequently occurring infrastructure (layer 3 and 4) attacks, such as distributed denial of service (DDoS) and reflection attacks. The platform is HITRUST certified and uses AWS Shield, a managed service that protects applications running on AWS against DDoS attacks.
Patching and scanning: Managed services functions include analytics workloads, service management, data backup/recovery, software patches/upgrades, continuous vulnerability management, and incident management. Operating system and security patches are reviewed and applied periodically. New instances are scanned prior to implementation, and anti-virus scanning is implemented.
Data visualization tools: Robust data visualization tools and algorithms for advanced analytics and ML.
Logging and monitoring: Provisioned resource tracking for continuous monitoring of account-related activity across AWS infrastructure.
- Standard and selectable AWS and third-party tooling: Preconfigured ServiceNow for incident management and simplified workload monitoring. When an incident occurs, Amazon Simple Notification Service (Amazon SNS) notifies users and triggers a ServiceNow incident.
- Data pipelines: Batch, event-driven, and API-driven data pipeline and workflow engines.

The following diagram shows how AAIP features support end-to-end cloud analytics adoption.

Figure 3 – AAIP offering overview.

The black box in Figure 3 shows DXC's offerings in the data analytics platform, including decades of extensive industry experience, an enterprise-grade platform and security, and accelerators. The grey box shows DXC's best-practice guidance that helps customers rapidly build the platform for their analytics needs. The purple box shows the benefits to customers.

AAIP provides distinct advantages to customers, including:

- Accelerated time to business value: DXC solution accelerators offer a T-shirt-sizing-based platform, ingestion of the right data, and rapid execution of targeted business use cases.
- End-to-end managed services: DXC's managed services draw on a deep pool of technical, business, and industry experts with field-tested methodologies, processes, and tools, delivered per an agreed service-level agreement (SLA). This includes monitoring, incident management, centralized logging, endpoint security, cloud security posture management, compliance, scanning, and threat detection.
- Solution accelerators: DXC offers accelerators such as reference architectures, design patterns, deployment automation, blueprints, and runbooks that cover initial setup, onboarding, and ongoing operations with adherence to SLAs.
- Full-service suite: A full set of analytics services to help achieve analytics insight goals, supporting delivery of advanced analytics (AI/ML, natural language processing) and actionable insights to business stakeholders.
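The SNS-to-ServiceNow handoff described above can be sketched as a small transformation step that shapes a monitoring notification into an incident payload. This is a minimal illustration: the severity tags, urgency codes, and field names below are assumptions, not DXC's actual schema.

```python
import json

# Hypothetical mapping of alarm severity tags to ServiceNow urgency codes;
# illustrative only, not DXC's configuration.
SEVERITY_TO_URGENCY = {"critical": "1", "major": "2", "minor": "3"}

def sns_record_to_incident(sns_record: dict) -> dict:
    """Shape one SNS record (a CloudWatch-alarm-style notification)
    into a ServiceNow-style incident payload."""
    message = json.loads(sns_record["Sns"]["Message"])
    severity = message.get("severity", "minor")
    return {
        "short_description": message["AlarmName"],
        "description": message.get("NewStateReason", ""),
        "urgency": SEVERITY_TO_URGENCY.get(severity, "3"),
        "category": "analytics-platform",
    }

if __name__ == "__main__":
    record = {"Sns": {"Message": json.dumps({
        "AlarmName": "aaip-etl-job-failed",
        "NewStateReason": "Threshold crossed: 1 datapoint was breaching.",
        "severity": "major",
    })}}
    print(sns_record_to_incident(record)["urgency"])  # "2"
```

In a deployed setup, a function like this would run in the SNS subscriber (for example, an AWS Lambda function) and POST the payload to the ServiceNow incident API.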
Conclusion

In this post, you learned about the features and benefits of using DXC Technology's Analytics and AI Platform (AAIP) on AWS. In an environment of competitive pressure emerging from AI and analytics, AAIP enables companies to unleash the potential of data in real-world, practical applications. AAIP is a proven analytics platform built from AWS-native services that enables users to scale their business seamlessly and significantly reduce go-to-market time. DXC offers standardized services to advise and coach people, change organizational structures, and implement and run analytics platforms at scale.

DXC Technology – AWS Partner Spotlight

DXC Technology is an AWS Premier Tier Services Partner that understands the complexities of migrating workloads to AWS in large-scale environments, and the skills needed for success.
Accelerating customer onboarding using Amazon Connect _ NCS Case Study _ AWS.txt
NCS, an AWS Partner, had been using AWS services to support various applications and IT environments for several years. The NCS Service Desk team wanted to expand its use of AWS by migrating to Amazon Connect, a pay-as-you-go contact center offering that scales on demand. "Amazon Connect met all our requirements, and we knew it would allow us to add innovative features on top of it in the future to meet our customers' needs," Cheung says.

Amazon Connect is an omnichannel cloud contact center that allows you to set up a contact center in minutes and scale to support millions of customers. With Amazon Connect you can stay ahead of customer expectations and outpace the competition at a lower cost.

Recently, NCS has started using AI and ML technologies such as Contact Lens for Amazon Connect, which the company now deploys for contact center analytics. "With Contact Lens for Amazon Connect, we can measure the quality of our customer calls by generating analytical reports within hours of a call," says Sivabalan Murugaya.

To further improve its customer experience, NCS has integrated a survey in Amazon Connect to gauge customer sentiment after each call. "Our customer satisfaction scores have been very high, which is encouraging," says Cheung. NCS has accelerated onboarding time, improved customer communications, and reduced costs by migrating its Service Desk contact center to Amazon Connect. The group is funneling savings back into the business and can more efficiently deploy staff to value-added projects. "We can invest more in our development efforts now," Cheung says. "As a result, our team is spending more time exploring new features and innovations to serve our customers."

Although NCS initially planned for the migration to take six months, the company completed it in just three months.
"Because of the AWS integration and overall efficiency of Amazon Connect, we migrated 40 projects to Amazon Connect quickly and easily," elaborates Murugaya.

Opportunity | Transforming NCS Service Desk to Be More Agile

NCS Group, a subsidiary of Singtel Group, is a leading IT consulting firm that partners with governments and enterprises in the Asia Pacific region to advance communities through technology. It was established in 1981 and has 12,000 employees across the region.

Additionally, with the integration between Amazon Connect and the NCS knowledge base system, service desk agents can quickly search different databases for information. "We now have a consistent feed of accurate information to relay to our customers," adds Murugaya.

As part of an ongoing digital transformation, NCS sought to onboard new Service Desk customers faster by moving away from the solution's on-premises IT environment. "The deployment time for new customers could take eight weeks because of software implementation and hardware procurement, and that was too long. We wanted technology that was agile, modular, cost effective, and easy to scale as we grew," says Sivabalan Murugaya, lead consultant for EUC and Service Desk at NCS Group. On-demand scaling was a key point, as Service Desk call volumes are highly dynamic; from one day to the next the group might need 100 additional service center agents.

Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. Amazon Comprehend helps businesses simplify document processing, classify documents, redact personally identifying information, and more.
Outcome | Investing in New Features and AI Innovation

Since 1981, NCS has been providing technology solutions and consulting services to government agencies and enterprises across the Asia Pacific region. The group employs 12,000 people, many of them working with the NCS Service Desk. "Through NCS Service Desk, we support our customers' application, infrastructure, and end-user desktop needs," explains Jessica Cheung, practice lead for EUC and Service Desk at NCS Group.

Contact Lens for Amazon Connect, a feature of Amazon Connect, provides a set of conversational analytics and quality management capabilities, powered by machine learning, that helps you understand and classify the sentiment, trends, and compliance of your conversations.

NCS Service Desk serves healthcare organizations and local governments, making data sovereignty another critical consideration for a new Service Desk IT environment. NCS was also looking to implement technology that would facilitate efficient innovation with native AI capabilities.

NCS is also evaluating Amazon Comprehend to derive new insights from text within its knowledge base. Cheung concludes, "We are confident that with Amazon Connect and other AWS services, we can keep providing a better contact center solution for our global customers." NCS migrated its on-premises Service Desk solution to Amazon Connect to halve onboarding time, reduce operations costs, and improve customer communications with new technologies such as artificial intelligence and machine learning.
NCS Accelerates Customer Onboarding by Moving Its Contact Center to Amazon Connect

Solution | Saving Time and Operations Cost with an Omnichannel Solution

The group is using Amazon Connect as an omnichannel call center solution, including Contact Lens for Amazon Connect to perform call analytics. Using Amazon Web Services (AWS), NCS onboards new customers twice as fast, has reduced operations costs, and gains the agility to innovate new features with native artificial intelligence (AI) and machine learning (ML) capabilities. Taking advantage of Amazon Connect, NCS is delivering an omnichannel solution that integrates voice, chat, email, and AI to improve its overall customer experience. For example, the group typically uses in-house AI to handle end users' emails within a minute. However, responses can take longer when customers present more complex issues. Using Amazon Connect, service desk agents receive the complex emails immediately and can provide a timely response.

Onboarding new customers to Amazon Connect is likewise quicker and easier. Instead of six to eight weeks, onboarding now takes just three weeks. The group can scale its Service Desk solution up or down on demand and has reduced system operational costs by 30 percent. By leveraging various data centers within the AWS Asia Pacific Region, it also ensures compliance with customers' stringent data sovereignty requirements.

NCS Group (NCS) is a multinational information technology company that serves governments and enterprises across Asia Pacific. To improve agility and onboard customers faster, NCS migrated its on-premises call center to Amazon Connect.
Accelerating Migration at Scale Using AWS Application Migration Service with 3M Company _ Case Study _ AWS.txt
Accelerating Migration at Scale Using AWS Application Migration Service with 3M Company

Global manufacturer 3M Company migrated 2,200 applications to AWS in 24 months with minimal downtime, improving its scalability and resiliency and optimizing costs to save millions of dollars.

About 3M Company: 3M Company is a manufacturing company that uses science to improve lives and solve some of the world's toughest challenges. 3M has corporate operations in 70 countries and sales in over 200.

SAP on AWS: Get more flexibility and value out of your SAP investments with the world's most secure, reliable, and extensive cloud infrastructure; more than 200 AWS services to innovate with; and purpose-built SAP automation tooling to reduce risk and simplify operations.

AWS Professional Services: AWS Professional Services offerings help you achieve specific outcomes related to enterprise cloud adoption. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud.
"The promise of the cloud—and what we achieved after we migrated to AWS—was the ability to flexibly scale and deploy with a very short lead time."
Kyle Hammer, Director of Cloud Transformation, 3M Company

Opportunity | Working alongside AWS Professional Services to Get to Migration at Scale for 3M Company

The migration at scale moved at significant speed. At one point, the team moved 500 applications in around 12 hours. Perhaps even more impressively, 3M's largest and most critical workload—its enterprise resource planning solution, which included hundreds of terabytes of data and hundreds of applications—was cut over in under 20 hours. That solution was migrated to SAP on AWS, which offers proven approaches backed by expert experience supporting SAP customers in the cloud on AWS. "The speed and consistency in delivering our workloads to the cloud was truly a benefit of 3M working alongside AWS in our migration at scale," says Hammer. "When we looked at the challenge that was presented to us—30 months or fewer to migrate nearly all our enterprise workloads from our aging data center to the cloud—the combined effort between 3M, AWS Professional Services, and other AWS engineering teams made that possible. We were able to hit our milestones and migrate our workloads; we reduced risks and, in many cases, introduced better capabilities using AWS, which provided the scalability, flexibility, and resiliency that we didn't have in the data center."

3M is a global manufacturing company, producing products from adhesives to medical supplies to industrial abrasives, all with the mission to use science to improve lives and solve tough customer challenges. With corporate operations in 70 countries and sales in over 200, 3M needed greater scalability than was available using its on-premises data centers.
There were long lead times for procuring and deploying hardware, making it difficult for 3M to meet the demands of existing workloads and slowing down new projects. 3M required greater stability and sustainability, neither of which the aging data center could provide.

To perform the migration, 3M used tools such as AWS Application Migration Service, which minimizes time-intensive, error-prone manual processes by automating the conversion of source servers to run natively on AWS. AWS Application Migration Service also simplifies application modernization with built-in and custom optimization options. 3M also used AWS DataSync, a secure, online service that automates and accelerates moving data between on-premises and AWS storage services. Using these tools, 3M could replicate its workloads from on premises to AWS with minimal changes. Some workloads required more creative, flexible workarounds, and using AWS tools, 3M could address those challenges as they arose. "We were able to maintain the pace that we needed even with those diverse workloads across many different systems," says Hammer. After each wave of the migration, the company also took time to thoroughly and thoughtfully evaluate how the migration was going. "We captured data in each wave, and that data would help remediate challenges in subsequent migrations," says Hammer. "That process was helpful for us to mitigate risk and improve the delivery."

Global manufacturer 3M Company (3M) needed a technology solution more flexible and scalable than its data centers. Not only were the data centers aging, but it was difficult to obtain new hardware when 3M needed to increase its capacity quickly. 3M began looking for a cloud-hosting solution to run its applications, including 11 different enterprise resource planning environments.
3M Enterprise IT selected Amazon Web Services (AWS) as its preferred cloud services provider and used AWS tools and expertise to migrate thousands of servers in 24 months. Now on AWS, 3M has increased its scalability and resiliency, and it has begun using automation to streamline processes such as server deployment and rightsizing.

Now that 3M has completed its migration at scale, the company is delivering new applications with a cloud-first, serverless focus. 3M is planning to move its databases into AWS-native database services, such as Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. 3M is automating server builds in the cloud using the AWS interface. Now, users within 3M can build and deploy resources on AWS in minutes, compared to weeks or even months on premises. 3M is also using automation to correctly size compute instances for workloads and to schedule compute only when needed. "On AWS, we no longer need to run many of our systems 24 hours a day, like we used to do in our data center," says Hammer. "That's resulted in millions of dollars in compute savings from what we initially migrated to the cloud." 3M is also optimizing its storage and backups, saving hundreds of thousands of dollars in its storage rightsizing efforts alone.

3M kicked off its 3M Cloud Transformation Program in 2020 to complete a migration at scale to AWS. "The promise of the cloud—and what we achieved after we migrated to AWS—was the ability to flexibly scale and deploy with a very short lead time," says Kyle Hammer, director of cloud transformation at 3M. To complete its migration at scale, 3M began working alongside AWS Professional Services, a global team of experts that can help organizations realize desired business outcomes using AWS, to plan a migration. "Working alongside AWS Professional Services went very well," says Hammer.
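The kind of automation described above (scheduling compute only when needed instead of running systems 24 hours a day) can be sketched as a tag-driven policy. The `Schedule` tag format and `plan_actions` helper below are hypothetical illustrations, not 3M's implementation.

```python
from datetime import datetime, timezone

# Hypothetical tag scheme: each instance carries a "Schedule" tag whose
# value is "HH-HH" (start/stop hour, UTC) or "always".
def should_run(schedule_tag: str, now: datetime) -> bool:
    if schedule_tag == "always":
        return True
    start, stop = (int(h) for h in schedule_tag.split("-"))
    return start <= now.hour < stop

def plan_actions(instances, now):
    """Given [(instance_id, schedule_tag, state), ...], return
    (ids_to_start, ids_to_stop) for this point in time."""
    to_start = [i for i, sched, state in instances
                if state == "stopped" and should_run(sched, now)]
    to_stop = [i for i, sched, state in instances
               if state == "running" and not should_run(sched, now)]
    return to_start, to_stop

if __name__ == "__main__":
    now = datetime(2023, 5, 1, 20, 0, tzinfo=timezone.utc)  # 20:00 UTC
    fleet = [("i-dev1", "8-18", "running"), ("i-prod1", "always", "running")]
    print(plan_actions(fleet, now))  # ([], ['i-dev1'])
```

In a real scheduler, the returned IDs would feed boto3's `ec2.start_instances` / `ec2.stop_instances` calls, invoked on a timer such as a scheduled Amazon EventBridge rule.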
"This migration would not have been successful in the time that we had allotted without the strong collaboration from AWS and AWS Professional Services."

AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises and AWS storage services.

AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS.

"3M is driving to increase our presence with digital products and enterprise. We're continuing to develop products that are supporting and solving challenges for our customers, and those will be developed in the cloud on AWS," says Hammer.

The 3M Cloud Transformation Program began with 8 months of designing and planning, followed by 24 months of migration at scale. 3M completed the transformation program with minimal downtime in 24 months across 51 waves, delivering 2,200 existing enterprise applications to AWS in addition to hundreds of other new instances and applications that were in development in that time frame. "We worked alongside AWS Professional Services to develop a solid plan that had the appropriate governance and controls in place so that we could review, flex, build, and scale to meet the migration needs," says Hammer. "Through that methodology, we could adjust the technical processes and react quickly to keep the program on track and continue to deliver our migration at scale." The end state of the migration included over 6,200 instances on Amazon Elastic Compute Cloud (Amazon EC2)—a service that provides secure and resizable compute capacity for virtually any workload—and petabytes of data migrated to other AWS services.
Accelerating Time to Market Using AWS and AWS Partner AccelByte _ Omeda Studios Case Study _ AWS.txt
Omeda Studios was founded in 2020 with the mission to build community-driven games. Omeda's founders began the Predecessor project in 2018, seeking to rebuild a defunct multiplayer online battle arena game they had enjoyed and make it available for PC and console. The studio had built a backend but found the architecture was not designed to scale with the expected numbers of players. The company knew it would need another solution. "We needed a reliable, resilient, and scalable backend that would handle hundreds of thousands of players," says Miles.

Outcome | Launching Predecessor for PC and Console

In addition to AccelByte offering the services and features that the studio needed, Omeda also received great customer support from AccelByte. "The ease of integration with AccelByte was much simpler than anything else we tried," says Miles. "Instead of struggling to integrate with an unfamiliar backend, the AccelByte team implemented it for us." In April 2022, the studio ran a playtest—the third playtest for the game, and the first using AccelByte's backend. Over 68,000 players logged in to play the game during the test weekend, playing 11 million total minutes. Omeda received overwhelmingly positive feedback from the test on social media, including positive feedback about the latency of the game. "There was no downtime for the infrastructure during the playtest," says Steven Meilleur, founder and chief technology officer at Omeda. "It went off without a hitch, and we were able to accommodate all the players that wanted to gain access.
It was impressive to see how AccelByte's solutions on AWS held up with that kind of load."

Opportunity | Building a Reliable Backend for Predecessor

Omeda researched the options and found AccelByte, which offered game solutions that fit most closely with the experience Omeda wanted to offer. Using AWS, AccelByte provides account services; cloud game storage to track and save player progression and configurations; social services for players to make friends and establish groups; dedicated server fleet management services; monetization services; and tools such as stats, leaderboards, and achievements to boost player engagement. AccelByte has been an AWS Partner since 2019. "We wanted to serve our customers better by investing in running our technology on AWS as efficiently and reliably as possible," says Train Chiou, vice president of customer success at AccelByte. "Our goal is to help our clients get to market quicker and not have to worry about reinventing the wheel. You don't have to spend the first year of creating your game investing in technologies that have already been well established, and you can focus on making the game better." Omeda began working alongside AccelByte in August 2021 to integrate the game with AccelByte's backend, which helped the studio accelerate the launch of Predecessor by 4–6 months. The studio also saves time by using managed services. For persistent storage, the game backend services use Amazon DocumentDB (with MongoDB compatibility), a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads, and Amazon Relational Database Service (Amazon RDS) for PostgreSQL, a managed service that makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. By using fully managed services, Omeda can focus its time on creating a great player experience.
"Game studios take a long time to grow, so it's pivotal for us to use resources where they are most needed: in developing the game," says Miles. "Using AWS, we can spend more time on developing game features." Omeda plans to release Predecessor by the end of 2022. "It's a very short time scale for a game in general, let alone a game that's going to be online," says Miles. "Using AWS and AccelByte and having the cooperation from their teams facilitated our meeting those aggressive deadlines." The studio is growing quickly, doubling its employee base in the 2 years since it was founded. After the PC release, the studio will also work on releasing the game for consoles.

Amazon RDS for PostgreSQL makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud. With Amazon RDS, you can deploy scalable PostgreSQL deployments in minutes with cost-efficient and resizable hardware capacity.

Amazon DocumentDB is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Gaming company Omeda Studios accelerated the launch of its first game, Predecessor, by 4–6 months using AWS Partner AccelByte's game backend services built on AWS.

"We've succeeded in rebuilding most of what we set out to build," says Meilleur.
"AWS has delivered what we needed in a time when we really needed it."

Omeda turned to Amazon Web Services (AWS) and AccelByte, an AWS Partner and game technology company that provides game backend as a service. Using AccelByte services built on AWS, Omeda accelerated the time to market for Predecessor and improved the reliability and elasticity of the game. "Our aim is to release the game to players as soon as we can, and AccelByte helped us with this," says Tom Miles, vice president of engineering at Omeda.

Using AccelByte's services on AWS, Omeda can scale the backend of its game to meet demand for hundreds of thousands of players. Compute for the game runs on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AccelByte has deployed its services on AWS to meet Omeda's load and usage requirements, using different sized disk queues and deployment methodologies to accommodate Omeda's target player concurrency and setting up the architecture to automatically scale up or down. Additionally, because AWS offers high service-level agreements, the reliability and uptime of the game service are high, with AccelByte targeting 99.9 percent uptime for its clients. "High uptime is key for a good player experience, and that's one of the things we trust AWS to deliver," says Miles. "You can make the best game in the world, but if players can't play it because it's down, it doesn't even matter."

Founded in 2020, Omeda Studios is a London-based game studio that builds community-driven games. Its first game, Predecessor, is a multiplayer online battle arena game launching in 2022.
Solution | Accelerating Production Using AccelByte and AWS

Omeda Studios (Omeda) needed a scalable, reliable backend to bring its game, Predecessor, to market quickly and support hundreds of thousands of players. With 50,000 fans in the game's Discord server and 140,000 players signed up to playtest the game, Predecessor is Omeda's first game, and the studio wanted to concentrate its small team on making the best player experience possible without focusing all its energy on building the game backend.
Achieving Burstable Scalability and Consistent Uptime Using AWS Lambda with TiVo _ Case Study _ AWS.txt
Learn how TiVo, in the media and entertainment industry, achieved burstable scalability and consistent uptime of streaming services using AWS Lambda and Amazon API Gateway.

Outcome | Improving Innovation Using Serverless Solutions

TiVo plans to continue migrating the rest of its APIs to the cloud using AWS and is looking for ways to innovate further. With more investment in AWS solutions, the company has improved integration and connectivity. It benefits from managed services, like data sharing and data migration, because it is not egressing data. "We get a lot of benefits from using AWS at a very good pricing model. It is enticing to continue migrating to AWS," says Devitt-Carolan.

By using AWS managed and serverless solutions, TiVo has a better understanding of cost limits and can use this to inform its architecture decisions and innovation. "Deploying the tech stack and architecture is cheap and simple, so that's a clear benefit for us," says Devitt-Carolan. "Because of the pricing tiers of some of the managed services that we're using and the pay-as-you-go pricing model, it costs almost nothing to innovate." Pairing low costs for early development testing with an understanding of cost and usage patterns fits TiVo's incubation process for innovation. Building on managed services costs the company only dollars per day, at most.
Opportunity | Using Amazon API Gateway to Improve Scalability for TiVo

TiVo makes it easy for people to find, watch, and enjoy what they love in one integrated experience, driving loyalty and engagement. In 2017 TiVo began developing microservices for better scalability and time to market, but the continued investment in its infrastructure impeded the desired benefits. "We have a lot of technology that's interconnected, with dependencies across our services, data stores, and deployment models," says Taram Devitt-Carolan, vice president of engineering at Xperi.

Solution | Modernizing Hundreds of APIs Using AWS Lambda

Adding new devices and accounts to TiVo's solution, managing content and entitlement, and managing the arrival of guide and programming data are all powered by hundreds of APIs that interface with those datasets. Modernizing these APIs to improve scalability and connectivity was important to the company. TiVo interacts with its clients through Amazon API Gateway. "Our use of Amazon API Gateway is tightly coupled with our authentication and authorization strategy," says Devitt-Carolan. Using Amazon API Gateway, TiVo drives connectivity and forwards API calls to its microservices, legacy APIs, and serverless functions built on AWS Lambda, a serverless, event-driven compute service that supports running code for virtually any type of application or backend service without provisioning or managing servers. All data processing from APIs is run at scale using AWS Lambda.
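The pattern described above (Amazon API Gateway forwarding a request to an AWS Lambda function) can be sketched with a minimal handler using the standard proxy-integration event shape. The `/devices/{deviceId}` route and the payload are invented for illustration, not TiVo's actual API.

```python
import json

# Minimal AWS Lambda handler behind an API Gateway proxy-integration
# route such as GET /devices/{deviceId}. Route and fields are illustrative.
def handler(event, context):
    device_id = (event.get("pathParameters") or {}).get("deviceId")
    if not device_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "deviceId required"})}
    # In production this lookup would hit a data store; stubbed here.
    device = {"deviceId": device_id, "status": "active"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(device),
    }

if __name__ == "__main__":
    event = {"pathParameters": {"deviceId": "dvr-123"}}
    print(handler(event, None)["statusCode"])  # 200
```

API Gateway maps the HTTP request into the `event` dict and turns the returned `statusCode`/`headers`/`body` into the HTTP response, which is what lets each such function stay small and scale independently.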
中文 (简体) The interconnectedness of services has performance cost benefits for TiVo. “Our goal is to treat APIs as a commodity,” says Devitt-Carolan. “If we need to call an API and load a particular piece of data, it costs only 30 ms at load, whether there is a concurrency of 1 or a concurrency of 1,000, which is excellent.” Overview Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. To run its microservices, TiVo uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers. When the company develops a microservice, it runs on an Amazon EKS cluster that has been assimilated into the company’s modernized tech stack to be more compatible with its use cases. TiVo similarly uses Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it simple to ingest and process streaming data in near real time with fully managed Apache Kafka, with a more distributed strategy to fit the company’s needs. “Using Amazon MSK and our infrastructure as code, we can make smaller clusters to support sets of APIs that are related to specific data,” says Devitt-Carolan. Taram Devitt-Carolan Vice President of Engineering, Xperi Türkçe hosting cost with pay-as-you-go pricing model English Amazon Elastic Kubernetes Service (Amazon EKS) automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks. Amazon API Gateway TiVo creates DVR technology and provides television, on-demand, and streaming services to customers. The company has a solution designed to provide businesses with audience analytics and drive viewership. 
TiVo Brands LLC (TiVo), a wholly owned subsidiary of entertainment technology company Xperi Inc., is migrating hundreds of APIs to the cloud to achieve burstable scalability, expand growth globally, and achieve consistent uptime of its video services. Instead of investing in an on-premises solution that required an ongoing investment in its network infrastructure, TiVo engineering decided to invest in serverless technologies and managed solutions to power core features and critical use cases. TiVo chose Amazon Web Services (AWS) to modernize its on-premises solution by going serverless. In doing so, TiVo improved global scalability, reduced its technical debt, and facilitated innovation and engineering efforts without experiencing budget strain. Deutsch Amazon EKS TiVo uses AWS Lambda functions across a variety of use cases, both externally and internally. These range from calling services within its system to reading or writing operations. Alongside AWS Lambda, the company uses Amazon DynamoDB, a fast, flexible NoSQL database service for single-digit millisecond performance at virtually any scale. TiVo uses AWS Lambda and Amazon DynamoDB to make its APIs lightweight and to query and respond to clients in client use cases. “We have a good, immediate, and burstable scale strategy using Amazon DynamoDB and AWS Lambda, which empowers us to simplify our multiregion approach,” says Devitt-Carolan. By using these serverless services in tandem and modernizing its tech stack, the company improves scalability from a global perspective and can support hundreds of millions of calls per day. Tiếng Việt About TiVo Italiano ไทย Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. 
Learn more » Amazon DynamoDB Higher Learn more » scalability to support streaming globally AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use. AWS Lambda Português After carefully reviewing the factors slowing transformation, TiVo engineering selected AWS to host all new services so that the teams could focus on bringing value to the customer with the ease and elasticity of using serverless technologies. “Adopting more AWS-managed services facilitated better connectivity and synchronization across the tech stack,” says Devitt-Carolan. One of the primary managed services TiVo uses is Amazon API Gateway, which it uses to create, maintain, and secure APIs at virtually any scale. By modernizing its tech stack, TiVo achieves a separation of concerns and predictability at scale.
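TiVo's pattern of fronting lightweight Lambda functions with API Gateway and backing them with DynamoDB can be sketched as a handler in the shape API Gateway's Lambda proxy integration expects. This is a minimal illustration, not TiVo's code: the table contents and `device_id` parameter are hypothetical, and the DynamoDB query is stubbed with an in-memory dict so the sketch runs anywhere.

```python
import json

# Hypothetical stand-in for a DynamoDB table; in a real deployment this
# lookup would be a boto3 GetItem call keyed on the device ID.
FAKE_TABLE = {
    "device-123": {"device_id": "device-123", "entitlements": ["dvr", "streaming"]},
}

def get_device_entitlements(device_id):
    """Stub for the data-store query; returns None when the key is absent."""
    return FAKE_TABLE.get(device_id)

def lambda_handler(event, context):
    """Shape of a Lambda function behind an API Gateway proxy integration:
    read a path parameter, query the backing store, return an HTTP-style dict."""
    device_id = (event.get("pathParameters") or {}).get("device_id")
    item = get_device_entitlements(device_id)
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

With the proxy integration, API Gateway delivers the HTTP request as the `event` dict and expects exactly this `statusCode`/`body` response shape, which is what keeps such APIs lightweight.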
Acrobits Uses Amazon Chime SDK to Easily Create Video Conferencing Application Boosting Collaboration for Global Users _ Acrobits Case Study _ AWS.txt
Acrobits leverages Amazon Chime SDK to streamline application development, scale to support thousands of new customers, and increase communication and collaboration.

2023

- Scales to support thousands of new customers
- Simplifies application development
- Improves collaboration in the hybrid workplace

Overview

Acrobits provides white-label communication and collaboration applications to customers worldwide. To simplify development, the company chose to build on Amazon Web Services (AWS) and used the Amazon Chime SDK to create LinkUp, a new video collaboration platform. By relying on Amazon Chime SDK, Acrobits was able to develop and launch LinkUp in months, offering on-demand scale to support thousands of new customers while improving collaboration for global users.

Opportunity | Responding to Customer Demands for Better Collaboration

Acrobits is a rapidly growing provider of white-label communication and collaboration solutions delivered through a low-code platform. Owned by Sinch, which provides software development kits (SDKs) and application programming interfaces (APIs) for developers, Acrobits helps companies to create customizable and brandable enterprise-grade collaboration solutions in a variety of industries. "We serve 500 businesses in 74 countries and manage around 140 million endpoints," says Rafael Torreblanca, managing director at Acrobits.

Recently, Acrobits needed to respond to customers who were asking for a new video conferencing tool. "The pandemic really initiated that, because many of our customers were caught by surprise and suddenly had people working from home. They needed to give their employees a remote solution for collaborating over video," says Torreblanca. "Building a video collaboration solution from the ground up wasn't something we were ready for or had the time and available resources to do on our own."

The company also needed the right technology to scale as customers adopted the solution. "To meet demand, we knew we had to scale from 10,000 to 100,000 to even 1 million endpoints based on what we were forecasting," says Torreblanca. "The cloud was the only way to make that possible."

Because Acrobits' parent company Sinch, an AWS Partner, runs the majority of its business on AWS, Acrobits sought an AWS-based development solution. That search led the company to Amazon Chime SDK, a set of developer tools that helps builders easily integrate real-time voice, video, and messaging into applications. "Amazon Chime SDK is scalable and very robust," says Torreblanca. "It is also purely an SDK solution without a defined UI, allowing us to develop a brandable user interface for our customers while also supporting our core white-label business."

Solution | Building a New Video Conferencing Solution with Amazon Chime SDK

Acrobits worked alongside the Amazon Chime SDK team to create LinkUp, a new video conferencing solution that features audio, video, screen sharing, and chat functionality for desktop and mobile environments. The application uses AWS services, including Amazon Elastic Compute Cloud (Amazon EC2) instances for compute. "The Amazon Chime SDK team was a great help. Each time we had an issue, they responded right away," adds Torreblanca.

Because Amazon Chime SDK simplifies feature integration, Acrobits streamlined the development and management of LinkUp. "Amazon Chime SDK gives us a lot of flexibility in terms of tools we can use, and it has native interfaces for iOS and Android. This really simplified development," says Torreblanca. "It was easy for us to integrate video, audio, chat, and noise suppression into the application."

LinkUp also provides user authentication, moderator controls, call recording, and calendar integration, as well as noise suppression through Amazon Voice Focus. Additionally, Acrobits developers used WebRTC Media, integrated into Amazon Chime SDK, for high-quality audio and video on WebRTC-enabled browsers and mobile systems. "WebRTC also uses encryption for the entire media element, which gave us confidence in the overall security of the environment," says Torreblanca.

By using Amazon Chime SDK and relying on additional AWS services, Acrobits can easily scale LinkUp to meet the video conferencing needs of thousands of customers without limitations. "CPU and memory requirements are intensive for any application, and video conferencing is even more so," explains Torreblanca. "The moment we need to scale as the application grows, we must ensure we have the power to add thousands of new users immediately. AWS helps us do that. Our developers don't need to worry about managing compute capacity and servers as the platform continues expanding."

Outcome | Easing Development and Creating a Simple, Unified Application Experience

With LinkUp, Acrobits customers across the globe have improved collaboration via desktop or mobile application. "Our customers simply open the application and press a button for comprehensive video and audio conferencing and chat capabilities, helping them communicate and collaborate more easily," says Torreblanca. "Also, with features such as noise suppression in Amazon Chime SDK, we can drastically improve communication in call centers or even in noisy home environments."

Video conferencing may help to increase businesses' productivity while working from home, but with the world reopening, a new trend has emerged: video conferencing fatigue, a trend that's largely driven by complex UIs. Acrobits designed LinkUp to offer a seamless experience for customers. "LinkUp is not a complicated tool. It's a unified video collaboration platform with simple ways to create and start a meeting and invite people to attend," says Torreblanca. "Using LinkUp, it's very easy for people to set up meetings, connect their calendars, present, and record calls from within the UI while adding a powerful collaboration component to our softphone apps."

"Our customers have high expectations, and there's always a risk when we put out a new solution, but we were confident we could deliver because of the support and responsiveness we got from AWS," says Torreblanca.

Acrobits is also considering integrating Amazon Chime SDK features such as speech-to-text and machine learning (ML) capabilities to analyze customer sentiment. "I can see us using machine learning in our call centers to track customers' moods during calls," Torreblanca says. "Amazon Chime SDK makes it easy for us to add new features that differentiate our application, and we plan to do that to make our customers even more comfortable using LinkUp."

About Acrobits

Acrobits is a technology leader in mobile and desktop communication and collaboration solutions, providing white-label solutions to customers worldwide. The company's solutions enable HD voice, video, and multi-messaging mobile and desktop products for system integrators, content service providers, and telecom companies across the communications industry.

AWS Services Used

With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Actuate AI Case study.txt
Actuate AI Powers Its Real-Time Threat-Detection Security Tech Using Amazon EC2

2020

Computer vision startup Actuate AI had a novel idea for identifying threats through security footage. Instead of focusing on facial recognition, which can be expensive, biased, and unreliable, the company set out to use artificial intelligence (AI) object recognition to detect weapons using security camera footage. The result of its efforts was a system that identifies weapons and intruders in real time and notifies stakeholders of immediate threats. However, Actuate AI didn't want to impose expensive hardware costs on its customers' security systems, so it knew it would need substantial cloud compute power for offsite inferencing and for scaling as the company grew.

Actuate AI found an effective solution in Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, and a number of other Amazon Web Services (AWS) Cloud services. This solution enabled Actuate AI to offer an affordable, high-level security layer to existing systems for schools, businesses, and the US military. "We run on the cloud using AWS," says Actuate AI cofounder and chief technology officer Ben Ziomek, "which lets us offer solutions that are more flexible, faster to install, and less expensive than those from almost anyone else on the market."

Benefits of AWS
- Reduced accelerated computing cost by 66%
- Detects firearms and intruders with greater than 99% accuracy in less than 0.5 seconds
- Sends push notifications of suspicious activity in under a second
- Added a security layer with minimal bandwidth usage, often lower than 50 kilobits per second per camera
- Enabled a fully software-based AI detection system
- Facilitated 100% cloud-based data production

Overcoming the Shortcomings of Facial Recognition

When Ziomek and Actuate AI cofounder and CEO Sonny Tai decided to develop a computer vision AI security system, they knew that improving on the status quo meant changing some of the basics of traditional AI security solutions. Instead of relying on facial recognition, Actuate AI would use object recognition as the backbone of its inference engine. And rather than the expensive, on-premises hardware typically built into other AI security suites, the company would use accelerated cloud computing.

"Most security decision makers are concerned with being able to identify where people are in a building at any given time, being able to understand anomalous behaviors, and trying to identify violent situations before they happen," says Ziomek. "Unless you know exactly the people who are going to be doing these acts, facial recognition doesn't help. By focusing on object recognition, we can give our clients all of the security information they need in an instantaneous, easy-to-digest format that respects privacy."

By focusing the AI inference engine on weapons and intruders rather than faces, Actuate AI is able to provide its clients actionable information with fewer false positives and without the racial bias inherent in many facial recognition–based AI models. Focusing on objects also enables Actuate AI to apply its technology to other relevant security and compliance tasks, including mask compliance, social distancing detection, intruder detection, people counting, and pedestrian traffic analysis.

Historically, a lot of building-monitoring security and defense tasks required expensive, specialized hardware, but Actuate AI is taking a software approach and moving those tasks to the cloud. "We can turn any camera into a smart camera and basically displace a lot of sensor suites by using off-the-shelf cameras that can gather almost-as-good data for a far cheaper price," says Ziomek. "We're able to do this with minimal bandwidth usage, often lower than 50 kilobits per second per camera."

Getting Powerful, Cost-Effective Compute Using Amazon EC2

Actuate AI runs all actions in the AWS Cloud, using everything from Amazon EC2 P3 Instances powered by NVIDIA V100 Tensor Core GPUs to Amazon EC2 G4 Instances powered by NVIDIA T4 Tensor Core GPUs, as well as AWS Lambda, Amazon API Gateway, and Amazon DynamoDB serverless tools. Additionally, the company stores security images in Amazon Simple Storage Service (Amazon S3), which offers industry-leading scalability, data availability, security, and performance. The cloud architecture enables the company to avoid the cost, time, and liability involved in installing and maintaining expensive onsite servers and to pass the savings on to its clients. "With AI, generally you need accelerated processing, or graphics processing units [GPUs], and those get expensive fast," says Ziomek. "We save our customers money while still making everything work without having to do anything onsite, and that's enabled by the fact that we're a cloud-first solution."

Actuate AI's inference engine relies on what may be the world's largest database of labeled security camera footage: a library of more than 500,000 images that helps the company's AI scour live video to detect very small objects in highly complex scenes with greater than 99 percent accuracy and an industry-leading false positive rate.

Much like a graphically demanding video game, image-reliant AI inferencing requires access to powerful GPUs that can quickly analyze high-resolution images and video concurrently. Actuate AI's models only run when motion is detected, so the number of camera feeds analyzed by the AI increases as motion is detected by more cameras connected to Actuate AI's security system.

Actuate AI utilizes an in-house AI system that combines best practices from many industry-leading convolutional neural network–based AI models. Many of the system's core functions, however, operate using AWS. The AI uses the processing power of an Amazon EC2 C5 Instance to monitor cameras for movement at all times. In doing so, the AI identifies relevant objects in less than half a second with the help of Amazon EC2 G4 Instances.

Amazon EC2 G4 Instances give Actuate AI a highly responsive, scalable solution that delivers enough power to run image processing and AI inference for eight jobs concurrently, but only when it's needed. This flexibility enables Actuate AI to scale as necessary while reducing its accelerated computing costs by as much as 66 percent, giving it a huge competitive advantage over AI security firms using on-premises GPUs. "Even a really active camera is going to only have motion on it maybe 40 percent of the time during the day and less than 1 percent of the time at night," says Ziomek. "On AWS, I only have to pay for the time I'm actually using it, which makes the cloud extremely beneficial to our business model. We have never had an issue with GPU instance availability on AWS."

Once the AI has decided that an event is a threat, the metadata is stored in Amazon DynamoDB, a key-value and document database that delivers single-digit millisecond performance at any scale. Actuate AI stores the images themselves in Amazon S3. Then, depending on the client's preferences, Actuate AI uses Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale, to send the client push notifications about the threat. These notifications can be sent to monitoring stations in under a second, dramatically increasing the client's ability to respond to threats.

Meeting the Future on AWS

Like many startups, Actuate AI faces the challenge of scale, and it has found a suitable growth environment in the AWS Cloud. "For most applications, you just need raw GPU power," says Ziomek. "Having access to that has enabled us to cut our costs significantly and win some very large contracts that would have been cost prohibitive had we been running on any other type of virtual machines. We've found that the level of granularity we get in monitoring and management on AWS has enabled us to maintain the same level of quality while we scale dramatically."

The potential applications of its technology are vast. Actuate AI is already working with some customers to track ingress and direct employees to temperature-monitoring stations in the wake of the COVID-19 pandemic, as well as with the US military to help with weapon cataloguing and tracking. Actuate AI currently uses CUDA by NVIDIA, a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs, and intends to use NVIDIA A100 Tensor Core GPU–based Amazon EC2 instances to further test the limits of its AI.

About Actuate AI

Actuate AI is a software-based, computer vision AI startup that turns any security camera into a smart camera that monitors threats in real time, accelerating the response times of security firms, schools, corporations, and the US military.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon EC2 C5 instances deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads.

Amazon EC2 G4 instances deliver the industry's most cost-effective and versatile GPU instance for deploying machine learning models in production and graphics-intensive applications.
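The cost model described above rests on one gating step: a cheap CPU-side motion check decides whether a frame ever reaches the GPU model. A minimal pure-Python sketch of that gating logic, assuming grayscale frames as lists of pixel intensities and a hypothetical motion threshold (Actuate AI's actual pipeline and thresholds are not public):

```python
def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between consecutive grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def run_inference(frame):
    """Stand-in for the GPU object-detection model (the G4 side of the pipeline)."""
    return {"threat": max(frame) > 200}  # hypothetical detection rule

def process_stream(frames, threshold=10.0):
    """Gate expensive inference on cheap motion detection (the C5-style monitor).
    Returns how many frames actually reached the model."""
    inferences = 0
    prev = frames[0]
    for frame in frames[1:]:
        if motion_score(prev, frame) >= threshold:
            run_inference(frame)
            inferences += 1
        prev = frame
    return inferences
```

Because a camera with motion only 40 percent of the day skips the inference branch most of the time, GPU capacity (and pay-per-use cost) scales with detected motion rather than with camera count.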
ADP Developed an Innovative and Secure Digital Wallet in a Few Months Using AWS Services _ Case Study _ AWS.txt
ADP Developed an Innovative and Secure Digital Wallet in a Few Months Using AWS Services

2023

ADP built an innovative digital wallet in a few months alongside AWS and Nuvalence to make financial wellness tools more accessible to US workers.

- Increased development speed, creating a digital wallet in a few months
- Supported $1 billion of transactions in customer savings envelopes in 7 months
- Fortifies security using tokens and oversight
- Provides eligible members valuable flexibility with Earned Wage Access feature

Overview

ADP, a global leader in human capital management solutions, wanted to provide workers across North America with unprecedented flexibility through a modern digital wallet. ADP's vision was to use its robust workforce data and many years of experience to create a product that adapted to the modern way that people manage their money. To make that vision a reality, ADP needed to build a solution that supported high security and privacy standards, facilitated going to market quickly, and offered technology for innovation. ADP worked alongside Amazon Web Services (AWS) and Nuvalence, an AWS Partner, to use modern, cloud-native development practices to build the solution for its digital wallet.

Opportunity | Selecting AWS and Nuvalence to Collaborate on ADP's Digital Wallet

Founded in 1949, ADP serves one million customers in 140 countries with its human capital management software. As the source of pay for one in six Americans, ADP saw an opportunity to help enhance the employee experience through financial wellness offerings. The company wanted to move quickly to provide a socially responsible option for its existing customers and lead the way with a modern industry solution. The company's digital wallet includes on-demand access to eligible workers' earned wages before payday, support for online shopping, and many other cutting-edge features.

ADP had been using AWS services since 2015 and had worked with Nuvalence on other business initiatives since 2019, so it decided to enlist both companies as it worked on this strategic initiative. "The AWS team has been with us through thick and thin and is always responsive. By using AWS, we have incorporated best practices while building resilient systems that can handle our global scale," says Lohit Sarma, senior vice president of product development at ADP. "Nuvalence has been a strategic partner of ours, delivering high-quality work. Its expertise in building large-scale digital solutions was an ideal fit for our needs, and we brought the firm in to provide high-quality performance."

Solution | Launching Multiple Features Quickly Using Serverless Technology from AWS Lambda

The digital wallet development started in early 2022. Teams from ADP, Nuvalence, and AWS first aligned on the architecture and security requirements. AWS then made service recommendations that were based on the use case and the existing architecture. Nuvalence paired with ADP engineers to design and build the solution, maximizing the effectiveness of features from AWS services and providing the glue to connect to ADP's infrastructure and existing set of services. Although similar projects often take several years to complete, ADP released the first version of its digital wallet in a few months.

ADP met its goal to release the digital wallet quickly using AWS Lambda, a serverless, event-driven compute service that customers use to run code without thinking about servers or clusters. The digital wallet uses AWS Lambda to create a variety of different functions, minimizing the compute footprint of the service. "The team used AWS Lambda to provide an efficient and scalable approach to handling authentication, authorization, and other key functions for the wallet," says Abe Sultan, partner at Nuvalence and executive sponsor of the Nuvalence team working with ADP. Using serverless technology, ADP could both go to market quickly and leave room to scale for future growth as the needs of the solution evolve.

Because ADP manages employee and financial services, the company needed the solution to meet rigorous compliance-quality standards, including the Payment Card Industry Data Security Standard. To bolster the security of its digital wallet, ADP uses services like Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve virtually any amount of data from anywhere. Using Amazon S3, ADP can securely store flat text files involved in money movement. The solution also uses tokens for the card number to keep transactions secure. Because the payment credentials were loaded securely into the digital wallet, customers could use the digital card for purchases and make payments immediately without waiting for a physical card to arrive in the mail. "Data security and privacy are critical to everything we develop," says Lohit. "Using AWS services, we could uphold our company's existing standards while innovating on the implementation."

ADP needed flexibility and extensibility to offer a dynamic solution for a fast-moving market with many changing variables. ADP provides education for companies as they roll out the Earned Wage Access feature. With this support, companies can help eligible members make informed decisions while getting valuable access to earned wages when needed. "ADP takes great pride in being a company with high morals that is always there for its clients and their people," says Lohit. "Using AWS services, we can give people tools to manage their finances and give them access to funds when they potentially need them the most."

Outcome | Investing in the Digital Wallet for Future Growth Using AWS Services

With its digital wallet, ADP accomplished its mission of making financial wellness tools more accessible to US workers. The digital wallet is a safe and simple option through which employees without a traditional bank account can access their pay, giving them freedom in spending their wages. The Earned Wage Access feature gives eligible members access to their earned wages before payday, creating a viable alternative for customers who urgently need access to funds and eliminating the need to take out high-interest-rate loans.

ADP has seen a positive response to its digital wallet in the United States, processing nearly $1 billion of transactions in customer savings envelopes in the 7 months since launching the product. As of 2022, ADP supports approximately 1.7 million Wisely card members across the United States and plans to keep investing in its digital wallet while rolling out additional features using AWS services. "ADP pays one in six workers and moves close to $100 billion in payroll per day in the United States," says Lohit. "We have to be working 24/7 with high quality, resiliency, and reliability. We brought AWS and Nuvalence together because of these requirements."

About ADP

Human capital management company ADP serves one million customers in 140 countries. In the United States, ADP released its innovative digital wallet, which features tools to help card members with financial wellness.

ADP Digital Wallet Architecture Diagram

AWS Services Used

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
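The case study notes that Lambda handles authentication and authorization for the wallet. The details of ADP's implementation are not public, so the following is only an illustrative sketch of one common pattern: a Lambda-authorizer-style function that verifies an HMAC-signed token and returns an allow/deny policy. The token format, secret, and field names are all hypothetical.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical; real deployments pull this from a secrets store

def sign(payload: str) -> str:
    """HMAC-SHA256 signature over the token payload."""
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def authorize(event, context):
    """Minimal authorizer-style check: the token is 'payload.signature'.
    Returns an IAM-policy-shaped dict allowing or denying the API call."""
    token = event.get("authorizationToken", "")
    payload, _, signature = token.rpartition(".")
    allowed = bool(payload) and hmac.compare_digest(sign(payload), signature)
    return {
        "principalId": payload or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```

Because each such concern lives in its own small function, the compute footprint stays minimal and each piece scales independently, which is the property the quote above attributes to the Lambda approach.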
Adzuna doubles its email open rates using Amazon SES _ Adzuna Case Study _ AWS.txt
At first, Adzuna relied on standard Amazon SES features while staff focused on content and deliverability. In recent years, Adzuna has shifted to using dedicated IP addresses and tools like Amazon CloudWatch, a service that provides observability of users’ AWS resources and applications on AWS and on premises. Handles large volumes needs of a growing user base Français For a job search engine to differentiate itself in a crowded market, it must be able to match job seekers to relevant jobs more swiftly and reliably than its competitors. Adzuna, a United Kingdom–based job aggregator that serves 20 countries, aims to achieve that goal by using smart technology to match people to the right jobs and sending personalized emails to users. To handle this substantial task, Adzuna required an email service that was reliable, simple to use, and that could scale as the company grew. The company turned to Amazon Web Services (AWS) and found Amazon Simple Email Service (Amazon SES), a high-scale inbound and outbound cloud email service, to be the solution for its requirements. Using Amazon SES, Adzuna can efficiently send billions of emails to its users across the globe. To support its goal of sending personalized emails to users, Adzuna needed an easy-to-use email service that could handle increasingly large volumes of email as the company grew. Amazon SES proved to be a simple, scalable solution. First, it integrated seamlessly with Adzuna’s existing AWS infrastructure. Second, because Amazon SES could be used as a Simple Mail Transfer Protocol, the Adzuna developers were able to automate the entire process. The team never had to log on to the service or worry about its inner workings, which meant that it could focus its energy on more important tasks like making necessary edits and updates to emails. 
Adzuna Doubles Email Open Rates Using Amazon SES

Opportunity | Seeking Reliability, Scalability, and Cost Effectiveness for Large Volumes of Email

Adzuna launched in 2011 as a job search site based in the United Kingdom, and it now operates in 20 countries, including the United States, Singapore, Australia, and India. Users can search the website by job type and location and can sign up with their email address for job alerts. When users sign up, Adzuna sends an initial welcome email and then sends regular alerts when relevant jobs are posted to the site. With tens of millions of visitors every month, Adzuna sends around two billion personalized emails every year.

Because its users rely on the accuracy and timeliness of Adzuna's emailed job alerts, the company required an email service that was, above all, reliable. "It's important that there's no downtime and that there are no deliverability issues, or at least no server issues where emails just completely fail to send," says Bilal Ikram, email marketing manager at Adzuna.

Solution | Supporting Company Goals through Simplicity and Scalability

Adzuna has continued to benefit from the scalability of Amazon SES and its additional features. In 2022, the company expanded to four more countries, and it has used Amazon SES to meet the needs of its growing user base throughout the expansion.

About Adzuna
Adzuna is a smart, transparent job search engine used by tens of millions of visitors per month across 20 countries. It uses the power of technology to match people to better, more fulfilling jobs and keep the world working.

AWS Services Used
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
Amazon SES turned out to be the most reliable tool for the company's needs. The Adzuna team initially tested a few other email tools, but they weren't scalable to the degree the company needed. "We can simply create commands that constantly send out the emails connected to Amazon SES without us having to worry about volumes," Ikram says. Further, Adzuna set up Amazon SES to run across multiple AWS Regions, helping to manage the workload and providing a backup option for sending emails if needed. "If we were to have an outage, we would have a fallback, which makes the network more reliable," Ikram says.

"It would be impossible for us to send volumes of emails with dynamic content to the same extent without using Amazon SES," says Ikram. "It's very important that we automate that process and send out emails that are relevant to our users." He adds, "Using Amazon SES, I can focus more on improving the quality and content of the emails and our underlying metrics rather than having to worry about just sending the emails out on a daily basis. So that means we have more time to focus on the things that really matter: connecting our users to better, more fulfilling jobs."

Outcome | Relying on an Integrated Suite of Solutions

Overall, Adzuna has benefited from using multiple AWS services for different purposes while keeping everything under the same umbrella.

Amazon SES lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system.
Using the automation abilities of Amazon SES, the company has handled its burgeoning volume of email since it began using the service in 2011, almost from the company's start. Without these capabilities, Adzuna would be unable to perform a key service feature. Since Adzuna's migration to dedicated internet protocol addresses, the company has seen a significant improvement in email open rates, which have almost doubled. It has also seen improvements in click-through rates.
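Adzuna's pipeline is proprietary, but the SMTP-based automation the case study describes can be sketched in a few lines. The sketch below builds a personalized job-alert message and shows how it would be handed to the Amazon SES SMTP interface over TLS. The endpoint, sender address, and credentials are illustrative placeholders; real SES SMTP credentials are generated in the SES console, and the endpoint depends on your AWS Region.

```python
import smtplib
from email.mime.text import MIMEText

# Hypothetical values for illustration only -- substitute your own Region
# endpoint and an SES-verified sender identity.
SMTP_HOST = "email-smtp.eu-west-1.amazonaws.com"
SMTP_PORT = 587  # SES SMTP supports STARTTLS on port 587

def build_job_alert(recipient: str, jobs: list[str]) -> MIMEText:
    """Build a personalized job-alert email for one subscriber."""
    body = "New jobs matching your search:\n" + "\n".join(f"- {j}" for j in jobs)
    msg = MIMEText(body)
    msg["Subject"] = f"{len(jobs)} new jobs for you"
    msg["From"] = "alerts@example.com"  # must be an SES-verified identity
    msg["To"] = recipient
    return msg

def send_alert(msg: MIMEText, user: str, password: str) -> None:
    """Hand one message to the SES SMTP interface (not called in this sketch)."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()
        server.login(user, password)  # SES SMTP credentials, not IAM keys
        server.send_message(msg)

msg = build_job_alert("user@example.com", ["Data Engineer - London", "ML Engineer - Leeds"])
```

Because the message construction is separate from the transport, a batch job can loop over subscribers and reuse one SMTP connection, which is the kind of "create commands that constantly send out the emails" automation Ikram describes.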
AEON Case Study.txt
AEON Scales Card Processing System, Achieves 40% Market Growth Using AWS

Opportunity: Faster Cloud Migration and Modernization Using the AWS Migration Acceleration Program

The AWS Migration Acceleration Program (AWS MAP) is a comprehensive and proven cloud migration program based on AWS's experience migrating thousands of enterprise customers to the cloud.

Opportunity: A Streamlined, Scalable Card Processing Software System

AEON's next challenge was to ensure its card processing system was market ready and able to serve new territories in Europe and Africa. AEON is now able to comply easily with GDPR requirements using AWS Regions and Availability Zones. The company also set up its own data center close to the AWS EU (Frankfurt) Region data center to support personal identification number (PIN) encryption and decryption and to meet local privacy requirements in the region.

The company can now scale to meet traffic peaks within minutes. "During peak card usage times, we're seeing 100 card transactions per second, with a large number of people checking their accounts online," says John Abraham, CEO at AEON Payment Technologies. "Traffic surges can stifle our business. Thanks to Cloud Nomads and using AWS, we can scale easily and guarantee our customers a reliable service."
Based in Cyprus, AEON Payment Technologies wanted to move to the cloud to scale its card processing system for banking customers and to expand into new markets in Europe and Africa. It migrated in just 3 months using the AWS Migration Acceleration Program with the help of AWS Partner Cloud Nomads. With its infrastructure running on AWS, AEON has increased the number of credit and debit cards it handles by 40 percent over 2 years. The business has also saved 33 percent of planned IT expenditure and can scale to handle traffic peaks within minutes. Critically, it can easily comply with Visa's and Mastercard's regulations and local data laws, and it supports the Payment Card Industry Data Security Standard (PCI DSS) for card processing.

AEON turned to AWS Partner Cloud Nomads when it realized its on-premises system was hampering growth. Its existing infrastructure couldn't scale without significant investment in IT equipment. The main challenge was ensuring its banking clients could meet customer usage peaks at the end and beginning of each month, when employee wages are typically paid in.

The company completed its migration in just 3 months using AWS MAP, which helps businesses speed their cloud migration and modernization journey with an outcome-driven methodology. Using AWS MAP gave AEON assurance over the migration process, providing its IT team with confidence that the project would deliver the successful outcome it needed.

Outcome: Building a Growth-Ready Infrastructure to Support New Markets

AEON scales to handle credit card usage peaks and supports 40 percent growth. Using AWS, it can comply with data laws and security protocols to support market entry in Europe and Africa.
AEON began by migrating its card processing software and databases to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AEON also uses Amazon EC2 instances for Windows and Linux to support the card processing system's databases. Expansion meant the company needed support for PCI DSS compliance in new regions. Critically, it also meant that AEON had to comply with EU GDPR data privacy laws, and in some of its target markets it would need to keep sensitive data within country borders to meet local regulations.

AEON's systems on Amazon Web Services (AWS) are certified to meet the regulations of its payment associates, Visa and Mastercard, including compliance with those companies' card-issuing and transaction-acquisition regulations.

The AEON team has worked closely with AWS to create a scalable and reliable cloud-based system. "In our business, technology can hinder progress; now, the opposite is true for AEON," says Abraham. "Technology is aiding our growth. The fact that we handle traffic peaks without incident is a great achievement for both our IT team and AWS."

AEON has reduced its reliance on on-premises equipment and cut its planned infrastructure budget to one-third of its previous level using cloud services. "The sales cycle in the card processing industry is long," says Abraham. "It's also essential to have infrastructure in place so new customers have confidence that we can support them right away. Using AWS, we have the flexibility to serve new customers instantly in our new markets without having to invest in expensive IT equipment and have it sit idle."

AEON is now evaluating AWS Outposts, which businesses can use to run AWS infrastructure and services on premises for a truly consistent hybrid experience, to support PIN encryption and decryption in the future.
With its systems built on AWS, AEON can also comply with the Payment Card Industry Data Security Standard (PCI DSS) and the European Union (EU) General Data Protection Regulation (GDPR) for data privacy. The company has also cut IT expenditure to one-third of its previous budget and can now scale its system to handle traffic peaks within minutes.

Using AWS, AEON can handle the complex PCI DSS security protocols in the cloud for its card processing software. "We have to have multiple levels of security in place to meet industry regulations; otherwise, we would not be able to operate," says Abraham.

About AEON
Cyprus-based AEON Payment Technologies is a third-party card processing software provider that offers value-added services to support the payment processing needs of the commercial banking industry. This includes card issuing, transaction management, authorization, reconciliation, and infrastructure services.

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Access reliable, scalable infrastructure on demand and scale capacity within minutes, with an SLA commitment of 99.99% availability.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed AD on AWS.
Amazon EC2 running Microsoft Windows Server is a secure, reliable, and high-performance environment for deploying Windows-based applications and workloads.
"Because AWS is PCI DSS compliant, we could move to the cloud, easily meet these industry standards, and benefit from much faster card processing."

Solution: Delivering Full Compliance with Banking Protocols and Privacy Laws

Over the past 2 years, AEON has increased the number of credit and debit cards it handles by 40 percent. "Using AWS, we now support 11.5 million cards and 30,000 merchant card terminals," says Abraham. "We can also guarantee the 99.999 percent uptime we need so that our banking clients limit downtime and manage reputational risk."
ALTBalaji _ Amazon Web Services.txt
ALTBalaji is a subscription-based video-on-demand (SVOD) platform that produces original over-the-top (OTT) media content. To broadcast live streams of its Indian reality show Lock Upp, the company chose to build its live streaming infrastructure on Amazon Web Services (AWS). India-based ALTBalaji is parent company Balaji Telefilms' first foray into the digital entertainment space, offering fresh, original, exclusive stories tailored for Indian audiences across the world.

ALTBalaji launched its platform on the AWS Cloud, using Amazon CloudFront to securely deliver media content to millions of customers every day, Amazon Elastic Compute Cloud (Amazon EC2) instances to run applications, and Amazon Redshift as a data warehouse for analytics.

AWS Services Used
AWS Elemental MediaTailor is a channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content. The service then lets you monetize those channels, or other live streams, with personalized advertising.
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

"AWS Elemental MediaLive removed the complexity of developing and operating our live streaming infrastructure, allowing us to focus on providing a better user experience and producing unique, compelling content.
We're now exploring new ways to enhance our customers' experience, and voice search is just the next step in our journey of constant improvement," Sheikh concludes.

ALTBalaji Develops Live Streaming Capabilities and Delivers Reality Show in Real Time to Millions

Solution | Building Live Streaming Capabilities from Scratch

To broadcast live streams of Lock Upp, ALTBalaji built its live streaming infrastructure on AWS Elemental MediaLive, a solution that encodes and transcodes real-time video for broadcast and streaming delivery. Results from a proof of concept (POC) revealed that the company could easily add live streaming with advanced broadcasting capabilities to its platform and meet its challenging timeline. The team worked with its AWS Technical Account Manager (TAM) and a Subject Matter Expert (SME) to conduct an AWS Infrastructure Event Management (IEM) analysis to right-size the live streaming infrastructure for load handling. In addition, it used AWS Elemental MediaTailor to set up server-side ad integration for live streams under free subscription accounts.

By using AWS Elemental MediaLive, ALTBalaji delivered its live streaming solution in weeks and ensured uninterrupted live streams of Lock Upp during its 72-day run for millions of viewers across India. The live streaming solution also easily managed a tenfold increase in viewership during highly anticipated episodes showing nominations and evictions from Kangana Ranaut's "jail." ALTBalaji's platform launched in April 2017.

ALTBalaji is now preparing for Lock Upp's second season, knowing it can deliver a reliable live streaming experience. It also plans to test
Amazon Transcribe to allow viewers to use voice commands instead of typing to search for series content. Furthermore, ALTBalaji wants to assess Amazon Rekognition to reduce the cost of video ad integration and other content operations.

Opportunity | Delivering a Live Streaming Solution in One Month

In December 2021, ALTBalaji began production on an Indian reality competition series called Lock Upp. Local celebrities, including renowned Indian film stars, comedians, and sports stars, would be locked inside actor and show host Kangana Ranaut's "jail" for 72 days and voted out by viewers until there was a winner. The company set a February 2022 launch date for Lock Upp and wanted to broadcast live streams of the show for its duration.

ALTBalaji had just over a month to deliver a live streaming solution in time for the start of the series. Shahabuddin Sheikh, chief technology officer at ALTBalaji, says, "Aside from meeting the deadline, we were also concerned about infrastructure downtime and service lags during the live streams, which would negatively impact the viewer experience."

Outcome | Ensuring Uninterrupted Live Streams for Millions of Viewers

AWS Elemental MediaLive is a broadcast-grade live video processing service that creates high-quality streams for delivery to broadcast TVs and internet-connected devices.

About ALTBalaji
ALTBalaji, a subsidiary of Balaji Telefilms Limited, is the group's foray into the digital entertainment space. It is an SVOD platform aiming to provide 34 million subscribers with original over-the-top (OTT) Indian media content right at their fingertips. Subscribers can log in to ALTBalaji and access content such as shows, movies, and music videos via desktops, tablets, smartphones, and internet-connected TVs.
Just 19 days after its premiere, Lock Upp garnered more than 100 million views, becoming the most-watched reality show in the Indian OTT space. During the airing of the series, ALTBalaji recorded a tenfold increase in viewer data compared with its historical average, but thanks to optimized workflows in its Amazon Redshift data warehouse, it handled the surge seamlessly. The company also gained valuable insights into how often viewers paused and played streams, along with behavior during live streaming ads and activities that influenced video view counts. It plans to use this information to improve product development and the user experience.

ALTBalaji built its live streaming workflows using AWS Elemental MediaLive, a broadcast-grade live video processing service for high-quality video streams. As a result, it experienced zero downtime during its first live stream despite a tenfold increase in viewership.

Many viewers stream from smaller towns in India, where internet speeds are slower than in major urban cities. To ensure an uninterrupted and enjoyable viewing experience from any location, ALTBalaji fine-tuned AWS Elemental MediaLive to minimize lags that could cause streams to fail.

By using AWS Elemental MediaLive, ALTBalaji delivered its live streaming solution in weeks and ensured uninterrupted live streams of Lock Upp for millions of viewers across India during its 72-day run. Sheikh describes the assistance from AWS as "hyper support": "Without AWS Elemental MediaLive, it would've taken several months to deliver our streaming solution.
From the start, AWS understood the criticality of everything we were doing and stayed the course with the team even after the go-live date."

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Amanotes Stays on Beat by Delivering Simple Music Games to Millions Worldwide on AWS.txt
To stay ahead of competitors, Amanotes needs to innovate continuously to deliver more immersive game experiences while managing costs effectively. With Amazon Elastic Container Service (Amazon ECS) and AWS Fargate, the business easily deploys applications across a scalable, multi-Region infrastructure and minimizes its technology team's management and maintenance workload.

The business is executing plans to complement its existing music 'Play' pillar with a 'Learn' pillar delivered through an educational music app, and a 'Simulation' pillar that gives users the ability to learn musical instruments through digital simulations. This strategy is designed to realize Amanotes' vision of becoming the number one ecosystem for everyone to play, learn, create, and connect through music.

Amanotes launched its business on the AWS Cloud for scalability, low latency, and stability. "We analyzed cloud providers and determined AWS had the extensive reach we required: 27 AWS Regions worldwide, each featuring multiple Availability Zones, and hundreds of edge locations," says Nguyen Nghi, Head of Technology at Amanotes.

Solution | Running Music Games and Apps Seamlessly on Amazon CloudFront

Amanotes runs its application services, core database, and backend API services on the AWS Cloud. It uses Amazon CloudFront to deliver game content reliably and with low latency to its global user base.

AWS Services Used
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.
Amazon EKS is a managed Kubernetes service for running Kubernetes in the AWS Cloud and in on-premises data centers.
"With Amazon CloudFront, we're delivering content that includes five leading music games to more than 120 million monthly active users who, collectively, make more than 90 million download requests per day," says Nghi. "We can also secure the content from cyberattacks that could compromise our reputation and slow our expansion into new markets."

Founded in 2014 and headquartered in Ho Chi Minh City, Vietnam, Amanotes oversees a portfolio of music games and apps, including Magic Tiles 3, Tiles Hop, and Dancing Road. Since its founding, users across the globe have downloaded Amanotes music games and apps more than 2.5 billion times.

Amanotes' founders decided to focus on a niche the business describes as 'Simple Music Games': games that are intuitive and easy for users to interact with. In 2016, Amanotes developed Magic Tiles 3, a game requiring users to tap digital musical notes on their smartphone screens in sync with songs from selected genres.

To learn more, visit aws.amazon.com/cloudfront.
Opportunity | Delivering Music Games with Speed and Scale

The business delivers its content files in 1.5 seconds or less, with smaller files delivered in just 0.1 seconds, and it meets 90 million content file download requests daily. Average request processing time for the Amanotes API is around 100 milliseconds. This low latency leads to repeat gamers and attracts advertisers, which in turn increases revenue generation from in-game and reward-based advertisements, pay-to-play, and subscriptions.

Amanotes is also leveraging Amazon Elastic Kubernetes Service (Amazon EKS) to run some of its services. "By leveraging managed services capabilities from Amazon EKS, our team can focus purely on application development without worrying about infrastructure," says Nghi.

Outcome | Innovating with New Services and Connecting Global Users Through Music

Amanotes plans to further leverage AWS Global Infrastructure and innovative solutions to grow its business in markets such as Japan and China. The business also believes new AWS edge locations in Hanoi and Ho Chi Minh City present opportunities to acquire customers in its domestic market. Nghi says, "We aim to grow our business as much as possible, and AWS provides the speed and scale we need to do this."

About Amanotes
Amanotes is a Vietnam-headquartered music game developer that publishes games to a global audience. To provide game downloads to global users reliably, securely, and with low latency, Amanotes chose to launch on AWS. It uses Amazon CloudFront, Amazon Elastic Kubernetes Service, and Amazon Elastic Container Service to deliver games from a scalable, multi-Region infrastructure via a global content delivery network, delivering tens of millions of downloads every day to customers around the world.
Amanotes Stays on Beat by Delivering 'Simple Music Games' to Millions Worldwide on AWS

In 2014, Nguyen Tuan Cuong and Vo Tuan Binh co-founded Amanotes to give users the ability to extend their interactions with music beyond listening. This meant using technology to create personalized experiences tailored to each user's taste, consumption, and musical ability.

With AWS, Amanotes has built on the success of Magic Tiles 3 to develop another four major music games: Tiles Hop, Dancing Road, Beat Blader 3D, and Dancing Race, growing into a global app publisher. It is now one of the leading mobile game publishers in Southeast Asia and one of the top music game publishers worldwide.

Personalizing user experiences is key to Amanotes' growth strategy. The business plans to use machine learning through Amazon Personalize to generate more relevant music recommendations for gamers, increasing engagement and growing revenue by attracting more customers.

Amazon ECS is a fully managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications.
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Amanotes delivers a low-latency, seamless gaming experience to players around the globe with Amazon CloudFront.
Amazon OpenSearch Services vector database capabilities explained _ AWS Big Data Blog.txt
AWS Big Data Blog

Amazon OpenSearch Service's vector database capabilities explained
by Jon Handler, Dylan Tong, Jianwei Li, and Vamshi Vijay Nakkirtha | on 21 JUN 2023 | in Amazon OpenSearch Service, Amazon SageMaker, Artificial Intelligence, Customer Solutions, Foundational (100), Intermediate (200), Thought Leadership

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. It comprises a search engine, OpenSearch, which delivers low-latency search and aggregations; OpenSearch Dashboards, a visualization and dashboarding tool; and a suite of plugins that provide advanced capabilities like alerting, fine-grained access control, observability, security monitoring, and vector storage and processing. Amazon OpenSearch Service is a fully managed service that makes it simple to deploy, scale, and operate OpenSearch in the AWS Cloud.

As an end user, when you use OpenSearch's search capabilities, you generally have a goal in mind: something you want to accomplish. Along the way, you use OpenSearch to gather information in support of achieving that goal (or maybe the information is the original goal). We've all become used to the "search box" interface, where you type some words and the search engine brings back results based on word-to-word matching. Let's say you want to buy a couch in order to spend cozy evenings with your family around the fire. You go to Amazon.com and type "a cozy place to sit by the fire." Unfortunately, if you run that search on Amazon.com, you get items like fire pits, heating fans, and home decorations, not what you intended. The problem is that couch manufacturers probably didn't use the words "cozy," "place," "sit," and "fire" in their product titles or descriptions.
In recent years, machine learning (ML) techniques have become increasingly popular for enhancing search. Among them is the use of embedding models, a type of model that can encode a large body of data into an n-dimensional space in which each entity is encoded as a vector, a data point in that space, and organized so that similar entities are closer together. An embedding model could, for instance, encode the semantics of a corpus. By searching for the vectors nearest to an encoded document (k-nearest neighbor, or k-NN, search), you can find the most semantically similar documents. Sophisticated embedding models can support multiple modalities, for instance encoding both the image and text of a product catalog and enabling similarity matching across both modalities.

A vector database provides efficient vector similarity search through specialized indexes such as k-NN indexes. It also provides other database functionality, like managing vector data alongside other data types, workload management, and access control. OpenSearch's k-NN plugin provides core vector database functionality for OpenSearch, so when your customer searches for "a cozy place to sit by the fire" in your catalog, you can encode that prompt and use OpenSearch to perform a nearest-neighbor query to surface that 8-foot blue couch with designer-arranged photographs in front of fireplaces.

Using OpenSearch Service as a vector database

With OpenSearch Service's vector database capabilities, you can implement semantic search, Retrieval Augmented Generation (RAG) with LLMs, recommendation engines, and rich media search.

Semantic search

With semantic search, you improve the relevance of retrieved results using language-based embeddings on search documents. You enable your search customers to use natural language queries, like "a cozy place to sit by the fire," to find their 8-foot-long blue couch.
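As a concrete sketch of what the k-NN plugin's interface looks like, the request bodies below create a vector-enabled index and build a nearest-neighbor query. The field names, dimension, and method parameters are illustrative choices, not the only valid ones; against a live cluster these bodies would be sent via the OpenSearch REST API or a client such as opensearch-py, which this sketch omits so it stays self-contained.

```python
import json

DIM = 384  # embedding dimensionality; must match your embedding model (assumption)

# Index body for a vector-enabled index: "index.knn": true activates the
# k-NN plugin, and a "knn_vector" field stores the embeddings alongside
# ordinary fields like "title".
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "title_vector": {
                "type": "knn_vector",
                "dimension": DIM,
                "method": {"name": "hnsw", "engine": "nmslib", "space_type": "cosinesimil"},
            },
        }
    },
}

def knn_query(vector: list[float], k: int = 10) -> dict:
    """Query body asking for the k documents whose title_vector is nearest to `vector`."""
    return {"size": k, "query": {"knn": {"title_vector": {"vector": vector, "k": k}}}}

# With a live cluster and an opensearch-py client, these would be used as, e.g.:
#   client.indices.create(index="products", body=index_body)
#   client.search(index="products", body=knn_query(query_embedding, k=5))
print(json.dumps(index_body["settings"]))
```

The `space_type` determines how "nearest" is measured (cosine similarity here), and `hnsw` selects an approximate-nearest-neighbor graph structure that keeps query latency low at large vector counts.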
For more information, refer to Building a semantic search engine in OpenSearch to learn how semantic search can deliver a 15% relevance improvement, as measured by normalized discounted cumulative gain (nDCG), compared with keyword search. For a concrete example, our Improve search relevance with ML in Amazon OpenSearch Service workshop explores the difference between keyword and semantic search, using a Bidirectional Encoder Representations from Transformers (BERT) model hosted on Amazon SageMaker to generate vectors and store them in OpenSearch. The workshop uses product question answering as an example to show how keyword search on the keywords and phrases of the query leads to some irrelevant results, while semantic search retrieves more relevant documents by matching the context and semantics of the query. The following diagram shows an example architecture for a semantic search application with OpenSearch Service as the vector database. Retrieval Augmented Generation with LLMs RAG is a method for building trustworthy generative AI chatbots using generative LLMs like OpenAI’s ChatGPT or Amazon Titan Text. With the rise of generative LLMs, application developers are looking for ways to take advantage of this innovative technology. One popular use case involves delivering conversational experiences through intelligent agents. Perhaps you’re a software provider with knowledge bases for product information, customer self-service, or industry domain knowledge like tax reporting rules or medical information about diseases and treatments. A conversational search experience provides an intuitive interface for users to sift through information through dialog and Q&A. Generative LLMs on their own are prone to hallucinations—situations where the model generates a believable but factually incorrect response.
RAG solves this problem by complementing generative LLMs with an external knowledge base, typically built using a vector database hydrated with vector-encoded knowledge articles. As illustrated in the following diagram, the query workflow starts with a question that is encoded and used to retrieve relevant knowledge articles from the vector database. Those results are sent to the generative LLM, whose job is to augment those results, typically by summarizing them as a conversational response. By complementing the generative model with a knowledge base, RAG grounds the model on facts to minimize hallucinations. You can learn more about building a RAG solution in the Retrieval Augmented Generation module of our semantic search workshop. Recommendation engine Recommendations are a common component of the search experience, especially for ecommerce applications. Adding user experience features like “more like this” or “customers who bought this also bought that” can drive additional revenue by getting customers what they want. Search architects employ many techniques and technologies to build recommendations, including deep neural network (DNN) based recommendation algorithms such as the two-tower neural net model and YouTubeDNN. A trained embedding model encodes products, for example, into an embedding space where products that are frequently bought together are considered more similar, and are therefore represented as data points that are closer together in the embedding space. Alternatively, product embeddings can be based on co-rating similarity instead of purchase activity. You can employ this affinity data by calculating the vector similarity between a particular user’s embedding and the vectors in the database to return recommended items. The following diagram shows an example architecture for building a recommendation engine with OpenSearch as a vector store.
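The RAG query workflow just described (encode the question, retrieve the nearest knowledge articles, then ground the generator on them) can be sketched in plain Python. The bag-of-words "embedding", the articles, and the prompt template below are toy stand-ins for a real embedding model and a real LLM call:

```python
def embed(text):
    """Stand-in for a real embedding model: bag-of-words over a tiny vocabulary."""
    vocab = ["tax", "deadline", "filing", "medical", "treatment"]
    return [text.lower().count(word) for word in vocab]

def retrieve(question, knowledge_base, k=1):
    """Vector-database step: return the k articles nearest to the encoded question."""
    q = embed(question)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    ranked = sorted(knowledge_base, key=lambda art: dot(q, embed(art)), reverse=True)
    return ranked[:k]

def build_prompt(question, articles):
    """Augmentation step: ground the LLM on the retrieved facts."""
    context = "\n".join(articles)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {question}")

knowledge_base = [
    "The filing deadline for individual tax returns is April 15.",
    "Treatment guidelines recommend rest and fluids for mild cases.",
]
question = "When is the tax filing deadline?"
prompt = build_prompt(question, retrieve(question, knowledge_base))
# `prompt` would now be sent to a generative LLM, which summarizes the
# grounded context into a conversational response.
```

The key design point is that the LLM never answers from its own parameters alone; the retrieval step constrains it to the facts stored in the knowledge base.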
Media search Media search enables users to query the search engine with rich media like images, audio, and video. Its implementation is similar to semantic search—you create vector embeddings for your search documents and then query OpenSearch Service with a vector. The difference is that you use a computer vision deep neural network (for example, a convolutional neural network (CNN) such as ResNet) to convert images into vectors. The following diagram shows an example architecture for building an image search with OpenSearch as the vector store. Understanding the technology OpenSearch uses approximate nearest neighbor (ANN) algorithms from the NMSLIB, FAISS, and Lucene libraries to power k-NN search. These algorithms trade exactness for lower search latency on large datasets; of the three search methods the k-NN plugin provides, approximate k-NN offers the best search scalability for large datasets. The engine details are as follows: Non-Metric Space Library (NMSLIB) – NMSLIB implements the HNSW ANN algorithm. Facebook AI Similarity Search (FAISS) – FAISS implements both the HNSW and IVF ANN algorithms. Lucene – Lucene implements the HNSW algorithm. Each of the three engines used for approximate k-NN search has its own attributes that make one more sensible to use than the others in a given situation. You can use the general information in this section to help determine which engine will best meet your requirements. In general, NMSLIB and FAISS should be selected for large-scale use cases. Lucene is a good option for smaller deployments, and it offers benefits like smart filtering, where the optimal filtering strategy—pre-filtering, post-filtering, or exact k-NN—is automatically applied depending on the situation. The following table summarizes the differences between the options.
Engine comparison:
- NMSLIB-HNSW – Max dimension: 16,000. Filter: post-filter. Training required: no. Similarity metrics: l2, innerproduct, cosinesimil, l1, linf. Vector volume: tens of billions. Indexing latency: low. Query latency and quality: low latency, high quality. Vector compression: flat. Memory consumption: high.
- FAISS-HNSW – Max dimension: 16,000. Filter: post-filter. Training required: no. Similarity metrics: l2, innerproduct. Vector volume: tens of billions. Indexing latency: low. Query latency and quality: low latency, high quality. Vector compression: flat or product quantization. Memory consumption: high (low with PQ).
- FAISS-IVF – Max dimension: 16,000. Filter: post-filter. Training required: yes. Similarity metrics: l2, innerproduct. Vector volume: tens of billions. Indexing latency: lowest. Query latency and quality: low latency, low quality. Vector compression: flat or product quantization. Memory consumption: medium (low with PQ).
- Lucene-HNSW – Max dimension: 1,024. Filter: filter while searching. Training required: no. Similarity metrics: l2, cosinesimil. Vector volume: fewer than ten million. Indexing latency: low. Query latency and quality: high latency, high quality. Vector compression: flat. Memory consumption: high.

Approximate and exact nearest-neighbor search The OpenSearch Service k-NN plugin supports three different methods for obtaining the k-nearest neighbors from an index of vectors: approximate k-NN, score script (exact k-NN), and painless extensions (exact k-NN). Approximate k-NN The first method takes an approximate nearest neighbor approach—it uses one of several algorithms to return the approximate k-nearest neighbors to a query vector. Usually, these algorithms sacrifice indexing speed and search accuracy in return for performance benefits such as lower latency, smaller memory footprints, and more scalable search. Approximate k-NN is the best choice for searches over large indexes (that is, hundreds of thousands of vectors or more) that require low latency. You should not use approximate k-NN if you want to apply a filter on the index before the k-NN search, which greatly reduces the number of vectors to be searched. In this case, you should use either the score script method or painless extensions. Score script The second method extends the OpenSearch Service score script functionality to run a brute-force, exact k-NN search over knn_vector fields or fields that can represent binary objects.
With this approach, you can run k-NN search on a subset of vectors in your index (sometimes referred to as a pre-filter search). This approach is preferred for searches over smaller bodies of documents or when a pre-filter is needed. Using this approach on large indexes may lead to high latencies. Painless extensions The third method adds the distance functions as painless extensions that you can use in more complex combinations. Similar to the k-NN score script, you can use this method to perform a brute-force, exact k-NN search across an index, which also supports pre-filtering. This approach has slightly slower query performance compared to the k-NN score script. If your use case requires more customization over the final score, you should use this approach over score script k-NN. Vector search algorithms The simplest way to find similar vectors is to use k-nearest neighbors (k-NN) algorithms, which compute the distance between a query vector and the other vectors in the vector database. As we mentioned earlier, the score script k-NN and painless extensions search methods use exact k-NN algorithms under the hood. However, in the case of extremely large datasets with high dimensionality, this creates a scaling problem that reduces the efficiency of the search. Approximate nearest neighbor (ANN) search methods can overcome this by employing tools that restructure indexes more efficiently and reduce the dimensionality of searchable vectors. There are several families of ANN search algorithms, for example, locality-sensitive hashing and tree-based, cluster-based, and graph-based methods. OpenSearch implements two ANN algorithms: Hierarchical Navigable Small Worlds (HNSW) and Inverted File System (IVF). For a more detailed explanation of how the HNSW and IVF algorithms work in OpenSearch, see the blog post “Choose the k-NN algorithm for your billion-scale use case with OpenSearch”.
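Putting the methods above together, the request bodies for a knn_vector index mapping, an approximate k-NN query, and a pre-filtered exact (score script) k-NN query look roughly like the following sketch. The field name product_vector, the 768-dimension size, and the term filter are hypothetical examples, not values from the article; the bodies would be sent to OpenSearch through the REST API or a client such as opensearch-py:

```python
# Index mapping for a k-NN enabled index (field name and dimension are
# hypothetical; method parameters follow the OpenSearch k-NN plugin schema).
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "product_vector": {
                "type": "knn_vector",
                "dimension": 768,
                "method": {
                    "name": "hnsw",
                    "engine": "nmslib",
                    "space_type": "cosinesimil",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            }
        }
    },
}

# Approximate k-NN: ANN traversal over the whole index.
ann_query = {
    "size": 5,
    "query": {
        "knn": {
            "product_vector": {
                "vector": [0.1] * 768,  # the encoded search prompt
                "k": 5,
            }
        }
    },
}

# Exact k-NN with a pre-filter: the term filter shrinks the candidate set,
# then the score script computes exact distances over what remains.
exact_query = {
    "size": 5,
    "query": {
        "script_score": {
            "query": {"term": {"category": "couches"}},  # hypothetical filter
            "script": {
                "source": "knn_score",
                "lang": "knn",
                "params": {
                    "field": "product_vector",
                    "query_value": [0.1] * 768,
                    "space_type": "cosinesimil",
                },
            },
        }
    },
}
```

In practice, index_body goes with an index-creation call (PUT on the index) and the two query bodies go with search calls (POST to _search) against a running cluster.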
Hierarchical Navigable Small Worlds The HNSW algorithm is one of the most popular algorithms for ANN search. Its core idea is to build a graph with edges connecting index vectors that are close to each other. On search, this graph is partially traversed to find the approximate nearest neighbors to the query vector. To steer the traversal towards the query’s nearest neighbors, the algorithm always visits the closest candidate to the query vector next. Inverted File The IVF algorithm separates your index vectors into a set of buckets and then, to reduce search time, searches only a subset of those buckets. However, if the algorithm randomly split your vectors into buckets and searched only a subset of them, it would yield a poor approximation. The IVF algorithm uses a more elegant approach: before indexing begins, it assigns each bucket a representative vector, and when a vector is indexed, it is added to the bucket with the closest representative vector. This way, vectors that are close to each other land in the same or nearby buckets. Vector similarity metrics All search engines use a similarity metric to rank and sort results and bring the most relevant results to the top. When you use a plain text query, the similarity metric is term-based, such as TF-IDF, which measures the importance of the terms in the query and generates a score based on the number of textual matches. When your query includes a vector, the similarity metrics are spatial in nature, taking advantage of proximity in the vector space. OpenSearch supports several similarity or distance measures: Euclidean distance – The straight-line distance between points. L1 (Manhattan) distance – The sum of the differences of all of the vector components. L1 distance measures how many orthogonal city blocks you need to traverse from point A to point B.
L-infinity (chessboard) distance – The number of moves a King would make on an n-dimensional chessboard. It’s different from Euclidean distance on the diagonals: a diagonal step on a 2-dimensional chessboard is 1.41 Euclidean units away, but only 1 L-infinity unit away, because the King covers both coordinates in a single diagonal move. Inner product – The product of the magnitudes of two vectors and the cosine of the angle between them. Usually used for natural language processing (NLP) vector similarity. Cosine similarity – The cosine of the angle between two vectors in a vector space. Hamming distance – For binary-coded vectors, the number of bits that differ between the two vectors. Advantages of OpenSearch as a vector database When you use OpenSearch Service as a vector database, you can take advantage of the service’s usability, scalability, availability, interoperability, and security. More importantly, you can use OpenSearch’s search features to enhance the search experience. For example, you can use Learning to Rank in OpenSearch to integrate user clickthrough behavior data into your search application and improve search relevance. You can also combine OpenSearch’s text search and vector search capabilities to search documents by both keyword and semantic similarity, and use other fields in the index to filter documents and improve relevance. For advanced users, a hybrid scoring model can combine OpenSearch’s text-based relevance score, computed with the Okapi BM25 function, with its vector search score to improve the ranking of search results. Scale and limits OpenSearch as a vector database supports billions of vector records. Keep in mind the following guidance on the number of vectors and dimensions when sizing your cluster. Number of vectors OpenSearch takes advantage of its sharding capabilities and can scale to billions of vectors at single-digit-millisecond latencies by sharding vectors and scaling horizontally across additional nodes.
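The distance measures listed above are straightforward to state in code. Here is a quick pure-Python sketch, with small lists standing in for real embeddings:

```python
from math import sqrt

def euclidean(a, b):
    """Straight-line (L2) distance between two points."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l1(a, b):
    """Manhattan distance: total orthogonal city blocks traversed."""
    return sum(abs(x - y) for x, y in zip(a, b))

def linf(a, b):
    """Chessboard (L-infinity) distance: King moves on an n-D board."""
    return max(abs(x - y) for x, y in zip(a, b))

def inner_product(a, b):
    """Dot product: |a| * |b| * cos(angle between a and b)."""
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Cosine similarity: inner product of the normalized vectors."""
    return inner_product(a, b) / (sqrt(inner_product(a, a)) * sqrt(inner_product(b, b)))

def hamming(a, b):
    """For binary-coded vectors: count of positions that differ."""
    return sum(x != y for x, y in zip(a, b))

# A diagonal step on a 2-D chessboard: ~1.41 Euclidean units,
# 2 L1 city blocks, but a single L-infinity unit (one King move).
step = (euclidean([0, 0], [1, 1]), l1([0, 0], [1, 1]), linf([0, 0], [1, 1]))
```

The last line makes the chessboard example from the text concrete: the same diagonal step measures differently under each metric, which is why the choice of space_type matters when you configure a knn_vector field.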
The number of vectors that can fit on a single machine is a function of the off-heap memory available on the machine. The number of nodes required depends on the amount of memory that can be used by the algorithm per node and the total amount of memory the algorithm requires. The more nodes, the more memory and the better the performance. The amount of memory available per node is computed as memory_available = (node_memory - jvm_size) * circuit_breaker_limit, with the following parameters: node_memory – The total memory of the instance. jvm_size – The OpenSearch JVM heap size. This is set to half of the instance’s RAM, capped at approximately 32 GB. circuit_breaker_limit – The native memory usage threshold for the circuit breaker. This is set to 0.5. Total cluster memory estimation depends on the total number of vector records and the algorithm used. HNSW and IVF have different memory requirements; refer to Memory Estimation for more details. Number of dimensions OpenSearch’s current dimension limit for the vector field knn_vector is 16,000 dimensions. Each dimension is represented as a 32-bit float. The more dimensions, the more memory you’ll need to index and search. The number of dimensions is usually determined by the embedding model that translates the entity to a vector. There are a lot of options to choose from when building your knn_vector field; to determine the correct methods and parameters, refer to Choosing the right method. Customer stories: Amazon Music Amazon Music is always innovating to provide customers with unique and personalized experiences. One of Amazon Music’s approaches to music recommendations is a remix of a classic Amazon innovation, item-to-item collaborative filtering, with vector databases.
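Before the customer stories: the per-node memory formula given earlier can be turned into a quick sizing calculation. The per-vector HNSW estimate below (about 1.1 * (4 * dimension + 8 * m) bytes per vector) is taken from the OpenSearch memory-estimation guidance and should be treated as an approximation, not an exact figure:

```python
def memory_available_bytes(node_memory_gb, circuit_breaker_limit=0.5):
    """Off-heap memory usable by the k-NN plugin on one node.

    The JVM heap is set to half the instance RAM, capped at ~32 GB,
    per the formula in the text: (node_memory - jvm_size) * circuit_breaker_limit.
    """
    jvm_gb = min(node_memory_gb / 2, 32)
    return (node_memory_gb - jvm_gb) * circuit_breaker_limit * 1024**3

def hnsw_memory_bytes(num_vectors, dimension, m=16):
    """Rough HNSW footprint: ~1.1 * (4*d + 8*m) bytes per vector
    (approximate constant from OpenSearch's memory-estimation guidance)."""
    return 1.1 * (4 * dimension + 8 * m) * num_vectors

# Example: a 64 GB node leaves (64 - 32) * 0.5 = 16 GB for k-NN structures.
per_node = memory_available_bytes(64)
need = hnsw_memory_bytes(num_vectors=1_000_000_000, dimension=128, m=16)
nodes = -(-need // per_node)  # ceiling division: nodes needed for 1B vectors
```

Under these assumptions, a billion 128-dimension vectors with HNSW (m=16) need on the order of 700 GB of off-heap memory, so dozens of 64 GB nodes; the same arithmetic explains why product quantization and IVF matter at this scale.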
Using data aggregated from user listening behavior, Amazon Music has created an embedding model that encodes music tracks and customer representations into a vector space where neighboring vectors represent similar tracks. One hundred million songs are encoded into vectors, indexed into OpenSearch, and served across multiple geographies to power real-time recommendations. OpenSearch currently manages 1.05 billion vectors and supports a peak load of 7,100 vector queries per second to power Amazon Music recommendations. The item-to-item collaborative filter continues to be among the most popular methods for online product recommendations because of its effectiveness at scaling to large customer bases and product catalogs. OpenSearch makes it easier to operationalize and scale the recommender by providing scale-out infrastructure and k-NN indexes that grow linearly with the number of tracks while supporting similarity search in logarithmic time. The following figure visualizes the high-dimensional space created by the vector embedding. Brand protection at Amazon Amazon strives to deliver the world’s most trustworthy shopping experience, offering customers the widest possible selection of authentic products. To earn and maintain our customers’ trust, we strictly prohibit the sale of counterfeit products, and we continue to invest in innovations that ensure only authentic products reach our customers. Amazon’s brand protection programs build trust with brands by accurately representing and completely protecting their brands. We strive to ensure that public perception mirrors the trustworthy experience we deliver. Our brand protection strategy focuses on four pillars: (1) Proactive Controls, (2) Powerful Tools to Protect Brands, (3) Holding Bad Actors Accountable, and (4) Protecting and Educating Customers. Amazon OpenSearch Service is a key part of Amazon’s Proactive Controls.
In 2022, Amazon’s automated technology scanned more than 8 billion attempted changes daily to product detail pages for signs of potential abuse. Our proactive controls found more than 99% of blocked or removed listings before a brand ever had to find and report them. These listings were suspected of being fraudulent, infringing, counterfeit, or at risk of other forms of abuse. To perform these scans, Amazon created tooling that uses advanced and innovative techniques, including advanced machine learning models that automate the detection of intellectual property infringements in listings across Amazon’s stores globally. A key technical challenge in implementing such an automated system is the ability to search for protected intellectual property within a vast billion-vector corpus in a fast, scalable, and cost-effective manner. Leveraging Amazon OpenSearch Service’s scalable vector database capabilities and distributed architecture, we successfully developed an ingestion pipeline that has indexed a total of 68 billion 128- and 1,024-dimension vectors into OpenSearch Service, enabling brands and automated systems to conduct infringement detection in real time through a highly available and fast (sub-second) search API. Conclusion Whether you’re building a generative AI solution, searching rich media and audio, or bringing more semantic search to your existing search-based application, OpenSearch is a capable vector database. OpenSearch supports a variety of engines, algorithms, and distance measures that you can employ to build the right solution, and it provides a scalable engine that can support vector search at low latency and up to billions of vectors. With OpenSearch and its vector database capabilities, your users can find that 8-foot blue couch easily, and relax by a cozy fire. About the Authors Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA.
Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale ecommerce search engine. Jon holds a Bachelor of Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University. Jianwei Li is a Principal Analytics Specialist TAM at Amazon Web Services. Jianwei provides consulting services that help customers design and build modern data platforms, and has worked in the big data domain as a software developer, consultant, and tech lead. Dylan Tong is a Senior Product Manager at Amazon Web Services. He leads the product initiatives for AI and machine learning (ML) on OpenSearch, including OpenSearch’s vector database capabilities. Dylan has decades of experience working directly with customers and creating products and solutions in the database, analytics, and AI/ML domains. Dylan holds a BSc and an MEng degree in Computer Science from Cornell University. Vamshi Vijay Nakkirtha is a Software Engineering Manager working on the OpenSearch Project and Amazon OpenSearch Service. His primary interests include distributed systems. He is an active contributor to various plugins, like k-NN, GeoSpatial, and dashboard-maps.
Anghami Case Study
Anghami is a music-streaming service based in Abu Dhabi. It serves approximately 70 million users in Europe, the Middle East and North Africa (MENA), and the US, giving them access to more than 72 million songs and podcasts. Over the past 10 years, it grew from a homegrown start-up into the first Arab technology company to be listed on the Nasdaq stock exchange, in February 2022. Anghami sets itself apart from competitors by helping customers find suitable audio content through personalized recommendations.

With the recent rise of rival music services, Anghami recognized the growing significance of guiding customers towards the artists and content that align with their preferences. This became even more crucial given the extensive and expanding collection of Arabic and international music available on the platform. These music-recommendation features attract new customers and foster greater user loyalty, and the company has observed that users spend more time on the site when presented with additional song recommendations. The company therefore aimed to develop a cutting-edge recommendations platform that could scale to handle its expanding user base while facilitating the creation of novel features and services for its customers.

Anghami’s previous solution for generating recommendations used legacy code that made it difficult for its team to expand its functionality. Anghami decided to create a new, cloud-native solution on AWS. The new platform eliminated the liability of maintaining old code and freed up more time for engineers to build new features and capabilities for customers. It also meant they could take advantage of versatile tools such as Amazon OpenSearch Service, which makes it easy to perform interactive log analytics, real-time application monitoring, and website searches.
When its previous technology platform proved difficult to maintain and develop new features for, Anghami turned to Amazon Web Services (AWS). The company built a new platform on AWS that uses machine learning (ML) to generate recommendations. It can now quickly surface relevant content for users, attract top tech talent, rapidly develop new features that enrich the customer experience, and support future product innovation.

Opportunity: Reducing Technology Risk and Building a Platform for Innovation

An AWS customer since its inception, Anghami reached out to AWS solution architects to investigate its technology options based on its future plans. After several in-depth workshops, they came up with a new architecture that is simple, powerful, and easy to maintain and develop on. Within 6 months of the initial architecture workshops with AWS, Anghami launched its cloud-based recommendations engine for its growing catalog of songs and podcasts. The service’s recommendation platform now runs on Amazon OpenSearch Service. Anghami stores its user behavior data and audio content on Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. To run its large data workloads, the company uses Amazon EMR, which easily runs and scales Apache Spark, Hive, Presto, and other big data workloads. These workloads include training on nearly a decade’s worth of customer data collected from millions of customers using the streaming music service daily. To train the machine learning models that produce music recommendations, Anghami uses Amazon SageMaker, which helps build, train, and deploy ML models.

Outcome: Owning Audio Content and Delighting Customers Using AWS

Anghami plans to continue growing its audio catalog and expanding its user base in the Middle East and beyond.
“We want to own audio in the regions we operate in, for podcasts, audiobooks, and music,” adds Kevin Williams, Vice President (VP) of Machine Learning at Anghami. “Using AWS, we have everything we need to accomplish that. Our platform is flexible, reliable, scalable, and easy to maintain, so we can spend our efforts on valuable tasks that benefit customers instead of maintenance.”

Solution: Attracting Top Tech Talent and Developing Prototypes in Days on AWS

Anghami can also release new music to fans almost immediately. When new tracks drop, typically on Fridays, fans can access them within a minute of the official release. With the previous solution, the tech team couldn’t quickly add a single track to the catalog. Using OpenSearch, however, the team can insert and serve songs with its machine learning algorithm within moments of a song’s release. “This is an essential feature that really makes us stand out compared to our rivals,” says Williams. “It’s satisfying to build on fans’ excitement about new releases.”

Anghami developers can now rapidly prototype new feature ideas from product teams and quickly develop queries to recommend content for users. Writing a search query and creating a prototype takes 1–2 days on AWS, as opposed to around 2 weeks on the previous system. Since launching on AWS, the team has created new functions on the service landing page that suggest artists and relevant playlists for customers to listen to, instead of just suggesting tracks. Building its platform on AWS has also reduced the company’s technology risk, because it is now easier to find talented engineers and DevOps staff.

“As a tech company, you’re only as good as your talent,” says Williams. “We can quickly find candidates with OpenSearch skills and others who are motivated to learn OpenSearch because it’s a widely used technology. It’s also quicker to train up technical staff, because they can access existing documentation on AWS services.”

Anghami now has a technology foundation it can build on for years to come. “I’m excited about running development sprints and discovering the best customer experiences in a timely manner,” says Williams.

About Anghami
Founded in 2012 in Beirut, Anghami offers free and paid audio-streaming services. Its premium service provides features such as the ability to download tracks and play them offline, rewind or fast-forward music, and view lyrics. The company has offices in Abu Dhabi, Beirut, Cairo, Dubai, and Riyadh, and employs more than 160 people.

Key results: 72+ million songs and podcasts served seamlessly, a 6-month migration of the entire song database, and 10x faster development of music search queries.