Esade Business School Increases Graduates Employability Using AWS Education Programs _ Case Study _ AWS.txt
Esade Business School Increases Graduates’ Employability Using AWS Education Programs

By offering its students the opportunity to learn more about Amazon Web Services (AWS) and cloud computing, the Esade Business School bolstered student employability and fulfilled the industry need for technical education. The school incorporated AWS Academy into its business curriculum to teach the fundamentals of building IT infrastructure on AWS.

Key results:
- Teaches IT fundamentals to students building IT infrastructure on AWS
- Identifies and develops talent with critical skills for implementing cloud initiatives
- Improves student employability
- Strengthens students' curricula vitae

Opportunity | Identify and Develop Cloud Skills

As applications continue to surge, business schools serve as an increasingly important link between tertiary education and the professional world. Given the continuing integration of technology and business, business school curricula have an important role to play in technical education for students interested in careers in information technology. Whether their intended role is sales, management, or business development, students need a basic understanding of the cloud. Because the cloud is used widely in all industries, employers now expect all new employees, not just those trained in technical fields, to be cloud-savvy. As a prestigious international business school, the Esade Business School realized that it needed to include technical education as part of its curriculum.

Based in Spain, Esade is a prestigious international academic institution with more than 12,000 students and 400 faculty. The Esade Business School consistently ranks as one of the top business schools in the world. As a leader in business education, it started its MSc in Business Analytics to help students understand how big data and data analytics are used in the marketing, retail, and finance industries. That's why, when the Esade Business School began offering the master of science (MSc) in Business Analytics in 2018, it worked with AWS Education Programs to offer students the opportunity to earn the AWS Certified Solutions Architect–Associate certification. This credential helps organizations identify talent with critical skills for implementing cloud initiatives and gives graduates an advantage when it comes to postgraduate employability.

Solution | Leading the Way in Technical Education for Business

As part of the MSc requirements, Esade Business School students take the AWS Academy Cloud Architecting course. In the course, students learn the fundamentals of building IT infrastructure on AWS through lectures, hands-on labs, and project work. The course incorporates AWS content, such as whitepapers, to explain cloud infrastructure fundamentals. It also uses case studies to illustrate how major corporations achieved positive business outcomes when they deployed cloud infrastructure. "These materials feature the best practices by some of the best-known companies, and students learn how to help businesses create a competitive edge using AWS," says Esteve Almirall, associate professor in the Department of Operations, Innovation and Data Sciences at Esade Business School.

As part of the course, students can opt to take the AWS Certified Solutions Architect–Associate certification exam. To boost the take-up rate, students who pass the certification exam receive the maximum score for the final exam in the course, which counts for 60 percent of the overall course grade. The industry-recognized AWS Certification allows students to strengthen their curricula vitae, and the credential increases their employability by validating their ability to design and implement distributed systems on AWS.

Outcome | Preparing the Next Generation of Cloud Talent

Earning AWS Training and Certification credentials gives Esade Business School graduates an advantage when it comes to postgraduate employability. The school's MSc in Business Analytics accepts approximately 130–140 students each year, and about 70 percent of these students achieve the AWS Certified Solutions Architect–Associate certification. Most graduates go on to work in cloud computing, including at AWS and other teams and organizations that use AWS, and about two-thirds work in business development roles. "Beyond technical knowledge, the AWS course taught me that there are opportunities in sales, management, and business development as well," says Javier Poveda-Panter, a data science consultant at AWS and former Esade Business School student. "We learned how to help our customers integrate cloud features and generate value in the long run."

Requiring the AWS Academy Cloud Architecting course as part of its curriculum and offering students a chance to become certified as an AWS Certified Solutions Architect–Associate helped Esade stay on the cutting edge of education. As a result, students have stronger curricula vitae and increased employment options in businesses that use cloud computing platforms and in business development.

About Esade Business School

Esade Business School is one of the academic units of Esade, a prestigious international academic institution. Based in Barcelona, Spain, Esade has over 12,000 students and over 400 faculty at its business school, law school, and language center.

AWS Services Used

AWS Academy: Empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.

AWS Training and Certification: Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud.

AWS Certified Solutions Architect–Associate: This certification focuses on the design of cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework.
Establishing the Nations Largest Mileage-Based User Fee Program Using Amazon Connect with the Virginia DMV _ Case Study _ AWS.txt
Establishing the Nation's Largest Mileage-Based User Fee Program Using Amazon Connect with the Virginia DMV

The Virginia Department of Motor Vehicles (Virginia DMV) enrolled over 10,000 people in its Mileage Choice Program in 6 months with a solution managed by Emovis and powered by AWS.

Key results:
- 10,000 participants enrolled in 6 months
- 6 months to implement the Mileage Choice Program
- Improved scalability with a call center that can support thousands of agents
- Maintained staff productivity

Opportunity | Using Amazon Connect to Power the Mileage Choice Program for the Virginia DMV

Fuel-tax revenue is critical to maintaining the roads that get us from point A to point B, but in 2019, as overall vehicle fuel efficiency increased and more drivers purchased electric and hybrid vehicles, the amount of taxes paid at the gas pump declined. To address reduced revenue, the Virginia State Legislature passed a bill in 2020 creating a highway use fee for fuel-efficient and electric vehicle owners and directed the Virginia DMV to create a per-mile fee program as a payment option.

According to a 2019 report by the Virginia secretary of transportation, by 2030 the use of electric, hybrid, and other fuel-efficient vehicles will amount to a loss of $250 million in fuel-tax revenue. This revenue constitutes 25 percent of the Virginia state budget for transportation financing and infrastructure projects. The Virginia DMV was tasked with implementing a road-usage charging program for fuel-efficient vehicles so that customers could pay their highway use fee per mile instead of all at once at the time of vehicle registration, an option that often results in cost savings for customers. The Virginia DMV quickly needed a way to enroll constituents in this new program.

After requesting proposals for a mileage-based highway-usage solution and contact center, the Virginia DMV analyzed the bids and chose to work with Emovis, a company providing a usage-based mobility solution and contact center powered by Amazon Web Services (AWS). Emovis had already implemented road-usage charging programs in Utah and Oregon and could show how a permanent solution could be put in place. Emovis provides and manages a contact center solution powered by Amazon Connect, which delivers superior customer service at a lower cost with an easy-to-use cloud contact center that can scale to support millions of customers. "We saw two major advantages in using Amazon Connect," says Tom Krueger, vice president of operations at Emovis. "It easily integrated with our solution, and it improved the expandability of the solution."

Solution | Connecting to Customers Using Amazon Connect

The Virginia DMV implemented the Mileage Choice Program using Emovis's solution in only 6 months, successfully meeting the statutory deadline, and initially expected to enroll a few thousand drivers. Over the next 6 months, the Virginia DMV used Emovis's solution to enroll over 10,000 drivers, and the Mileage Choice Program became the largest road-usage charging program in the United States.

The Mileage Choice Program is offered as a pay-per-use option, an alternative to paying a flat cost at the time of vehicle registration. Customers are eligible to sign up when renewing their vehicle registration; after they enroll through Emovis, they either receive a device to plug in to their vehicles or have the data taken from in-car telematics. "Through Emovis's work with our IT team, the implementation was seamless," says Scott Cummings, assistant commissioner for finance at the Virginia DMV.

Under the strict implementation timeline, the Virginia DMV had a baseline goal of enrolling 2,000 drivers during the first year. This matched the numbers Emovis had seen when implementing its solution in Utah, though Virginia's pool of eligible individuals is larger because it includes fuel-efficient vehicles that are not electric. From July 2022 to January 2023, the Virginia DMV saw over 10,000 individuals enroll, exceeding expectations and becoming the largest road-usage charging program in the nation. Using Amazon Connect, Emovis could use agents from across the United States to support customers during times of heavy enrollment. The state has almost two million eligible vehicles, and the program will continue to roll out initial eligibility to residents through July 2023, with enrollment remaining available going forward. "We went far and above our goal very early after enrollment began. We were really pleased that there was a positive response from our residents when they signed up for the program, which is supported by Amazon Connect," says Cummings.

With this solution, Virginia DMV staff did not have to figure out how to program devices to capture mileage or account for device inventory. "There's very little staff time that the Virginia DMV needs to devote to this program," says Cummings. "Emovis interacts with customers, collects miles, and sends invoices. Emovis is doing all the heavy lifting, and that's a great benefit to us." Because Emovis, using Amazon Connect, handles the customer service for the Mileage Choice Program, the Virginia DMV can continue operations as normal and maintain staff productivity, with no added burden to Virginia DMV personnel. Amazon Connect facilitates interactions with customers and can scale to accommodate thousands of agents (a minimal sketch of inspecting such a contact center's real-time metrics from code appears at the end of this case study).

Outcome | Continuing Enrollment for the Mileage Choice Program Using Amazon Connect

The long-term goal of this solution is to help the Virginia DMV address declining fuel-tax revenues, which make up 25 percent of the state's funding to maintain roads, bridges, and tunnels and to improve transportation infrastructure. By using Amazon Connect, the Virginia DMV does not have to handle any manual processes or crunch numbers outside the system. Customers are able to apply for the new program during registration using the Emovis solution powered by Amazon Connect. "We have a lot of room to grow the program," says Cummings. "The work we're doing with Emovis is a great step in the right direction."

The option to enroll in the Mileage Choice Program opens for individuals at the time of vehicle registration renewal, so the first wave of enrollment is still progressing and will be complete at the end of June 2023. Nearly two million vehicles are eligible to enroll in the program, and the Virginia DMV wants to focus on enrollment as a path forward. The solution using Amazon Connect can scale to this continued influx of new customers. The Virginia DMV is looking to create a process for new cars to be directly enrolled in the program when they are sold. At the same time, Emovis is designing a post-contact survey through Amazon Connect to support customers better and gain insights into customer satisfaction with the program. "Our role working with the Virginia DMV is to make sure customers are satisfied in their interactions with our support team," says Krueger. "Amazon Connect is a key tool in helping us achieve customer satisfaction."

About the Virginia Department of Motor Vehicles

The Virginia Department of Motor Vehicles registers and titles motor vehicles and licenses drivers in the Commonwealth of Virginia.

AWS Services Used

Amazon Connect: Provide superior customer service at a lower cost with an easy-to-use cloud contact center.
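The case study above notes that Amazon Connect can scale to accommodate thousands of agents while Emovis tracks customer satisfaction. The following is a minimal sketch of checking real-time queue and staffing metrics for a Connect instance with boto3; the instance ID, queue ID, and region are placeholders, not values from the Emovis deployment.

# Minimal sketch: real-time staffing and queue metrics for an Amazon Connect
# contact center. Instance ID, queue ID, and region are placeholders.
import boto3

connect = boto3.client("connect", region_name="us-east-1")  # assumed region

INSTANCE_ID = "11111111-2222-3333-4444-555555555555"  # placeholder
QUEUE_ID = "66666666-7777-8888-9999-000000000000"     # placeholder

response = connect.get_current_metric_data(
    InstanceId=INSTANCE_ID,
    Filters={"Queues": [QUEUE_ID], "Channels": ["VOICE"]},
    CurrentMetrics=[
        {"Name": "AGENTS_AVAILABLE", "Unit": "COUNT"},
        {"Name": "CONTACTS_IN_QUEUE", "Unit": "COUNT"},
    ],
)

# Print each metric returned for the filtered queue and channel.
for result in response.get("MetricResults", []):
    for collection in result["Collections"]:
        print(collection["Metric"]["Name"], collection["Value"])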
Evolving ADPs Single Global Experience in MyADP and ADP Mobile Using AWS Lambda _ Case Study _ AWS.txt
Evolving ADP's Single Global Experience in MyADP and ADP Mobile Using AWS Lambda

Learn how ADP, a global human capital management company, evolved a global UX using AWS serverless technologies.

Automatic Data Processing (ADP) wanted to modernize its flagship desktop and mobile solutions, MyADP and ADP Mobile, so that its more than 17 million users had a seamless user experience (UX). The company, a global technology company providing human capital management (HCM) and enterprise payroll services, strives to build innovative products. Low latency and a high-quality UX are a must for the enterprise.

ADP pursued a novel approach to unify its global UX and improve latency, cost, and performance. "The serverless model looked like a good way to handle higher traffic and be active across multiple regions," says Anderson Buzo, chief architect at ADP. "And with serverless architecture, the cost is based on what we actually use, not what we deploy." The company began migrating its flagship application to Amazon Web Services (AWS) in 2019 to take advantage of the benefits that come from a robust computing network. Now the application runs entirely on AWS, and clients are enjoying improved quality, lower latency, and a seamless UX. The migration to a serverless model on AWS has also accelerated the pace of innovation because ADP teams no longer have to spend time on infrastructure management.

Key results:
- Scaled for bursts of traffic to eliminate throttling and errors
- Maintained a 4.5+ app store rating
- Reduced latency with latency-based routing
- Achieved portability for a global UX
- Improved resiliency through multi-region architecture

Opportunity | Using AWS to Create a Global User Experience for 17 Million People

ADP processes payments for one in six American workers, and the company is expanding globally. To meet quality and latency goals, the company is committed to consolidating, standardizing, and modernizing its application, which is used by over 17 million people and more than 470,000 companies. Although ADP Mobile and MyADP are used as the delivery mechanism for all ADP services, the company wanted to present a more consistent brand to customers with a unified global experience for common pillars like payroll, benefits, retirement, and taxes. ADP had to innovate to create a single experience for disparate systems of record without introducing error.

"The speed at which pay statements open should be the same speed at which benefits enrollment opens, but these are two different sources of content on two different sets of infrastructure," says Devi Ramachandran, senior director of DevOps at ADP. "That's been our challenge from the beginning, and migrating our systems to AWS made everything simpler." ADP also had to simplify the ADP Mobile and MyADP application programming interface (API) access that is provided by those different infrastructures. To streamline data aggregation on the backend, the company used AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development. Using AWS AppSync, ADP can bring together data from the various backends and sources into a single endpoint, as illustrated in the sketch at the end of this case study.

Solution | Unlocking Resilience Through Offline-First Architecture and AWS Services

ADP used AWS tools to resolve challenges within its application. The company required a solution that could scale seamlessly to accommodate the rush of workers who clock in during a 90-second window around the beginning of each hour. However, ADP's prior system took 60 seconds to scale as traffic doubled. Engineers worked quickly to develop a proof of concept using AWS Fargate, a serverless, pay-as-you-go compute solution that scaled rapidly. ADP uses AWS Fargate in tandem with Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service for containerized applications. "We're using AWS because we want to be a product development team and not an infrastructure management team," says Ramachandran.

As part of the application modernization, ADP started to build a new generation of microservices in AWS Lambda, a serverless, event-driven compute service. ADP further increased resiliency by deploying in multiple Availability Zones. After the migration, the team began optimizing costs. "Today, we are using AWS solutions like a Ferrari, but we're paying the price of a regular car because of our serverless architecture," says Ramachandran. In addition to saving money, ADP has increased staff productivity. Before using AWS, product developers had to coordinate and align with multiple internal teams to troubleshoot issues with databases and other resources. After migrating to managed services on AWS, development teams own their resources fully, and the company now spends much less time on support and maintenance.

After migrating to AWS, ADP adopted AWS AppSync to bolster the reliability of the application and offer a better experience with offline-first design. By designing an offline-first architecture, the team is developing a solution that pushes ADP Mobile and MyADP data to user devices as new data becomes available. This approach makes the application more resilient to faults and gives users access to recently updated data even if their network connection is slow.

The application users, the employees of ADP client companies, are benefiting from ADP innovations, which include intelligent self-service and chatbot functionality in some regions. The increased flexibility that ADP now offers means that the application maintains a 4.5 rating from users on mobile application marketplaces. With a new, unified user experience, time to market has been reduced, and the company can onboard new clients more quickly. ADP has also accelerated feature delivery substantially. Its teams are happy to be able to focus on what they do best. "Using AWS solutions, the talent on our team is doing actual product engineering work instead of worrying about infrastructure," says Ramachandran.

Outcome | Moving Toward Global Deployments on AWS

After three years, all of the application's critical systems have been migrated to the cloud. "We are a total AWS shop right now," says Ramachandran. Serverless architecture has opened new possibilities for innovation. The team is now focused on global deployments so that improvements developed in one region will automatically deploy globally. "When we build a feature in the United States or Europe, we can simply bring it to the app, and everybody can have it," says Buzo. "On AWS, we can build a global app."

About ADP

Automatic Data Processing (ADP) provides payroll, human resources, and tax services to businesses around the world. The company processes the payroll of one in six American employees.

AWS Services Used

AWS Lambda: A serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and you only pay for what you use.

AWS AppSync: Creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data.

AWS Fargate: A serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.

Amazon Elastic Container Service (Amazon ECS): A fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications.
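The case study above describes aggregating data from multiple backends behind a single AWS AppSync GraphQL endpoint. The following is a minimal sketch of what a client-side query against such an endpoint could look like; the endpoint URL, API key, and the payStatements/benefitsEnrollment fields are illustrative assumptions, not ADP's actual schema or authentication model.

# Minimal sketch: one GraphQL query to an AWS AppSync endpoint that fans out to
# several backends. URL, API key, and schema fields are illustrative assumptions.
import json
import urllib.request

APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"  # assumed
API_KEY = "da2-exampleapikey"  # assumed; AppSync also supports IAM and Cognito auth

query = """
query EmployeeHome($employeeId: ID!) {
  payStatements(employeeId: $employeeId, limit: 1) { payDate netPay }
  benefitsEnrollment(employeeId: $employeeId) { planName status }
}
"""

payload = json.dumps({"query": query, "variables": {"employeeId": "12345"}}).encode("utf-8")
request = urllib.request.Request(
    APPSYNC_URL,
    data=payload,
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)

# Both pay and benefits data come back from the same endpoint in one response.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))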
Expanding Opportunities Using Amazon WorkSpaces with The Chicago Lighthouse _ Case Study _ AWS.txt
Expanding Opportunities Using Amazon WorkSpaces with The Chicago Lighthouse

Learn how The Chicago Lighthouse, a nonprofit organization, pivoted to remote work using AWS.

The Chicago Lighthouse (The Lighthouse) serves and advocates for the blind and visually impaired, disabled, and veteran communities. To help make its operations self-sustaining, The Lighthouse has developed several social enterprises in customer service, digital accessibility consulting, manufacturing, and shipping, all of which serve the dual purpose of generating revenues and creating employment opportunities for its clients.

When the COVID-19 pandemic forced workplace closures in March 2020, The Lighthouse had an urgent need to keep the organization and its programs operating without interruption. Using Amazon Web Services (AWS), The Lighthouse pivoted to a work-from-home model in a matter of days, keeping customers satisfied and mission-critical revenues flowing in. Perhaps most importantly, it allowed employees, particularly those with visual and other disabilities, to continue working.

Key results:
- 50% reduction in employee attrition
- 20% increase in call volume
- 50% increase in client roster
- 26% increase in revenue

Opportunity | Using Amazon WorkSpaces to Transition to Remote Work for The Chicago Lighthouse

The Chicago Lighthouse has been in operation since 1906. Today, in 2023, the agency provides 40 programs and services that help more than 50,000 people every year. Its clients access vision rehabilitation, education, assistive technology consulting, and other opportunities that improve their quality of life and empower them to live as confidently and independently as possible.

Among the organization's social enterprises are 12 customer contact centers, handling calls from a number of healthcare and government clients. These businesses generate just over 60 percent of The Lighthouse's total annual revenue. Until 2020, it was a completely in-person work environment so that The Lighthouse could provide employees with the adaptive technologies that they needed to accommodate visual and other impairments. But as the COVID-19 pandemic began making its way through the United States, Esmeil Naqeeb, network security engineer at The Lighthouse, saw the writing on the wall. "We knew lockdowns were coming," says Naqeeb, "so we started looking for solutions."

Simply shutting down the call centers was not an option. The Lighthouse serves several large organizations in Illinois, such as the Illinois Tollway Authority, University of Illinois Health System, and Cook County Health Systems, so interruptions in service could mean harmful impacts on healthcare and infrastructure around the state. The Lighthouse was also committed to caring for its employees. "Everyone needed to continue receiving a paycheck, paying their bills, and feeding their families," says Janet Szlyk, president and CEO of The Chicago Lighthouse. "Additionally, our customer service business provides revenues that support our organization's social services. It was critical they remain open."

The company's first idea was to physically deliver computers to workers' homes, but this would have been prohibitively time consuming and could have potentially compromised sensitive data. In the search for a better idea, The Lighthouse discovered Amazon WorkSpaces, a family of solutions that provides the right virtual workspace for varied worker types, especially hybrid and remote workers. Amazon WorkSpaces customers can get tech support, but The Lighthouse needed very little assistance during the transition. "It worked flawlessly," says Naqeeb.

Solution | Keeping Workers Employed Using Amazon VPC

To find a way to keep The Lighthouse operational, Naqeeb first created a pilot workstation at his own home. When his home pilot worked, he and his IT team of six tested it in one of the contact centers. It worked well, and Naqeeb proposed an organization-wide solution. On March 17, Naqeeb and the IT team began transitioning employees to remote work. Four days later, on March 21, 70 employees were up and running. Over the following days, the team transitioned another 50 employees to Amazon WorkSpaces. By March 24, one week after beginning the transition, 120 employees, many with disabilities, were working remotely, which was enough to continue the call centers' seamless operations. "Esmeil Naqeeb is our hero," Szlyk says.

Using Amazon WorkSpaces had immediate impacts across multiple departments. Aaron Baar, senior director of advancement at The Lighthouse, says, "People said how great it was to keep working, to maintain a sense of normalcy and routine in what were not normal times." Workers in the IT department could access the active directory and keep managing users' accounts and other on-premises network resources while working from home. In the customer care centers, which employ 119 people who are blind, visually impaired, or otherwise disabled, the results were especially remarkable. Several call center employees with visual impairments use ZoomText, an adaptive program that enlarges a computer screen and reads webpages. Licensing each computer individually would have been expensive and cumbersome, but using AWS greatly simplified the process.

The Lighthouse uses Amazon Redshift as its cloud data warehouse. Amazon Redshift uses SQL to analyze structured and semistructured data, so The Lighthouse can run complex queries and scale analytics on call center data without managing infrastructure (a minimal sketch of such a query appears at the end of this case study). Amazon QuickSight is a service that powers data-driven organizations with unified business intelligence at hyperscale. The Lighthouse uses Amazon QuickSight to power millions of weekly dashboard views so that all users can meet analytic needs from the same data sources and make better decisions. The Lighthouse runs these services on Amazon Virtual Private Cloud (Amazon VPC), a logically isolated virtual network that gives customers control over their networking environment, resource placement, connectivity, and security.

Architecture Diagram

The original case study includes two architecture diagrams: one showing the network flow for an Amazon WorkSpaces user connecting to the service over the public internet from outside the corporate firewall, and a more granular view of how The Lighthouse's on-premises infrastructure connects to the AWS Cloud.

Outcome | Empowering Happy, Independent Workers Using AWS

In solving the challenges presented by the COVID-19 pandemic, The Lighthouse also ended up finding new solutions to support long-term accessibility and inclusion in its workforce. Commuting in and around the third-largest US city can be challenging in the best of times, even without added complications due to weather, transportation, or accessibility. Offering work-from-home options turned out to be a tremendous boon, not just for workers but also for the business itself. "It's our new normal," Szlyk says. "We're a hybrid organization now."

By migrating some of its operations to AWS, The Lighthouse kept revenues flowing, served customers, cared for clients, and enhanced and expanded its operations. Its social enterprises even saw significant growth, including a 20 percent increase in call volume, a 50 percent increase in clientele, and a 26 percent increase in revenue. Perhaps most importantly, workers at The Lighthouse are happier and more productive than ever. "We hear all the time about how much they love remote work, and they still feel a sense of closeness to their teams," says Szlyk. "They have meaningful, challenging jobs that don't require commuting, so they're less likely to leave. By using AWS, we have cut employee attrition by 50 percent in our customer care centers. It's a win-win situation."

People with disabilities still face barriers to employment, no matter how talented and dedicated they are, so The Lighthouse continues to advocate for inclusive workplaces. Because Amazon WorkSpaces worked so well for its workers with visual impairments, The Lighthouse is currently working toward remote accessibility solutions for users who are completely sightless. "Remote work creates opportunities for more people with disabilities to work from home," Szlyk says. "This is significant because 60 percent of working-age adults with disabilities are not employed. Using these AWS solutions helps us open doors for more accessible, inclusive employment."

About The Chicago Lighthouse

Since 1906, The Chicago Lighthouse has been a leader in comprehensive vision care, education, social services, assistive technologies, and employment opportunities that improve the quality of life for patients, clients, workers, and their families.

AWS Services Used

Amazon WorkSpaces: A managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

Amazon Virtual Private Cloud (Amazon VPC): Gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

Amazon Redshift: Uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Amazon QuickSight: Powers data-driven organizations with unified business intelligence (BI) at hyperscale.
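The case study above mentions running complex analytics queries on call center data in Amazon Redshift. The following is a minimal sketch of issuing such a query from Python with the Amazon Redshift Data API; the cluster identifier, database, user, and table and column names are illustrative assumptions, not The Lighthouse's actual warehouse.

# Minimal sketch: querying call center data in Amazon Redshift via the Redshift
# Data API. Cluster, database, user, and schema names are illustrative assumptions.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")  # assumed region

response = client.execute_statement(
    ClusterIdentifier="lighthouse-analytics",  # assumed cluster name
    Database="callcenter",                     # assumed database
    DbUser="analyst",                          # assumed database user
    Sql="""
        SELECT date_trunc('week', call_start) AS week, count(*) AS calls
        FROM contact_center_calls
        GROUP BY 1
        ORDER BY 1;
    """,
)

# Poll until the statement finishes, then fetch the result rows.
statement_id = response["Id"]
while client.describe_statement(Id=statement_id)["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for record in client.get_statement_result(Id=statement_id)["Records"]:
    print(record)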
Exploring Generative AI in conversational experiences_ An Introduction with Amazon Lex Langchain and SageMaker Jumpstart _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Exploring Generative AI in conversational experiences: An Introduction with Amazon Lex, LangChain, and SageMaker JumpStart

by Marcelo Silva, Kanjana Chandren, Justin Leto, Mahesh Biradar, Ryan Gomes, and Victor Rojo | on 08 JUN 2023 | in Amazon Lex, Amazon SageMaker, Amazon SageMaker JumpStart, Artificial Intelligence, Generative AI, Technical How-to

Customers expect quick and efficient service from businesses in today's fast-paced world. But providing excellent customer service can be significantly challenging when the volume of inquiries outpaces the human resources employed to address them. However, businesses can meet this challenge while providing personalized and efficient customer service with the advancements in generative artificial intelligence (generative AI) powered by large language models (LLMs).

Generative AI chatbots have gained notoriety for their ability to imitate human intellect. However, unlike task-oriented bots, these bots use LLMs for text analysis and content generation. LLMs are based on the Transformer architecture, a deep learning neural network introduced in June 2017 that can be trained on a massive corpus of unlabeled text. This approach creates a more human-like conversation experience and accommodates several topics.

As of this writing, companies of all sizes want to use this technology but need help figuring out where to start. If you are looking to get started with generative AI and the use of LLMs in conversational AI, this post is for you. We have included a sample project to quickly deploy an Amazon Lex bot that consumes a pre-trained open-source LLM. The code also includes the starting point to implement a custom memory manager. This mechanism allows an LLM to recall previous interactions to keep the conversation's context and pace. Finally, it's essential to highlight the importance of experimenting with fine-tuning prompts and LLM randomness and determinism parameters to obtain consistent results.

Solution overview

The solution integrates an Amazon Lex bot with a popular open-source LLM from Amazon SageMaker JumpStart, accessible through an Amazon SageMaker endpoint. We also use LangChain, a popular framework that simplifies LLM-powered applications. Finally, we use a QnABot to provide a user interface for our chatbot.

First, we describe each component of the solution:

- JumpStart offers pre-trained open-source models for various problem types, which enables you to begin machine learning (ML) quickly. It includes the FLAN-T5-XL model, an LLM deployed into a deep learning container that performs well on various natural language processing (NLP) tasks, including text generation.
- A SageMaker real-time inference endpoint enables fast, scalable deployment of ML models. With the ability to integrate with Lambda functions, the endpoint allows for building custom applications.
- The AWS Lambda function uses the requests from the Amazon Lex bot or the QnABot to prepare the payload and invoke the SageMaker endpoint using LangChain, a framework that lets developers create applications powered by LLMs.
- The Amazon Lex V2 bot has the built-in AMAZON.FallbackIntent intent type, which is triggered when a user's input doesn't match any intents in the bot.
- The QnABot is an open-source AWS solution that provides a user interface for Amazon Lex bots. We configured it with a Lambda hook function for a CustomNoMatches item, which triggers the Lambda function when QnABot can't find an answer. We assume you have already deployed it; the steps to configure it are included in the following sections.

The solution is described at a high level in the following sequence diagram.

Major tasks performed by the solution

In this section, we look at the major tasks performed in our solution. This solution's entire project source code is available for your reference in this GitHub repository.

Handling chatbot fallbacks

The Lambda function handles the "don't know" answers via AMAZON.FallbackIntent in Amazon Lex V2 and the CustomNoMatches item in QnABot. When triggered, this function looks at the request for a session and the fallback intent. If there is a match, it hands off the request to a Lex V2 dispatcher; otherwise, the QnABot dispatcher uses the request. See the following code:

def dispatch_lexv2(request):
    """Dispatch an Amazon Lex V2 fallback request.

    Args:
        request (dict): Lambda event containing a user's input chat message and
            context (historical conversation). Uses the Lex V2 sessions API to
            manage past inputs:
            https://docs.aws.amazon.com/lexv2/latest/dg/using-sessions.html

    Returns:
        dict: Lex V2 response
    """
    lexv2_dispatcher = LexV2SMLangchainDispatcher(request)
    return lexv2_dispatcher.dispatch_intent()


def dispatch_QnABot(request):
    """Dispatch a QnABot "don't know" request.

    Args:
        request (dict): Lambda event containing a user's input chat message and
            context (historical conversation)

    Returns:
        dict: Dict formatted as documented to be a Lambda hook for a "don't know"
            answer for the QnABot on AWS solution; see
            https://docs.aws.amazon.com/solutions/latest/QnABot-on-aws/specifying-lambda-hook-functions.html
    """
    request['res']['message'] = "Hi! This is your Custom Python Hook speaking!"
    qna_intent_dispatcher = QnASMLangchainDispatcher(request)
    return qna_intent_dispatcher.dispatch_intent()


def lambda_handler(event, context):
    print(event)
    if 'sessionState' in event:
        # Amazon Lex V2 events carry a sessionState; route the fallback intent.
        if 'intent' in event['sessionState']:
            if 'name' in event['sessionState']['intent']:
                if event['sessionState']['intent']['name'] == 'FallbackIntent':
                    return dispatch_lexv2(event)
    else:
        # QnABot requests have no sessionState and go to the QnABot dispatcher.
        return dispatch_QnABot(event)

Providing memory to our LLM

To preserve the LLM memory in a multi-turn conversation, the Lambda function includes a LangChain custom memory class mechanism that uses the Amazon Lex V2 Sessions API to keep track of the session attributes with the ongoing multi-turn conversation messages and to provide context to the conversational model via previous interactions.
See the following code:

class LexConversationalMemory(BaseMemory, BaseModel):
    """LangChain custom memory class that uses Lex conversation history.

    Attributes:
        history (dict): Dict storing conversation history that acts as the LangChain memory
        lex_conv_context (str): Lex V2 sessions API data that serves as input for
            conversation history; memory is loaded from here
        memory_key (str): key for the chat history LangChain memory variable - "history"
    """

    history = {}
    memory_key = "chat_history"  # pass into prompt with key
    lex_conv_context = ""

    def clear(self):
        """Clear chat history"""
        self.history = {}

    @property
    def memory_variables(self) -> List[str]:
        """Load memory variables

        Returns:
            List[str]: List of keys containing LangChain memory
        """
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Load memory from Lex into the current LangChain session memory

        Args:
            inputs (Dict[str, Any]): User input for the current LangChain session

        Returns:
            Dict[str, str]: LangChain memory object
        """
        input_text = inputs[list(inputs.keys())[0]]

        ccontext = json.loads(self.lex_conv_context)
        memory = {
            self.memory_key: ccontext[self.memory_key] + input_text + "\nAI: ",
        }
        return memory

The following is the sample code we created for introducing the custom memory class in a LangChain ConversationChain:

# Create a conversation chain using the prompt,
# the LLM hosted in SageMaker, and the custom memory class
self.chain = ConversationChain(
    llm=sm_flant5_llm,
    prompt=prompt,
    memory=LexConversationalMemory(lex_conv_context=lex_conv_history),
    verbose=True
)

Prompt definition

A prompt for an LLM is a question or statement that sets the tone for the generated response. Prompts function as a form of context that helps direct the model toward generating relevant responses. See the following code:

# define prompt
prompt_template = """The following is a friendly conversation between a human and an AI. The AI is
talkative and provides lots of specific details from its context. If the AI does not know the
answer to a question, it truthfully says it does not know. You are provided with information
about entities the Human mentions, if relevant.

Chat History:
{chat_history}

Conversation:
Human: {input}
AI:"""

Using an Amazon Lex V2 session for LLM memory support

Amazon Lex V2 initiates a session when a user interacts with a bot. A session persists over time unless manually stopped or timed out. A session stores metadata and application-specific data known as session attributes. Amazon Lex updates client applications when the Lambda function adds or changes session attributes. The QnABot includes an interface to set and get session attributes on top of Amazon Lex V2. In our code, we used this mechanism to build a custom memory class in LangChain to keep track of the conversation history and enable the LLM to recall short-term and long-term interactions.
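Note that the ConversationChain shown earlier references an sm_flant5_llm object that is created elsewhere in the sample project and not shown in this post's excerpts. The following is a minimal sketch of how such an object could be built with LangChain's SagemakerEndpoint wrapper; the endpoint name, region, model parameters, and the request and response JSON keys are assumptions for illustration and may differ from the project's actual code.

# Minimal sketch (assumptions noted inline): wrapping a SageMaker-hosted FLAN-T5
# endpoint as a LangChain LLM so it can be passed to ConversationChain.
import json
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler

class FlanT5ContentHandler(LLMContentHandler):
    # Serialize the prompt for the endpoint and parse its reply; the
    # "text_inputs"/"generated_texts" keys are assumed from the JumpStart
    # text2text format and may differ for other deployments.
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["generated_texts"][0]

sm_flant5_llm = SagemakerEndpoint(
    endpoint_name="sm-jumpstart-flan-t5-xl",  # assumed endpoint name
    region_name="us-east-1",
    content_handler=FlanT5ContentHandler(),
    model_kwargs={"temperature": 0.1, "max_length": 500},  # assumed parameters
)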
The following code shows the dispatcher classes that read the Amazon Lex V2 and QnABot request formats and session attributes:

class LexV2SMLangchainDispatcher():

    def __init__(self, intent_request):
        # See Lex bot input format to Lambda:
        # https://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html
        self.intent_request = intent_request
        self.localeId = self.intent_request['bot']['localeId']
        self.input_transcript = self.intent_request['inputTranscript']  # user input
        self.session_attributes = utils.get_session_attributes(
            self.intent_request)
        self.fulfillment_state = "Fulfilled"
        self.text = ""  # response from endpoint
        self.message = {'contentType': 'PlainText', 'content': self.text}


class QnABotSMLangchainDispatcher():

    def __init__(self, intent_request):
        # QnABot session attributes
        self.intent_request = intent_request
        self.input_transcript = self.intent_request['req']['question']
        self.intent_name = self.intent_request['req']['intentname']
        self.session_attributes = self.intent_request['req']['session']

Prerequisites

To get started with the deployment, you need to fulfill the following prerequisites:

- Access to the AWS Management Console via a user who can launch AWS CloudFormation stacks
- Familiarity navigating the Lambda and Amazon Lex consoles

Deploy the solution

To deploy the solution, proceed with the following steps:

1. Choose Launch Stack to launch the solution in the us-east-1 Region.
2. For Stack name, enter a unique stack name.
3. For HFModel, we use the Hugging Face Flan-T5-XL model available on JumpStart.
4. For HFTask, enter text2text.
5. Keep S3BucketName as is. These are used to find Amazon Simple Storage Service (Amazon S3) assets needed to deploy the solution and may change as updates to this post are published.
6. Acknowledge the capabilities.
7. Choose Create stack.

There should be four successfully created stacks.

Configure the Amazon Lex V2 bot

There is nothing to do with the Amazon Lex V2 bot. Our CloudFormation template already did the heavy lifting.

Configure the QnABot

We assume you already have an existing QnABot deployed in your environment. But if you need help, follow these instructions to deploy it.

1. On the AWS CloudFormation console, navigate to the main stack that you deployed.
2. On the Outputs tab, make a note of the LambdaHookFunctionArn because you need to insert it in the QnABot later.
3. Log in to the QnABot Designer User Interface (UI) as an administrator.
4. In the Questions UI, add a new question.
5. Enter the following values:
   - ID – CustomNoMatches
   - Question – no_hits
   - Answer – Any default answer for "don't know"
6. Choose Advanced and go to the Lambda Hook section.
7. Enter the Amazon Resource Name (ARN) of the Lambda function you noted previously.
8. Scroll down to the bottom of the section and choose Create.

You get a window with a success message. Your question is now visible on the Questions page.

Test the solution

Let's proceed with testing the solution. First, it's worth mentioning that we deployed the FLAN-T5-XL model provided by JumpStart without any fine-tuning. This may have some unpredictability, resulting in slight variations in responses.

Test with an Amazon Lex V2 bot

This section helps you test the Amazon Lex V2 bot integration with the Lambda function that calls the LLM deployed in the SageMaker endpoint.

1. On the Amazon Lex console, navigate to the bot entitled Sagemaker-Jumpstart-Flan-LLM-Fallback-Bot. This bot has been configured to call the Lambda function that invokes the SageMaker endpoint hosting the LLM as a fallback intent when no other intents are matched.
2. Choose Intents in the navigation pane.
3. On the top right, a message reads, "English (US) has not built changes." Choose Build and wait for it to complete. Finally, you get a success message, as shown in the following screenshot.
4. Choose Test. A chat window appears where you can interact with the model.

We recommend exploring the built-in integrations between Amazon Lex bots and Amazon Connect, as well as messaging platforms (Facebook, Slack, Twilio SMS) or third-party contact centers using Amazon Chime SDK and Genesys Cloud, for example.

Test with a QnABot instance

This section tests the QnABot on AWS integration with the Lambda function that calls the LLM deployed in the SageMaker endpoint.

1. Open the tools menu in the top left corner.
2. Choose QnABot Client.
3. Choose Sign In as Admin.
4. Enter any question in the user interface.
5. Evaluate the response.

Clean up

To avoid incurring future charges, delete the resources created by our solution by following these steps:

1. On the AWS CloudFormation console, select the stack named SagemakerFlanLLMStack (or the custom name you set for the stack).
2. Choose Delete.
3. If you deployed the QnABot instance for your tests, select the QnABot stack.
4. Choose Delete.

Conclusion

In this post, we explored the addition of open-domain capabilities to a task-oriented bot that routes user requests to an open-source large language model. We encourage you to:

- Save the conversation history to an external persistence mechanism. For example, you can save the conversation history to Amazon DynamoDB or an S3 bucket and retrieve it in the Lambda function hook. In this way, you don't need to rely on the internal non-persistent session attributes management offered by Amazon Lex.
- Experiment with summarization – In multi-turn conversations, it's helpful to generate a summary that you can use in your prompts to add context and limit the usage of conversation history. This helps to prune the bot session size and keep the Lambda function memory consumption low.
- Experiment with prompt variations – Modify the original prompt description to match your experimentation purposes.
- Adapt the language model for optimal results – You can do this by fine-tuning the advanced LLM parameters such as randomness (temperature) and determinism (top_p) according to your applications. We demonstrated a sample integration using a pre-trained model with sample values, but have fun adjusting the values for your use cases.

In our next post, we plan to help you discover how to fine-tune pre-trained LLM-powered chatbots with your own data. Are you experimenting with LLM chatbots on AWS? Tell us more in the comments!

Resources and references

- Companion source code for this post
- Amazon Lex V2 Developer Guide
- AWS Solutions Library: QnABot on AWS
- Text2Text Generation with FLAN T5 models
- LangChain – Building applications with LLMs
- Amazon SageMaker Examples with JumpStart Foundation Models
- Amazon Bedrock – The easiest way to build and scale generative AI applications with foundation models
- Quickly build high-accuracy Generative AI applications on enterprise data using Amazon Kendra, LangChain, and large language models

About the Authors

Marcelo Silva is an experienced tech professional who excels in designing, developing, and implementing cutting-edge products. Starting off his career at Cisco, Marcelo worked on various high-profile projects including deployments of the first ever carrier routing system and the successful rollout of ASR9000.
His expertise extends to cloud technology, analytics, and product management, having served as senior manager for several companies like Cisco, Cape Networks, and AWS before joining GenAI. Currently working as a Conversational AI/GenAI Product Manager, Marcelo continues to excel in delivering innovative solutions across industries.

Victor Rojo is a highly experienced technologist who is passionate about the latest in AI, ML, and software development. With his expertise, he played a pivotal role in bringing Amazon Alexa to the US and Mexico markets while spearheading the successful launch of Amazon Textract and AWS Contact Center Intelligence (CCI) to AWS Partners. As the current Principal Tech Leader for the Conversational AI Competency Partners program, Victor is committed to driving innovation and bringing cutting-edge solutions to meet the evolving needs of the industry.

Justin Leto is a Sr. Solutions Architect at Amazon Web Services with a specialization in machine learning. His passion is helping customers harness the power of machine learning and AI to drive business growth. Justin has presented at global AI conferences, including AWS Summits, and lectured at universities. He leads the NYC machine learning and AI meetup. In his spare time, he enjoys offshore sailing and playing jazz. He lives in New York City with his wife and baby daughter.

Ryan Gomes is a Data & ML Engineer with the AWS Professional Services Intelligence Practice. He is passionate about helping customers achieve better outcomes through analytics and machine learning solutions in the cloud. Outside work, he enjoys fitness, cooking, and spending quality time with friends and family.

Mahesh Birardar is a Sr. Solutions Architect at Amazon Web Services with a specialization in DevOps and Observability. He enjoys helping customers implement cost-effective architectures that scale. Outside work, he enjoys watching movies and hiking.

Kanjana Chandren is a Solutions Architect at Amazon Web Services (AWS) who is passionate about Machine Learning. She helps customers in designing, implementing, and managing their AWS workloads. Outside of work she loves travelling, reading, and spending time with family and friends.
Facilitating the Most Live Streamed Super Bowl and Olympics Using AWS Services _ NBCUniversal Case Study _ AWS.txt
NBCUniversal Facilitates the Most Live Streamed Super Bowl and Olympics Using AWS Services

Learn how NBCUniversal used AWS services to facilitate the most live streamed Super Bowl and Olympics in history.

NBCUniversal, a multinational mass media and entertainment conglomerate, received streaming rights to Super Bowl LVI and the Winter Olympics in 2022. For the first time, these events would be simulcast and live streamed on its streaming platform, Peacock, making them Peacock's largest concurrent streaming events ever. NBCUniversal had to increase and reinforce Peacock's global infrastructure to reliably handle such scale, provide a first-rate viewing experience, and establish Peacock as a major streaming player powering international solutions.

NBCUniversal commissioned Amazon Web Services (AWS) and AWS support teams to prepare Peacock for those business-critical events, focusing on ad insertion and content delivery networks (CDNs) to provide what it estimated would be millions of users with a viewing experience free of playback disruptions. On AWS, NBCUniversal live streamed the Super Bowl to a record-breaking 6 million concurrent users and the Olympic Games to 1.5 million on Peacock and direct-to-consumer apps, and dropped its most-streamed movie and original TV series at the same time.

Key results:
- 6 million concurrent live stream Super Bowl views on Peacock and direct-to-consumer apps
- 1.5 million concurrent live stream views for the Olympics, a record high
- 13 million paid subscriptions
- Less than 1 year to drop the biggest-ever load of TV and film content
- 1 week to increase scalability and reliability of content delivery

Opportunity | Using AWS Services to Provide High Playback and Picture Quality for Live Streaming

Launched nationally in July 2020, Peacock offers a catalog of entertainment content from NBCU and beyond, with live sports, critically acclaimed series like The Office and Yellowstone, blockbuster movies, breaking news, and more. Offering video on demand and live broadcasting, the streaming service launched with the backing of the Comcast platform, fueled by Sky's technology.

For the Super Bowl and the Beijing Olympics, Peacock had to provide a cinematic viewing experience with high playback and picture quality that kept viewers satisfied. "Our major key performance indicator for live events is playback failures because if customers are watching a live event and their playback fails, they aren't happy," says Chas Mastin, vice president of quality and CDN management at Peacock. NBCUniversal also needed to insert personalized ads on Peacock at scale. "If users enter the live stream, they should be able to scrub back and watch the video, and we should be able to insert ads on the content and deliver an optimal user experience," says Naman Diwaker, director of video software engineering at Peacock.

Having already used AWS, NBCUniversal was familiar with its scalability and global footprint. "We were looking for a long-term relationship, and AWS gave us the confidence that it would be the right fit and would tackle challenges alongside us," says Patrick Miceli, executive vice president and chief technology officer, Direct-to-Consumer, at NBCUniversal. The company engaged AWS Enterprise Support teams, including AWS Infrastructure Event Management (AWS IEM), which offers architecture and scaling guidance and operational support during the preparation and running of planned events.

Solution | Improving Streaming Quality for Millions of Concurrent Viewers Using Amazon CloudFront and Amazon EC2

To achieve that, NBCUniversal and AWS had daily calls starting in May as they tested and iterated using AWS services. Peacock began using AWS Elemental MediaTailor, a channel assembly and personalized ad insertion service for video providers to create linear over-the-top (OTT) channels using existing video content and monetize those channels with personalized advertising. "The idea was to consolidate all our ad insertions using a single solution," says Diwaker. "We slowly tested everything at the scale of millions of concurrent users."

Included in Peacock's group of CDNs was Amazon CloudFront, a CDN service built for high performance, security, and developer convenience. Besides being economically efficient, Amazon CloudFront offers a global edge network that delivers content to end users with lower latency. "CDNs with large footprints, like Amazon CloudFront, are key because, by using them, we perform better on edge networks to provide customers high-quality video," says Mastin. "We used Amazon CloudFront and AWS Elemental MediaTailor to optimize our core video key performance indicators and resolve performance issues like bottlenecks. Amazon CloudFront was one of the best of our CDNs."

In early December 2021, months into testing, the Peacock team uncovered scalability issues with AWS Elemental MediaTailor but quickly resolved them by engaging AWS Elemental Media Event Management (AWS Elemental MEM), a support program designed to improve the operational reliability of business-critical video events. "Using AWS, we don't just get a solution that either works or doesn't; we can iterate together and improve quickly if we find issues," says David Bohunek, senior vice president, Playback Services, at NBCUniversal. Peacock deploys its encoding and packaging software on Amazon Elastic Compute Cloud (Amazon EC2), which offers the broadest and deepest compute platform to help companies best match the needs of their workload. Peacock content is encoded, packaged, and sent to AWS, where the CDNs take the content and deliver it to viewers. Peacock and the AWS team chose the right type and size of Amazon EC2 instances to scale during the Super Bowl and the Olympics.

Outcome | Personalizing Ads for NBCUniversal Customers Using AWS Elemental MediaTailor

NBCUniversal and AWS began collaborating in May 2021, and in February 2022, Peacock broke every record for customer gains and engagement due to the Super Bowl, the Olympics, and its release of a movie and new drama series. The Beijing Olympics were the most-streamed Olympic Games ever, at 1.5 million viewers. The Super Bowl was the most-streamed Super Bowl in history, with Peacock and other direct-to-consumer apps supporting 6 million concurrent users at peak traffic. Also, on February 13, Peacock dropped Bel-Air, which became its most-streamed original series, reaching 8 million accounts as of May 2022. Heading into Valentine's Day weekend, Peacock, in partnership with Universal Pictures, launched Marry Me, the platform's most-watched movie to date. The streaming service ended the first quarter with over 28 million monthly active accounts, 13 million paid subscriptions, and more than 60 million monthly active users.

"We delivered the Super Bowl in the standard latency, about 30 seconds behind the broadcast streams," says Bohunek. "We want to get on the broadcast level next time." NBCUniversal plans to use the dynamic adaptive streaming feature of AWS Elemental MediaTailor to personalize ads for every stream to every user (a minimal sketch of how a MediaTailor ad-insertion configuration is defined follows this case study). The company also will add 4K and high-definition resolution and even lower latency.

NBCUniversal took advantage of AWS services and support to break records and establish Peacock as a streaming service competitor. "As a new service with the Super Bowl and the Olympics, Peacock could not have problems if it was to survive," says Mastin. "AWS understood that our customers getting the content that they desire was life or death for us, and we're still here."

About NBCUniversal

A subsidiary of Comcast Corporation, NBCUniversal is a media and entertainment company that develops, produces, and markets entertainment and news to a global audience.

AWS Services Used

Amazon CloudFront: A content delivery network (CDN) service built for high performance, security, and developer convenience.

Amazon Elastic Compute Cloud (Amazon EC2): Offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

AWS Elemental MediaTailor: A channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content.

AWS Elemental Media Event Management (MEM): A consultative support program designed to improve the operational reliability of your business-critical video workloads.
FanCode Case Study - Amazon Web Services (AWS).txt
From 2019 to 2021, FanCode worked with an end-to-end video communication platform hosted on AWS to deliver its live streams. However, changes often took up to weeks to implement as FanCode had to work with the vendor’s operations team. This limited its agility and flexibility in responding to customer requests and feedback.   Français Benefits of AWS AWS Services Used Simplified distribution of live streams to a broad range of video playback devices, including web players, smart phones, and connected TVs Español Amazon EC2 Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Learn More 日本語 About FanCode Amit Mirchandani Head of Engineering, FanCode AWS has unlocked many possibilities for the FanCode team. Aside from new features, such as greater personalization for audiences, introducing advertising-based models, and productivity improvements, we plan to increase the number of brand partnerships, increase our merchandise offering, and channel more users to our ecommerce store. On that front, we will be working with Amazon to leverage its last-mile delivery expertise and other best practices. Ultimately, it is about giving users the best possible sports entertainment experience, and we have been able to achieve that with help from AWS.” 한국어 Cloud-based media services that deliver secure live streams Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. The AWS Cloud has provided FanCode with the scalability and low latency it needs to ensure consistent, high quality live streams for all its users. FanCode deployed Amazon Elastic Compute Cloud (Amazon EC2) for secure and scalable compute capacity and Amazon Aurora for a fully managed relational database that provides high performance and availability. It also uses Amazon ElastiCache and Amazon CloudFront to minimize latency and shorten live stream loading times for viewers.  Get Started Launched FanCode within 3 months instead of 8 months In 2021, FanCode decided to deploy AWS Media Services, and move away from its previous end-to-end video platform. Using AWS Elemental MediaLive to encode and stream live videos, FanCode’s developers now deploy new channels within 15 minutes to test new features for its video player. To learn more, visit aws.amazon.com/media. FanCode is a sports content aggregator under Dream Sports, an India-based sports technology company. The platform provides live streaming services for sporting events, the latest athlete- and team-related content and statistics, as well as an online merchandise store. Since its founding in 2019, FanCode has grown from 2 million users in the first year to over 80 million in India in 2022.  中文 (繁體) Bahasa Indonesia Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.  FanCode is a sports content aggregator incubated by Dream Sports, an India-based sports technology company. The platform provides live streaming services for sporting events, the latest athlete- and team-related content and statistics, as well as an ecommerce marketplace for sports merchandise. 
Since its founding in 2019, FanCode has grown from 2 million users in the first year to over 80 million in India in 2022.  Can deploy new channels within 15 minutes to test new features for its video player Additionally, with AWS’s pay-as-you-go pricing approach where it only pays for the services consumed, FanCode estimates that being on the AWS Cloud saves it 15 percent/month on operational costs, compared to an on-premises infrastructure. Ρусский In 2019, FanCode streamed about 350 sporting events with near-zero downtime. During a major cricket event, the West Indies tour of India in 2022, FanCode was able to scale its infrastructure to support up to 6 million concurrent viewers without suffering from any downtime or latency issues thanks to the AWS Cloud.  عربي Amazon ElastiCache 中文 (简体) Learn more » Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Amazon Aurora To efficiently and cost-effectively support surges in the number of viewers during live streams, FanCode decided to build its infrastructure on the cloud. It chose Amazon Web Services (AWS) as its preferred cloud provider as Dream Sports has had a good experience with AWS. By tapping the AWS expertise that Dream Sports’ IT team has, FanCode was able to launch the platform in just 3 months, well under its planned timeframe. The aggregator estimated that it would have taken up to 8 months if it had to build from scratch on an on-premises infrastructure.  Türkçe Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. English FanCode Grows 40x in 3 years By Delivering High Quality Live Streams on AWS Tapping the cloud to scale computing capacity Deutsch FanCode additionally deployed AWS Elemental MediaPackage to prepare and protect live videos streams over the internet. The service simplifies the distribution of its live streams to a broad range of video playback devices, including web players, smart phones, and connected TVs.  Tiếng Việt Italiano ไทย Amazon CloudFront Contact Sales 2022 Unlocking new innovations with the AWS Cloud “AWS has unlocked many possibilities for the FanCode team. Aside from new features, such as greater personalization for audiences, introducing advertising-based models, and productivity improvements, we plan to increase the number of brand partnerships, increase our merchandise offering, and channel more users to our ecommerce store. On that front, we will be working with Amazon to leverage its last-mile delivery expertise and other best practices. Ultimately, it is about giving users the best possible sports entertainment experience, and we have been able to achieve that with help from AWS,” said Amit Mirchandani, head of engineering at FanCode.  FanCode’s developers are also testing out new features, including ways to overlay athlete- and team-related data over live streams using machine learning (ML). FanCode also wants to enhance personalization by providing content and product recommendations based on users’ favorite teams and players. On the backend, FanCode plans to expand its microservices stack into Kubernetes, which will help developers spend less time deploying, scaling, and managing Kubernetes applications. Português
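As a rough sketch of the packaging step described above (not FanCode's actual code), the following boto3 calls create an AWS Elemental MediaPackage channel and an HLS origin endpoint whose URL can be handed to web, mobile, and connected-TV players. The channel IDs, region, and segment settings are illustrative assumptions.

import boto3

mediapackage = boto3.client("mediapackage", region_name="ap-south-1")

# Hypothetical channel for one live event; IDs are placeholders.
channel = mediapackage.create_channel(Id="fancode-event-01")

endpoint = mediapackage.create_origin_endpoint(
    ChannelId="fancode-event-01",
    Id="fancode-event-01-hls",
    HlsPackage={
        "SegmentDurationSeconds": 4,
        "PlaylistWindowSeconds": 60,
    },
)
print(endpoint["Url"])  # playback URL distributed to player devices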
FanDuel Migrates to AWS in Less than 3 Weeks Improves the Customer Experience _ AWS.txt
On AWS, FanDuel can support cross-functional teams that previously never interacted with one another. Its FanDuel+ department can now collaborate with other teams that have been using AWS since 2014. “On AWS, we broke down internal siloes,” says Girard. Since the migration, FanDuel+ has grown its organization to include dedicated product, commercial, and engineering teams. “AWS convinced a lot of internal stakeholders that we could scale, and we have,” says Girard. By migrating to AWS, we had the flexibility to experiment with lower latency video streaming that enhances our customer experience.”  to migrate four channels to AWS Français Increased Outcome | Preparing for Continued Exponential Growth Español Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer experience. insight into video-streaming processes AWS Elemental MediaPackage 日本語 Customer Stories / Games On AWS, FanDuel has improved the customer experience and security. “We haven’t had a single outage since migrating to AWS,” says Girard. “The reliability that we can provide to our customers has improved tremendously.” The company monitors video input and processing using AWS Media Services Application Mapper, which automatically provisions the services necessary to visualize media services, their relationships, and the near-real-time status of linear video services. “Our operations teams can make their workflows more efficient so we can introduce not just monitoring but also automation and orchestration,” says Girard. Using Amazon CloudWatch—which collects and visualizes real-time logs, metrics, and event data in automated dashboards—FanDuel monitors its AWS Elemental services and CloudFront for suspicious logins and to see that engineers adhere to multifactor authentication policies. Eric Girard Senior Manager of Video Architecture, FanDuel Group to build the first channel on AWS 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Improved AWS Elemental MediaLive Get Started About FanDuel Group By 2021, the company’s app, FanDuel+, had four live streaming linear channels through which it offered one-time sporting events for 50 million US households to watch and wager on. Without a dedicated engineering team for video streaming, the company relied on third-party off-the-shelf products to facilitate video encoding and distribution to customers. The company sought to improve the viewer experience with lower latency, greater reliability, and scalability. AWS Services Used Opportunity | Seeking Reliability on the Cloud 中文 (繁體) Bahasa Indonesia No outages The company was drawn to the performance, global reliability, and availability of AWS solutions. “By migrating to AWS, we had the flexibility to experiment with lower latency video streaming that enhances our customer experience,” says Eric Girard, senior manager of video architecture at FanDuel. “AWS engineers and architects provided support along the way for architecture, configuration, engineering, and operational activities to help train us and deploy this new infrastructure.” Contact Sales Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي Learn more » 中文 (简体) Learn more » experienced on live streams FanDuel plans to live stream other sports on its linear channels. 
It has signed contracts with more content partners, and because it can scale on AWS, the company intends to add more than 5,000 hours of content in 2023. FanDuel plans to migrate all its one-time television channels and video-on-demand services to AWS. In September 2022, FanDuel TV launched; it’s the first linear/digital network dedicated to sports-wagering content. 2022   Overview FanDuel Migrates to AWS in Less than 3 Weeks, Improves the Customer Experience Solution | Migrating Quickly for High Performance and Reliability Founded in 2009, FanDuel provides an online sports-betting experience to more than 12 million customers in the United States and Canada. The company has grown exponentially year over year since the US Supreme Court struck down the federal ban on sports gambling in 2018, and the states began legalizing it: FanDuel’s annual revenue grew by 81 percent to $896 million in 2020 and by 113 percent to $1.9 billion in 2021. Türkçe English In 2021, sports gaming company FanDuel Group (FanDuel) faced an obstacle that threatened to halt its year-over-year exponential growth: its third-party video-streaming vendor couldn’t handle the 24/7 live streams that facilitated near-real-time betting for its customers. Expecting continued growth, FanDuel needed to scale without adversely affecting the viewing experience. The company contacted Amazon Web Services (AWS) in December 2021, and by January 2022, it had migrated four live stream linear channels to AWS, where it gained high reliability and scalability. AWS Elemental MediaPackage can take a single video input from an encoder, package it in multiple streaming formats, and automatically scale outputs in response to audience demand. customer experience AWS Elemental MediaConnect AWS Elemental MediaLive is a broadcast-grade live video processing service that creates high-quality streams for delivery to broadcast TVs and internet-connected devices. Learn more » Although many internal stakeholders at FanDuel anticipated the project would take 6 months, the team migrated its four channels to AWS in less than 3 weeks. “The cross-functional relationships across our teams meant that I could set up accounts in 1 day and start architecting and building the solution in less than 4 days,” says Girard. “AWS was instrumental in helping us launch and engineer the solution. With a close collaboration between our teams and AWS, we can operate more efficiently.” FanDuel built its first channel in 10 days, which it replicated for the remaining three channels. Next came rigorous failover testing. Deutsch Once the video streams are input into AWS Elemental MediaLive, HTTP live streaming outputs go to AWS Elemental MediaPackage, which prepares and protects video for delivery over the internet to connected devices. “We like AWS Elemental MediaPackage because of its capability and functionality, such as the restart, rewind, record feature,” says Girard. “We can also use its digital-rights management to protect our content.” From AWS Elemental MediaPackage, the video goes to Amazon CloudFront—a content delivery network service built for high performance, scalability, security, and developer convenience—and then to FanDuel’s application. AWS Elemental MediaConnect is a high-quality transport service for live video. It delivers the reliability and security of satellite and fiber-optic combined with the flexibility, agility, and economics of IP-based networks. Tiếng Việt Founded in 2009, FanDuel Group is a sports gaming company owned by Flutter Entertainment. 
With offices in Los Angeles, New York, and Atlanta, it offers an online sports-betting experience to more than 12 million customers in the United States and Canada. Learn how FanDuel in the gaming industry improved customer experience using AWS Elemental MediaConnect. Italiano ไทย Amazon CloudFront < 3 weeks On AWS, FanDuel not only rapidly enhanced the customer experience but also set itself up for continued growth and improvement. “AWS doesn’t just deliver reliability, it also supports us in scaling and using new technology,” says Girard. “We have flexibility to innovate and improve long term.” 10 days FanDuel uses AWS Elemental MediaConnect, a high-quality, highly reliable transport service for live video, to transmit its video signals over the public internet to AWS from its headquarters in Los Angeles. FanDuel has created two redundant paths by which its video streams pass to AWS Elemental MediaLive, a broadcast-grade live video processing service that creates high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices. “We created the AWS Elemental MediaLive input for failover between those two paths,” says Girard. “We can turn off one of those paths if we need to—or, if one of them breaks, the video stream will stay on air.” Português
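To illustrate the transport step described above, the sketch below creates one of the two redundant AWS Elemental MediaConnect flows with boto3. It is a minimal example under assumed values: the flow name, ingest protocol, and whitelist CIDR are placeholders rather than FanDuel's production settings.

import boto3

mediaconnect = boto3.client("mediaconnect", region_name="us-west-2")

# Hypothetical flow that ingests a Zixi push from an on-premises encoder;
# the whitelist CIDR and names are placeholders.
flow = mediaconnect.create_flow(
    Name="linear-channel-1-path-a",
    AvailabilityZone="us-west-2a",
    Source={
        "Name": "la-headend-primary",
        "Protocol": "zixi-push",
        "WhitelistCidr": "203.0.113.0/24",
    },
)
print(flow["Flow"]["FlowArn"])

A second flow on a separate path would back this one, matching the failover design the case study describes.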
Fantom Case Study - Amazon Web Services (AWS).txt
Fantom's Blockchain Platform Raises the Bar for Transaction Verifications

About Fantom
Fantom is a public, decentralized blockchain platform servicing a wide range of decentralized applications in business and government settings. Fantom developed one of the first open-source public blockchain platforms that runs asynchronously to complete each transaction in one second. Today, Fantom has a network of more than 100 partners and investors, 8,800 smart contracts deployed on its platform, and a market capitalization of USD 1 billion.

Setting a Solid Infrastructural Foundation
Fantom runs an open-source, public blockchain platform that provides ledger services to individuals and enterprises seeking greater security, traceability, and veracity across decentralized applications in business and government settings. Fantom turned to Amazon Web Services (AWS) to build a stable, secure, and fast platform to better serve a wide range of private users and capture new enterprise users in the financial and public sectors. With Amazon Elastic Compute Cloud (Amazon EC2), Fantom optimized its platform's speed and security and was recognized as one of the fastest blockchain platforms in April 2021.

Offering a highly stable and efficient platform, Fantom is actively expanding its services to new enterprise users in sectors including financial services, healthcare, and logistics. According to Quan Nguyen, chief technology officer at Fantom, the company uses Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon RDS for PostgreSQL, and Amazon RDS for MySQL to optimize its platform and development environment, serving hundreds of developers and thousands of users with low network latency and 99.9 percent uptime.

"Compared to other cloud providers and services, we've found AWS Cloud, and Amazon EC2 in particular, to be the most reliable, stable, and secure. We've actively recommended Amazon EC2 to our members since the platform first launched in December 2019," says Nguyen.

With secure, resizable compute capacity from Amazon EC2, Fantom offers fast, traceable multi-chain support for its business users to do more. Users can build securities exchanges using Fantom's patented distributed ledger technologies for smart contracts, and accurately track shipments to identify potential counterfeit goods.

Business Growth, Full Speed Ahead
Since running on AWS in 2018, Fantom has grown its network and ecosystem from a few partners and investors to more than a hundred. The number of smart contracts (programs that carry out a specific set of instructions and cannot be changed once in force) deployed on its platform has increased from 0 to 8,800, while its market capitalization expanded from USD 40 million to USD 1 billion. On the AWS Cloud, Fantom has achieved 400 times growth in the number of daily transactions, with 3 times faster peer-to-peer synchronization for sub-second transaction verification speeds and a better user experience.

Benefits of AWS: 99.9 percent platform uptime; blockchain transactions verified within 1 second each.

With AWS, Fantom can now pursue its business goals with the assurance that its software infrastructure is robust enough to meet the needs of a wider pool of users. Michael Kong, chief executive officer and chief information officer at Fantom, adds, "We're planning to enhance our platform with more AWS services to further improve platform nodes, create better monitoring capabilities, and provide new business analytics and recommendations to customers."

To learn more, visit aws.amazon.com/financial-services.
Fatshark Delivers Warhammer 40K_ Darktide Fully on AWS for Millions of Players _ Case Study _ AWS.txt
AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your public applications. Learn more » Français 2023 The search for backend services to make a high-quality game experience led Fatshark to AWS. Claridge says, “If we migrate to AWS, there are so many solutions available that we can use to improve the quality of services to our players.” The team started the migration to AWS in early 2020, and Amazon GameLift FleetIQ was a key part of the journey. Amazon GameLift FleetIQ optimizes the use of low-cost Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which let customers take advantage of unused Amazon EC2 capacity in the AWS Cloud, for cloud-based game hosting to deliver inexpensive, resilient game hosting. After the core was up and running, Fatshark started using a range of other services in a serverless development environment. Español Optimized Andrew Claridge Lead Backend Developer, Fatshark 日本語 Amazon GameLift Customer Stories / Games ultralow latency gaming Efficiency was an overriding concern on the project, and Fatshark saved time and effort by using Amazon DynamoDB, a fast, flexible NoSQL database service for single-digit millisecond performance at virtually any scale. “We don’t have to worry about things like database scaling using Amazon DynamoDB,” says Claridge. “It just works.” Fatshark has also accelerated development by using infrastructure as code. Claridge says, “Using infrastructure as code means that we can easily and cost-efficiently stand up developer environments that are one-to-one clones of production environments.” The time saved on building developer environments has given the team more freedom to test new features. Contact Sales Fatshark developed the game backend entirely on AWS with a team of only eight people. After migrating the backend logic for Darktide to the cloud, Fatshark used a host of AWS services within AWS for Games, a purpose-built game development offering. “The fact that there are so many AWS solutions gives us the confidence to keep building, because we know that we’re not walking into a trap,” says Claridge. “There will almost certainly be something that solves our use case.” Fatshark has also accelerated the development process by attracting talent familiar with using AWS. 한국어 AWS Global Accelerator Overview | Opportunity | Solution | Outcome | AWS Services Used In gaming, usage tends to spike very quickly. “Our peaks and troughs are highly compact,” says Claridge. “In just a matter of hours, we go from quite chill to a lot of people playing during the evening.” Moreover, Fatshark is especially well known for its rhythmic approach to melee combat. As players engage artificial intelligence enemies, their parries and redoubts fall into a familiar pattern. One service that Fatshark uses to deliver seamless gaming is AWS Global Accelerator, a networking service that optimizes the user path to applications to keep packet loss, jitter, and latency consistently low. When groups of friends distributed across several continents set up a Darktide game together, Fatshark uses AWS Global Accelerator to eliminate lag spikes. Claridge says, “Using AWS Global Accelerator, our servers aren’t on fire trying to catch up because people are pinging around all over the place.” The result is a high-quality gaming experience that scales to meet spikes in demand. GameLift FleetIQ optimizes the use of low-cost Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances for cloud-based game hosting. 
With GameLift FleetIQ, you can work directly with your hosting resources in Amazon EC2 and Amazon EC2 Auto Scaling while taking advantage of GameLift optimizations to deliver inexpensive, resilient game hosting for your players. Learn more » Improved About Fatshark Get Started Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools. AWS Services Used Fatshark Delivers Warhammer 40K: Darktide Fully on AWS for Millions of Players After running the backend of previous games on managed services, Fatshark wanted to have more control of features for Darktide. To achieve that goal, the team started using high-level AWS services and chose lower-level services when that seemed optimal. “We have a philosophy of almost entirely starting with serverless technology because it lets our smaller team innovate like a larger studio, and then we drop down when we want more control over the environment,” says Claridge. “Using AWS, we can start quickly and take on complexity when we need it but not when we don’t.” That strategic approach helps Fatshark maximize the impact of its talent pool. Outcome | Focusing on Features to Enhance Player Experience  中文 (繁體) Bahasa Indonesia Amazon GameLift deploys and manages dedicated game servers hosted in the cloud, on-premises, or through hybrid deployments. GameLift provides a low-latency and low-cost solution that scales with fluctuating player demand. Learn more » infrastructure costs Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي 中文 (简体) Adapted gaming experience   Overview Using AWS, we have a very powerful infrastructure for our game, and we can focus on writing features." Fatshark chose to meet those needs by developing Darktide on Amazon Web Services (AWS). “Because we’ve built on AWS before, we know that the game backend, the communication features, and the gameplay servers can scale simultaneously to the level that we need,” says Claridge. Fatshark used services such as Amazon GameLift, a dedicated game server hosting solution, to achieve its desired levels of elasticity, scalability, and cost optimization, which helped prepare the studio to launch Darktide globally. Türkçe Opportunity | Migrating to a Serverless Infrastructure with Amazon GameLift  English Amazon GameLift FleetIQ Founded in 2007, Fatshark is a Stockholm-based studio with two fully supported online cooperative multiplayer games. Both take place within the Warhammer universe from Games Workshop. “We are quite fanatical about the Warhammer universe,” says Claridge. The team was excited to add a new chapter to the franchise, and Fatshark knew that it was time to use a new approach. “We want to make all this cool stuff, but we don’t particularly want to host it,” says Claridge. Provided Deutsch Learn how Fatshark built its new game on the cloud using AWS for Games solutions. Amazon DynamoDB Solution | Improving Global Gaming Performance Using AWS  Tiếng Việt Italiano ไทย to rapid scaling Learn more » Fatshark, a Swedish video game developer, wanted to build its most complex game yet—Warhammer 40,000: Darktide. 
To build on the success of the studio’s Warhammer: Vermintide series, the combat-focused cooperative multiplayer game must offer ultralow latency to over 100,000 concurrent players. “If players join, they need a server, they need to talk to all their friends, and they need to get to all their characters,” says Andrew Claridge, lead backend developer at Fatshark. Fatshark, a Swedish video game developer, creates high-quality PC and console games. The studio has 200 employees and two titles—Warhammer: Vermintide and Warhammer 40,000: Darktide. Português Fatshark is confident in the game that it has built and is eager to see gamers enjoy the new title. “Using AWS, we have a very powerful infrastructure for our game, and we can focus on writing features,” says Claridge. Now the team aims to keep improving the gaming experience. Claridge says, “Given the smooth experience that we’ve had on AWS so far, we’re looking for new ways to use features and create awesome things for our players.”
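As a hedged illustration of the Amazon GameLift FleetIQ hosting model mentioned above, the sketch below registers a game server group that balances Spot and On-Demand capacity. The role ARN, launch template, instance types, and group sizes are assumptions for illustration, not Fatshark's production values.

import boto3

gamelift = boto3.client("gamelift", region_name="eu-north-1")

# Hypothetical FleetIQ game server group; all identifiers are placeholders.
group = gamelift.create_game_server_group(
    GameServerGroupName="darktide-servers",
    RoleArn="arn:aws:iam::123456789012:role/GameLiftFleetIQRole",
    MinSize=1,
    MaxSize=500,
    LaunchTemplate={"LaunchTemplateName": "darktide-game-server"},
    InstanceDefinitions=[
        {"InstanceType": "c5.xlarge"},
        {"InstanceType": "c5a.xlarge"},
    ],
    BalancingStrategy="SPOT_PREFERRED",  # use Spot when viable, On-Demand otherwise
)
print(group["GameServerGroup"]["GameServerGroupArn"])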
Finch Computing Reduces Inference Costs by 80 Using AWS Inferentia for Language Translation _ Case Study _ AWS.txt
Amazon Elastic Compute Cloud (Amazon EC2) Opportunity | Seeking Scalability and Cost Optimization for ML Models Français 80% decrease Amazon Elastic Container Service (Amazon ECS) 3 additional languages Español Optimized 日本語 The strategy involved the deployment of Docker containers to Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it simple for organizations to deploy, manage, and scale containerized applications. The solution incorporated AWS Deep Learning AMIs (DLAMI), preconfigured environments to build deep learning applications quickly. Finch plugged the AWS Inferentia AMIs into its DevOps pipeline and updated its infrastructure-as-code templates to use AWS Inferentia to run customized containers using Amazon ECS. “Once we had our DevOps pipeline running on Amazon EC2 Inf1 Instances and Amazon ECS, we were able to rapidly deploy more deep learning models,” says Franz Weckesser, chief architect at Finch. In fact, Finch built a model to support the Ukrainian language in just 2 days. Within a few months, Finch deployed three additional ML models—supporting NLP in German, French, and Spanish—and improved the performance of its existing Dutch model. Scott Lightner CTO and Founder, Finch Computing 2022 Outcome | Migrating Additional Applications to AWS Inferentia Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Learn more » 한국어 Finch Computing is a natural language processing company that uses machine learning to help customers gain near-real-time insights from text. Clients include media companies and data aggregators, US government and intelligence, and financial services. Overview | Opportunity | Solution | Outcome | AWS Services Used Together, Finch and Slalom built a solution that optimized the use of AWS Inferentia–based Amazon EC2 Inf1 Instances, which deliver high-performance ML inference at a low cost in the cloud. “Given the cost of GPUs, we simply couldn’t have offered our customers additional languages while keeping our product profitable,” says Lightner. “Amazon EC2 Inf1 Instances changed that equation for us.” throughput and response times for customers supported because of cost-savings Finch Computing develops natural language processing (NLP) technology to provide customers with the ability to uncover insights from huge volumes of text data, and it was looking to fulfill customers’ requests to support additional languages. Finch had built its own neural translation models using deep learning algorithms with a heavy compute requirement that depended on GPUs. The company was looking for a scalable solution that would scale to support global data feeds and give it the ability to iterate new language models quickly without taking on prohibitive costs. About Finch Computing AWS Services Used for new products 中文 (繁體) Bahasa Indonesia At AWS re:Invent 2021, a yearly conference hosted by AWS for the global cloud computing community, Finch representatives learned about AWS Inferentia–based instances in the Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. AWS introduced Finch to AWS Partner Slalom, a consulting firm focused on strategy, technology, and business transformation. 
For 2 months after AWS re:Invent, Slalom and Finch team members worked on building a cost-effective solution. “In addition to getting guidance from the AWS team, we connected with Slalom, which helped us optimize our workloads and accelerate this project,” says Scott Lightner, Finch’s founder and chief technology officer. Given the cost of GPUs, we simply couldn’t have offered our customers additional languages while keeping our product profitable. Amazon EC2 Inf1 Instances changed that equation for us.” Contact Sales Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. in computing costs Learn more » Additional customers Faster time to market Overview AWS Inferentia is Amazon's first custom silicon designed to accelerate deep learning workloads and is part of a long-term strategy to deliver on this vision. Get Started attracted by using the service Solution | Building a Solution Using AWS Inferentia Türkçe With offices in Reston, Virginia, and Dayton, Ohio, Finch—a combination of the words “find” and “search”—serves media companies and data aggregators, US intelligence and government organizations, and financial services companies. Its products center around NLP, a subset of artificial intelligence that trains models to understand the nuances of human language, including deciphering tone and intent. Its product Finch for Text uses dense, parallel machine learning (ML) computations that rely on high-performance, accelerated computing so that it can deliver near-real-time insights to customers about their informational assets. For example, its entity disambiguation feature provides customers with the ability to interpret the correct meaning of a word that has multiple meanings or spellings. Since its inception, Finch had been using solutions from Amazon Web Services (AWS). The company began looking at AWS Inferentia, a high performance machine learning inference accelerator, purpose built by AWS, to accelerate deep learning workloads. Creating a compute infrastructure that is centered around the use of AWS Inferentia, Finch reduced its costs by more than 80 percent compared with the use of GPUs while maintaining its throughput and response times for its customers. With a powerful compute infrastructure in place, Finch has accelerated its time to market, expanded its NLP to support three additional languages, and attracted new customers. English Using Amazon EC2 Inf1 Instances, the company improved the speed of developing these new products while reducing its inference costs by more than 80 percent. The addition of the new models attracted customers interested in gaining insights from the additional languages and received positive feedback from existing customers. “There are always challenges in making wholesale changes to the infrastructure,” says Lightner. “But we were able to quickly overcome them with the perseverance of our team with help from Slalom and AWS. The end result made it worthwhile.” Finch is looking to continue migrating more models to AWS Inferentia. These models include Sentiment Assignment, which identifies a piece of content as positive, negative, or neutral, and a new feature called Relationship Extraction, a compute-intensive application that discovers relationships between entities mentioned in text. 
And Finch continues to add new languages, with plans for Arabic, Chinese, and Russian next. “Our experience working on AWS Inferentia has been great,” says Lightner. “It’s been excellent having a cloud provider that works alongside us and helps us scale as our business grows.” The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. Learn more » Deutsch Tiếng Việt Italiano ไทย Finch Computing Reduces Inference Costs by 80% Using AWS Inferentia for Language Translation Finch expanded its capabilities to support Dutch, which sparked the idea that it needed to scale further to include French, German, Spanish, and other languages. This decision was valuable not only because Finch’s clients had a lot of content in those languages but also because models that could support additional languages could attract new customers. Finch needed to find a way to process a significant amount of additional data without affecting throughput or response times, critical factors for its clients, or increasing deployment costs. AWS Deep Learning AMIs (DLAMI) Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. Learn more » Amazon Inferentia Português The company’s proprietary deep learning translation models were running on PyTorch on AWS, an open-source deep learning framework that makes it simple to develop ML models and deploy them to production. Finch used Docker to containerize and deploy its PyTorch models. Finch migrated these compute-heavy models from GPU-based instances to Amazon EC2 Inf1 Instances powered by AWS Inferentia. Amazon EC2 Inf1 Instances were built to accelerate a diverse set of models—ranging from computer vision to NLP. The team could build a solution that mixed model sizes and maintained the same throughput as it had when it used GPUs but at a significantly lower cost. “Using AWS Inferentia, we are able to get the throughput and performance needed at a price point that our customers can afford,” Lightner says.
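The case study does not include Finch's code, but the general workflow for moving a PyTorch model onto AWS Inferentia is to compile it with the torch-neuron package from the AWS Neuron SDK before deploying it in a container on Inf1 instances. The sketch below uses a toy stand-in network; the architecture, vocabulary size, and sequence length are assumptions, not Finch's proprietary translation models.

import torch
import torch_neuron  # AWS Neuron SDK for PyTorch on Inf1 instances

# Stand-in for a translation network: any torch.nn.Module can be compiled
# the same way. The toy model and input shape below are illustrative only.
model = torch.nn.Sequential(
    torch.nn.Embedding(32000, 256),
    torch.nn.Linear(256, 32000),
).eval()
example = torch.zeros(1, 128, dtype=torch.long)  # a batch of 128 token IDs

# Compile for AWS Inferentia; operators Neuron cannot place on the chip
# automatically fall back to CPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("translation_model_neuron.pt")

The saved artifact can then be packaged into the Docker image that Amazon ECS schedules onto Inf1-backed capacity.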
Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library

by Zmnako Awrahman, Anastasia Pachni Tsitiridou, Dhawalkumar Patel, Rahul Huilgol, Roop Bains, and Wioletta Stobieniecka | on 12 JUN 2023 | in Amazon SageMaker, Best Practices, Generative AI, PyTorch on AWS, Technical How-to

GPT-J is an open-source 6-billion-parameter model released by Eleuther AI. The model is trained on the Pile and can perform various tasks in language processing. It can support a wide variety of use cases, including text classification, token classification, text generation, question answering, entity extraction, summarization, sentiment analysis, and many more. GPT-J is a transformer model trained using Ben Wang's Mesh Transformer JAX.

In this post, we present a guide and best practices on training large language models (LLMs) using the Amazon SageMaker distributed model parallel library to reduce training time and cost. You will learn how to train a 6-billion-parameter GPT-J model on SageMaker with ease. Finally, we share the main features of SageMaker distributed model parallelism that help speed up training.

Transformer neural networks

A transformer neural network is a popular deep learning architecture for solving sequence-to-sequence tasks. It uses attention as the learning mechanism to achieve close to human-level performance. Some of the other useful properties of the architecture compared to previous generations of natural language processing (NLP) models include the ability to distribute, scale, and pre-train. Transformer-based models can be applied across different use cases when dealing with text data, such as search, chatbots, and many more. Transformers use the concept of pre-training to gain intelligence from large datasets. Pre-trained transformers can be used as is or fine-tuned on your datasets, which can be much smaller and specific to your business.

Hugging Face on SageMaker

Hugging Face is a company developing some of the most popular open-source libraries providing state-of-the-art NLP technology based on transformer architectures. The Hugging Face transformers, tokenizers, and datasets libraries provide APIs and tools to download and predict using pre-trained models in multiple languages. SageMaker enables you to train, fine-tune, and run inference using Hugging Face models directly from its Hugging Face Model Hub using the Hugging Face estimator in the SageMaker SDK. The integration makes it easier to customize Hugging Face models for domain-specific use cases. Behind the scenes, the SageMaker SDK uses AWS Deep Learning Containers (DLCs), which are a set of prebuilt Docker images for training and serving models offered by SageMaker. The DLCs are developed through a collaboration between AWS and Hugging Face. The integration also connects the Hugging Face transformers SDK with the SageMaker distributed training libraries, enabling you to scale your training jobs on a cluster of GPUs.

Overview of the SageMaker distributed model parallel library

Model parallelism is a distributed training strategy that partitions the deep learning model over numerous devices, within or across instances. Deep learning (DL) models with more layers and parameters perform better in complex tasks like computer vision and NLP. However, the maximum model size that can be stored in the memory of a single GPU is limited.
GPU memory constraints can be bottlenecks while training DL models in the following ways:

- They limit the size of the model that can be trained, because a model's memory footprint scales proportionately to the number of parameters.
- They reduce GPU utilization and training efficiency by limiting the per-GPU batch size during training.

SageMaker includes the distributed model parallel library to help distribute and train DL models effectively across many compute nodes, overcoming the restrictions associated with training a model on a single GPU. Furthermore, the library allows you to obtain the most optimal distributed training utilizing EFA-supported devices, which improves inter-node communication performance with low latency, high throughput, and OS bypass.

Because large models such as GPT-J, with billions of parameters, have a GPU memory footprint that exceeds a single chip, it becomes essential to partition them across multiple GPUs. The SageMaker model parallel (SMP) library enables automatic partitioning of models across multiple GPUs. With SageMaker model parallelism, SageMaker runs an initial profiling job on your behalf to analyze the compute and memory requirements of the model. This information is then used to decide how the model is partitioned across GPUs, in order to maximize an objective, such as maximizing speed or minimizing memory footprint.

It also supports optional pipeline run scheduling in order to maximize the overall utilization of available GPUs. The propagation of activations during the forward pass and gradients during the backward pass requires sequential computation, which limits GPU utilization. SageMaker overcomes the sequential computation constraint with the pipeline run schedule by splitting mini-batches into micro-batches to be processed in parallel on different GPUs. SageMaker model parallelism supports two modes of pipeline runs:

- Simple pipeline – This mode finishes the forward pass for each micro-batch before starting the backward pass.
- Interleaved pipeline – In this mode, the backward run of the micro-batches is prioritized whenever possible. This allows for quicker release of the memory used for activations, thereby using memory more efficiently.

Tensor parallelism

Individual layers, or nn.Modules, are divided across devices using tensor parallelism so they can run concurrently. The simplest example of how the library divides a model with four layers to achieve two-way tensor parallelism ("tensor_parallel_degree": 2) is shown in the following figure. Each model replica's layers are bisected (divided in half) and distributed between two GPUs. The degree of data parallelism is eight in this example because the model parallel configuration additionally includes "pipeline_parallel_degree": 1 and "ddp": True. The library manages communication among the replicas of the tensor-distributed model. The benefit of this feature is that you may choose which layers, or which subset of layers, you want to apply tensor parallelism to. To dive deep into tensor parallelism and other memory-saving features for PyTorch, and to learn how to set up a combination of pipeline and tensor parallelism, see Extended Features of the SageMaker Model Parallel Library for PyTorch.

SageMaker sharded data parallelism

Sharded data parallelism is a memory-saving distributed training technique that splits the training state of a model (model parameters, gradients, and optimizer states) across GPUs in a data parallel group.
When scaling up your training job to a large GPU cluster, you can reduce the per-GPU memory footprint of the model by sharding the training state over multiple GPUs. This returns two benefits: you can fit larger models, which would otherwise run out of memory with standard data parallelism, or you can increase the batch size using the freed-up GPU memory. The standard data parallelism technique replicates the training state across the GPUs in the data parallel group and performs gradient aggregation based on the AllReduce operation. In effect, sharded data parallelism introduces a trade-off between communication overhead and GPU memory efficiency. Using sharded data parallelism increases the communication cost, but the memory footprint per GPU (excluding the memory usage due to activations) is divided by the sharded data parallelism degree, so larger models can fit in a GPU cluster. SageMaker implements sharded data parallelism through the MiCS implementation. For more information, see Near-linear scaling of gigantic-model training on AWS. Refer to Sharded Data Parallelism for further details on how to apply sharded data parallelism to your training jobs.

Use the SageMaker model parallel library

The SageMaker model parallel library comes with the SageMaker Python SDK. You need to install the SageMaker Python SDK to use the library, and it's already installed on SageMaker notebook kernels. To make your PyTorch training script use the capabilities of the SMP library, you need to make the following changes:

- Start by importing and initializing the smp library using the smp.init() call.
- Once it's initialized, wrap your model with the smp.DistributedModel wrapper and use the returned DistributedModel object instead of the user model.
- For your optimizer state, use the smp.DistributedOptimizer wrapper around your model optimizer, enabling smp to save and load the optimizer state.
- Abstract the forward and backward pass logic as a separate function and add the smp.step decorator to it. Essentially, the forward pass and back-propagation need to run inside the function with the smp.step decorator placed over it. This allows smp to split the tensor input to the function into the number of microbatches specified when launching the training job.
- Next, move the input tensors to the GPU used by the current process using the torch.cuda.set_device API followed by the .to() API call.
- Finally, for back-propagation, replace torch.Tensor.backward and torch.autograd.backward.

See the following code:

@smp.step
def train_step(model, data, target):
    output = model(data)
    loss = F.nll_loss(output, target, reduction="mean")
    model.backward(loss)
    return output, loss

with smp.tensor_parallelism():
    model = AutoModelForCausalLM.from_config(model_config)

model = smp.DistributedModel(model)
optimizer = smp.DistributedOptimizer(optimizer)

The SageMaker model parallel library's tensor parallelism offers out-of-the-box support for the following Hugging Face Transformer models:

- GPT-2, BERT, and RoBERTa (available in the SMP library v1.7.0 and later)
- GPT-J (available in the SMP library v1.8.0 and later)
- GPT-Neo (available in the SMP library v1.10.0 and later)

Best practices for performance tuning with the SMP library

When training large models, consider the following steps so that your model fits in GPU memory with a reasonable batch size:

- It's recommended to use instances with higher GPU memory and high-bandwidth interconnect for performance, such as p4d and p4de instances.
- Optimizer state sharding can be enabled in most cases and is helpful when you have more than one copy of the model (data parallelism enabled). You can turn on optimizer state sharding by setting "shard_optimizer_state": True in the modelparallel configuration.
- Use activation checkpointing, a technique to reduce memory usage by clearing activations of certain layers and recomputing them during the backward pass of selected modules in the model.
- Use activation offloading, an additional feature that can further reduce memory usage. To use activation offloading, set "offload_activations": True in the modelparallel configuration. Use it when activation checkpointing and pipeline parallelism are turned on and the number of microbatches is greater than one.
- Enable tensor parallelism and increase the parallelism degree, where the degree is a power of 2. Typically, for performance reasons, tensor parallelism is restricted to within a node.

We have run many experiments to optimize training and tuning GPT-J on SageMaker with the SMP library. We have managed to reduce GPT-J training time for an epoch on SageMaker from 58 minutes to less than 10 minutes, six times faster training time per epoch. Initialization plus downloading the model and dataset from Amazon Simple Storage Service (Amazon S3) took less than a minute, tracing and auto partitioning with GPU as the tracing device took less than a minute, and training an epoch took 8 minutes using tensor parallelism on one ml.p4d.24xlarge instance, FP16 precision, and a SageMaker Hugging Face estimator.

To reduce training time as a best practice when training GPT-J on SageMaker, we recommend the following:

- Store your pretrained model on Amazon S3.
- Use FP16 precision.
- Use GPU as the tracing device.
- Use auto-partitioning, activation checkpointing, and optimizer state sharding (auto_partition: True, shard_optimizer_state: True).
- Use tensor parallelism.
- Use a SageMaker training instance with multiple GPUs such as ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, or ml.p4de.24xlarge.

GPT-J model training and tuning on SageMaker with the SMP library

A working step-by-step code sample is available in the Amazon SageMaker Examples public repository. Navigate to the training/distributed_training/pytorch/model_parallel/gpt-j folder. Select the gpt-j folder and open the train_gptj_smp_tensor_parallel_notebook.ipynb Jupyter notebook for the tensor parallelism example and train_gptj_smp_notebook.ipynb for the pipeline parallelism example. You can find a code walkthrough in our Generative AI on Amazon SageMaker workshop. This notebook walks you through how to use the tensor parallelism features provided by the SageMaker model parallelism library. You'll learn how to run FP16 training of the GPT-J model with tensor parallelism and pipeline parallelism on the GLUE sst2 dataset.
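To tie the pieces together, the following is a minimal sketch of launching such a training job with the SageMaker Hugging Face estimator and the model parallel distribution options discussed above. The entry point script name, role, instance count, parallelism degrees, and framework versions are illustrative assumptions; refer to the linked notebook for the exact values used there.

from sagemaker.huggingface import HuggingFace

# Illustrative configuration only; script name, degrees, and versions are assumptions.
smp_options = {
    "enabled": True,
    "parameters": {
        "pipeline_parallel_degree": 1,
        "tensor_parallel_degree": 8,
        "ddp": True,
        "shard_optimizer_state": True,
        "offload_activations": True,
        "fp16": True,
    },
}
mpi_options = {"enabled": True, "processes_per_host": 8}

estimator = HuggingFace(
    entry_point="train_gptj_smp_tensor_parallel.py",  # hypothetical script name
    source_dir="./scripts",
    role="<your-sagemaker-execution-role>",
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    transformers_version="4.17",
    pytorch_version="1.10",
    py_version="py38",
    distribution={"smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options},
    hyperparameters={"model_name": "EleutherAI/gpt-j-6B", "epochs": 1},
)
estimator.fit({"train": "s3://<your-bucket>/gpt-j/train"})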
Summary

The SageMaker model parallel library offers several functionalities. You can reduce cost and speed up training LLMs on SageMaker. You can also learn and run sample code for BERT, GPT-2, and GPT-J in the Amazon SageMaker Examples public repository. To learn more about AWS best practices for training LLMs using the SMP library, refer to the following resources:

- SageMaker Distributed Model Parallelism Best Practices
- Training large language models on Amazon SageMaker: Best practices

To learn how one of our customers achieved low-latency GPT-J inference on SageMaker, refer to How Mantium achieves low-latency GPT-J inference with DeepSpeed on Amazon SageMaker.

If you're looking to accelerate time-to-market of your LLMs and reduce your costs, SageMaker can help. Let us know what you build!

About the Authors

Zmnako Awrahman, PhD, is a Practice Manager, ML SME, and Machine Learning Technical Field Community (TFC) member at the Global Competency Center, Amazon Web Services. He helps customers leverage the power of the cloud to extract value from their data with data analytics and machine learning.

Roop Bains is a Senior Machine Learning Solutions Architect at AWS. He is passionate about helping customers innovate and achieve their business objectives using artificial intelligence and machine learning. He helps customers train, optimize, and deploy deep learning models.

Anastasia Pachni Tsitiridou is a Solutions Architect at AWS. Anastasia lives in Amsterdam and supports software businesses across the Benelux region in their cloud journey. Prior to joining AWS, she studied electrical and computer engineering with a specialization in computer vision. What she enjoys most nowadays is working with very large language models.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Wioletta Stobieniecka is a Data Scientist at AWS Professional Services. Throughout her professional career, she has delivered multiple analytics-driven projects for different industries such as banking, insurance, telco, and the public sector. Her knowledge of advanced statistical methods and machine learning is well combined with business acumen. She brings recent AI advancements to create value for customers.

Rahul Huilgol is a Senior Software Development Engineer in Distributed Deep Learning at Amazon Web Services.
Firework Games case study.txt
Moses Ip Chief executive officer, Firework Games To ingest and process the game data for its machine learning models, Firework Games deployed a combination of Amazon Relational Database for MySQL (Amazon RDS for MySQL), Amazon ElastiCache, Amazon Aurora, and AWS Glue. The company estimates that Spark Era handles 114 TB of data per hour on average. Français Amazon EC2 Auto Scaling Find out how being on the AWS Cloud lets Firework Games keep latency low for players across the world. Español Using Amazon CloudFront, a low-latency content delivery network, players were able to download the game content within 10 minutes. During testing on its previous on-premises servers, this took double the time. 日本語 On AWS, Firework Games reduced its average latency to 160ms from 300ms connecting users from Korea to AWS servers located in US. This was 46 percent lower than on its previous on-premises servers, while also saving up to 30 percent in costs. Customer Stories / Games 2022 Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Learn more » Amazon Elastic Graphics allows you to easily attach low-cost graphics acceleration to a wide range of EC2 instances. Simply choose an instance with the right amount of compute, memory, and storage for your application, and then use Elastic Graphics to add acceleration required by your application. Learn more » 한국어 “The AWS team and AWS Enterprise Support has been very helpful in supporting our development of Spark Era and ensuring the best player experience. They guided us on which AWS Regions meet our needs, and advised us on the Amazon EC2 instances needed for our setup. The AWS team also worked with us on a globally scalable design – from latency reduction through region selection, Amazon CloudFront and AWS Global Accelerator implementation, and Transmission Control Protocol (TCP)-based autoscaling strategy to optimize compute resource usage,” said Moses Ip, chief executive officer, Firework Games. “In summary, AWS helped us make the most of our resources, which is vital for us as a startup.”  Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Firework Games is a Hong Kong-based startup established in 2021 that develops blockchain games for the metaverse. In November 2022, the company launched its first game, Spark Era, a Massive Multiplayer Online Role-Playing Game (MMORPG). To achieve a smooth in-game experience for its players, Firework Games built Spark Era on Amazon Web Services (AWS). Benefits Get Started As Spark Era is a highly competitive battle royale game, Firework Games needs to deliver consistently low latencies to its players for a fun and fair gaming experience. AWS Services Used Further Opportunity to Innovate and Evolve 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Ensuring Smooth and Fast Gameplay for Players Across the World عربي • 40% improvement in latency for all its gamers globally • 30% reduction in manpower costs • US$30,000 saved from migrating from on-premises to the AWS Cloud • 20% faster in developing the game • 300% increase in download speeds for new players With AWS Cloud, Firework Games (MMORPG) allows up to 50 players to participate simultaneously per match. 
Using Amazon CloudFront, a low-latency content delivery network, players were able to download the game content within 10 minutes; during testing on the company's previous on-premises servers, this took twice as long. On AWS, Firework Games also reduced its average latency for users connecting from Korea to AWS servers located in the US to 160 ms from 300 ms, which is 46 percent lower than on its previous on-premises servers, while saving up to 30 percent in costs. More importantly, Firework Games can deliver a level playing field for players globally by using AWS Availability Zones and AWS Regions, keeping latencies within the range of 100-160 ms for users worldwide.

On the day of the game's launch, Spark Era was able to support 10 million game downloads concurrently, and up to 1 million users logged into its servers with near-zero lag or latency. On the AWS Cloud, Spark Era allows up to 50 players to participate simultaneously per match.

The company also accelerated game development by 80 percent by deploying AWS Deep Learning AMIs. These provide pre-configured environments for Firework Games, allowing it to move straight to development instead of having to set up a deep learning framework and pipelines, which typically takes up to 3 months.

With Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling and Amazon Elastic Graphics, Firework Games easily scales its compute capacity to match player traffic. Since launch, Spark Era has hosted up to 20,000 players concurrently with near-zero downtime, and Amazon EC2 Auto Scaling has saved its developers about 40 hours in manual infrastructure maintenance and scaling.
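The exact scaling configuration behind the setup described above is not published. As a hedged sketch of the kind of policy Amazon EC2 Auto Scaling supports, the snippet below attaches a target-tracking policy to a hypothetical Auto Scaling group of game servers; the group name and target value are assumptions.

```python
import boto3

# Illustrative only: attach a target-tracking scaling policy to a hypothetical
# Auto Scaling group of game servers. Group name and target are assumptions.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="spark-era-game-servers",   # hypothetical group name
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,   # scale out/in to hold average CPU near 60%
    },
)
```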
"The AWS team and AWS Enterprise Support has been very helpful in supporting our development of Spark Era and ensuring the best player experience. They guided us on which AWS Regions meet our needs, and advised us on the Amazon EC2 instances needed for our setup. The AWS team also worked with us on a globally scalable design – from latency reduction through region selection, Amazon CloudFront and AWS Global Accelerator implementation, and Transmission Control Protocol (TCP)-based autoscaling strategy to optimize compute resource usage. In summary, AWS helped us make the most of our resources, which is vital for us as a startup," said Moses Ip, chief executive officer, Firework Games.

Benefits
• 40% improvement in latency for all its gamers globally
• 30% reduction in manpower costs
• US$30,000 saved from migrating from on-premises to the AWS Cloud
• 20% faster in developing the game
• 300% increase in download speeds for new players

Further Opportunity to Innovate and Evolve
Looking ahead, Firework Games plans to integrate Amazon Polly and Amazon Transcribe into Spark Era, so that players can interact with NPCs using their voice instead of having to click on a given list of options, building a more immersive gaming experience.

About Firework Games
Firework Games is a Hong Kong-based game development company that uses cutting-edge technologies to create limitless, unique player experiences. The studio focuses on immersive and portable applications that allow users to play games while also bringing innovation into the gaming industry. Its first game, Spark Era, is a massively multiplayer online role-playing game and global metaverse game set in an interstellar environment.

AWS Services Used
• Amazon EC2: secure, resizable compute capacity with over 500 instance types.
• Amazon EC2 Auto Scaling: maintains application availability by automatically adding or removing EC2 instances according to conditions you define.
• Amazon Elastic Graphics: attaches low-cost graphics acceleration to a wide range of EC2 instances.
• Amazon CloudFront: a content delivery network (CDN) built for high performance, security, and developer convenience.
FLSmidth Case Study.txt
FLSmidth Reduces Simulation Time from Months to Days on AWS

Since its founding in 1882, innovation has always been at the core of multinational engineering company FLSmidth. Though the company continues to develop sophisticated engineering solutions to lift up the mining and cement industries, the times also demand steady advancements in digital technology.

Iterating and Innovating Its Way to Zero Emissions
FLSmidth is pursuing sustainable, technology-driven productivity under MissionZero, an initiative to achieve zero emissions and zero waste in cement production and mining by 2030. "With MissionZero, we seek to accelerate the use of technology and knowledge to enable our customers to produce cement and process minerals with zero environmental impact," says Thomas Schulz, CEO of FLSmidth. One way FLSmidth is honoring its MissionZero initiative is by using Barracuda Virtual Reactor, a physics-based engineering software package from AWS Partner CPFD Software (CPFD). Powered by high-performance computing (HPC) on Amazon Web Services (AWS), Barracuda Virtual Reactor enables FLSmidth to more efficiently run the simulations that are critical to optimizing its cement technologies.

Speeding Up Mission-Critical Simulations
With nearly 12,000 employees in 60 countries, FLSmidth is a global leader in the mining and cement industry. Critical to its operations is cement calcination, a thermochemical process in which limestone is converted into lime and carbon dioxide. To iterate on and improve cement calcination, FLSmidth needs to run a series of simulations, but running them on its legacy on-premises system was time and cost intensive. "We would regularly run simulations that took 1–2 weeks to complete for a single design analysis," says Sam Zakrzewski, a fluid dynamics specialist at FLSmidth. "Comparing five design alternatives would take 5–10 weeks on a fairly high-end engineering workstation if we were to run them serially." Ideally, FLSmidth engineers preferred to compare as many design iterations as they could through physics-based simulations before identifying and implementing the final design. To simulate multiple design scenarios simultaneously, the company needed to invest in additional hardware. But simply adding compute capacity to its legacy system would be cost inefficient, as FLSmidth would still have to pay for the added infrastructure even when not in use.
Tapping into Vast Compute Capacity in the Cloud
To deliver the powerful, elastic, and cost-effective compute capacity required to run sophisticated simulations concurrently, the company recognized that it needed a cloud solution. FLSmidth and CPFD consulted AWS on the appropriate HPC cloud services, and Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure, resizable compute capacity in the cloud, emerged as an obvious choice. For this particular workload, CPFD chose Amazon EC2 P3 Instances with NVIDIA Tesla V100 GPUs, because its Virtual Reactor could harness the compute capabilities of NVIDIA GPUs. The other HPC services involved were Amazon FSx for Lustre, a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads, and NICE DCV, a high-performance remote display protocol that delivers remote desktops and application streaming securely from any cloud or data center to any device.

FLSmidth and CPFD also used AWS ParallelCluster, an AWS-supported open-source cluster management tool that makes it simple to deploy and manage HPC clusters on AWS, to integrate the other HPC services into the architecture. Once the cluster was up and running, FLSmidth was soon able to run multiple workloads concurrently. For one project, FLSmidth ran five simulations over a single weekend, a feat that just months prior would have taken over 40 days to complete sequentially using limited on-premises capacity. The p3.8xlarge Amazon EC2 instance enabled the simulations to run on four NVIDIA Tesla V100 GPUs; switching to the NVIDIA GPUs alone cut run times by nearly 4 times compared with FLSmidth's legacy on-premises compute capability. And because Amazon EC2 is available across 24 Regions and 77 Availability Zones, FLSmidth's engineers have local access to the AWS-powered Barracuda Virtual Reactor across the company's various global teams.
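The case study names the instance type but not the provisioning code. As a minimal, hypothetical sketch of launching the GPU node type mentioned above with boto3 (in practice the study used AWS ParallelCluster to manage the whole cluster), with placeholder AMI, key pair, and subnet IDs:

```python
import boto3

# Illustrative only, not FLSmidth's actual setup: launch one p3.8xlarge
# instance (4x NVIDIA Tesla V100) that a GPU solver such as Barracuda
# Virtual Reactor could run on. IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI with GPU drivers
    InstanceType="p3.8xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="hpc-keypair",                  # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",    # placeholder subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "workload", "Value": "cfd-simulation"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```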
"Using Virtual Reactor, we've explored a wider range of possibilities than we ever could have considered using physical testing for scaling up to industrial size," says Rüdiger Zollondz, vice president of innovation and R&D at FLSmidth. "AWS gave us speed, scalability, and flexibility in our simulations."

By using CPFD's Barracuda Virtual Reactor powered by cloud compute capacity from AWS, FLSmidth has brought together leaders in cement technology, advanced industrial fluid-particle simulations, GPU computing, and cloud computing to drive positive change. "The digitalization technology enables us to optimize the energy efficiency and emissions of our cement technologies as well as minimize our overall carbon footprint," says Zollondz. AWS, like FLSmidth, has a perpetual impulse to improve and innovate. As FLSmidth continues to iterate on its cement technologies and edge closer to fulfilling its MissionZero initiative, AWS will continue to release new features and services; the teams at CPFD and FLSmidth are already eager to try the newly available Amazon EC2 P4d Instances, which use NVIDIA A100 Tensor Core GPUs.

Benefits of AWS
• Reduced simulation project time frames from months to days
• Tapped virtually unlimited compute capacity
• Gained on-demand access to the latest NVIDIA GPU technology
• Enabled broader R&D exploration into bold environmental solutions

About FLSmidth
Present in more than 60 countries, FLSmidth delivers sustainable productivity to the global mining and cement industries around the world.

AWS Services Used: Amazon EC2, Amazon FSx for Lustre, AWS ParallelCluster, NICE DCV (2021)
FLYING WHALES Case Study.txt
FLYING WHALES Runs CFD on AWS to Quickly Launch Environmentally Friendly Cargo Transport Airships

FLYING WHALES is a French startup that is developing a 60-ton payload cargo airship for the heavy lift and outsize cargo market. The project was born out of France's ambition to provide efficient, environmentally friendly transportation for collecting wood in remote areas. "We have one of the biggest forested areas in Europe, but these areas are on mountains that are very difficult to access," says Guillaume Martinat, lead aerodynamics engineer for FLYING WHALES. "This is why we need to create an airship that can load and unload cargo without landing, in hovering flight."

Moving an HPC Platform to AWS
To design its airship, FLYING WHALES runs complex computational fluid dynamics (CFD) simulations, which numerically simulate the flow of a fluid, along with structural analysis simulations, and both require large amounts of compute capacity. The company cannot perform physical testing because the airship is too large, and testing would be too expensive and take too much time. Instead, engineers need data to size the airship and define workloads for every flight phase; CFD gives engineers this data without having to manufacture any parts, enabling a much faster design process. However, each computation requires about 600 cores, and it takes approximately 400 computations to generate one model, requiring significant computational resources.

Initially, the company relied on an in-house high-performance computing (HPC) cluster to perform the CFD analysis. However, the cluster had only 200 cores, and the company didn't have the scalability or flexibility it needed to support the workloads. FLYING WHALES also needed to ensure its IT environment was cost-effective and ready for a 2021 model delivery. "As a startup, we were lacking the resources to meet that deadline on our own," says Martinat.
FLYING WHALES chose to move its HPC environment to the cloud, running its CFD workloads on Amazon Web Services (AWS). "We evaluated several cloud providers, and AWS provided the best performance for us," says Martinat. Specifically, FLYING WHALES runs on Amazon Elastic Compute Cloud (Amazon EC2) C5n.18xlarge instances, which support Elastic Fabric Adapter (EFA) as the instance network interface. The C5n instances provide the power and scalability FLYING WHALES needs for its CFD workloads, and the company provisions them as Amazon EC2 Spot Instances, spare Amazon EC2 capacity available at up to a 90 percent discount, which lowered the cost of its HPC clusters by 64 percent. The company also uses AWS ParallelCluster to simplify the deployment and management of an HPC cluster for CFD simulations on AWS, and NICE DCV to securely stream applications while dramatically decreasing data transfer costs, so engineers can inspect solutions without ever having to download them locally. FLYING WHALES also took advantage of the financial and technical assistance provided through the AWS Activate program. "The credits and technical support from AWS helped us get off the ground faster than we could have on our own," says Martinat.

Rapid Scaling to Support 600-Core Computational Models
FLYING WHALES relies on AWS to scale its HPC environment quickly to support 600-core computational models, each 6 TB in size. "We have almost unlimited compute capacity on AWS, which gives us a level of scalability nearly equivalent to the power of a national supercomputer," says Martinat. "If we need 6,000 cores, we can use all those cores, which means we can do all our computation at the same time, whenever we need to." The company's engineers also don't have to wait in job queues to perform simulations, which saves dozens of hours each week.

Turning Around CFD Workflows 15 Times Faster
Running its HPC environment on AWS, FLYING WHALES can turn around CFD workflows faster than before. "We can run CFD workflow jobs 15 times faster on AWS thanks to the computing power and inter-node network performance we get using the Amazon EC2 C5n.18xlarge instances and EFA," says Martinat. "As a result, we can complete jobs in days instead of the months it used to take." With the flexibility of AWS ParallelCluster, the company's engineers can get HPC jobs up and running in 15 minutes, instead of taking months to acquire, configure, and manage servers. "We can tailor our instances to fit CFD job sizes by using AWS ParallelCluster," says Martinat. If the company doesn't need large compute capacity, engineers can select a less expensive instance type and scale up only when necessary. "We get flexibility and cost savings by using this solution. This was key for us as a startup with limited resources," says Martinat.
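FLYING WHALES' provisioning is handled by AWS ParallelCluster, so the following is only a hedged sketch of how the Spot-backed node type named above could be requested directly with boto3; the AMI and subnet IDs are placeholders.

```python
import boto3

# Rough sketch, not FLYING WHALES' actual provisioning code: request a
# c5n.18xlarge (72 vCPUs, 100 Gbps networking) as a Spot Instance.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder CFD solver AMI
    InstanceType="c5n.18xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",          # placeholder subnet
    InstanceMarketOptions={"MarketType": "spot"},  # use spare Spot capacity
)
print("Spot-backed instance:", response["Instances"][0]["InstanceId"])
```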
More Flexibility for Engineers
FLYING WHALES is using its ability to scale quickly to complete more work than before. Because of the wide variety of AWS instance types available, the company can perform complex simulations that were not possible in an on-premises environment. For example, some ground effect calculations that are critical to sizing the airship would have required the company to block its entire on-premises cluster for weeks; now, those calculations can be performed quickly, without having to delay other activities. "There were some studies we couldn't do because we lacked the compute resources," says Martinat. "Now, we can do everything we want to. It's not just a matter of being faster on AWS—it's a matter of having the ability to get the job done. Furthermore, by selecting high-memory hardware among the large range of available instance types, we are now able to remotely generate finer/heavier meshes than we could on-premises, for better CFD accuracy."

Additionally, the on-demand availability of resources helps FLYING WHALES engineers perform many computations simultaneously, instead of performing each job sequentially. As a result, engineers can spend more time analyzing data and creating intellectual property instead of managing infrastructure. With these capabilities, along with direct support from AWS, FLYING WHALES will be able to deliver its first airship in 2024, as planned.

Thanks to the scalability and flexibility of AWS, FLYING WHALES can now focus on its core business: designing innovative cargo airships. "For our company, the strength of AWS is that it helps us scale and customize our HPC cluster so we always have an environment that performs well and responds to our CFD workloads," says Martinat. "This will not only enable us to launch our product on time, but it will also help us grow our company."

Benefits of AWS
• Runs CFD workflow jobs 15x faster
• Completes CFD jobs in days instead of months
• Scales the HPC environment to support 600-core computational models
• Expects to launch its first airship on schedule

About FLYING WHALES
FLYING WHALES, founded in France in 2012, is developing a cargo airship for the heavy lift and outsize cargo market. The company's environmentally friendly airships can transport up to 60 metric tons of goods at altitudes close to 3,000 meters and in difficult-to-reach areas.

AWS Services Used: Amazon EC2, Elastic Fabric Adapter, AWS ParallelCluster, AWS Activate, NICE DCV (2021)
Fujita Health University Case Study _ Amazon Web Services.txt
Fujita Health University Aims to Improve Continuity of Patient Care and Deliver Higher Quality Healthcare with Patient Records on AWS

Fujita Health University is the largest private health university in Japan and is recognized for its cutting-edge research and advances in medicine. It has four teaching hospitals, with about 13,500 surgeries carried out at its largest hospital annually. To improve quality and continuity of care for its patients, Fujita Health University decided to build a personal health records (PHR) system according to FHIR standards.

Transitioning to Patient-Centric Care Supported by Cloud Technology
Until recently, handwritten medical notes were the norm among medical practitioners in Japan. Even with the proliferation of electronic medical records, inputting these notes into proprietary EMR systems took away time that could otherwise be spent on patient interaction. The university aimed to change this with a digital PHR system. Building a scalable PHR system would also allow the university to store large volumes of images, including X-rays, in a central location and to deploy compute-heavy artificial intelligence (AI) models to support diagnoses.

Similar to electronic health records (EHR), PHR stores patient data from multiple clinical providers in an inter-organizational system. However, as medicine becomes more personalized and patient-centric, many organizations are adopting systems dominated by PHR, which, unlike EHR, are controlled and managed by patients rather than the medical institutions where they seek treatment. The Fast Healthcare Interoperability Resources (FHIR) standard was instituted in 2012 to provide a standardized format for healthcare information exchange; FHIR allows healthcare providers to build interoperable records systems that facilitate faster and more accurate care with a full picture of patients' medical history.
Ensuring Compliance with Three Japanese Ministries
Security was the leading requirement for the digital PHR system, to ensure data privacy and compliance with government regulations. The university chose to work with Amazon Web Services (AWS) because AWS provides a toolkit and guidelines for designing medical information systems that comply with three Japanese ministries: the Ministry of Health, Labour and Welfare; the Ministry of Internal Affairs; and the Ministry of Economy, Trade and Industry. FHIR Works on AWS is an AWS Solutions Implementation with an open-source software toolkit that can be used to create an FHIR interface over existing healthcare applications and data.

Fujita Health University had weekly meetings with AWS engineers to ensure its PHR system was securely set up. "The process went smoothly because AWS already had an FHIR-compliant framework in place," says Nobuyuki Kobayashi, head of IT at Fujita Health University. In addition, the university worked with a third-party auditor to ensure all processes, particularly the transfer of on-premises medical data to the cloud, were performed according to security best practices.

Fujita Health University takes advantage of Amazon Cognito for user access control and AWS WAF – Web Application Firewall to protect its patient records against common web exploits. It relies on Amazon Elastic Container Service (Amazon ECS) as a fully managed container orchestration tool and AWS Fargate as a serverless compute engine for deploying containerized applications. The marketing team is also exploring the construction of a data lake on AWS to streamline and personalize customer communications.
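The university's deployment code is not published. As an illustrative sketch of the ECS-on-Fargate pattern described above, the snippet below runs a containerized task with boto3; the cluster name, task definition, subnets, and security group are placeholders, not values from the case study.

```python
import boto3

# Illustrative sketch only: run a containerized FHIR-style API task on
# AWS Fargate via Amazon ECS. All identifiers are hypothetical.
ecs = boto3.client("ecs", region_name="ap-northeast-1")

response = ecs.run_task(
    cluster="phr-cluster",                     # hypothetical ECS cluster
    launchType="FARGATE",
    taskDefinition="phr-fhir-api:1",           # hypothetical task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],    # placeholder
            "assignPublicIp": "DISABLED",
        }
    },
)
print("Started task:", response["tasks"][0]["taskArn"])
```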
Preparing to Scale Records System to One Million Patients
Currently, Fujita Health University is trialing the PHR system with 6,000 staff members before rolling it out to the public. By 2023, patients visiting its teaching hospitals for annual health checks will be able to enter their data into the digital PHR system for the first time. The university anticipates adding one million patient records to the PHR system on AWS within three to four years of deployment. Expected benefits of the system include higher record reliability, reduced risk of diagnostic or other medical errors, and doctors being able to spend more time with patients rather than on administrative work.

Improving DR and Migrating Existing EMR
Furthermore, the university has bolstered disaster recovery (DR) with its cloud-based PHR system. Fujita Health University is situated on a major fault line in Japan, so having data on the cloud, protected from the threat of natural disasters, made sense for business continuity. Additionally, the university is now conducting a proof of concept to move its EMR system, which currently stores information from its clinicians' paper charts, from on-premises servers to the AWS Cloud.

Benefiting from Data-Driven Models and APIs
By building its PHR system on AWS, Fujita Health University has opened the door to Internet of Things (IoT) and other modern technology applications that rely on application programming interfaces (APIs). The university will have access to API-driven software applications that can be deployed for drug discovery and the development of targeted medical devices and supplements. Integrated at-home health tracking devices and omnichannel communications are among the innovations being developed by other medical institutions using FHIR systems to create a safer and more convenient healthcare experience. Kobayashi says, "We want to make our data work for us and our patients, empowering them to live a healthier life. Cloud solutions are more flexible for working with IoT, AI, and API-based solutions."

Kobayashi concludes, "We learned a lot working with AWS engineers and business development teams on the architecture of our FHIR-compliant system. We also appreciate how AWS collaborated with our internal teams and external IT vendors and auditors throughout the project, which is not something that happens often in this industry. Everyone is rowing in the same direction, which gives us confidence for the next step in migrating our EMR to AWS."

Benefits of AWS
• Complies with FHIR standards and guidelines issued by three Japanese government ministries
• Reduces the potential for diagnostic or other medical errors
• Helps doctors spend more time with patients
• Facilitates innovation with IoT, AI, and API-driven solutions

About Fujita Health University
Fujita Health University is the largest private health university in Japan, with four teaching hospitals; its largest hospital performs 13,500 surgeries each year. The university is a cutting-edge research institution committed to advanced medicine that benefits its patients and students.

AWS Services Used: FHIR Works on AWS, Amazon Cognito, AWS WAF, Amazon ECS, AWS Fargate (2022). To learn more, visit aws.amazon.com/health.
Game Studio Small Impact Games Runs Successful Alpha and Beta Tests Using Amazon GameLift _ Case Study _ AWS.txt
Game Studio Small Impact Games Runs Successful Alpha and Beta Tests Using Amazon GameLift

Small Impact Games (SIG), a small, independent video game development company, wanted to launch Alpha and Beta testing for its new game, Marauders. However, SIG believed that the scale of these tests would go far beyond that of any game it had previously created and supported, and it wanted a solution that would let it retain primary control over its infrastructure. Because of the performance and scalability these tests required, and the large number of concurrent users expected worldwide, the company decided to use a suite of Amazon Web Services (AWS) solutions to support the game. Now SIG has access to the bandwidth it requires while maintaining the control it wants.

Opportunity | Searching for a Scalable, Reliable Infrastructure Solution for Small Impact Games
SIG was founded in 2012 and is based in Leicester, England. It focuses on making player-centric, tactile, first-person, looter-shooter games with a small team of 12 developers and has been involved in 22 different gaming projects. When SIG began working on Marauders, a first-person multiplayer game, the studio expected that participation would overwhelm the testing of the game. At the start of the Marauders testing period, SIG wanted to prepare for fluctuations in traffic by investing in a highly scalable infrastructure solution that would be simple to manage and deliver an optimal gaming experience. To meet these goals, the SIG team decided to use Amazon GameLift, a solution for dedicated game server hosting that deploys, operates, and scales cloud servers for multiplayer games. "We went all in using Amazon GameLift for Marauders," says James Rowbotham, lead developer at SIG. "GameLift is a service that gave us the ability to do specifically what we wanted to do." The upgrade proved to be a wise choice because the tests far exceeded expectations: the closed Alpha test logged 3,000 concurrent users, and the closed Beta test logged around 7,000 concurrent users.
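SIG's backend code is not published, and whether it uses GameLift FlexMatch specifically is an assumption here. As a hedged sketch of the kind of matchmaking request a game backend can make against Amazon GameLift, with an invented configuration name and player attribute:

```python
import boto3

# Hedged sketch, not SIG's actual backend: request a match for one player
# through a hypothetical GameLift FlexMatch configuration.
gamelift = boto3.client("gamelift", region_name="eu-west-2")

ticket = gamelift.start_matchmaking(
    ConfigurationName="marauders-quickmatch",       # hypothetical config
    Players=[{
        "PlayerId": "player-1234",
        "PlayerAttributes": {"skill": {"N": 42}},   # illustrative attribute
    }],
)
print("Matchmaking ticket:", ticket["MatchmakingTicket"]["TicketId"])
```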
Solution | Using Amazon GameLift to Scale Testing to a Global Fan Base
Although SIG frequently hit its predefined bandwidth limits during the testing phases of Marauders, it was able to expand quickly as needed. Given the company's size, launching Marauders would not have been as successful without the flexibility afforded by AWS. "You hear stories about games where the whole system will just bottom out because of capacity, but we've never seen that or been close to it," says Rowbotham. "Even in our busiest points, we were matchmaking in 30 seconds, and everyone was having a great time. Scaling was never a concern using AWS." On AWS, SIG can monitor new players joining its game worldwide and can quickly deploy infrastructure when necessary. "Not only were we spreading out horizontally, but we were also dealing with vertical capacity issues, which were painless to resolve," says Rowbotham.

Starting in July 2020, a core team of three lead developers transformed SIG's game development environment in 16 months, adopting several fully managed AWS services, including Amazon GameLift and AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data. The scalability, elasticity, and control offered by these services worked well for the small team, and the studio scaled its infrastructure to support over 7,000 concurrent players during one of its tests. Moreover, SIG gained the ability to control its infrastructure in house, so it does not depend on a third party to keep its systems running. "As we dug deeper into AWS, we gained more knowledge, and we retained control. Using AWS, we can be fully autonomous," says Mitchell Small, managing director at SIG.

In addition to retaining control over its infrastructure, SIG improved the performance of Marauders while using AWS. The company wanted to add a persistent gear feature so that players could keep the gear they collect across different game sessions. The technical demands and capacity this feature required led the SIG team to adopt Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale. SIG also wanted to use the data it collected to improve the game for players; for that purpose, the team chose Amazon QuickSight, which lets everyone in an organization understand data by asking questions in natural language, exploring interactive dashboards, or automatically looking for patterns and outliers powered by machine learning.
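The persistent-gear data model is not described in the case study, so the following is only a minimal sketch assuming a hypothetical DynamoDB table keyed by player ID and loadout slot:

```python
import boto3

# Minimal sketch of a persistent-gear record; the table name and schema are
# assumptions for illustration, not SIG's published data model.
dynamodb = boto3.resource("dynamodb", region_name="eu-west-2")
gear_table = dynamodb.Table("marauders-player-gear")   # hypothetical table

# Save the gear a player extracted with at the end of a session.
gear_table.put_item(Item={
    "player_id": "player-1234",
    "slot": "loadout-1",
    "items": ["rusted-pistol", "scrap-armor", "fuel-canister"],
})

# Load it back when the player queues for the next match.
saved = gear_table.get_item(Key={"player_id": "player-1234", "slot": "loadout-1"})
print(saved["Item"]["items"])
```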
Outcome | Becoming a Larger Player in the Gaming Market Using AWS
The success of the Marauders Alpha and Beta tests, marked by the sale of more than 80,000 copies of the game as of September 2022, has positioned SIG to become a significant player and successful developer in its market. Marauders has been featured on the home page of Team17, a video game developer and SIG's publisher. As of September 2022, Marauders was also a top Wishlist item on Steam, a popular video game digital distribution service and storefront, and the game's Discord channel had grown to over 36,000 members.

SIG's immediate goal is to focus on the early-access release of Marauders. The company is all in on AWS following the success of the Alpha and Beta tests, and it sees the potential for more growth with the flexibility and speed that AWS provides. The company wants to use more events and tournaments to publicize its games, and it believes AWS is the way to make that happen. "I'm so glad that we ended up fully embracing AWS. It gives you so much for such little work, which is perfect for us," says Rowbotham.

Benefits
• Matchmaking in 30 seconds or less during peak times
• Scaled to 7,000 concurrent users
• Increased flexibility to deploy regional infrastructure
• Maintained control over the development environment

About Small Impact Games
Small Impact Games is a small, independent video game development company that primarily creates tactile, first-person, looter-shooter games.

AWS Services Used: Amazon GameLift, Amazon DynamoDB, AWS AppSync, Amazon QuickSight (2023)
Games24x7.txt
Games24x7 Accelerates Machine Learning Lifecycle with Cloud-Native Data Science Tools on AWS

Games24x7 is India's leading multigame platform, with offerings such as RummyCircle, My11Circle (India's second-largest fantasy games platform), and U Games, a portfolio of casual games. The company leverages hyper-personalization and data science to provide superior user experiences, and it sought to modernize its machine learning (ML) pipeline using cloud-native tools. By using Amazon SageMaker as a fully managed development environment, Amazon EMR as a big data platform, and AWS Step Functions with Amazon SageMaker Pipelines to orchestrate its ML pipelines, Games24x7 automated post-production tasks such as ML monitoring, increased productivity, and empowered its data scientists to solve more business problems, faster.

Opportunity | Solving for Bottlenecks that Delay Solution Delivery
Games24x7 believes that data science is the future of mainstream gaming. Hyper-personalization, driven by data, analytics, and ML, is at the core of its business. As Games24x7 has grown, so has the number of business use cases for its ML models. Scaling was becoming increasingly tedious for its team of data scientists, and post-production activities such as ML model monitoring were growing cumbersome. Tridib Mukherjee, vice president and head, AI & Data Science at Games24x7, explains, "The volume of data that we handle involves a lot of infrastructure configuration and frequent scaling up. Our pipelines often timed out when we were processing heavy loads and had to be restarted, which was a productivity drain."

System bottlenecks also prolonged hypothesis testing, which typically comprises 80 percent of data scientists' workloads. "We're experimentation-oriented problem solvers, not ML engineers, and we need to try many iterations before finalizing a model," says Mukherjee. Under the previous system, it could take weeks to formulate and test analytics hypotheses; in a highly competitive industry such as gaming, this was simply too long. Cost was also a growing concern: without a background in ML engineering, data scientists typically overprovisioned virtual servers running on Amazon Web Services (AWS). The business sought to increase data science efficiency by leveraging cloud-native automation tools for faster iterations at scale.
Solution | Adopting MLOps for Increased Automation and Productivity
Games24x7 had been using Amazon EMR as a big data framework. The company consulted its AWS account team, then began optimizing its ML pipeline by leveraging more cloud-native capabilities and serverless delivery models. With support from its AWS team, Games24x7 modernized its ML models, following MLOps best practices and automating key training, production, and post-production processes.

As a first step in the process, the company adopted Amazon SageMaker Studio, a fully managed development environment that allows data scientists to quickly move through the ML model lifecycle. The environment automates post-production monitoring of ML models, and data scientists can scale individual jobs separately. Following that, teams enhanced data workflows with AWS Step Functions to create ML workflows, using Amazon SageMaker for ML model training and the Amazon SageMaker Model Building Pipeline. Data scientists now enjoy higher autonomy thanks to reduced interdependencies between their team and those responsible for infrastructure and engineering. With Amazon SageMaker Pipelines, Games24x7 has greater visibility into its ML pipeline and models; it uses the model registry in Amazon SageMaker to store all model metadata and evaluation metrics, which data scientists use to track models and share progress among team members. Collaboration has improved, and it is much easier for one team member to pick up where another left off in developing and testing models.

Next, Games24x7 switched to Amazon EMR Serverless to automate infrastructure management. Data scientists no longer need to overprovision instances for experimentation or shut down instances when they're done, which has led to significant time and cost savings. "The rate of iteration is about 10 times faster than before, which allows us to consistently deliver projects on time or even ahead of schedule," Mukherjee says. Support from AWS has been instrumental in upskilling Games24x7's teams and introducing the tools to fit the company's dynamic use cases. "AWS has helped us ensure we're using our resources optimally and following MLOps best practices. That's been key to our productivity acceleration," Mukherjee adds.
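Games24x7's job definitions are not published. As a hedged sketch of how a Spark job is typically submitted to an existing Amazon EMR Serverless application, so that no cluster has to be sized up front or shut down afterwards; the application ID, role ARN, and script location are placeholders:

```python
import boto3

# Illustrative sketch, not Games24x7's actual pipeline code.
emr = boto3.client("emr-serverless", region_name="ap-south-1")

run = emr.start_job_run(
    applicationId="00example1234567",                          # placeholder
    executionRoleArn="arn:aws:iam::123456789012:role/emr-serverless-job",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://example-bucket/jobs/feature_prep.py",
            "sparkSubmitParameters": "--conf spark.executor.memory=8g",
        }
    },
)
print("Job run:", run["jobRunId"])
```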
Outcome | Accelerating Iteration while Lowering Costs of Analyses
Since beginning the MLOps project on AWS, Games24x7 has driven a threefold increase in productivity. Previously, a team of eight data scientists and analysts could complete four projects within a year, with each project containing 15–100 individual models that influence factors such as user game choice. The Games24x7 team has since grown to 30, and its expertise and efficiency have scaled dramatically: the company can now complete 50 projects a year.

Games24x7 prides itself on providing a responsible gaming platform. The company tracks its users' journeys and temporarily blocks players who start to become disruptive or fail to take breaks from marathon gaming sessions. It has deployed other data science use cases such as hyper-personalization, which offers a 360-degree view of each user's activities. These efforts to streamline and boost ML model deployment have paid dividends, with user retention increasing by 20 percent and long-term attribution and revenue increasing by 10 percent. Games24x7 projects a significant indirect impact on long-term revenue thanks to its MLOps project.

For Mukherjee, the greatest benefit of the modernization project with AWS has been productizing its ML models. By fully leveraging the rich feature set within AWS analytics and ML tools, Games24x7 has reduced model iteration time, improved productivity, and lowered analytics costs. "AI and ML are truly at the core of our internal operations and user-facing platform," Mukherjee explains. "This couldn't have happened without the ability to scale up our development efforts seamlessly on the AWS Cloud." He concludes, "We've improved the quality of outcomes from our ML models as a result of our modernization efforts on AWS, and we can manage our overall data science ecosystem more efficiently."

Looking ahead, Games24x7 is considering how it could reuse or reposition already-developed models. The gaming industry is highly dynamic, and models become irrelevant at an increasingly faster rate. Users come and go, but attrition rates are highest after the first platform trial. Games24x7 views post-production modeling activities as extremely important, both to automate the identification of user drift and to introduce features that cater to the profile of users who are starting to veer away from the platform.

Benefits
• 10x faster iteration cycle
• 3x higher productivity
• 20% increase in user retention
• Optimized architecture with reliable AWS support

About Games24x7
Games24x7 is an India-headquartered online gaming company with a portfolio that spans skill games and casual games. Founded by New York University–trained economists in 2006, the company is backed by marquee international investors. It specializes in using behavioral science, technology, and artificial intelligence to provide an exceptional game-playing experience across its platforms.

AWS Services Used: Amazon SageMaker Studio, Amazon EMR Serverless, AWS Step Functions (2023). To learn more, visit aws.amazon.com/solutions/analytics.
Ganit Transforms Fast Fashion Apparel Retail with Intelligent Demand Forecasting on AWS _ AWS Partner Network (APN) Blog.txt
AWS Partner Network (APN) Blog

Ganit Transforms Fast Fashion Apparel Retail with Intelligent Demand Forecasting on AWS
by Gaurav H Kankaria, Vaishnavi B, and Sriram Kuravi | on 28 JUN 2023 | in Amazon Forecast, Artificial Intelligence, AWS Partner Network, Case Study, Customer Solutions, Industries, Intermediate (200), Retail, Thought Leadership

By Gaurav H Kankaria, Head of Strategic Partnerships and Engagement Manager – Ganit
By Vaishnavi B, Apprentice Leader – Ganit
By Sriram Kuravi, Sr. Partner Management Solution Architect – AWS

Gauging market demand for the apparel retail industry is challenging. The success of stock keeping units (SKUs) sold in this market depends on customer preference (fitting, feel, regional acceptance) and the latest trends, which can change frequently. Because of this, large amounts of stock remain unsold, impacting retailers' working capital in the short term (3-6 months) and eventually leading to large liquidation of leftover stock, reducing the company's overall profitability.

Ganit is an AWS Advanced Tier Services Partner with the Retail Competency that provides intelligent solutions at the intersection of hypothesis-based analytics, discovery-driven artificial intelligence (AI), and new-data insights. Over the years, Ganit has successfully deployed inventory management systems using intelligent demand forecasting at the core of its solutions. This system has helped many clients optimize their inventory, leading to efficient working capital deployment and improvement in topline and bottom-line numbers.

In this post, we will discuss how Ganit helped an apparel retailer design their intelligent demand forecasting engine by addressing key business problems such as inventory stockouts, overstocking scenarios, and excess stock liquidation. We'll detail the approach towards addressing these challenges and designing an efficient demand forecast and allocation engine using Amazon Forecast.

Customer Challenges

Ganit's customer is an apparel retailer selling more than ~1,500 unique SKUs at any point across its chain of stores. Demand patterns for its SKUs vary significantly across stores due to the diverse geographical presence within the country. A single apparel center of excellence (CoE) team carries out procurement and replenishment activity through a central warehouse (lead time to store varies between 1-7 days) for all SKUs. Two key challenges faced by the customer in running its operations are:

1. Decisions on what and how much to procure (procure-to-sell model) for all seasonal/fast fashion SKUs are made by subject matter experts (SMEs), which is subjective and leads to ~40% of all SKUs procured being liquidated as stock clearance sales post-6 months of purchase, thus impacting overall profit margins.
2. Regular selling SKUs (like white T-shirts, socks, and inner garments) are replenished from the warehouse (procure-to-replenish model), leading to improper inventory allocation across stores and causing over- and under-stock events regularly.

These challenges negatively impact multiple key performance indicators (KPIs) like inventory turns, working capital, stockouts, overstocking, and higher procurement costs. They also lead to an increase in product damages that impact top and bottom line figures.
Solution Overview

To address the challenges faced by the customer, Ganit recommended a two-part solution for initial stock allocation and stock replenishment:

1. An item attribute-based demand forecasting method was chosen for the fast fashion SKUs, as these SKUs didn't have any historical data for modelling. Item attributes like color, size, type, and price range were selected as model levels for demand forecasting.
2. Automated intelligent demand forecasting and an inventory optimization approach were used to address the inventory allocation issue. The demand forecasting engine was designed to use historical and external demand drivers (promotion, weather), and the inventory optimization engine was designed to accommodate varying demand, lead time, and supply chain constraints like minimum order quantity and service unit factors.

Figure 1 – Overall approach to building the automated replenishment system.

Attribute-Based Demand Forecasting

To study the demand pattern of fast fashion SKUs, historical sales were time adjusted from the first day of sales through 183 days of sales (see Figure 2) using a Jupyter notebook on Amazon SageMaker.

Figure 2 – Standardizing data based on first sales date for Target Time Series forecasting.

Analyzing the data, Ganit observed that SKUs followed an exponential decay pattern of sales at the overall org level, with fluctuating demand at the granular level (see Figure 3).

Figure 3 – Overall sales pattern across stores.

Based on the distribution of the demand observed, three models were chosen:

1. Gamma distribution (GLM)
2. Two-parameter exponential curve
3. Three-parameter exponential curve

These models were built using the custom model feature on Amazon SageMaker, and the Weighted Absolute Percentage Error (WAPE) metric was used to measure the accuracy of the models.

Figure 4 – Statistical models chosen for fit on historical, time-adjusted sales data.

The three-parameter model had the best model fit accuracy among the models chosen. This was due to the decay parameter in the model, which makes the decay faster initially and then slows it down, matching the observed sales trend. Model fit results at lower hierarchy levels are shown in Figure 5; for simplicity, SKUs were classified into ABC segments based on their saliency.

Figure 5 – Model fit output for the three-parameter exponential model.
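The post does not give the exact parameterization of the three-parameter exponential curve, so the following is only a sketch under an assumed form, y = a·exp(-b·t) + c: it fits the curve to time-adjusted sales with SciPy and scores the fit with the WAPE metric named above. The sample data is made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_decay(t, a, b, c):
    """Generic three-parameter exponential decay (assumed form, not Ganit's exact model)."""
    return a * np.exp(-b * t) + c

def wape(actual, predicted):
    """Weighted Absolute Percentage Error."""
    return np.sum(np.abs(actual - predicted)) / np.sum(np.abs(actual))

# Illustrative time-adjusted sales: days 1..183 since first sale (made-up numbers).
t = np.arange(1, 184)
sales = 120 * np.exp(-0.03 * t) + 8 + np.random.default_rng(0).normal(0, 3, t.size)

params, _ = curve_fit(three_param_decay, t, sales, p0=(100.0, 0.05, 5.0))
fitted = three_param_decay(t, *params)

print("a, b, c =", np.round(params, 3))
print("WAPE =", round(wape(sales, fitted), 4))
```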
Using the outputs from the three-parameter model, a decision board was designed using Amazon QuickSight. This decision board provided guidance to the business on procuring SKUs and distributing them across stores based on their attributes. With this decision board, the decision-maker can:

1. Get an estimate of what quantity to procure overall, in accordance with the budget allocated for procuring a new fast fashion SKU.
2. Efficiently allocate those procured SKUs based on probability of success, shelf space available, and similar factors.

Figure 6 – Decision board for fast fashion SKU procurement and initial allocation.

For regular SKUs, the auto-replenishment model has two engines: an intelligent demand forecasting model and an inventory management system.

Demand Forecasting Engine

Amazon Forecast was chosen to build the intelligent forecasting model for the auto-replenishment system. This model was designed to predict demand at the Store-SKU-Week level for a rolling six weeks. The datasets used were:

1. Historical Target Time Series (TTS) data, used to learn sales trends and seasonality.
2. Regressor Time Series (RTS) data, which includes factors like promotion, liquidation, stockouts, and holidays, so the model learns the impact on demand of events that occurred in the past.
3. Store-Item Metadata, used to capture synergies like halo and cannibalization effects between SKUs. The halo effect occurs when the purchase of one SKU positively correlates with the purchase of another, that is, when two SKUs are frequently bought together; the cannibalization effect is when the purchase of one SKU negatively impacts the demand for another SKU.

TTS, RTS, and Store-Item Metadata were fed as the inputs to Amazon Forecast. Ganit tried and tested multiple modelling techniques, namely exponential smoothing (ETS), ARIMA and its variations, Prophet, CNN-QR, and DeepAR+ (the AutoML feature was also used). The CNN-QR model produced the best acceptable results and was chosen as the forecasting model.

During model design, three forecasts were generated at the p40, p50, and p60 quantiles, with p50 being the base quantile, which has an equal probability of over- and under-forecast. Quantile selection was based on SKU classification (SKUs were classified into fast- and slow-moving based on days of inventory): p60 was chosen for fast-moving SKUs, as the business impact of customer loss is significantly higher than that of holding extra inventory, and p50 was chosen for slow-moving SKUs. Once the forecast export was complete, the files were combined to yield the consolidated forecast file. Using the historical estimates, Ganit ran the forecast file through its bias corrector mechanism to adjust for bias and select the right quantile for each store-SKU combination.

Inventory Management System

Two key elements are required to build an efficient inventory management system: safety stock (SS) and reorder point (ROP). Ganit incorporated the forecasted demand and its variability into the SS and ROP calculations for an efficient stock replenishment system and proper allocation of SKUs across different stores:

Safety stock (SS) = Minimum display quantity required at store + Demand variability
Reorder point (ROP) = SS + Rate of sale (RoS) * (Warehouse-to-store lead time + Purchase time)

Automated alerts and transfer orders from warehouse to stores were raised when net inventory at a store (stock on hand at store + stock in transit + stock allocated to the store) was less than the reorder point. The automated inventory management system helped the client eliminate manual intervention in its procurement team, thereby minimizing stockout conditions caused by manpower shortage.
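The formulas above translate directly into code. The minimal sketch below assumes demand variability is quantified as a service-level z-score times the standard deviation of forecasted weekly demand; the post does not specify how variability is measured, so that part is an assumption.

```python
def safety_stock(min_display_qty, demand_std, z=1.65):
    """SS = minimum display quantity + demand variability.
    Variability is assumed here to be z * std-dev of weekly demand (assumption)."""
    return min_display_qty + z * demand_std

def reorder_point(ss, rate_of_sale, lead_time_days, purchase_time_days):
    """ROP = SS + RoS * (warehouse-to-store lead time + purchase time)."""
    return ss + rate_of_sale * (lead_time_days + purchase_time_days)

def needs_replenishment(on_hand, in_transit, allocated, rop):
    """Raise a transfer order when net inventory falls below the reorder point."""
    net_inventory = on_hand + in_transit + allocated
    return net_inventory < rop

# Illustrative numbers only.
ss = safety_stock(min_display_qty=12, demand_std=4.0)
rop = reorder_point(ss, rate_of_sale=3.5, lead_time_days=4, purchase_time_days=2)
print(round(ss, 1), round(rop, 1), needs_replenishment(10, 5, 2, rop))
```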
Production System Development

A robust technical architecture for the production system was designed and implemented following AWS Well-Architected best practices, enabling a sustainable, scalable, and cost-effective tool.

Figure 7 – Architecture for the automated replenishment system for regular SKUs.

1. Historical demand and regressor time series data was stored in Amazon Redshift, an optimized data warehouse with massive data processing speed for instantaneous data retrieval.
2. The latest regressor-related information was loaded to Amazon Simple Storage Service (Amazon S3) by business users to keep the data repository for forecast model development up to date.
3. Amazon SageMaker was used to identify the hypothesis list and perform the analysis required to understand the correlation between the regressors and demand.
4. Amazon S3 served as the transformed data layer, holding cleaned and processed data ready for analytical consumption and storing the forecast outputs from Amazon Forecast.
5. Amazon Forecast was used to test and run different models (ARIMA, Prophet, ETS, BSTS, DeepAR+, and CNN-QR) to improve accuracy levels.
6. AWS Glue was used to run the bias correction mechanism and perform reorder point calculations with near real-time stock inputs from the data warehouse.
7. Amazon QuickSight was used to estimate the procurement quantity based on the budget provided by the user and to allocate the SKUs across the stores.

The end-to-end process ran within the AWS ecosystem, secured through its innate features such as AWS Identity and Access Management (IAM) access policies, security groups, virtual private cloud (VPC), row-level security for certain users, and data encryption using AWS Key Management Service (AWS KMS).

Business Impact

For fast fashion SKUs, Ganit observed that cost-per-invoice for procurement reduced by ~15%, improving the working capital of the division. Efficient allocation of SKUs led to increased revenue of ~3% and a reduction in damage of goods (shrinkage loss) of ~18%, thereby improving both the top and bottom line of the business unit.

For regular SKUs, Ganit defined the baseline as a weighted average of the last four weeks for the same day (in the absence of an earlier forecasting model) and estimated a ~12% improvement in forecast accuracy (from 71% to 83%). This automated replenishment system reduced inventory turns by ~2 days (improved working capital), reduced stockouts by ~3%, and delivered a topline increase of ~1.4%.

Conclusion

A machine learning-based procurement and auto-replenishment system helped Ganit's client unlock value in its existing value chain. Given the current dynamics and competition in the market, companies need to work towards unleashing the true capabilities of data and AI/ML. To give your supply chain operations an edge using the power of ML and data analytics, Ganit recommends you apply Amazon Forecast and Amazon SageMaker to unlock additional value from your existing system.

To learn more about Ganit and its solutions, reach out to info@ganitinc.com.

Ganit – AWS Partner Spotlight
Ganit is an AWS Partner that provides intelligent solutions at the intersection of hypothesis-based analytics, discovery-driven AI, and new-data insights.
Contact Ganit | Partner Overview

TAGS: AWS Competency Partners, AWS Partner Guest Post, AWS Partner References, AWS Partner Solutions Architects (SA), AWS Partner Success Stories, AWS Service Delivery Partners, Ganit
Generating 100000 Images Daily Using Amazon ECS _ Scenario Case Study _ AWS.txt
Accelerated time to market for game studios

Based on its first 3 months on the market, Scenario hopes to soon be a household name in the gaming industry. "We just launched our mobile app and acquired companies doing texture generation and art pixelization, which will be built into Scenario," says Nivon. "We're also working on 3D-image generation, and we're not constrained by the infrastructure, so we have plenty to work on."

Scenario Incorporated is a generative artificial intelligence company that accelerates time to market for game developers by harnessing artificial intelligence to create style-aligned images and assets in minutes.

Scenario was founded to revolutionize the way in-game and marketing assets are produced for studios. For example, without the assistance of AI, game artists spend valuable time on repetitive tasks to mass-produce assets for their games. This time could be spent creating more original visuals that attract players and make games more engaging. "It's super time consuming for game artists to generate assets, edit them, send them for approval, and go back and forth with their colleagues," says Marie Gerard, head of growth at Scenario. "That's not the core of what an artist in the gaming industry wants to do."

Because AI is not inherently creative, Scenario needed its solution to be simple for customers to interact with. "As a game studio, you bring your own art to Scenario, and our solution accelerates the development process by generating style-aligned images," says Hervé Nivon, cofounder and chief technology officer of Scenario. "The challenge is scalability—when customers generate images, they're not willing to wait for minutes." Scenario has to deliver images in seconds so that customers can train their models and generate the game assets that suit their aesthetic.

2 months to build a generative AI offering

After launching its beta in December of 2022, Scenario scaled to over 40 countries in 3 months. "We haven't had any downtime since our launch, even though we've been growing so quickly," says Nivon. "Our company has served and generated millions of images with only three people, proving a new use case for generative AI with little time and effort." As of March 2023, Scenario provides customers with approximately 100,000 images each day. Scenario expects that its tools will have a lasting impact on the game industry. If artists no longer have to devote time to marketing and other repetitive tasks related to asset generation, they can focus on producing more rich, detailed, and original content. "Michelangelo had assistants, and so do the fine artists of today," says Gerard. "Scenario gives game artists an AI assistant so that they can focus on creative work." Similarly, if game developers can easily generate game assets, they can spend more time creating engaging storylines. "Scenario is empowering creatives to waste less time on repetitive tasks and devote themselves to the game they're developing," says Gerard.

Scenario built its solution exceptionally fast. "We wrote the first line of code on October 13, 2022," says Nivon. "We built the beta in 2 months with only three engineers, and Scenario generated over one million images in its first 2 weeks." The company used a host of AWS services to accelerate its development process.
It chose Amazon API Gateway, a fully managed service to create, publish, and secure APIs at nearly any scale, to act as the "front door" for its applications. AWS Cloud Development Kit (AWS CDK) accelerates cloud development using common programming languages to model your applications.

Scaled to 40 countries in 3 months
Liberated game artists from noncreative tasks
Millions of images provided with three engineers

Outcome | Continuing to Scale Rapidly on AWS

Game development company Scenario Incorporated (Scenario) wanted to reduce time to market for game studios by using generative artificial intelligence (AI) to create style-consistent assets, but it had to deliver fast to meet industry demand for its offering. Studios need to generate many assets and variations based on their artwork, and Scenario aims to assist artists by putting AI to work on these noncreative tasks.

About Scenario

Solution | Building a Generative AI Offering in 2 Months Using AWS Batch

Learn how Scenario accelerated time to market for game studios using Amazon ECS. "With only three engineers, we built the cloud backend, the infrastructure, and the native mobile app," says Nivon. Scenario implemented a continuous integration and continuous deployment process on AWS Cloud Development Kit (AWS CDK), a tool that accelerates cloud development using common programming languages to model applications. "Without AWS CDK, Scenario wouldn't have been possible. All the infrastructure is deployed through it, so we are doing almost nothing manually," says Nivon. The company also uses AWS Batch, which efficiently runs hundreds of thousands of batch and machine learning computing jobs while optimizing compute resources, to train its machine learning models. "The strategy was to use AWS services that reduce the development workload and are simple to maintain, while meeting low-latency and availability requirements," says Nivon. In keeping with that strategy, Scenario also uses Amazon ECS to run the containers that its image-generation application uses.

How Scenario Produces 100,000 Images Daily Using Generative AI on AWS

Scenario plans to remain all in on AWS as it continues to grow. "The culture of AWS is really part of our DNA," says Nivon. "We are hiring for leadership principles, customer obsession, and a bias for action. Working with that culture in mind is simple, and those values have greatly helped us achieve our goals."

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

To get its product up and running quickly, Scenario committed to going all in on Amazon Web Services (AWS). The company used Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service, to build its generative AI offering.
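To make the infrastructure-as-code approach concrete, here is a minimal AWS CDK sketch in Python of a containerized service running on Amazon ECS with AWS Fargate behind a load balancer. It is illustrative only: the stack name, container image, and sizing are hypothetical assumptions, not Scenario's actual code.

```python
# Minimal AWS CDK (v2, Python) sketch: an ECS/Fargate service for a containerized API.
# Names, image, and sizing are illustrative assumptions.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns
from constructs import Construct

class ImageGenServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "ServiceVpc", max_azs=2)
        cluster = ecs.Cluster(self, "ImageGenCluster", vpc=vpc)

        # Fargate service fronted by an Application Load Balancer.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "ImageGenService",
            cluster=cluster,
            cpu=1024,
            memory_limit_mib=4096,
            desired_count=2,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry(
                    "public.ecr.aws/docker/library/nginx:latest"),  # placeholder image
                container_port=80,
            ),
        )

app = App()
ImageGenServiceStack(app, "ImageGenServiceStack")
app.synth()
```

Because the whole stack is expressed in code, deploying or updating it is a single `cdk deploy`, which matches the "almost nothing manual" workflow described in the quote above.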
Using Scenario's API-first offering, studios can generate hundreds of usable characters, props, and landscapes for their games in minutes from team workspaces or directly within their games.

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications.

Opportunity | Using AWS CDK to Accelerate Cloud Development

AWS Batch lets developers, scientists, and engineers efficiently run hundreds of thousands of batch and ML computing jobs while optimizing compute resources, so you can focus on analyzing results and solving problems.
Generative AI for Telcos_ taking customer experience and productivity to the next level _ AWS for Industries.txt
AWS for Industries Generative AI for Telcos: taking customer experience and productivity to the next level by Chris Featherstone | on 16 JUN 2023 | in Amazon CodeWhisperer , Amazon SageMaker JumpStart , Generative AI , Industries , Telecommunications | Permalink | Comments |  Share According to a recent Gartner ® CEO survey – The Pause and Pivot Year, what is the “top new technology that CEOs believe will significantly impact their industry over the next three years”? You guessed it: Artificial Intelligence. “21% of CEO’s say AI is the top disruptive technology.” i Telcos are not alone in recognizing the immense power of artificial intelligence (AI) – virtually all business leaders are eager to harness its potential. There are several exciting variants, but one that has captured everyone’s attention recently is generative AI. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. This technology promises to enhance customer experiences, boost employee productivity, streamline operations, and much more. Mark Raskino, VP analyst at Gartner , said generative AI will “profoundly impact business and operating models.” ii Telcos (and everyone else) are racing to invest in this transformative capability to avoid being left behind. However, realizing the full potential of generative AI requires the right infrastructure, expertise, and support. In this post, we explore some of the most promising use cases for Telcos and explain how AWS can help you innovate with generative AI. “Fear of missing out [FOMO] is a powerful driver of technology markets. AI is reaching the tipping point where CEOs who are not yet invested become concerned that they are missing something competitively important.” Mark Raskino, VP Analyst, Gartner iii Generative AI represents the next evolution in AI Generative AI represents the next evolution in AI, seamlessly empowering Telcos to create diverse types of content, such as text, images, audio, and synthetic data. This capability is a significant time-saver and productivity booster, providing accurate and up-to-date information that fills skills gaps and enables Telco employees to focus on other crucial tasks. Here are some compelling use cases for generative AI in the Telco industry: Customer support – Instantly providing accurate and personalized responses to customer queries through chatbots and virtual assistants. Network performance – Identifying potential network issues, suggesting troubleshooting steps, and automating maintenance tasks. Marketing – Predicting customer preferences, generating targeted content, and offering smart product recommendations. Software development – Automating software development with text/voice to code, filling skills gaps, and empowering non-coding specialists. Sales – Improving productivity and sales with B2B offer generation and sales toolkits. Operations – Producing insights to help optimize operating costs and reducing revenue leakages through cross-platform correlation and analysis. The benefits of adopting generative AI are clear: more innovation, more efficient services, more productive employees, and, ultimately, happier customers. All of these factors contribute to a significant competitive advantage. However, we are still in the early days. Customers have told us there are a few big things standing in their way today. 
First, they need a straightforward way to find and access high-performing foundation models (FMs) that give outstanding results and are best-suited for their purposes. Second, customers want integration into applications to be seamless, without having to manage huge clusters of infrastructure or incur large costs. Finally, customers want it to be easy to take the base FM and build differentiated apps using their own data (a little data or a lot). Since the data customers want to use for customization is incredibly valuable IP, they need it to stay completely protected, secure, and private during that process, and they want control over how their data is shared and used. And whatever customers are trying to do with FMs—running them, building them, customizing them—they need the most performant, cost-effective infrastructure that is purpose-built for machine learning (ML). Fortunately, Telcos can overcome these challenges and achieve dramatic savings and productivity gains by selecting the most performant and cost-effective infrastructure that is purpose-built for machine learning. This is where AWS comes to the rescue. How AWS supports Telcos in exploring the potential of generative AI:

Choosing the right Foundation Model. Amazon Bedrock is a managed service that provides access to generative AI models from leading AI startups like AI21 Labs, Anthropic, and Stability AI, and Amazon's own Titan models. This enables Telcos to select the right model for their required use case. In addition, all models are available through APIs, which makes it easy to build generative AI capabilities into customer and third-party applications (see the sketch after this list). Amazon SageMaker JumpStart offers FMs not available in Amazon Bedrock, such as Cohere and LightOn, as well as open source models such as Flan-T5, GPT-J, and BLOOM.

Saving Time and Money on Foundation Model Training. Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances powered by AWS Trainium are purpose-built for high-performance deep learning (DL) training of generative AI models. They reduce the time required to train models from months to weeks, or even days, while also lowering costs. This enables Telcos to save up to 50% on training costs versus other EC2 instances.

Improving Productivity and Reducing Deployment Costs. When deploying generative AI models at scale, most costs are associated with running the models and doing inference. Fortunately, Telco customers can cost-effectively crunch massive amounts of data with the help of Amazon EC2 Inf2 instances powered by AWS Inferentia2. Inf2 instances are optimized for large-scale generative AI applications with models containing hundreds of billions of parameters (and deliver up to 4x higher throughput and up to 10x lower latency than Inf1 instances).

Building Applications Faster and More Securely. Amazon CodeWhisperer radically improves developer productivity by making coding seamless. The AI coding companion uses a foundation model to generate code suggestions in real time based on developers' comments in natural language and prior code in an integrated development environment. It also has built-in security scanning (powered by automated reasoning) for finding and suggesting remediations for hard-to-detect vulnerabilities.
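As a concrete illustration of the API-based model access mentioned above, the following Python (boto3) sketch sends a prompt to a text model through Amazon Bedrock. The model ID, prompt, and request body shown are illustrative assumptions; the exact body format depends on the model provider you choose and on the models available in your account.

```python
# Illustrative sketch: invoking a text model through Amazon Bedrock with boto3.
# Model ID and request/response fields are assumptions and provider-dependent.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "Summarize the customer's issue and suggest the next troubleshooting step: ..."

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",   # assumed model ID for illustration
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
    }),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

A pattern like this could sit behind a contact-center chatbot or a network-operations assistant, with the prompt assembled from the customer's own data.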
Are you prepared to unleash the full potential of generative AI?

At AWS, we have a mission to empower every developer with AI/ML capabilities, and we have a long-standing history of collaborating with Telcos to implement a wide range of AI initiatives. We continually develop purpose-built ML services and trained models to address everyday use cases, such as automatic object recognition, voice-to-text transcription, recommendation generation, fraud detection, chatbots, and automated call centers. Moreover, we understand the importance of tailoring these services to Telco-specific needs. We pay meticulous attention to the unique characteristics of Telco data and customer behaviors, ensuring seamless and secure integration with other Telco-specific data sources like the network. We invite you to explore how AWS can accelerate your innovation, streamline cost management, and keep you ahead of the competition in a Telco-focused generative AI workshop, and equip yourself with the knowledge and tools to thrive in the rapidly evolving landscape. Register here to learn more.

i Gartner, 2023 CEO Survey — The Pause and Pivot Year, Mark Raskino, Stephen Smith, Kristin Moyer, Gabriela Vogel, 17 April 2023
ii Gartner Press Release, Gartner Survey Finds CEOs Cite AI as the Top Disruptive Technology Impacting Industries, May 17, 2023
iii Gartner Press Release, Gartner Survey Finds CEOs Cite AI as the Top Disruptive Technology Impacting Industries, May 17, 2023

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Chris Featherstone

Chris Featherstone is an AI and data expert who helps organizations improve their business processes and workflows through innovative technology solutions. At AWS, Chris specializes in data architectures, chatbots, virtual assistants, and all things artificial intelligence and machine learning, specifically for communication service providers and telecommunications customers. With over 26 years of experience, Chris has worked with dozens of enterprise clients to build custom AI, machine learning, and automated conversational interfaces tailored to their needs. His work focuses on optimizing data governance and usage, automating manual tasks, personalizing user experiences, and enabling smarter decision making through data-driven insights and AI/ML. Chris is passionate about the possibilities of AI and its potential to transform businesses. Using his technical and domain expertise, Chris has delivered data and AI solutions that drive real impact for organizations. You will find him speaking at re:Invent as well as other industry conferences. In his spare time, you'll find Chris and his family in the mountains of Montana where they reside.
Generative AI with Large Language Models New Hands-on Course by DeepLearning.AI and AWS _ AWS News Blog.txt
AWS News Blog Generative AI with Large Language Models — New Hands-on Course by DeepLearning.AI and AWS by Antje Barth | on 28 JUN 2023 | in Announcements , Artificial Intelligence , Generative AI , Launch , News | Permalink | Comments |  Share Generative AI has taken the world by storm, and we’re starting to see the next wave of widespread adoption of AI with the potential for every customer experience and application to be reinvented with generative AI. Generative AI lets you to create new content and ideas including conversations, stories, images, videos, and music. Generative AI is powered by very large machine learning models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). A subset of FMs called large language models (LLMs) are trained on trillions of words across many natural-language tasks. These LLMs can understand, learn, and generate text that’s nearly indistinguishable from text produced by humans. And not only that, LLMs can also engage in interactive conversations, answer questions, summarize dialogs and documents, and provide recommendations. They can power applications across many tasks and industries including creative writing for marketing, summarizing documents for legal, market research for financial, simulating clinical trials for healthcare, and code writing for software development. Companies are moving rapidly to integrate generative AI into their products and services. This increases the demand for data scientists and engineers who understand generative AI and how to apply LLMs to solve business use cases. This is why I’m excited to announce that DeepLearning.AI and AWS are jointly launching a new hands-on course Generative AI with large language models on Coursera’s education platform that prepares data scientists and engineers to become experts in selecting, training, fine-tuning, and deploying LLMs for real-world applications. DeepLearning.AI was founded in 2017 by machine learning and education pioneer Andrew Ng with the mission to grow and connect the global AI community by delivering world-class AI education. DeepLearning.AI teamed up with generative AI specialists from AWS including Chris Fregly , Shelbee Eigenbrode , Mike Chambers , and me to develop and deliver this course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. We developed the content for this course under the guidance of Andrew Ng and with input from various industry experts and applied scientists at Amazon, AWS, and Hugging Face. Course Highlights This is the first comprehensive Coursera course focused on LLMs that details the typical generative AI project lifecycle, including scoping the problem, choosing an LLM, adapting the LLM to your domain, optimizing the model for deployment, and integrating into business applications. The course not only focuses on the practical aspects of generative AI but also highlights the science behind LLMs and why they’re effective. The on-demand course is broken down into three weeks of content with approximately 16 hours of videos, quizzes, labs, and extra readings. The hands-on labs hosted by AWS Partner  Vocareum let you apply the techniques directly in an AWS environment provided with the course and includes all resources needed to work with the LLMs and explore their effectiveness. In just three weeks, the course prepares you to use generative AI for business and real-world applications. Let’s have a quick look at each week’s content. 
Week 1 – Generative AI use cases, project lifecycle, and model pre-training

In week 1, you will examine the transformer architecture that powers many LLMs, see how these models are trained, and consider the compute resources required to develop them. You will also explore how to guide model output at inference time using prompt engineering and by specifying generative configuration settings. In the first hands-on lab, you'll construct and compare different prompts for a given generative task. In this case, you'll summarize conversations between multiple people. For example, imagine summarizing support conversations between you and your customers. You'll explore prompt engineering techniques, try different generative configuration parameters, and experiment with various sampling strategies to gain intuition on how to improve the generated model responses.

Week 2 – Fine-tuning, parameter-efficient fine-tuning (PEFT), and model evaluation

In week 2, you will explore options for adapting pre-trained models to specific tasks and datasets through a process called fine-tuning. A variant of fine-tuning, called parameter efficient fine-tuning (PEFT), lets you fine-tune very large models using much smaller resources—often a single GPU. You will also learn about the metrics used to evaluate and compare the performance of LLMs. In the second lab, you'll get hands-on with parameter-efficient fine-tuning (PEFT) and compare the results to prompt engineering from the first lab. This side-by-side comparison will help you gain intuition into the qualitative and quantitative impact of different techniques for adapting an LLM to your domain specific datasets and use cases.

Week 3 – Fine-tuning with reinforcement learning from human feedback (RLHF), retrieval-augmented generation (RAG), and LangChain

In week 3, you will make the LLM responses more humanlike and align them with human preferences using a technique called reinforcement learning from human feedback (RLHF). RLHF is key to improving the model's honesty, harmlessness, and helpfulness. You will also explore techniques such as retrieval-augmented generation (RAG) and libraries such as LangChain that allow the LLM to integrate with custom data sources and APIs to improve the model's response further. In the final lab, you'll get hands-on with RLHF. You'll fine-tune the LLM using a reward model and a reinforcement-learning algorithm called proximal policy optimization (PPO) to increase the harmlessness of your model responses. Finally, you will evaluate the model's harmlessness before and after the RLHF process to gain intuition into the impact of RLHF on aligning an LLM with human values and preferences.

Enroll Today

Generative AI with large language models is an on-demand, three-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. Enroll for generative AI with large language models today.

— Antje

Antje Barth is a Principal Developer Advocate for AI and ML at AWS. She is co-author of the O'Reilly book Data Science on AWS. Antje frequently speaks at AI/ML conferences, events, and meetups around the world. She also co-founded the Düsseldorf chapter of Women in Big Data.
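For readers who want a feel for the Week 1 material on generative configuration settings before enrolling, here is a small, self-contained Python sketch using the open-source Hugging Face transformers library; it is not the course's official lab code, and the model choice and parameter values are illustrative.

```python
# Illustrative only: compare generation settings for dialogue summarization.
# Uses an open-source FLAN-T5 checkpoint; not the course's lab environment.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = (
    "Customer: My internet keeps dropping every evening.\n"
    "Agent: Thanks for letting us know. Have you tried restarting the router?\n"
    "Customer: Yes, twice. It helps for an hour, then it drops again."
)
prompt = f"Summarize the following support conversation.\n\n{dialogue}\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: deterministic and usually more conservative output.
greedy = model.generate(**inputs, max_new_tokens=60)

# Sampling with a higher temperature: more varied, sometimes more natural output.
sampled = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                         temperature=0.9, top_p=0.9)

print("Greedy :", tokenizer.decode(greedy[0], skip_special_tokens=True))
print("Sampled:", tokenizer.decode(sampled[0], skip_special_tokens=True))
```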
Genpact Delivers Innovative Services to Customers Faster by Running Critical Applications on AWS _ Case Study _ AWS.txt
With AWS Identity and Access Management (AWS IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.

Genpact is currently implementing a cloud-based contact center on Amazon Connect and AWS serverless technologies. Says Kumar, "We're looking to further modernize our business applications, and AWS Professional Services is helping us do that." Kumar concludes, "With AWS, we have made our infrastructure more agile, resilient, automated and flexible to support dynamic business demand and drive collaborative innovation."

To increase innovation agility, Genpact engaged Amazon Web Services (AWS) Professional Services and migrated its application environment to AWS. The company established a global AWS Landing Zone, with an exclusive zone for its business in China, allowing customers to set up a multi-account, scalable, and secure AWS environment. Srihari notes, "Our custom AWS Landing Zone has helped Genpact ensure resource deployments are in sync with global regions and that new account organization units are able to automatically deploy resources on demand."

Outcome | Delivering Solutions Faster with On-Demand Deployment

With the agility gained from migrating to AWS, Genpact has significantly reduced deployment times for new applications. "Previously, it would take at least 12 weeks to procure and provision servers to deploy an application. Now, we can provision on demand," Srihari says. Genpact leverages AWS Service Catalog to govern infrastructure-as-code templates, AWS Config to deploy a compliance-as-code framework (see the sketch below for an illustration), and Amazon API Gateway to create application programming interfaces (APIs) at scale. The company migrated 45 business-critical applications, including customer-facing applications and core services such as Active Directory, from its on-premises data centers to AWS. In total, the company shut down over 1,300 physical servers and decommissioned 14 data centers. Furthermore, Genpact optimized operational costs on AWS, largely as a result of decommissioning 14 data centers. "We've eliminated hardware refresh and maintenance costs, as well as data center power and cooling costs," Srihari explains.

To learn more, visit aws.amazon.com/solutions/cloud-operations. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

About Genpact

Genpact uses over 30 AWS services, including AWS Config, AWS Service Catalog, and Amazon API Gateway, to support a wide range of business applications. As a result, the company can now provision infrastructure on demand, quickly set up sandbox environments, and scale seamlessly. Genpact can also quickly set up sandbox environments for developers to test new features and applications before moving them to production. With accelerated testing and deployment times, Genpact can deliver solutions to customers faster and thus differentiate its business from competing professional services providers.

Genpact Delivers Innovative Services to Customers Faster by Running Critical Applications on AWS

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
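As a small illustration of what a compliance-as-code rule can look like, the following Python (boto3) sketch enables an AWS managed Config rule that checks whether S3 buckets block public read access and then queries its compliance state. The rule name and scope are illustrative assumptions, not Genpact's actual configuration.

```python
# Illustrative sketch: deploying an AWS managed Config rule with boto3.
# Rule name and choice of managed rule are assumptions, not Genpact's setup.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Description": "Checks that S3 buckets do not allow public read access.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# The compliance state can later be reviewed centrally:
summary = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["s3-bucket-public-read-prohibited"]
)
print(summary["ComplianceByConfigRules"])
```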
Genpact collaborates with AWS Professional Services to securely migrate its infrastructure to the cloud, delivering solutions to global customers faster and more efficiently.

Genpact is a global professional services company with 800 clients across the globe. To gain agility and flexibility, Genpact migrated 45 business-critical applications to AWS. "If we wanted to test new applications, we would typically spend 12–16 weeks to procure and provision new servers," says Santhosh Srihari, cloud & operations lead at Genpact.

Solution | Migrating 45 Business-Critical Applications

45 applications migrated
1,300+ servers migrated
14 data centers decommissioned
12 weeks of time saved on infrastructure setup

Mohan Kumar, cloud engineering lead at Genpact, says, "We partner with our clients to identify their key challenges and create innovative solutions based on process, data, technology and AI expertise to help them overcome those challenges and deliver transformation at scale." Genpact is a global professional services company dedicated to delivering outcomes that transform businesses. The company serves 800 clients across the globe in industries including financial services, consumer goods, retail, healthcare, manufacturing, and technology.

Opportunity | Improving Pace of Innovation and Provisioning

"Previously, it would take at least 12 weeks to procure and provision servers to deploy a new application. Now, we can provision on demand."
Santhosh Srihari, Cloud & Operations Lead, Genpact

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.

"We've improved our security posture with the ability to manage security from a central location on AWS, deploying rules that are specific to our technology and blocking malicious events," Srihari says. In collaboration with AWS Professional Services, Genpact embarked on an Experience-Based Acceleration (EBA) program, a step-by-step transformation methodology that expedites the AWS Cloud migration journey by empowering internal teams. "EBA was a highly collaborative experience, mobilizing teams to work towards a common goal by breaking down silos and removing blockers to accelerate and scale cloud adoption," says Kumar.

Genpact is a global professional services firm that transforms its clients' businesses and shapes their futures. The company is guided by its real-world experience redesigning and running thousands of processes for hundreds of global companies. With deep industry and functional expertise, Genpact runs digitally enabled operations and applies its Data-Tech-AI services to design, build, and transform businesses.

To bolster security, Genpact implemented AWS Identity and Access Management (IAM), defining detailed roles for functional teams in its global organization. Furthermore, Genpact's AWS infrastructure yields proactive security insights the company uses to thwart potential threats. Should an issue occur, engineers can perform a root cause analysis to understand the error and avoid a recurrence.
Geo.me Reduces Customers Annual Geospatial Costs by up to 90 Using Amazon Location Service _ Geo.me Case Study _ AWS.txt
Learn how Geo.me in the software industry optimized costs for customers using Amazon Location Service.

90% reduction in annual geocoding costs for customers
Enhanced customer sustainability goals
Increased company scalability
Expanded company market opportunities

As for geocoding, "Amazon Location Service offered better terms of use than our existing solution, thus reducing annual geocoding costs for our customers by more than 90 percent while also removing onerous compliance processes from their workflows," says Grant. Amazon Location Service transactional geocoding is a tenth of the cost of other providers, and customers can save even more by combining it with stored geocodes for frequently accessed addresses.

Solution | Opening Industry Opportunities and Optimizing Costs through Enhanced Location Data Storage

It needed a new location data solution to better serve its global customers in the retail, logistics, transportation, and insurance industries. Its existing location data service provider prohibited the storing of geocoded data and was too expensive for some customers. Geo.me was dealing with millions of geocoded records that it wanted to store or cache. Geo.me needed a backend system capable of storing these location records in a secure and private way that was cost effective while performing geospatial calculations. Additionally, Geo.me's existing solution could not handle truck routing, so the company sought a global solution, which was important to much of its customer base and would avoid needing different regional truck routing providers.

Geo.me Reduces Customers' Annual Geospatial Costs by up to 90% Using Amazon Location Service

Looking forward, Geo.me is actively exploring how to use mapping capabilities with Amazon Location Service to visualize and optimize the data they collect. For example, insurance customers can geolocate risks and then analyze the concentration of those risks. Customers can use geofencing capabilities to analyze historical situations where an insured asset enters and exits permitted areas, high-risk areas, and low-risk areas, and adjust fees based on the data they collect. "Now that Amazon Location Service is starting to provide out-of-the-box building blocks to do things like location data storage, the focus can shift to what customers can do with that data," says Grant. "There's a huge amount of analytical capability that Amazon Location Service has the potential to unlock."

As an AWS Partner since 2014, Geo.me had the opportunity to be an early adopter of Amazon Location Service. Because the service includes routing, tracking, geofencing, stored geocodes, and other managed location data services that Geo.me offers to its customer base as a service, Geo.me did not need to create its own solutions. Such efficiencies aligned with the company philosophy of using recognizable, managed services. This philosophy makes the best use of Geo.me's resources and has earned the company credibility with its customers. "We decided very early on in our evolution that we would always stand on the biggest shoulders we could," says Stuart Grant, cofounder and director of Geo.me.
Opportunity | Supporting Customers' Geospatial Information Needs

Geo.me was founded in 2008 and delivers location-based applications that provide geospatial web services like routing, geofencing, tracking, placing points of interest, and storing geolocation data for enterprises in the B2B sector. It does this in two ways. First, it builds digital mapping applications that take an asset or customer's location, including route, and render it onto a map to provide information on where the customer or asset is located at any given time. This includes truck routing, asset tracking, and locating specialized refueling stations and residential addresses on a map. Second, the company provides the capability of storing geocoded records for future geospatial calculations, assessments, and analysis.

Geospatial web services provider Geo.me opened industry opportunities, expanded innovation possibilities, and optimized costs for its customers using Amazon Location Service, a location-based service that makes it simple for developers to add geospatial data and location functionality to applications without compromising data security and user privacy. Geo.me enhances digital mapping solutions that engage customers, optimize deliveries, and help customers make better decisions.

Outcome | Adding Mapping Capabilities

Using Amazon Location, Geo.me has saved time that the team can now spend on product innovation, such as adding sophisticated heuristic algorithms to optimize route planning. "Because Amazon Location Service provides building blocks like geocoding or routing, which are core to any geospatial service, we can now shift the focus to what we do with the data we collect," says Grant. "We can now analyze that data and look at how more efficient heavy road transportation routes can be generated." Because Amazon Location incorporates HERE Technologies and Esri and integrates seamlessly with other AWS services, Geo.me gained access to mapping, geocoding, geofencing, asset tracking, and routing data on a global scale. The company could accelerate application development by using other AWS service capabilities outside of Amazon Location Service to meet its customer's needs.

About Geo.me

Geo.me is a software company that specializes in handling location data for large enterprises. Its solutions gather, analyze, and deliver location data to its customers using smartphone apps, navigational systems, and mobile devices.

Geo.me has helped European transportation customers plan and optimize delivery routes so that trucks can avoid roads that are narrow, unpaved, or otherwise unsuitable for heavy traffic. Using Amazon Location APIs, Geo.me clients can optimize routing to avoid roads where trucks are not allowed due to bridge heights and other regulations. By using Geo.me's solution to plan reliable routes, customers can more efficiently meet their sustainability targets; for example, customers could identify usage opportunities for the 24 percent of European intracountry truck journeys that run with empty vehicles.

Each month, Geo.me serves around 120 million API calls. Handling millions of geolocation records requires a system that can store geocoded records or use geospatial capabilities like routing, tracking, and locating points of interest to improve delivery times by optimizing the routing of vehicles.
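To illustrate the kind of geocoding call involved, here is a minimal Python (boto3) sketch that geocodes an address with Amazon Location Service and caches the coordinates for reuse. The place index name, the example address, and the in-memory cache are illustrative assumptions, not Geo.me's implementation, and storing results in practice requires a place index configured for storage-intended use.

```python
# Illustrative sketch: geocode an address with Amazon Location Service and
# cache the coordinates locally. Index name and caching are assumptions.
import boto3

location = boto3.client("location")
geocode_cache = {}  # stand-in for a persistent store of frequently used addresses

def geocode(address: str, index_name: str = "ExamplePlaceIndex"):
    """Return (longitude, latitude) for an address, using a simple cache."""
    if address in geocode_cache:
        return geocode_cache[address]

    response = location.search_place_index_for_text(
        IndexName=index_name,
        Text=address,
        MaxResults=1,
    )
    # Amazon Location returns positions as [longitude, latitude].
    position = response["Results"][0]["Place"]["Geometry"]["Point"]
    geocode_cache[address] = (position[0], position[1])
    return geocode_cache[address]

print(geocode("1 Example Street, London, United Kingdom"))
```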
Geo.me started using Amazon Web Services (AWS) solutions in 2008 and adopted Amazon Location Service in 2021. Using Amazon Location Service, Geo.me increased innovation by performing geospatial calculations that identified areas for route planning improvement and reduced annual geocoding costs by 90 percent.

Amazon Location Service makes it easy for developers to add location functionality, such as maps, points of interest, geocoding, routing, tracking, and geofencing, to their applications without sacrificing data security and user privacy.
Gileads Journey from Migration to Innovation on AWS _ Case Study _ AWS.txt
Increased sustainability and automated compliance
Enhanced operating model transformation
70% of data center footprint migrated to cloud

Outcome | Deriving Value from Data Analytics Using AWS

AWS and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory computing platform on AWS.

Sustainability and cost efficiency were other important considerations for Gilead. After thoroughly reviewing its infrastructure in 2020, the company decided to accelerate its cloud migration to reduce the carbon footprint of its data systems. "Migrating our data analytics to the cloud also meant that we could avoid large capital expenditure in bringing our data centers up to higher standards of resilience," says Berson. "Today, we manage over 50 PB of data on AWS."

About Gilead

Outside of the data mesh, Gilead has built several other solutions to break down data silos and creatively approach innovation. This includes the enterprise semantics search application, Morpheus, which increases search result accuracy while reducing data search result times by over 50 percent. Another example is a Gilead data marketplace with massive data transfer speeds, built on AWS Data Exchange.

Gilead's Journey from Migration to Innovation on AWS

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Solution | Implementing a Data Mesh Architecture on AWS

The underlying architecture uses several major AWS services. Gilead uses Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—to store and retrieve data at scale. The company also uses Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Storing data is only part of the challenge, however. Gilead adopted SAP HANA on AWS as part of its enterprise resource planning transformation.

For the past 35 years, Gilead has focused on bold advances in biopharmaceutical innovation, setting high standards for research into HIV, viral hepatitis, cancer, and other diseases. The company began migrating 70 percent of its workloads to AWS in 2020 to streamline and democratize data access. Learn how Gilead, a leading global biopharmaceutical organization, built a data mesh architecture on AWS to accelerate innovation and drug commercialization. Three years into its cloud transformation, Gilead has big plans for the future.
“The primary reason that we chose AWS was its passion for innovative transformation,” says Berson. “We had discussions on transforming the way clinical trials are performed and changing the way molecules are discovered.” Armed with its new cloud foundation on AWS, the company feels confident in its ability to deliver lifesaving treatments faster. Gilead adopted a data mesh approach to improve agility, accelerate insight generation, and increase its return on investment. A simplified user interface helped business units easily find data products from the catalog, inspect their quality, and get access to the data through a federated query engine. On the other side, four platform APIs reduced the friction for data producers to register their data products on the mesh, building a self-serve infrastructure. This also included observability and data quality APIs to record the data quality on a scorecard as a part of the data catalog. 50 PB Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. The primary reason that we chose AWS was its passion for innovative transformation. With AWS, we have developed an enterprise data solution to create better access to and analysis of data across the organization using a data mesh approach.” Unlocked Deutsch Today, the mesh hosts hundreds of data products in the catalog, providing useful descriptions, row-level and column-level access, and cross-lines of business coordination. The platform’s data stewards govern the quality by looking at scorecards. “Now, we have business, technical, and observability metadata, along with service-level objectives and quality in our catalog,” says Murali. “The data mesh platform has decentralized data ownership—we don’t have to chase subject matter experts to go find information about the data because we have that in a catalog.” Gilead chose Amazon Web Services (AWS) as its preferred cloud provider and began migrating its critical workloads from its data centers to the cloud. It chose AWS for its innovation, willingness to invest in co-innovation, and strong industry capabilities. Using AWS, Gilead has developed an enterprise data solution to create better access to and analysis of data across the organization, using a data mesh approach. Tiếng Việt Opportunity | Using AWS to Host and Manage 50 PB of Data  Italiano Customer Stories / Life Sciences AWS Data Exchange—which makes it simple to find, subscribe to, and use third-party data in the cloud. “We have a 38 PB observational dataset that previously took 36 hours for data transfer,” says Murali. “Now it takes 6 minutes.” After 1 year in this new phase of optimization, Gilead has seen operational and financial improvements across capital expenditure avoidance, software asset consolidation, cycle-time improvements, and compliance automation. Amazon Redshift—which uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes—to get from data to insights faster. The company also uses Learn more » agility to deliver innovation Gilead Sciences Inc. (Gilead) wanted to modernize its data infrastructure and use cloud innovation to improve its operational performance. With thousands of virtual machines running hundreds of regulated applications in on-premises data centers, the company was challenged to balance governance and agility. 
"We wanted to support our business stakeholders to innovate faster and discover drugs with higher efficacy," says Marc Berson, chief information officer (CIO) of Gilead. The company also wanted to increase its operational resilience for data recovery and backup in the event of a disaster without substantial capital investment. In addition, it wanted to automate GxP compliance to further streamline its processes. "We have aspirations to bring more than 10 transformative therapies to patients by 2030 and strategic priorities to expand internal and external innovation," says Murali Vridhachalam, head of cloud, data, and analytics at Gilead. Seamless access to trusted data was very important for Gilead to achieve these strategic priorities. The company realized it needed to move away from traditional monolithic data management approaches and apply modern engineering practices and organizational models to quickly generate insights and respond to changing business needs.

Gilead Sciences Inc. is a biopharmaceutical company that has pursued and achieved breakthroughs in medicine for more than 3 decades. The company is committed to advancing innovative medicines to prevent and treat life-threatening diseases, including HIV, viral hepatitis, and cancer.
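A data product registration of the kind described in the data mesh section above can be sketched with the AWS Glue Data Catalog APIs. The following Python (boto3) example registers a hypothetical data product table and attaches simple quality metadata as table parameters; the database name, table, columns, and quality fields are illustrative assumptions and are not Gilead's actual platform APIs.

```python
# Illustrative sketch: registering a data product in the AWS Glue Data Catalog
# with simple quality metadata. Database, table, and fields are hypothetical.
import boto3

glue = boto3.client("glue")

glue.create_database(
    DatabaseInput={"Name": "clinical_ops_data_products",
                   "Description": "Example domain database for data products."}
)

glue.create_table(
    DatabaseName="clinical_ops_data_products",
    TableInput={
        "Name": "trial_enrollment_daily",
        "Description": "Example data product: daily trial enrollment counts.",
        "Parameters": {                      # business/observability metadata
            "owner_domain": "clinical-operations",
            "quality_score": "0.97",
            "sla_freshness_hours": "24",
        },
        "StorageDescriptor": {
            "Columns": [
                {"Name": "trial_id", "Type": "string"},
                {"Name": "enrollment_date", "Type": "date"},
                {"Name": "enrolled_patients", "Type": "bigint"},
            ],
            "Location": "s3://example-bucket/data-products/trial_enrollment_daily/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {"SerializationLibrary":
                          "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"},
        },
    },
)
```

Registering products this way lets a catalog consumer see ownership and quality scores alongside the schema, which mirrors the scorecard idea described in the case study.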
Global Unichip Corporation Case Study.txt
About Global Unichip Corporation Français Benefits of AWS Running High-Performance Computing Workloads on Amazon EC2 Spot Instances Prevents costly system failures and replacement during operation Español Since data privacy is important to GUC, proteanTecs provides GUC an Amazon Virtual Private Cloud (Amazon VPC), which it runs on its own system using AWS. Any connection to the proteanTecs solution is using a virtual private network, or a secure closed channel, that reduces risk and prevents proteanTecs and GUC from seeing each other’s data. GUC and proteanTecs are collaborating on the next generation of interfaces, which will be developed using TSMC’s 3DFabric dies assembly as opposed to the side-by-side dies assembly in 2.5D generation. These interfaces will have hundreds of thousands of lines between the dies, greatly increasing computing power and memory in each ASIC. “Even in the very early stage of development, proteanTecs is already an integral part of our mechanism for reliability monitoring and repair,” says Elkanovich. “Now we can address reliability at all development stages—from architecture to physical implementation—together.”  日本語 Igor Elkanovich Chief Technology Officer, Global Unichip Corporation Growing in Scale and Complexity 한국어 Even in the very early stage of development, proteanTecs is already an integral part of our mechanism for reliability monitoring and repair.” proteanTecs runs its high-performance computing workloads on Intel Xeon processor–powered Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances. Its Kubernetes container orchestration system also runs on Amazon EC2 instances. And whenever proteanTecs sees a burst in workload, its Kubernetes cluster triggers a request to increase the number of Spot Instances so that proteanTecs can process that workload with ease. Using Spot Instances reduces the company’s compute costs by approximately 60 percent.  Get Started Amazon EC2 Facilitating Quality and Reliability of ASICs Using AWS Partner proteanTecs Every time GUC releases a new generation of ASICs, the design and processes become more complex. “We’ve multiplied the number of transistors, the chip complexity, and the processing power many times, and with the recent revolution in advanced packaging technology, we can now assemble many different dies together in one heterogeneous integrated circuit package,” explains Elkanovich. Big functional circuits are fabricated using several silicon dies. “There is a dense interconnect between the dies in order to provide high bandwidth and performance to our customers,” says Elkanovich. “They demand reliability because most of the ASICs go to mission-critical applications, like data center applications that grow exponentially. And once they grow, the effect of every failure worsens. We want to develop the most complex designs while increasing reliability. And this is a challenge we address with proteanTecs.”  AWS Services Used “To quickly provide GUC feedback on a very large amount of data, proteanTecs uses AWS to achieve the scalability and flexibility it needs to support high-performance computing workloads that run millions of simulations each day,” says Yuval Bonen, cofounder and vice president of software at proteanTecs. Through the AWS-powered proteanTecs analytics platform, GUC customers can closely monitor their ASICs to proactively detect and repair silicon failures. 中文 (繁體) Bahasa Indonesia proteanTecs also uses Amazon Relational Database Service (Amazon RDS) to store application metadata. 
Amazon RDS makes it simple to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. That saves the company’s DevOps team a lot of time.  Contact Sales Ρусский عربي 中文 (简体) Global Unichip Corporation (GUC) helps system and semiconductor companies develop application-specific integrated circuits (ASICs), or microchips. Each generation of ASICs has a more complex design and uses more advanced semiconductor processes, making it harder to reach quality targets. But these ASICs become components in data center systems, where uptime and system reliability are critical. To tackle that challenge, GUC engaged Amazon Web Services (AWS) Select Technology Partner proteanTecs, which uses deep data and machine learning to predict failures in electronics. Its software solution could monitor ASIC performance, even as ASICs operate in the field, with zero downtime or disruption to the system.  Building Additional Lines to Future Reliability GUC and proteanTecs first collaborated on GUC’s high-bandwidth memory interface IP for 2.5D die-to-die interconnects. In the typical design, the ASIC uses several high-bandwidth memory components with tens of thousands of lines connecting them. During normal ASIC operation, proteanTecs collects data from the Universal Chip Telemetry embedded in the ASIC and analyzes that data to assess the signal integrity of lines in the field. When proteanTecs detects a quality degradation for a line that may lead to future defects, the system replaces it with a preinstalled redundant line during the next maintenance cycle. This extends the ASIC’s lifecycle, prevents system failure, and avoids costly replacements of failing systems for customers’ data center applications. This entire process is accomplished with no downtime or disruption to the customers’ normal operation.  Amazon VPC GUC focuses on the design, interface intellectual property (IP) development, and management of ASIC manufacturing by its key shareholder, Taiwan Semiconductor Manufacturing Company (TSMC). The large-scale global semiconductor foundry manufactured 10,761 different products using 272 distinct technologies for 499 different customers in 2019. “We adopt a new semiconductor process, a new assembly technology, and new interfaces before the customer comes to us with their projects,” says Igor Elkanovich, chief technology officer at GUC. “We work very closely with TSMC so that while its technology is still in development, we are already starting to adopt it and develop IP in parallel. By the time TSMC technology is available for the customer, the IP is silicon proven and a part of GUC’s development flow.”  Amazon EC2 Spot Instances Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Türkçe Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Companies of all sizes across all industries are transforming their businesses every day using AWS. Contact our experts and start your own AWS Cloud journey today. 
English Amazon RDS GUC engaged proteanTecs to combine data derived from Universal Chip Telemetry technology embedded in the ASICs with predictive artificial intelligence and data analytics—using the proteanTecs cloud system on AWS—to track and repair silicon defects before they cause system failure. By taking these measures, GUC and proteanTecs can increase the quality and reliability of GUC’s ASICs. Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define.  Headquartered in Taiwan, Global Unichip Corporation (GUC) helps system and semiconductor companies design and develop application-specific integrated circuits (ASICs), or microchips. Its parent company, Taiwan Semiconductor Manufacturing Company, is a global semiconductor foundry. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.  Deutsch GUC Enlists AWS Partner proteanTecs to Increase ASIC Reliability and Quality at Scale Tiếng Việt Monitors and repairs ASICs in the field during normal system operation Italiano ไทย Even as customers’ data center applications grow and ASICs become more complex, GUC will continue to offer predictive ASIC monitoring using the solution offered by AWS Partner proteanTecs. “Some people think that with growing complexity, the reliability will inevitably be compromised,” says Elkanovich. “Our purpose is the opposite. Our goal is to bring our customers more scalability at an even better level of reliability.” 2021 Learn more » Achieves ASIC reliability and quality at scale GUC previously monitored its ASICs during the manufacturing process—but by using proteanTecs, it can maintain that visibility and repairability in the field. “We previously had little visibility into what happened in the ASICs,” says Elkanovich. “Once we added the proteanTecs solution, we got a totally different view. Now we observe and repair physical effects that we weren’t able to discover before.” Português
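The burst-scaling pattern described earlier, in which proteanTecs requests additional Spot capacity when simulation workloads spike, can be sketched with boto3 as follows. The AMI ID, instance type, and sizing heuristic are hypothetical placeholders, not proteanTecs' actual configuration.

```python
# Illustrative sketch: request extra EC2 Spot capacity when the job queue grows.
# AMI ID, instance type, and queue threshold are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

def scale_out_spot(pending_jobs: int, jobs_per_instance: int = 50) -> None:
    """Launch additional Spot Instances proportional to the pending workload."""
    instances_needed = max(0, -(-pending_jobs // jobs_per_instance))  # ceiling division
    if instances_needed == 0:
        return

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",        # placeholder AMI
        InstanceType="c5.4xlarge",               # placeholder compute-optimized type
        MinCount=instances_needed,
        MaxCount=instances_needed,
        InstanceMarketOptions={"MarketType": "spot"},
    )

scale_out_spot(pending_jobs=220)  # would request 5 Spot Instances in this sketch
```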
Glossika case study.txt
Glossika Builds Language-Learning Platform on AWS to Serve Users in 148 Countries

Glossika built its language-learning platform on AWS to ensure low latency for users in 148 countries and to gain access to on-demand compute and storage as it expands. The startup uses Amazon RDS to store lesson data, Amazon ElastiCache for Redis for caching, and Amazon CloudFront as a content delivery network.

About Glossika
Glossika is an education technology company headquartered in Taiwan whose online learning site and mobile app use common sentence structures to train people to understand and speak more than 60 languages. Serving customers in 148 countries, Glossika uses adaptive learning algorithms to offer customized content based on a student's language proficiency level, learning progress, and interests, making learning more efficient.

Opportunity
According to the Foreign Service Institute, where employees working in the US foreign affairs community receive training, it takes 600–2,200 class hours to learn a foreign language. These figures vary based on the complexity of the language relative to English and are also affected by a given learner's ability, past experience, and exposure to the target language. However, most teachers tend to apply a one-size-fits-all approach in designing curriculum, which may not be suited to every student's learning needs.

Glossika started out producing language-learning books and transitioned to an online model in 2017, followed by a mobile app in 2022. Upon going digital, Glossika chose Amazon Web Services (AWS) to build its IT infrastructure. Sheena Chen, chief operations officer at Glossika, says, "As a company striving for worldwide product adoption, we need cloud technology that can scale with us. AWS makes it easy to purchase the infrastructure we need right now and to adjust as we expand. AWS is also feature-rich and highly configurable, with an intuitive user console—all of which facilitates our growth as a startup in a sustainable way."

Curating Content to Match User Preferences
Glossika currently serves customers in 148 countries who are learning over 60 different languages. Its courses guide users through a massive database of sentences in their target language(s). Sentences gradually increase in difficulty and are accompanied by recordings from native speakers. Users can make adjustments to each course in accordance with their preferences. For instance, someone interested in learning Japanese to read manga comics could choose to not practice sentences about working in an office.

Teaching Through 6,000 Core Sentence Structures
Glossika's application works by organizing sentences according to the type of grammar and syntax they contain and how well a user is retaining them. At present, Glossika has uploaded about 6,000 unique sentences per language for learners to practice, and the company expects that figure to double in the next 2–3 years. From Glossika's experience, approximately 50,000 total sentence repetitions are necessary to "graduate" from the program, and it takes a typical learner approximately 300 hours to achieve this.

Michael Campbell, chief executive officer and founder of Glossika, says, "We focus on efficiency and aim to streamline the learning process as much as possible. We use adaptive learning algorithms to determine when previously learned content should be reviewed. This ensures that more study time is spent practicing things users struggle with and less on repeating things they've already mastered." Campbell elaborates, "Our algorithm considers several factors such as how recently a student learned certain information and how well they're retaining it. If they just learned a structure yesterday or are struggling to replicate the sentence independently, students will see that structure more frequently."
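Glossika has not published its scheduling algorithm; the short Python sketch below only illustrates the general idea Campbell describes, where review priority rises for structures that were learned recently but are poorly retained. The weights, fields, and scoring formula are hypothetical.

```python
# Hypothetical sketch of review scheduling based on recency and retention.
# Not Glossika's actual algorithm; weights and formula are illustrative only.
import time
from dataclasses import dataclass

@dataclass
class SentenceProgress:
    sentence_id: str
    last_reviewed: float   # Unix timestamp of the most recent review
    retention: float       # 0.0 (always wrong) to 1.0 (always right)

def review_priority(p: SentenceProgress) -> float:
    """Higher score means the sentence should be shown again sooner."""
    days_since_review = (time.time() - p.last_reviewed) / 86_400
    urgency_from_retention = 1.0 - p.retention
    # Well-retained items resurface only after progressively longer intervals.
    urgency_from_time = min(days_since_review / (1.0 + 9.0 * p.retention), 1.0)
    return 0.6 * urgency_from_retention + 0.4 * urgency_from_time

def next_review_batch(items: list[SentenceProgress], size: int = 20) -> list[str]:
    ranked = sorted(items, key=review_priority, reverse=True)
    return [p.sentence_id for p in ranked[:size]]
```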
Ensuring Uptime and Low Latency for Global Customers
Glossika uses Amazon Relational Database Service (Amazon RDS) with autoscaling enabled to store more than 350,000 sentences and their translations, Amazon Simple Storage Service (Amazon S3) to store over 25 million user-uploaded audio recordings of sentences cost-effectively, and Amazon ElastiCache for Redis as a low-latency caching service. Its adaptive learning algorithms run on Amazon Elastic Compute Cloud (Amazon EC2) instances.

To serve its global customers, Glossika relies on Amazon CloudFront as a content delivery network. The company is headquartered in Taiwan, but most of its customers live in the United States or Europe. To manage incoming traffic from around the world, its engineers built a high-availability architecture using Elastic Load Balancing. Campbell says, "The stability of AWS services is excellent. With active paying users in 148 countries, this is especially important for us. With AWS, we're confident that our users have a reliable experience on our app no matter where they're located."

Preserving Less-Spoken Dialects with the Viva Project
In addition to scaling its business to add more users in more countries, Glossika recently launched a beta version of its Viva project to expand Glossika's content offerings and preserve endangered and/or "minorized" languages such as Gaelic and Hakka. While Glossika works with professional translators and voice actors to produce content, Viva crowdsources this information from users around the world who record and document their native languages in Glossika's database. Participants who upload recordings of their language not only help preserve lesser-known languages and dialects but also earn a share of the subscription revenue generated from learners studying that language.

Adding ML Analysis to Improve Rhythm and Intonation
With an eye to expansion, Glossika is constantly innovating and developing new features that improve the learning process. One potential future feature would use machine-learning models in Amazon SageMaker to analyze audio files hosted on Glossika. This analysis would generate two colored lines above the text of a given sentence: one color showing the intonation of the native speaker's recording and another showing the intonation of a user's uploaded recording. This information would let users see where their rhythm and intonation diverge from the native speaker's. Learners can use the analysis to independently assess their speaking ability in any target language, helping them to improve their pronunciation and allowing them to more objectively assess how natural they sound.

Benefits
Glossika's users continue to offer praise for the application. One learner comments, "Nothing else combines comprehensible natural audio input and active language production with nearly the same amount of practice as Glossika; every session is challenging, but always comprehensible and rewarding." By building on AWS, Glossika:
• Serves customers in 148 countries with low-latency content delivery
• Stores textual and audio data of more than 350,000 sentences and translations
• Stores over 25 million user-uploaded audio recordings
• Scales infrastructure to accommodate a growing user base
• Saves human and financial resources with automation on the cloud
• Manages global incoming traffic to maintain high uptime

Glossika has big plans for its global business and will continue to rely on AWS as its cloud provider in the next phase of its journey. Chen concludes, "AWS is reliable and easy to use. Because AWS has efficiently taken care of server- and security-related issues, our engineers have been able to focus completely on product development since day one. We look forward to growing our business further on AWS."
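Glossika's implementation is not public; as a rough illustration of the pattern noted in the solution section above (Amazon ElastiCache for Redis acting as a low-latency cache in front of Amazon RDS), the sketch below assumes a MySQL-compatible RDS database, hypothetical endpoints, and a simplified sentences table.

```python
# Hypothetical cache-aside read of a sentence record: try ElastiCache for Redis
# first, fall back to the relational database, then populate the cache.
import json
import redis           # pip install redis
import pymysql         # pip install pymysql  (assumes a MySQL-compatible Amazon RDS engine)

cache = redis.Redis(host="example-cache.abc123.use1.cache.amazonaws.com",
                    port=6379, ssl=True)
db = pymysql.connect(host="example-db.abc123.us-east-1.rds.amazonaws.com",
                     user="app", password="***", database="lessons_demo")

def get_sentence(sentence_id: int):
    key = f"sentence:{sentence_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)
    with db.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SELECT id, lang, text, audio_s3_key FROM sentences WHERE id = %s",
                    (sentence_id,))
        row = cur.fetchone()
    if row:
        cache.setex(key, 3600, json.dumps(row))   # cache for one hour
    return row
```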
GoDaddy Case Study _ AWS.txt
GoDaddy Centralizes Security Findings and Gains Insights Using AWS Security Hub

As it began to migrate its on-premises resources to the cloud using Amazon Web Services (AWS), GoDaddy saw an opportunity to reimagine its security processes. It incorporated AWS Security Hub, a cloud security posture management service that performs security best practice checks, aggregates alerts, and facilitates automated remediation. Using Security Hub, GoDaddy manages security from a serverless, customizable, centralized location that has increased visibility and coverage while saving GoDaddy significant overhead and maintenance costs.

About GoDaddy
Founded in 1997, GoDaddy serves more than 21 million customers as a global leader in domain registration and web hosting. Headquartered in Tempe, Arizona, GoDaddy provides the tools that everyday entrepreneurs need to succeed online and in person.

GoDaddy sought to embed best practices in its development and operational processes as it migrated to the cloud. The company was looking for a way to streamline the time-consuming processes of parsing and normalizing data from multiple security tools into a common format for search, analytics, and response and remediation. Initially, the company did all of its processing on premises, running a number of security tools that each provided findings that users had to access individually instead of from a central dashboard. In March of 2018, GoDaddy began to migrate a large part of its infrastructure to AWS and searched for scalable open-source or commercial tools that it could use to scan its accounts for security-related issues and centralize its findings. Unable to find a solution that met all of its criteria at that time, the company developed its own framework, called CirrusScan, which is designed to run in conjunction with the AWS services GoDaddy was already using. However, CirrusScan did not include a convenient way to display findings from a central dashboard.

Aggregating Security Findings Using AWS Security Hub
When Security Hub became available in late 2018, GoDaddy incorporated it as a single source of truth for security findings on AWS. GoDaddy uses multiple in-house and third-party automated on-demand tools that scan its workloads for security misconfigurations and report the findings to Security Hub. Each team has its own set of AWS accounts and uses Security Hub to view security findings on those accounts. GoDaddy uses its own central ticketing tool and Security Hub to create problem tickets for the corresponding application teams, who receive alerts about the findings on their accounts. "We are running a large set of security tools, and using AWS Security Hub gives us a way to import results of these tools into a central place," says Aarushi Goel, GoDaddy's Application Security manager. "Our users no longer have to go to 10 different places to get findings. They just go to their account's Security Hub and have findings from all the tools listed for them." In addition, GoDaddy has automated the process of closing tickets upon remediation using AWS Lambda, a serverless, event-driven compute service that lets users run code for virtually any type of application or backend service without provisioning or managing servers.
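GoDaddy has not published the code behind its ticket automation; the sketch below only illustrates the general pattern of a Lambda function that reads resolved findings from AWS Security Hub and closes the matching tickets. The ticketing call is a placeholder, and the Security Hub filter shown is a simplified example.

```python
# Illustrative AWS Lambda handler: list Security Hub findings whose workflow
# status indicates remediation, then close the matching tickets in an external
# ticketing system. close_ticket() is a placeholder for GoDaddy's own tool.
import boto3

securityhub = boto3.client("securityhub")

def close_ticket(finding_id: str) -> None:
    # Placeholder: call the central ticketing tool's API here.
    print(f"closing ticket for finding {finding_id}")

def handler(event, context):
    paginator = securityhub.get_paginator("get_findings")
    pages = paginator.paginate(
        Filters={
            "WorkflowStatus": [{"Value": "RESOLVED", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        }
    )
    closed = 0
    for page in pages:
        for finding in page["Findings"]:
            close_ticket(finding["Id"])
            closed += 1
    return {"closed": closed}
```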
Customizing Security Tools
GoDaddy built CirrusScan as a containerized solution using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it simple for companies to deploy, manage, and scale containerized applications. To look for security vulnerabilities in the targeted accounts, CirrusScan uses third-party, open-source, and its own customized scanners. The scans run as independent Amazon ECS tasks using AWS Fargate, a serverless, pay-as-you-go compute engine that lets companies focus on building applications without managing servers. "AWS Security Hub made it straightforward for us to bring in our in-house-developed, customized tools," says Goel.

Diagram 1: CirrusScan overview. Diagram 2: CirrusScan detailed architecture.
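CirrusScan's internals are not shown in the case study; the following boto3 sketch merely illustrates how a containerized scanner can be launched as an independent Amazon ECS task on AWS Fargate, as described above. The cluster name, task definition, network settings, and environment variables are hypothetical.

```python
# Illustrative only: launch one containerized scanner as a Fargate task.
# Cluster, task definition, network settings, and variables are placeholders.
import boto3

ecs = boto3.client("ecs")

def start_scan(target_account_id: str) -> str:
    response = ecs.run_task(
        cluster="cirrusscan-demo",
        launchType="FARGATE",
        taskDefinition="security-scanner:1",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {
                    "name": "scanner",
                    "environment": [
                        {"name": "TARGET_ACCOUNT_ID", "value": target_account_id}
                    ],
                }
            ]
        },
    )
    return response["tasks"][0]["taskArn"]

if __name__ == "__main__":
    print(start_scan("111122223333"))
```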
The security tooling in development pipelines notifies GoDaddy developers about security risks early in the application lifecycle, avoiding the deployment of insecure code in production. "As a result, our exposure is reduced, and we can do a lot more with a lot fewer people than we could before," says Scott Bailey, senior software engineer, application security at GoDaddy. In addition, GoDaddy discovers potential problems earlier in the development process, before they can impact production. This reduced latency also helps GoDaddy address issues proactively and at convenient times rather than respond to an emergency.

Using AWS, GoDaddy has been able to automate and streamline its security processes—running scans, reporting findings in Security Hub, and making findings available to users in its central ticketing system. Scans run every few hours with much better coverage than under the previous system, when scanning might have occurred only monthly. Automation saves time for GoDaddy's developers as well as for customers, and the company saves money because it doesn't pay for unused resources between scans. Application builders use Security Hub for a high-level view of their accounts and to remediate critical findings. "Using AWS serverless solutions, we don't have to manage the infrastructure—including databases—to store security findings for all the accounts, so it's very efficient for us," says Goel. "AWS Security Hub is there, it's reliable, and it just works," adds Bailey. "We can plug stuff into it from anywhere in any particular individual AWS account and then pull data out into the central account when we need to use it somewhere else. And we don't have to worry about maintaining it or backing it up."

When it hit a roadblock in development or needed general guidance, GoDaddy benefited from the online documentation available for Security Hub as well as quick, personalized assistance from AWS Support. The AWS Support team has facilitated GoDaddy's understanding of best practices for using AWS, always considering the company's particular requirements so that the team can better support GoDaddy's objectives. "We don't have to go through a series of escalations before we speak to an engineer," Goel says. "AWS customer support has been above and beyond."

Expanding Security Management Using AWS Services
GoDaddy's use of Security Hub has been so successful that it has begun to extend its use alongside CirrusScan to scan legacy workloads. The process helps reduce coverage, latency, and consistency gaps between GoDaddy's on-premises processes and those that use AWS. The company also plans to incorporate Amazon Inspector, an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. AWS rearchitected Amazon Inspector in November of 2021 so that it automates vulnerability management and delivers near real-time findings, which reduces the delay between the introduction of a potential vulnerability and its remediation.

"Our security program on AWS is far more mature and streamlined than our legacy on-premises infrastructure," Goel says. "Using AWS Security Hub in conjunction with our in-house tools, we have come a long way in managing security risks since we migrated to AWS."

Benefits of AWS
• Centralized and streamlined security findings
• Alleviated maintenance and overhead by automating processes
• Reduced mean time to remediate with continual vulnerability scanning
• Saved cost by not paying for downtime between scans
• Created customized dashboards for users

AWS services used: AWS Security Hub, AWS Lambda, Amazon ECS, and AWS Fargate, with Amazon Inspector planned.
Greenway Health Scales to Hundreds of Terabytes of Data Using Amazon DocumentDB (with MongoDB compatibility) _ Greenway Health Case Study _ AWS.txt
Greenway Health Scales to Hundreds of Terabytes of Data Using Amazon DocumentDB

Greenway Health LLC (Greenway), one of the first companies to offer electronic health record (EHR) solutions to medical providers, was seeking to unify its data processing and storage in the cloud. Greenway's products had streamlined reporting for its customers, but using them was a manual and time-consuming process. The company needed a cloud-based solution that streamlined data reporting, sharing, and analyses and reduced its reliance on on-premises data centers at its medical institutions. Greenway was also committed to creating a secure and compliant solution that would meet stringent health-data regulations. It turned to Amazon Web Services (AWS) to unify its data offering. "Our goal was to capture, transform, and use the data from operational settings in a cloud-based environment powered by AWS to provide a launching point of new services for our clients," says Michael Macaluso, vice president of product management at Greenway.

About Greenway Health LLC
Greenway Health LLC provides electronic health record (EHR) solutions to over 50,000 healthcare organizations. The company, one of the oldest in its field, offers both software and services to support medical practices.

Opportunity | Developing a Highly Scalable and Secure Solution Using Amazon DocumentDB
Greenway's EHR software and services are currently used by over 50,000 healthcare organizations. Its two EHR solutions, Intergy and Prime Suite, have separate reporting mechanisms that required significant staff resources. Greenway wanted to build a powerful enterprise data hub that would serve as the foundation for all its solutions and reduce the complexity of development. Scalability was a crucial business necessity, and it quickly became clear that a cloud solution would deliver the best value to clients. "We have a large number of practices across the United States that rely on our services, so we needed something that would scale up seamlessly," says Macaluso.

When Greenway started its cloud journey, it had two existing EHR solutions with separate data processing and analytics workflows. Greenway wanted to build common ground between those two disparate datasets and turned to Amazon DocumentDB (with MongoDB compatibility), a fully managed native JSON document database that makes it easy and cost effective to operate critical document workloads at virtually any scale, to centralize and normalize Greenway's EHR data. By streamlining its technical infrastructure, Greenway built a solution that scales seamlessly to process hundreds of terabytes of data and makes it simpler for healthcare providers to focus on serving patients. "We didn't want to deal with scaling and managing a MongoDB engine ourselves, so we used Amazon DocumentDB," says Philip Nick, senior director of production engineering at Greenway.

In 2021, Greenway started using Amazon DocumentDB to build both a rapid enterprise data hub solution and a change data capture engine. Because Greenway provides EHR systems, privacy is of the utmost importance. Greenway is committed to delivering highly secure services, and its clients demand full HIPAA and 21st Century Cures Act compliance from all Greenway solutions. The need to protect patient data meant that every phase of the cloud migration had to be secure. However, Greenway was driven by more than a desire to meet regulations—it wanted to apply industry best practices to bring insights to its clients. The company opted to use AWS services to meet its complex project requirements. "AWS had the strongest offering for a number of services we were seeking and was the most willing to collaborate with us on our projects," says Nick.

Solution | Using Amazon DocumentDB to Deliver Unified EHR Systems and Easily Use Other AWS Services
Greenway collaborated extensively with AWS Professional Services, a global team of AWS-certified experts that supplements customers' teams with specialized skills and experience to achieve results, to define the architecture of its new solution and identify the optimal tools for each step of its complex project. "We saw AWS absolutely step up to collaborate with us on this project and picked AWS because of this collaboration," says Macaluso. By planning out each element of the effort in tandem with dedicated AWS team members, Greenway accelerated the development process by 6–12 months. The project was complex, and Greenway experimented with several iterations until it found a set of solutions that met its performance requirements.

After laying out the solution architecture, Greenway used several AWS solutions to implement its project. The company built a change data capture engine on Amazon Simple Storage Service (Amazon S3), an object storage service, and migrated 20 years of historic patient data to the solution. It then transformed the data into regulatory reporting engines. "For us, it was very helpful to choose solutions off the shelf," says Nick.

The company succeeded by loading the raw data from Amazon S3 into Amazon DocumentDB, which acts as a mirror database of all its clients' systems. When clients update their EHRs, the change is reflected in Amazon DocumentDB, and the data is dumped back into the unified model using Amazon S3 data lakes. "At full scale, with all the regulatory reporting functionality, we will be pulling forward nearly 100 TB of data using Amazon DocumentDB," says Nick. "Greenway is also benefiting from 99.999999999 percent, or 'eleven nines,' of durability and lower storage costs."
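Greenway's implementation details aren't published beyond the description above. As a rough sketch of the mirror-write pattern (a change event from a client's EHR system being upserted into Amazon DocumentDB), the following assumes a pymongo client, a hypothetical connection string, and a simplified document shape; Amazon DocumentDB connections typically require TLS and retryable writes disabled.

```python
# Illustrative sketch: upsert an EHR change event into an Amazon DocumentDB
# collection that mirrors client systems. Connection string, database, and
# document fields are hypothetical.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://app_user:***@example-docdb.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred",
    retryWrites=False,   # required for Amazon DocumentDB
)
mirror = client["ehr_mirror"]["patient_records"]

def apply_change(event: dict) -> None:
    """Apply one change-data-capture event to the mirror collection."""
    key = {"client_id": event["client_id"], "record_id": event["record_id"]}
    mirror.replace_one(key, {**key, **event["record"]}, upsert=True)

apply_change({
    "client_id": "practice-042",
    "record_id": "rec-12345",
    "record": {"updated_at": "2023-01-15T10:30:00Z", "status": "active"},
})
```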
Outcome | Powering a New Generation of Innovative Services on AWS
Greenway is excited to deliver its new unified data solution to clients. A seamless EHR experience makes it easier for healthcare organizations to center their resources on the provision of high-quality care to patients. The scale and durability that the company achieved using Amazon DocumentDB will have a real impact on customers. "Having this solution available makes things easier for our clients," says Nick. "By using a cloud-based data solution, Greenway makes it easier for clients to adopt the company's EHR software without requiring them to invest in their own data centers."

Greenway's new unified data solution has made it simple for the company to focus on developing new offerings for its clients. Moving forward, Greenway will use AWS infrastructure to provide a central place for providers and vendors to share and interoperate with healthcare data. "With our new data solution on Amazon DocumentDB, we can now provide solutions and services to our clients at a speed that is unusually fast for the healthcare industry," says Macaluso.

Benefits: unified EHR systems; scaled to hundreds of terabytes of data; migrated 20 years of patient data; accelerated system development by 6–12 months; developed a highly secure solution.

AWS services used: Amazon DocumentDB (with MongoDB compatibility), Amazon S3, and AWS Professional Services.
GSR Scales Fast on AWS to Become One of the Largest Crypto Market Makers _ Amazon S3.txt
GSR Scales Fast on AWS to Become One of the Largest Crypto Market Makers

Global crypto market maker GSR provides cryptocurrency exchanges, token issuers, financial institutions, and investors with critical liquidity services that buy and sell digital assets at scale. When the COVID-19 pandemic disrupted economies around the world and led many governments to respond with large financial stimulus programs, GSR saw a fast-rising demand for cryptocurrency trading. It turned to Amazon Web Services (AWS) to increase the speed and scalability of its systems.

A Need to Meet the Rapid Increase in Trading Demands
Cryptocurrency values can rise or fall fast, so trading depends on liquidity—the ability to buy or sell quickly before prices change. GSR, founded in 2013, has a global footprint and provides cryptocurrency token issuers, exchanges, financial institutions, and investors with that liquidity. GSR has roots in traditional finance, with many of its executive team members coming from the likes of Goldman Sachs, Citadel, and Two Sigma. In addition to providing liquidity services, it also manages trade derivatives, supports over-the-counter trading, and creates custom-made trading algorithms.

During the COVID-19 pandemic, GSR saw a rapid rise in demand for cryptocurrency trading, as many governments created large financial stimulus programs and large investors put more money into digital assets. The company needed to update its IT systems quickly to meet that demand. It had already been using AWS, and it looked for opportunities to expand its use. Building on GSR's existing foundation on AWS, the provider was a natural choice. "AWS is very reliable," says Matteo Cerutti, head of trading platform at GSR. "I think it's very hard to beat."

GSR launched several projects to improve its infrastructure with the help of dedicated AWS account managers and technical teams. The goal was to ensure scalability and fast network connections. "There was a lot of trading happening, and a lot of new liquidity coming into these markets," says Cerutti. "There was just this massive influx of interest into the sector. I think that started this next wave of the market."

Single-Digit-Millisecond Latency with New AWS Availability Zones
The addition of three new AWS Availability Zones allows GSR to provide faster regional connections to exchanges. "For trading at very high speed, that kind of connectivity is very useful," says Cerutti. For services in the New York area, GSR uses an AWS Local Zone, which places AWS compute, storage, and database services close to large population centers. This means that GSR can run applications with single-digit-millisecond latency. GSR also optimizes costs by working with AWS solutions architects to use AWS Reserved Instances, which provide discounts of up to 75 percent compared to buying capacity on demand.

Support for More than 1.1 Million Daily Trades—13 Every Second
Automated trading models help GSR to rapidly manage transactions, enabling it to handle more than 1.1 million trades a day and, at times, over 100 million daily orders. GSR supports these models by using Amazon Simple Storage Service (Amazon S3), an object storage service that lets it retrieve any amount of data from anywhere, and AWS Batch to run batch computing jobs at any scale. It manages data using Amazon Aurora, which is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility.

Using these services, GSR's research team can access the data it needs to analyze trading results. The trading team then develops automated strategies to monetize trading signals. "The main trading that we do is done programmatically on exchanges," says Cerutti. "So, if you go to a crypto trading platform today, and you see the order book going 100 miles an hour, and then you put in your bid for Bitcoin or Ethereum or another crypto asset, we're there to sell it to you and we're also there to buy it from you. We're both sides of that order book. And we're doing that at scale."
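GSR does not describe its job definitions, but the general pattern of submitting a batch computing job with AWS Batch, as mentioned above, looks roughly like the boto3 sketch below; the job queue, job definition, command, and bucket name are placeholders.

```python
# Illustrative only: submit a market-data processing job to AWS Batch.
# Job queue, job definition, and command are placeholders.
import boto3

batch = boto3.client("batch")

def submit_backtest(trading_pair: str, trade_date: str) -> str:
    response = batch.submit_job(
        jobName=f"backtest-{trading_pair}-{trade_date}",
        jobQueue="market-data-queue",
        jobDefinition="backtest-job:3",
        containerOverrides={
            "command": ["python", "backtest.py", "--pair", trading_pair,
                        "--date", trade_date],
            "environment": [{"name": "RESULTS_BUCKET", "value": "example-results-bucket"}],
        },
    )
    return response["jobId"]

if __name__ == "__main__":
    print(submit_backtest("BTC-ETH", "2021-05-01"))
```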
Improving System Speed, Support, and Scalability on AWS
With its expanded use of AWS and its ability to automate trading models for fast transactions, GSR can support more daily trades and types of transactions. It now trades more than 1,400 trading instruments—for example, Bitcoin to Ethereum or Ethereum to USD—which let customers conduct transactions in many different currency combinations. At the market's height in 2021, it handled more than 1.1 million daily trades—about 13 trades every second. It has also gained the ability to manage daily trading volumes that, at times, have reached values of over $5 billion.

Using AWS, GSR has grown rapidly as it expands its market capabilities. In May 2021, it had around 60 employees; today, it has 300. The elasticity it has gained by using AWS has helped it to scale up services fast to meet rising customer demand, paving the way for the company to expand in size. "It's definitely been challenging when you scale that quickly," says Cerutti. "But using AWS, we've integrated elasticity into our day-to-day, and that makes it a lot easier."

Although the crypto market has since seen a downturn, Cerutti believes GSR has a promising future, thanks to its scalable, responsive IT foundation built using AWS. "We don't want to scale back any infrastructure over the next few months, even if the market is quiet," he says. "We expect that it's only going in one direction in the long term, now that we have that foundation built."

About GSR
GSR has 9 years of deep crypto market expertise as an ecosystem partner and active, multi-stage investor. It sources and provides spot and non-linear liquidity in digital assets for token issuers, institutional investors, and leading cryptocurrency exchanges. Its trading technology is connected to 60 trading venues, and GSR employs 300 people around the globe.

Benefits: supports more than 1.1 million daily trades (about 13 every second); rapidly generates automated trading models using large volumes of market data; enables rapid business growth; reduces costs using reserved capacity through AWS Reserved Instances; provides liquidity and services to more than 60 global cryptocurrency exchanges.

AWS services used: Amazon EC2, Amazon S3, Amazon Aurora, and AWS Batch.
Helen of Troy Case Study _ Consumer Packaged Goods _ AWS.txt
Getting the Most Out of Temperature Data with the Braun Family Care™ App Built Using AWS IoT Core with Helen of Troy

Global consumer products company Helen of Troy has years of experience producing quality physical products across many well-recognized and widely trusted brands, including Braun1, Vicks1, and Honeywell2. To stay on the cutting edge and best meet customer needs, the company saw an opportunity to add a digital experience to its physical products within its Beauty and Wellness division.

Motivated by a vision to help customers track and use data collected from smart devices, Helen of Troy looked to Amazon Web Services (AWS) for assistance designing and implementing an Internet of Things (IoT) solution, the connected devices framework (CDF). Helen of Troy and its customers both benefit from this innovative solution: customers are more engaged with products and can use advanced features, and Helen of Troy receives feedback in near real time for troubleshooting and product-improvement initiatives.

About Helen of Troy
Helen of Troy began in 1968 as a family business for beauty products and has grown to offer durables in various industries through its Beauty and Wellness and Home and Outdoor divisions. Its Beauty and Wellness division provides beauty, healthcare, and home comfort products.

Opportunity | Using AWS Professional Services to Build a Digital Solution That Improves the Customer Experience for the Life of the Product
In the fall of 2020, the company's Beauty and Wellness team set out to develop the CDF and the first connected experience: the Braun Family Care app for the Braun ThermoScan®3 7 Connect thermometer. Helen of Troy's goals were to help families get value out of temperature data with features like age-appropriate recommendations for fever care and to use a cloud infrastructure to continue updating the software and enhancing features for the life of the product. The framework needed to be scalable so that Helen of Troy could quickly and simply launch more connected experiences across its brands. Helen of Troy compared other cloud providers and chose AWS because of its expertise and flexibility to meet immediate needs, with room to expand for future initiatives. With experience using AWS services for IoT projects since 2018, Helen of Troy trusted AWS to deliver a quality solution that maintains high security for sensitive health data.

From the beginning, Helen of Troy engaged AWS Professional Services, which organizations use to achieve desired business outcomes when using AWS, to help design the digital solution and provide guidance around regulatory compliance. "The AWS Professional Services team had expertise, which gave us the confidence to complete the project the right way the first time," says Edwin De Leon, director of engineering at Helen of Troy. "There was a sense of collaboration from the start." In early 2022, Helen of Troy launched the Braun Family Care app in the United States.

Solution | Analyzing Data to Improve Products and Increase Innovation Using AWS IoT Core
To continually innovate and improve the customer experience, Helen of Troy collects data from Bluetooth-capable customer devices using AWS IoT Core, which organizations use to easily and securely connect billions of IoT devices to the cloud and route trillions of messages to AWS services without managing infrastructure. With a cloud infrastructure, Helen of Troy collects and delivers insights to customers in near real time, helping them understand the data to make informed decisions. For example, the Braun Family Care app serves as a centralized place for all household members to track temperature data, regardless of who took a reading. To access this data, Helen of Troy uses several AWS services, such as Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. "Using AWS IoT Core and the CDF, we didn't need to build, stitch together, and manage as much ourselves," says Uwe Meding, senior IoT architect at Helen of Troy. "Reducing the development time and complexity required to build and maintain IoT-scale systems was really important for us."
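The case study does not show how readings flow from the app into AWS IoT Core; the snippet below is only a generic illustration of publishing a device reading to an AWS IoT Core MQTT topic with boto3. The topic name and payload fields are hypothetical, and production device traffic would normally authenticate with per-device certificates and publish over MQTT rather than the HTTPS data-plane API shown here.

```python
# Illustrative only: publish one thermometer reading to an AWS IoT Core topic.
# Topic and payload shape are hypothetical; real devices typically use X.509
# certificates and an MQTT client.
import json
import boto3

iot_data = boto3.client("iot-data")

def publish_reading(device_id: str, temperature_c: float, taken_at: str) -> None:
    iot_data.publish(
        topic=f"demo/thermometers/{device_id}/readings",
        qos=1,
        payload=json.dumps({
            "deviceId": device_id,
            "temperatureC": temperature_c,
            "takenAt": taken_at,
        }),
    )

publish_reading("thermo-0001", 37.2, "2022-03-01T08:15:00Z")
```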
Helen of Troy uses data analysis to determine which features of smart devices are most useful to customers, helping the company invest resources effectively. The CDF that the company built on AWS facilitates innovation by providing data to guide advanced feature development. For example, because the company collects temperature data across the United States, Helen of Troy is looking to notify parents if illnesses are increasing in a geographic area, which could affect behavior and reduce disease transmission. "Using data in the app, we can answer questions that we have but can't design consumer product testing around," says Rich Thrush, vice president of design and innovation at Helen of Troy. "Insights from our solution built using AWS IoT Core drive the new products that we create, how we think about innovation, and the decisions that we make in the near term."

Connected devices also bring value by providing feedback to help improve a product. Customers expect products like thermometers to be accurate, and the perception of accuracy often determines whether a customer has a positive experience. By analyzing data collected using AWS IoT Core, Helen of Troy plans to identify readings that a customer might perceive as inaccurate, such as an outlier reading when a customer takes several readings in the same day. Helen of Troy will also be able to see when a customer enters a temperature manually in the app and can compare the value to readings received through the CDF. "Using data collected from our solution built on AWS, we can see if the customer experience is degrading and proactively fix issues before a user complains," says Jim Gorsich, associate director of engineering at Helen of Troy. Helen of Troy can also release software updates remotely to continue improving the experience after a customer takes a product home.

Outcome | Expanding the Framework Using AWS IoT Core to More Products and Locales
The IoT infrastructure also facilitates the agile rollout of new products and the scaling up of required services, which is important because Helen of Troy can use its worldwide presence to provide intelligent healthcare, wellness, and home comfort products. With support from AWS Professional Services, Helen of Troy is working toward releasing the Braun Family Care app in the European Union, which requires an application under the Medical Device Regulation and compliance with the General Data Protection Regulation. "We've benefited greatly from the expertise of AWS Professional Services while pursuing compliance with the General Data Protection Regulation," says Gorsich. "That team has been invaluable and is leveling up our knowledge in a very tricky field."

As part of its international expansion plan, Helen of Troy intends to launch the Braun Family Care app and Braun ThermoScan 7 Connect thermometer in the European Union next. Helen of Troy plans to expand the CDF framework to more products, using the foundation that it built alongside AWS Professional Services to make the apps simpler and faster to implement. "As we create more connected products, release updates to continue improving software, and continue to grow the user base, we expect to see more cost savings," says Gorsich. "We know that going fully cloud based from the start will pay dividends in the end."

Benefits: provides value to customers with a connected device app that makes it simpler to track and understand thermometer data; analyzes real customer data to improve products and to drive innovation and guide feature development; built a scalable framework that can expand to additional locales and products.

AWS services used: AWS IoT Core, Amazon S3, and AWS Professional Services. Diagram: Helen of Troy reference architecture.

1 Certain trademarks used under license from The Procter & Gamble Company or its affiliates. 2 Honeywell is a trademark of Honeywell International Inc., used under license by Helen of Troy Limited. 3 ThermoScan is a registered trademark of Helen of Troy Limited and/or its affiliates.
Help Customers Reduce Data Query Time by 70 and Improve Business Insights Capabilities with Amazon OpenSearch Service _ Deputy Case Study _ AWS.txt
Deputy Helps Customers Reduce Data Query Time by 70% and Improve Business Insights Capabilities with Amazon OpenSearch Service

Deputy uses AWS to drive 70 percent faster data request times for customers, scale to support hundreds of millions of data points, save time by eliminating management and maintenance, and lower costs. Deputy, based in Australia, provides software that automates scheduling and facilitates workforce management for global customers.

About Deputy
Deputy is on a mission to Simplify Shift Work™ for millions of workers and businesses worldwide. The company streamlines scheduling, timesheets, tasks, and communication for business owners and their workers, providing millions of shift workers with more flexibility and control over their schedules. More than 320,000 workplaces and 1.3 million shift workers in over 100 countries use Deputy software to automate scheduling and facilitate workforce management. Many of these customers, including Fortune 500 companies, use Deputy's Business Insights Dashboard to access analytical data about their organization. Caesar Li, senior product manager at Deputy, says, "Business Insights uses historical information to forecast projected future sales, allowing customers to make smarter, data-driven scheduling decisions." The tool integrates point-of-sale data with wage and shift data, with up to 4,000 monthly active users depending on these combined data sets to streamline their scheduling.

Opportunity | Improving Application Performance with Amazon OpenSearch Service
The solution's MySQL-based database struggled to scale as the business experienced rapid growth and data sets expanded to millions of records. This resulted in customers reporting delays in page load times. Jack Marchant, technical lead at Deputy, says, "Our customers experienced slow page loading times for their data, sometimes waiting 2 minutes for queries to complete." This load-time delay was unacceptable for customers requiring fast data snapshots. "Our customers benefit from viewing all their data in one place so they can reduce labor-intensive, manual scheduling processes. They need to visualize their weekly data in under 30 seconds to update their employee work schedules," Marchant says. Deputy often had to manually intervene by vertically scaling MySQL clusters to support increased data volumes, but it needed an efficient solution to address the problem. "We wanted to identify a new database solution that could scale easily and query data in real time. We also needed to improve page load times and help our customers take in more data," says Li.

Solution | Driving 70% Faster Data Request Times
Deputy launched its business on Amazon Web Services (AWS) and was interested in expanding its AWS environment by implementing Amazon OpenSearch Service, a fully managed service that makes it easy to perform interactive log analytics, real-time application monitoring, and website search functions. "We already use a lot of AWS services and were also planning to build a data pipeline on AWS, so it made sense to integrate everything using AWS," says Marchant. Deputy initially evaluated several database solutions alongside Amazon OpenSearch Service, performing a query use-case analysis to compare performance, and found the service to be the fastest and most flexible. Plus, as a fully managed service, it lets the business focus on its applications instead of scaling. "We documented our access patterns and ensured we had a query to match each pattern across different services. We then ran the queries manually to record the timings," says Marchant.

Once it made its decision, Deputy began using a single index with routing keys and filters to achieve a multi-tenant architecture within Amazon OpenSearch Service. The company also built a data pipeline based on Amazon Kinesis Data Streams, a serverless streaming data service that captures, processes, and stores data streams at any scale, and AWS Lambda, a serverless, event-driven compute service. The application writes data to an Amazon Kinesis Data Stream, triggering an AWS Lambda function to push data into the correct cluster for the customer. This helps Deputy further improve performance via more efficient batch processing and simple scaling depending on traffic volume. Deputy's AWS-based data pipeline also provides the ability to quickly scale up or down based on demand at specific times, supporting hundreds of millions of data points for each customer.
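Deputy has not published its pipeline code; the Lambda handler below is only a sketch of the pattern described above, in which records arriving on a Kinesis data stream are indexed into a shared OpenSearch index with a per-tenant routing key. The endpoint, index name, field names, and authentication are hypothetical, and the sketch assumes the opensearch-py client is packaged with the function.

```python
# Illustrative sketch of the Kinesis -> Lambda -> OpenSearch pattern with a
# single shared index and per-tenant routing. Endpoint, index, and fields
# are placeholders.
import base64
import json
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-demo-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("insights_writer", "***"),
    use_ssl=True,
)

INDEX = "business-insights"

def handler(event, context):
    indexed = 0
    for record in event["Records"]:
        doc = json.loads(base64.b64decode(record["kinesis"]["data"]))
        tenant_id = doc["tenant_id"]
        client.index(
            index=INDEX,
            id=f"{tenant_id}:{doc['metric_id']}",
            body=doc,
            routing=tenant_id,   # keep each tenant's documents together
        )
        indexed += 1
    return {"indexed": indexed}
```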
Outcome | Saving Time and Money by Eliminating Management and Maintenance
By relying on Amazon OpenSearch Service for data querying, Deputy is experiencing 70 percent faster overall request times for data-powered Business Insights. "For some of our larger customers, data queries that took minutes to complete now take just seconds using Amazon OpenSearch Service, so they're not sitting there waiting for the screen to load," says Marchant. With reduced request times, Deputy customers can quickly analyze business performance by checking updated metrics across multiple stores or regions.

Because Amazon OpenSearch Service is fully managed, Deputy has eliminated the time it previously spent managing and maintaining the Business Insights application environment. "Our engineers are no longer resizing MySQL instance clusters just to cope with slow queries or new demands," says Marchant.

Deputy is also able to develop new software features due to the improved data efficiency of Business Insights. "Amazon OpenSearch Service provides more flexibility in terms of data retrieval and eliminates performance bottlenecks. As a result, we've been able to release new features on top of the application," says Marchant. "With better performance, we can view multiple weeks of data at once in a summarized format, aggregated to the week, within a six-month timeframe. This allows our customers to analyze trends and compare week or month totals, something not possible before Amazon OpenSearch Service."

Deputy also anticipates cost savings from implementing Amazon OpenSearch Service. "Amazon OpenSearch Service was around three times less expensive than the other solutions we evaluated, and we've removed the need for an engineer to maintain our infrastructure on a daily basis," says Li. He continues, "Amazon OpenSearch Service has helped us increase the performance of Business Insights. We look forward to leveraging AWS to help maximize what our customers get out of their data." Adds Rajini Carpenter, Deputy's vice president of engineering, "This is just the beginning. We have so many other use cases to solve with Amazon OpenSearch Service, especially unlocking predictive analytics and ML capabilities for scheduling."

Benefits: drives 70 percent faster data request times for customers; scales to support hundreds of millions of data points; saves time and reduces operational costs by eliminating management and maintenance; develops new software features due to improved data efficiency.

AWS services used: Amazon OpenSearch Service (OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch), Amazon Kinesis Data Streams, and AWS Lambda.

To read about Deputy's demand forecasting features, visit https://www.deputy.com/features/demand-forecasting. To learn more about Amazon OpenSearch Service, visit aws.amazon.com/opensearch-service/.
Helping Customers Modernize Their Cloud Infrastructure Using the AWS Well-Architected Framework with Comprinno _ Comprinno Technologies Case Study _ AWS.txt
Helping Customers Modernize Their Cloud Infrastructure Using the AWS Well-Architected Framework with Comprinno

Professional services startup Comprinno Technologies (Comprinno) excels in cloud orchestration and management, but the company wanted to grow its business by gaining sales experience and providing a more standardized process for customers. To achieve this goal, Comprinno looked to Amazon Web Services (AWS) and the AWS Well-Architected Framework, which helps cloud architects learn, measure, and build using architectural best practices. By adopting the AWS Well-Architected Framework during the sales process, Comprinno provides a standardized experience for customers, identifies and resolves blind spots, and builds trust that leads to business growth.

About Comprinno Technologies
Founded in 2013, Comprinno is a cloud consulting and professional services startup based in India that serves over 500 customers. Its software-as-a-service brand, Tevico, provides an artificial intelligence layer on top of AWS solutions to help customers automate processes, detect anomalies, and repair issues automatically.

Opportunity | Using the AWS Well-Architected Framework to Standardize the Customer Experience
Established by a team of technical experts, Comprinno recognized that it needed business expertise to improve and standardize its sales process. Previously, its solutions architects independently determined the direction of presale conversations. This strategy was effective because of the company's experienced staff, but it didn't provide a consistent customer journey. The company also sought standardization to reach a wider audience instead of exclusively building and selling custom solutions. In 2019, Comprinno became an AWS Software Partner, a path for organizations that develop software to run on or alongside AWS services. As part of the process, the company performed an AWS Well-Architected review to evaluate its Tevico solution. After going through the AWS Well-Architected review process, Comprinno knew that it would be a good tool for its customers as well.

In April 2021, Comprinno was accepted into the AWS Well-Architected Partner Program, which helps organizations establish good architectural habits, reduce risks, and build robust applications. Comprinno underwent extensive training and boot camps to be equipped to provide exceptional AWS Well-Architected reviews for its clients. "As a startup, Comprinno benefits from the experienced framework that AWS offers to facilitate building our business better," says Prasad Puranik, CEO of Comprinno. "This framework has been helpful in enhancing our business maturity by teaching us how to build sales, customer relationships, and technical solutions."

Solution | Providing Direction, Building Trust, and Generating More than 50% of Its Revenue with Loyal Repeat Business Using the AWS Well-Architected Framework
Comprinno works primarily with customers in the startup sector. Although its customers span multiple industries, they often have shared needs and a common goal of cloud infrastructure modernization. Comprinno needs standardization to be fast and accurate so that it can quickly reach a mutual understanding with its customers about the next steps. To accomplish that goal, Comprinno developed content explaining principles from the AWS Well-Architected Framework and made it available in Tevico. Multiple users can contribute information about customer needs, which facilitates collaboration and engagement. When customers engage with Tevico, Comprinno can better understand their business needs and build better solutions, translating those needs into actionable technical requirements.

For example, when an ecommerce company approaches Comprinno with the goal of increasing conversions, Comprinno uses Tevico and the structure of the AWS Well-Architected Framework to capture data about the company's existing infrastructure, amount of traffic, and other business requirements. Then, Comprinno offers consultations to identify technical changes that the customer can make to scale in the cloud, refactor code, and adopt additional AWS services if needed. "Using the AWS Well-Architected Framework, our customers get a well-ordered and structured way of understanding their solution implementation," says Puranik. "This structured approach is a big win both for the customer, because they are guided in the right direction, and for us, because we have a path to accomplish the customer's goals." Using this structure to provide value to customers, Comprinno increased its number of launched opportunities in 2022 by two and a half times, with a 71 percent conversion rate of qualified opportunities into launched opportunities.
An estimated 55–60 percent of its revenue comes from existing customers through value-added and managed-services contracts, which feature an annual AWS Well-Architected review. “The AWS Well-Architected Framework acts like an icebreaker and helps our customers see the efficacy of our solutions architects, how thoughtful their suggestions are, and how insightful the conversation is,” says Puranik. “Those components become the cornerstone of building trust and show why a customer would want to work with Comprinno for subsequent engagements.” One of Comprinno’s customers, a large company in the wearable and hearable technology industry in India, has continued the relationship after the success of its initial project. “We did an AWS Well-Architected review with the customer and helped them optimize costs,” says Puranik. “Now, we are engaged with them for application modernization to further reduce costs by redesigning their existing architecture.” Deutsch Known for working in varied and highly regulated industries, like financial technology and healthcare, Comprinno has a lot of expertise to offer its customers. Using Tevico and the AWS Well-Architected Framework, Comprinno can clearly present best practices and identify blind spots that the customer might have. “Because we work with customers across multiple industries and have seen a wide range of setups, we can share lessons and help customers identify their blind spots faster using the AWS Well-Architected Framework,” says Puranik. For example, Comprinno can use the framework to present security best practices alongside compelling case studies about what happens if best practices aren’t followed. Tiếng Việt Solution | Providing Direction, Building Trust, and Generating More than 50% of Its Revenue with Loyal Repeat Business Using the AWS Well-Architected Framework Italiano ไทย Builds trust Outcome | Expanding to Use the AWS Well-Architected Framework for Additional Business Sectors Português
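For readers who want to see what a programmatic counterpart to these reviews can look like, AWS exposes the Well-Architected Tool through an API. The following is a minimal Python (Boto3) sketch that creates a workload and lists the improvement items surfaced by its Well-Architected lens review. It is only an illustration of the service API: the workload name, owner email, and Region are made-up placeholders, and this is not how Comprinno's Tevico product is implemented.

import boto3
import uuid

# Sketch: register a workload in the AWS Well-Architected Tool and pull
# the improvement items identified by its Well-Architected lens review.
# All workload details below are illustrative placeholders.
wa = boto3.client("wellarchitected")

workload = wa.create_workload(
    WorkloadName="ecommerce-platform-review",       # hypothetical name
    Description="Annual Well-Architected review",
    Environment="PRODUCTION",
    ReviewOwner="cloud-team@example.com",            # hypothetical owner
    Lenses=["wellarchitected"],
    AwsRegions=["ap-south-1"],
    ClientRequestToken=str(uuid.uuid4()),
)
workload_id = workload["WorkloadId"]

# After the review questions have been answered, list the improvement
# items so they can be prioritized with the customer.
improvements = wa.list_lens_review_improvements(
    WorkloadId=workload_id,
    LensAlias="wellarchitected",
)
for item in improvements.get("ImprovementSummaries", []):
    print(item["Risk"], item["QuestionTitle"])

The ImprovementSummaries returned here are the kind of findings a reviewer would walk through with a customer during the annual reviews described above.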
Helping Doctors Treat Pediatric Cancer Using AWS Serverless Services _ Nationwide Childrens Hospital Case Study _ AWS.txt
Using AWS serverless solutions, we can focus not on the upkeep of technology but on the output of the science. Grant Lammi Cloud Development Manager, the Steve and Cindy Rasmussen Institute for Genomic Medicine at Nationwide Children’s Hospital to analyze genomics data for pediatric cancer patients Outcome | Improving the Treatment and Diagnosis of Children with Cancer Français Amazon EventBridge is a serverless event bus that lets you build event-driven applications at scale across AWS and existing systems. Learn more » Español Using AWS serverless solutions, the IGM is turning cancer samples from pediatric patients into valuable data. After the samples are run through the sequencing workflows, an expert interprets the results and prepares two reports: one that provides deidentified results to the researchers at the National Cancer Institute and one that helps doctors determine the best course of treatment for their patients. Researchers from the National Cancer Institute can access this information through a dedicated bucket on Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. The hospital uses AWS Identity and Access Management (AWS IAM), a service that securely manages identities and access to AWS services and resources, to protect patient data throughout the pipeline and prevent unauthorized users from accessing sensitive health information. Nationwide Children’s Hospital, an academic pediatric medical center, is one of the largest pediatric hospitals in the United States. It brings advanced clinical genomics capabilities to patients to help select the best care pathways. 日本語 AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. Learn more » NCH automated complex analyses of cancer samples using a wide variety of AWS serverless services, like AWS Step Functions, a visual workflow service, to model its laboratory procedures and automate pipeline-based, step-by-step processes. On AWS, the hospital spends less time managing infrastructure and more time focusing on what matters most: improving treatment for patients with pediatric cancer. AWS Step Functions 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Overview Based in Columbus, Ohio, NCH is one of the largest pediatric hospitals in the United States. The IGM at NCH specializes in genomics data generation and analysis, using blood and cancer samples to help physicians better treat pediatric patients. The IGM handles 6–7 PB of genomics data, which increases by 1–2 PB every year, and migrated from an on-premises environment to the AWS Cloud in 2017. “We couldn’t keep up with our goals by doing everything on premises,” says Grant Lammi, cloud development manager at the IGM at NCH. “We needed a solution where we could have more elastic compute and storage, so we migrated to the cloud.” Faster Scales Get Started analyses of cancer samples AWS Services Used Helping Doctors Treat Pediatric Cancer Using AWS Serverless Services with Nationwide Children’s Hospital 中文 (繁體) Bahasa Indonesia AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
Learn more » Contact Sales Ρусский time through automation عربي 中文 (简体) Opportunity | Using AWS Serverless Services to Analyze Cancer Samples for Nationwide Children’s Hospital The IGM uses AWS Step Functions to automate the analysis of cancer samples and runs multiple jobs concurrently using AWS Batch, which is used to efficiently run hundreds of thousands of batch and machine learning computing jobs. The hospital uses Amazon EventBridge, a serverless event bus, to emit events throughout the workflow and track the progress of each cancer sample as it travels from primary, secondary, and tertiary analyses. “Because the sequencing workflows are activated by Amazon EventBridge events, they’re all automated,” says Lammi. “There’s no manual intervention needed beyond kicking things off in the lab.” This data is then stored in Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale. Learn more » 2022 Saves AWS Batch to deliver critical data to doctors In spring 2021, the Steve and Cindy Rasmussen Institute for Genomic Medicine (IGM) at Nationwide Children’s Hospital (NCH) entered into an agreement with the National Cancer Institute and the Children’s Oncology Group to perform molecular characterization for all children living with cancer in the United States. For the pediatric teaching hospital, the project would be a major undertaking. To perform this advanced genomics testing, it would need to process massive amounts of data in a highly secure and scalable environment. As part of its ongoing journey to the cloud on Amazon Web Services (AWS), NCH looked to adopt serverless solutions to handle these genomics testing pipelines. Türkçe Protects Facilitates 24/7 English AWS IAM Solution | Saving More Time with Automated Genomics Pipelines on AWS Amazon EventBridge In the future, NCH plans on expanding the solution to other programs, such as the diagnosis and treatment of epilepsy and rare genetic diseases. “Using AWS serverless solutions, we can focus not on the upkeep of technology but on the output of the science,” says Lammi. “We can focus on improving the lives of kids everywhere.” The Steve and Cindy Rasmussen Institute for Genomic Medicine at Nationwide Children’s Hospital is analyzing critical genomics data for pediatric cancer patients at scale using AWS serverless solutions. By 2021, the IGM had already begun using AWS Step Functions. When the National Cancer Institute and the Children’s Oncology Group approached the IGM that year, it was in a strong position to handle the compute-intensive molecular-characterization project. “We would need to sequence the genomes of essentially all kids with cancer in the United States to see if they qualified for clinical trials that could treat them,” says Lammi. “On AWS, we were able to scale from our internal research protocol to handle cases from all over the country in about 12 months.” Deutsch AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. Tiếng Việt sensitive patient data Customer Stories / Healthcare Italiano ไทย Using AWS serverless services, the IGM has saved significant time through automation. “We’ve automatically bought ourselves extra time,” says Lammi. 
“By the time we get the data out of the lab and synced up, we’re looking at maybe 1 day to process the genome and for the results to be ready for review.” Because it no longer needs to manage each step in the sequencing workflow manually, the hospital has reduced the risk of human error and can analyze cancer samples 24 hours per day. The IGM can now focus its time on writing scientific software to improve patient outcomes, rather than managing infrastructure. Because the hospital analyzes cancer samples at a faster pace, it can deliver important data to physicians and help pediatric patients get the care that they need. “There are actual kids who need the results that we are generating, and they need them as quickly as possible,” says Lammi. “The faster that we can get the report into a doctor’s hands, the better off the kid will be. That’s what drives everything for us.” Using serverless solutions from AWS, the IGM can move much faster than traditional hospital developers. It can quickly analyze cancer samples from pediatric patients to recommend treatment, supporting stronger patient outcomes. Additionally, the IGM scales without worrying about compute capacity; now, it is confident that it can handle its workflows regardless of how many tests it needs to run. Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. About Nationwide Children’s Hospital Português
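The case study describes Amazon EventBridge events kicking off automated sequencing workflows in AWS Step Functions, with progress tracked in Amazon DynamoDB. As a minimal sketch of that event-driven pattern, the Python Lambda handler below starts a Step Functions execution when a hypothetical "sample ready" event arrives and records the sample's status in a DynamoDB table. The event fields, state machine ARN, and table name are assumptions for illustration, not the IGM's actual pipeline code.

import json
import os
import boto3

# Sketch: an EventBridge rule routes a hypothetical "SampleReadyForAnalysis"
# event to this Lambda handler, which starts a Step Functions execution and
# tracks the sample in DynamoDB. ARNs and names are placeholders.
sfn = boto3.client("stepfunctions")
table = boto3.resource("dynamodb").Table(os.environ.get("SAMPLE_TABLE", "SampleTracking"))

def handler(event, context):
    detail = event.get("detail", {})
    sample_id = detail["sampleId"]

    # Kick off the automated primary/secondary/tertiary analysis workflow.
    execution = sfn.start_execution(
        stateMachineArn=os.environ["GENOMICS_PIPELINE_ARN"],
        name=f"sample-{sample_id}",
        input=json.dumps({"sampleId": sample_id, "s3Uri": detail.get("s3Uri")}),
    )

    # Record where this sample is in the pipeline so progress can be queried.
    table.put_item(
        Item={
            "sampleId": sample_id,
            "status": "ANALYSIS_STARTED",
            "executionArn": execution["executionArn"],
        }
    )
    return {"executionArn": execution["executionArn"]}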
Helping Fintech Startup Snoop Deploy Quickly and Scale Using Amazon ECS with AWS Fargate _ Case Study _ AWS.txt
Helping Fintech Startup Snoop Deploy Quickly and Scale Using Amazon ECS with AWS Fargate Snoop, a cloud-native fintech startup, wanted to harness the United Kingdom’s system of open banking and develop an app to help users control their finances. To achieve this, the company had to scale up rapidly, from zero to millions of daily open banking transactions, with uninterrupted availability. Français Jamie West Senior DevSecOps Engineer, Snoop Español Savings of £1500 Solution | Building an App that Scales from Zero to One Billion Transactions in 2 Years AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.  Learn more » The small team of cofounders looked to Amazon Web Services (AWS) to provide the infrastructure needed to bring their vision to life. Snoop uses Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that facilitates deploying, managing, and scaling containerized applications. Using Amazon ECS with AWS Fargate, a serverless, pay-as-you-go compute engine, Snoop gives users hyperpersonalized insights in seconds. Using AWS, Snoop can deploy containerized apps quickly, scale efficiently, and spend more time focusing on its mission of helping customers cut the cost of living. 日本語 2023 AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources. Learn more » AWS Cloud Map Get Started 한국어 overhead Amazon ECS Overview | Opportunity | Solution | Outcome | AWS Services Used Scaled significantly AWS Fargate Snoop’s goal is to offer a bespoke experience for users to manage all their finances in one place. This means the app needs to be secure, simple to use, and available 24/7. Automatic scaling and availability mean Snoop can keep growing, whether branching out into new territories or adding business-to-business applications. And the team stayed within budget using AWS Customer Enablement, which supports companies in migrating and building faster in the cloud. AWS Services Used Enhanced All of our Amazon ECS instances use AWS Fargate, which takes off a huge piece of overhead. As a fast-scaling startup, that’s exactly what we need.”  Reduced 中文 (繁體) Bahasa Indonesia Opportunity | Using AWS to Take Insights a Step Further for Snoop Founded in 2019 and launched in April 2020, Snoop saw an opportunity in open banking in the United Kingdom. When open banking started in 2018, the country’s largest banks began sharing data in a secure, standardized form. In response, Snoop created its own cloud-based app that uses open banking data to empower users. Customers can access their accounts in one place and receive additional insights into their account activities. Going all in on AWS, Snoop built its architecture to easily scale to a billion banking transactions and grow rapidly while maintaining the security and performance users expect. “We’ve found that, on average, if customers take the actions we propose, they can save up to £1500 per year,” says Walters. Snoop offers users privacy and security as well as performance and availability. “Making sure the solution performs as we grow is key to building trust and building a powerful brand,” Walters added. 
per year potential for customers Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي 中文 (简体) staff productivity AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Learn more » Learn how Snoop in the fintech industry used Amazon ECS with AWS Fargate to build its app and scale from zero to one billion transactions in 2 years. Outcome | Putting Autoscaling to Work for Customers Overview Snoop offers customizable features, like social media–style “Snoop Feed,” emails, event-driven alerts, and more. When customers join Snoop, they give their name, email, and phone number, along with secure access to their account through Open Banking APIs. Snoop gathers over 300 data points from their transactions, and then its artificial intelligence and machine learning engines kick in. Snoop’s recurring payments engine shows customers where their money goes. Its recommendation engine offers timely content to help them make better financial decisions. For example, the app might tell a user they’re autopaying for a subscription they’d forgotten all about, or a user might learn that they have better options for car insurance plans. Zero to one billion “Performance is everything, and when something isn’t right, we fix it, and fix it fast,” says Andy Makings, head of DevSecOps at Snoop. This mindset makes it easier for Snoop to get processes in place from the start. Snoop’s engineers can talk in near real time with AWS Startups—a service that helps companies get started, connect with other founders, and find resources to grow—to get quick assistance. “We’ve had some great support from the AWS Startups team along the way,” says Walters. Customer Stories / Financial Services Starting from zero in 2020 when it launched, Snoop has now had well over one million downloads, with 150,000–200,000 active monthly users. Using Amazon ECS with AWS Fargate to provision, manage, and orchestrate containers serverlessly means Snoop can continue to put customers first. “We have an ambitious and exciting growth and product development road map ahead of us,” says Walters, “and AWS will be at the heart of everything we do.” Türkçe English The company’s innovation and customer service have already earned recognition. In 2021, the Banking Tech Awards declared Snoop the year’s Best Open Banking Solution. More recently, Snoop won a “Rising Star” award from the AWS Software Startup Awards for being an early-stage startup that has demonstrated innovative tech solutions to support customers. Deutsch About Snoop Using AWS solutions, Snoop can handle the massive task of interface and traffic management, making it possible for a few engineers to accomplish a lot. Rather than creating a monolithic application, Snoop’s developers can treat software applications as independent parts, streamlining their tasks. Using AWS Cloud Map, a cloud resource discovery service, Snoop constantly checks the dynamic environment to keep locations up to date. Tiếng Việt Snoop uses Amazon ECS with AWS Fargate to build resilient applications without having to manage its own infrastructure. 
This includes AWS Fargate Spot, which can run interruption-tolerant Amazon ECS tasks at savings of up to 70–90 percent off on-demand pricing. “All of our Amazon ECS instances use AWS Fargate, which takes off a huge piece of overhead. As a fast-scaling startup, that’s exactly what we need,” says Jamie West, senior DevSecOps engineer at Snoop. Snoop builds resilience and scalability into the program using AWS Lambda—a serverless, event-driven compute service used to run code for virtually any type of application or backend service without provisioning or managing infrastructure. Snoop uses AWS Lambda for asynchronous integrations, in which the function code hands off to AWS Lambda, which places the user request in a queue and returns a successful response. A separate process reads events from the queue and sends them to the function. Snoop uses Amazon API Gateway, a service that makes it simple for developers to create, publish, monitor, and secure APIs at virtually any scale, for the “front door” of its applications. Tying it all together is AWS App Mesh, which provides application-level networking so services can communicate across multiple types of compute infrastructure. With an ambition to make everyone better off, Snoop is a fintech firm that helps people cut their bills, pay off debt, grow their savings, and save where they spend, all without changing banks. Italiano ไทย Turning insights into a useful app takes time, expertise, and compute power. Born in the cloud, Snoop was a startup that had to work without the large teams and budgets that established companies enjoy. With lean resources, the cofounders looked to AWS. They knew from prior experience that AWS had solutions for hastening the time to market of scalable apps. And using AWS Activate, Snoop accessed tools, resources, content, and expert support to accelerate the startup. “It was a straightforward decision to use AWS,” says Jem Walters, chief technology officer for Snoop. “We’re really pleased that using its services supported us in building Snoop the way that we wanted.” Contact Sales with optimized costs Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. Learn more » AWS Lambda transactions in 2 years Português
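The article notes that all of Snoop's Amazon ECS tasks run on AWS Fargate, with AWS Fargate Spot handling interruption-tolerant work at a discount. The Boto3 sketch below shows one common way to express that: an ECS service whose capacity provider strategy keeps a baseline task on FARGATE and weights additional tasks toward FARGATE_SPOT. The cluster, service, task definition, and network values are placeholders, and this is not Snoop's actual configuration.

import boto3

# Sketch: create an ECS service that places one baseline task on Fargate and
# weights extra capacity toward Fargate Spot for cost savings.
# Cluster, task definition, subnets, and security group are placeholders.
ecs = boto3.client("ecs")

ecs.create_service(
    cluster="demo-cluster",                    # hypothetical cluster
    serviceName="insights-api",                # hypothetical service
    taskDefinition="insights-api:1",           # hypothetical task definition
    desiredCount=4,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)

The base of 1 keeps at least one task on on-demand Fargate for availability, while the weights let additional tasks land on Fargate Spot, which is the kind of trade-off described in the quote above.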
Helping Patients Access Personalized Healthcare from Anywhere Using Amazon Chime SDK with Salesforce _ Salesforce Case Study _ AWS.txt
Salesforce and AWS have many joint customers who use Salesforce to manage customer relationships and AWS for compute, storage, database, and other managed-service solutions. In June 2021, Salesforce and AWS announced plans to launch a series of new intelligent applications that combine AWS and Salesforce Customer 360. Joint customers can seamlessly deploy AWS voice, video, and artificial intelligence services natively within Salesforce business applications in a scalable way. More Salesforce Stories Français 2023 Español Salesforce wanted to help its life sciences customers improve healthcare access for patients, lower costs for services, and provide a connected, equitable experience. It also wanted to help healthcare teams garner a 360-degree view of patients to provide meaningful insights into health outcomes. Using Amazon Web Services (AWS), Salesforce built Salesforce Health Cloud: Virtual Care on AWS, which simplifies virtual appointments for patients and healthcare providers. The turnkey solution is built on Amazon Chime SDK, which provides embedded intelligent near-real-time communication capabilities. Using Amazon Chime SDK and other managed services from AWS, Salesforce built a scalable, agile telehealth solution that saves time for doctors and patients, provides more-personalized care, and helps remove barriers to healthcare. Explore Salesforce's journey of innovation using AWS Provides more-personalized healthcare 日本語 AWS Services Used Divya Daftari Senior Director of Product, Salesforce Opportunity | Using AWS to Build a Telehealth Solution for Salesforce  Amazon Transcribe 한국어 In October 2022, Salesforce launched its first such application: Virtual Care. Virtual Care is built using AWS and functions within Salesforce Health Cloud, which serves as a centralized platform for clinical and nonclinical patient data. Salesforce wanted to deliver this more efficient care remotely at scale so that physicians could broadly improve health outcomes. The aim of Virtual Care was to remove friction from the healthcare experience by helping patients to overcome difficulties such as transportation, location, mobility, or limited appointment availability. “Our Virtual Care solution is a critical part of our vision to achieve whole-patient value and provide equitable care to patients and members,” says Divya Daftari, senior director of product at Salesforce. Overview | Opportunity | Solution | Outcome | AWS Services Used no items found  Helping Patients Access Personalized Healthcare from Anywhere Using Amazon Chime SDK with Salesforce … Solution | Improving Patient Engagement through Managed Solutions  Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance. Learn more » Salesforce is one of the world’s leading customer relationship management companies. It provides centralized management of the customer experience for the marketing, sales, commerce, service, and IT teams of more than 150,000 companies. Amazon Chime SDK 1 The Virtual Care solution serves as a model to optimize the use of Amazon Chime SDK in other Salesforce Industry Clouds. Salesforce plans to support remote sales and services sessions in a variety of industries, including automotive, manufacturing, retail, and wealth management. “Through AWS, we have trusted, scalable, performant services,” Daftari says. 
“Using the technology has helped us innovate for our joint customers.” 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Outcome | Expanding Intelligent Features of Virtual Care Amazon Transcribe is an automatic speech recognition service that makes it easy to add speech to text capabilities to any application. Learn more » Learn more » It is critical that video visits are secure, responsive, and reliable. Using AWS helps us provide all this in a performant and scalable way. "   Overview With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications. Get Started Beyond traditional use cases, Salesforce is adding capabilities in medication-therapy management, connectivity for care coordinators, and other approaches for patient engagement. The company is developing a new feature that will expand its support of Virtual Care sessions to multiple participants, instead of just clinician and patient. This will facilitate care-team coordination with multiple parties in a single meeting. Using AWS, Salesforce circumvented the heavy lifting that would have been required to build and maintain a video-calling solution from scratch. Patients self-schedule virtual appointments, coordinate previsit activities, and conduct virtual visits in a HIPAA-compliant environment. A patient’s appointment request gets routed to Amazon Chime SDK. Clinicians then review a patient’s intake form and correlate the patient to a Virtual Care session using Amazon Chime SDK messaging, which connects providers and patients with secure, scalable messaging in their web and mobile applications. The Amazon Chime SDK control plane sends event notifications through a default event bus to Amazon EventBridge, a serverless event bus that helps organizations receive, filter, transform, route, and deliver events. Healthcare professionals deliver care over the internet in near real time, which has significantly reduced no-shows for appointments. “Using Amazon Chime SDK, we don’t have to worry about the mechanics of the video call,” Daftari says. “We can focus on features and functions that help differentiate our product in the marketplace, while also significantly improving our speed to launch.” Salesforce further supports accessibility through embedding closed-captioning of video calls using Amazon Chime SDK live transcription. Amazon Chime SDK sends live audio streams to Amazon Transcribe, which automatically converts speech to text. Salesforce Health Cloud customers can use the live transcription capability to display subtitles, create meeting transcripts, or analyze content. Virtual Care goes a step further by incorporating Amazon Transcribe Medical, an automatic speech recognition service that makes it simple to add medical speech-to-text capabilities to voice applications. The solution also builds in protections in the case of event delivery failure. Using Amazon EventBridge, Salesforce customers route events to a variety of targets, such as Amazon Simple Queue Service (Amazon SQS), which provides fully managed message queuing for microservices, distributed systems, and serverless applications. 
To monitor the Amazon SQS queue depth and send alerts when it exceeds the configured threshold, Salesforce Health Cloud uses Amazon CloudWatch, which collects and visualizes near-real-time logs, metrics, and event data in automated dashboards. An Amazon CloudWatch alarm initiates email notifications to stakeholders, using Amazon Simple Notification Service (Amazon SNS), a fully managed service for application-to-application and application-to-person messaging. “It is critical that video visits are secure, responsive, and reliable,” says Daftari. “Using AWS helps us provide all this in a performant and scalable way.” Türkçe EventBridge makes it easier to build event-driven applications at scale using events generated from your applications, integrated SaaS applications, and AWS services. English Removes barriers to healthcare About Salesforce Amazon EventBridge Saves time for doctors and patients Deutsch Expands accessibility through live closed-captioning Tiếng Việt Italiano ไทย Amazon CloudWatch Learn more » Reduces appointment no-shows Português
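To make the building blocks concrete, the following Python (Boto3) sketch shows the general shape of the Amazon Chime SDK calls described above: creating a meeting, adding an attendee, and starting live transcription backed by Amazon Transcribe Medical. The Region, external IDs, and transcription settings are illustrative assumptions and not Salesforce's Virtual Care implementation.

import uuid
import boto3

# Sketch: create a Chime SDK meeting and attendee, then enable live
# transcription with Amazon Transcribe Medical for closed captions.
# External IDs, Region, and medical settings are placeholders.
meetings = boto3.client("chime-sdk-meetings")

meeting = meetings.create_meeting(
    ClientRequestToken=str(uuid.uuid4()),
    MediaRegion="us-east-1",
    ExternalMeetingId="virtual-care-visit-12345",   # hypothetical ID
)
meeting_id = meeting["Meeting"]["MeetingId"]

attendee = meetings.create_attendee(
    MeetingId=meeting_id,
    ExternalUserId="patient-67890",                 # hypothetical ID
)

# Stream the meeting audio to Amazon Transcribe Medical for live captions.
meetings.start_meeting_transcription(
    MeetingId=meeting_id,
    TranscriptionConfiguration={
        "EngineTranscribeMedicalSettings": {
            "LanguageCode": "en-US",
            "Specialty": "PRIMARYCARE",
            "Type": "CONVERSATION",
            "Region": "us-east-1",
        }
    },
)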
High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus by Jesse Manders , Alex Williams , Jonathan Buck , Erran Li , Romi Datta , and Sarah Gao | on 30 MAY 2023 | in Amazon SageMaker , Amazon SageMaker Ground Truth , Artificial Intelligence , Foundational (100) , Generative AI | Permalink | Comments |  Share Amazon SageMaker Ground Truth Plus helps you prepare high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements, and Ground Truth Plus sets up and manages your data labeling workflow based on these requirements. From there, an expert workforce that is trained on a variety of machine learning (ML) tasks labels your data. You don’t even need deep ML expertise or knowledge of workflow design and quality management to use Ground Truth Plus. Now, Ground Truth Plus is serving customers who need data labeling and human feedback for fine-tuning foundation models for generative AI applications. In this post, you will learn about recent advancements in human feedback for generative AI available through SageMaker Ground Truth Plus. This includes new workflows and user interfaces (UIs) available for preparing demonstration datasets used in supervised fine-tuning, gathering high-quality human feedback to make preference datasets for aligning generative AI foundation models with human preferences, as well as customizing models to application builders’ requirements for style, substance, and voice. Challenges of getting started with generative AI Generative AI applications around the world incorporate both single-mode and multi-modal foundation models to solve for many different use cases. Common among them are chatbots, image generators, and video generators. Large language models (LLMs) are being used in chatbots for creative pursuits, academic and personal assistants, business intelligence tools, and productivity tools. You can use text-to-image models to generate abstract or realistic AI art and marketing assets. Text-to-video models are being used to generate videos for art projects, highly engaging advertisements, video game development, and even film development. Two of the most important problems to solve for both model producers who create foundation models and application builders who use existing generative foundation models to build their own tools and applications are: Fine-tuning these foundation models to be able to perform specific tasks Aligning them with human preferences to ensure they output helpful, accurate, and harmless information Foundation models are typically pre-trained on large corpora of unlabeled data, and therefore don’t perform well following natural language instructions. For an LLM, that means that they may be able to parse and generate language in general, but they may not be able to answer questions coherently or summarize text up to a user’s required quality. For example, when a user requests a summary of a text in a prompt, a model that hasn’t been fine-tuned how to summarize text may just recite the prompt text back to the user or respond with something irrelevant. If a user asks a question about a topic, the response from a model could just be a recitation of the question. For multi-modal models, such as text-to-image or text-to-video models, the models may output content unrelated to the prompt. 
For example, if a corporate graphic designer prompts a text-to-image model to create a new logo or an image for an advertisement, the model may not generate a relevant graphic related to the prompt if it has only a general concept of an image and elements of an image. In some cases, a model may output a harmful image or video, risking user confidence or company reputation. Even if models are fine-tuned to perform specific tasks, they may not be aligned with human preferences with respect to the meaning, style, or substance of their output content. In an LLM, this could manifest itself as inaccurate or even harmful content being generated by the model. For example, a model that isn’t aligned with human preferences through fine-tuning may output dangerous, unethical, or even illegal instructions when prompted by a user. No care will have been taken to limit the content being generated by the model to ensure it is aligned with human preferences to be accurate, relevant, and useful. This misalignment can be a problem for companies that rely on generative AI models for their applications, such as chatbots and multimedia creation. For multi-modal models, this may take the form of toxic, dangerous, or abusive images or video being generated. This is a risk when prompts are input to the model without the intention of generating sensitive content, and also if the model producer or application builder had not intended to allow the model to generate that kind of content, but it was generated anyway. To solve the issues of task-specific capability and aligning generative foundation models with human preferences, model producers and application builders must fine-tune the models with data using human-directed demonstrations and human feedback of model outputs. Data and training types There are several types of fine-tuning methods with different types of labeled data that are categorized as instruction tuning – or teaching a model how to follow instructions. Among them are supervised fine-tuning (SFT) using demonstration data, and reinforcement learning from human feedback (RLHF) using preference data. Demonstration data for supervised fine-tuning To fine-tune foundation models to perform specific tasks such as answering questions or summarizing text with high quality, the models undergo SFT with demonstration data. The purpose of demonstration data is to guide the model by providing it with labeled examples (demonstrations) of completed tasks being done by humans. For example, to teach an LLM how to answer questions, a human annotator will create a labeled dataset of human-generated question and answer pairs to demonstrate how a question and answer interaction works linguistically and what the content means semantically. This kind of SFT trains the model to recognize patterns of behavior demonstrated by the humans in the demonstration training data. Model producers need to do this type of fine-tuning to show that their models are capable of performing such tasks for downstream adopters. Application builders who use existing foundation models for their generative AI applications may need to fine-tune their models with demonstration data on these tasks with industry-specific or company-specific data to improve the relevancy and accuracy of their applications. Preference data for instruction tuning such as RLHF To further align foundation models with human preferences, model producers—and especially application builders—need to generate preference datasets to perform instruction tuning. 
Preference data in the context of instruction tuning is labeled data that captures human feedback with respect to a set of options output by a generative foundation model. It typically includes rating or ranking several inferences or pairwise comparing two inferences from a foundation model according to some specific attribute. For LLMs, these attributes may be helpfulness, accuracy, and harmlessness. For text-to-image models, it may be an aesthetic quality or text-image alignment. This preference data based on human feedback can then be used in various instruction tuning methods—including RLHF—in order to further fine-tune a model to align with human preferences. Instruction tuning using preference data plays a crucial role in enhancing the personalization and effectiveness of foundation models. This is a key step in building custom applications on top of pre-trained foundation models and is a powerful method to ensure models are generating helpful, accurate, and harmless content. A common example of instruction tuning is to instruct a chatbot to generate three responses to a query, and have a human read and rank all three according to some specified dimension, such as toxicity, factual accuracy, or readability. For example, a company may use a chatbot for its marketing department and wants to make sure that content is aligned to its brand message, doesn’t exhibit biases, and is clearly readable. The company would prompt the chatbot during instruction tuning to produce three examples, and have their internal experts select the ones that most align to their goal. Over time, they build a dataset used to teach the model what style of content humans prefer through reinforcement learning. This enables the chatbot application to output more relevant, readable, and safe content. SageMaker Ground Truth Plus Ground Truth Plus helps you address both challenges—generating demonstration datasets with task-specific capabilities, as well as gathering preference datasets from human feedback to align models with human preferences. You can request projects for LLMs and multi-modal models such as text-to-image and text-to-video. For LLMs, key demonstration datasets include generating questions and answers (Q&A), text summarization, text generation, and text reworking for the purposes of content moderation, style change, or length change. Key LLM preference datasets include ranking and classifying text outputs. For multi-modal models, key task types include captioning images or videos as well as logging timestamps of events in videos. Therefore, Ground Truth Plus can help both model producers and application builders on their generative AI journey. In this post, we dive deeper into the human annotator and feedback journey on four cases covering both demonstration data and preference data for both LLMs and multi-modal models: question and answer pair generation and text ranking for LLMs, as well as image captioning and video captioning for multi-modal models. Large language models In this section, we discuss question and answer pairs and text ranking for LLMs, along with customizations you may want for your use case. Question and answer pairs The following screenshot shows a labeling UI in which a human annotator will read a text passage and generate both questions and answers in the process of building a Q&A demonstration dataset. Let’s walk through a tour of the UI in the annotator’s shoes. On the left side of the UI, the job requester’s specific instructions are presented to the annotator. 
In this case, the annotator is supposed to read the passage of text presented in the center of the UI and create questions and answers based on the text. On the right side, the questions and answers that the annotator has written are shown. The text passage as well as type, length, and number of questions and answers can all be customized by the job requester during the project setup with the Ground Truth Plus team. In this case, the annotator has created a question that requires understanding the whole text passage to answer and is marked with a References entire passage check box. The other two questions and answers are based on specific parts of the text passage, as shown by the annotator highlights with color-coded matching. Optionally, you may want to request that questions and answers are generated without a provided text passage, and provide other guidelines for human annotators—this is also supported by Ground Truth Plus. After the questions and answers are submitted, they can flow to an optional quality control loop workflow where other human reviewers will confirm that customer-defined distribution and types of questions and answers have been created. If there is a mismatch between the customer requirements and what the human annotator has produced, the work will get funneled back to a human for rework before being exported as part of the dataset to deliver to the customer. When the dataset is delivered back to you, it’s ready to incorporate into the supervised fine-tuning workflow at your discretion. Text ranking The following screenshot shows a UI for ranking the outputs from an LLM based on a prompt. You can simply write the instructions for the human reviewer, and bring prompts and pre-generated responses to the Ground Truth Plus project team to start the job. In this case, we have requested for a human reviewer to rank three responses per prompt from an LLM on the dimension of writing clarity (readability). Again, the left pane shows the instructions given to the reviewer by the job requester. In the center, the prompt is at the top of the page, and the three pre-generated responses are the main body for ease of use. On the right side of the UI, the human reviewer will rank them in order of most to least clear writing. Customers wanting to generate this type of preference dataset include application builders interested in building human-like chatbots, and therefore want to customize the instructions for their own use. The length of the prompt, number of responses, and ranking dimension can all be customized. For example, you may want to rank five responses in order of most to least factually accurate, biased, or toxic, or even rank and classify multiple dimensions simultaneously. These customizations are supported in Ground Truth Plus. Multi-modal models In this section, we discuss image and video captioning for training multi-modal models such as text-to-image and text-to-video models, as well as customizations you may want to make for your particular use case. Image captioning The following screenshot shows a labeling UI for image captioning. You can request a project with image captioning to gather data to train a text-to-image model or an image-to-text model. In this case, we have requested to train a text-to-image model and have set specific requirements on the caption in terms of length and detail. 
The UI is designed to walk the human annotators through the cognitive process of generating rich captions by providing a mental framework through assistive and descriptive tools. We have found that providing this mental framework for annotators results in more descriptive and accurate captions than simply providing an editable text box alone. The first step in the framework is for the human annotator to identify key objects in the image. When the annotator chooses an object in the image, a color-coded dot appears on the object. In this case, the annotator has chosen both the dog and the cat, creating two editable fields on the right side of the UI wherein the annotator will enter the names of the objects—cat and dog—along with a detailed description of each object. Next, the annotator is guided to identify all the relationships between all the objects in the image. In this case, the cat is relaxing next to the dog. Next, the annotator is asked to identify specific attributes about the image, such as the setting, background, or environment. Finally, in the caption input text box, the annotator is instructed to combine all of what they wrote in the objects, relationships, and image setting fields into a complete single descriptive caption of the image. Optionally, you can configure this image caption to be passed through a human-based quality check loop with specific instructions to ensure that the caption meets the requirements. If there is an issue identified, such as a missing key object, that caption can be sent back for a human to correct the issue before exporting as part of the training dataset. Video captioning The following screenshot shows a video captioning UI to generate rich video captions with timestamp tags. You can request a video caption project to gather data to build text-to-video or video-to-text models. In this labeling UI, we have built a similar mental framework to ensure high-quality captions are written. The human annotator can control the video on the left side and create descriptions and timestamps for each activity shown in the video on the right side with the UI elements. Similar to the image captioning UI, there is also a place for the annotator to write a detailed description of the video setting, background, and environment. Finally, the annotator is instructed to combine all the elements into a coherent video caption. Similar to the image caption case, the video captions may optionally flow through a human-based quality control workflow to determine if your requirements are met. If there is an issue with the video captions, it will be sent for rework by the human annotator workforce. Conclusion Ground Truth Plus can help you prepare high-quality datasets to fine-tune foundation models for generative AI tasks, from answering questions to generating images and videos. It also allows skilled human workforces to review model outputs to ensure that they are aligned with human preferences. Additionally, it enables application builders to customize models using their industry or company data to ensure their application represents their preferred voice and style. These are the first of many innovations in Ground Truth Plus, and more are in development. Stay tuned for future posts. Interested in starting a project to build or improve your generative AI models and applications? Get started with Ground Truth Plus by connecting with our team today. About the authors Jesse Manders is a Senior Product Manager in the AWS AI/ML human in the loop services team. 
He works at the intersection of AI and human interaction with the goal of creating and improving AI/ML products and services to meet our needs. Previously, Jesse held leadership roles in engineering at Apple and Lumileds, and was a senior scientist in a Silicon Valley startup. He has an M.S. and Ph.D. from the University of Florida, and an MBA from the University of California, Berkeley, Haas School of Business. Romi Datta is a Senior Manager of Product Management in the Amazon SageMaker team responsible for Human in the Loop services. He has been in AWS for over 4 years, holding several product management leadership roles in SageMaker, S3 and IoT. Prior to AWS he worked in various product management, engineering and operational leadership roles at IBM, Texas Instruments and Nvidia. He has an M.S. and Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin, and an MBA from the University of Chicago Booth School of Business. Jonathan Buck  is a Software Engineer at Amazon Web Services working at the intersection of machine learning and distributed systems. His work involves productionizing machine learning models and developing novel software applications powered by machine learning to put the latest capabilities in the hands of customers. Alex Williams is an applied scientist in the human-in-the-loop science team at AWS AI where he conducts interactive systems research at the intersection of human-computer interaction (HCI) and machine learning. Before joining Amazon, he was a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee where he co-directed the People, Agents, Interactions, and Systems (PAIRS) research laboratory. He has also held research positions at Microsoft Research, Mozilla Research, and the University of Oxford. He regularly publishes his work at premier publication venues for HCI, such as CHI, CSCW, and UIST. He holds a PhD from the University of Waterloo. Sarah Gao is a Software Development Manager in Amazon SageMaker Human In the Loop (HIL) responsible for building the ML based labeling platform. Sarah has been in AWS for over 4 years, holding several software management leadership roles in EC2 security and SageMaker. Prior to AWS she worked in various engineering management roles at Oracle and Sun Microsystem. Erran Li is the applied science manager at human-in-the-loop services, AWS AI, Amazon. His research interests are 3D deep learning, and vision and language representation learning. Previously he was a senior scientist at Alexa AI, the head of machine learning at Scale AI and the chief scientist at Pony.ai. Before that, he was with the perception team at Uber ATG and the machine learning platform team at Uber working on machine learning for autonomous driving, machine learning systems and strategic initiatives of AI. He started his career at Bell Labs and was adjunct professor at Columbia University. He co-taught tutorials at ICML’17 and ICCV’19, and co-organized several workshops at NeurIPS, ICML, CVPR, ICCV on machine learning for autonomous driving, 3D vision and robotics, machine learning systems and adversarial machine learning. He has a PhD in computer science at Cornell University. He is an ACM Fellow and IEEE Fellow. Comments View Comments Resources Getting Started What's New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow  Twitter  Facebook  LinkedIn  Twitch  Email Updates
Highlight text as its being spoken using Amazon Polly _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog Highlight text as it’s being spoken using Amazon Polly by Varad Varadarajan | on 05 JUL 2023 | in Amazon Polly , Amazon Translate , Artificial Intelligence , Intermediate (200) , Technical How-to | Permalink | Comments |  Share Amazon Polly is a service that turns text into lifelike speech. It enables the development of a whole class of applications that can convert text into speech in multiple languages. This service can be used by chatbots, audio books, and other text-to-speech applications in conjunction with other AWS AI or machine learning (ML) services. For example, Amazon Lex and Amazon Polly can be combined to create a chatbot that engages in a two-way conversation with a user and performs certain tasks based on the user’s commands. Amazon Transcribe , Amazon Translate , and Amazon Polly can be combined to transcribe speech to text in the source language, translate it to a different language, and speak it. In this post, we present an interesting approach for highlighting text as it’s being spoken using Amazon Polly. This solution can be used in many text-to-speech applications to do the following: Add visual capabilities to audio in books, websites, and blogs Increase comprehension when customers are trying to understand the text rapidly as it’s being spoken Our solution gives the client (the browser, in this example), the ability to know what text (word or sentence) is being spoken by Amazon Polly at any instant. This enables the client to dynamically highlight the text as it’s being spoken. Such a capability is useful for providing visual aid to speech for the use cases mentioned previously. Our solution can be extended to perform additional tasks besides highlighting text. For example, the browser can show images, play music, or perform other animations on the front end as the text is being spoken. This capability is useful for creating dynamic audio books, educational content, and richer text-to-speech applications. Solution overview At its core, the solution uses Amazon Polly to convert a string of text into speech. The text can be input from the browser or through an API call to the endpoint exposed by our solution. The speech generated by Amazon Polly is stored as an audio file (MP3 format) in an Amazon Simple Storage Service (Amazon S3) bucket. However, using the audio file alone, the browser can’t find what parts of the text are being spoken at any instant because we don’t have granular information on when each word is spoken. Amazon Polly provides a way to obtain this using speech marks. Speech marks are stored in a text file that shows the time (measured in milliseconds from start of the audio) when each word or sentence is spoken. Amazon Polly returns speech mark objects in a line-delimited JSON stream. 
A speech mark object contains the following fields:
Time – The timestamp in milliseconds from the beginning of the corresponding audio stream
Type – The type of speech mark (sentence, word, viseme, or SSML)
Start – The offset in bytes (not characters) of the start of the object in the input text (not including viseme marks)
End – The offset in bytes (not characters) of the object’s end in the input text (not including viseme marks)
Value – This varies depending on the type of speech mark:
SSML – <mark> SSML tag
Viseme – The viseme name
Word or sentence – A substring of the input text as delimited by the start and end fields
For example, the sentence “Mary had a little lamb” can give you the following speech marks file if you use SpeechMarkTypes = [“word”, “sentence”] in the API call to obtain the speech marks:
{"time":0,"type":"sentence","start":0,"end":23,"value":"Mary had a little lamb."}
{"time":6,"type":"word","start":0,"end":4,"value":"Mary"}
{"time":373,"type":"word","start":5,"end":8,"value":"had"}
{"time":604,"type":"word","start":9,"end":10,"value":"a"}
{"time":643,"type":"word","start":11,"end":17,"value":"little"}
{"time":882,"type":"word","start":18,"end":22,"value":"lamb"}
The word “had” (at the end of line 3) begins 373 milliseconds after the audio stream begins, starts at byte 5, and ends at byte 8 of the input text.
Architecture overview
The architecture of our solution is presented in the following diagram.
Highlight Text as it’s spoken, using Amazon Polly
Our website for the solution is stored on Amazon S3 as static files (JavaScript, HTML), which are hosted in Amazon CloudFront (1) and served to the end user’s browser (2). When the user enters text in the browser through a simple HTML form, it’s processed by JavaScript in the browser. This calls an API (3) through Amazon API Gateway to invoke an AWS Lambda function (4). The Lambda function calls Amazon Polly (5) to generate the speech (audio) and speech marks (JSON) files. Two calls are made to Amazon Polly to fetch the audio and speech marks files, using JavaScript async functions. The output of these calls, the audio and speech marks files, is stored in Amazon S3 (6a). To avoid multiple users overwriting each other’s files in the S3 bucket, the files are stored in a folder named with a timestamp. For a production release, we can employ more robust approaches to segregate users’ files based on user ID, timestamp, and other unique characteristics. The Lambda function creates pre-signed URLs for the speech and speech marks files and returns them to the browser in the form of an array (7, 8, 9). When the browser sends the text to the API endpoint (3), it gets back two pre-signed URLs, one for the audio file and one for the speech marks file, in one synchronous invocation (9). This is indicated by the key symbol next to the arrow. A JavaScript function in the browser fetches the speech marks file and the audio from their URL handles (10). It sets up the audio player to play the audio (the HTML audio tag is used for this purpose). When the user clicks the play button, the browser parses the speech marks retrieved in the earlier step to create a series of timed events using timeouts. The events invoke a callback function, which is another JavaScript function used to highlight the spoken text in the browser. Simultaneously, the JavaScript function streams the audio file from its URL handle.
The result is that the events are run at the appropriate times to highlight the text as it’s spoken while the audio is being played. The use of JavaScript timeouts provides us with the synchronization of the audio and the highlighted text. Prerequisites To run this solution, you need an AWS account with an AWS Identity and Access Management (IAM) user who has permission to use Amazon CloudFront, Amazon API Gateway, Amazon Polly, Amazon S3, AWS Lambda, and AWS Step Functions. Use Lambda to generate speech and speech marks The following code invokes the Amazon Polly synthesize_speech function two times to fetch the audio and speech marks file. They’re run as asynchronous functions and coordinated to return the result at the same time using promises.

const p1 = new Promise(doSynthesizeSpeechMarks);
const p2 = new Promise(doSynthesizeSpeech);
var result;
await Promise.all([p1, p2])
  .then((values) => {
    // return array of presigned urls
    console.log('Values:', values);
    result = { "output": values };
  })
  .catch((err) => {
    console.log("Error:" + err);
    result = err;
  });

On the JavaScript side, the text highlighting is done by highlighter(start, finish, word) and the timed events are set by setTimers():

function highlighter(start, finish, word) {
  let textarea = document.getElementById("postText");
  // console.log(start + "," + finish + "," + word);
  textarea.focus();
  textarea.setSelectionRange(start, finish);
}

function setTimers() {
  let speechMarksStr = sessionStorage.getItem("speechmarks");
  // read through the speech marks file and set timers for every word
  console.log(speechMarksStr);
  let speechMarks = speechMarksStr.split("\n");
  for (let i = 0; i < speechMarks.length; i++) {
    // console.log(i + ":" + speechMarks[i]);
    if (speechMarks[i].length == 0) {
      continue;
    }
    smjson = JSON.parse(speechMarks[i]);
    t = smjson["time"];
    s = smjson["start"];
    f = smjson["end"];
    word = smjson["value"];
    setTimeout(highlighter, t, s, f, word);
  }
}

Alternative approaches Instead of the previous approach, you can consider a few alternatives: Create both the speech marks and audio files inside a Step Functions state machine. The state machine can invoke the parallel branch condition to invoke two different Lambda functions: one to generate speech and another to generate speech marks. The code for this can be found in the using-step-functions subfolder in the GitHub repo. Invoke Amazon Polly asynchronously to generate the audio and speech marks. This approach can be used if the text content is large or the user doesn’t need a real-time response. For more details about creating long audio files, refer to Creating Long Audio Files. Have Amazon Polly create the presigned URL directly using the generate_presigned_url call on the Amazon Polly client in Boto3 (a brief sketch of this call appears after this section). If you go with this approach, Amazon Polly generates the audio and speech marks newly every time. In our current approach, we store these files in Amazon S3. Although these stored files aren’t accessible from the browser in our version of the code, you can modify the code to play previously generated audio files by fetching them from Amazon S3 (instead of regenerating the audio for the text again using Amazon Polly). We have more code examples for accessing Amazon Polly with Python in the AWS Code Library. Create the solution The entire solution is available from our GitHub repo. To create this solution in your account, follow the instructions in the README.md file. The solution includes an AWS CloudFormation template to provision your resources.
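As a rough illustration of the last alternative listed above—having Amazon Polly produce URLs the browser can call directly—the following Python (Boto3) sketch presigns synthesize_speech requests. The text, voice, and expiry values are assumptions for the example, not values taken from the deployed solution.

import boto3

polly = boto3.client("polly")

# Presign a synthesize_speech call so the browser can fetch the audio directly
audio_url = polly.generate_presigned_url(
    ClientMethod="synthesize_speech",
    Params={
        "Text": "Mary had a little lamb.",
        "VoiceId": "Joanna",       # illustrative voice choice
        "OutputFormat": "mp3",
    },
    ExpiresIn=300,                 # URL validity in seconds (assumption)
)

# A second presigned URL can be generated the same way for the speech marks
marks_url = polly.generate_presigned_url(
    ClientMethod="synthesize_speech",
    Params={
        "Text": "Mary had a little lamb.",
        "VoiceId": "Joanna",
        "OutputFormat": "json",
        "SpeechMarkTypes": ["word", "sentence"],
    },
    ExpiresIn=300,
)

print(audio_url)
print(marks_url)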
Cleanup To clean up the resources created in this demo, perform the following steps: Delete the S3 buckets created to store the CloudFormation template (Bucket A), the source code (Bucket B) and the website ( pth-cf-text-highlighter-website-[Suffix] ). Delete the CloudFormation stack pth-cf . Delete the S3 bucket containing the speech files ( pth-speech-[Suffix] ). This bucket was created by the CloudFormation template to store the audio and speech marks files generated by Amazon Polly. Summary In this post, we showed an example of a solution that can highlight text as it’s being spoken using Amazon Polly. It was developed using the Amazon Polly speech marks feature, which provides us markers for the place each word or sentence begins in an audio file. The solution is available as a CloudFormation template. It can be deployed as is to any web application that performs text-to-speech conversion. This would be useful for adding visual capabilities to audio in books, avatars with lip-sync capabilities (using viseme speech marks), websites, and blogs, and for aiding people with hearing impairments. It can be extended to perform additional tasks besides highlighting text. For example, the browser can show images, play music, and perform other animations on the front end while the text is being spoken. This capability can be useful for creating dynamic audio books, educational content, and richer text-to-speech applications. We welcome you to try out this solution and learn more about the relevant AWS services from the following links. You can extend the functionality for your specific needs. Amazon API Gateway Amazon CloudFront AWS Lambda Amazon Polly Amazon S3 About the Author Varad G Varadarajan is a Trusted Advisor and Field CTO for Digital Native Businesses (DNB) customers at AWS. He helps them architect and build innovative solutions at scale using AWS products and services. Varad’s areas of interest are IT strategy consulting, architecture, and product management. Outside of work, Varad enjoys creative writing, watching movies with family and friends, and traveling.
Host ML models on Amazon SageMaker using Triton_ ONNX Models _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog Host ML models on Amazon SageMaker using Triton: ONNX Models by Abhi Shivaditya , Dhawalkumar Patel , James Park , and Rupinder Grewal | on 09 JUN 2023 | in Advanced (300) , Amazon SageMaker , Artificial Intelligence | Permalink | Comments |  Share ONNX ( Open Neural Network Exchange ) is an open-source standard for representing deep learning models widely supported by many providers. ONNX provides tools for optimizing and quantizing models to reduce the memory and compute needed to run machine learning (ML) models. One of the biggest benefits of ONNX is that it provides a standardized format for representing and exchanging ML models between different frameworks and tools. This allows developers to train their models in one framework and deploy them in another without the need for extensive model conversion or retraining. For these reasons, ONNX has gained significant importance in the ML community. In this post, we showcase how to deploy ONNX-based models for multi-model endpoints (MMEs) that use GPUs. This is a continuation of the post Run multiple deep learning models on GPU with Amazon SageMaker multi-model endpoints , where we showed how to deploy PyTorch and TensorRT versions of ResNet50 models on Nvidia’s Triton Inference server. In this post, we use the same ResNet50 model in ONNX format along with an additional natural language processing (NLP) example model in ONNX format to show how it can be deployed on Triton. Furthermore, we benchmark the ResNet50 model and see the performance benefits that ONNX provides when compared to PyTorch and TensorRT versions of the same model, using the same input. ONNX Runtime ONNX Runtime is a runtime engine for ML inference designed to optimize the performance of models across multiple hardware platforms, including CPUs and GPUs. It allows the use of ML frameworks like PyTorch and TensorFlow. It facilitates performance tuning to run models cost-efficiently on the target hardware and has support for features like quantization and hardware acceleration, making it one of the ideal choices for deploying efficient, high-performance ML applications. For examples of how ONNX models can be optimized for Nvidia GPUs with TensorRT, refer to TensorRT Optimization (ORT-TRT) and ONNX Runtime with TensorRT optimization . The Amazon SageMaker Triton container flow is depicted in the following diagram. Users can send an HTTPS request with the input payload for real-time inference behind a SageMaker endpoint. The user can specify a TargetModel header that contains the name of the model that the request in question is destined to invoke. Internally, the SageMaker Triton container implements an HTTP server with the same contracts as mentioned in How Containers Serve Requests . It has support for dynamic batching and supports all the backends that Triton provides . Based on the configuration, the ONNX runtime is invoked and the request is processed on CPU or GPU as predefined in the model configuration provided by the user. Solution overview To use the ONNX backend, complete the following steps: Compile the model to ONNX format. Configure the model. Create the SageMaker endpoint. Prerequisites Ensure that you have access to an AWS account with sufficient AWS Identity and Access Management IAM permissions to create a notebook, access an Amazon Simple Storage Service (Amazon S3) bucket, and deploy models to SageMaker endpoints. See Create execution role for more information. 
Compile the model to ONNX format The transformers library provides a convenient method to compile the PyTorch model to ONNX format. The following code achieves the transformations for the NLP model:

onnx_inputs, onnx_outputs = transformers.onnx.export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=12,
    output=save_path
)

Exporting models (either PyTorch or TensorFlow) is easily achieved through the conversion tool provided as part of the Hugging Face transformers repository. The following is what happens under the hood: Allocate the model from transformers (PyTorch or TensorFlow). Forward dummy inputs through the model. This way, ONNX can record the set of operations run. The transformers inherently take care of dynamic axes when exporting the model. Save the graph along with the network parameters. A similar mechanism is followed for the computer vision use case from the torchvision model zoo:

torch.onnx.export(
    resnet50,
    dummy_input,
    args.save,
    export_params=True,
    opset_version=11,
    do_constant_folding=True,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},
)

Configure the model In this section, we configure the computer vision and NLP models. We show how to prepare a ResNet50 model and a pre-trained RoBERTA Large model for deployment on a SageMaker MME by utilizing Triton Inference Server model configurations. The ResNet50 notebook is available on GitHub. The RoBERTA notebook is also available on GitHub. For ResNet50, we use the Docker approach to create an environment that already has all the dependencies required to build our ONNX model and generate the model artifacts needed for this exercise. This approach makes it much easier to share dependencies and create the exact environment that is needed to accomplish this task. The first step is to create the ONNX model package per the directory structure specified in ONNX Models. Our aim is to use the minimal model repository for an ONNX model contained in a single file as follows:

<model-repository-path>/
└── Model_name
    ├── 1
    │   └── model.onnx
    └── config.pbtxt

Next, we create the model configuration file that describes the inputs, outputs, and backend configurations for the Triton Server to pick up and invoke the appropriate kernels for ONNX. This file is known as config.pbtxt and is shown in the following code for the RoBERTA use case. Note that the BATCH dimension is omitted from the config.pbtxt. However, when sending the data to the model, we include the batch dimension. The following code also shows how you can add this feature with model configuration files to set dynamic batching with a preferred batch size of 5 for the actual inference. With the current settings, the model instance is invoked instantly when the preferred batch size of 5 is met or the delay time of 100 microseconds has elapsed since the first request reached the dynamic batcher.
name: "nlp-onnx" platform: "onnxruntime_onnx" backend: "onnxruntime" max_batch_size: 32 input { name: "input_ids" data_type: TYPE_INT64 dims: [512] } input { name: "attention_mask" data_type: TYPE_INT64 dims: [512] } output { name: "last_hidden_state" data_type: TYPE_FP32 dims: [-1, 768] } output { name: "1550" data_type: TYPE_FP32 dims: [768] } instance_group { count: 1 kind: KIND_GPU } dynamic_batching { max_queue_delay_microseconds: 100 preferred_batch_size:5 } The following is the similar configuration file for the computer vision use case: name: "resenet_onnx" platform: "onnxruntime_onnx" max_batch_size : 128 input [ { name: "input" data_type: TYPE_FP32 format: FORMAT_NCHW dims: [ 3, 224, 224 ] } ] output [ { name: "output" data_type: TYPE_FP32 dims: [ 1000 ] } ] Create the SageMaker endpoint We use the Boto3 APIs to create the SageMaker endpoint. For this post, we show the steps for the RoBERTA notebook, but these are common steps and will be the same for the ResNet50 model as well. Create a SageMaker model We now create a SageMaker model . We use the Amazon Elastic Container Registry (Amazon ECR) image and the model artifact from the previous step to create the SageMaker model. Create the container To create the container, we pull the appropriate image from Amazon ECR for Triton Server. SageMaker allows us to customize and inject various environment variables. Some of the key features are the ability to set the BATCH_SIZE ; we can set this per model in the config.pbtxt file, or we can define a default value here. For models that can benefit from larger shared memory size, we can set those values under SHM variables. To enable logging, set the log verbose level to true . We use the following code to create the model to use in our endpoint: mme_triton_image_uri = ( f"{account_id_map[region]}.dkr.ecr.{region}.{base}" + "/sagemaker-tritonserver:22.12-py3" ) container = { "Image": mme_triton_image_uri, "ModelDataUrl": mme_path, "Mode": "MultiModel", "Environment": { "SAGEMAKER_TRITON_SHM_DEFAULT_BYTE_SIZE": "16777216000", # "16777216", #"16777216000", "SAGEMAKER_TRITON_SHM_GROWTH_BYTE_SIZE": "10485760", }, } from sagemaker.utils import name_from_base model_name = name_from_base(f"flan-xxl-fastertransformer") print(model_name) create_model_response = sm_client.create_model( ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer={ "Image": inference_image_uri, "ModelDataUrl": s3_code_artifact }, ) model_arn = create_model_response["ModelArn"] print(f"Created Model: {model_arn}") Create a SageMaker endpoint You can use any instances with multiple GPUs for testing. In this post, we use a g4dn.4xlarge instance. We don’t set the VolumeSizeInGB parameters because this instance comes with local instance storage. The VolumeSizeInGB parameter is applicable to GPU instances supporting the Amazon Elastic Block Store (Amazon EBS) volume attachment. We can leave the model download timeout and container startup health check at the default values. For more details, refer to CreateEndpointConfig . 
endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.g4dn.4xlarge",
            "InitialInstanceCount": 1,
            # "VolumeSizeInGB": 200,
            # "ModelDataDownloadTimeoutInSeconds": 600,
            # "ContainerStartupHealthCheckTimeoutInSeconds": 600,
        },
    ],
)

Lastly, we create a SageMaker endpoint:

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}",
    EndpointConfigName=endpoint_config_name,
)

Invoke the model endpoint This is a transformer-based NLP model, so we pass in the input_ids and attention_mask to the model as part of the payload. The following code shows how to create the tensors:

tokenizer("This is a sample", padding="max_length", max_length=max_seq_len)

We now create the appropriate payload by ensuring the data type matches what we configured in the config.pbtxt. This also gives us the tensors with the batch dimension included, which is what Triton expects. We use the JSON format to invoke the model. Triton also provides a native binary invocation method for the model.

response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel=f"{tar_file_name}",
    # TargetModel="roberta-large-v0.tar.gz",
)

Note the TargetModel parameter in the preceding code. We send the name of the model to be invoked as a request header because this is a multi-model endpoint; therefore, we can invoke multiple models at runtime on an already deployed inference endpoint by changing this parameter. This shows the power of multi-model endpoints! To output the response, we can use the following code:

import numpy as np
resp_bin = response["Body"].read().decode("utf8")
# -- keys are -- "outputs":[{"name":"1550","datatype":"FP32","shape":[1,768],"data": [0.0013,0,3433...]}]
for data in json.loads(resp_bin)["outputs"]:
    shape_1 = list(data["shape"])
    dat_1 = np.array(data["data"])
    dat_1.resize(shape_1)
    print(f"Data Outputs received back :Shape:{dat_1.shape}")

ONNX for performance tuning The ONNX backend uses C++ arena memory allocation. Arena allocation is a C++-only feature that helps you optimize your memory usage and improve performance. Memory allocation and deallocation constitute a significant fraction of CPU time spent in protocol buffers code. By default, new object creation performs heap allocations for each object, each of its sub-objects, and several field types, such as strings. These allocations occur in bulk when parsing a message and when building new messages in memory, and associated deallocations happen when messages and their sub-object trees are freed. Arena-based allocation has been designed to reduce this performance cost. With arena allocation, new objects are allocated out of a large piece of pre-allocated memory called the arena. Objects can all be freed at once by discarding the entire arena, ideally without running destructors of any contained object (though an arena can still maintain a destructor list when required). This makes object allocation faster by reducing it to a simple pointer increment, and makes deallocation almost free. Arena allocation also provides greater cache efficiency: when messages are parsed, they are more likely to be allocated in continuous memory, which makes traversing messages more likely to hit hot cache lines.
The downside of arena-based allocation is the C++ heap memory will be over-allocated and stay allocated even after the objects are deallocated. This might lead to out of memory or high CPU memory usage. To achieve the best of both worlds, we use the following configurations provided by Triton and ONNX : arena_extend_strategy – This parameter refers to the strategy used to grow the memory arena with regards to the size of the model. We recommend setting the value to 1 (= kSameAsRequested ), which is not a default value. The reasoning is as follows: the drawback of the default arena extend strategy ( kNextPowerOfTwo ) is that it might allocate more memory than needed, which could be a waste. As the name suggests, kNextPowerOfTwo (the default) extends the arena by a power of 2, whereas kSameAsRequested extends by a size that is the same as the allocation request each time. kSameAsRequested is suited for advanced configurations where you know the expected memory usage in advance. In our testing, because we know the size of models is a constant value, we can safely choose kSameAsRequested . gpu_mem_limit – We set the value to the CUDA memory limit. To use all possible memory, pass in the maximum size_t . It defaults to SIZE_MAX if nothing is specified. We recommend keeping it as default. enable_cpu_mem_arena – This enables the memory arena on CPU. The arena may pre-allocate memory for future usage. Set this option to false if you don’t want it. The default is True . If you disable the arena, heap memory allocation will take time, so inference latency will increase. In our testing, we left it as default. enable_mem_pattern – This parameter refers to the internal memory allocation strategy based on input shapes. If the shapes are constant, we can enable this parameter to generate a memory pattern for the future and save some allocation time, making it faster. Use 1 to enable the memory pattern and 0 to disable. It’s recommended to set this to 1 when the input features are expected to be the same. The default value is 1. do_copy_in_default_stream – In the context of the CUDA execution provider in ONNX, a compute stream is a sequence of CUDA operations that are run asynchronously on the GPU. The ONNX runtime schedules operations in different streams based on their dependencies, which helps minimize the idle time of the GPU and achieve better performance. We recommend using the default setting of 1 for using the same stream for copying and compute; however, you can use 0 for using separate streams for copying and compute, which might result in the device pipelining the two activities. In our testing of the ResNet50 model, we used both 0 and 1 but couldn’t find any appreciable difference between the two in terms of performance and memory consumption of the GPU device. Graph optimization – The ONNX backend for Triton supports several parameters that help fine-tune the model size as well as runtime performance of the deployed model. When the model is converted to the ONNX representation (the first box in the following diagram at the IR stage), the ONNX runtime provides graph optimizations at three levels: basic, extended, and layout optimizations. 
You can activate all levels of graph optimizations by adding the following parameters in the model configuration file: optimization { graph : { level : 1 }} cudnn_conv_algo_search – Because we’re using CUDA-based Nvidia GPUs in our testing, for our computer vision use case with the ResNet50 model, we can use the CUDA execution provider-based optimization at the fourth layer in the following diagram with the cudnn_conv_algo_search parameter. The default option is exhaustive (0), but when we changed this configuration to 1 – HEURISTIC , we saw the model latency in steady state reduce to 160 milliseconds. The reason this happens is because the ONNX runtime invokes the lighter weight cudnnGetConvolutionForwardAlgorithm_v7 forward pass and therefore reduces latency with adequate performance. Run mode – The next step is selecting the correct execution_mode at layer 5 in the following diagram. This parameter controls whether you want to run operators in your graph sequentially or in parallel. Usually when the model has many branches, setting this option to ExecutionMode.ORT_PARALLEL (1) will give you better performance. In the scenario where your model has many branches in its graph, setting the run mode to parallel will help with better performance. The default mode is sequential, so you can enable this to suit your needs. parameters { key: "execution_mode" value: { string_value: "1" } } For a deeper understanding of the opportunities for performance tuning in ONNX, refer to the following figure. Source: https://static.linaro.org/connect/san19/presentations/san19-211.pdf Benchmark numbers and performance tuning By turning on the graph optimizations, cudnn_conv_algo_search , and parallel run mode parameters in our testing of the ResNet50 model, we saw the cold start time of the ONNX model graph reduce from 4.4 seconds to 1.61 seconds. An example of a complete model configuration file is provided in the ONNX configuration section of the following notebook . The testing benchmark results are as follows: PyTorch – 176 milliseconds, cold start 6 seconds TensorRT – 174 milliseconds, cold start 4.5 seconds ONNX – 168 milliseconds, cold start 4.4 seconds The following graphs visualize these metrics. Furthermore, in our testing of computer vision use cases, consider sending the request payload in binary format using the HTTP client provided by Triton because it significantly improves model invoke latency. Other parameters that SageMaker exposes for ONNX on Triton are as follows: Dynamic batching – Dynamic batching is a feature of Triton that allows inference requests to be combined by the server, so that a batch is created dynamically. Creating a batch of requests typically results in increased throughput. The dynamic batcher should be used for stateless models. The dynamically created batches are distributed to all model instances configured for the model. Maximum batch size – The max_batch_size property indicates the maximum batch size that the model supports for the types of batching that can be exploited by Triton. If the model’s batch dimension is the first dimension, and all inputs and outputs to the model have this batch dimension, then Triton can use its dynamic batcher or sequence batcher to automatically use batching with the model. In this case, max_batch_size should be set to a value greater than or equal to 1, which indicates the maximum batch size that Triton should use with the model. 
Default max batch size – The default-max-batch-size value is used for max_batch_size during autocomplete when no other value is found. The onnxruntime backend will set the max_batch_size of the model to this default value if autocomplete has determined the model is capable of batching requests and max_batch_size is 0 in the model configuration or max_batch_size is omitted from the model configuration. If max_batch_size is more than 1 and no scheduler is provided, the dynamic batch scheduler will be used. The default max batch size is 4. Clean up Ensure that you delete the model, model configuration, and model endpoint after running the notebook. The steps to do this are provided at the end of the sample notebook in the GitHub repo. Conclusion In this post, we dove deep into the ONNX backend that Triton Inference Server supports on SageMaker. This backend provides GPU acceleration of your ONNX models. There are many options to consider to get the best performance for inference, such as batch sizes, data input formats, and other factors that can be tuned to meet your needs. SageMaker allows you to use this capability using single-model and multi-model endpoints. MMEs allow a better balance of performance and cost savings. To get started with MME support for GPU, see Host multiple models in one container behind one endpoint. We invite you to try Triton Inference Server containers in SageMaker, and share your feedback and questions in the comments. About the authors Abhi Shivaditya is a Senior Solutions Architect at AWS, working with strategic global enterprise organizations to facilitate the adoption of AWS services in areas such as Artificial Intelligence, distributed computing, networking, and storage. His expertise lies in Deep Learning in the domains of Natural Language Processing (NLP) and Computer Vision. Abhi assists customers in deploying high-performance machine learning models efficiently within the AWS ecosystem. James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn. Rupinder Grewal is a Sr. AI/ML Specialist Solutions Architect with AWS. He currently focuses on model serving and MLOps on SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work he enjoys playing tennis and biking on mountain trails. Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and Artificial Intelligence. He focuses on Deep Learning, including the NLP and Computer Vision domains. He helps customers achieve high-performance model inference on SageMaker.
How AWS is helping thredUP revolutionize the resale model for brands _ AWS for Industries.txt
AWS for Industries How AWS is helping thredUP revolutionize the resale model for brands by Madeline Steiner | on 06 JUN 2023 | in Amazon EC2 , Amazon QuickSight , Amazon RDS , Amazon SageMaker , Amazon Simple Storage Service (S3) , Auto Scaling , AWS Cost Explorer , Industries , Retail | Permalink | Comments |  Share Like global landfills, the fashion industry waste problem is growing by the second . Retailers are struggling to address an enormous (and pressing) concern: what happens to their products after point-of-sale and what are the environmental implications? In the United States, companies spend an estimated $50 billion on product returns. These returned goods are responsible for massive landfill waste and 27 million tons of carbon dioxide emissions annually. This is part of what’s called a linear economy, where we take materials from the Earth, make products from them and eventually throw them away as waste. For example, research shows that clothes in the US “are only worn for around a quarter of the global average and some garments are only worn between seven and ten times.” After little wear, “these huge volumes of clothes are landfilled or incinerated each year.” This wastes not just the materials, but also the energy, water, nutrients, land, and other resources used to produce the textiles and garments. On the flip side of this is what’s called a circular economy. According to the Ellen MacArthur Foundation , the circular economy is based on three principles driven by design: eliminate waste and pollution, circulate products and materials (at their highest value), and regenerate nature. Some examples of the circular economy in retail include resale, repairing, reusing, remanufacturing, recycling, rental, subscription, and more. With growing support of this model and the concept of resale, more retailers are discovering the benefits of sustainably driven design and production. Whether retailers are driven by customer demands, reputation risk, or they’re just trying to get ahead of looming regulation, resale is a positive path forward for retailers to achieve their sustainability goals. Some added benefits of resale include: acquiring new, eco-conscious customers or consumers that can access a brand at a discounted rate, controlling the resale experience for their brand and driving additional sales. If resale is so great for businesses, why isn’t every retailer embracing it? Unfortunately, building an in-house resale channel from scratch is complicated and expensive. Not all companies have the resources for complex initiatives like reverse logistics, authentication, and data collection, preventing them from making resale implementation a reality. Fortunately for retailers, this is where thredUP comes in. Reimagining resale thredUP is one of the largest online resale platforms that is transforming resale by making it easy to buy and sell secondhand clothing. Since its inception in 2009, thredUP has leveraged technology and data to build a thriving marketplace that connects buyers and sellers of gently used apparel, shoes, and accessories. Now, thredUP is taking things a step further, offering Resale-as-a-Service (RaaS) for some of the world’s leading brands and retailers that want to provide their customers with a sustainable, eco-friendly, and cost-effective way to shop. According to The Recommerce 100, a comprehensive review of branded resale programs, there are 139 brands with resale shops, a 3.4x growth from 2021 to 2022, with 260,000 total resale shop listings. 
If all 260,000 resale shop listings in The Recommerce 100 sold, it would be the equivalent of 29,000 trees planted, 400 homes powered annually, and $11.4 million estimated total revenue. Brands’ adoption of Resale showing 3.4x YTD growth between 2021 to 2022 In its 2023 Resale Report , thredUP reported that 86 percent of retail execs say their customers are already participating in resale. With 58 percent of retail executives saying offering resale is becoming table stakes for retailers, it’s safe to say resale is grabbing the attention of higher-ups in the retail industry. That number is only set to increase. In the U.S., the secondhand market is expected to nearly double by 2027, to $70 billion, while the global secondhand market is predicted to grow to $350 billion by 2027. Built for brands, powered by AWS Powering its RaaS offering, Amazon Web Services (AWS) is helping thredUP revolutionize the resale business model for brands. Let’s look at the key features and benefits of thredUP’s RaaS offering and how AWS is helping brands deliver a seamless resale experience to thredUP’s customers. From its start as a secondhand marketplace in 2009, thredUP selected AWS as its cloud provider due to scalability, cost-efficiency, security, reliability, and access to modern advanced technologies. AWS services like Amazon Elastic Compute Cloud (Amazon EC2) , Amazon Relational Database Service (Amazon RDS) , and Amazon Simple Storage Service (Amazon S3) form the foundation of thredUP.com’s infrastructure. Inventory Management thredUP’s RaaS uses Amazon SageMaker to manage and optimize inventory mix, ensuring that brands have the right products at the right time. thredUP has collected secondhand apparel sales data across 55,000 brands for longer than a decade. thredUP unlocks the power of that data to the benefit of resale buyers and sellers by making better decisions on pricing, inventory mix, and merchandising. Nine years ago, a thredUP engineer was able to programmatically provide probability that a given item would sell in the next 30 days using AWS Artificial Intelligence and AWS Machine Learning (AI/ML) services. thredUP was able to implement this model in a month without the need for data scientists or ML engineers. Pricing Optimization Using machine learning algorithms to automatically price products based on market demand, thredUP’s RaaS enables brands to maximize their profits while offering competitive prices to customers. thredUP handles millions of used products and reprices hundreds of thousands of items daily. On any given day, these new product arrivals are added, and millions of emails and push notifications are sent, all using Amazon Managed Streaming for Apache Kafka (Amazon MSK). With this much activity on different platforms and RaaS resale sites, thredUP greatly relies on Amazon MSK to help things run smoothly. Repricing in event driven architecture, Amazon MSK is also foundational to cross-list secondhand products on multiple resale websites and reprice as many as 100,000 items in one hour. Analytics and Insights thredUP’s RaaS employs Amazon QuickSight to supply brands with near real-time analytics and insights into their resale performance, enabling them to make data-driven decisions and optimize their operations. Amazon QuickSight dashboards provide usage-based pricing and gives thredUP the ability to provision access to brands programmatically and embed the dashboards and reports into web applications. 
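As a rough sketch of what programmatic dashboard embedding can look like with the QuickSight API, the following Python (Boto3) snippet generates a short-lived embed URL for a registered user; the account ID, user ARN, and dashboard ID shown here are placeholder assumptions, not thredUP's actual values.

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

# Generate a short-lived embed URL for a registered QuickSight user (IDs are hypothetical)
response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/brand-analyst",
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "resale-performance-dashboard-id"}
    },
)

# The returned URL can then be placed in an iframe within the brand-facing web application
embed_url = response["EmbedUrl"]
print(embed_url)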
Security thredUP’s RaaS clients require a high level of security and data protection from thredUP, and AWS is able to deliver on this with a wide range of robust security features, such as firewalls, encryption, and identity and access management. AWS has certifications with various industry standards, such as HIPAA, PCI DSS, and SOC 2, which helps thredUP provide brands with confidence that their RaaS services meet the necessary security requirements and are independently audited and certified by recognized industry organizations. Having a prominent level of compliance certification speeds up the sale process and vendor onboarding process significantly. Scale thredUP can scale its infrastructure and resources up or down based on demand using AWS Auto Scaling . Just like with typical ecommerce, sales are critical for resale. Sales generate revenue, attract and retain customers, build a strong brand, gain market share, and enable growth. Cost Efficiency thredUP is able to optimize costs with flexible usage-based pricing models for the resources they need, only when they need them. AWS Cost Explorer helps ensure efficiency for thredUP and the brands they work with. As a specific example, thredUP recently migrated from a self-managed Kubernetes cluster to Amazon Elastic Kubernetes Service (Amazon EKS) / Amazon Elastic Container Registry (Amazon ECR) because custom configuration became too complex to maintain internally and caused unplanned downtimes during upgrades. After the migration, thredUP was able to keep the infrastructure team small, supporting 80+ Kubernetes deployments and 20+ tools. The time spent on patching decreased by 80 percent, downtime related to unsuccessful patching was eliminated, security posture by outsourcing security hardening improved, and CIS Kubernetes Benchmarking was enabled. thredUP also enjoyed instance cost reduction of around 20 percent by switching to Graviton instances. While consumers do care about the planet, most can’t seem to shake the habit of wanting more clothes more frequently thanks to a history of fast fashion. thredUP believes secondhand is a way for consumers to satisfy constant newness while being mindful of their environmental impact. In fact, in thredUP’s 2023 Resale Report , 64 percent of Gen Z and Millennials say they look for an item secondhand before purchasing it new. By leveraging the power of AWS, thredUP is helping brands tap into the fast-growing resale market and provide their customers with a sustainable, affordable, and convenient shopping experience. With thredUP’s RaaS, brands can easily integrate resale into their existing business models, reduce their environmental impact, and drive customer loyalty and engagement. As the demand for sustainable and ethical fashion continues to grow, thredUP’s RaaS is poised to become a game-changer for the retail industry. Interested in how AWS tools and technologies can help revolutionize your business? Learn more about AWS for retail or contact an AWS Representative. 
Further Reading ● How immersive commerce can drive your sustainability goals while making your merch look fabulous ● Reduce food waste to improve sustainability and financial results in retail with Amazon Forecast ● AWS customers create sustainable solutions to impact climate change ● Green Is the New Black: How the Apparel Industry Is Embracing Circularity TAGS: ESG , sustainability Madeline Steiner Madeline Steiner leads Amazon Web Services’ Retail & CPG worldwide strategy and thought leadership for ESG (Environmental, Social, and Governance) Solutions. In partnership with the AWS Retail and CPG leadership teams, Madeline works to shape and deliver go-to-market strategies and innovative partner solutions for consumer enterprises looking for guidance on how to integrate environmental and social initiatives into their business operations. Madeline has 8+ years of experience in retail and retail technology, including 5 years of merchandising and fashion product development roles at Gap, Inc., and 3 years in customer success at Trendalytics, a consumer intelligence platform for data-driven product decisions.
How BrainPad fosters internal knowledge sharing with Amazon Kendra _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog How BrainPad fosters internal knowledge sharing with Amazon Kendra by Dr. Naoki Okada | on 13 JUN 2023 | in Amazon Kendra , Artificial Intelligence , AWS Lambda , Customer Solutions | Permalink | Comments |  Share This is a guest post by Dr. Naoki Okada, Lead Data Scientist at BrainPad Inc. Founded in 2004, BrainPad Inc. is a pioneering partner in the field of data utilization, helping companies create business and improve their management through the use of data. To date, BrainPad has helped more than 1,300 companies, primarily industry leaders. BrainPad has the advantage of providing a one-stop service from formulating a data utilization strategy to proof of concept and implementation. BrainPad’s unique style is to work together with clients to solve problems on the ground, such as data that isn’t being collected due to a siloed organizational structure or data that exists but isn’t organized. This post discusses how to structure internal knowledge sharing using Amazon Kendra and AWS Lambda and how Amazon Kendra solves the obstacles around knowledge sharing many companies face. We summarize BrainPad’s efforts in four key areas: What are the knowledge sharing problems that many companies face? Why did we choose Amazon Kendra? How did we implement the knowledge sharing system? Even if a tool is useful, it is meaningless if it is not used. How did we overcome the barrier to adoption? Knowledge sharing problems that many companies face Many companies achieve their results by dividing their work into different areas. Each of these activities generates new ideas every day. This knowledge is accumulated on an individual basis. If this knowledge can be shared among people and organizations, synergies in related work can be created, and the efficiency and quality of work will increase dramatically. This is the power of knowledge sharing. However, there are many common barriers to knowledge sharing: Few people are proactively involved, and the process can’t be sustained for long due to busy schedules. Knowledge is scattered across multiple media, such as internal wikis and PDFs, making it difficult to find the information you need. No one enters knowledge into the knowledge consolidation system. The system will not be widely used because of its poor searchability. Our company faced a similar situation. The fundamental problem with knowledge sharing is that although most employees have a strong need to obtain knowledge, they have little motivation to share their own knowledge at a cost. Changing employee behavior for the sole purpose of knowledge sharing is not easy. In addition, each employee or department has its own preferred method of accumulating knowledge, and trying to force unification won’t lead to motivation or performance in knowledge sharing. This is a headache for management, who wants to consolidate knowledge, while those in the field want to have knowledge in a decentralized way. At our company, Amazon Kendra is the cloud service that has solved these problems. Why we chose Amazon Kendra Amazon Kendra is a cloud service that allows us to search for internal information from a common interface. In other words, it is a search engine that specializes in internal information. In this section, we discuss the three key reasons why we chose Amazon Kendra. Easy aggregation of knowledge As mentioned in the previous section, knowledge, even when it exists, tends to be scattered across multiple media. 
In our case, it was scattered across our internal wiki and various document files. Amazon Kendra provides powerful connectors for this situation. We can easily import documents from a variety of media, including groupware, wikis, Microsoft PowerPoint files, PDFs, and more, without any hassle. This means that employees don’t have to change the way they store knowledge in order to share it. Although knowledge aggregation can be achieved temporarily, it’s very costly to maintain. The ability to automate this was a very desirable factor for us. Great searchability There are a lot of groupware and wikis out there that excel at information input. However, they often have weaknesses in information output (searchability). This is especially true for Japanese search. For example, in English, word-level matching provides a reasonable level of searchability. In Japanese, however, word extraction is more difficult, and there are cases where matching is done by separating words by an appropriate number of characters. If a search for “Tokyo-to (東京都)” is separated by two characters, “Tokyo (東京)” and “Kyoto (京都),” it will be difficult to find the knowledge you are looking for. Amazon Kendra offers great searchability through machine learning . In addition to traditional keyword searches such as “technology trends,” natural language searches such as “I want information on new technology initiatives” can greatly enhance the user experience. The ability to search appropriately for collected information is the second reason we chose Amazon Kendra. Low cost of ownership IT tools that specialize in knowledge aggregation and retrieval are called enterprise search systems. One problem with implementing these systems is the cost. For an organization with several hundred employees, operating costs can exceed 10 million yen per year. This is not a cheap way to start a knowledge sharing initiative. Amazon Kendra is offered at a much lower cost than most enterprise search systems. As mentioned earlier, knowledge sharing initiatives are not easy to implement. We wanted to start small, and Amazon Kendra’s low cost of ownership was a key factor in our decision. In addition, Amazon Kendra’s ease of implementation and flexibility are also great advantages for us. The next section summarizes an example of our implementation. How we implemented the knowledge sharing system Implementation is not an exaggerated development process; it can be done without code by following the Amazon Kendra processing flow. Here are five key points in the implementation process: Data source (accumulating knowledge) – Each department and employee of our company frequently held internal study sessions, and through these activities, knowledge was accumulated in multiple media, such as wikis and various types of storage. At that time, it was easy to review the information from the study sessions later. However, in order to extract knowledge about a specific area or technology, it was necessary to review each medium in detail, which was not very convenient. Connectors (aggregating knowledge) – With the connector functionality in Amazon Kendra, we were able to link knowledge scattered throughout the company into Amazon Kendra and achieve cross-sectional searchability. In addition, the connector is loaded through a restricted account, allowing for a security-conscious implementation. 
Search engine (finding information) – Because Amazon Kendra has a search page for usability testing , we were able to quickly test the usability of the search engine immediately after loading documents to see what kind of knowledge could be found. This was very helpful in solidifying the image of the launch. Search UI (search page for users) – Amazon Kendra has a feature called Experience Builder that exposes the search screen to users. This feature can be implemented with no code, which was very helpful in getting feedback during the test deployment. In addition to Experience Builder, Amazon Kendra also supports Python and React.js API implementations, so we can eventually provide customized search pages to our employees to improve their experience. Analytics (monitoring usage trends) – An enterprise search system is only valuable if a lot of people are using it. Amazon Kendra has the ability to monitor how many searches are being performed and for what terms. We use this feature to track usage trends. We also have some Q&A related to our implementation: What were some of the challenges in gathering internal knowledge? We had to start by collecting the knowledge that each department and employee had, but not necessarily in a place that could be directly connected to Amazon Kendra. How did we benefit from Amazon Kendra? We had tried to share knowledge many times in the past, but had often failed. The reasons were information aggregation, searchability, operational costs, and implementation costs. Amazon Kendra has features that solve these problems, and we successfully launched it within about 3 months of conception. Now we can use Amazon Kendra to find solutions to tasks that previously required the knowledge of individuals or departments as the collective knowledge of the entire organization. How did you evaluate the searchability of the system, and what did you do to improve it? First, we had many employees interact with the system and get feedback. One problem that arose at the beginning of the implementation was that there was a scattering of information that had little value as knowledge. This was because some of the data sources contained information from internal blog posts, for example. We are continually working to improve the user experience by selecting the right data sources. As mentioned earlier, by using Amazon Kendra, we were able to overcome many implementation hurdles at minimal cost. However, the biggest challenge with this type of tool is the adoption barrier that comes after implementation. The next section provides an example of how we overcame this hurdle. How we overcame the barrier to adoption Have you ever seen a tool that you spent a lot of effort, time, and money implementing become obsolete without widespread use? No matter how good the functionality is at solving problems, it will not be effective if people are not using it. One of the initiatives we took with the launch of Amazon Kendra was to provide a chatbot. In other words, when you ask a question in a chat tool, you get a response with the appropriate knowledge. Because all of our telecommuting employees use a chat tool on a daily basis, using chatbots is much more compatible than having them open a new search screen in their browsers. To implement this chatbot, we use Lambda, a service that allows us to run serverless, event-driven programs. Specifically, the following workflow is implemented: A user posts a question to the chatbot with a mention. The chatbot issues an event to Lambda. 
A Lambda function detects the event and searches Amazon Kendra for the question. The Lambda function posts the search results to the chat tool. The user views the search results. This process takes only a few seconds and provides a high-quality user experience for knowledge discovery. The majority of employees were exposed to the knowledge sharing mechanism through the chatbot, and there is no doubt that the chatbot contributed to the diffusion of the mechanism. And because there are some areas that can’t be covered by the chatbot alone, we have also asked them to use the customized search screen in conjunction with the chatbot to provide an even better user experience. Conclusion In this post, we presented a case study of Amazon Kendra for knowledge sharing and an example of a chatbot implementation using Lambda to propagate the mechanism. We look forward to seeing Amazon Kendra take another leap forward as large-scale language models continue to evolve. If you are interested in trying out Amazon Kendra, check out Enhancing enterprise search with Amazon Kendra. BrainPad can also help you with internal knowledge sharing and document exploitation using generative AI. Please contact us for more information. About the Author Dr. Naoki Okada is a Lead Data Scientist at BrainPad Inc. With his cross-functional experience in business, analytics, and engineering, he supports a wide range of clients from building up DX organizations to leveraging data in unexplored areas.
How Earth.com and Provectus implemented their MLOps Infrastructure with Amazon SageMaker _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog How Earth.com and Provectus implemented their MLOps Infrastructure with Amazon SageMaker by Marat Adayev , Dmitrii Evstiukhin , and James Burdon | on 27 JUN 2023 | in Advanced (300) , Amazon SageMaker , Customer Solutions | Permalink | Comments |  Share This blog post is co-written with Marat Adayev and Dmitrii Evstiukhin from Provectus. When machine learning (ML) models are deployed into production and employed to drive business decisions, the challenge often lies in the operation and management of multiple models. Machine Learning Operations (MLOps) provides the technical solution to this issue, assisting organizations in managing, monitoring, deploying, and governing their models on a centralized platform. At-scale, real-time image recognition is a complex technical problem that also requires the implementation of MLOps. By enabling effective management of the ML lifecycle, MLOps can help account for various alterations in data, models, and concepts that the development of real-time image recognition applications is associated with. One such application is EarthSnap , an AI-powered image recognition application that enables users to identify all types of plants and animals, using the camera on their smartphone. EarthSnap was developed by Earth.com , a leading online platform for enthusiasts who are passionate about the environment, nature, and science. Earth.com’s leadership team recognized the vast potential of EarthSnap and set out to create an application that utilizes the latest deep learning (DL) architectures for computer vision (CV). However, they faced challenges in managing and scaling their ML system, which consisted of various siloed ML and infrastructure components that had to be maintained manually. They needed a cloud platform and a strategic partner with proven expertise in delivering production-ready AI/ML solutions, to quickly bring EarthSnap to the market. That is where Provectus , an AWS Premier Consulting Partner with competencies in Machine Learning, Data & Analytics, and DevOps, stepped in. This post explains how Provectus and Earth.com were able to enhance the AI-powered image recognition capabilities of EarthSnap, reduce engineering heavy lifting, and minimize administrative costs by implementing end-to-end ML pipelines, delivered as part of a managed MLOps platform and managed AI services. Challenges faced in the initial approach The executive team at Earth.com was eager to accelerate the launch of EarthSnap. They swiftly began to work on AI/ML capabilities by building image recognition models using Amazon SageMaker. The following diagram shows the initial image recognition ML workflow, run manually and sequentially. The models developed by Earth.com lived across various notebooks. They required the manual sequential execution run of a series of complex notebooks to process the data and retrain the model. Endpoints had to be deployed manually as well. Earth.com didn’t have an in-house ML engineering team, which made it hard to add new datasets featuring new species, release and improve new models, and scale their disjointed ML system. The ML components for data ingestion, preprocessing, and model training were available as disjointed Python scripts and notebooks, which required a lot of manual heavy lifting on the part of engineers. The initial solution also required the support of a technical third party, to release new models swiftly and efficiently. 
First iteration of the solution Provectus served as a valuable collaborator for Earth.com, playing a crucial role in augmenting the AI-driven image recognition features of EarthSnap. The application’s workflows were automated by implementing end-to-end ML pipelines, which were delivered as part of Provectus’s managed MLOps platform and supported through managed AI services. A series of project discovery sessions were initiated by Provectus to examine EarthSnap’s existing codebase and inventory the notebook scripts, with the goal of reproducing the existing model results. After the model results had been restored, the scattered components of the ML workflow were merged into an automated ML pipeline using Amazon SageMaker Pipelines, a purpose-built CI/CD service for ML. The final pipeline includes the following components: Data QA & versioning – This step, run as a SageMaker Processing job, ingests the source data from Amazon Simple Storage Service (Amazon S3) and prepares the metadata for the next step, containing only valid images (URI and label) that are filtered according to internal rules. It also persists a manifest file to Amazon S3, including all necessary information to recreate that dataset version. Data preprocessing – This includes multiple steps wrapped as SageMaker Processing jobs and run sequentially. The steps preprocess the images, convert them to RecordIO format, split the images into datasets (full, train, test, and validation), and prepare the images to be consumed by SageMaker training jobs. Hyperparameter tuning – A SageMaker hyperparameter tuning job takes as input a subset of the training and validation set and runs a series of small training jobs under the hood to determine the best parameters for the full training job. Full training – A SageMaker training job step launches the training job on the entire dataset, given the best parameters from the hyperparameter tuning step. Model evaluation – A SageMaker Processing job step is run after the final model has been trained. This step produces an expanded report containing the model’s metrics. Model creation – The SageMaker ModelCreate step wraps the model into the SageMaker model package and pushes it to the SageMaker model registry. All steps are run in an automated manner after the pipeline has been run. The pipeline can be run via any of the following methods: Automatically using AWS CodeBuild, after the new changes are pushed to a primary branch and a new version of the pipeline is upserted (CI) Automatically using Amazon API Gateway, which can be triggered with a certain API call Manually in Amazon SageMaker Studio After the pipeline run (launched using one of the preceding methods), a trained model is produced that is ready to be deployed as a SageMaker endpoint. This means that the model must first be approved by the PM or engineer in the model registry, then the model is automatically rolled out to the stage environment using Amazon EventBridge and tested internally. After the model is confirmed to be working as expected, it’s deployed to the production environment (CD).
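For readers unfamiliar with SageMaker Pipelines, the following is a heavily simplified Python sketch of how a processing step, a training step, and a model-registration step can be wired together with the SageMaker Python SDK. The image URIs, S3 paths, role, and model package group name are placeholders; EarthSnap's actual pipeline contains more steps and configuration than shown here.

from sagemaker.processing import ScriptProcessor, ProcessingOutput
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import ProcessingStep, TrainingStep
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.pipeline import Pipeline

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

# Preprocessing step: prepares the image dataset (script and image URI are placeholders)
processor = ScriptProcessor(
    image_uri="<preprocessing-image-uri>",
    command=["python3"],
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
)
preprocess_step = ProcessingStep(
    name="PreprocessImages",
    processor=processor,
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",
)

# Training step: trains on the preprocessed data produced by the previous step
estimator = Estimator(
    image_uri="<training-image-uri>",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    output_path="s3://<bucket>/models",
)
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={
        "train": TrainingInput(
            s3_data=preprocess_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri
        )
    },
)

# Registration step: pushes the trained model to the model registry for manual approval
register_step = RegisterModel(
    name="RegisterModel",
    estimator=estimator,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["application/x-image"],
    response_types=["application/json"],
    inference_instances=["ml.g4dn.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name="earthsnap-models",  # placeholder group name
    approval_status="PendingManualApproval",
)

pipeline = Pipeline(
    name="earthsnap-image-recognition-pipeline",
    steps=[preprocess_step, train_step, register_step],
)
pipeline.upsert(role_arn=role)
pipeline.start()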
The Provectus solution for EarthSnap can be summarized in the following steps: Start with fully automated, end-to-end ML pipelines to make it easier for Earth.com to release new models Build on top of the pipelines to deliver a robust ML infrastructure for the MLOps platform, featuring all components for streamlining AI/ML Support the solution by providing managed AI services (including ML infrastructure provisioning, maintenance, and cost monitoring and optimization) Bring EarthSnap to its desired state (mobile application and backend) through a series of engagements, including AI/ML work, data and database operations, and DevOps After the foundational infrastructure and processes were established, the model was trained and retrained on a larger dataset. At this point, however, the team encountered an additional issue when attempting to expand the model with even larger datasets. We needed to find a way to restructure the solution architecture, making it more sophisticated and capable of scaling effectively. The following diagram shows the EarthSnap AI/ML architecture. The AI/ML architecture for EarthSnap is designed around a series of AWS services: Sagemaker Pipeline runs using one of the methods mentioned above (CodeBuild, API, manual) that trains the model and produces artifacts and metrics. As a result, the new version of the model is pushed to the Sagemaker Model registry Then the model is reviewed by an internal team (PM/engineer) in model registry and approved/rejected based on metrics provided Once the model is approved, the model version is automatically deployed to the stage environment using the Amazon EventBridge that tracks the model status change The model is deployed to the production environment if the model passes all tests in the stage environment Final solution To accommodate all necessary sets of labels, the solution for EarthSnap’s model required substantial modifications, because incorporating all species within a single model proved to be both costly and inefficient. The plant category was selected first for implementation. A thorough examination of plant data was conducted, to organize it into subsets based on shared internal characteristics. The solution for the plant model was redesigned by implementing a multi-model parent/child architecture. This was achieved by training child models on grouped subsets of plant data and training the parent model on a set of data samples from each subcategory. The Child models were employed for accurate classification within the internally grouped species, while the parent model was utilized to categorize input plant images into subgroups. This design necessitated distinct training processes for each model, leading to the creation of separate ML pipelines. With this new design, along with the previously established ML/MLOps foundation, the EarthSnap application was able to encompass all essential plant species, resulting in improved efficiency concerning model maintenance and retraining. The following diagram illustrates the logical scheme of parent/child model relations. Upon completing the redesign, the ultimate challenge was to guarantee that the AI solution powering EarthSnap could manage the substantial load generated by a broad user base. Fortunately, the managed AI onboarding process encompasses all essential automation, monitoring, and procedures for transitioning the solution into a production-ready state, eliminating the need for any further capital investment. 
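The parent/child inference flow is described only at a high level, so the following is a small, hypothetical sketch of what two-stage routing could look like from the application side. The endpoint names, the subgroup-to-child naming scheme, and the response format are assumptions for illustration, not EarthSnap's actual implementation.

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def classify_plant(image_bytes: bytes):
    # Stage 1: the parent model assigns the image to a plant subgroup
    parent_resp = runtime.invoke_endpoint(
        EndpointName="earthsnap-plant-parent",      # hypothetical endpoint name
        ContentType="application/x-image",
        Body=image_bytes,
    )
    subgroup_probs = json.loads(parent_resp["Body"].read())
    subgroup = max(range(len(subgroup_probs)), key=lambda i: subgroup_probs[i])

    # Stage 2: the child model trained on that subgroup predicts the species
    child_resp = runtime.invoke_endpoint(
        EndpointName=f"earthsnap-plant-child-{subgroup}",  # hypothetical naming scheme
        ContentType="application/x-image",
        Body=image_bytes,
    )
    species_probs = json.loads(child_resp["Body"].read())
    return subgroup, species_probs

Because each child model only has to separate species within one subgroup, the child models stay small and can be retrained independently, which is what makes the separate per-model pipelines described above practical.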
Results Despite the pressing requirement to develop and implement the AI-driven image recognition features of EarthSnap within a few months, Provectus managed to meet all project requirements within the designated time frame. In just 3 months, Provectus modernized and productionized the ML solution for EarthSnap, ensuring that their mobile application was ready for public release. The modernized infrastructure for ML and MLOps allowed Earth.com to reduce engineering heavy lifting and minimize the administrative costs associated with maintenance and support of EarthSnap. By streamlining processes and implementing best practices for CI/CD and DevOps, Provectus ensured that EarthSnap could achieve better performance while improving its adaptability, resilience, and security. With a focus on innovation and efficiency, we enabled EarthSnap to function flawlessly, while providing a seamless and user-friendly experience for all users. As part of its managed AI services, Provectus was able to reduce the infrastructure management overhead, establish well-defined SLAs and processes, ensure 24/7 coverage and support, and increase overall infrastructure stability, including production workloads and critical releases. We initiated a series of enhancements to deliver managed MLOps platform and augment ML engineering. Specifically, it now takes Earth.com minutes, instead of several days, to release new ML models for their AI-powered image recognition application. With assistance from Provectus, Earth.com was able to release its EarthSnap application at the Apple Store and Playstore ahead of schedule. The early release signified the importance of Provectus’ comprehensive work for the client. ”I’m incredibly excited to work with Provectus. Words can’t describe how great I feel about handing over control of the technical side of business to Provectus. It is a huge relief knowing that I don’t have to worry about anything other than developing the business side.” – Eric Ralls, Founder and CEO of EarthSnap. The next steps of our cooperation will include: adding advanced monitoring components to pipelines, enhancing model retraining, and introducing a human-in-the-loop step. Conclusion The Provectus team hopes that Earth.com will continue to modernize EarthSnap with us. We look forward to powering the company’s future expansion, further popularizing natural phenomena, and doing our part to protect our planet. To learn more about the Provectus ML infrastructure and MLOps, visit Machine Learning Infrastructure and watch the webinar for more practical advice. You can also learn more about Provectus managed AI services at the Managed AI Services. If you’re interested in building a robust infrastructure for ML and MLOps in your organization, apply for the ML Acceleration Program to get started. Provectus helps companies in healthcare and life sciences, retail and CPG, media and entertainment, and manufacturing, achieve their objectives through AI. Provectus is an AWS Machine Learning Competency Partner and AI-first transformation consultancy and solutions provider helping design, architect, migrate, or build cloud-native applications on AWS. 
Contact Provectus | Partner Overview About the Authors Marat Adayev  is an ML Solutions Architect at Provectus Dmitrii Evstiukhin  is the Director of Managed Services at Provectus James Burdon  is a Senior Startups Solutions Architect at AWS Comments View Comments Resources Getting Started What's New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow  Twitter  Facebook  LinkedIn  Twitch  Email Updates
Contact Provectus | Partner Overview
About the Authors
Marat Adayev is an ML Solutions Architect at Provectus. Dmitrii Evstiukhin is the Director of Managed Services at Provectus. James Burdon is a Senior Startups Solutions Architect at AWS.
How Forethought saves over 66 in costs for generative AI models using Amazon SageMaker _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog How Forethought saves over 66% in costs for generative AI models using Amazon SageMaker by Jad Chamoun , Salina Wu , Dhawalkumar Patel , James Park , and Sunil Padmanabhan | on 13 JUN 2023 | in Amazon SageMaker , Artificial Intelligence , Customer Solutions , Generative AI | Permalink | Comments |  Share This post is co-written with Jad Chamoun, Director of Engineering at Forethought Technologies, Inc. and Salina Wu, Senior ML Engineer at Forethought Technologies, Inc. Forethought  is a leading generative AI suite for customer service. At the core of its suite is the innovative SupportGPT™ technology which uses machine learning to transform the customer support lifecycle—increasing deflection, improving CSAT, and boosting agent productivity. SupportGPT™ leverages state-of-the-art Information Retrieval (IR) systems and large language models (LLMs) to power over 30 million customer interactions annually. SupportGPT’s primary use case is enhancing the quality and efficiency of customer support interactions and operations. By using state-of-the-art IR systems powered by embeddings and ranking models, SupportGPT can quickly search for relevant information, delivering accurate and concise answers to customer queries. Forethought uses per-customer fine-tuned models to detect customer intents in order to solve customer interactions. The integration of large language models helps humanize the interaction with automated agents, creating a more engaging and satisfying support experience. SupportGPT also assists customer support agents by offering autocomplete suggestions and crafting appropriate responses to customer tickets that align with the company’s based on previous replies. By using advanced language models, agents can address customers’ concerns faster and more accurately, resulting in higher customer satisfaction. Additionally, SupportGPT’s architecture enables detecting gaps in support knowledge bases, which helps agents provide more accurate information to customers. Once these gaps are identified, SupportGPT can automatically generate articles and other content to fill these knowledge voids, ensuring the support knowledge base remains customer-centric and up to date. In this post, we share how Forethought uses Amazon SageMaker multi-model endpoints in generative AI use cases to save over 66% in cost. Infrastructure challenges To help bring these capabilities to market, Forethought efficiently scales its ML workloads and provides hyper-personalized solutions tailored to each customer’s specific use case. This hyper-personalization is achieved through fine-tuning embedding models and classifiers on customer data, ensuring accurate information retrieval results and domain knowledge that caters to each client’s unique needs. The customized autocomplete models are also fine-tuned on customer data to further enhance the accuracy and relevance of the responses generated. One of the significant challenges in AI processing is the efficient utilization of hardware resources such as GPUs. To tackle this challenge, Forethought uses SageMaker multi-model endpoints (MMEs) to run multiple AI models on a single inference endpoint and scale. Because the hyper-personalization of models requires unique models to be trained and deployed, the number of models scales linearly with the number of clients, which can become costly. To achieve the right balance of performance for real-time inference and cost, Forethought chose to use SageMaker MMEs, which support GPU acceleration. 
SageMaker MMEs enable Forethought to deliver high-performance, scalable, and cost-effective solutions with subsecond latency, addressing multiple customer support scenarios at scale. SageMaker and Forethought SageMaker is a fully managed service that provides developers and data scientists the ability to build, train, and deploy ML models quickly. SageMaker MMEs provide a scalable and cost-effective solution for deploying a large number of models for real-time inference. MMEs use a shared serving container and a fleet of resources that can use accelerated instances such as GPUs to host all of your models. This reduces hosting costs by maximizing endpoint utilization compared to using single-model endpoints. It also reduces deployment overhead because SageMaker manages loading and unloading models in memory and scaling them based on the endpoint’s traffic patterns. In addition, all SageMaker real-time endpoints benefit from built-in capabilities to manage and monitor models, such as including shadow variants , auto scaling , and native integration with Amazon CloudWatch (for more information, refer to CloudWatch Metrics for Multi-Model Endpoint Deployments ). As Forethought grew to host hundreds of models that also required GPU resources, we saw an opportunity to create a more cost-effective, reliable, and manageable architecture through SageMaker MMEs. Prior to migrating to SageMaker MMEs, our models were deployed on Kubernetes on Amazon Elastic Kubernetes Service (Amazon EKS). Although Amazon EKS provided management capabilities, it was immediately apparent that we were managing infrastructure that wasn’t specifically tailored for inference. Forethought had to manage model inference on Amazon EKS ourselves, which was a burden on engineering efficiency. For example, in order to share expensive GPU resources between multiple models, we were responsible for allocating rigid memory fractions to models that were specified during deployment. We wanted to address the following key problems with our existing infrastructure: High cost – To ensure that each model had enough resources, we would be very conservative in how many models to fit per instance. This resulted in much higher costs for model hosting than necessary. Low reliability – Despite being conservative in our memory allocation, not all models have the same requirements, and occasionally some models would throw out of memory (OOM) errors. Inefficient management – We had to manage different deployment manifests for each type of model (such as classifiers, embeddings, and autocomplete), which was time-consuming and error-prone. We also had to maintain the logic to determine the memory allocation for different model types. Ultimately, we needed an inference platform to take on the heavy lifting of managing our models at runtime to improve the cost, reliability, and the management of serving our models. SageMaker MMEs allowed us to address these needs. Through its smart and dynamic model loading and unloading, and its scaling capabilities, SageMaker MMEs provided a significantly less expensive and more reliable solution for hosting our models. We are now able to fit many more models per instance and don’t have to worry about OOM errors because SageMaker MMEs handle loading and unloading models dynamically. In addition, deployments are now as simple as calling Boto3 SageMaker APIs and attaching the proper auto scaling policies. The following diagram illustrates our legacy architecture. 
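As noted above, deployments come down to calling the Boto3 SageMaker APIs. For reference, the following is a minimal sketch of creating a GPU-backed multi-model endpoint behind the Triton serving container; the container image URI, S3 prefix, role ARN, and names are placeholders rather than Forethought's actual values.

import boto3

sm = boto3.client("sagemaker")

# All model artifacts served by the MME live under one S3 prefix;
# SageMaker loads and unloads them on demand.
model_data_prefix = "s3://example-bucket/triton-models/"  # placeholder, must end with "/"

sm.create_model(
    ModelName="embeddings-mme",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-tritonserver:23.02-py3",  # placeholder Triton image
        "Mode": "MultiModel",
        "ModelDataUrl": model_data_prefix,
    },
)

sm.create_endpoint_config(
    EndpointConfigName="embeddings-mme-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "embeddings-mme",
        "InstanceType": "ml.g4dn.xlarge",   # GPU-accelerated instance shared by all models
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(EndpointName="embeddings-mme", EndpointConfigName="embeddings-mme-config")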
To begin our migration to SageMaker MMEs, we identified the best use cases for MMEs and which of our models would benefit the most from this change. MMEs are best used for the following: Models that are expected to have low latency but can withstand a cold start time (when it’s first loaded in) Models that are called often and consistently Models that need partial GPU resources Models that share common requirements and inference logic We identified our embeddings models and autocomplete language models as the best candidates for our migration. To organize these models under MMEs, we would create one MME per model type, or task, one for our embeddings models, and another for autocomplete language models. We already had an API layer on top of our models for model management and inference. Our task at hand was to rework how this API was deploying and handling inference on models under the hood with SageMaker, with minimal changes to how clients and product teams interacted with the API. We also needed to package our models and custom inference logic to be compatible with NVIDIA Triton Inference Server using SageMaker MMEs. The following diagram illustrates our new architecture. Custom inference logic Before migrating to SageMaker, Forethought’s custom inference code (preprocessing and postprocessing) ran in the API layer when a model was invoked. The objective was to transfer this functionality to the model itself to clarify the separation of responsibilities, modularize and simplify their code, and reduce the load on the API. Embeddings Forethought’s embedding models consist of two PyTorch model artifacts, and the inference request determines which model to call. Each model requires preprocessed text as input. The main challenges were integrating a preprocessing step and accommodating two model artifacts per model definition. To address the need for multiple steps in the inference logic, Forethought developed a Triton ensemble model with two steps: a Python backend preprocessing process and a PyTorch backend model call. Ensemble models allow for defining and ordering steps in the inference logic, with each step represented by a Triton model of any backend type. To ensure compatibility with the Triton PyTorch backend, the existing model artifacts were converted to TorchScript format. Separate Triton models were created for each model definition, and Forethought’s API layer was responsible for determining the appropriate TargetModel to invoke based on the incoming request. Autocomplete The autocomplete models (sequence to sequence) presented a distinct set of requirements. Specifically, we needed to enable the capability to loop through multiple model calls and cache substantial inputs for each call, all while maintaining low latency. Additionally, these models necessitated both preprocessing and postprocessing steps. To address these requirements and achieve the desired flexibility, Forethought developed autocomplete MME models utilizing the Triton Python backend, which offers the advantage of writing the model as Python code. Benchmarking After the Triton model shapes were determined, we deployed models to staging endpoints and conducted resource and performance benchmarking. Our main goal was to determine the latency for cold start vs in-memory models, and how latency was affected by request size and concurrency. 
We also wanted to know how many models could fit on each instance, how many models would cause the instances to scale up with our auto scaling policy, and how quickly the scale-up would happen. In keeping with the instance types we were already using, we did our benchmarking with ml.g4dn.xlarge and ml.g4dn.2xlarge instances.
Results
The following table summarizes our results.

Request Size        | Cold Start Latency | Cached Inference Latency | Concurrent Latency (5 requests)
Small (30 tokens)   | 12.7 seconds       | 0.03 seconds             | 0.12 seconds
Medium (250 tokens) | 12.7 seconds       | 0.05 seconds             | 0.12 seconds
Large (550 tokens)  | 12.7 seconds       | 0.13 seconds             | 0.12 seconds

Noticeably, the latency for cold start requests is significantly higher than the latency for cached inference requests. This is because the model needs to be loaded from disk or Amazon Simple Storage Service (Amazon S3) when a cold start request is made. The latency for concurrent requests is also higher than the latency for single requests. This is because the model needs to be shared between concurrent requests, which can lead to contention. The following table compares the latency of the legacy models and the SageMaker models.

Request Size        | Legacy Models | SageMaker Models
Small (30 tokens)   | 0.74 seconds  | 0.24 seconds
Medium (250 tokens) | 0.74 seconds  | 0.24 seconds
Large (550 tokens)  | 0.80 seconds  | 0.32 seconds

Overall, the SageMaker models are a better choice for hosting autocomplete models than the legacy models. They offer lower latency, scalability, reliability, and security.
Resource usage
In our quest to determine the optimal number of models that could fit on each instance, we conducted a series of tests. Our experiment involved loading models into our endpoints using an ml.g4dn.xlarge instance type, without any auto scaling policy. These particular instances offer 15.5 GB of memory, and we aimed to achieve approximately 80% GPU memory usage per instance. Considering the size of each encoder model artifact, we managed to find the optimal number of Triton encoders to load on an instance to reach our targeted GPU memory usage. Furthermore, given that each of our embeddings models corresponds to two Triton encoder models, we were able to house a set number of embeddings models per instance. As a result, we calculated the total number of instances required to serve all our embeddings models. This experimentation has been crucial in optimizing our resource usage and enhancing the efficiency of our models.
We conducted similar benchmarking for our autocomplete models. These models were around 292.0 MB each. As we tested how many models would fit on a single ml.g4dn.xlarge instance, we noticed that we were only able to fit four models before our instance started unloading models, despite the models having a small size. Our main concerns were:
Cause for CPU memory utilization spiking
Cause for models getting unloaded when we tried to load in one more model instead of just the least recently used (LRU) model
We were able to pinpoint the root cause of the memory utilization spike coming from initializing our CUDA runtime environment in our Python model, which was necessary to move our models and data on and off the GPU device. CUDA loads many external dependencies into CPU memory when the runtime is initialized. Because the Triton PyTorch backend handles and abstracts away moving data on and off the GPU device, we didn’t run into this issue for our embedding models.
To address this, we tried using ml.g4dn.2xlarge instances, which had the same amount of GPU memory but twice as much CPU memory. In addition, we added several minor optimizations in our Python backend code, including deleting tensors after use, emptying the cache, disabling gradients, and garbage collecting. With the larger instance type, we were able to fit 10 models per instance, and the CPU and GPU memory utilization became much more aligned. The following diagram illustrates this architecture. Auto scaling We attached auto scaling policies to both our embeddings and autocomplete MMEs. Our policy for our embeddings endpoint targeted 80% average GPU memory utilization using custom metrics. Our autocomplete models saw a pattern of high traffic during business hours and minimal traffic overnight. Because of this, we created an auto scaling policy based on InvocationsPerInstance so that we could scale according to the traffic patterns, saving on cost without sacrificing reliability. Based on our resource usage benchmarking, we configured our scaling policies with a target of 225 InvocationsPerInstance . Deploy logic and pipeline Creating an MME on SageMaker is straightforward and similar to creating any other endpoint on SageMaker. After the endpoint is created, adding additional models to the endpoint is as simple as moving the model artifact to the S3 path that the endpoint targets; at this point, we can make inference requests to our new model. We defined logic that would take in model metadata, format the endpoint deterministically based on the metadata, and check whether the endpoint existed. If it didn’t, we create the endpoint and add the Triton model artifact to the S3 patch for the endpoint (also deterministically formatted). For example, if the model metadata indicated that it is an autocomplete model, it would create an endpoint for auto-complete models and an associated S3 path for auto-complete model artifacts. If the endpoint existed, we would copy the model artifact to the S3 path. Now that we had our model shapes for our MME models and the functionality for deploying our models to MME, we needed a way to automate the deployment. Our users must specify which model they want to deploy; we handle packaging and deployment of the model. The custom inference code packaged with the model is versioned and pushed to Amazon S3; in the packaging step, we pull the inference code according to the version specified (or the latest version) and use YAML files that indicate the file structures of the Triton models. One requirement for us was that all of our MME models would be loaded into memory to avoid any cold start latency during production inference requests to load in models. To achieve this, we provision enough resources to fit all our models (according to the preceding benchmarking) and call every model in our MME at an hourly cadence. The following diagram illustrates the model deployment pipeline. The following diagram illustrates the model warm-up pipeline. Model invocation Our existing API layer provides an abstraction for callers to make inference on all of our ML models. This meant we only had to add functionality to the API layer to call the SageMaker MME with the correct target model depending on the inference request, without any changes to the calling code. The SageMaker inference code takes the inference request, formats the Triton inputs defined in our Triton models, and invokes the MMEs using Boto3. 
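To make the two Boto3 interactions in this section concrete, the following is a minimal sketch of attaching the target-tracking policy on InvocationsPerInstance (using the 225 target mentioned above) and of invoking the MME with a TargetModel so SageMaker loads the right artifact from the endpoint's S3 prefix. The endpoint, variant, and artifact names are placeholders.

import json
import boto3

endpoint_name = "autocomplete-mme"   # placeholder
variant_name = "AllTraffic"

# Register the endpoint variant as a scalable target and attach a
# target-tracking policy on invocations per instance.
autoscaling = boto3.client("application-autoscaling")
resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

autoscaling.put_scaling_policy(
    PolicyName="invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 225.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)

# Invoke the MME, naming the artifact to serve. The first request for a model
# triggers a cold start while SageMaker fetches it from Amazon S3; later
# requests hit the in-memory copy.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    TargetModel="customer-123-autocomplete.tar.gz",  # placeholder artifact name
    ContentType="application/json",
    Body=json.dumps({"inputs": "How do I reset my"}),
)
print(json.loads(response["Body"].read()))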
Cost benefits
Forethought made significant strides in reducing model hosting costs and mitigating model OOM errors, thanks to the migration to SageMaker MMEs. Before this change, our embeddings models ran on ml.g4dn.xlarge instances in Amazon EKS. With the transition to MMEs, we discovered that each instance could house 12 embeddings models while achieving 80% GPU memory utilization. This led to a significant decline in our monthly expenses. To put it in perspective, we realized a cost saving of up to 80%. Moreover, to manage higher traffic, we considered scaling up the replicas. Assuming a scenario where we employ three replicas, we found that our cost savings would still be substantial even under these conditions, hovering around 43%. The journey with SageMaker MMEs has proven financially beneficial, reducing our expenses while ensuring optimal model performance.
Previously, our autocomplete language models were deployed in Amazon EKS, necessitating a varying number of ml.g4dn.xlarge instances based on the memory allocation per model. This resulted in a considerable monthly cost. However, with our recent migration to SageMaker MMEs, we’ve been able to reduce these costs substantially. We now host all our models on ml.g4dn.2xlarge instances, giving us the ability to pack models more efficiently. This has significantly trimmed our monthly expenses, and we’ve now realized cost savings in the 66–74% range. This move has demonstrated how efficient resource utilization can lead to significant financial savings using SageMaker MMEs.
Conclusion
In this post, we reviewed how Forethought uses SageMaker multi-model endpoints to decrease cost for real-time inference. SageMaker takes on the undifferentiated heavy lifting, so Forethought can increase engineering efficiency. It also allows Forethought to dramatically lower the cost for real-time inference while maintaining the performance needed for business-critical operations. By doing so, Forethought is able to provide a differentiated offering for their customers using hyper-personalized models.
Use SageMaker MMEs to host your models at scale and reduce hosting costs by improving endpoint utilization. MMEs also reduce deployment overhead because Amazon SageMaker manages loading models in memory and scaling them based on the traffic patterns to your endpoint. You can find code samples on hosting multiple models using SageMaker MME on GitHub.
About the Authors
Jad Chamoun is a Director of Core Engineering at Forethought. His team focuses on platform engineering covering Data Engineering, Machine Learning Infrastructure, and Cloud Infrastructure. You can find him on LinkedIn.
Salina Wu is a Senior Machine Learning Infrastructure Engineer at Forethought.ai. She works closely with the Machine Learning team to build and maintain their end-to-end training, serving, and data infrastructures. She is particularly motivated by introducing new ways to improve efficiency and reduce cost across the ML space. When not at work, Salina enjoys surfing, pottery, and being in nature.
James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.
Sunil Padmanabhan is a Startup Solutions Architect at AWS.
As a former startup founder and CTO, he is passionate about machine learning and focuses on helping startups leverage AI/ML for their business outcomes and on designing and deploying ML/AI solutions at scale.
Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains, and helps customers achieve high-performance model inference on SageMaker.
How Generative AI will transform manufacturing _ AWS for Industries.txt
AWS for Industries How Generative AI will transform manufacturing by Scot Wlodarczak | on 20 JUN 2023 | in *Post Types , Amazon Machine Learning , Amazon SageMaker , Artificial Intelligence , Generative AI , Industries , Manufacturing , Thought Leadership | Permalink |  Share Introduction Artificial intelligence (AI) and machine learning (ML) have been a focus for Amazon for decades, and we’ve worked to democratize ML and make it accessible to everyone who wants to use it, including more than 100,000 customers of all sizes and industries. This includes manufacturing companies who are looking beyond AI/ML to generative AI at the prospect of delivering even more exciting results. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is powered by large models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). With generative AI, manufacturers have the potential to reinvent their businesses and disrupt their industry. The potential of generative AI is incredibly exciting. But, we are still in the very early days. Companies have been working on FMs for years, but how can manufacturers take advantage of what is out there today to transform their business, and where should they start? A study by IDC titled, The State of Manufacturing and Generative AI Adoption in Manufacturing Organizations ,¹ revealed that for manufacturers, the top business areas where survey respondents felt generative AI could make the most impact in the next 18 months were in manufacturing (production), product development and design, followed by sales and supply chain. In this blog we will focus on generative AI potential to create radical, new product designs, drive unprecedented levels of manufacturing productivity, and optimize supply chain applications. Innovate with Generative AI in Product Engineering The first area we will explore is product engineering. AI and ML are already being used alongside high-performance computing to enhance the design of discrete product components to ultimately offer new and innovative designs that humans don’t typically ideate. These technologies provide manufacturers with a way to more quickly and effectively explore various design options to find the most efficient solutions with minimized cost, mass, materials, engineering design time, and even production time. One example is from Autodesk – a leader in 3D design, engineering, and entertainment software. They have been producing software for the architecture, construction, engineering, manufacturing, and media and entertainment industries since 1982. To speed and streamline development, Autodesk has been steadily expanding its use of Amazon Web Services (AWS) and decreasing its data center footprint. Autodesk offers generative design capabilities – a generative AI-like service – in their Fusion 360 software to help product designers create innovative new designs within parameters specified by the user, including materials, manufacturing constraints, safety factors, and other variables. At the Hannover Messe tradeshow in Germany in April 2023, Autodesk gave a presentation on a mobility start-up who improved its processes for creating new mobility solutions to shorten lead times while rapidly exploring new mobility design concepts and controlling engineering and manufacturing costs. 
The start-up adopted Autodesk Fusion 360, which leverages Amazon SageMaker to enable AI-enhanced generative design and additive manufacturing. It was able to reduce the time-to-market for new designs from 3.5 years to 6 months, an 86% faster time-to-market. Beyond extensive design potential, with generative AI, engineers can analyze large data sets in an effort to help improve safety, create simulation datasets, explore how a part might be manufactured or machined faster, and bring their products to market more quickly. These data sets could become the source information, or FMs, upon which a manufacturer’s generative AI strategy can be built. This allows the data to remain private and secure, while also allowing them to reap the benefits of this technology. In April 2023, AWS announced Amazon Bedrock , a new managed service that makes FMs from AI21 Labs, Anthropic, Stability AI, and Amazon accessible via an API. Amazon Bedrock is the easiest way for customers to build and scale generative AI-based applications using FMs, democratizing access for all builders. One of the most important capabilities of Amazon Bedrock is how easy it is to customize a model. Customers simply point Bedrock at labeled examples in Amazon Simple Storage Service (S3) , and the service can fine-tune the model for a particular task without having to annotate large volumes of data (as few as 20 examples is enough). Imagine a content marketing manager who works at a leading fashion retailer and needs to develop fresh, targeted ad and campaign copy for an upcoming new line of handbags. To do this, they provide Bedrock a few labeled examples of their best performing taglines from past campaigns, along with the associated product descriptions. Bedrock makes a separate copy of the base foundational model that is accessible only to the customer and trains this private copy of the model. After training, Bedrock will automatically start generating effective social media, display ad, and web copy for the new handbags. None of the customer’s data is used to train the original base models. Customers can configure their Amazon Virtual Private Cloud (Amazon VPC) settings to access Bedrock APIs and provide model fine-tuning data in a secure manner and all data is encrypted. Customer data is always encrypted in transit (TLS1.2) and at rest through service managed keys. Optimize Production with Generative AI Manufacturers are often hesitant to adopt and implement new technology in production environments due to the high risk of production loss and the associated costs. In factory production, it is early days for generative AI use cases, but we are certainly hearing from factory leaders already about how generative AI might help optimize overall equipment effectiveness (OEE). As generative AI needs large amounts of data to create FM’s, manufacturers have a unique industry challenge of gaining access to their factory data and moving it into the cloud to begin their generative AI journey. Step one for many manufacturers is adopting an industrial data strategy. Data is the foundation of any digital transformation effort, and having an industrial data strategy is critical to enable business teams to easily and effectively leverage that data to address a variety of use cases across an organization. Why? Manufacturers have often struggled with disconnected and siloed data sources that were not designed to work together, making it challenging to gain economical, secure, structured, and easy access to high quality datasets for FMs. 
AWS addresses many of these challenges with Industrial Data Fabric solutions. Companies like Georgia Pacific (GP) have used AI and ML for years to optimize quality on paper production, for example. GP improved profits and maximized plant resources by using AWS data analysis technologies to predict how fast converting lines should run to avoid paper tearing in production. But how can generative AI help manufacturers with production? In conversations with business and production leaders, one issue that pops up again and again is that attrition continues to erode the knowledge and experience on their factory floors. Experienced workers are retiring, and their decades of knowledge is often lost with them. These are the kind of workers who can hear when a machine bearing needs grease, or feel when a machine is vibrating excessively and not running properly. The challenge is how to equip less experienced operators with the knowledge required to keep complex production operations running efficiently, and how to maximize production, quality, and machine availability. If manufacturers are willing to digitize and capture historical machine maintenance data, repair data, equipment manuals, production data, and potentially even other manufacturer’s data to augment an effective FM to influence real change. As an example, take a machine that continues to break down, causing unplanned downtime. What if production engineers could use generative AI to query possible failure causes, and get high-probability suggestions on equipment input adjustments, maintenance required, or even spare parts to purchase that will mitigate downtime. In the absence of experienced engineers and operators, generative AI holds real promise in production environments to maximize OEE. Optimize Supply Chains with Generative AI AWS offers multiple services to address supply chain use cases. AWS Supply Chain is an application that helps businesses increase supply chain visibility to make faster, more informed decisions that mitigate risks, save costs, and improve customer experiences. AWS Supply Chain automatically combines and analyzes data across multiple supply chain systems so businesses can observe their operations in real-time, find trends more quickly, and generate more accurate demand forecasts that ensure adequate inventory to meet customer expectations. Based on nearly 30 years of Amazon.com logistics network experience, AWS Supply Chain improves supply chain resiliency by providing a unified data lake, machine learning-powered insights, recommended actions, and in-application collaboration capabilities. Given the uncertainty in supply chains due to the pandemic, regional conflicts, raw material shortages, and even natural disasters, manufacturers supply chains continue to be an area of concern, if not outright angst. The sourcing function is fertile ground where generative AI could add value. Let’s say a manufacturer runs out of custom machined components, and is looking to find alternate vendors to deliver some custom machining work. Generative AI could be used to provide alternate vendors with the proper capabilities to provide the specialty work required. Another application might be substituting generative AI, where possible, for routine human interactions –  getting questions answered that formerly would have taken hours or days to get the right data and then make sense of it. 
Generative AI could also serve as a supply chain control tower by proactively assessing risk related to shipping challenges, natural disasters, strikes, or other geopolitical events. This would allow the supply chain function to properly allocate scarce resources to mitigate disruptions.
Conclusion
We are clearly at the beginning of a new and exciting foray into generative AI, and I’ve just scratched the surface of some potential applications in the manufacturing industry – from product design to production and supply chain. AWS announced some exciting new offerings in the previous months:
Amazon Bedrock, the easiest way for customers to build and scale generative AI-based applications using FMs, democratizing access for all builders
Amazon Titan FMs, which allow customers to innovate responsibly with high-performing foundation models (FMs) from Amazon
New, network-optimized Amazon EC2 Trn1n instances, which offer 1600 Gbps of network bandwidth and are designed to deliver 20% higher performance over Trn1 for large, network-intensive models
Amazon EC2 Inf2 instances powered by AWS Inferentia2, which are optimized specifically for large-scale generative AI applications with models containing hundreds of billions of parameters
Amazon CodeWhisperer, an AI coding companion that uses an FM under the hood to radically improve developer productivity by generating code suggestions in real time based on developers’ comments in natural language and prior code in their integrated development environment (IDE)
We are excited about what our customers will build with generative AI on AWS. Start exploring our services and find out where generative AI could benefit your organization. Our mission is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI. This is just the beginning of what we believe will be the next wave of ML, powering new possibilities in manufacturing.
¹ IDC, The State of Manufacturing and Generative AI Adoption in Manufacturing Organizations, 1Q23, Doc #EUR250654623, May 2023
TAGS: AWS for Industrial, Industrial, Manufacturing
Scot Wlodarczak
Scot joined AWS in July 2018, where he now manages the manufacturing industry marketing efforts. Scot worked previously at Cisco and Rockwell Automation, where he held roles as Industrial Marketing Manager and Regional Marketing Leader. Scot has focused on marketing to industrial customers on their digital transformation journey, and on bridging the gap between IT and operations. He has experience in automation across a wide range of industries. Scot holds a Mechanical Engineering degree from SUNY Buffalo and an MBA from Colorado University. He lives in Colorado.
How Imperva uses Amazon Athena for machine learning botnets detection _ AWS Big Data Blog.txt
AWS Big Data Blog How Imperva uses Amazon Athena for machine learning botnets detection by Ori Nakar and Yonatan Dolan | on 12 MAY 2021 | in Amazon Athena , Amazon SageMaker , Analytics , Artificial Intelligence | Permalink | Comments |  Share This is a guest post by Ori Nakar, Principal Engineer at Imperva. In their own words, “Imperva is a large cyber security company and an AWS Partner Network (APN) Advanced Technology Partner, who protects web applications and data assets. Imperva protects over 6,200 enterprises worldwide and many of them use Imperva Web Application Firewall (WAF) solutions to secure their public websites and other web assets.” In this post, we explain how Imperva used Amazon Athena , Amazon SageMaker , and Amazon QuickSight to develop a machine learning (ML) clustering algorithm that can efficiently detect botnets attacking your infrastructure. Athena is an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, easy to use, and makes it easy for anyone with SQL skills to quickly analyze large-scale datasets in multiple Regions. Imperva Cloud WAF protects hundreds of thousands of websites and blocks billions of security events every day. Security events are correlated online into security narratives, and an innovative offline process enables you to detect botnets. Events, narratives, and many other security data types are stored in Imperva’s Threat Research multi-Region data lake. Botnets and data flow Botnets are internet connected devices that perform repetitive tasks, such as Distributed Denial of Service (DDoS). In many cases, these consumer devices are infected with malicious malware that is controlled by an external entity, often without the owner’s knowledge. Imperva botnet detection allows you to enhance your website’s security and get detailed information on botnet attacks and come up with ways to mitigate their impact. The following is a visualization of a botnets attack map. Each botnet can be composed of tens to thousands of IPs, one or more source location, and one or more target locations, performing an attack such as DDoS, vulnerability scanning, and others. The following diagram illustrates Imperva’s flow to detect botnets. The remainder of this post dives into the process of developing the botnet detection capability and describes the AWS services Imperva uses to enable and accelerate it. Botnet detection development process Imperva’s development process has three main steps: query, detect and evaluate. The following diagram summarizes these steps. Query Imperva stores the narrative data in Imperva’s Threat Research data lake. Data is continuously added as objects to Amazon S3 and stored in multiple Regions due to regulation and data locality requirements. For more information about querying data stored in multiple Regions using Athena, see Running SQL on Amazon Athena to Analyze Big Data Quickly and Across Regions . One of the tables in the data lake is the narratives tables, which has the following columns. Column Description narrative_id ID of a detected narrative. ip Each narrative has one or more IPs. site_id ID of the attacked site. Narrative has a single attacked site. The following screenshot is a sample of the data being queried. Finding correlations between attacking IPs of the same website generates our initial dataset, which allows us to hone in on those that are botnets. The following query in Athena generates that initial list. 
The query first finds narratives and sites per IP, and stores those in arrays. Next, the query finds all the pairs using a SELF JOIN (L for left, R for right). For each IP pair, it calculates the number of narratives and number of attacked sites. Then it filters on pairs with one common narrative. See the following code: -------------------- STEP 1 -------------------- WITH nar_ips AS ( SELECT ip, ARRAY_AGG(narrativ_id) AS ids, ARRAY_AGG(site_id) AS sites FROM narratives GROUP BY 1) -------------------- STEP 2 -------------------- SELECT l.ip AS ip_1, r.ip AS ip_2, CARDINALITY(ARRAY_INTERSECT(l.ids, r.ids)) AS narratives, CARDINALITY(ARRAY_INTERSECT(l.sites, r.sites)) AS sites FROM nar_ips AS l INNER JOIN nar_ips AS r ON l.ip < r.ip AND ARRAYS_OVERLAP(l.ids, r.ids) The following screenshot shows a query result of IP pairs that attacked the same websites and the number of attacks that they performed together. Imperva uses Create Table as Select (CTAS) to store the query results in Amazon S3 using a CSV file format that the SageMaker training job uses in the next step. Use the following query: CREATE TABLE [temp_table_name] WITH (format='TEXTFILE', bucketed_by=ARRAY['ip_1'], bucket_count=5, external_location='s3://my-bucket/my-temp-location', field_delimiter = ',') AS [SQL] The TEXTFILE format saves the data compressed as gzip, and the bucketing information controls the number of objects and therefore their sizes. Athena CTAS supports multiple types of data formats, and it’s recommended to evaluate which file format is best suited for your use case. The following screenshot shows objects created in the S3 data lake by Athena. Detect: Botnets clustering The next step in Imperva’s process is to cluster the IP pairs from the previous step into botnets. This includes steps for input, model training and output. Input The first step is to calculate the distance between each IP pair in a narrative. This process raises a couple of options. The first is if you use Athena with either the included analytic functions such as cosine_similarity , or develop a custom UDF to perform the calculation. For Imperva’s needs, we decided to use SageMaker and implement the distance calculation using Python. For other implementations, you should experiment with your data and decide which big data processing method to use. The following diagram shows some of the characteristics of each method. Each language has different capabilities. For example, Java and Python are much more flexible than SQL, but makes the pipeline more complex in terms of development and maintenance. The volume of data consumed and processed by SageMaker directly impacts the time it takes to complete the model training. Model training and output We use the SageMaker Python SDK to create a training job, which is used for the model training. The jobs are created and monitored using simple Python code. When running the training job, you can choose which remote instance type best fits the needs of the job, and use Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances to save costs. Imperva used the Python Scikit-learn base image, which includes all libraries required, and more libraries can be installed if needed. Logs from the remote instance are captured for monitoring, and when the job is complete, the output is saved to Amazon S3. 
See the following code: from sagemaker.sklearn import SKLearn estimator = SKLearn(entry_point="my_script.py", use_spot_instances=True, hyperparameters={"epsilon": 0.1, "min_samples": 10}, instance_type="ml.m4.xlarge") estimator.fit(inputs={"train": "s3://my_bucket/my_folder"}) The following code is the details of the script running in the remote instance that was launched. The distance function gets a list of features and returns a distance between 0–1: def distance(narratives: int, sites: int) -> float: return 1 - (1 / sites) - (1 / narratives) SageMaker copies the data from Amazon S3 and runs the calculation of distance based on all IP pairs. The following code goes over the files and records: distances_arr = [] for file_name in file_names: df = pd.read_csv(file_name, header=None, chunksize=100_000, names=["ip_1", "ip_2", "sites", "narratives"]) for _, row in df.iterrows(): distances_arr.append(distance(row["sites"], row["narratives"])) The output of that calculation is transformed into a sparse distance matrix, which is fed into a DBSCAN algorithm and detects clusters. DBSCAN is one of the most common clustering algorithms. DBSCAN runs on a given set of points; it groups together points that are closely packed together. See the following code: model = DBSCAN(eps=0.1, min_samples=10, metric="precomputed") result = model.fit_predict(dist_mat) When the clustering results are ready, SageMaker writes the results to Amazon S3. The table is created by copying the output of SageMaker to a new table partition in Amazon S3. The results are IP clusters, and a working pipeline is established. The following screenshot shows an example of the clustering algorithm results. The pipeline allows for the evaluation and experimentation phase to begin. This is often the more time-consuming phase to help ensure optimal results are achieved. Evaluate: Run various experiments and compare between them The IP clusters (which Imperva refers to as botnets) that were found are written back to a dedicated table in the data lake. You can run the botnet detection process with different parameters within SageMaker. The following are some examples of parameters that you can alter: Adjust query parameters such as IP hits, sites hits, and more Change the distance function being used Adjust hyperparameters such as DBScan epsilon and minimum samples Change the clustering algorithm being used (for example, OPTICS) After you complete several experiments, the following step is to compare them. Imperva accomplishes this by using Athena to query the results for a set of experiments and joining the detected botnet IP data with various additional tables in the data lake. The following example code walks through joining the detected botnet IP data with newer narratives data: WITH narratives_ips AS ( SELECT experiment, botnet, ip, narrarive_id FROM botnets INNER JOIN narratives USING (validation_day, ip)) SELECT experiment, botnet, narrarive_id, COUNT() AS ips GROUP BY 1,2,3 For each detected botnet, Imperva finds the relevant narratives and checks if those IPs continue to jointly attack as a group. Visualizing results from multiple experiments allows you to quickly glean their level effectiveness. Imperva uses QuickSight connected to Athena to query and visualize the experiments table. 
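The snippets above are fragments of a larger script, so before looking at the analysis example, here is a small self-contained sketch that ties them together: pairwise IP distances assembled into a sparse matrix and clustered with DBSCAN using a precomputed metric. The sample IP pairs, eps, and min_samples values are illustrative only, not Imperva's production settings.

import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.cluster import DBSCAN

# Toy stand-in for the CSV rows produced by the Athena CTAS query
pairs = pd.DataFrame(
    [("1.1.1.1", "2.2.2.2", 4, 3),
     ("1.1.1.1", "3.3.3.3", 5, 4),
     ("2.2.2.2", "3.3.3.3", 6, 5),
     ("8.8.8.8", "9.9.9.9", 3, 2)],
    columns=["ip_1", "ip_2", "sites", "narratives"],
)

def distance(narratives: int, sites: int) -> float:
    # Same shape as the post's function: more shared narratives and sites -> smaller distance
    return 1 - (1 / sites) - (1 / narratives)

# Map each IP to a matrix index
ips = sorted(set(pairs["ip_1"]) | set(pairs["ip_2"]))
idx = {ip: i for i, ip in enumerate(ips)}

rows, cols, vals = [], [], []
for _, r in pairs.iterrows():
    d = distance(r["narratives"], r["sites"])
    i, j = idx[r["ip_1"]], idx[r["ip_2"]]
    rows += [i, j]   # store both directions to keep the matrix symmetric
    cols += [j, i]
    vals += [d, d]

dist_mat = csr_matrix((vals, (rows, cols)), shape=(len(ips), len(ips)))

# DBSCAN over the precomputed sparse distances; pairs that never attacked
# together have no stored entry and are never considered neighbors.
labels = DBSCAN(eps=0.7, min_samples=2, metric="precomputed").fit_predict(dist_mat)
for ip, label in zip(ips, labels):
    print(ip, "-> cluster", label)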
In the following analysis example, for each experiment, the following information is reviewed: Number of botnets Total number of narratives Average number of IPs in a narrative—this means that the same IPs continued to attack as a group, as predicted The data is visualized using a pivot table in QuickSight, and additional conditional formatting allows for an easy comparison between experiments. To further analyze the results, it was hypothesized that the number of tools used by the botnet might provide additional insights. These tools could be custom-built code or common libraries such as PhantonJS used in malicious ways. The tool information is added to the pivot table, with the ability to drill down to each experiment to view how many tools were used by each botnet. The tool hypothesis is just one example of the analyses available. It’s also possible to drill down further and view the sum of narratives by tool as a donut chart. This visualization can help you quickly see the distribution of tools in a specific experiment. You can perform such analysis on any other field, table, or data source. Imperva uses this method to analyze, compare, and fine-tune experiments in order to improve results. Summary Thousands of customers use the Imperva Web Application Firewall to defend their applications from hacking and denial of service attacks. The most common source of these attacks are botnets, comprised of a large network of computers across the internet. For Imperva to improve our ability to identify, isolate, and stop these attacks, we developed a simple pipeline that allows us to quickly collect and store network traffic in Amazon S3 and analyze it using Athena to identify patterns. We used SageMaker to quickly experiment with different clustering and ML algorithms that help detect patterns in botnet activity. You can generalize this flow to other ML development pipelines, and use any part of it in a model development process. The following diagram illustrates the generalized process. Running many experiments quickly and easily helps achieve business objectives faster. Running experiments on large volumes of data often requires a lot of time and can be rather expensive. An AWS-based processing pipeline eliminates these challenges by utilizing various AWS services: Athena to quickly and cost-effectively analyze large amounts of data SageMaker to experiment with different ML algorithms in a scalable and cost-effective manner QuickSight to visualize and dive deep into the data in order to extract critical insights that help you fine-tune your ML models This blog post is based on a demo at re:Invent 2020 by the authors. You can watch that presentation on YouTube. About the Authors Ori Nakar is Principal Engineer at Imperva’s Threat Research Group. His main interests are WEB application and database security, data science, and big data infrastructure.     Yonatan Dolan is a Business Development Manager at Amazon Web Services. He is located in Israel and helps customers harness AWS analytical services to leverage data, gain insights, and derive value.         Comments View Comments Resources Amazon Athena Amazon EMR Amazon Kinesis Amazon MSK Amazon QuickSight Amazon Redshift AWS Glue Follow  Twitter  Facebook  LinkedIn  Twitch  Email Updates
How KYTC Transformed the States Customer Experience for 4.1 Million Drivers Using Amazon Connect _ Case Study _ AWS.txt
The Division of Customer Service under the Department of Vehicle Regulation in KYTC is the sole point of contact for all incoming customers with questions and issues to resolve. Its contact center assists a wide array of customer inquiries, from licensing and taxes to titles for motor vehicles through voice calls. Français The Kentucky Transportation Cabinet (KYTC) modernized its contact center solution in 6 weeks using Amazon Connect.  2023 Español Amazon Connect Customer Profiles 900,000 chatbot interactions per month KYTC chose to migrate from its previous cloud provider to AWS and to use Amazon Connect because of the opportunity for innovation. It chose Amazon Connect because of the scalability and pay-as-you-go pricing, which freed KYTC of needing to pay heavy licensing fees or for third-party assistance. After planning the design of what it wanted its new system to be capable of, KYTC worked to create it alongside AWS Professional Services, a global team of experts who work with customers to realize desired business outcomes. “The AWS Professional Services team could jump in from our preplanning and build out our current solution, which was amazing,” says Tony Momenpour, system consultant with the Division of Customer Service at KYTC. The modernization of the contact center solution for KYTC took 6 weeks, which was significantly faster than its previous solution migration. 日本語 KYTC agents are using a new desktop when interacting with customers, which has positively impacted training time and agent experience. This is the Amazon Connect Agent Workspace, empowering agents with a unified experience, including guided step-by-step actions. Whenever customers call in to KYTC, if their questions cannot be answered by the chatbot, they start with a tier-one agent. These agents can send customers to specialists (tier-two agents) or answer questions for customers. KYTC agents use a machine learning (ML) -based service, Amazon Connect Wisdom, that delivers information that the agents need to solve issues in near real time, and grants access to 45 wikis that house the information customers might need. Get Started 한국어 How KYTC Transformed the State’s Customer Experience for 4.1 Million Drivers Using Amazon Connect for average call time to assist customers reduced from 3–4 minutes Overview | Opportunity | Solution | Outcome | AWS Services Used If a customer is connected to a tier-two agent, a profile is immediately created using Amazon Connect Customer Profiles (Customer Profiles) so that agents can deliver faster, more personalized customer service. Putting these tools in its agents’ hands has improved employee retention for KYTC. The agency has also reduced the training time for new agents from 4 weeks to 2 weeks because Amazon Connect is simple to use. employee training time reduced from 4 weeks Amazon Connect Customer Profiles equips contact center agents with a more unified view of a customer’s profile with the most up to date information, to provide more personalized customer service. to modernize its contact center solution AWS Services Used Opportunity | How KYTC Used Amazon Connect to Modernize Its Contact Center Reduced 中文 (繁體) Bahasa Indonesia The agency serves 4.1 million drivers in Kentucky, providing customer service for vehicle licensing and taxes. KYTC’s previous solution had downtime during peak call times and required expensive third-party assistance. 
The agency now provides new chatbot features using Amazon Connect and improved customer call experience using Amazon Connect Wisdom and Amazon Connect Cases. KYTC has reduced employee training time by 2 weeks, reduced customer hold and wait times, and improved customer experience by adding several new features to its contact center solution. Contact Sales Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي 中文 (简体) Toni Woolums Resource Management Analyst, Department of Vehicle Registration, Kentucky Transportation Cabinet KYTC plans to continue innovating its contact center solution using AWS and features of Amazon Connect. The agency is working alongside the AWS team to discover new and current features that fit its use case and enhance its contact center service for its customers. “The difference between what we had before and what we have now is like night and day,” says Ron Parritt, assistant director of the customer service center at KYTC. “Using AWS, we’re helping our customers more than before, which is great, because we are a customer service. I can’t say enough good things about AWS.” Learn more » Amazon Connect customer hold and waiting time Overview Amazon Connect Agent Workspace Türkçe The Kentucky Transportation Cabinet oversees the state’s highway, byway, and roadway maintenance, road safety mechanics, and motor vehicle regulation and licensing. The agency serves 4.1 million drivers in Kentucky. KYTC has improved both the customer and the agent experience in its contact center using Amazon Connect. “We can assist more customers in less time,” says Mike Miller, director of the Division of Customer Service at KYTC. “This upgrade brings more modern functionality for customers and customer service professionals.” The agency has reduced the duration of calls with customers because it can address their needs quicker. Prior to the AWS solution, KYTC averaged 3–4 minutes per call, and with the modernized contact center, it averages less than 2 minutes. With between 30,000 and 40,000 calls on average per month, this saves significant time for both agents and customers. English Another new feature implemented within the contact center solution is the phone callback queue. When customers have been on hold for 2 minutes, they are put into the callback queue, meaning they don’t have to wait on hold for 30–60 minutes. Instead, they will get a call when an agent is available. KYTC agents also use Amazon Connect Cases to track, collaborate on, and resolve customer issues quickly. Using this feature, agents can more efficiently manage customer issues requiring multiple interactions and follow-up tasks. KYTC now has more insight into the analytics of its customer calls and chats using Amazon Connect Contact Lens, offering near-real-time conversational analytics and quality management powered by ML. “We can run near-real-time reports without the fear of crashing the contact center like we had under the old solution,” says Miller. “Managers are very appreciative of having near-real-time access to metrics instead of needing to wait a day.” KYTC uses Amazon Connect Agent Workspace to integrate all the new capabilities of its call center in one place for its agents. By using Amazon Connect, KYTC added a chatbot functionality for customers to self-service their issues before needing to call in. 
The agency has an average of 900,000 chatbot interactions a month, and of those, only around 1,000 end up needing to be passed to a representative. KYTC also implemented a question-and-answer bot that sends customers a text message to direct them to the agency that they need to contact, which ultimately saves time for KYTC agents. “The question-and-answer bot is a really big feature of our AWS solution,” says Toni Woolums, resource management analyst with the Department of Vehicle Registration at KYTC. “Our new chatbot feature is a big enhancement for customers as well. We were blown away by the number of chat interactions in the new solution.”

The Kentucky Transportation Cabinet (KYTC) needed to modernize its contact center solution to better serve the 4.1 million drivers in Kentucky. The previous solution for KYTC was unreliable and carried high third-party costs. Therefore, KYTC chose to use Amazon Web Services (AWS) to gain stability and build a successful solution. By using Amazon Connect, a service with capabilities to set up a contact center in minutes that can scale to support millions of customers, KYTC improved its customer experience and reduced employee training time in 6 weeks. Amazon Connect Wisdom delivers agents the information they need, reducing the time spent searching for answers.

It became critical for KYTC to assess its customer service organization when it began facing significant challenges with its previous contact center solution. The voice server of the previous on-premises solution needed to be restarted twice a day during peak volumes, leading to 30 minutes of downtime each time. In addition to the downtime issue, the ticketing portion of the service was stable but required high-cost third-party consulting during the cloud-migration process. This was a significant expense for KYTC, but it knew it needed to make a change to modernize its contact center solution.
How Marubeni is optimizing market decisions using AWS machine learning and analytics _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog How Marubeni is optimizing market decisions using AWS machine learning and analytics by Hernan Figueroa , Pedram Jahangiri , Lino Brescia , Narcisse Zekpa , and Sarah Childers | on 08 MAR 2023 | in Amazon Athena , Amazon SageMaker , AWS Lambda , AWS Step Functions , Customer Solutions , Energy | Permalink | Comments |  Share This post is co-authored with Hernan Figueroa, Sr. Manager Data Science at Marubeni Power International. Marubeni Power International Inc (MPII) owns and invests in power business platforms in the Americas. An important vertical for MPII is asset management for renewable energy and energy storage assets, which are critical to reduce the carbon intensity of our power infrastructure. Working with renewable power assets requires predictive and responsive digital solutions, because renewable energy generation and electricity market conditions are continuously changing. MPII is using a machine learning (ML) bid optimization engine to inform upstream decision-making processes in power asset management and trading. This solution helps market analysts design and perform data-driven bidding strategies optimized for power asset profitability. In this post, you will learn how Marubeni is optimizing market decisions by using the broad set of AWS analytics and ML services, to build a robust and cost-effective Power Bid Optimization solution. Solution overview Electricity markets enable trading power and energy to balance power supply and demand in the electric grid and to cover different electric grid reliability needs. Market participants, such as MPII asset operators, are constantly bidding power and energy quantities into these electricity markets to obtain profits from their power assets. A market participant can submit bids to different markets simultaneously to increase the profitability of an asset, but it needs to consider asset power limits and response speeds as well as other asset operational constraints and the interoperability of those markets. MPII’s bid optimization engine solution uses ML models to generate optimal bids for participation in different markets. The most common bids are day-ahead energy bids, which should be submitted 1 day in advance of the actual trading day, and real-time energy bids, which should be submitted 75 minutes before the trading hour. The solution orchestrates the dynamic bidding and operation of a power asset and requires using optimization and predictive capabilities available in its ML models. The Power Bid Optimization solution includes multiple components that play specific roles. Let’s walk through the components involved and their respective business function. Data collection and ingestion The data collection and ingestion layer connects to all upstream data sources and loads the data into the data lake. Electricity market bidding requires at least four types of input: Electricity demand forecasts Weather forecasts Market price history Power price forecasts These data sources are accessed exclusively through APIs. Therefore, the ingestion components need to be able to manage authentication, data sourcing in pull mode, data preprocessing, and data storage. Because the data is being fetched hourly, a mechanism is also required to orchestrate and schedule ingestion jobs. Data preparation As with most ML use cases, data preparation plays a critical role. Data comes from disparate sources in a number of formats. 
Before it’s ready to be consumed for ML model training, it must go through some of the following steps:

Consolidate hourly datasets based on time of arrival. A complete dataset must include all sources.
Augment the quality of the data by using techniques such as standardization, normalization, or interpolation.

At the end of this process, the curated data is staged and made available for further consumption.

Model training and deployment

The next step consists of training and deploying a model capable of predicting optimal market bids for buying and selling energy. To minimize the risk of underperformance, Marubeni used the ensemble modeling technique. Ensemble modeling consists of combining multiple ML models to enhance prediction performance. Marubeni ensembles the outputs of external and internal prediction models with a weighted average to take advantage of the strength of all models. Marubeni’s internal models are based on Long Short-Term Memory (LSTM) architectures, which are well documented and easy to implement and customize in TensorFlow. Amazon SageMaker supports TensorFlow deployments and many other ML environments. The external model is proprietary, and its description cannot be included in this post.

In Marubeni’s use case, the bidding models perform numerical optimization to maximize the revenue using a modified version of the objective functions used in the publication Opportunities for Energy Storage in CAISO. SageMaker enables Marubeni to run ML and numerical optimization algorithms in a single environment. This is critical, because during the internal model training, the output of the numerical optimization is used as part of the prediction loss function. For more information on how to address numerical optimization use cases, refer to Solving numerical optimization problems like scheduling, routing, and allocation with Amazon SageMaker Processing.

We then deploy those models through inference endpoints. As fresh data is ingested periodically, the models need to be retrained because they become stale over time. The architecture section later in this post provides more details on the models’ lifecycle.

Power bid data generation

On an hourly basis, the solution predicts the optimal quantities and prices at which power should be offered on the market—also called bids. Quantities are measured in MW and prices are measured in $/MW. Bids are generated for multiple combinations of predicted and perceived market conditions. The following table shows an example of the final bid curve output for operating hour 17 at an illustrative trading node near Marubeni’s Los Angeles office.

Date        Hour  Market     Location         MW    Price
11/7/2022   17    RT Energy  LCIENEGA_6_N001  0     $0
11/7/2022   17    RT Energy  LCIENEGA_6_N001  1.65  $80.79
11/7/2022   17    RT Energy  LCIENEGA_6_N001  5.15  $105.34
11/7/2022   17    RT Energy  LCIENEGA_6_N001  8     $230.15

This example represents our willingness to bid 1.65 MW of power if the power price is at least $80.79, 5.15 MW if the power price is at least $105.34, and 8 MW if the power price is at least $230.15. Independent system operators (ISOs) oversee electricity markets in the US and are responsible for awarding and rejecting bids to maintain electric grid reliability in the most economical way. California Independent System Operator (CAISO) operates electricity markets in California and publishes market results every hour prior to the next bidding window. By cross-referencing current market conditions with their equivalent on the curve, analysts are able to infer optimal revenue.
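To make the bid curve easier to interpret, here is a minimal Python sketch of how a downstream consumer could look up the quantity to offer once a clearing price is known. It is an illustration only, not Marubeni’s production code, and it simply reuses the values from the example table above.

    # Illustrative only: the bid curve rows from the example table above.
    bid_curve = [
        {"mw": 0.0,  "price": 0.0},
        {"mw": 1.65, "price": 80.79},
        {"mw": 5.15, "price": 105.34},
        {"mw": 8.0,  "price": 230.15},
    ]

    def quantity_for_price(price, curve=bid_curve):
        """Return the largest MW quantity whose minimum price is met."""
        eligible = [step["mw"] for step in curve if price >= step["price"]]
        return max(eligible) if eligible else 0.0

    print(quantity_for_price(120.0))  # 5.15 MW would be offered at a $120 clearing price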
The Power Bid Optimization solution updates future bids using new incoming market information and new model predictive outputs.

AWS architecture overview

The solution architecture illustrated in the following figure implements all the layers presented earlier. It uses the following AWS services as part of the solution:

Amazon Simple Storage Service (Amazon S3) to store the following data: pricing, weather, and load forecast data from various sources; consolidated and augmented data ready to be used for model training; and output bid curves refreshed hourly.
Amazon SageMaker to train, test, and deploy models to serve optimized bids through inference endpoints.
AWS Step Functions to orchestrate both the data and ML pipelines. We use two state machines: one to orchestrate data collection and ensure that all sources have been ingested, and one to orchestrate the ML pipeline as well as the optimized bidding generation workflow.
AWS Lambda to implement ingestion, preprocessing, and postprocessing functionality: three functions to ingest input data feeds, with one function per source; one function to consolidate and prepare the data for training; and one function that generates the price forecast by calling the model’s endpoint deployed within SageMaker.
Amazon Athena to provide developers and business analysts SQL access to the generated data for analysis and troubleshooting.
Amazon EventBridge to trigger the data ingestion and ML pipeline on a schedule and in response to events.

In the following sections, we discuss the workflow in more detail.

Data collection and preparation

Every hour, the data preparation Step Functions state machine is invoked. It calls each of the data ingestion Lambda functions in parallel, and waits for all four to complete. The data collection functions call their respective source API and retrieve data for the past hour. Each function then stores the received data into its respective S3 bucket. These functions share a common implementation baseline that provides building blocks for standard data manipulation such as normalization or indexation. To achieve this, we use Lambda layers and AWS Chalice, as described in Using AWS Lambda Layers with AWS Chalice. This ensures all developers are using the same base libraries to build new data preparation logic and speeds up implementation.

After all four sources have been ingested and stored, the state machine triggers the data preparation Lambda function. Power price, weather, and load forecast data is received in JSON and character-delimited files. Each record in each file carries a timestamp that is used to consolidate data feeds into one dataset covering a time frame of 1 hour. This construct provides a fully event-driven workflow: training data preparation is initiated as soon as all the expected data is ingested.

ML pipeline

After data preparation, the new datasets are stored in Amazon S3. An EventBridge rule triggers the ML pipeline through a Step Functions state machine. The state machine drives two processes: checking whether the bid curve generation model is current, and automatically triggering model retraining when performance degrades or models are older than a certain number of days. If the age of the currently deployed model is older than the latest dataset by a certain threshold—say 7 days—the Step Functions state machine kicks off the SageMaker pipeline that trains, tests, and deploys a new inference endpoint.
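As a rough illustration of that freshness check, the logic could look like the following Python sketch. The endpoint and pipeline names are hypothetical, and in the actual solution the decision is driven by the Step Functions state machine rather than a standalone function.

    import boto3
    from datetime import datetime, timedelta

    sagemaker = boto3.client("sagemaker")

    ENDPOINT_NAME = "bid-curve-endpoint"           # hypothetical name
    PIPELINE_NAME = "bid-curve-training-pipeline"  # hypothetical name
    MAX_MODEL_AGE = timedelta(days=7)

    def retrain_if_stale(latest_dataset_time: datetime) -> bool:
        """Start the SageMaker pipeline when the deployed model lags the data by more than 7 days."""
        endpoint = sagemaker.describe_endpoint(EndpointName=ENDPOINT_NAME)
        model_time = endpoint["LastModifiedTime"]  # timezone-aware datetime returned by boto3
        if latest_dataset_time - model_time > MAX_MODEL_AGE:
            sagemaker.start_pipeline_execution(PipelineName=PIPELINE_NAME)
            return True   # retraining kicked off
        return False      # model is current; proceed straight to bid generation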
If the models are still up to date, the workflow skips the ML pipeline and moves on to the bid generation step. Regardless of the state of the model, a new bid curve is generated upon delivery of a new hourly dataset. The following diagram illustrates this workflow. By default, the StartPipelineExecution action is asynchronous. We can have the state machine wait for the end of the pipeline before invoking the bids generation step by using the ‘ Wait-for callback ‘ option. To reduce cost and time to market in building a pilot solution, Marubeni used Amazon SageMaker Serverless Inference . This ensures that the underlying infrastructure used for training and deployment incurs charges only when needed. This also makes the process of building the pipeline easier because developers no longer need to manage the infrastructure. This is a great option for workloads that have idle periods between traffic spurts. As the solution matures and transitions into production, Marubeni will review their design and adopt a configuration more suited for predictable and steady usage. Bids generation and data querying The bids generation Lambda function periodically invokes the inference endpoint to generate hourly predictions and stores the output into Amazon S3. Developers and business analysts can then explore the data using Athena and Microsoft Power BI for visualization. The data can also be made available via API to downstream business applications. In the pilot phase, operators visually consult the bid curve to support their power transaction activities on markets. However, Marubeni is considering automating this process in the future, and this solution provides the necessary foundations to do so. Conclusion This solution enabled Marubeni to fully automate their data processing and ingestion pipelines as well as reduce their predictive and optimization models’ deployment time from hours to minutes. Bid curves are now automatically generated and kept up to date as market conditions change. They also realized an 80% cost reduction when switching from a provisioned inference endpoint to a serverless endpoint. MPII’s forecasting solution is one of the recent digital transformation initiatives Marubeni Corporation is launching in the power sector. MPII plans to build additional digital solutions to support new power business platforms. MPII can rely on AWS services to support their digital transformation strategy across many use cases. “ We can focus on managing the value chain for new business platforms, knowing that AWS is managing the underlying digital infrastructure of our solutions. ” – Hernan Figueroa, Sr. Manager Data Science at Marubeni Power International. For more information on how AWS is helping energy organizations in their digital transformation and sustainability initiatives, refer to AWS Energy . Marubeni Power International is a subsidiary of Marubeni Corporation. Marubeni Corporation is a major Japanese trading and investment business conglomerate.  Marubeni Power International mission is to develop new business platforms, assess new energy trends and technologies and manage Marubeni’s power portfolio in the Americas. If you would like to know more about Marubeni Power, check out https://www.marubeni-power.com/ . About the Authors Hernan Figueroa leads the digital transformation initiatives at Marubeni Power International. His team applies data science and digital technologies to support Marubeni Power growth strategies. 
Before joining Marubeni, Hernan was a Data Scientist at Columbia University. He holds a Ph.D. in Electrical Engineering and a B.S. in Computer Engineering.

Lino Brescia is a Principal Account Executive based in NYC. He has over 25 years of technology experience and joined AWS in 2018. He manages global enterprise customers as they transform their business with AWS cloud services and perform large-scale migrations.

Narcisse Zekpa is a Sr. Solutions Architect based in Boston. He helps customers in the Northeast U.S. accelerate their business transformation through innovative and scalable solutions on the AWS Cloud. When Narcisse is not building, he enjoys spending time with his family, traveling, cooking, playing basketball, and running.

Pedram Jahangiri is an Enterprise Solution Architect with AWS, with a PhD in Electrical Engineering. He has 10+ years of experience in the energy and IT industry. Pedram has many years of hands-on experience in all aspects of advanced analytics, building quantitative and large-scale solutions for enterprises by leveraging cloud technologies.

Sarah Childers is an Account Manager based in Washington DC. She is a former science educator turned cloud enthusiast focused on supporting customers through their cloud journey. Sarah enjoys working alongside a motivated team that encourages diversified ideas to best equip customers with the most innovative and comprehensive solutions.

TAGS: Amazon SageMaker, AWS Lambda, machine-learning, serverless, sustainability
How Technology Leaders Can Prepare for Generative AI _ AWS Cloud Enterprise Strategy Blog.txt
AWS Cloud Enterprise Strategy Blog How Technology Leaders Can Prepare for Generative AI by Phil Le-Brun | on 24 MAY 2023 | in Artificial Intelligence , Generative AI , Thought Leadership | Permalink | Comments |  Share We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. —Roy Amara, Amara’s law I’m fascinated by the technological tipping points in history that have ignited the public’s imagination—the first TV broadcast, manned space flight, or video conference. Each of these events made a previously esoteric technology or concept tangible. As Amara implies in his “law,” these events are preceded by false starts and inflated expectations. When (if) a tipping point is reached, it is usually accompanied by decades of unseen work described by the S-curve of innovation. Think of past promises of virtual worlds becoming commonplace. While expectations have exceeded reality, organisations and leaders that have curiously leaned in to learn, grounding themselves in real-world business problems like customer demand for more immersive customer experiences, are better prepared for when virtual worlds become mainstream. The most glaring current example of such an emerging technology is generative AI. To the public, generative AI has seemingly appeared from nowhere. But if you dig deeper, you’ll note that the ideas underlying generative AI solutions trace their lineage back to inventions such as the Mark I perceptron in 1958 and neural networks in the late twentieth century. Advancements in statistical techniques, the vast growth of publicly available data, and the power of the cloud have all been instrumental in making generative AI possible. You’ve likely come across two terms associated with generative AI. Foundation Models (FMs) are machine learning (ML) models trained on massive quantities of structured and unstructured data, which can be fine-tuned or adapted for more specific tasks. Large Language Models (LLMs) are a subset of FMs focused on understanding and generating human-like text. These models are ideal for needs such as translation, answering questions, summarising information, and creating or identifying images. AWS and Generative AI AWS has been investing in and using FMs for several years in areas such as search on Amazon.com and delivering conversational experiences with Alexa. You’ve probably seen the announcements from AWS on generative AI, so I won’t repeat them here. With all the hype and marketing that can surround new technologies, having a clear executive understanding of the “what” and “why” is foundational. Since the launch of Amazon SageMaker in 2017, there has been a continual stream of ML and AI services broadening the reach of these tools to technologists and non-technologists alike. AWS’s mission has been to expand access, given the profound implications of these technologies. The recent announcements continue this mission with a more open approach to delivering the capabilities organisations need. For example, the approach with Amazon Bedrock will provide wide access to pre-trained models that can be customised with your own data, allow data to be kept private, and leverage the power of the cloud to deliver capabilities securely and at scale. Companies don’t have to think about model hosting, training, or monitoring and can instead focus on the outcomes they are driving towards. Amazon Bedrock addresses the simple fact that one solution – or one model – is unlikely to solve every business problem you face. 
Nor will the costly contribution of confidential data to public models, as some organisations have already learned. While generative AI is neither a silver bullet nor “just a better search engine,” it is clearly now on everyone’s radar. The potential is huge. Imagine pharmaceutical companies accelerating the design of gene therapies, borrowers having rich conversational experiences with mortgage providers that quickly approve their loans, or everyone everywhere gaining opportunities through broadening access to ongoing knowledge and educational pathways. I’m a nearly competent hobbyist coder and look forward to improving my skills with active suggestions from generative AI-powered real-time suggestions. So as a Chief Information Officer, Chief Technology Officer, or Chief Data Officer, what should you be thinking about, and how can you prepare? Here are a few topics we believe are important. Get Focused on Your Cloud Journey Do you remember those TV programmes you used to watch as children, the ones that warned: “Don’t try this at home”? I’d give a variant of this warning with generative AI: “Don’t try this without the cloud.” You want your teams focused on problem-solving and innovation, not on managing the underlying complexity and cost of enabling infrastructure and licenses. The cloud is the enabler for generative AI, making available cost-effective data lakes, sustainably provisioned GPUs and compute, high-speed networking, and consumption-based costing. Coupled with compute instances powered by AWS Trainium and AWS Inferentia chipsets to optimise model training and inferences, the cloud can provide lower costs, better performance, and an improved carbon footprint versus on-premises solutions, if the latter is even a realistic alternative. Get Your Data Foundations Right—Now The boldest house built on dodgy foundations will not last. The same is true in the world of ML. With generative AI, quality trumps the quantity of business data available. While it’s common to talk about technology debt, we need to acknowledge that many organisations have unwittingly accumulated analogous debt with data. This typically stems from a lack of data quality, fragmented or siloed data sources, a lack of data literacy, inadequate upfront considerations of how data should be integrated into products, and a culture that talks about data but doesn’t use it day-to-day. Now is the time to implement these fundamentals (many of which I’ve discussed in my previous blog post , including how critical the leaders of data in an organisation are). After all, the bulk of time spent bringing ML to life is still associated with activities such as data wrangling and labelling . Think Beyond the Technology The world of generative AI is incredibly exciting, but technology rarely operates in a vacuum. Face the law of unintended consequences. Start by considering your stance on ethics, transparency, data attribution, security, and privacy with AI. How can you ensure the technology is used accurately, fairly, and appropriately? Resources exist , as do great readings like Michael Kearns’s book The Ethical Algorithm , but these alone are insufficient. It’s a great opportunity to actually do something! For example, prioritise diversity of skills and worldviews and ensure those engaged in creating and using models represent the diversity of your customers; this helps ensure relevance and the early identification of potential biases. 
Train on these considerations; bake them into your governance and compliance frameworks and even into your vendor selection processes to select partners who share the same values as you. Upskill Yourself and Your People AI simultaneously evokes excitement and concern. It opens a world of knowledge, innovation, and efficiency but leaves many wondering about the implications for their job security. The continued emergence of AI as a profoundly impactful tool requires considering which skills might be needed less in the future and which will be in demand. Consider the technical skills required and how to infuse them into your organisation. Programmes like Machine Learning University can help, but it’s important to think bigger. Skills such as critical thinking and problem-solving will become even more vital. We ultimately want people, assisted by AI, to solve real business challenges and critically assess and question inferences from ML models. This is particularly important with generative AI models that distil data rather than provide considered answers. Make the space to practice these skills by incrementally and consistently eliminating low-value work—perhaps even by using ML! Upskilling goes beyond individuals developing their skills. According to Tom Davenport’s research , 35 percent of Chief Data Officers have found that running data and AI-enabled initiatives are powerful change tools. Hunkering down in data silos in an attempt to deliver value alone has given way to running cross-organisational initiatives. This functional approach helps broaden data advocacy and excitement about what might be possible. Start Considering Use Cases I love the saying, “Fall in love with the problem, not the solution.” It reminds us that while technology is a brilliant enabler, it is just one more set of tools we can apply to real-world problems. What time-consuming, difficult or impossible problems could generative AI help solve? Where do you have data to help in this process? Think big about the opportunities, but start small with problems that cause day-to-day irritations, what we call “paper cuts.” Can these annoyances be automated away, freeing up organisational time while improving comprehension of AI? For instance, developers can use Amazon Code Whisperer to gain an understanding of generative AI’s power in assisting productivity improvements while making suggestions for using unfamiliar APIs, coding more securely, and more. Internal benchmarks show a remarkable 57 percent improvement in productivity while increasing the success rate of completing tasks. What a fantastic, immediate opportunity to be a productivity hero in your organisation! Last, be excited but stay grounded. We’re at an inflexion point with LLMs. Sometimes it feels like the more we learn about AI, the less we know. Approach generative AI with an open, curious mind, but avoid the hype. Critically appraise what you read, and don’t believe there will be a singular best model to adopt. The best approach, and one I’m glad to see AWS has embraced with Amazon Bedrock, is to recognise that different FMs will serve different needs. It democratises access for all builders, allowing commercial and open-source FMs to be adopted. Those already experienced in AI will know this and recognise that the AWS cloud, which provides multiple models, offers a better approach than betting on a single model. 
Phil

Further Reading

Announcing New Tools for Building with Generative AI on AWS, Swami Sivasubramanian
A guide to making your AI vision a reality, Tom Godden
Activating ML in the Enterprise: An Interview with Michelle Lee, VP of Amazon Machine Learning Solutions Labs, Phil Le-Brun
Machine Learning University
Prioritising Business Value Creation from Data, Phil Le-Brun

TAGS: Artificial Intelligence, Machine Learning

Phil Le-Brun is an Enterprise Strategist and Evangelist at Amazon Web Services (AWS). In this role, Phil works with enterprise executives to share experiences and strategies for how the cloud can help them increase speed and agility while devoting more of their resources to their customers. Prior to joining AWS, Phil held multiple senior technology leadership roles at McDonald’s Corporation. Phil has a BEng in Electronic and Electrical Engineering, a Masters in Business Administration, and an MSc in Systems Thinking in Practice.
Idealo Case Study.txt
Tiếng Việt Français 151% conversion rate increase in email campaign Español AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Learn how »  日本語 Amazon SageMaker 2023 With 2.5 million daily page views and over 76 million monthly visits, idealo offers an online portal for customers in six countries across Europe to compare prices for over 500 million products from about 50,000 vendors. User traffic drives revenue from advertisers who closely track certain key performance indicators (KPIs) in the highly competitive retail industry. These KPIs include click-through rates, a measure of how often a customer visits a website to make a purchase, and session rates, the amount of time a user spends on a website. “We wanted to improve what we were already doing as a company and explore other business opportunities,” says Luiz Davi, ML product manager at idealo. “The goal was to build a central offering for the whole company for product recommendations and user-based personalized recommendations.” The team uses solutions from AWS to alleviate much of the manual work involved with the orchestration of data so that it can experiment fast, iterate on models in development, and push useful ML models into production twice as fast as it previously could. It built a pipeline using Amazon SageMaker, which developers use to build, train, and deploy ML models for nearly any use case with fully managed infrastructure, tools, and workflows. “Using Amazon SageMaker really speeds up the whole iteration cycle,” says Arjun Roy, idealo ML engineer. “When I think of innovation, I think about playing around with the data and trying different models. And as an extension to that, the pipelines are very flexible.” For example, ML engineers could run one of their models in one-sixteenth of the time by using a technique called parallelizing. The team spun up 16 compute instances to speed the process of running the model on AWS. “If we had to run the servers and host the applications ourselves, that would require much, much more time,” says Davi. “Now, we can be agile and try different approaches as we go.” Furthermore, idealo allocates costs granularly to certain workloads using the cost transparency of AWS services. AWS Lambda 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Get Started Organizations of all sizes use AWS to increase agility, lower costs, and accelerate innovation in the cloud. Solution | Building an ML Pipeline on AWS that Delivers Personalized Recommendations at Scale Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. Learn more » Outcome | Recommending Products in Near Real Time AWS Services Used Based in Germany, idealo is an online price comparison service that operates in six European countries. The website has over 76 million monthly visits, as customers compare prices for over 500 million products offered from about 50,000 vendors. Luiz Davi, Machine Learning Product Manager, idealo In early 2022, the team developed an ML model that provides complementary recommendations: items that correspond to a purchase, such as a case for a purchased mobile phone. In 3 months, the MLE team released the initial model into production. “That first model showed an impressive improvement from our past benchmark,” says Davi. 
“That opened multiple doors inside the company so that we could move forward and try more.” The team quickly built upon its success with another model that recommends similar products, which are items that are comparable to a purchased item. The team then created an even more sophisticated model, using data about complementary and similar purchases to deliver personalized recommendations to customers. idealo promotes items of interest to customers based on information collected automatically—with permission—about their shopping history. The German price-comparison site idealo built a machine learning pipeline on AWS that facilitated the ability of its data scientists to deliver models that drive improvements in key marketing metrics. idealo offers 500 million products to users in six European countries. Its Machine Learning Engineering team used Amazon SageMaker and AWS Lambda as tools to help the team experiment fast, automate manual processes, and get models into production quickly. Its user-recommendation model increased click-through rates by 111 percent and session rates by 151 percent, and it enhanced the overall customer experience. Bahasa Indonesia Opportunity | Using ML to Attract Customers Online Contact Sales Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي The automated user-recommendation engine has also generated success for the idealo website, which has seen a 111 percent rise in click-through rates and has increased session rates by 151 percent. “We’ve made a huge leap,” says Davi. “We can see the impact that it’s generating for our internal users. And then we see that impact on the website. People are deciding to buy specific items because they found what they wanted.”   中文 (简体) In 2021, idealo, a subsidiary of Axel Springer SE, decided to migrate all in to the cloud. It wanted to remove the operational risks of its aging on-premises data center, improve the scalability and reliability of the idealo solution for its customers, and boost KPIs. The MLE team was an early adopter of AWS services within idealo, identifying several use cases that it wanted to explore to enhance CRM. The team decided to build a small prototype in the cloud that could drive immediate value, and then iterate through A/B testing to evaluate the impact of the ML model and use the insights to steer business decision-making. 111% The Machine Learning Engineering (MLE) team of the German price-comparison service idealo wanted to create a scalable, customizable product recommendation engine to support the company’s marketing efforts. Targeted product recommendations help to increase online traffic, attract merchants, and inform consumers’ purchasing decisions. To build a streamlined, agile machine learning (ML) pipeline to support powerful data-driven recommendation tools, the team turned to solutions from Amazon Web Services (AWS). ML engineers released models into production and significantly improved the effectiveness of its customer-relationship management (CRM) campaigns. The click-through rates have doubled, session rates have increased by 151 percent, and personalized recommendations are enhancing the customer experience. Overview About Company We see great potential as we advance this initiative. 
Using AWS, we create products that support us as a company moving forward.” 154% Türkçe 中文 (繁體) English The team delivers additional functionality to the CRM team through the use of AWS Lambda, a serverless, event-driven compute service that lets organizations run code for virtually any type of application or backend service without provisioning or managing servers. Through AWS Lambda functions, customized bargains automatically generate as part of the CRM team’s monthly email campaign. “We have automated the process so that we don’t have to do manual work to keep it running,” says David Rosin, idealo ML engineer. “We set it up once, and ideally, it runs every month.” Customers who receive the emails see bargains that have been automatically selected specifically for them. “Using the MLE team implementation versus our old top-sellers’ logic, we achieved a conversion rate increase of 154 percent,” says Felix Gehlhaar, idealo’s CRM manager, who closely collaborated with the MLE team. “This is exciting for us.” idealo Doubles Click-Through Rate through Personalized Recommendation Engine Developed Using Amazon SageMaker to production for ML models in half increase in session rates Deutsch As the entire company continues its migration to AWS, internal idealo teams share data and collaborate more effectively. “One of our CRM managers told us that the ability to share information makes his life much simpler,” says Davi.Throughout 2023, the MLE team plans to explore using near-real-time data to continue to improve KPIs by driving recommendations, a process that builds upon its strong ML pipeline. “There’s a lot to build,” says Davi. “We have never tried something like this before, but we see great potential as we advance this initiative. Using AWS, we create products that support us as a company moving forward.” Customer Stories / Retail / Germany AWS Customer Success Stories Italiano ไทย Learn more » Cut time rise in click-through rates Português
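As a rough sketch of how a Lambda function behind the monthly campaign described in this story might be structured, the following Python handler reads precomputed recommendations from Amazon S3 and returns the bargains for a single user. The bucket, key layout, and field names are hypothetical; idealo's actual implementation is not public.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket holding precomputed, per-user recommendations.
    BUCKET = "example-recommendations-bucket"

    def handler(event, context):
        """Return the precomputed bargain list for the user named in the event payload."""
        user_id = event["user_id"]
        obj = s3.get_object(Bucket=BUCKET, Key=f"monthly-campaign/{user_id}.json")
        recommendations = json.loads(obj["Body"].read())
        # Keep only the top offers for the email template.
        return {"user_id": user_id, "bargains": recommendations[:10]}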
IDEMIA Case Study _ Security and Compilance _ AWS.txt
Transforming a compliance-driven, on-premise suite into a SaaS solution posed technical challenges, but Jerry O’Brien, IDEMIA’s Chief Product Manager, knew that AWS was the answer. “Many smaller jurisdictions would never have been able to afford our original product,” explains O’Brien. “But with the AWS Cloud, we saw that we could automate delivery, implementation and offer a subscription price model providing predictable year-to-year budgeting.” Technology and identity-security company IDEMIA is a biometrics industry leader known for forensic analysis software that enables law enforcement agencies to scan and identify fingerprints at scale. To expand their market range and serve more customers, IDEMIA needed to adapt their enterprise application into a lightweight, cloud-based software as a service (SaaS) solution, which would offer a subscription cost model that small agencies could deploy. IDEMIA leveraged the Amazon Web Services (AWS) Cloud and the AWS Go to Market team to bring their new solution, STORM ABIS, to life. Français Benefits of AWS Español “It’s about speed,” says Coleman. “If you can run prints right then and there, locally, you can solve crimes faster. You can solve problems faster, you can be proactive, you can catch repeat offenders—and your community will be safer, as a result.” For IDEMIA, the build-market-sell approach was a huge success—only weeks after launching the software, IDEMIA made their first sale. The first deployment of STORM ABIS will launch in Washington County in Oregon in Spring of 2022. Elastic Load Balancing 日本語 Amazon Elastic Block Store (EBS) Get Started 한국어 Trusted by hundreds of governments and thousands of enterprises in over 180 countries, IDEMIA is a global leader in providing identity-related security services. IDEMIA’s technologies enable our clients to credentialize, authenticate and analyze identities for frictionless access control, connectivity, identity, payments, public security, and travel—at scale and in total security. Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs). The product was created with collaboration from AWS in two strategic areas. AWS provided strategic advisory services with a dedicated team of business and technical professionals from our AWS Service Creation and the AWS Professional Services teams. The final product is a multi-tenant solution, backed by Amazon Elastic Compute Cloud (Amazon EC2) instances, that can be deployed within weeks or faster, if an agency already had a mature cloud environment. To make the solution customizable, the IDEMIA team also created a features toggle, providing agencies the option to turn certain product features on or off depending on their needs. AWS and IDEMIA finished building STORM ABIS in 2021. And with an agile, scalable, and cost-effective end-product in hand, it was time to go to market. Amazon EC2 Jerry O’Brien Chief Product Manager, IDEMIA AWS Services Used With STORM ABIS ready for general availability, Randy Jones, AWS Independent Software Vendor (ISV) Acceleration Manager, worked alongside IDEMIA to implement a marketing and sales strategy targeting mid-market law enforcement agencies across the United States. Beyond providing funding and expertise to market the product, Jones and his team also helped support the product launch at IDEMIA’s annual user conference. 
“We had multiple members of AWS's Justice and Public Safety team attend the conference, which was crucial to connect with customers,” explains Jones. Jeremy Slavish of AWS’s Justice and Public Safety team had procured and used IDEMIA solutions in a previous role and worked closely with many attendees to determine how STORM would meet their unique ABIS needs. 中文 (繁體) Bahasa Indonesia Amazon Aurora Keeping communities safe with agile, cloud-based solutions Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Ρусский عربي Now, officers can have this technology at their fingertips, and it’s cost-effective...they can run prints for cold cases and minor crimes that might not otherwise be solved—and get repeat offenders off the streets." No hardware, training, or on-boarding required Choosing AWS for compliance and scalability Learn more » STORM ABIS needed to adhere to strict security and compliance regulations from local, state, and federal agencies. AWS offered IDEMIA the security configurations they required, along with access to a team of cloud experts that could help build the solution from scratch. With a compliant and secure foundation to build on, IDEMIA and AWS worked together to design a cloud-first application that was made by examiners, for examiners. Christopher Coleman, Senior Director of Marketing at IDEMIA, adds that AWS not only offered their knowledge and networking support during product development, but they also supplemented their marketing and sales staff. “AWS expanded our reach beyond large federal and state agencies to reach those critical Tier two, three, and four jurisdictions,” Coleman says. “And they also provided extra support for sales and marketing to help us foster relationships in those smaller cities and counties. That was critical because we simply didn’t have the bandwidth to tackle that on our own.” Easily scales to add more users, as needed Türkçe Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Making America's Neighborhoods Safer with IDEMIA Cloud-Based Fingerprinting Software English For small agencies, STORM ABIS is about more than cost savings. It’s about giving every law enforcement officer the tools they need to solve crimes faster—which ultimately acts as a preventative measure to keep communities safe. “Before STORM, local jurisdictions had to prioritize which fingerprints they ran because they had limited access to resources and a long backlog,” says O’Brien. “Now, officers have this technology at their fingertips, and it’s cost-effective—so they can run 10, 20, or 100 prints at a time. They can run prints for cold cases and minor crimes that might not otherwise be solved—and get repeat offenders off the streets.” An out-of-box SaaS-based solution, easily deployable by agencies of any size About IDEMIA Deutsch Amazon Aurora is a relational database management system (RDBMS) built for the cloud with full MySQL and PostgreSQL compatibility. Tiếng Việt Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2). 
Cloud storage automatically backs up data
Marketing and launching the new product
2022
Accessible anywhere, via any web browser—including from home or directly from a crime scene
Continuous updates of algorithms, features, and security patches via cloud-native architecture
Illumina Case Study _ Genomics _ AWS.txt
Companies of all sizes across all industries are transforming their businesses every day using AWS. Contact our experts and start your own AWS Cloud journey today. Illumina platforms are also helping research transition seamlessly into a multiomic future. The cloud-based DRAGEN Single-Cell RNA Pipeline, for example, allows scientists to annotate gene expression in individual cells. With the DRAGEN-acceleration, the platform can process three cell samples simultaneously in parallel in approximately 53 minutes. Français Benefits of AWS While advanced users have the option to customize tools like ICA and DRAGEN to perform niche research, Illumina also offers end-to-end cloud solutions with out-of-the-box functionality for specific uses. These include the TruSightTM Software Suite, a variant analysis software solution for uncovering rare disease insights, and TruSight Oncology 500, a fine-tuned sequencing assay for analyzing tumors and identifying immune-oncology biomarkers. Español Amazon EC2 Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. Learn more » 日本語 AWS Services Used See how AWS is supporting other leading life science organizations in their quest to improve human health.    With large population genetics initiatives on the rise and expanding access to powerful analysis software solutions like ICA, Illumina is fully embracing the power of “big data” in genomics to help customers mine rich insights from massive volumes of sequencing data. These projects will fuel a new era of personalized genomics, allowing researchers to draw connections between genes and health outcomes that were not evident in smaller samples. “The genomics industry is expanding in all directions, from direct-to-consumer testing to personalized cancer vaccines,” says Susan Tousi, Illumina’s chief commercial officer. “Illumina’s goal is to democratize access to genomics technologies around the globe; we’ve partnered with AWS from the beginning to give our customers the answers they need. Over the past decade, we’ve expanded our software portfolio available on AWS to provide a seamless, holistic suite of solutions that can be deployed out-of-the-box or customized to meet specific needs.” Building the Future of Genomics and Biotechnology 한국어 Amazon EC2 Spot Instances “With ICA, DRAGEN, and other tools deployed on AWS, we’re providing solutions that enable customers to aggregate any data types, including NGS and health data, to extract novel information from those large cohorts and improve human health at scale,” says Mehio. Data for these platforms is stored on Amazon Simple Storage Service (Amazon S3), a scalable object storage service. Illumina customers power and dramatically accelerate their analyses with DRAGEN running on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. Navigating from Sample to Answer Illumina also lowers costs for customers by running many of its platforms’ compute jobs on Amazon EC2 Spot Instances, which are available at up to a 90 percent discount compared to On-Demand pricing.  “Our customers have used hundreds of thousands of hours of Spot Instances in the past year alone, which has provided significant cost savings for them,” says Tousi. 
Learn More AWS Virtual Private Cloud AWS supports thousands of security standards and compliance certifications, including HIPAA, GDPR, ISO 27001, and ISO 13485, helping customers satisfy compliance requirements throughout their genomics workflows. Illumina offers customers extra peace of mind by offering data management in Amazon Virtual Private Cloud (Amazon VPC), which launches other AWS resources in a logically isolated custom virtual network that separates one customer’s data from another’s. Deployed robust portfolio of genomics solutions globally in secure and compliant environment Cost savings and technical advantages can go hand in hand. Illumina recently migrated the tertiary analysis Correlation Engine to AWS, saving costs while scaling data ingestion pipelines to by six times to make the knowledgebase grow faster and become more powerful. Amazon S3 Storage Classes can be customized according to different data needs, making it easy for Illumina to optimize for maximum cost savings. By storing petabytes of infrequently accessed data in Amazon S3 Glacier Deep Archive, Illumina customers save over 90 percent in storage costs. Similarly, DRAGEN runs on Amazon EC2 F1 instances, which offer affordable, accelerated computing that can support the parallel processes Illumina needs. F1 instances offer customizable hardware acceleration with DRAGEN field-programmable gate arrays (FPGAs). To scale DRAGEN across F1 instances, the company used AWS Batch, a fully managed batch processing service that plans, schedules, and executes batch computing workloads. In the last decade, genomics has evolved from a specialty research area into a powerful clinical tool that has ushered in a new era of patient-focused healthcare. Genome sequencing and analysis have become simpler, cheaper, and more comprehensive, making it realistic for clinicians to order genetic tests for individual patients and for researchers to examine thousands of samples to draw connections between genetic variation and human disease. While the first human genome took decades to sequence, scientists can now efficiently sequence an entire human genome in under 24 hours. 中文 (繁體) Bahasa Indonesia Illumina's mission is to unlock the power of the genome to improve human health. An AWS Partner, the company has been a driving force behind technological advancement in genomics, evolving from a sequencing instrument vendor into a complete genomic solutions provider and deploying software solutions on Amazon Web Services (AWS) since 2013. Illumina’s AWS-backed software solutions are lowering barriers to entry and helping researchers generate new discoveries every day, driving drug discovery and more.  This global scalability and deployment facilitates meaningful collaboration for both long-term projects and expedient crisis response. Researchers worldwide processed over 371,000 COVID-19-related samples on Illumina’s COVID-19 BaseSpace Apps in 2020 and the first half of 2021. “If customers were only able to do this on premises, we would have met serious constraints. Therefore, the cloud was key for powering the global pandemic response on that level,” says Tousi. Contact Sales Ρусский عربي “AWS provides us options to optimize for speed, flexibility, and cost and cater for the end customer use case and needs,” says Mehio. “Some users may want to perform genetic analyses as quickly as possible, whereas some academic users might opt to sacrifice some speed to lower costs and save research dollars. 
By leveraging different F1 instance types and storage options, our users maintain flexibility and the ability to scale up and down as needed.” Reducing Costs by Saving on AWS 中文 (简体) A complete next-generation genomics workflow starts with sample collection, preparation, and sequencing, but that’s just the beginning. After that comes the heavy bioinformatics lifting, starting with raw read quality control, data preprocessing, and alignment. Scientists can then move into secondary analyses like variant calling, and finally, conduct advanced tertiary analyses based on their interests. These tertiary analyses can include phylogenetic annotation, genotype-phenotype associations, and much more. For researchers and clinicians who aren’t bioinformatics experts, performing each step on a separate platform can quickly become overwhelming. Learn more » Learn more » “We want to democratize access to genomics technologies; passing cost savings on to our customers is a huge part of this effort,” says Tousi. “Cost should not be a deciding factor for research or clinical applications—people should perform sequencing and analysis purely based on how they anticipate being able to use the data.” “Security is job zero––it’s at the center of everything we do,” says Tousi. “At the very foundation, we can count on the AWS Shared Responsibility Model to ensure that our underlying cloud infrastructure maintains enterprise-level security and compliance. By leveraging Amazon EC2 Regions globally, we’re bringing compute to the data, supporting customers in all regions while allowing them to maintain data sovereignty.” About Illumina Get Started Rami Mehio Vice President of Bioinformatics and Instrument Software, Illumina AWS Healthcare & Life Sciences Virtual Symposium 2021: Illumina We’re delivering a complete workflow—from sample preparation to tertiary analysis—in the secure AWS environment that allows all of the information generated before and after sequencing to be aggregated and analyzed.” Türkçe Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. English Illumina streamlines this entire genomics workflow for customers, offering integrated solutions for every step. Starting from the beginning, BaseSpaceTM Clarity LIMS (Laboratory Information Management Systems) helps genomics customers track samples and optimize sequencing workflows. Sequencing instruments can upload data directly into the Illumina Connected Analytics (ICA) platform, where users can manage datasets and leverage analytical tools within the platform on AWS. The DRAGENTM Bio-IT platform provides accurate, ultra-rapid secondary analysis results. At the same time, BaseSpace Correlation Engine integrates individuals’ datasets and queries into a repository of open-access and controlled-access public datasets to enable a wide variety of tertiary analyses. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. 
Illumina Brings Genomics from Samples to Answers Using AWS

Results at a glance:
- Accelerated research and promoted collaboration, with customers worldwide processing over 371,000 COVID-19-related samples
- Facilitated access to streamlined, unified, customizable samples-to-analysis workflows
- Drastically reduced computing and storage costs with Amazon EC2 Spot Instances and Amazon S3 Glacier

"We're delivering a complete workflow—from sample preparation to tertiary analysis—in the secure AWS environment that allows all of the information generated before and after sequencing to be aggregated and analyzed," says Rami Mehio, vice president of software and bioinformatics at Illumina. "That's powerful for customers who want to track samples over time, cross-reference their data with publicly available databases, and glean insights for faster results."

Since its inception, Illumina has reduced the cost of genomics technology at a rate that exceeds Moore's Law. Sequencing a single human genome cost over $100 million in 2001; 20 years later, it can cost as little as $600.

"We rely on the strength of AWS tools as a backbone that allows us to focus on designing genomics-specific algorithms," says Mehio. "As researchers' and clinicians' needs change, we can easily deploy new features and versions of our products."

Secure Solutions for Scaling Global Genomics
Human genomic data can be associated with highly personal health information, and data breaches are an ever-growing risk for healthcare organizations worldwide. As a result, security is a paramount consideration for Illumina and its customers, many of whom must adhere to increasingly strict data management regulations.

About Illumina (2021)
Illumina develops, manufactures, and markets integrated systems for analyzing genetic variation and biological function.

Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define.
Illumina Reduced Carbon Emissions by 89 and Lowered Data Storage Costs Using AWS _ Illumina Case Study _ AWS.txt
2023

Learn how Illumina, in the life sciences industry, drove sustainability, reduced costs, and optimized data storage using AWS.

Highlights:
- 89% carbon emissions savings compared to the on-premises equivalent
- 60% reduction in data storage costs
- 50 PB of data stored in Amazon S3 Intelligent-Tiering, simplifying management
- Transferred data into an Amazon S3 storage class in minutes
- Supports company-wide sustainability goals

Opportunity | Using Amazon S3 Intelligent-Tiering to Manage a Growing Data Footprint for Illumina
In 2012, Illumina expanded its line of products to include BaseSpace Sequence Hub—a push-button platform for data management and analysis—where its customers can process, analyze, and store their genomic data securely in the cloud using a basic internet connection. In 2021, Illumina released Illumina Connected Analytics, a secure and flexible bioinformatics platform to drive scientific insights, providing its customers with a scalable and highly configurable platform.

Outcome | Reducing Costs and Optimizing Data Storage Using Amazon S3
"Before S3 Intelligent-Tiering, we were analyzing our bill every month to try to find ways to reduce our data storage costs," says Maynard. Previously, Illumina's teams would use Amazon S3 lifecycle policies to transition data into different Amazon S3 storage classes to cut storage costs. To streamline this task and optimize its data storage, Illumina decided to adopt the S3 Intelligent-Tiering storage class. By using S3 Intelligent-Tiering, Illumina could allocate its cost savings toward expanding its service and software offerings, enhancing the customer experience.

Outcome | Reducing Carbon Emissions by 89% Using AWS Compared to On-Premises
Using the AWS Customer Carbon Footprint Tool, Illumina realized an 89 percent reduction of carbon emissions for its usage in AWS during the 12-month period ending November 2022. During this period, the tool reported 290 metric tons of carbon dioxide equivalent (MTCO2e) for Illumina's usage in AWS, compared to an estimated 2,657 MTCO2e if the same workloads were run in an on-premises data center. "Illumina has committed to net-zero emissions by 2050 for our direct operations and across our value chain," says Sharon Vidal, head of corporate social responsibility at Illumina. "As data demands increase, we are thrilled at the opportunity to reduce carbon emissions not only for our environmental footprint but also for our customers on their sustainability journeys."

Illumina further optimized its storage footprint by offering customers access to DRAGEN Original Read Archive (ORA) compression technology. DRAGEN (Dynamic Read Analysis for Genomics), Illumina's premier secondary analysis solution, provides accurate, comprehensive, and efficient secondary analysis for customers performing genomic analysis. DRAGEN ORA technology reduces the data footprint of a human genome by up to 80 percent, easing the burden of data storage for customers. This technology can drastically reduce customers' data storage needs while lowering associated carbon emissions and unlocking additional cost savings.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead.
About Illumina
Illumina is a leading developer, manufacturer, and marketer of life science tools and systems for large-scale genetics analysis. Founded in 1998, Illumina offers a full range of software, instruments, and services that help its customers analyze genomes, make rapid advancements in life sciences research, and improve human health. Illumina's customers use its genetic-sequencing solutions to accelerate therapeutic and pharmaceutical insights. The company specializes in genetic sequencing, and its mission is to improve human health by unlocking the power of the genome.

Overview
As the company expanded its customer base and product line, the amount of genetic data that Illumina securely stored in the cloud grew exponentially—from 1 PB to 100 PB in 8 years. The company's data growth continued to accelerate, and during 2021–2022 alone, Illumina added over 24 PB of data in Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve virtually any amount of data from anywhere. Further, Illumina predicted that its stored data would continue to double every 2 years, prompting the company to explore ways to optimize its data storage, maximize cost savings, and reduce its carbon emissions.

"Typically, our customers keep a copy of the data that they generate through BaseSpace Sequence Hub," says Al Maynard, director of software engineering at Illumina. "Our total data footprint has been climbing very fast because our customers rarely delete genomic data that could be used for future analysis." Because its customers process their analytics on demand, it is a challenge for Illumina to predict when customers will need access to specific data.

Opportunity | Driving Sustainability Using AWS
With its mission to improve human health and a commitment to operate responsibly and sustainably, Illumina used the AWS Customer Carbon Footprint Tool, which tracks, measures, reviews, and forecasts the carbon emissions generated from AWS usage, to monitor the carbon emissions of its own AWS usage. The tool uses easy-to-understand data visualizations to show customers their historical carbon emissions, evaluate emission trends as their use of AWS evolves, approximate the emissions avoided by using AWS instead of an on-premises data center, and review forecasted emissions based on current use. The forecasted emissions show how a customer's carbon footprint will change as AWS stays on its path to powering its operations with 100 percent renewable energy by 2025 and to reaching net-zero carbon by 2040 as part of The Climate Pledge. Studies conducted by the international analyst firm 451 Research found that moving on-premises workloads to AWS can lower the workload carbon footprint by at least 80 percent, and by up to 96 percent once AWS is powered with 100 percent renewable energy, a target it is on a path to meet by 2025. The infrastructure of AWS is 3.6 times more energy efficient than the median of surveyed US enterprise data centers and up to 5 times more energy efficient than the average in the EU.
Illumina Reduced Carbon Emissions by 89% and Lowered Data Storage Costs Using AWS

For over 10 years, Illumina has stored data in AWS using Amazon S3. While looking for ways to optimize its data storage using AWS best practices, Illumina began using Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering), which automates storage cost savings by moving data when access patterns change, automatically shifting objects that have not been accessed to lower-cost access tiers. This proved ideal for Illumina, given its customers' unpredictable data access patterns; many of Illumina's customers frequently access their genomic data during data generation, after which it lies dormant until reanalysis is needed.

Illumina first tested the S3 Intelligent-Tiering storage class in its test environment and then ran a limited pilot with production data in AWS. A few months later, the company decided to transition 50 PB of data from its BaseSpace Sequence Hub to the S3 Intelligent-Tiering storage class, which took only a few minutes to set up. By using S3 Intelligent-Tiering, Illumina streamlined its internal workflows, simplified its data management, and benefited from more predictable and lower-cost storage pricing, all while experiencing the same performance as the Amazon S3 Standard storage class.

After just 3 months of using S3 Intelligent-Tiering, Illumina began to see significant monthly cost savings. For every 1 TB of data, the company saves 60 percent on storage costs. "I think it's the biggest return on investment that we've ever seen," says Maynard. Further, Illumina can provide its customers with near-instant access to thousands of whole genome sequences at a low, competitive cost, helping customers accelerate their research and development.

Advancing Analytics and Further Optimizing Data Storage on AWS
Illumina is now in the process of moving its data from research and development and from Illumina Connected Analytics into S3 Intelligent-Tiering so that it can further optimize its data storage and reduce costs. The company is also looking at using Amazon S3 Storage Lens, which delivers organization-wide visibility into object-storage usage and activity trends while making actionable recommendations to improve cost efficiency and apply best practices for data protection. "By using AWS, we can limit how much we have to think about managing our data," says Maynard. "AWS does all the hard work for us, and we get the benefit of extra storage savings and continuous innovation to improve energy efficiency."
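As an illustration of the kind of setup described above, here is a minimal, hypothetical boto3 sketch that writes an object directly into the S3 Intelligent-Tiering storage class and enables the optional archive access tiers for rarely touched data. The bucket name, prefix, and tiering thresholds are placeholders, not Illumina's actual configuration.

import boto3

s3 = boto3.client("s3")
bucket = "example-genomics-bucket"  # placeholder bucket name

# Upload new objects directly into the Intelligent-Tiering storage class.
with open("sample.fastq.gz", "rb") as data:
    s3.put_object(
        Bucket=bucket,
        Key="sequencing/run-0001/sample.fastq.gz",
        Body=data,
        StorageClass="INTELLIGENT_TIERING",
    )

# Optionally enable the archive access tiers for objects that stay dormant.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket=bucket,
    Id="archive-dormant-genomic-data",
    IntelligentTieringConfiguration={
        "Id": "archive-dormant-genomic-data",
        "Status": "Enabled",
        "Filter": {"Prefix": "sequencing/"},
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)

With this kind of configuration, objects move between access tiers automatically as access patterns change, which is what removes the need for hand-tuned lifecycle policies.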
Implement unified text and image search with a CLIP model using Amazon SageMaker and Amazon OpenSearch Service _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Implement unified text and image search with a CLIP model using Amazon SageMaker and Amazon OpenSearch Service
by Kevin Du and Ananya Roy | on 05 APR 2023 | in Advanced (300), Amazon OpenSearch Service, Amazon SageMaker

The rise of text and semantic search engines has made search easier for consumers of ecommerce and retail businesses. Search engines powered by unified text and image search provide extra flexibility, because you can use both text and images as queries. For example, suppose you have a folder of hundreds of family pictures on your laptop and want to quickly find a picture taken when you and your best friend were in front of your old house's swimming pool. You can use conversational language like "two people stand in front of a swimming pool" as a query in a unified text and image search engine; you don't need the right keywords in image titles for the query to work.

Amazon OpenSearch Service now supports the cosine similarity metric for k-NN indexes. Cosine similarity measures the cosine of the angle between two vectors, where a smaller angle denotes higher similarity between the vectors. With cosine similarity, you can measure the orientation between two vectors, which makes it a good choice for many semantic search applications.

Contrastive Language-Image Pre-Training (CLIP) is a neural network trained on a variety of image and text pairs. The CLIP neural network projects both images and text into the same latent space, which means they can be compared using a similarity measure such as cosine similarity. You can use CLIP to encode your products' images or descriptions into embeddings and then store those embeddings in an OpenSearch Service k-NN index. Your customers can then query the index to retrieve products that they're interested in.

You can use CLIP with Amazon SageMaker to perform the encoding. Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy to deploy and scale machine learning (ML) models. With SageMaker, you can deploy serverless endpoints for dev and test, and then move to real-time inference when you go to production. Serverless inference helps you save cost by scaling infrastructure down to zero during idle times, which is ideal for building a proof of concept (POC), where there are long idle periods between development cycles. You can also use Amazon SageMaker batch transform to get inferences from large datasets.

In this post, we demonstrate how to build a search application using CLIP with SageMaker and OpenSearch Service. The code is open source, and it is hosted on GitHub.

Solution overview
OpenSearch Service provides text-matching and embedding k-NN search. We use embedding k-NN search in this solution, so you can use both an image and text as a query to search items from the inventory. Implementing this unified image and text search application consists of two phases:

k-NN reference index – In this phase, you pass a set of corpus documents or product images through a CLIP model to encode them into embeddings. Text and image embeddings are numerical representations of the corpus or images, respectively. You save those embeddings into a k-NN index in OpenSearch Service. The concept underpinning k-NN is that similar data points exist in close proximity in the embedding space.
As an example, the text "a red flower," the text "rose," and an image of a red rose are similar, so these text and image embeddings are close to each other in the embedding space.

k-NN index query – This is the inference phase of the application. In this phase, you submit a text or image search query through the deep learning model (CLIP) to encode it as embeddings. Then you use those embeddings to query the reference k-NN index stored in OpenSearch Service. The k-NN index returns similar embeddings from the embedding space. For example, if you pass the text "a red flower," it would return the embeddings of a red rose image as a similar item.

The following figure illustrates the solution architecture. The workflow steps are as follows:
1. Create a SageMaker model from a pretrained CLIP model for batch and real-time inference.
2. Generate embeddings of product images using a SageMaker batch transform job.
3. Use SageMaker Serverless Inference to encode query images and text into embeddings in real time.
4. Use Amazon Simple Storage Service (Amazon S3) to store the raw text (product descriptions), images (product images), and the image embeddings generated by the SageMaker batch transform jobs.
5. Use OpenSearch Service as the search engine to store embeddings and find similar embeddings.
6. Use a query function to orchestrate encoding the query and performing a k-NN search.

We use Amazon SageMaker Studio notebooks (not shown in the diagram) as the integrated development environment (IDE) to develop the solution.

Set up solution resources
To set up the solution, complete the following steps:
1. Create a SageMaker domain and a user profile. For instructions, refer to Step 5 of Onboard to Amazon SageMaker Domain Using Quick setup.
2. Create an OpenSearch Service domain. For instructions, see Creating and managing Amazon OpenSearch Service domains. You can also use an AWS CloudFormation template by following the GitHub instructions to create a domain.

You can connect Studio to Amazon S3 from Amazon Virtual Private Cloud (Amazon VPC) using an interface endpoint in your VPC, instead of connecting over the internet. By using an interface VPC endpoint (interface endpoint), the communication between your VPC and Studio is conducted entirely and securely within the AWS network. Your Studio notebook can connect to OpenSearch Service over a private VPC to ensure secure communication. OpenSearch Service domains offer encryption of data at rest, which is a security feature that helps prevent unauthorized access to your data. Node-to-node encryption provides an additional layer of security on top of the default features of OpenSearch Service. Amazon S3 automatically applies server-side encryption (SSE-S3) for each new object unless you specify a different encryption option. In the OpenSearch Service domain, you can attach identity-based policies that define who can access a service, which actions they can perform, and, if applicable, the resources on which they can perform those actions.

Encode images and text pairs into embeddings
This section discusses how to encode images and text into embeddings. This includes preparing the data, creating a SageMaker model, and performing batch transform using the model.

Data overview and preparation
You can use a SageMaker Studio notebook with a Python 3 (Data Science) kernel to run the sample code. For this post, we use the Amazon Berkeley Objects Dataset. The dataset is a collection of 147,702 product listings with multilingual metadata and 398,212 unique catalogue images.
We only use the item images and item names in US English. For demo purposes, we use approximately 1,600 products. For more details about this dataset, refer to the README. The dataset is hosted in a public S3 bucket. There are 16 files that include product description and metadata of Amazon products in the format of listings/metadata/listings_<i>.json.gz. We use the first metadata file in this demo.

You use pandas to load the metadata, then select products that have US English titles from the data frame. Pandas is an open-source data analysis and manipulation tool built on top of the Python programming language. You use an attribute called main_image_id to identify an image. See the following code:

meta = pd.read_json("s3://amazon-berkeley-objects/listings/metadata/listings_0.json.gz", lines=True)

def func_(x):
    us_texts = [item["value"] for item in x if item["language_tag"] == "en_US"]
    return us_texts[0] if us_texts else None

meta = meta.assign(item_name_in_en_us=meta.item_name.apply(func_))
meta = meta[~meta.item_name_in_en_us.isna()][["item_id", "item_name_in_en_us", "main_image_id"]]
print(f"#products with US English title: {len(meta)}")
meta.head()

There are 1,639 products in the data frame. Next, link the item names with the corresponding item images. images/metadata/images.csv.gz contains image metadata. This file is a gzip-compressed CSV file with the following columns: image_id, height, width, and path. You can read the metadata file and then merge it with item metadata. See the following code:

image_meta = pd.read_csv("s3://amazon-berkeley-objects/images/metadata/images.csv.gz")
dataset = meta.merge(image_meta, left_on="main_image_id", right_on="image_id")
dataset.head()

You can use the SageMaker Studio notebook Python 3 kernel built-in PIL library to view a sample image from the dataset:

from sagemaker.s3 import S3Downloader as s3down
from pathlib import Path
from PIL import Image

def get_image_from_item_id(item_id="B0896LJNLH", return_image=True):
    s3_data_root = "s3://amazon-berkeley-objects/images/small/"
    item_idx = dataset.query(f"item_id == '{item_id}'").index[0]
    s3_path = dataset.iloc[item_idx].path
    local_data_root = f'./data/images'
    local_file_name = Path(s3_path).name
    s3down.download(f'{s3_data_root}{s3_path}', local_data_root)
    local_image_path = f"{local_data_root}/{local_file_name}"
    if return_image:
        img = Image.open(local_image_path)
        return img, dataset.iloc[item_idx].item_name_in_en_us
    else:
        return local_image_path, dataset.iloc[item_idx].item_name_in_en_us

image, item_name = get_image_from_item_id()
print(item_name)
image

Model preparation
Next, create a SageMaker model from a pretrained CLIP model. The first step is to download the pretrained model weight file, put it into a model.tar.gz file, and upload it to an S3 bucket. The path of the pretrained model can be found in the CLIP repo. We use a pretrained ResNet-50 (RN50) model in this demo. See the following code:

%%writefile build_model_tar.sh
#!/bin/bash
MODEL_NAME=RN50.pt
MODEL_NAME_URL=https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt
BUILD_ROOT=/tmp/model_path
S3_PATH=s3://<your-bucket>/<your-prefix-for-model>/model.tar.gz

rm -rf $BUILD_ROOT
mkdir $BUILD_ROOT
cd $BUILD_ROOT && curl -o $BUILD_ROOT/$MODEL_NAME $MODEL_NAME_URL
cd $BUILD_ROOT && tar -czvf model.tar.gz .
aws s3 cp $BUILD_ROOT/model.tar.gz $S3_PATH

!bash build_model_tar.sh

You then need to provide an inference entry point script for the CLIP model.
CLIP is implemented using PyTorch, so you use the SageMaker PyTorch framework. PyTorch is an open-source ML framework that accelerates the path from research prototyping to production deployment. For information about deploying a PyTorch model with SageMaker, refer to Deploy PyTorch Models. The inference code accepts two environment variables: MODEL_NAME and ENCODE_TYPE. This helps us switch between different CLIP models easily. We use ENCODE_TYPE to specify whether we want to encode an image or a piece of text. Here, you implement the model_fn, input_fn, predict_fn, and output_fn functions to override the default PyTorch inference handler. See the following code:

!mkdir -p code

%%writefile code/clip_inference.py

import io
import json
import logging
import os
import sys

import clip
import torch
from PIL import Image

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))

MODEL_NAME = os.environ.get("MODEL_NAME", "RN50.pt")
# ENCODE_TYPE could be IMAGE or TEXT
ENCODE_TYPE = os.environ.get("ENCODE_TYPE", "TEXT")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Define the model and load weights into it.
def model_fn(model_dir):
    model, preprocess = clip.load(os.path.join(model_dir, MODEL_NAME), device=device)
    return {"model_obj": model, "preprocess_fn": preprocess}

# Data loading
def input_fn(request_body, request_content_type):
    assert request_content_type in (
        "application/json",
        "application/x-image",
    ), f"{request_content_type} is an unknown type."
    if request_content_type == "application/json":
        data = json.loads(request_body)["inputs"]
    elif request_content_type == "application/x-image":
        image_as_bytes = io.BytesIO(request_body)
        data = Image.open(image_as_bytes)
    return data

# Inference
def predict_fn(input_object, model):
    model_obj = model["model_obj"]
    # For image preprocessing
    preprocess_fn = model["preprocess_fn"]
    assert ENCODE_TYPE in ("TEXT", "IMAGE"), f"{ENCODE_TYPE} is an unknown encode type."
    # Preprocessing
    if ENCODE_TYPE == "TEXT":
        input_ = clip.tokenize(input_object).to(device)
    elif ENCODE_TYPE == "IMAGE":
        input_ = preprocess_fn(input_object).unsqueeze(0).to(device)
    # Inference
    with torch.no_grad():
        if ENCODE_TYPE == "TEXT":
            prediction = model_obj.encode_text(input_)
        elif ENCODE_TYPE == "IMAGE":
            prediction = model_obj.encode_image(input_)
    return prediction

# Serialize the prediction result into the desired response content type
def output_fn(predictions, content_type):
    assert content_type == "application/json"
    res = predictions.cpu().numpy().tolist()
    return json.dumps(res)

The solution requires additional Python packages during model inference, so you can provide a requirements.txt file to allow SageMaker to install additional packages when hosting models:

%%writefile code/requirements.txt
ftfy
regex
tqdm
git+https://github.com/openai/CLIP.git

You use the PyTorchModel class to create an object that contains the information of the model artifacts' Amazon S3 location and the inference entry point details. You can use the object to create batch transform jobs or deploy the model to an endpoint for online inference.
See the following code:

from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role, Session

role = get_execution_role()
shared_params = dict(
    entry_point="clip_inference.py",
    source_dir="code",
    role=role,
    model_data="s3://<your-bucket>/<your-prefix-for-model>/model.tar.gz",
    framework_version="1.9.0",
    py_version="py38",
)

clip_image_model = PyTorchModel(
    env={'MODEL_NAME': 'RN50.pt', "ENCODE_TYPE": "IMAGE"},
    name="clip-image-model",
    **shared_params
)

clip_text_model = PyTorchModel(
    env={'MODEL_NAME': 'RN50.pt', "ENCODE_TYPE": "TEXT"},
    name="clip-text-model",
    **shared_params
)

Batch transform to encode item images into embeddings
Next, we use the CLIP model to encode item images into embeddings and use SageMaker batch transform to run batch inference.

Before creating the job, use the following code snippet to copy item images from the Amazon Berkeley Objects Dataset public S3 bucket to your own bucket. The operation takes less than 10 minutes.

from multiprocessing.pool import ThreadPool
import boto3
from tqdm import tqdm
from urllib.parse import urlparse

s3_sample_image_root = "s3://<your-bucket>/<your-prefix-for-sample-images>"
s3_data_root = "s3://amazon-berkeley-objects/images/small/"

client = boto3.client('s3')

def upload_(args):
    client.copy_object(CopySource=args["source"], Bucket=args["target_bucket"], Key=args["target_key"])

arguments = []
for idx, record in dataset.iterrows():
    argument = {}
    argument["source"] = (s3_data_root + record.path)[5:]
    argument["target_bucket"] = urlparse(s3_sample_image_root).netloc
    argument["target_key"] = urlparse(s3_sample_image_root).path[1:] + record.path
    arguments.append(argument)

with ThreadPool(4) as p:
    r = list(tqdm(p.imap(upload_, arguments), total=len(dataset)))

Next, you perform inference on the item images in a batch manner. The SageMaker batch transform job uses the CLIP model to encode all the images stored in the input Amazon S3 location and uploads the output embeddings to an output S3 folder. The job takes around 10 minutes.

batch_input = s3_sample_image_root + "/"
output_path = f"s3://<your-bucket>/inference/output"

clip_image_transformer = clip_image_model.transformer(
    instance_count=1,
    instance_type="ml.c5.xlarge",
    strategy="SingleRecord",
    output_path=output_path,
)

clip_image_transformer.transform(
    batch_input,
    data_type="S3Prefix",
    content_type="application/x-image",
    wait=True,
)

Load the embeddings from Amazon S3 into a variable, so you can ingest the data into OpenSearch Service later:

embedding_root_path = "./data/embedding"
s3down.download(output_path, embedding_root_path)

embeddings = []
for idx, record in dataset.iterrows():
    embedding_file = f"{embedding_root_path}/{record.path}.out"
    embeddings.append(json.load(open(embedding_file))[0])

Create an ML-powered unified search engine
This section discusses how to create a search engine that uses k-NN search with embeddings. This includes configuring an OpenSearch Service cluster, ingesting item embeddings, and performing free text and image search queries.

Set up the OpenSearch Service domain using k-NN settings
Earlier, you created an OpenSearch Service domain. Now you're going to create an index to store the catalog data and embeddings.
You can configure the index settings to enable the k-NN functionality using the following configuration:

index_settings = {
    "settings": {
        "index.knn": True,
        "index.knn.space_type": "cosinesimil"
    },
    "mappings": {
        "properties": {
            "embeddings": {
                "type": "knn_vector",
                "dimension": 1024  # Make sure this matches the size of the embeddings you generated; for RN50, it is 1024
            }
        }
    }
}

This example uses the Python Elasticsearch client to communicate with the OpenSearch cluster and create an index to host your data. You can run %pip install elasticsearch in the notebook to install the library. See the following code:

import boto3
import json
from requests_aws4auth import AWS4Auth
from elasticsearch import Elasticsearch, RequestsHttpConnection

index_name = "clip-index"

def get_es_client(host="<your-opensearch-service-domain-url>",
                  port=443,
                  region="<your-region>",
                  index_name="clip-index"):
    credentials = boto3.Session().get_credentials()
    awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, 'es',
                       session_token=credentials.token)
    headers = {"Content-Type": "application/json"}
    es = Elasticsearch(hosts=[{'host': host, 'port': port}],
                       http_auth=awsauth,
                       use_ssl=True,
                       verify_certs=True,
                       connection_class=RequestsHttpConnection,
                       timeout=60)  # for connection timeout errors
    return es

es = get_es_client()
es.indices.create(index=index_name, body=json.dumps(index_settings))

Ingest image embedding data into OpenSearch Service
You now loop through your dataset and ingest item data into the cluster. The data ingestion for this exercise should finish within 60 seconds. It also runs a simple query to verify that the data has been ingested into the index successfully. See the following code:

# Ingest data into the index
for idx, record in tqdm(dataset.iterrows(), total=len(dataset)):
    body = record[['item_name_in_en_us']].to_dict()
    body['embeddings'] = embeddings[idx]
    es.index(index=index_name, id=record.item_id, doc_type='_doc', body=body)

# Check that the data is indeed in the index
res = es.search(
    index=index_name,
    body={
        "query": {
            "match_all": {}
        }},
    size=2)
assert len(res["hits"]["hits"]) > 0

Perform a real-time query
Now that you have a working OpenSearch Service index that contains embeddings of item images as our inventory, let's look at how you can generate embeddings for queries. You need to create two SageMaker endpoints to handle text and image embeddings, respectively. You also create two functions that use the endpoints to encode images and text. For the text-encoding function, you prepend "this is a" to the item name to translate it into a sentence describing the item. memory_size_in_mb is set to 6 GB to serve the underlying Transformer and ResNet models.
See the following code:

from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import JSONSerializer, IdentitySerializer
from sagemaker.deserializers import JSONDeserializer

text_predictor = clip_text_model.deploy(
    instance_type='ml.c5.xlarge',
    initial_instance_count=1,
    serverless_inference_config=ServerlessInferenceConfig(memory_size_in_mb=6144),
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
    wait=True
)

image_predictor = clip_image_model.deploy(
    instance_type='ml.c5.xlarge',
    initial_instance_count=1,
    serverless_inference_config=ServerlessInferenceConfig(memory_size_in_mb=6144),
    serializer=IdentitySerializer(content_type="application/x-image"),
    deserializer=JSONDeserializer(),
    wait=True
)

def encode_image(file_name="./data/images/0e9420c6.jpg"):
    with open(file_name, "rb") as f:
        payload = f.read()
        payload = bytearray(payload)
    res = image_predictor.predict(payload)
    return res[0]

def encode_name(item_name):
    res = text_predictor.predict({"inputs": [f"this is a {item_name}"]})
    return res[0]

You can first plot the picture that will be used:

item_image_path, item_name = get_image_from_item_id(item_id="B0896LJNLH", return_image=False)
feature_vector = encode_image(file_name=item_image_path)
print(feature_vector.shape)
Image.open(item_image_path)

Let's look at the results of a simple query. After retrieving results from OpenSearch Service, you get the list of item names and images from dataset:

import matplotlib.pyplot as plt
from PIL.Image import Image as PilImage

def search_products(embedding, k=3):
    body = {
        "size": k,
        "_source": {
            "exclude": ["embeddings"],
        },
        "query": {
            "knn": {
                "embeddings": {
                    "vector": embedding,
                    "k": k,
                }
            }
        },
    }
    res = es.search(index=index_name, body=body)
    images = []
    for hit in res["hits"]["hits"]:
        id_ = hit["_id"]
        image, item_name = get_image_from_item_id(id_)
        image.name_and_score = f'{hit["_score"]}:{item_name}'
        images.append(image)
    return images

def display_images(images: [PilImage], columns=2, width=20, height=8, max_images=15,
                   label_wrap_length=50, label_font_size=8):
    if not images:
        print("No images to display.")
        return
    if len(images) > max_images:
        print(f"Showing {max_images} images of {len(images)}:")
        images = images[0:max_images]
    height = max(height, int(len(images)/columns) * height)
    plt.figure(figsize=(width, height))
    for i, image in enumerate(images):
        plt.subplot(int(len(images) / columns + 1), columns, i + 1)
        plt.imshow(image)
        if hasattr(image, 'name_and_score'):
            plt.title(image.name_and_score, fontsize=label_font_size)

images = search_products(feature_vector)

The first item has a score of 1.0, because the two images are the same. Other items are different types of glasses in the OpenSearch Service index.

You can use text to query the index as well:

feature_vector = encode_name("drinkware glass")
images = search_products(feature_vector)
display_images(images)

You're now able to get three pictures of water glasses from the index. You can find the images and text within the same latent space with the CLIP encoder. Another example of this is to search for the word "pizza" in the index:

feature_vector = encode_name("pizza")
images = search_products(feature_vector)
display_images(images)

Clean up
With a pay-per-use model, Serverless Inference is a cost-effective option for an infrequent or unpredictable traffic pattern. If you have a strict service-level agreement (SLA), or can't tolerate cold starts, real-time endpoints are a better choice. Using multi-model or multi-container endpoints provides scalable and cost-effective solutions for deploying large numbers of models. For more information, refer to Amazon SageMaker Pricing. We suggest deleting the serverless endpoints when they are no longer needed.
After finishing this exercise, you can remove the resources with the following steps (you can delete these resources from the AWS Management Console, or using the AWS SDK or SageMaker SDK):
1. Delete the endpoints you created.
2. Optionally, delete the registered models.
3. Optionally, delete the SageMaker execution role.
4. Optionally, empty and delete the S3 bucket.

Summary
In this post, we demonstrated how to create a k-NN search application using SageMaker and OpenSearch Service k-NN index features. We used a pre-trained CLIP model from its OpenAI implementation. The OpenSearch Service ingestion implementation in this post is only used for prototyping. If you want to ingest data from Amazon S3 into OpenSearch Service at scale, you can launch an Amazon SageMaker Processing job with the appropriate instance type and instance count. For another scalable embedding ingestion solution, refer to Novartis AG uses Amazon OpenSearch Service K-Nearest Neighbor (KNN) and Amazon SageMaker to power search and recommendation (Part 3/4).

CLIP provides zero-shot capabilities, which makes it possible to adopt a pre-trained model directly without using transfer learning to fine-tune it. This simplifies applying the CLIP model. If you have pairs of product images and descriptive text, you can fine-tune the model with your own data using transfer learning to further improve model performance. For more information, see Learning Transferable Visual Models From Natural Language Supervision and the CLIP GitHub repository.

About the Authors
Kevin Du is a Senior Data Lab Architect at AWS, dedicated to assisting customers in expediting the development of their machine learning (ML) products and MLOps platforms. With more than a decade of experience building ML-enabled products for both startups and enterprises, his focus is on helping customers streamline the productionalization of their ML solutions. In his free time, Kevin enjoys cooking and watching basketball.

Ananya Roy is a Senior Data Lab Architect specialized in AI and machine learning, based out of Sydney, Australia. She has been working with a diverse range of customers to provide architectural guidance and help them deliver effective AI/ML solutions via Data Lab engagements. Prior to AWS, she worked as a senior data scientist with large-scale ML models across industries such as telco, banking, and fintech. Her experience in AI/ML has allowed her to deliver effective solutions for complex business problems, and she is passionate about leveraging cutting-edge technologies to help teams achieve their goals.
Improve Patient Safety Intelligence Using AWS AI_ML Services _ AWS for Industries.txt
AWS for Industries

Improve Patient Safety Intelligence Using AWS AI/ML Services
by Terrell Rohm, Gang Fu, Dr. Iona Maria Thraen, Sara McLaughlin Wynn, Rod Tarrago, and Stephen Andrews | on 19 JUN 2023 | in Artificial Intelligence, Healthcare, Industries, Public Sector

Today, healthcare organizations rely on a combination of automated and manual processes to compose, review, and classify patient safety reports. These reports are entered manually by front-line clinicians into the RL Datix reporting system. This entry includes both discrete data points and a free-text narrative. Although the data collection process may begin with the digital capture of data, once entered, the data generally remains inaccessible throughout the organization for real-time trending and analysis. Each reporter sees only the adverse events they have reported. Unit and file managers are given broader access relevant to their unit or service line authority, but the data often remains in its raw format because of the textual nature of the event descriptions. As a result, patterns across the organization, such as an increase in infections or medication errors, are unit or service line dependent and appear to be isolated events.

The current analysis of these reports is achieved through a combination of built-in reports and graphics (depending on the software), manual data manipulation, and the display of discrete fields. Analysis is siloed to the respective units or authorities, while organization-wide or region-wide analysis depends on employing multiple patient safety analysts and data specialists. Additional reports may involve separate databases and spreadsheets to triangulate around specific issues. In academic medical centers (AMCs), this process requires dedicated time, people, and resources. AMCs need a technology solution that can automate the analytical processes to free dedicated resources for much-needed patient care improvement initiatives and activities.

As a proof of concept (POC), we focused on the automated analysis of medication-related patient safety reports. The proposed solution intends to reduce manual analytical work and inefficiencies in current workflows, reduce time-to-insight, improve the information extracted from daily reports, and uncover patterns across reports and throughout the organization. We collaborated with University of Utah Health on this POC project, using five years of medication-related patient safety reports to fine-tune a couple of generalized and domain-specific language models using Amazon SageMaker. This approach classifies the severity of errors using discrete fields, identifies high-risk medications from text narratives, and visualizes high-risk medication-related events within the corresponding harm levels.

Solution overview
Amazon Comprehend Medical was used to detect high-risk medications, and the results were summarized in a functional, interactive dashboard built on Amazon QuickSight. The entire data processing pipeline was automated using an event-driven, serverless architecture via AWS Lambda. Given that patient safety reports contain private and sensitive information, all of the services used in this solution are HIPAA eligible, and the project was carried out in a HIPAA-compliant landing zone account. In addition, de-identification of the patient safety reports was achieved using the Amazon Comprehend Medical DetectPHI API, which has been demonstrated in this post and reference solution.
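As a rough illustration of the kind of entity extraction and de-identification described above (not the project's exact pipeline), the following hedged Python sketch calls Amazon Comprehend Medical to pull medication entities and protected health information (PHI) from a free-text narrative. The sample narrative is invented for demonstration purposes.

import boto3

cm = boto3.client("comprehendmedical", region_name="us-east-1")

# Invented example narrative; real reports would come from the safety reporting system.
narrative = "Patient received 10 units of insulin instead of the ordered 1 unit at 22:00."

# Extract clinical entities (medications, dosages, and so on).
entities = cm.detect_entities_v2(Text=narrative)
medications = [e for e in entities["Entities"] if e["Category"] == "MEDICATION"]
for med in medications:
    print("Medication:", med["Text"], med.get("Type"), round(med["Score"], 3))

# Detect protected health information for de-identification.
phi = cm.detect_phi(Text=narrative)
for item in phi["Entities"]:
    print("PHI:", item["Type"], item["Text"])

In a pipeline like the one described here, calls of this kind could run inside an AWS Lambda function as new reports arrive, with the detected medications mapped to a high-alert list before visualization.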
To improve the efficiency of the patient safety reporting process, we refined and compared different transformer-based LLMs from AWS Partner Hugging Face to effectively detect and classify high-risk medications based on the free-text descriptions in the reports (see Table 1). A sample Jupyter notebook was prepared, and it can be shared with academic medical centers for further customization. The architectural diagram in the following figure outlines the potential steps for patient safety professionals to run this solution on AWS.

Figure 1. Architecture diagram of the solution for patient safety intelligence

Additionally, to provide a secure and compliant machine learning (ML) environment with Amazon SageMaker, data encryption, network isolation, authentication, and authorization are set as the default. Key features include:
- Encryption of data at rest in an Amazon Simple Storage Service (Amazon S3) bucket is turned on with your own key stored in AWS Key Management Service (AWS KMS). The extra cost for AWS KMS provides better-controlled security, and the same approach was used in this post.
- Encryption of data at rest in Amazon Elastic File System (Amazon EFS), the home folder for notebook instances, is enabled using the default AWS KMS key (aws/elasticfilesystem).
- The Amazon SageMaker Studio environment is launched within a private VPC. With network isolation, VPC endpoints provide access to other AWS services, including S3 buckets, through AWS PrivateLink.
- AWS Identity and Access Management (IAM) is used for role-based access control, and it determines which permissions each SageMaker user has.
- If you want a secure research environment through a locked-down Virtual Desktop Infrastructure (VDI) without screen copy, you can use Amazon AppStream 2.0 or Amazon WorkSpaces to access an Amazon SageMaker domain presigned URL.

This solution leverages AWS analytics and artificial intelligence/machine learning (AI/ML) services for automatic data processing, information extraction, and AI predictions on patient safety reports. High-alert medications, extracted from the standard high-risk medication list compiled by the Institute for Safe Medication Practices (ISMP), have been consolidated into RxNorm concepts. These were used to map the named entities, with alternative synonyms, extracted by Amazon Comprehend Medical. They were further analyzed and displayed on an Amazon QuickSight dashboard (see the following figure). The dashboard displays multiple visualizations of the data, both independently from discrete fields (such as counts by safety event codes) and from textual fields (counts of high-alert medications), and also combines data from discrete and textual sources, as demonstrated by the combination chart. Finally, it provides the capacity to drill down by individual patient safety codes and the corresponding high-alert medications. Note that cell sizes of five or less have been removed for privacy purposes. This approach could additionally be constructed by location, time of day, or any other discrete data element.

Figure 2. Example dashboard for high-alert medications extracted by Amazon Comprehend Medical

Outcomes
Using the AI approach described above, a comparison analysis of the AI prediction POC results is presented in Table 1. The general results range in precision from .881 to .901, recall from .874 to .899, accuracy from .874 to .899, and F1 score from .873 to .899, depending on the application.
Table 1. AI model prediction results for classifying the level of harm based on free-text descriptions

Conclusion
Given the success of this POC project, we plan to engage with an AWS Partner to build other use case applications and to test a production-ready system that includes complete clinical data. This data can lead to additional metrics, models, and improvements. Furthermore, given the need for manual entry of clinical information into the patient safety reporting system, efforts are underway to integrate electronic health record (EHR) information into the analysis.

ML is an effective tool to improve efficiency, reduce time to insight, and unearth potentially hidden information in medication-related patient safety reports. Given these results, it would be valuable to continue to improve outcome scores, expand this effort to other areas of patient safety reporting, and investigate integration with other clinical and demographic data sources.

TAGS: #healthcare, AI/ML, Amazon SageMaker, patient safety, personalized health

About the Authors
Terrell Rohm is the Director of Quality Data Analytics & Technology for the Chief Quality Office at the University of Utah Health. He has over 20 years of experience working in the private and public sectors in technology and leadership roles. He leads a department providing data analytics, data engineering, and business intelligence services focused on healthcare quality. He holds an MBA from the Jon M. Huntsman School of Business at Utah State University and a bachelor's degree in computer science from Brigham Young University.

Gang Fu is a Healthcare Solution Architect at AWS. He holds a PhD in Pharmaceutical Science from the University of Mississippi and has over ten years of technology and biomedical research experience. He is passionate about technology and the impact it can make on healthcare.

Dr. Iona Maria Thraen holds a PhD in Medical Informatics from the College of Medicine, University of Utah; sixty hours of graduate doctoral social work credits from the College of Social Work, University of Utah; thirty hours of graduate training in economics (Fordham University); a master's degree in social work (University of Nebraska); and an undergraduate degree in psychology with a minor in theology (Creighton University). Dr. Thraen currently holds an appointment as adjunct assistant professor in the Department of Biomedical Informatics and adjunct instructor with the Department of Operations and Information Systems, both at the University of Utah. In her role, Dr. Thraen sets the strategic direction for the department to move from Patient Safety 1.0 to Patient Safety 2.0; manages oversight of personnel, budget, and policy setting; leads patient safety initiatives across the organization in collaboration with Value Engineering, System's Quality, and Nursing Quality; teaches patient safety content to Master of Health Administration students; and participates in patient safety research and development. Finally, Dr. Thraen has been involved in numerous research activities resulting in multiple publications, acknowledgements, and grants.

Sara McLaughlin Wynn is an Enterprise Account Manager at AWS. She has spent two decades working with higher education institutions in the Western United States and now supports the AWS mission to accelerate the digital transformation of higher education.

Rod Tarrago, MD, is a Principal Business Development Manager at AWS. He leads clinical informatics for academic medicine.
Rod brings 15 years of experience as a chief medical information officer. Clinically, he practiced pediatric critical care medicine for 20 years prior to joining AWS.

Stephen Andrews is the Medication Safety Pharmacist for University of Utah Health, which comprises 5 hospitals and 11 community health care centers. He is responsible for developing the vision and associated strategic plan for an ideal safe medication use system. He obtained his Doctor of Pharmacy from the University of Missouri-Kansas City, completed post-graduate residency training at the University of Kansas Health System, and is a Board-Certified Pharmacotherapy Specialist and a Board-Certified Professional in Patient Safety. Stephen is passionate about improving the reliability of safe medication use by incorporating evidence-based strategies and solutions.
Improving Geospatial Processing Faster using Amazon Aurora with Ozius _ Case Study _ AWS.txt
Learn how Ozius, an Australian environmental intelligence enterprise, uses artificial intelligence and Amazon Aurora to generate data on Australia's vegetation.

Highlights:
- 450x faster processing of environmental data
- 8 hours to process data on all of Australia's vegetation
- 4 months to develop Ozius Biome
- 10x more data ingested than with its previous system
- 30 million hectares of data requests following beta testing

Environmental intelligence enterprise Ozius strives to deliver advanced analytics to its customers through Ozius Biome (Biome), its proprietary solution that synthesizes environmental data from earth-observation satellites and spaceborne light-detection-and-ranging (lidar) technologies. Because Ozius gathers millions of data points from these satellites, it wanted to find a cloud service that would work alongside its existing PostgreSQL databases with PostGIS, a spatial database extender for PostgreSQL, and generate enough compute power to accelerate its processing time.

Before using Aurora, the enterprise relied solely on its on-premises PostgreSQL with PostGIS databases to process the data points that it collected from satellites. On its previous system, it would have taken Ozius around 150 days to process the nearly 170 million data points that it gathered from continental Australia's topography. To reduce the amount of time spent processing data, the Ozius team began searching for a robust database solution. "We shopped around with several cloud service providers," says Peter Scarth, data science lead and chief technology officer at Ozius. "We chose Amazon Aurora because it is a readily supported, high-quality database solution within a scalable framework." Moreover, the Ozius team received technical support from the AWS team during implementation and any time it needed help troubleshooting.

The high demand for Biome is primarily due to its strong performance. On the company's previous system, it would have taken Biome 150 days to process the environmental data from all of continental Australia. Now, the solution can complete this task within only 8 hours—450 times faster than before. "Amazon Aurora is a game changer," says Scarth. "It helped us complete our geospatial processing far faster than I could've possibly imagined." Ozius has also increased the volume of data that it processes by a factor of 10, ingesting over a quarter of a billion data points using Aurora.

Ozius has also improved the resolution of its environmental-intelligence products. Now the company can offer a close-up of vegetation within a 20-by-20-meter area, a huge improvement from the 200-by-200-meter-area resolution previously available. By achieving higher resolution, Ozius can reconstruct Australia's vegetation with greater accuracy and fidelity.

Additionally, because Ozius uses serverless solutions on AWS, the company has optimized compute costs and resources. As a result, it can provide its Biome suite of products to customers at a competitive price point. "We're saving our customers hundreds of thousands of dollars and months of time," says Ben Starkey, managing director at Ozius. "The only way for a company to get similar data in localized areas is to fly a plane and use airborne lidar or to go into the field and measure it manually."
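To give a flavor of the kind of geospatial query a PostGIS-enabled Aurora PostgreSQL database can serve, here is a small, hypothetical Python sketch. The connection details, table, and columns are invented for illustration and are not Ozius's actual schema.

import psycopg2

# Hypothetical connection to an Aurora PostgreSQL cluster with PostGIS enabled.
conn = psycopg2.connect(
    host="example-cluster.cluster-abc123.ap-southeast-2.rds.amazonaws.com",
    dbname="vegetation",
    user="analyst",
    password="example-password",
)

# Illustrative query: average canopy-height samples that fall inside a bounding box.
sql = """
    SELECT AVG(canopy_height_m)
    FROM lidar_samples
    WHERE ST_Within(
        geom,
        ST_MakeEnvelope(%s, %s, %s, %s, 4326)  -- lon/lat bounding box, WGS 84
    );
"""

with conn, conn.cursor() as cur:
    cur.execute(sql, (148.0, -36.0, 149.0, -35.0))
    print("Average canopy height (m):", cur.fetchone()[0])

conn.close()

Running spatial aggregations like this inside the database, rather than pulling raw points into application code, is one reason a managed PostGIS-compatible engine can shorten geospatial processing times.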
عربي Ozius plans to officially launch Ozius Biome to the public in July 2022. From there, it will work toward further reducing its processing times and expanding its operations using serverless solutions on AWS. “We’ve opened a world of possibilities because we can ingest and compute this amount of data,” says Scarth. “Working on AWS gives us a whole lot of opportunities and provides better products to our customers.” 中文 (简体) Peter Scarth Data Science Lead and Chief Technology Officer, Ozius  Learn more » Amazon Aurora is a game changer. It helped us complete our geospatial processing far faster than I could’ve possibly imagined.” Overview with processing environmental data Amazon Aurora is a relational database management system (RDBMS) built for the cloud with full MySQL and PostgreSQL compatibility. Aurora gives you the performance and availability of commercial-grade databases at one-tenth the cost. Solution | Improving Performance and Cost Savings Using Amazon Aurora Outcome | Opening a World of Possibilities by Launching Biome to the Public During beta testing, Ozius sold data for approximately 10 million hectares to early bird stakeholders. Because Ozius outperformed its sales goals, it closed its early bird enrollment for Biome in December 2021. “The feedback that we have received has been incredible,” says Starkey. “We’re able to service lots of small queries really quickly, and we’ve received several queries to deliver data across large areas and even whole states.” Since completing this phase of the project, the company has received new sales leads every week, and its customers have placed data-order requests for up to 30 million hectares. Türkçe 450x faster Based in Australia, Ozius is a small enterprise that provides earth-observation analytics and intelligence to both public and private sectors across many industries including natural capital markets and government, energy, and defense sectors. The company conceptualized Biome in 2021, identifying the need to produce large datasets that would facilitate a highly accurate reconstruction of Australia’s forest and plant canopy using artificial intelligence and lidar technologies. With Biome, its customers can identify carbon-trading opportunities, monitor deforestation, prepare for bushfires, and detect landscape changes. Ozius has experienced an increased demand for this type of intelligence as more companies roll out environmental conservation and net-zero initiatives. English data requests following beta testing 30 million hectares 4 months 8 hours Deutsch Ozius began exploring different solutions from Amazon Web Services (AWS) and other third-party cloud providers. In July 2021, it decided to adopt Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Compared to its previous system, the company ingests up to 10 times more data points and processes them 450 times faster. Opportunity | Identifying the Need to Process Satellite Data with a Robust Cloud Solution Tiếng Việt Italiano ไทย Contact Sales Ozius provides earth-observation analytics and intelligence across natural capital markets and government, energy, and defense sectors. Its Ozius Biome solution uses artificial intelligence and spaceborne lidar and satellite technologies to generate data on Australia’s vegetation. 
more data ingested than its previous system In July 2021, Ozius worked alongside the AWS team to combine Aurora with its on-premises databases and accelerate the development of Biome. By November 2021, Ozius launched beta testing for Biome, reaching this milestone within a much shorter timeline than the company had originally expected. “We only spent 4 months developing our Biome solution,” says Alisa Starkey, founder, director, and chief science officer at Ozius. “That timeline for new, national-scale product development is unheard of in our industry.” Ozius Develops Biome in 4 Months, Offers Spatial Datasets Using Amazon Aurora Português
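Because Aurora is PostgreSQL compatible, the kind of geospatial workload described above can keep using standard PostgreSQL drivers and PostGIS SQL. The sketch below is illustrative only—the cluster endpoint, table, and column names are hypothetical placeholders, not Ozius's actual schema—but it shows a minimal gridded vegetation aggregation of the sort the case study refers to.

# Minimal sketch, assuming a hypothetical "vegetation_points" table in an Aurora
# PostgreSQL cluster with PostGIS enabled. Endpoint and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-example.ap-southeast-2.rds.amazonaws.com",
    dbname="biome", user="analyst", password="...", port=5432,
)

with conn, conn.cursor() as cur:
    # Aggregate lidar-derived canopy heights into 20 m x 20 m grid cells within a
    # bounding box (coordinates here are illustrative, in an Australian Albers CRS).
    cur.execute(
        """
        SELECT ST_SnapToGrid(geom, 20, 20) AS cell,
               AVG(canopy_height_m)        AS mean_canopy_height
        FROM   vegetation_points
        WHERE  ST_Within(geom, ST_MakeEnvelope(%s, %s, %s, %s, 3577))
        GROUP  BY cell;
        """,
        (-1888000, -4800000, 2100000, -1000000),
    )
    rows = cur.fetchall()
print(len(rows), "grid cells aggregated")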
Improving Hiring Diversity and Accelerating App Development on AWS with Branch Insurance _ Case Study _ AWS.txt
Improving Hiring Diversity and Accelerating App Development on AWS with Branch Insurance

Learn how Branch Insurance accelerated app development using AWS AppSync.

About Branch Insurance
Branch Insurance is an insurance technology startup that provides simple insurance policies and comprehensive bundles to customers in 33 US states. The company was founded in 2017 in Columbus, Ohio.

Overview
6-month acceleration in app development velocity
4 products launched in just 3 years with a team of fewer than 20 developers
3% of typical cost for similarly sized startups
28% more Black engineers and 26% more Hispanic or Latino engineers than industry averages
10% more female engineers than the industry average

Branch Insurance (Branch) had goals for its internal development teams that were as ambitious as its efforts to provide uniquely simple insurance policies to its customers. The startup wanted to take an all-in approach to serverless architecture using Amazon Web Services (AWS) to make its infrastructure scalable, accelerate developer training, and simplify deployments. Branch built an API hub using AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data. The company also used a serverless architecture to empower its junior developers and diversify its workforce. As a result, Branch drastically reduced the amount of time and resources that it needed to deploy updates and maintain its technology stack.

Opportunity | Off-Loading Infrastructure Maintenance Work and Diversifying Hiring

Fast-growing insurance technology startup Branch set out to radically simplify the end-user experience for insurance customers by offering bindable prices based on just a couple of simple pieces of information—the customer's name and address. "One of the things that makes us different is how quickly you can get a rate you can purchase," says Ivan Herndon, vice president of engineering at Branch. However, offering this simplicity requires powerful infrastructure to process data quickly and store it efficiently and securely in compliance with regulations. Branch has been a serverless-native company on AWS since its founding in 2017 as a team of two. The startup wanted to use managed services to off-load as much of the infrastructure maintenance work as possible and reduce bespoke backend code to simplify its logic and improve scalability. "AWS has consistently provided better services that we can use to hand off more of the undifferentiated heavy lifting," says Joe Emison, cofounder and chief technology officer of Branch. "By using AWS, we can focus our valuable time on what differentiates Branch."

As the startup grew, it also recognized several challenges with the existing job market. The company wanted to avoid the typical cycle of hiring a lot of senior developers because that practice excluded many talented developers from underrepresented groups in the software industry. "It can be difficult to find experienced developers who are willing to learn and adapt to the way your company wants to do things," says Herndon. To break out of that constrained hiring market, Branch decided to focus on hiring junior developers and upskilling them through an in-house boot camp program based on its specific technology stack.

With this shift from hiring experience to nurturing expertise, Branch aimed to improve the diversity of its workforce while easing the onboarding process for new hires. It designed its boot camp curriculum to focus on the AWS services and serverless architecture that its developers use and build on every day. "Building on AWS works very well for us, and it scales seamlessly," says Herndon. "We don't have to worry about security compliance because it's built into AWS services." In addition, Branch uses a fully typed architecture, with TypeScript in its frontend code and a typed schema in its AppSync API hub, to create guardrails for its developers. Using TypeScript in both the frontend and the backend also makes it much easier for each developer to be a full-stack developer at Branch.

Solution | Using AWS AppSync to Accelerate App Development Cycles by 6 Months

Branch uses AWS AppSync as the foundation for its backend infrastructure and API service. AWS AppSync receives all the requests from the company's website and mobile app, filters out malicious requests, makes sure each request is properly formatted, and finally initiates the proper business logic. The company also manages the authorization flow using libraries from AWS Amplify, open-source client libraries that developers can use to build cloud-powered mobile and web apps. "Branch's entire backend, including all business logic and transactional data, runs on AWS AppSync," says Emison. "By connecting AWS AppSync to AWS Amplify, the amount we have to deal with operations is extremely minimal."

Branch uses the scalability of Amazon DynamoDB, a key-value and document database that delivers single-digit millisecond performance at virtually any scale, to handle as much traffic as it needs. Meanwhile, the startup stores all member information on Amazon Cognito, which businesses can use to add sign-up, sign-in, and access control to web and mobile apps quickly and easily. Branch has made user authentication effortless by using AWS AppSync to route each user login request to Amazon Cognito. "One of the magical parts of AWS AppSync is how well it connects to Amazon Cognito to automatically respond to authentication requests," says Emison.

One of the biggest benefits of building on AWS has been the ability to duplicate environments and run multiple environments on the same configurations for staging, development, and production. "With this setup, we can be much more confident in our ability to test," says Herndon. "Developers have more time for working with the code because they don't have to wait for a feature to be scheduled on a single staging environment." A full deployment on AWS now takes just 10–15 minutes for Branch. On average, the company deploys 5 times per week, and each deployment saves a significant amount of time and resources that translates to increased developer productivity. In all, Branch has accelerated its development cycles by an estimated 6 months. "Using serverless technology on AWS, we've replaced what would be an entire team with a system that's relatively cheap," says Emison. The company estimates that it spends just 3 percent as much as similarly sized startups.

Meanwhile, as developers come in from the boot camp, Branch creates new environments for them quickly on AWS. New hires are also better prepared to use the company's serverless architecture, so they can more quickly get started building great products. The boot camp has also increased the diversity of Branch's workforce. One-third of Branch's engineering team is Black and one-third is Hispanic or Latino—much higher than the industry averages of 5 percent and 7 percent, respectively. In addition, Branch has 10 percent more female engineers than the industry average. "We're trying to help these new hires acclimate more quickly to our team, but all of the skills we're teaching are transferrable to other companies," says Herndon. In that way, the program is also helping create a more diverse talent pool for all companies building in the cloud.

Outcome | Building Products on 'Easy Mode' Using AWS Services

In just 3 years, Branch launched four insurance products—home, auto, renters, and umbrella insurance—in 33 US states. And the company did that with fewer than 20 full-time developers. As it continues to grow and hire new developers through its custom boot camp, it plans even more innovative features. "Building a product on AWS is like doing it on 'easy mode' because there's so much that's simplified by using managed services," says Emison. "We just write business logic and interfaces. That's the great benefit of using AWS."

AWS Services Used
AWS AppSync creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools.
Amazon Cognito provides an identity store that scales to millions of users, supports social and enterprise identity federation, and offers advanced security features to protect your consumers and business.
AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve. No cloud expertise needed.
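To illustrate the login-then-query flow the case study describes—AppSync routing authentication to Amazon Cognito and then running typed GraphQL operations—here is a minimal sketch in Python. Branch's production stack is TypeScript with AWS Amplify, so this is not their code; the user pool client ID, AppSync endpoint, and GraphQL schema below are hypothetical placeholders.

# Minimal sketch, assuming a Cognito user pool app client with USER_PASSWORD_AUTH
# enabled and an AppSync API configured to use that pool for authorization.
import boto3
import requests

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# 1. Authenticate a member against the Cognito user pool.
auth = cognito.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "member@example.com", "PASSWORD": "..."},
)
id_token = auth["AuthenticationResult"]["IdToken"]

# 2. Call the AppSync GraphQL endpoint with the Cognito ID token; AppSync validates
#    the token before invoking any resolvers or business logic.
query = """
query GetQuote($name: String!, $address: String!) {
  getQuote(name: $name, address: $address) { monthlyPremium }
}
"""
resp = requests.post(
    "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql",
    json={"query": query, "variables": {"name": "Jane Doe", "address": "1 Main St"}},
    headers={"Authorization": id_token},
)
print(resp.json())

The single-endpoint design is the point: the client only ever talks to the GraphQL endpoint, and AppSync handles authorization, request validation, and routing to data sources such as DynamoDB behind it.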
Improving Mergers and Acquisitions Using AWS Organizations with Warner Bros. Discovery _ Warner Bros. Discovery Case Study _ AWS.txt
Improving Mergers and Acquisitions Using AWS Organizations with Warner Bros. Discovery

Learn how Warner Bros. Discovery streamlined the process for mergers and acquisitions (M&A) using AWS Organizations.

About Warner Bros. Discovery
Warner Bros. Discovery is a global media and entertainment company based in New York City. The company provides customers with a vast portfolio of content in television, streaming, and gaming.

Overview
From 2 months to 2 days: reduction in new-account creation time related to large M&As
From days to minutes: reduction in firewall rule deployment time
Achieved faster time to market
Prevented costs
View and control cloud spend in a single pane of glass

Opportunity | Using AWS Organizations to Improve M&As

Discovery had been working to centralize its account creation to better operate at scale and support its multiple growing business units. As a result, it streamlined the process for any mergers and acquisitions (M&A). In 2022, the company began undergoing its largest merger to date when WarnerMedia and Discovery started to merge into Warner Bros. Discovery (WBD); this process is still ongoing. The main challenge of these kinds of M&As is securely integrating a newly merged or acquired company's cloud footprint into Discovery's existing footprint without impacting the day-to-day operations of either business. With a cloud-first approach, keeping the cloud infrastructure accessible, running, secure, and protected is vital.

Discovery had been improving its M&A process for years and now provides customers with an ever-widening portfolio of television, streaming, and gaming content. One of Discovery's first large M&As was with Scripps in 2018, and Discovery learned and matured through the Scripps integration. After the company merged with WarnerMedia to become WBD, a global media and entertainment leader, it created a centralized governance group, the Global Cloud Services team, to efficiently manage its new and old accounts. The team was created to implement governance at scale, using prior lessons learned to create a robust framework for security and governance tooling. The new control policies helped WBD be proactive instead of reactive by using security baselines to track security findings, a necessity when scaling up services. The company began to treat governance as a product of its internal teams. "We wanted to be able to grow and use the power of the cloud while making sure our development teams had a secure, governed environment," says Bianca Lankford, vice president of cloud security at WBD.

WBD uses Amazon Web Services (AWS) to centralize account creation as well as automate and secure the M&A process at scale. The company uses AWS Organizations to create new AWS accounts at no additional charge, allocate resources, group accounts, and apply governance policies. The Global Cloud Services team began making AWS Organizations a key part of its process in 2019. "As we learned about the capabilities of AWS Organizations and how organizational units could apply the controls and service control policies in a hierarchy, it really suited our goals," says Kevin Woods, lead cloud solutions architect at WBD.

Solution | Reducing Costs and Speeding Up Account Creation to 2 Days Using AWS

Before 2019, creating a new account could take up to 2 months. Now that the centralized process is used, with defined features and a controlled process, an account can be configured immediately, and the entire delivery is finished within 2 days. WBD also uses the centralized environment to detect and consolidate duplicate implementations. Using AWS, WBD decreases time to market, reduces cost, and creates a centralized and automated deployment of new accounts, all while making security a priority.

WBD uses the delegated administration capabilities of AWS Organizations to give its teams the capability to centrally manage security services. It protects cloud infrastructure using Amazon GuardDuty, a threat detection service that monitors AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. WBD deploys Amazon GuardDuty across all accounts at creation, before they are fully integrated. "Amazon GuardDuty is a default configuration," says Woods. "I don't think there's any area where you would not have it in place."

WBD saves time on web application firewall rule deployment by using AWS Organizations to create a centralized deployment model of AWS Firewall Manager, which centrally configures and manages firewall rules across accounts. This process reduces deployment time from days to minutes, which is pivotal for events that require expedited deployment and security tooling.

WBD also uses AWS CloudTrail to monitor and record account activity across its AWS infrastructure, tracking user activity and API usage across all of WBD's AWS accounts. By using AWS CloudTrail and Amazon GuardDuty, WBD benefits from centralized security tooling while adopting governance controls. Centralizing user activity and API usage in AWS CloudTrail also reduces costs for the company. "When account creation is centralized, we have the ability to view and control cloud spend in a single pane of glass," says Lankford. "We're plugged in, our support team is plugged in, and we can manage costs."

To make sure that integration processes are smooth, the Global Cloud Services team preconfigures all accounts to include AWS Enterprise Support, which provides concierge-like service focused on achieving outcomes and finding success in the cloud. The team then gets out of the way so that internal development teams can operate independently and expedite innovation. "To encourage self-service, it's important to have centralized guardrails," says Lankford. "There is a degree of confidence that teams are operating within a standardized guardrail set."

The improved speed of development and deployment translates to a better time to market. "The development teams creating direct-to-consumer products immediately have a place to go," says Lankford. "Development teams do not need to wait on their cloud environment. Their code can be deployed immediately." Features get to market faster because content is produced faster. By using AWS Organizations and integrating other AWS services, WBD improved deployment time, which helps the company scale with new growth.

Outcome | Paving the Way for Larger M&As

The 2022 merger of Discovery and WarnerMedia has been the largest to go through the automated accounts deployment process: the company went from 270 accounts to thousands of accounts. By using AWS Organizations account management APIs, WBD has had the building blocks in place to be flexible during this process, and it is actively integrating cloud environments from its M&As in a secure way. "It's about how we use the cloud to keep growing," says Lankford. "When our development teams have a secure, governed environment, they can work without hindrance and get compelling content into the marketplace and into the homes of our consumers."

AWS Services Used
AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS CloudTrail monitors and records account activity across your AWS infrastructure, giving you control over storage, analysis, and remediation actions.
AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.
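A minimal sketch of the AWS Organizations account-management APIs the case study refers to: create an account, move it into an organizational unit, attach a service control policy, and enable GuardDuty as a default. This is not WBD's tooling, and all IDs below are placeholders; creation is asynchronous, so a real pipeline would poll the status until it completes.

# Sketch only: AWS Organizations calls are made from the management (or delegated
# admin) account. IDs, emails, and names are hypothetical.
import boto3

org = boto3.client("organizations")

status = org.create_account(
    Email="new-team@example.com", AccountName="studio-streaming-dev"
)["CreateAccountStatus"]

# Poll the asynchronous request (one check shown here for brevity).
status = org.describe_create_account_status(
    CreateAccountRequestId=status["Id"]
)["CreateAccountStatus"]

if status["State"] == "SUCCEEDED":
    account_id = status["AccountId"]

    # Move the new account from the root into the OU whose guardrails should apply.
    org.move_account(
        AccountId=account_id,
        SourceParentId="r-exampleroot",
        DestinationParentId="ou-exam-ple12345",
    )

    # Attach a pre-created service control policy (guardrail) to the account.
    org.attach_policy(PolicyId="p-examplepolicy", TargetId=account_id)

    # Enable GuardDuty as a default; in practice this call uses credentials for the
    # new account (for example, via an assumed role) in each active region.
    boto3.client("guardduty", region_name="us-east-1").create_detector(Enable=True)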
Improving Operational Efficiency with Predictive Maintenance Using Amazon Monitron _ Baxter Case Study _ AWS.txt
Baxter Improves Operational Efficiency with Predictive Maintenance Using Amazon Monitron

Learn how Baxter reduced unplanned equipment downtime using Amazon Monitron.

About Baxter International Inc.
Baxter International Inc. is a global medical technology company that helps facilitate patient care through its portfolio of outpatient, hospital, critical care, kidney, and surgical innovations that are available in over 100 countries. With headquarters in the United States and facilities around the world, Baxter strives to deliver high-quality products to treat patients in hospitals, outpatient offices and facilities, and in patient homes.

Overview
500 machine hours of unplanned downtime prevented in one facility
Reduced manual inspection time for technicians
Improved operational efficiency and quality by automating inspection tasks

Baxter International Inc. (Baxter), a global medical technology leader, is driven by its mission to save and sustain lives. The company's network of 70 manufacturing sites worldwide operates 24/7 in a highly complex, dynamic, and regulated environment. Every minute of production is critical, and every minute of downtime avoided is valuable not only to the company but also to its customers and patients. Baxter needed an equipment-monitoring solution that could build resiliency into its operations and reduce unplanned equipment downtime.

Baxter looked to Amazon Web Services (AWS) for a predictive maintenance solution that was simple to deploy, equipment agnostic, cost efficient, and scalable. Using Amazon Monitron—an end-to-end condition monitoring system that uses machine learning (ML) to automatically detect abnormal conditions in industrial equipment and lets users implement predictive maintenance to reduce unplanned downtime—Baxter has significantly improved its operational efficiency by preventing unplanned equipment downtime and emergency repairs.

Opportunity | Using Amazon Monitron to Reduce Unplanned Equipment Downtime

To avoid supply chain disruptions and maintain quality, Baxter needed to build resiliency into its operations, keeping its facilities up and running without unexpected downtime so that the company could deliver lifesaving products to customers and patients on time and achieve its mission of saving and sustaining lives. Baxter uses a wide range of industrial equipment in its utilities, process, and packaging zones to produce medical devices and pharmaceuticals. With around-the-clock operations and precise requirements during production for factors like temperature and product movement, reliable operations are critical.

Baxter's previous equipment-monitoring system relied on manual, time-based inspections, which required technicians to walk around the sites to check equipment. The cycle to inspect thousands of manufacturing assets in a facility could take a few weeks. Equipment failure could occur between these inspection cycles and cause unplanned equipment downtime. For some equipment inspections, technicians needed to enter confined spaces, requiring the company to halt operations for safety.

A key motivating factor for switching from a reactive to a predictive maintenance strategy was increasing uptime and reducing maintenance costs by scheduling maintenance rather than responding to emergency repairs. "The power of ML combined with actionable data delivered instantaneously on a mobile app has improved the team's productivity significantly. This is truly a game changer for us," says Adam Aldridge, reliability engineering manager. A predictive maintenance task force at Baxter reviewed failure predictions and scheduled maintenance logs and determined that vibration and temperature sensors deployed at scale, combined with ML technology, could be a powerful solution for detecting anomalies that could lead to failures of system components. In 2021, Baxter began a proof-of-concept project using Amazon Monitron, installing wireless sensors to capture vibration and temperature data. In this initial deployment, the company installed 400 Amazon Monitron sensors in 1 month in one of its largest facilities in the United States.

Solution | Realizing Tangible Value by Saving 500 Machine Hours of Downtime with Amazon Monitron

Because Amazon Monitron automatically detects abnormal machine operating states by analyzing vibration and temperature signals using International Organization for Standardization standards and ML models, Baxter could expand quickly without the need for a team with ML expertise. Baxter technicians can review any issues immediately from the Amazon Monitron app and take action. After the success of the proof-of-concept project, Baxter deployed 2,500 Amazon Monitron sensors at its lighthouse facility and plans to install tens of thousands of sensors in additional plants across the United States, Europe, and Asia. "Amazon Monitron costs one-tenth of what other products on the market cost and doesn't require Baxter to hire dozens of ML engineers," says A. K. Karan, global senior director of digital transformation at Baxter. "Amazon Monitron is one of the few solutions on the market that can meet our needs for speed, cost efficiency, and scalability for our global breadth of operations."

Since deploying Amazon Monitron, Baxter has avoided over 500 hours of unplanned machine downtime from over 40 alerts in a short time span at its lighthouse facility. This number of machine hours equates to approximately 7 million units of production, so Baxter can positively affect the lives of about 10,000 patients.

Baxter saw immediate value through the reduction of technicians' manual inspection time and the ability to rapidly scale to additional facilities. "The speed with which we could deploy the Amazon Monitron devices was incredible," says Tim Marini, senior director of operations at Baxter. "Sticking on the sensors, downloading the Amazon Monitron application, and getting started happened in minutes." Part of that value includes a cultural change for technicians to take a proactive rather than reactive approach. "Using Amazon Monitron has helped us change the paradigm from unplanned, unexpected, critical failures to near-real-time monitoring of critical systems," says Krizay Elenitoba-Johnson, site director at Baxter's manufacturing facility in Alabama. "We can convert unplanned equipment downtime into planned and well-managed outcomes."

"The time to value has been incredibly quick and has added momentum to Baxter's digital transformation efforts. Amazon Monitron has given us the actionable data needed to maintain the thousands of manufacturing assets in our facilities, allowing us to predict and preempt unplanned equipment downtime," says Karan. "This gives us a big advantage in creating reliable and sustainable supply for our customers, which is especially critical given supply chain challenges being felt across the industry."

Outcome | Scaling Globally with a Predictive Maintenance Strategy

Based on the success so far, Baxter plans to scale its use of Amazon Monitron to cover its complete network of 70 manufacturing sites worldwide in a few years. Baxter expects the deployment to continue creating a cultural change at its facilities as it implements a predictive maintenance program and advances the company's digital transformation to continue using data and insights to improve business processes.

AWS Services Used
Amazon Monitron is an end-to-end condition monitoring system that uses machine learning to automatically detect abnormal conditions in industrial equipment and lets users implement predictive maintenance to reduce unplanned downtime.
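Amazon Monitron is operated through its mobile app, but measurement data can also be exported to an Amazon Kinesis data stream for downstream analysis. The sketch below is a generic Kinesis consumer under that assumption; the stream name and the payload field names are hypothetical illustrations, not the documented Monitron export schema, which should be taken from the service documentation.

# Generic Kinesis consumer sketch (assumed stream name "monitron-export"); the JSON
# fields checked here are illustrative placeholders.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

stream = "monitron-export"
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream, ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

while True:
    out = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in out["Records"]:
        event = json.loads(record["Data"])
        # Illustrative check: surface anything flagged as abnormal for a technician.
        if event.get("assetState", {}).get("newState") == "ALARM":
            print("Investigate asset:", event.get("assetName"))
    iterator = out["NextShardIterator"]
    time.sleep(1)  # avoid hammering the shard between polls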
Improving Patient Outcomes Using Amazon EC2 DL1 Instances _ Leidos Case Study _ AWS.txt
Leidos Improves Patient Outcomes Using Amazon EC2 DL1 Instances

Learn how Leidos improved patient outcomes while saving 66 percent on costs to train ML models using Amazon EC2 DL1 Instances.

About Leidos
Leidos is a science and technology solutions leader working to address some of the world's challenges in the defense, intelligence, homeland security, civil, and healthcare markets. It has more than 400 locations in 30 countries.

Overview
66% cost savings for model training
60% better price performance
Cut model training time from 8 hours to less than 1 hour for about 2,200 cases a day
Increased speed for claim processing
95–97% precision score, compared with 72% using the hybrid solution

Leidos, a science and technology solutions leader, builds machine learning (ML) applications that accelerate the ability of public and private health organizations, like the US Department of Veterans Affairs (VA), to get patients the medical care that they need. However, the company's traditional on-premises infrastructure made it challenging to achieve the performance and cost efficiencies needed by complex ML applications that use large datasets. So Leidos sought advanced compute solutions on Amazon Web Services (AWS) to cost-effectively build ML applications that automate the manual processes of health organizations and help them accelerate diagnosis and treatment of patients.

Leidos had extensively used Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute solution, and other AWS services that support Amazon EC2. In late 2021, after careful consideration, the company chose to migrate its ML workloads from its on-premises infrastructure to the new Amazon EC2 DL1 Instances. These instances are powered by Gaudi accelerators from Habana Labs, an Intel company and AWS Partner, and deliver low cost-to-train deep learning models for natural-language processing and computer-vision use cases. By migrating its ML development to these instances, Leidos improved performance and decreased compute costs so that its customers could reap greater returns on investment while minimizing manual tasks.

Opportunity | Using Amazon EC2 DL1 Instances to Cost-Effectively Automate Claims Processing

Leidos provides technology solutions across civil, defense, health, and intelligence sectors. It serves federal health agencies, including the VA and the US Food and Drug Administration (FDA), and commercial organizations, such as hospitals and clinics. QTC, a Leidos subsidiary, is the largest provider of disability and occupational health exam services for veterans, operating 65 US clinics and a network of more than 12,000 private care providers. Processing veterans' disability claims requires a lot of paperwork: each veteran has to fill out the right disability questionnaire for their claim, which includes prescriptions and medical notes. "Speed and accuracy matter," says Chetan Paul, vice president of technology and innovation federal health at Leidos. "A delay in processing the claim for a veteran is a delay in getting the right medical care for that veteran."

Previously, QTC processed claims both manually, using human reviewers, and automatically, using a hybrid environment of virtual machines and Amazon EC2 instances to manage large workloads and datasets. However, that hybrid approach wasn't fast enough to process the huge volumes and variety of data involved in claims processing—including images, scientific literature, publications, and text—nor was the price performance optimal for customers' return on investment.

To improve the speed and cost efficiency of automating claims processing, Leidos became an early adopter of Amazon EC2 DL1 Instances, available on AWS since October 2021. Because Amazon EC2 DL1 Instances feature eight Gaudi accelerators, each with 32 GiB of high bandwidth memory, they would support Leidos in distributing customers' training jobs across instances, reducing model training time and cost.

Solution | Using Amazon EC2 DL1 Instances to Cut Model Training Costs for Leidos by 66%

In July 2021, Leidos first piloted the instances in a stand-alone on-premises environment provided by Habana Labs, verifying the instances' cost-performance ratio and suitability for computer-vision and natural-language processing use cases. In November 2021, the company proposed developing a pilot using Amazon EC2 DL1 Instances for the VA because the agency was already using AWS as a security-approved Authority to Operate environment. From January to August 2022, Leidos set up the Amazon EC2 DL1 Instances, trained and refined the deep learning models, performed demos, and incorporated feedback from the VA. The setup is expected to go live by the end of 2022, just 1 year after the project started. "For large federal agencies like the VA to move at that speed is significant," says Paul. "Amazon EC2 DL1 Instances were seamless from both a technology-setup and a development perspective."

The Leidos team has piloted two use cases on Amazon EC2 DL1 Instances. For the FDA, it developed a pilot to show how a neural network for image processing could be used to analyze chest X-rays of patients with COVID-19 and detect pneumonia early. The second use case applied natural-language processing, using a DistilBERT model, to accelerate claims processing. "With every new technology, we anticipate a steep learning curve," says Paul. "However, with the extensive user documentation, developer-portal use cases, study guides, and sample code from AWS and Habana Labs, learning was accelerated. Our customer saw that there are plenty of resources and support."

Now Leidos sees 60 percent better price performance and cost savings of 66 percent on model training compared with the on-premises infrastructure, without compromising processing speed or accuracy. The company also reduced model training time from 8 hours to less than 1 hour for about 2,200 cases per day by distributing the training workloads across Amazon EC2 DL1 Instances. "It's a great benefit to distribute workloads across Amazon EC2 DL1 Instances and aggregate the outcomes," says Paul. "That scalability is important for our customers that expect their workloads, but not necessarily their workforce, to increase over time."

By taking advantage of the distributed computing capabilities of the eight Gaudi accelerators in each Amazon EC2 DL1 Instance and scaling the compute by adding instances as required, Leidos can train models with more data, increasing the F1 score, a combined measure of precision and recall. On traditional hybrid Amazon EC2 environments, the models had a maximum F1 score of 72 percent. By training on Amazon EC2 DL1 Instances, Leidos increased the F1 score to 95–97 percent. "This makes the reviewers' lives so much easier," says Paul. "It eliminates the fatigue and error from a manual review process, and workforce efficiency and productivity jumped: reviewers can process 40 claims in the time that it took to process 1 before. The veterans get to their claims and healthcare much faster."

Outcome | Applying Amazon EC2 DL1 Instances and ML to Other Use Cases

Leidos plans to use Amazon EC2 DL1 Instances for other use cases, such as electronic health record processing, for the VA, the FDA, and the National Institutes of Health. Amazon EC2 DL1 Instances are well suited for analyzing image data for the FDA's Center for Devices and Radiological Health and for research on the lungs of patients with COVID-19. "At Leidos, we rank our solutions to our customers using the parameters of speed, scale, security, and usability," says Paul. "Our solution on Amazon EC2 DL1 Instances checks all the boxes."

AWS Services Used
Amazon EC2 DL1 Instances, powered by Gaudi accelerators from Habana Labs (an Intel company), deliver low cost-to-train deep learning models for natural language processing, object detection, and image recognition use cases.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
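As a hedged illustration of how training capacity like this can be provisioned programmatically, the sketch below launches two dl1.24xlarge instances with the AWS SDK for Python for a distributed training run. The AMI, key pair, subnet, and security group IDs are placeholders; in practice you would select a Habana-enabled deep learning AMI and your own networking, and the training framework itself (for example, PyTorch with Habana's plugins) runs on the instances afterward.

# Illustrative only: provision a small fleet of DL1 instances. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder for a Gaudi-enabled deep learning AMI
    InstanceType="dl1.24xlarge",        # 8 Gaudi accelerators, 32 GiB HBM each
    MinCount=2,
    MaxCount=2,                         # two instances for a distributed training job
    KeyName="training-keypair",
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Project", "Value": "claims-nlp-training"}],
    }],
)
for instance in response["Instances"]:
    print("Launched", instance["InstanceId"])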
Improving Search Capabilities and Speed Using Amazon OpenSearch Service with ArenaNet _ ArenaNet Case Study _ AWS.txt
Improving Search Capabilities and Speed Using Amazon OpenSearch Service with ArenaNet

Learn how online game developer ArenaNet optimized search functionality for players using Amazon OpenSearch Service.

About ArenaNet
Based in Bellevue, Washington, ArenaNet is a video game developer best known for the popular massively multiplayer online role-playing franchise Guild Wars.

Overview
2-second response time for complex search queries
20 million+ uses of improved search functionality
50% reduction in data warehouse costs
Near-100% game uptime maintained

ArenaNet is the developer of the Guild Wars franchise, including one of the most popular massively multiplayer online role-playing games (MMORPGs) in the world, Guild Wars 2. The company sought to optimize the functionality of a unique feature of the game: its direct integration with wiki pages that provide a comprehensive online reference source, written by Guild Wars players. Players were requesting additional features, and ArenaNet wanted a cloud-based data warehouse with the speed and agility to respond to record numbers of users. As its current solution became increasingly expensive to maintain, the company's small engineering team looked for a more cost-effective managed solution. ArenaNet turned to Amazon Web Services (AWS) and improved the speed and syntax capabilities of its search tools for users while cutting its costs by 50 percent and strengthening the durability of its data warehouse by using Amazon Redshift, which uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes.

Opportunity | Using Amazon OpenSearch Service to Enhance the Player Experience for ArenaNet

Founded in 2000 and acquired by NCSoft in 2002, ArenaNet released the MMORPG Guild Wars in 2005 without a monthly subscription fee. Players go on quests with other players online, exploring fantasy worlds as characters that they create and design themselves, including customizing their outfits and equipment. By 2010, the company had sold nearly 6.5 million copies worldwide. It released Guild Wars 2 in August 2012 and sold 3.5 million copies in its first year to become the fastest-selling MMORPG up to that point.

A unique aspect of the game is the ability of players to consult an accompanying Guild Wars wiki, a massive online reference source available through a browser or by typing "/wiki" and clicking an object within the game. Users contribute to and edit the wikis' nearly 280,000 pages, detailing information about the characters, storylines, and other game content. ArenaNet needed a backend solution that could handle the increasing scale and complexity of the five wikis related to Guild Wars. More than 14,000 editors manage pages available in English, German, French, and Spanish. "Modern MMORPGs are really complicated and filled with features, and the wiki makes the game way more accessible," says Stephen Clarke-Willson, vice president of engineering at ArenaNet. "It's like, if you go to a distant country without a travel guide, you don't know what's going on. The wiki has become an organic part of the game."

Solution | Adding Capabilities and Improving Efficiency for Game Players While Cutting Costs by 50%

Guild Wars players had asked ArenaNet to add search features to help them navigate the complexity of the information on the wiki pages. ArenaNet had been using MediaWiki, free open-source software, to process, store, and display information for wiki users. As the Guild Wars wikis continued to grow in scope and complexity, the MediaWiki built-in search engine could not keep up with use that reached up to 400 searches per second. At the users' request, in September 2021, ArenaNet implemented Amazon OpenSearch Service, an open-source distributed search and analytics suite derived from Elasticsearch. ArenaNet installed the specific MediaWiki extensions that let the wikis communicate with Amazon OpenSearch Service. Using Amazon OpenSearch Service, ArenaNet could index wiki content for faster search results while also offloading the search processing from the wikis' web and database servers onto the dedicated Amazon OpenSearch Service servers. Further, instead of having to spin up multiple clusters to handle a search engine that would at times fall over under heavy loads, ArenaNet worked proactively alongside the Amazon OpenSearch Service team to find work-arounds that streamlined communication between MediaWiki and the AWS service. "After we did that, it was basically plug and play," says Justin Lloyd, Linux engineer at ArenaNet.

Using AWS managed solutions like Amazon OpenSearch Service, ArenaNet reduces the management, monitoring, and maintenance of the wiki pages, which had previously been the responsibility of a single engineer. Plus, because Amazon OpenSearch Service places the database name at the beginning of each key, all the Guild Wars wiki pages share one large cluster instead of requiring the engineer to generate multiple clusters to optimize users' searches. "Having that single managed Amazon OpenSearch Service cluster was incredibly helpful in spinning up functionality in a relatively short timeframe," says Lloyd.

ArenaNet added search functionality, expanded syntax capabilities, and greatly improved the speed of searches for players. "It doesn't sit there and churn," says Mitch Sickler, systems engineering manager at ArenaNet. "Users immediately get a return of whatever they searched for." For example, a user's search for a character quote used to take so long that the server would time out after 1 minute. "After Amazon OpenSearch Service was working and everything was indexed properly, that same search would take 2 seconds, if that," says Lloyd.

To further improve querying efficiency and save costs, in January 2022 ArenaNet changed its cloud-based data warehouse solution to Amazon Redshift. The team migrated 100 TB to Amazon Redshift while cutting its costs by 50 percent. Amazon Redshift helped alleviate significant performance issues from ArenaNet's previous data warehouse solution, which cost more and performed slower because of high search loads, increased traffic, and other factors. "What we like about Amazon Redshift is that it gets less expensive and better over time," says Clarke-Willson.

ArenaNet has maintained near-100 percent game uptime alongside in-person help from AWS engineers and online support. "They've been great at assisting us in what we're trying to accomplish," Sickler says. "They strive to anticipate potential friction when we have big releases and try to get ahead of any issues. I'm super appreciative of that."

Outcome | Continuing to Optimize the Player Experience

The backend changes to the Guild Wars wiki have prompted overwhelmingly positive comments from players on social media. "We see how grateful people are to have the wikis by how much activity the wikis get," says Lloyd. ArenaNet plans further optimizations to the speed and functionality of its search capabilities, which have been used more than 21 million times. The company is also looking into using Amazon OpenSearch Service for observability so that it can centralize and better analyze logs generated by MediaWiki. "Using Amazon OpenSearch Service helps our search functions to work so much better and be much more powerful," Lloyd says. "We don't have to manage it ourselves, which is another huge benefit."

AWS Services Used
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch.
Amazon Redshift uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.
Improving Transportation with Mobility Data Using Amazon EMR and Serverless Managed Services _ Arity Case Study _ AWS.txt
Arity Improves Transportation with Mobility Data Using Amazon EMR and Serverless Managed Services

Learn how Arity modernized its data collection infrastructure using Amazon EMR.

About Arity
Arity is a mobility and data analytics company that focuses on improving transportation. The company helps to better understand and predict driving behavior at scale and delivers those insights using solutions that help companies deliver smarter, safer, and more economical services to consumers.

Overview
30% reduction in monthly infrastructure costs
20% reduction in Amazon EC2 hours
Modernized to a fully managed architecture
Improved use of smart technologies
Drives innovation

Arity, a mobility and data analytics company that focuses on improving transportation, wanted to modernize its data collection infrastructure. Arity collects large amounts of driving data and uses predictive analytics to build solutions with the goal of turning that data into behavioral insights to make transportation smarter and safer for everyone. Since its inception, Arity has collected and analyzed more than a trillion miles of driving data. Looking to improve its data infrastructure, Arity decided that by deepening its use of Amazon Web Services (AWS), it could more efficiently use smart technologies while managing costs. Arity began its modernization by migrating to Amazon EMR, a cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks. Arity uses Amazon EMR for data science and analytics use cases, empowering the company to process and access data that is used to make informed business decisions. As a managed solution, Amazon EMR simplified the overhead of running infrastructure and gave Arity options to reduce total cost of ownership, including the overhead required to run its compute instances. Using Amazon EMR and other AWS services, Arity reduced by 20 percent the number of hours it needed to manage on Amazon Elastic Compute Cloud (Amazon EC2), secure and resizable compute capacity for virtually any workload, resulting in compute cost savings.

Opportunity | Improving the Use of AWS Services to Reduce Instance Needs for Arity by 20 Percent

Founded in 2016 by The Allstate Corporation, Arity uses telematics to collect and analyze driving data to better understand and predict driving behavior. Telematics refers to the integrated use of communications and information technology to transmit, store, and receive information from telecommunications devices and send it to remote objects over a network. Arity uses that collected and analyzed driving data to help companies make informed choices and reduce costs, including costs for insurance companies, mobile app providers, cities and their departments of transportation, marketers, and more.

Already on AWS, Arity wanted to better use these services to modernize its data infrastructure and architecture with the goal of freeing up developer resources and reinvesting them in its business to drive innovation. Ultimately, Arity knew that achieving these goals would reduce challenges associated with managing IT infrastructure, such as clusters. "The overhead of maintaining our infrastructure was becoming an operational burden," says Reza Banikazemi, director of system architecture at Arity. To reduce its operational overhead and better allow its team to focus on delivering business outcomes, Arity decided to move from its self-managed processes to managed offerings on AWS.

Solution | Modernizing Infrastructure to Free Resources and Focus on Business

Arity implemented a two-pronged approach to its modernization. First, to help prevent disruption of its road map and get the most value, it chose AWS services that fit well within its existing architecture, which meant that Arity could shift efficiently to the new solution. Second, while Arity was focused on migrating its existing infrastructure, it started changing its architectural approach so that it could use its new solution from the beginning of product development.

Arity was facing operational challenges associated with maintaining Kafka clusters: keeping them up to date with the latest security patches and bug scans and diagnosing the clusters when issues arose. To move away from having to keep detailed knowledge of individual services and to increase focus on its business logic, Arity transitioned to Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it simple to ingest and process streaming data in near real time with fully managed Apache Kafka. Using Amazon MSK to manage Kafka, Arity reduced operational overhead and associated costs by taking advantage of automatic scaling to use clusters more efficiently, such as by reducing cluster idle time during periods of lower use. Arity's modernization reduced monthly infrastructure costs by 30 percent, and the cost per trip connection decreased by 36 percent. These savings mean that the company can better devote its resources to core business needs instead of self-managing its telematics solution.

Arity uses the self-managing ability of Amazon Kinesis Data Analytics to transform and analyze streaming data in near real time using Apache Flink. On Amazon Kinesis Data Analytics, Arity generates driving behavior insights based on collated driving data. As a bridge between data analysis on Amazon EMR and near-real-time data analysis, and to connect data streams, Arity uses Amazon Kinesis Data Firehose, an extract, transform, load service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. Arity takes data from its streaming infrastructure, pulls it into Amazon Simple Storage Service (Amazon S3)—an object storage service offering scalability, data availability, security, and performance—for downstream processing, and then accesses the data from Amazon S3 using Amazon EMR and Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

AWS offers support that helps Arity understand and use its products. "We receive great support from the teams at AWS," says Banikazemi. "When we need something, they are within reach." Arity views training as an investment in its team that enhances its architecture, and it takes advantage of the personalized training opportunities offered by AWS. The company recently held a well-received training event and plans to offer more training in the future.

Outcome | Driving Down Management Burden

Modernizing its architecture has led Arity to increase its development capacity because of lower solution management overhead. Developers can better focus on their jobs, innovate faster, and improve product time to market. Arity also adds improvements to its products faster and identifies and resolves events sooner. "We can now solve customer challenges in weeks, where before it would have taken quarters," says Banikazemi.

Going forward, Arity hopes to expand its use of AWS serverless technologies to eliminate the need to manage servers so that it can reduce infrastructure management tasks, implement automatic scaling, and optimize costs. "Working on AWS has been great. We made a lot of good strides this year, and we're looking forward to continuing it next year," says Banikazemi.

AWS Services Used
Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) makes it easy to ingest and process streaming data in real time with fully managed Apache Kafka.
Amazon Kinesis Data Firehose is an extract, transform, and load (ETL) service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
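A hedged sketch of the pipeline pattern described above: streaming records are delivered to Amazon S3 through a Kinesis Data Firehose delivery stream and later queried with Amazon Athena using standard SQL. The stream, database, table, and bucket names are hypothetical placeholders, not Arity's actual resources.

# Minimal sketch under assumed resource names; the Firehose delivery stream is assumed
# to be configured with an S3 destination, and an Athena/Glue table over that prefix.
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")
athena = boto3.client("athena", region_name="us-east-1")

# 1. Deliver a driving-event record to S3 via Firehose (newline-delimited JSON).
event = {"trip_id": "trip-123", "speed_kph": 62.5, "hard_brake": False}
firehose.put_record(
    DeliveryStreamName="driving-events-to-s3",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)

# 2. Later, query the accumulated data in S3 with Athena.
athena.start_query_execution(
    QueryString="""
        SELECT trip_id, COUNT(*) AS hard_brakes
        FROM driving_events
        WHERE hard_brake = true
        GROUP BY trip_id
    """,
    QueryExecutionContext={"Database": "telematics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)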
Increasing Reach and Reliability of Healthcare Software by Migrating 300 Servers to AWS in 6 Weeks _ Mayden Case Study _ AWS.txt
Mayden Increases Reach and Reliability of Healthcare Software by Migrating 300 Servers to AWS in 6 Weeks

Learn how Mayden migrated mental health services software to AWS in 6 weeks with minimal downtime using AWS Application Migration Service.

Healthcare technology company Mayden wanted to migrate to a new cloud provider but needed to do so without disrupting service for patients and care providers. Located in the United Kingdom, Mayden provides technology for mental healthcare services as part of the National Health Service (NHS) and NHS's Improving Access to Psychological Therapies (IAPT) program. Mayden migrated to Amazon Web Services (AWS) using AWS Application Migration Service, which minimizes time-intensive, error-prone manual processes by automatically converting customer source servers to run natively on AWS. Using AWS Application Migration Service, Mayden migrated 300 servers to AWS in 6 weeks with minimal downtime. Now, Mayden is expanding to new Regions, and it has already used AWS to build infrastructure for new services in Canada in only 10 days.

- 300 servers migrated in 6 weeks using AWS Application Migration Service
- No downtime during cross-cloud replication of servers
- 98% faster to build database servers on AWS
- Minimized the need for code changes to legacy applications

"Using AWS Application Migration Service, we rehosted the more complicated legacy parts of our application very quickly and with no downtime."
Tom Dawson, Product Owner for the Systems Team, Mayden

About Mayden
Mayden is a UK healthcare technology company creating digital technology that changes what's possible for clinicians and patients. Its flagship solution, iaptus, is an EHR system for mental health services.

Opportunity | Migrating to AWS to Facilitate Growth for Mayden
Founded in 2000, Mayden launched the iaptus patient-management system in 2008 as part of the pilot program for what became NHS IAPT. Today, iaptus supports 65 percent of all referrals to the NHS IAPT service; in 2021 alone, iaptus facilitated the care of 1.2 million out of the 1.8 million total referrals. The solution serves as an electronic health record (EHR) management system and hosts online patient services, such as appointment booking, self-referrals, and integrated video appointments. All iaptus services are delivered through the cloud.

The company was not satisfied with the level of stability it was experiencing, and it recognized that its previous cloud provider could not support Mayden in its next phase of growth. Mayden needed to find a new provider and migrate its servers in a way that caused as little disruption to its healthcare clients as possible. In August 2021, Mayden's technical team began assessing the benefits of moving to a cloud hyperscaler. The company searched for a new provider, expecting to meet with a lot of faceless websites. Instead, when Mayden approached AWS in November 2021, the team was greeted by people who provided personalized service and swiftly connected them with the experts and answers they needed.

Solution | Rehosting 300 Servers in 6 Weeks with Minimal Downtime Using AWS Application Migration Service
To migrate efficiently while still supporting patient services, Mayden joined the AWS Migration Acceleration Program (AWS MAP), a program to build strong cloud foundations, reduce risk, and offset the initial cost of migrations. Mayden also worked with Sourced, an AWS Partner, which offered expertise and augmented Mayden's DevOps team during the migration. "We wouldn't have gotten this done as quickly and with the low amount of downtime that we had if we hadn't had the support of AWS MAP and worked with the Sourced team," says Tom Dawson, product owner for the systems team at Mayden.

After migrating test workloads at the end of April through May, Mayden migrated live workloads to AWS from June through mid-July 2022. It rebuilt about 40 percent of its servers using AWS-native services and cloud tools, such as Terraform, an open-source infrastructure-as-code service. The other 60 percent were rehosted to AWS with no downtime using AWS Application Migration Service, which moved these legacy applications to AWS with minimal or no changes to the code or core architecture. "Using AWS Application Migration Service, we rehosted the more complicated legacy parts of our application very quickly and with no downtime," says Dawson. "The fact that the service runs entirely within the operating system meant that we didn't need to get into the underlying physical infrastructure to do the replication."

To further accelerate its migration, Mayden used AWS Cloud Migration Factory, an orchestration solution powered by AWS Application Migration Service that coordinates and automates large-scale migrations to AWS. Using this solution, Mayden migrated groups of 30 machines at once.

Outcome | Increasing Access to Innovative, Reliable Mental Health Services
As NHS IAPT services are increasingly offered virtually, iaptus supports 200 mental health services with 40,000 users in the application. Even before the migration, an NHS-commissioned survey of users rated iaptus at 80.1 percent for reliability and responsiveness, compared with the NHS average of 58.1 percent. "Despite not having the level of stability that we might have liked from our former provider, we did a good job mitigating what we could," says Rebecca Prestland, business development and marketing strategist at Mayden. "Given how important it is for our system to be available, fast, and responsive, we're excited to see how that rating will improve now that we're on AWS."

Since migrating to AWS, the availability and reliability of Mayden's service has improved. "The stability is notable," says Chris Eldridge, director of operations at Mayden. "Since migrating to AWS, we haven't had any major service issues." It's crucial for Mayden's solution to be available 24/7 because people might need to access its applications, such as self-referrals, at any time of day. "If you're working in mental health, you're always aware of the importance of your system being online and available," says Eldridge. "If somebody can't access a patient's record when they need to, we're aware of the weight of that responsibility."

Using AWS, Mayden's IT team can do its job faster. Building the 75 database servers that make up a key part of Mayden's infrastructure—a task that previously would have taken hours—took 2 minutes on AWS. "The speed of AWS is astonishing. The ability to create infrastructure that quickly makes such a massive difference to our small DevOps team," says Eldridge. Mayden also uses managed services on AWS—including Amazon Route 53, a Domain Name System (DNS) web service; AWS Client VPN, a fully managed remote access VPN solution; and Elastic Load Balancing, which distributes network traffic. Using these services frees up the team to concentrate on building and supporting its applications.

A few weeks after completing the UK migration, Mayden used the tools and knowledge it had gained to build a new environment in Canada. The infrastructure, which will support mental health and addictions services, took only 10 days to build. After this system is launched, Mayden will begin building new infrastructure in another geographic location. It will also apply its learnings to consolidate its infrastructure in Australia onto AWS.

Mayden is growing and has ambitious plans. The company is exploring AWS machine learning tools to analyze the data collected in the IAPT program to drive better outcomes for patients. The team is also expanding into physical health services. "We believe that tech has an important role to play in creating sustainable healthcare systems," says Prestland. "The migration to AWS was an important move for Mayden to support us strategically as we continue to grow."

AWS Services Used
- AWS Application Migration Service: minimizes time-intensive, error-prone manual processes by automatically converting your source servers to run natively on AWS. It also simplifies application modernization with built-in, post-launch optimization options.
- AWS Migration Acceleration Program (AWS MAP): a comprehensive and proven cloud migration program based on AWS's experience migrating thousands of enterprise customers to the cloud.
- AWS Cloud Migration Factory: coordinates and automates large-scale migrations to the AWS Cloud involving numerous servers, helping enterprises improve performance and prevent long cutover windows by automating manual processes and integrating multiple tools efficiently.
- Amazon Route 53: a highly available and scalable Domain Name System (DNS) web service that connects user requests to internet applications running on AWS or on premises.
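The case study does not spell out how traffic was redirected at cutover, so the following boto3 sketch is only a generic illustration of one common pattern: once rehosted servers pass testing, an Amazon Route 53 record is repointed at a load balancer running on AWS. The hosted zone ID, record name, and target endpoint are hypothetical.

    import boto3

    # Generic illustration of a DNS cutover step, not Mayden's documented
    # procedure: repoint the application's hostname at a load balancer running
    # on AWS. Zone ID, record name, and target are hypothetical.
    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0HYPOTHETICAL",
        ChangeBatch={
            "Comment": "Cut application traffic over to AWS",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com.",
                        "Type": "CNAME",
                        "TTL": 60,  # short TTL so the change propagates quickly
                        "ResourceRecords": [
                            {"Value": "my-alb-123456.eu-west-2.elb.amazonaws.com"}
                        ],
                    },
                }
            ],
        },
    )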
Increasing Sales Opportunities by 83 Working with AWS Training and Certification with Fortinet _ Case Study _ AWS.txt
Increasing Sales Opportunities by 83% Working with AWS Training and Certification with Fortinet

Learn how cybersecurity company Fortinet increased sales opportunities by 83 percent and empowered its global salesforce with AWS Training and Certification.

The multinational cybersecurity company Fortinet needed a scalable way to educate its global salesforce so that employees could more knowledgeably talk to customers about their use of Amazon Web Services (AWS). Fortinet, an AWS Partner, also wanted to enrich its co-sell opportunities by aligning its IT nomenclature with that of AWS. It turned to AWS Partner Training and Certification to develop structured programs that could provide guidance and education to help customers along their cloud journeys. More than 500 Fortinet salespeople voluntarily participated in the program, resulting in a more thorough understanding of customers' needs and 83 percent more sales opportunities.

- 83% increase in total sales opportunities in the second year of the training program
- 167% increase in launched-won sales opportunities in the first year of the training program
- 857% increase in AWS Certified Cloud Practitioner certification over the previous period
- 242 accreditations achieved in 12 months

"The overarching value of AWS Training and Certification is that it gives an employee a much more rounded view of the customer outcome, how they are using the cloud and transforming their business."
Marty Hess, Regional Vice President for Cloud Alliances and Ecosystem Strategy, Fortinet

About Fortinet
Founded in 2000 in California, Fortinet is a global cybersecurity company serving nearly 600,000 customers in diverse industries. Its customers use Fortinet Security Fabric to protect users, devices, and applications across all network edges.

Opportunity | Working with AWS Training and Certification to Develop Scalable Training Programs for Fortinet
Fortinet serves nearly 600,000 customers in diverse industries, such as manufacturing, education, and healthcare. Many of Fortinet's customers use AWS and want to maximize their productivity while using Fortinet's cybersecurity solutions. Roughly two-thirds of Fortinet's revenue comes from overseas, and the organization needed to deliver a consistent knowledge base across different countries and industries.

In 2013, Fortinet joined the AWS ISV Accelerate Program, a co-sell program for organizations that provide software solutions that run on or integrate with AWS. In 2014, it placed its first listing on AWS Marketplace, a digital catalog where companies can find, test, buy, and deploy software that runs on AWS. Since then, its presence has grown to nearly 50 listings and 18,400 unique and active subscriptions, while the number of Fortinet employees tripled. In 2019, Fortinet started talking to AWS about using AWS Training and Certification to create a structured, scalable approach to training business development representatives (BDRs), the first line of contact with prospects who have expressed an interest in using Fortinet solutions.

Solution | Educating the Salesforce on Cloud Operations and Co-Sell Opportunities
In mid-2021, AWS and Fortinet launched the first round of voluntary AWS Training and Certification programs. The program focused in part on providing an overall understanding of AWS through courses such as AWS Cloud Practitioner Essentials, which addresses cloud concepts, AWS services, security, architecture, pricing, and support. In fact, 64 Fortinet employees—including 40 BDRs—earned AWS Certified Cloud Practitioner certification, which helps organizations identify and develop talent with critical knowledge related to implementing cloud initiatives. "AWS Training and Certification helped us better understand the different storage capabilities, compute capabilities, and overall breadth of the AWS portfolio," says Stephen Clark, cloud security sales director at Fortinet. "It was an eye-opening experience."

As an incentive to earn AWS Certified Cloud Practitioner certification, the company rewarded successful employees with sponsored participation in AWS re:Invent, an annual learning conference hosted by AWS for the global cloud computing community. "BDRs look at their training as a linchpin for their career," Clark says. "It's a great feather in their cap as they look to advance through the different roles at Fortinet."

From the beginning, course offerings had included iterations of what is now AWS Partner: Sales Accreditation, which provides best practices for co-selling with AWS and elucidates the factors that drive customer cloud adoption. Over time, Fortinet's training programs increasingly began to emphasize the co-sell model. The third iteration of the program included AWS Partner: Cloud Economics Accreditation, which focuses on the cost dynamics and other business cases for migration from on-premises solutions to the cloud. The course helped Fortinet sellers grasp the nuances of the co-sell model and how it differs from traditional IT. "That helps us understand not only our technical role helping to secure customers in the cloud but also what their expectations are from a financial standpoint," says Marty Hess, regional vice president for cloud alliances and ecosystem strategy at Fortinet.

Fortinet also added Co-Selling with AWS for ISV Partners, a course designed to articulate the value of the co-sell model. Fortinet BDRs received an overview of the AWS field structure, best practices on co-selling, and a greater understanding of the motivation of AWS field teams. Fortinet also customized training through regular engagement of relevant guest speakers, such as an AWS sales representative who gave recommendations on how to engage with AWS for the mutual benefit of customers.

Since 2021, more than 500 Fortinet employees have registered for courses. Including the reuse of recorded assets, 197 individuals have received a total of 242 accreditations, and 230 people have completed virtual classroom training sessions. "Now, BDRs are so much better at nurturing the leads that come in," says Mishel Fletcher, director of cloud alliance marketing at Fortinet. "They are so much more confident in their discussions with prospects."

Outcome | Growing Business with the Help of AWS Training and Certification
Fortinet has seen continual business growth since launching its AWS Training and Certification programs, creating 54 percent more sales opportunities in 2021 and 83 percent more in 2022. Launched opportunities, a measure of customers who began to use a service through AWS Marketplace, increased by 167 percent in 2021.

Fortinet also uses AWS Training and Certification programs to showcase potential career paths to job candidates, and employees who have gone through AWS Training and Certification serve as mentors. They help new hires learn to work efficiently through the APN Customer Engagements (ACE) program, which lets AWS Partners securely collaborate and co-sell with AWS, drive successful engagements with their customers, and grow their businesses. Fortinet uses ACE to track customer engagements and sets goals against the metrics for global teams.

Fortinet Solutions Architects, who support sellers and their customers, have witnessed the success of AWS Training and Certification for the growing sales team and want to adapt the program for themselves. The company plans to continue rolling out structured programs, hoping to increase company-wide buy-in. "The overarching value of AWS Training and Certification is that it gives an employee a much more rounded view of the customer outcome, how they are using the cloud and transforming their business," says Hess. "That's what we're trying to do: improve the better-together story as it relates to AWS and Fortinet and what value we bring to our joint customers."

AWS Services Used
- AWS Training and Certification: propel your organization with cloud fluency; content is created by experts at AWS and updated regularly so you can keep your cloud skills fresh.
- AWS ISV Accelerate Program: a co-sell program for organizations that provide software solutions that run on or integrate with AWS. The program helps drive new business and accelerate sales cycles by connecting participating independent software vendors (ISVs) with the AWS Sales organization.
- AWS Marketplace: find, test, buy, and deploy software that runs on AWS.
- APN Customer Engagements (ACE) program: securely collaborate and co-sell with AWS, drive successful engagements with customers, and grow your business.
Increasing Scalability and Data Durability of Television Voting Solution Using Amazon MemoryDB for Redis with Mediaset _ Mediaset Case Study _ AWS.txt
Increasing Scalability and Data Durability of Television Voting Solution Using Amazon MemoryDB for Redis with Mediaset

Learn how Mediaset, in the media and entertainment industry, scaled to support over five million votes during the finale of its most popular television show using Amazon MemoryDB for Redis.

Just weeks before the finale of its most popular television show, Italian mass media company Mediaset needed to migrate its on-premises voting solution to a cloud infrastructure. Mediaset expected a high volume of traffic and needed a scalable solution. Television engagement can be unpredictable, and the company had recently increased the number of votes that each viewer could submit. Mediaset chose Amazon Web Services (AWS) because of the flexibility, scalability, and ease of implementation that AWS offers. Using Amazon MemoryDB for Redis—a Redis-compatible, durable, in-memory database service for ultra-fast performance—Mediaset replaced its on-premises architecture in 30 days and successfully received more than five million votes during the finale.

- Supported more than five million votes in the Amici finale
- Achieved data durability to meet government requirements
- Achieved 30-day implementation time
- Reduced costs by scaling to meet variable demand
- Saved time for the team with managed services

"Using services like Amazon MemoryDB for Redis and the expertise provided by AWS Enterprise Support, we can rapidly build prototypes and test architecture in a few days, which we couldn't have done without using AWS services."
Daniele Curci, Software Engineer and Solution Architect, Mediaset

About Mediaset
Founded in 1993, Mediaset is a large commercial broadcaster based in Italy that produces and distributes television drama, film, news, sports, and multimedia content. The Mediaset Infinity streaming service provides live channels and movie streaming to viewers across Italy and around the world. Amici, its most popular television show, draws millions of viewers to vote for contestants who sing, act, and dance for a prize.

Opportunity | Using Amazon MemoryDB for Redis to Support Traffic Spikes During Voting Sessions for Mediaset
Mediaset's most popular show, Amici, is a talent show in which teenagers sing, act, and dance to compete for a prize. Viewers can vote five times at set intervals throughout the show from a mobile application, website, or connected television. Because of traffic spikes during these 10- to 15-minute voting periods, Mediaset's on-premises solution experienced performance issues, causing delays and errors that impacted the customer experience. Mediaset started comparing cloud alternatives in April 2022 and chose AWS because it was already using AWS in other areas and knew the solution would be scalable and quick to deploy. "Time was a big factor for us," says Marco Reni, technical project manager and architect at Mediaset. "The request to handle the voting for the finale came in shortly before the event, and we can't move scheduled television programs. The show must go on."

The company met with experts from AWS throughout the implementation process. Mediaset designed the solution to meet various requirements, such as limiting the number of votes each viewer could submit and validating the user location. The company had to work quickly so that the solution could go live in May 2022 for the final episode of season 21 of Amici. "The AWS team understood our urgency and went over the top," says Reni. "From a technical point of view, it was really useful to have the AWS team's expertise on Amazon MemoryDB for Redis while we were implementing the architecture. Furthermore, AWS Enterprise Support was always available to resolve any last-minute doubt."

Solution | Collecting Over Five Million Votes During a Popular Television Finale Using Amazon MemoryDB for Redis and AWS Fargate
The key requirement for Mediaset's voting solution was scalability so that the company could handle the traffic volume and record all the votes. During the Amici finale, Mediaset supported more than four million viewers on live television and an additional one million using digital players on mobile devices or the company's website. Using its solution built on Amazon MemoryDB for Redis, Mediaset received more than five million votes for the season 21 finale of Amici, more than five times the number of votes received in the previous finale using the company's on-premises solution. Using Amazon MemoryDB for Redis, Mediaset also achieved data durability by storing votes to comply with government requirements. "Amazon MemoryDB for Redis has the features of both an in-memory cache and a database, so it's really good for a lot of our business needs," says Reni. "We serve a front-end application, so being fast is essential for our systems."

For viewers, the migration made the experience better by improving response times and eliminating errors. During the season 21 Amici finale, response times were around one-tenth of a second. This was much faster than the previous voting system, where traffic sometimes exceeded limits and prevented viewers from submitting votes in time. "Using Amazon MemoryDB for Redis, viewers had a very good experience, could express their votes quickly, and didn't encounter any errors," says Daniele Curci, software engineer and solution architect at Mediaset. "It was very good for us."

After the success of its voting solution for the Amici finale, Mediaset expanded its use to all the shows with a voting system in the fall of 2022. These shows can run concurrent voting sessions, but Mediaset can handle the traffic using Amazon MemoryDB for Redis and AWS Fargate—a serverless, pay-as-you-go compute engine for containers—to effectively scale up during prime time and scale back down afterward. Using the automatic scaling feature of AWS Fargate, Mediaset can determine the number of container instances needed and then flexibly scale in seconds instead of minutes if there is increased traffic. "Using Amazon MemoryDB for Redis, we could adapt the service to serve multiple shows with almost no effort," says Reni.

Outcome | Expanding and Enhancing Mediaset's Voting Solution Using Amazon MemoryDB for Redis
Mediaset's solution built using AWS is more flexible and lower maintenance than its former solution, which saves time for the company. With an on-premises structure, Mediaset needed to involve multiple teams over 4–6 months for projects that required moving infrastructure. Using AWS, Mediaset can perform load tests at a low cost without investing in additional hardware, and its team no longer needs to worry about infrastructure. The company can add new features to the Mediaset Infinity streaming service in days or weeks instead of months using managed services. "Using services like Amazon MemoryDB for Redis, we can rapidly build prototypes and test architecture in a few days, which we couldn't have done without managed services from AWS," says Curci. "We can focus on the logic of our application without spending time on the physical infrastructure."

Along with being able to support variable traffic needs, Mediaset saves on costs because of the scalability of its solution. "For the Amici finale, we scaled up before the start of the show and scaled back after the show," says Reni. "The costs for that night were very low, which would not have been possible with an on-premises architecture."

Mediaset plans to extend its voting solution using AWS services to cover additional voting channels and expand analytics capabilities. The company also plans to use additional features of Amazon MemoryDB for Redis, such as using the service as persistent storage for its content management system needs. "The biggest benefit for us is the scalability," says Reni. "Being able to scale almost instantly to whatever size we need using Amazon MemoryDB for Redis is important because we are never certain about how many viewers we will need to support."

AWS Services Used
- Amazon MemoryDB for Redis: a Redis-compatible, durable, in-memory database service for ultra-fast performance.
- AWS Fargate: a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.
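Because Amazon MemoryDB for Redis is Redis-compatible, the per-viewer vote limit described above can be enforced with standard Redis commands. The sketch below is a hypothetical illustration, not Mediaset's production code: the cluster endpoint, key layout, and time-to-live values are assumptions.

    import redis

    # Illustrative only. The endpoint, key names, and TTL are assumptions.
    # For a multi-shard cluster, a cluster-aware client such as
    # redis.cluster.RedisCluster would be used instead.
    r = redis.Redis(
        host="clustercfg.voting.example.memorydb.eu-south-1.amazonaws.com",  # hypothetical endpoint
        port=6379,
        ssl=True,  # MemoryDB clusters use TLS
    )

    MAX_VOTES_PER_VIEWER = 5

    def cast_vote(show_id: str, viewer_id: str, contestant_id: str) -> bool:
        counter_key = f"votes:{show_id}:{viewer_id}"
        used = r.incr(counter_key)                 # atomic per-viewer counter
        if used == 1:
            r.expire(counter_key, 4 * 60 * 60)     # counter lives for the show's duration
        if used > MAX_VOTES_PER_VIEWER:
            return False                           # reject: vote limit reached
        r.hincrby(f"tally:{show_id}", contestant_id, 1)  # per-contestant tally
        return True

Keeping both the counter and the tally in MemoryDB is what lets a single service provide the sub-second responses and the durable record of votes described in the case study, rather than pairing a cache with a separate database.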
Indecomm Case Study _ Amazon Web Services.txt
Indecomm Automates Complex Mortgage Document Processing with Amazon Textract

Indecomm is a SaaS provider whose GeniusWorks product suite automates back-office mortgage operations. The company set out to develop a machine learning–powered data extraction solution, which it named Intelligent Document Extraction (IDX). Indecomm developed IDX to reduce the cost and time spent reviewing mortgage origination documents, resulting in quicker loan turnaround times and higher customer satisfaction.

- 100% data classification accuracy, with 97% data extraction accuracy
- 5–7 minutes data classification and extraction time
- 50–60% less manual document intervention required
- 2 cents average total cost per page processed
- Automates scaling with parallel processing and serverless architecture

"We found that Amazon Textract offered us the highest flexibility of use and lowest cost to develop our IDX module."
Dr. Harish B. Kamath, SVP of Engineering, Indecomm

About Indecomm
Indecomm is a software service provider that utilizes automation and technology to accelerate timelines, reduce costs, and simplify complex processes for mortgage lenders, servicers, insurers, and secondary market participants. The company processes about 1 million loans and 800,000 audits annually.

Opportunity | Driving Further Automation in Document Processing and Analysis
With decades of mortgage industry experience and millions of data points stored, Indecomm has a deep understanding of the loan lifecycle. The company's mortgage automation products help lenders, insurers, and financial agents streamline back-office operations so they can spend more time improving the borrower experience. Its Genius product suite addresses many inefficiencies in underwriting and other preliminary stages of loan processing. In 2019, Indecomm sought to drive further automation in document processing and analysis by developing an improved data extraction solution using machine learning (ML).

The most time-consuming and critical tasks associated with mortgage loan origination—commonly referred to as the loan application process—are reading, analyzing, and comparing information across a large repository of documents. Lenders spend an inordinate amount of time manually reviewing documentation, which reduces productivity and lengthens the time required to obtain a loan.

Solution | Developing an ML Solution to Reduce Manual Processing
Over a period of three years, Indecomm evaluated in-house and third-party alternatives to support the development of its Intelligent Document Extraction (IDX) ML tool. The tool goes beyond simple optical character recognition (OCR) to identify and classify documents; extract, validate, and certify data; and enrich data as needed. IDX serves as the underlying document extraction technology powering three Indecomm products: IncomeGenius, DecisionGenius, and AuditGenius.

Scalability, document turnaround time, cost, and configurability were leading considerations in the solution evaluation. The business eventually decided on Amazon Textract on Amazon Web Services (AWS) to build IDX, choosing the service for its scalability and integration with serverless tools such as AWS Lambda. Previously, Indecomm required many virtual machines to meet data processing requirements, often exceeding budget thresholds when large jobs came in. Amazon Textract's application programming interfaces (APIs) allow parallel processing, which facilitates rapid document analysis at scale without additional delays or overhead.

Indecomm's SVP of Engineering, Dr. Harish B. Kamath, says, "We found that Amazon Textract offered us the highest flexibility of use and lowest cost to develop our IDX module. We utilized all the capabilities of Amazon Textract, in combination with AWS Lambda machine-learning components, to map out hundreds of mortgage industry-related documents and extract over 4,000 data fields."

To ensure high levels of accuracy and efficiency in IDX, Indecomm leveraged Amazon Textract to automate complex document review and extract data from images and text for analysis. Clients using IDX can halve the time required for underwriting and mortgage origination, ensuring data accuracy with a predictable, affordable costing model. Prior to IDX, extracting data from a 100-page document took 30 minutes. With Amazon Textract, the new solution efficiently converts images to text at the field level and enriches data within 5–7 minutes. This has been especially helpful for mortgage lenders dealing with self-employed borrowers, who often present non-standard income documentation.

Indecomm and its clients have experienced a significant reduction in turnaround time and loan origination costs; jobs are now scheduled every 2 minutes, compared to the previous wait time of over 20 hours for sequential processing of 800 pages. Document classification costs have also decreased from $5 to $3 for the same 800 pages. Overall costs, including data enrichment, security, storage, and reporting, have decreased to just 2 cents per page on AWS.

Outcome | Closing Loans Faster with Integrated Workflows
In the two years since implementation, Amazon Textract has automated the classification and extraction of more than 700 mortgage forms with approximately 9,200 unique fields. In DecisionGenius, IDX works to automate data verifications, reducing the number of manual loan file interactions. As a result, lenders have lowered the required number of file "touches" by 50–60 percent, doubling underwriter and processor productivity. A knock-on effect of less manual intervention is higher accuracy; Indecomm's IDX boasts a document classification accuracy rate of 100 percent and an average data extraction accuracy of 97 percent for its average loan package.

Underwriting and audit accuracy are vital in the mortgage loan origination process, as data oversights or errors can lead to a higher risk of default. With Amazon Textract, Indecomm's Genius products capture critical data and flag missing data that could be overlooked during manual document review, so Indecomm's clients benefit from reduced risk when using the Genius product suite. Clients have also improved the efficiency and accuracy of quality control after loan distribution with Indecomm's AuditGenius. Early in the loan lifecycle, data is stored within DecisionGenius and IncomeGenius; this data then serves as an easily referenceable repository that lenders can use to audit loan analyses and decisions using AuditGenius. The ability to instantly access and compare outcomes with source documents improves transparency, confidence, and auditing turnaround times.

Furthermore, with IDX, Indecomm's clients can rapidly scale their operations to meet sudden increases in demand—without hiring new employees or investing in extra hardware. They can also analyze data stored over time to predict business processing costs more accurately for mortgages. Unlike traditional data extraction solutions, which require continuous manual monitoring and corrective actions, Amazon Textract and IDX continuously learn and adapt to user-defined changes, so accuracy is not merely maintained but improved over time. Indecomm used to experience delays of up to 5–6 hours in processing long document queues due to corrupt files, leading to increased costs and management overhead from constant monitoring. However, the integration of AWS Lambda and built-in monitoring through IDX allows for on-demand monitoring, effectively removing bottlenecks from the system.

Indecomm's ML-powered IDX helps lenders optimize business processes with automated data extraction and analysis, improving the performance, timelines, and cost structure of mortgage origination. Most importantly, lenders can focus more on front-line customer satisfaction. Integrated, automated workflows simplify decisions so Indecomm's clients can close loans faster. Indecomm plans to apply its learnings from developing IDX to optimize other back- and middle-office operations in mortgage origination, servicing, and the capital markets. The company looks forward to using IDX to address new operational challenges within banking and financial services, recognizing that many of the same data extraction challenges are found across other industry verticals.

AWS Services Used
- Amazon Textract: a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or use simple OCR software that requires manual configuration (which often must be updated when the form changes).
- AWS Lambda: a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and you pay only for what you use.

To learn more, visit aws.amazon.com/solutions/ai-ml.
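For readers unfamiliar with the Amazon Textract APIs mentioned above, the snippet below shows a minimal synchronous forms-and-tables analysis call with boto3. The bucket, document, and post-processing are hypothetical and far simpler than Indecomm's IDX pipeline.

    import boto3

    # Minimal sketch (not Indecomm's IDX code): run Textract's forms-and-tables
    # analysis on a document stored in S3 and count the extracted form blocks.
    # Bucket and object names are hypothetical.
    textract = boto3.client("textract")

    response = textract.analyze_document(
        Document={"S3Object": {"Bucket": "loan-docs", "Name": "sample-w2.png"}},
        FeatureTypes=["FORMS", "TABLES"],
    )

    # KEY_VALUE_SET blocks carry the form fields; a real pipeline resolves the
    # key-to-value relationships and confidence scores from these blocks.
    key_value_blocks = [b for b in response["Blocks"] if b["BlockType"] == "KEY_VALUE_SET"]
    print(f"Extracted {len(key_value_blocks)} key/value blocks")

Parallelism of the kind the case study describes typically comes from fanning such calls out across many documents at once, for example from AWS Lambda functions, rather than from the API call itself.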
Indivumed Case Study.txt
Indivumed Boosts Cancer Research with Powerful Analytics Built on AWS

Hamburg-based Indivumed specializes in using the highest quality biospecimen and comprehensive clinical data to advance research and development in precision oncology. Its IndivuType discovery solution uses AWS to store data and support analysis to decipher the complexity of cancer. By improving its AWS infrastructure, Indivumed has saved more than 50 percent on total IT costs and ramped up the number of samples it can process from 20 to 500 a week, a 2,400 percent increase.

About Indivumed
Hamburg-based Indivumed specializes in using the highest quality biospecimen and comprehensive clinical data to advance research and development in precision oncology. Established 20 years ago, the company is headquartered in Hamburg, Germany.

Launching a Multi-Omics Database on AWS
For two decades, Indivumed has specialized in biobanking, providing infrastructure, expertise, and technology for cancer research and development. Most of its customers and partners are academic research institutes and pharmaceutical companies that use the insights Indivumed generates to discover and validate novel drugs and ultimately develop new treatments for life-threatening cancers.

With the life sciences field and pharmaceutical industry becoming more data-driven, Indivumed saw an opportunity to generate these insights through analyzing multi-omics data. Indivumed decided to use the thousands of tissue samples it stores to create a unique repository for deep molecular information on cancers. But the datasets are complex and extensive. To manage this complexity, the company turned to Amazon Web Services (AWS) and used cloud-based high performance computing (HPC) to build the world's first and most extensive proprietary multi-omics database. It chose AWS to help make its vision a reality. "AWS was the best choice to help us scale, and it provides a range of secure, reliable, and serverless technologies for us to build on," says Dr. Jonathan Woodsmith, vice president of advanced analytics and AI at Indivumed.

Indivumed knew its compute requirements would be significant, so it decided to build an HPC cluster that could not only handle huge datasets but also scale resources up and down automatically based on the amount of processing required. Initially, Indivumed built the HPC cluster using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, and Amazon Elastic File System (Amazon EFS), which automatically grows and shrinks as files are added and removed. The result was IndivuType, a multi-omics database that combines diverse molecular biological information with clinical information from thousands of patients across Europe, the US, and Asia. The datasets for each cancer sample—including raw readouts from the molecular assay, which detects markers of disease—can reach 200 GB in size.

Unlocking Life-Saving Opportunities with AI and ML
With IndivuType up and running, Indivumed wanted to generate novel insights about cancer biology that its customers and partners could use to develop new treatments. To create those insights, Indivumed applied machine learning (ML) to multi-omics data analysis. Alongside this, it used JADBio, an automated ML system customized for life science applications that include large multi-omics clinical datasets and medical images. JADBio is a software-as-a-service platform that runs on AWS, making integration with IndivuType straightforward through APIs. The JADBio technology supports Indivumed's nRavel® artificial intelligence (AI) platform by recognizing and learning patterns of information found in tumor data. nRavel® includes bespoke tools that Indivumed has built and validated using data from disease models curated from comprehensive biological databases. Together with advanced analytical algorithms and ML, it helps Indivumed better understand the biology, treatments, and outcomes of cancer.

Modernizing the Cluster Increases Processing Capacity by 2,400%
As the company grew, Indivumed needed to ramp up the amount of data it could handle so that it could increase the number of samples it could process each year. To achieve this, Indivumed needed to refactor the cluster. "We spent a significant amount of time building a cloud-native tech platform," says Woodsmith. Indivumed and AWS kicked off the Multi-Omics for Cancer and Clinical Analytics (MOCCA) project to modernize the cluster. It's based on Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes. Indivumed also used Intel-based, compute-optimized Amazon EC2 Spot Instances to deliver high performance workloads at low cost.

To further optimize costs, the new cluster replaced several Amazon EFS workloads with object storage provided by Amazon Simple Storage Service (Amazon S3), which is built to retrieve any amount of data from anywhere. With the MOCCA cluster, Indivumed has saved more than 50 percent on total IT costs and reduced the cost per sample by around 41 percent compared to its previous AWS setup. It has also increased the number of samples it can process in parallel: IndivuType can now process 500 samples per week, up from 20, by using Amazon EKS to scale up to 1,000 instances. This is a 2,400 percent increase in processing capacity compared to its previous system. Indivumed has made further enhancements to store data that's no longer needed using Amazon S3 Glacier, which provides long-term, secure, durable storage classes for data archiving. "To be able to plow ahead with the business as it grows, and to know we have the pipeline to keep up with that growth, is essential," says Woodsmith.

These new capabilities have helped Indivumed establish new connections and partnerships. The company now offers advanced tissue sample analysis with IndivuType and nRavel® to several large pharmaceutical organizations and a number of small to medium-sized biotech companies. The advances made by the organizations using the Indivumed technology could be life-changing for cancer patients. "We have the most highly automated multi-omics processing facility out there," says Rene Steen, vice president for IT at Indivumed. "It's driving the creation of new treatments that will ultimately save and extend people's lives. That's something to be proud of."

Benefits of AWS
- Developed a multi-omics database to store thousands of tissue samples for medical research
- Increased data processing capacity for samples by 2,400 percent
- Reduced total IT costs by 50 percent
- Generated insights used to create new therapeutics for cancer treatments

AWS Services Used
- Amazon EC2: a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers.
- Amazon EFS: automatically grows and shrinks as you add and remove files, with no need for management or provisioning.
- Amazon S3: an object storage service offering industry-leading scalability, data availability, security, and performance.
- Amazon EKS: a managed container service to run and scale Kubernetes applications in the cloud or on premises.
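Archiving cold data to Amazon S3 Glacier storage classes, as described above, is typically configured with a bucket lifecycle rule. The boto3 sketch below illustrates the general mechanism only; the bucket name, prefix, and 90-day threshold are assumptions rather than Indivumed's actual archiving policy.

    import boto3

    # Illustrative lifecycle rule: objects under a "raw-readouts/" prefix move
    # to a Glacier storage class after 90 days. Names and timings are
    # hypothetical, not Indivumed's configuration.
    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="multiomics-samples",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-raw-readouts",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "raw-readouts/"},
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )

Once such a rule is in place, archiving happens automatically as data ages, which is how large raw assay readouts can be retained for reprocessing without keeping them on more expensive storage tiers.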
Infor Case Study.txt
Infor Bulks up AWS Expertise, Trains Over 2,400 Employees to Meet Customer Needs

Infor, a global leader in business cloud software, strives to serve customers by developing industry-specific functionality in each of its solutions. The company deploys solutions using Amazon Web Services (AWS) to serve 14,000 cloud customers. To effectively compete and satisfy the changing needs of customers, Infor needed robust cloud skills to deliver high-quality solutions and support quickly. By working with AWS Training and Certification, which helps customers build and validate skills to get more out of the cloud, Infor continues to deliver, with 2,400 employees currently training and more scheduled to be trained by the end of 2022. With this training, Infor can better meet customer needs by enhancing the performance and efficiency of its solutions and helping customers adopt new technology more quickly.

Benefits of AWS
- Facilitated training for 2,405 employees
- Facilitated AWS Certification for 400 employees
- Decreased the number of support tickets, resulting in improved customer service
- Improved efficiency and cost optimization
- Accelerated adoption of new AWS services
- Increased employee satisfaction

"We see the value that working with AWS Training and Certification provides across our personnel environment. The output is better products, better performance, and better customer experience."
Dan Carlin, Vice President of Cloud Financial Operations, Infor

About Infor
Infor provides cloud-based enterprise resource planning solutions to customers around the globe. The company has over 17,000 employees in 117 offices worldwide and over 65,000 customers.

Investigating Opportunities and Strategy for Employee Training
Infor has used AWS as its primary cloud services provider since 2011 and went all in on AWS in 2014, making it critical for employees to have strong AWS expertise. However, Infor lacked a formal training strategy and was unaware of the learning gaps present within its organization. Following the initial assessment and the AWS Learning Needs Analysis, Infor began working with AWS Training and Certification to offer virtual AWS Classroom Training to its employees. "One of the principles at Infor is to provide enrichment for our employees," says Dan Carlin, vice president of cloud financial operations at Infor. "We wanted to give employees exposure to this material and training. As a result of training, we also expected to see more cost efficiency and optimization in how we consume AWS services as well as increased speed to functionality, which benefits our customers."

The courses provided by AWS Training and Certification helped equip Infor to develop resilient, secure, and scalable solutions in the cloud and increase the velocity of development on AWS. These courses included Architecting on AWS, where participants learn to build IT solutions on AWS following architecting best practices, and Developing on AWS, a course on how to develop secure and scalable cloud applications. Another top course was DevOps Engineering on AWS, which covers how to use DevOps philosophies, practices, and tools to develop, deliver, and maintain applications and services at high velocity on AWS. Between October 2020 and May 2022, 2,405 Infor employees participated in 90 courses. AWS Training and Certification helped Infor select courses and met Infor's needs for broad accessibility and relevant content, including offering courses across three time zones. "AWS responded to our need to be creative in how we set classes up," says Carlin. "The fact that we had the flexibility to offer courses across multiple time zones for employees around the world was very important for us as a global organization."

Employees at Infor responded enthusiastically to the courses offered by AWS Training and Certification, with hundreds of people on the waiting list after the initial slots filled up. "The courses led by instructors from AWS Training and Certification were much more interactive than our alternative, self-instruction offerings," says Carlin. "The instructors addressed the issues that our technical personnel encounter much more specifically in real time in the sessions. We saw more immediate benefits, and the responses from our employees who took the classes were positive."

Increasing Speed to Market and Solution Functionality After Training
Since upskilling its employees through AWS Training and Certification, Infor has increased its efficiency in developing customer solutions. Teams can make better use of AWS services in the customer solutions that they build. For example, through the course Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS), where participants develop practical, in-depth skills for managing containers using Amazon EKS, Infor teams learned how to use the service to improve, simplify, and speed up development. "Offering that course is going to save us money," says Carlin.

Teams that have gone through training adopt new AWS services and technology faster, especially when personnel can ask questions about applying new AWS services to specific products during a training class. "AWS upgrades its technology quite rapidly," says Carlin, "and the training equips us to quickly adopt new services and technological transformations on AWS. Quick adoption means cost efficiency and performance improvements."

Infor employees can also better respond to challenges and minimize potential problems during the software testing phase after going through training. Several product groups whose employees went through the training have reduced the volume of service tickets that they submit to AWS because they now have a better understanding of the underlying AWS services at work in the solution. For one AWS service, Infor submitted 18 service tickets to AWS in 2 months before training, and only 1 service ticket in 2 months after training on this service.

Since Infor began working with AWS Training and Certification, over 400 Infor employees have received AWS Certification, validating technical skills and cloud expertise. The company is working to get over 1,000 employees certified as well. "We have continuous enrollment," says Carlin. "In the latest round of deliveries, we have 400 people on the waiting list. That enthusiasm lets us continue offering these courses, which benefits employees personally and professionally."

Building Training Pathways for Nontechnical Personnel
As Infor continues expanding the courses that employees can take from AWS Training and Certification, the company plans to keep refining its development on AWS by conducting additional AWS Learning Needs Analysis, a self-assessment tool to identify an organization's cloud skills gaps and build a data-driven plan. Using the AWS Learning Needs Analysis will help Infor continue enhancing training opportunities and meet its training needs. The company also plans to expand training opportunities to nontechnical roles, such as sales employees and solutions consultants. "If sales personnel can answer customers' technology-related questions, it will address customer concerns and accelerate the sales process without taking time away from technical personnel," says Carlin. "We see the value that working with AWS Training and Certification provides across our personnel environment. The output is better products, better performance, and better customer experience."

AWS Services Used
- AWS Training and Certification: offers both digital and classroom training that allows you to learn online at your own pace and learn best practices from an expert instructor, whether you are just starting out, building on existing IT skills, or sharpening your cloud knowledge.
- Architecting on AWS: through a series of use case scenarios and practical learning, you'll learn to identify services and features to build resilient, secure, and highly available IT solutions in the AWS Cloud. Expert AWS instructors emphasize best practices using the AWS Well-Architected Framework and guide you through designing optimal IT solutions based on real-life scenarios.
- DevOps Engineering on AWS: teaches you how to use the combination of DevOps cultural philosophies, practices, and tools to increase your organization's ability to develop, deliver, and maintain applications and services at high velocity on AWS.
- Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS): Amazon EKS makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. In this three-day course, you'll learn container management and orchestration for Kubernetes using Amazon EKS.
Information Technology Institute Launches Postgraduate Artificial Intelligence Diploma Using AWS _ Case Study _ AWS.txt
Information Technology Institute Launches Postgraduate Artificial Intelligence Diploma Using AWS

The Information Technology Institute provides the AI-Pro postgraduate diploma program, which offers coursework through AWS Academy and gives students the opportunity to gain AWS Certification.

The Information Technology Institute (ITI) in Egypt used Amazon Web Services (AWS) to launch a new postgraduate degree, AI-Pro. With the rise of artificial intelligence (AI) and machine learning (ML) in Egypt's digital development plan, ITI wanted to create a diploma program that provided students with relevant skills and certifications. The AI-Pro diploma program was developed working with AWS Training and Certification Education Programs, which support learners in building and validating skills to get more out of the cloud and prepare diverse learners for in-demand, entry-level cloud roles around the world. ITI delivered these programs to 1,000 students across 9 months.

- Hands-on learning using AWS services
- Real AWS infrastructure to deploy students' graduation projects
- Increases employability by preparing students to earn AWS Certifications
- Equips students with in-demand skills for careers in the cloud

"AWS Academy supports educators and students alike to apply new cloud knowledge immediately in an actual functioning cloud environment. The learning continuity is such a huge benefit."
George Hany Fekry Iskander, Head of the Mechatronics and Industrial Automation Department, ITI

About Information Technology Institute
ITI, an educational institution founded in Egypt in 1993, provides IT-related education for tertiary-level students at 11 campuses across Egypt. The institution also offers professional training programs for various branches of the Egyptian government.

Opportunity | Adapting Education to Meet Future Needs
ITI offers professional training programs for various branches of the Egyptian government, which in 2021 announced a national strategy to drive economic growth using AI and ML technologies. Egypt expects an increasing reliance on AI applications and solutions in government sectors over the next 3 years, and the government has since spent the equivalent of $25 million on partnerships with international universities and companies to help create training programs and employment opportunities in the AI and ML fields.

ITI wanted to improve career opportunities for its students by developing their skills and preparing them for in-demand jobs. Focusing new diploma programs on AI helps fulfill today's educational needs and tomorrow's technological forecast in Egypt. To achieve these goals, ITI used AWS Education Programs to develop the AI-Pro diploma program and provide students with opportunities to gain AWS Certification, which validates technical skills and cloud expertise to grow careers and businesses.

Solution | Developing a Hands-On Curriculum
To train its instructors to deliver the new AI-Pro diploma program to students, ITI worked alongside the French Graduate School of Computer Science and Advanced Technologies (EPITA) to provide an online program in AI. Combining theory and practice in the computer vision and AI in neurolinguistics programming areas, the program is conducted remotely by AI experts from EPITA to certify qualified instructors to deliver a specialized AI program.

AI-Pro integrated content and resources from AWS Education Programs, and the first 400 students began the AI-Pro diploma program in April 2021. AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud, is the foundation that ITI uses to provide education in AI/ML. ITI used AWS Academy to provide course materials related to the AI/ML fields of study, as well as cloud foundations. Students also received access to AWS Academy Learner Labs, long-running hands-on lab environments where educators can bring their own assignments and invite their students to get experience using select AWS services. "AWS Academy supports educators and students alike to apply new cloud knowledge immediately in an actual functioning cloud environment. The learning continuity is such a huge benefit," says George Hany Fekry Iskander, head of the mechatronics and industrial automation department at ITI.

Outcome | Bridging the Gap Between Academia and Industry
ITI provides education across its 11 campuses with its use of AWS Education Programs, equipping more students with in-demand skills for careers in the cloud. Because ITI also provides students with the opportunity to gain AWS Certification during their education, students can validate their technical skills and cloud expertise to potential employers, helping them join the cloud workforce. "What makes AWS Certification especially valuable is that certified students can validate their skills and build confidence and credibility, which bolsters their employability," says Dr. Heba Saleh Omar, chairwoman of ITI. Since the AI-Pro diploma program's launch, nearly 300 students have received vouchers for AWS Certification examinations.

ITI is also working alongside AWS to create a comprehensive lab environment that encourages deeper, more immersive engagement with current and upcoming AWS services. Involving students in the development and implementation stages gives them valuable experience for working in the cloud industry. "After joining the workforce, I discovered just how much the curriculum mirrored the tools and services used in the real world. I especially appreciated the hands-on lessons, which familiarized me with the latest cloud innovations the industry had to offer. ITI's AI-Pro diploma is an ML career with cloud fundamentals," says Omar Wahid, a graduate of ITI's AI-Pro postgraduate diploma program.

ITI is looking to add more educational fields and degree tracks to its use of AWS Academy beyond AI and ML. It is specifically interested in adding cybersecurity and natural language processing diploma programs supported by AWS Academy, and it intends to increase its number of educators holding AWS Certification from 5 to 15 to accommodate the expansion into different areas of expertise in the cloud.

AWS Services Used
- AWS Academy: empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.
- AWS Training and Certification: learn from AWS experts, advance your skills and knowledge, and build your future in the AWS Cloud.
- AWS Certification: validate technical skills and cloud expertise to grow your career and business.
InMotion Inovasi Teknologi Boosts Local-Language Engagement with Millions of Indonesians on AWS

InMotion Inovasi Teknologi is an Indonesian-based technology company that builds software solutions for customer engagement across digital channels. As part of its continued development, the company migrated around 1,000 scripts from Amazon EC2 to Amazon CloudFront, leveraging Amazon S3 to distribute static content. As a result of the migration, the company reduced server costs by 10 percent and improved application performance by 30 percent, and it now seamlessly supports more Indonesian enterprises, distributing millions of messages in Bahasa Indonesia.

About InMotion Inovasi Teknologi
Based in Jakarta, Indonesia, InMotion Inovasi Teknologi develops software solutions to help companies improve customer engagement. The business has more than 50 employees and focuses on industries such as finance, education, and the public sector.

Opportunity | Delivering Omni-Channel Communications to Millions of Indonesians
More than two hundred businesses in Indonesia use 3Dolphins applications by InMotion Inovasi Teknologi (InMotion) to engage with customers across digital channels. InMotion leverages the power of artificial intelligence to offer solutions such as its 3Dolphins Social Relationship Management (SRM) application to enhance customer engagement and the 3Dolphins Service SRM application for omni-channel customer service. Businesses such as banks, financing companies, educational institutions, and automotive companies also use the 3Dolphins Sales SRM application to convert conversations into sales opportunities and the 3Dolphins Chatbot SRM service to answer frequently asked questions.

When InMotion developed 3Dolphins in 2015, it chose to run the applications and chatbot service on Amazon Web Services (AWS), using Amazon Elastic Compute Cloud (Amazon EC2) instances to provide the optimal amount of compute performance for varying workloads. As part of its continual improvement process, InMotion looked to optimize its Amazon EC2 architecture in 2021. Chief executive officer Sonny Hastomo, who co-developed the 3Dolphins solutions, says, "We wanted to offload script files from virtual server instances to lower our cloud costs as well as boost the performance of our applications."

In addition, the business sought to increase application scalability. Hastomo explains, "Our competitors' solutions often lack the flexibility to easily scale to support millions of engagements in Bahasa Indonesia. We wanted to fill that gap, offering enterprises omni-channel communications for mass audiences, and opening new business opportunities for ourselves."

Solution | Reducing Costs and Improving Performance with Amazon CloudFront
InMotion's founders engaged with the company's AWS account team, who proposed offloading the scripts that facilitated client-server communication into Amazon CloudFront. "Our AWS team was proactive as always, listening to our objectives and providing the right solutions," Hastomo says. The company moved approximately 1,000 scripts for its 3Dolphins applications and chatbot service from Amazon EC2 instances to Amazon CloudFront, going live in two weeks.

InMotion also integrated Amazon CloudFront with Amazon Simple Storage Service (Amazon S3), where it stores static assets such as imagery for web dashboards and chatbot interfaces. Amazon CloudFront distributes the static content, caching the images at edge locations to reduce load time and protecting resources by checking content requests against access control lists.

By migrating the scripts to Amazon CloudFront, InMotion decreased its Amazon EC2 costs by 10 percent. Amazon CloudFront's caching service also reduces the number of requests served by the Amazon EC2 instances and lowers latency, improving application performance. "By leveraging Amazon CloudFront, our Service SRM web dashboard responds 30 percent faster and our chatbot answers queries in less than three seconds," says Hastomo. The adoption of Amazon CloudFront has also increased application and service availability, which now stands at 99.95 percent. "Previously, we experienced occasional downtime, or some functionality wouldn't work," Hastomo comments. "Amazon CloudFront ensures scripts are continuously processed without any issues, improving reliability."

Outcome | Distributing Millions of Messages in Bahasa Indonesia
Thanks to the speed and scalability of Amazon CloudFront, InMotion is helping large enterprises communicate with millions of customers in Bahasa Indonesia in a timelier manner. Today, around 50 of InMotion's customers use the 3Dolphins SRM suite to distribute over 15 million messages a day in Bahasa Indonesia as part of their customer engagement programs.

Delivering messages on this scale in Bahasa Indonesia has given InMotion a significant advantage over competitors who also offer localized engagement tools. Hastomo estimates that 3Dolphins can handle workloads 1.5 times larger than those of its competitors. "This has helped us to secure business with Indonesian enterprises," he explains. "We're able to support a business that has 60 million customers and receives around 3 million website visitors each month."

Following its success with Amazon CloudFront, InMotion plans to continue developing its AWS architecture. It aims to containerize its applications to use resources more efficiently and further reduce costs through Amazon Elastic Kubernetes Service (Amazon EKS). "We continue working with AWS because it helps us deliver better software services to our customers in ways that are more cost effective to our business," Hastomo concludes.
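To make the pattern above concrete, the following is a minimal, illustrative boto3 sketch of placing an Amazon CloudFront distribution in front of an S3 bucket of static assets, similar in spirit to the offload InMotion describes. It is not InMotion's actual configuration: the bucket name, origin ID, and cache settings are hypothetical placeholders, and a production setup would typically lock the bucket down with origin access control and attach a managed cache policy.

import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string per request
        "Comment": "Static assets for customer engagement dashboards (illustrative)",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "static-assets-s3",                                # hypothetical origin ID
                    "DomainName": "example-static-assets.s3.amazonaws.com",  # hypothetical bucket
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "static-assets-s3",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Simple legacy cache settings for the sketch; a cache policy could be attached instead.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)

# Point application clients at the CloudFront domain so edge caches serve the static content.
print(response["Distribution"]["DomainName"])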
Insightful.Mobi Decreases Costs and Enhances Dashboard Performance Using Amazon QuickSight

Learn how Insightful.Mobi decreased costs, enhanced performance, and increased revenue using Amazon QuickSight.

About Insightful.Mobi
Insightful.Mobi offers integrated cloud and mobile tools for customer relationship management in consumer goods. A software-as-a-service business, it facilitates smarter field sales, promotions, and merchandising through near-real-time insights and business intelligence.

Opportunity | Using AWS to Navigate Consumer Goods Data for Insightful.Mobi
Auckland-based startup Insightful.Mobi delivers the next generation of field-based sales and merchandising tools for consumer goods companies that sell or provide services to retailers such as supermarkets. A software-as-a-service firm, Insightful.Mobi creates and embeds high-performance, interactive dashboards to empower its customers to efficiently manage their sales and merchandising workforce. Insightful.Mobi's seamless integration of data into its customer web portal and application enhances productivity, delivering valuable insights while maintaining ease of use.

With promotional costs making up nearly one-quarter of the overall price of products, it's critical for businesses to make sure they have the right items in the right stores and on the right shelves. Addressing this need can be challenging in the retail and consumer goods marketplace, which is fast paced and constantly changing. The chain between manufacturers and buyers includes many links, from suppliers, distributors, and franchisees to marketers, promoters, and floor salespeople. Collectively, their interactions produce a volume, variety, and complexity of data that is difficult to navigate effectively. A firm might offer hundreds of unique products, each of which must be tracked by store, price, display space, and other variables related to promotion and distribution. By analyzing this data, businesses can better understand their customers and make informed decisions on their placement and promotions.

Given the complexity of today's marketplace and the vast amounts of data coming from different sources, firms need all the insights they can get about consumers' buying decisions. Such data helps guide marketing and sales strategies, so Insightful.Mobi turned to Amazon Web Services (AWS) to help its customers gain increased visibility into their data. It chose Amazon QuickSight, which powers data-driven organizations with unified business intelligence at hyperscale so that all users can meet varying analytic needs from the same source. Using QuickSight, Insightful.Mobi quickly and cost-effectively provides embedded interactive dashboards and analytics to its clients so that they can derive insights, increase productivity, and realize efficiencies right away.

Solution | Migrating to Agile Cloud Dashboards Using QuickSight
To deliver insights to consumer goods firms, Insightful.Mobi used to rely on traditional, server-based reporting tools to manage data, but such methods are too slow, time-consuming, and expensive for today's complex supply chains. Insightful.Mobi needed to offer agility beyond the typical customer relationship management functionality in its field sales products so that its customers could analyze their data quickly and cost-effectively. For its data warehouse infrastructure, the firm already relied on Amazon Redshift, which uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes. "We already knew the AWS way of doing things, so we built on our experience," says Paul Miller, chief executive officer (CEO) and cofounder of Insightful.Mobi. So in 2021, Insightful.Mobi decided to migrate its visualization and insights layer to QuickSight.

The transition was straightforward and simple. "It was very well structured," says Miller. "AWS provided support as we put together a proof of concept to help our tech people understand how to implement the solution into the technology stack and systems." To get the most out of QuickSight, the company used online videos and training workshops with product specialists who answered specific questions the team had. QuickSight offers a dashboard and reporting layer that has native, highly secure connectivity to Amazon Redshift.

Insightful.Mobi also used SPICE (Superfast, Parallel, In-memory Calculation Engine), the robust in-memory engine designed to work with Amazon QuickSight, to rapidly perform advanced calculations and serve data. With these tools, Insightful.Mobi can now help customers by creating and publishing dashboards with insights powered by machine learning. Insightful.Mobi's customers can quickly access these dashboards from any device to look for patterns and outliers, leading to a better understanding and use of their data. "Amazon QuickSight is serverless, scalable, and superfast, so our customers can slice and dice their data in lots of different ways according to what suits their needs," Miller says.

Outcome | Making Every Effort Count
Among the most important benefits Insightful.Mobi has achieved using its AWS technical stack are enhanced productivity and cost-efficiency. "On AWS, we've reduced the complexity of our production process so that the customer can be front and center," Miller says. Moreover, analytics and reporting used to require the combined labor of both a business analyst and a core developer. "Previously, it would take us 2–3 weeks to make a new dashboard or set of reports for our customers," Miller says. "Using Amazon QuickSight, a business analyst working directly with a customer can create new dashboards and reports in less than 1 day."

The benefits Insightful.Mobi has gained are passed on to its customers. For example, one of New Zealand's biggest frozen-food brands simplified its sales team's jobs, reducing the time it spent on administration. Similarly, a major beverage company reported a 26 percent increase in its sales representatives' productivity. Using AWS technology, Insightful.Mobi can create precisely the products that its customers want. Happier customers mean higher revenues and, when paired with lower costs, an enhanced return on investment for Insightful.Mobi.

Insightful.Mobi is poised to keep growing. The grocery market in New Zealand is currently worth $14 billion, and Australia's market, where Insightful.Mobi plans to expand, is five times as large. With QuickSight, Insightful.Mobi quickly and reliably provides its corporate customers with all the essential sales insights that they need. "Using AWS tools," Miller says, "there is no limit to what we can do. We can pretty much do it all."
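As an illustration of how an embedded dashboard like this can be served inside a customer-facing portal, here is a small boto3 sketch that generates a short-lived QuickSight embed URL for a registered user. The account ID, dashboard ID, and user ARN are hypothetical placeholders rather than Insightful.Mobi's actual values, and the sketch assumes the dashboard has already been shared with that user.

import boto3

quicksight = boto3.client("quicksight", region_name="ap-southeast-2")

# Hypothetical identifiers for illustration only.
ACCOUNT_ID = "111122223333"
DASHBOARD_ID = "field-sales-insights"
USER_ARN = "arn:aws:quicksight:ap-southeast-2:111122223333:user/default/portal-reader"

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId=ACCOUNT_ID,
    SessionLifetimeInMinutes=60,
    UserArn=USER_ARN,
    ExperienceConfiguration={"Dashboard": {"InitialDashboardId": DASHBOARD_ID}},
)

# The returned URL is short-lived and is typically rendered in an iframe in the web portal.
embed_url = response["EmbedUrl"]
print(embed_url)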
Insilico Achieves 99% Cost Savings in Drug Candidate Discovery Using AWS

About Insilico Medicine
Insilico Medicine is a small biotechnology startup that has developed AI platforms for drug discovery. The company combines expertise in machine learning, bioinformatics, and chemistry to save cost and time at multiple stages of drug development.

The drug development process is simultaneously urgent and laborious. As of 2010, it took an average of 4.5 years and cost an average of $674 million to bring a single drug from target hypothesis to candidate validation, and those numbers have risen steadily in the past decade. Every step presents unique challenges that require specialized expertise, sometimes causing the process to be fragmented and inefficient. To help biopharma and biotechnology companies streamline and accelerate their drug discovery and development pipelines, Insilico Medicine developed a robust suite of machine learning (ML)-powered tools to aid in target identification, molecule design, and lead optimization.

Insilico's drug discovery engine, built on Amazon Web Services (AWS), sits at the center of the company's portfolio. The engine uses millions of data samples and multiple data types to discover disease biomarkers, identify the most promising targets, and design novel small molecule modulators that are specific to the target. Insilico layers advanced artificial intelligence (AI) and ML capabilities to perform these analyses and support all steps of the pharma research and development process. "Using our PandaOmics and Chemistry42 platforms built on AWS, we were able to bring a fibrosis drug candidate from target discovery to compound validation in under 18 months for just $2.6 million," says Petrina Kamya, Ph.D., Insilico's global business development director for Chemistry42.

AWS Makes ML Affordable and Globally Accessible at Every Step of the Drug Pipeline
Insilico develops ML-powered tools widely accessible to the pharma industry through its suite of software-as-a-service (SaaS) platforms, including PandaOmics for accelerated identification of promising drug targets and Chemistry42, which leverages experimental data, ML algorithms, and physics-based methods to design and optimize novel compounds. The company has validated these platforms with its own internal drug pipelines to demonstrate concrete cost and time savings.

Due to the volumes of experimental and methodical data processed by Insilico's platforms, they have extremely high graphics processing unit (GPU) requirements. The company turned to AWS to find the flexibility and scalability it needed, available on demand. Both PandaOmics and Chemistry42 run on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. "AWS gives us access to the computation power we need," says Qingsong Zhu, Ph.D., Insilico's chief operating officer. "As a startup, it's been key for us to have access to powerful servers without needing to maintain huge computing clusters on-premises ourselves."

Headquartered in Hong Kong, Insilico has over 150 collaborators worldwide. As a result, the platform architecture needed to be scalable and easily accessible. Insilico accomplished this by hosting the relevant data in the cloud using Amazon Simple Storage Service (Amazon S3), an object storage service. "Although we are a startup, we have a global team, and AWS allows us to coordinate our team globally without worrying about where we locate our servers," says Zhu.

"Using AWS has allowed us to easily scale up our business and facilitate cross-border collaboration," adds Kamya. This collaborative element proved particularly helpful for the company's COVID-19 project, which involved designing lead compounds aimed at treating SARS-CoV-2. Those compounds are now close to reaching studies that would enable Insilico to submit an Investigational New Drug (IND) application to the U.S. Food and Drug Administration (FDA).

Each Insilico platform accelerates a specific part of the drug development process, but when interconnected they can save additional time by eliminating bottlenecks. The platforms' ease of use democratizes access to sophisticated bioinformatics, making it simpler for different parties to use the same tools and coordinate analyses. "If you're a biologist, you shouldn't be afraid of doing bioinformatics. If you're a chemist, you shouldn't be afraid of using computational tools," says Kamya. "It was important to us at Insilico that we created a platform that is straightforward and easy to use, giving reliable outcomes regardless of scientific background. We want to democratize the use of AI for drug discovery and increase interoperability between different departments in the pharmaceutical industry."

Towards a Connected, Streamlined Pharmaceutical Industry
Going forward, Insilico Medicine plans to become more involved in the later stages of the drug research and development process. The company intends to incorporate even more AWS tools to enable growth and maximize the potential of AI to revolutionize the pharmaceutical industry.

Benefits of AWS
Eliminated bottlenecks from drug pipelines
Democratized access to computational tools
Fostered connection between different actors within the pharmaceutical industry
Reduced average drug discovery costs by over $650 million
Accelerated drug development process by 3 years compared to average
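As a simple, hypothetical illustration of the on-demand compute model described above (not Insilico's actual tooling), the following boto3 sketch requests a single GPU-backed Amazon EC2 instance for a compute-heavy job and tags it for cost tracking; the AMI ID and key pair are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any GPU-ready AMI, such as a deep learning AMI
    InstanceType="p3.2xlarge",        # GPU instance type chosen for illustration
    KeyName="research-key",           # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "project", "Value": "drug-discovery-workload"}],
        }
    ],
)

instance_id = response["Instances"][0]["InstanceId"]
# Terminate the instance when the job finishes so you only pay for the compute you use.
print(f"Launched {instance_id}")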
AWS Architecture Blog Intelligently Search Media Assets with Amazon Rekognition and Amazon ES by Sridhar Chevendra, Shitij Agarwal, and Gurinder Singh | on 14 JUL 2021 | in Amazon OpenSearch Service , Amazon Rekognition , Amazon Simple Storage Service (S3) , Architecture , AWS Lambda | Permalink |  Share Media assets have become increasingly important to industries like media and entertainment, manufacturing, education, social media applications, and retail. This is largely due to innovations in digital marketing, mobile, and ecommerce. Successfully locating a digital asset like a video, graphic, or image reduces costs related to reproducing or re-shooting. An efficient search engine is critical to quickly delivering something like the latest fashion trends. This in turn increases customer satisfaction, builds brand loyalty, and helps increase businesses’ online footprints, ultimately contributing towards revenue. This blog post shows you how to build automated indexing and search functions using AWS serverless managed artificial intelligence (AI)/machine learning (ML) services. This architecture provides high scalability, reduces operational overhead, and scales out/in automatically based on the demand, with a flexible pay-as-you-go pricing model. Automatic tagging and rich metadata with Amazon ES Asset libraries for images and videos are growing exponentially. With Amazon Elasticsearch Service (Amazon ES) , this media is indexed and organized, which is important for efficient search and quick retrieval. Adding correct metadata to digital assets based on enterprise standard taxonomy will help you narrow down search results. This includes information like media formats, but also richer metadata like location, event details, and so forth. With Amazon Rekognition , an advanced ML service, you do not need to tag and index these media assets. This automatic tagging and organization frees you up to gain insights like sentiment analysis from social media. Figure 1 is tagged using Amazon Rekognition. You can see how rich metadata (Apparel, T-Shirt, Person, and Pills) is extracted automatically. Without Amazon Rekognition, you would have to manually add tags and categorize the image. This means you could only do a keyword search on what’s manually tagged. If the image was not tagged, then you likely wouldn’t be able to find it in a search. Figure 1. An image tagged automatically with Amazon Rekognition Data ingestion, organization, and storage with Amazon S3 As shown in Figure 2, use Amazon Simple Storage Service (Amazon S3) to store your static assets. It provides high availability and scalability, along with unlimited storage. When you choose Amazon S3 as your content repository, multiple data providers are configured for data ingestion for future consumption by downstream applications. In addition to providing storage, Amazon S3 lets you organize data into prefixes based on the event type and captures S3 object mutations through S3 event notifications. Figure 2. Solution overview diagram S3 event notifications are invoked for a specific prefix, suffix, or combination of both. They integrate with Amazon Simple Queue Service (Amazon SQS) , Amazon Simple Notification Service (Amazon SNS) , and AWS Lambda as targets. (Refer to the Amazon S3 Event Notifications user guide for best practices). S3 event notification targets vary across use cases. For media assets, Amazon SQS is used to decouple the new data objects ingested into S3 buckets and downstream services. 
Amazon SQS provides flexibility over the data processing based on resource availability.

Data processing with Amazon Rekognition
Once media assets are ingested into Amazon S3, they are ready to be processed. Amazon Rekognition determines the entities within each asset. Amazon Rekognition then extracts the entities in JSON format and assigns a confidence score. If the confidence score is below the defined threshold, use Amazon Augmented AI (A2I) for further review. A2I is an ML service that helps you build the workflows required for human review of ML predictions. Amazon Rekognition also supports custom modeling to help identify entities within the images for specific business needs. For instance, a campaign may need images of products worn by a brand ambassador at a marketing event. Then they may need to further narrow their search down by the individual's name or age demographic.

Using our solution, a Lambda function invokes Amazon Rekognition to extract the entities from the ingested assets. Lambda continuously polls the SQS queue for any new messages. Once a message is available, the Lambda function invokes the Amazon Rekognition endpoint to extract the relevant entities. The following is a sample output from detect_labels API call in Amazon Rekognition and the transformed output that will be updated to downstream search engine:

{'Labels': [
  {'Name': 'Clothing', 'Confidence': 99.98137664794922, 'Instances': [], 'Parents': []},
  {'Name': 'Apparel', 'Confidence': 99.98137664794922, 'Instances': [], 'Parents': []},
  {'Name': 'Shirt', 'Confidence': 97.00833129882812, 'Instances': [], 'Parents': [{'Name': 'Clothing'}]},
  {'Name': 'T-Shirt', 'Confidence': 76.36670684814453,
   'Instances': [{'BoundingBox': {'Width': 0.7963646650314331, 'Height': 0.6813027262687683,
                                  'Left': 0.09593021124601364, 'Top': 0.1719706505537033},
                  'Confidence': 53.39663314819336}],
   'Parents': [{'Name': 'Clothing'}]}],
 'LabelModelVersion': '2.0',
 'ResponseMetadata': {'RequestId': '3a561e82-badc-4ba0-aa77-39a13f1bb3a6', 'HTTPStatusCode': 200,
  'HTTPHeaders': {'content-type': 'application/x-amz-json-1.1', 'date': 'Mon, 17 May 2021 18:32:27 GMT',
   'x-amzn-requestid': '3a561e82-badc-4ba0-aa77-39a13f1bb3a6', 'content-length': '542', 'connection': 'keep-alive'},
  'RetryAttempts': 0}}

As shown, the Lambda function submits an API call to Amazon Rekognition, where a T-shirt image in .jpeg format is provided as the input. Based on your confidence score threshold preference, Amazon Rekognition will prompt you to initiate a human review using Amazon A2I. It will also prompt you to use Amazon Rekognition Custom Labels to train the custom models. Lambda then identifies and arranges the labels and updates the specified index.

Indexing with Amazon ES
Amazon ES is a managed search engine service that provides enterprise-grade search engine capability for applications. In our solution, assets are searched based on entities that are used as metadata to update the index. Amazon ES is hosted as a public endpoint or a VPC endpoint for secure access within the specified AWS account. Labels are identified and marked as tags, which are assigned to .jpeg formatted images.
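As a minimal, illustrative sketch (not the authors' exact implementation), the following shows how an SQS-triggered Lambda handler might tie these steps together: read the S3 object reference from the queued event, call detect_labels, and index the resulting tags. The endpoint, index name, and confidence threshold are hypothetical placeholders, and a production handler would typically sign requests to the Amazon ES domain and route low-confidence labels to Amazon A2I as described above.

import json
import boto3
import requests  # assumes the Lambda deployment package bundles this library

rekognition = boto3.client("rekognition")
ES_ENDPOINT = "https://search-media-assets.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
INDEX_NAME = "media-assets"                                             # hypothetical index
MIN_CONFIDENCE = 70.0

def handler(event, context):
    # The SQS event source delivers S3 event notifications as message bodies.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            # Extract entities (labels) from the newly ingested image.
            labels = rekognition.detect_labels(
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                MinConfidence=MIN_CONFIDENCE,
            )["Labels"]

            # Index the asset and its tags so they become searchable.
            document = {"fileName": key, "objectTags": [label["Name"] for label in labels]}
            requests.post(f"{ES_ENDPOINT}/{INDEX_NAME}/_doc", json=document, timeout=10)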
The following sample output shows the query on one of the tags issued on an Amazon ES cluster.

Query:
curl -XGET https://<ElasticSearch Endpoint>/<_IndexName>/_search?q=T-Shirt

Output:
{"took":140,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.05460011,"hits":[{"_index":"movies","_type":"_doc","_id":"15","_score":0.05460011,"_source":{"fileName":"s7-1370766_lifestyle.jpg","objectTags":["Clothing","Apparel","Sailor Suit","Sleeve","T-Shirt","Shirt","Jersey"]}}]}}

In addition to photos, Amazon Rekognition also detects labels in videos. It can recognize labels and identify characters and entities. These are then added to Amazon ES to enhance search capability, allowing users to skip to specific parts of a video for quick searchability. For instance, a marketer may need images of cashmere sweaters from a fashion show that was streamed and recorded. Once the raw video clip is identified, it is then converted using Amazon Elastic Transcoder to play back on mobile devices, tablets, web browsers, and connected televisions. Elastic Transcoder is a highly scalable and cost-effective media transcoding service in the cloud. Segmented output renditions are created for delivery over multiple protocols to compatible devices.

Conclusion
This blog describes AWS services that can be applied to a diverse set of use cases for tagging and efficient search of images and videos. You can build automated indexing and search using AWS serverless managed AI/ML services. They provide high scalability, reduce operational overhead, and scale out/in automatically based on demand, with a flexible pay-as-you-go pricing model. To get started, use these references to create your own sample architectures: Amazon S3, Amazon Elasticsearch, Amazon Rekognition, and AWS Lambda.

Sridhar Chevendra is a Solutions Architect with Amazon Web Services. He works with digital native business customers to build secure, scalable, and resilient architectures in the AWS Cloud. Sridhar enjoys the outdoors and likes to read about macroeconomics.
Shitij Agarwal is a Partner Solutions Architect at AWS. He creates joint solutions with strategic ISV partners to deliver value to customers. When not at work, he is busy exploring New York City and the hiking trails that surround it, and going on bike rides.
Gurinder Singh is a Solutions Architect at AWS. He works with customers to design and implement a variety of solutions in the AWS Cloud. Gurinder enjoys landscaping and loves to go on long drives.
AWS Machine Learning Blog Interactively fine-tune Falcon-40B and other LLMs on Amazon SageMaker Studio notebooks using QLoRA by Sean Morgan , Philipp Schmid , and Lauren Mullennex | on 29 JUN 2023 | in Amazon Machine Learning , Amazon SageMaker , Artificial Intelligence , Generative AI , Technical How-to | Permalink | Comments |  Share Fine-tuning large language models (LLMs) allows you to adjust open-source foundational models to achieve improved performance on your domain-specific tasks. In this post, we discuss the advantages of using Amazon SageMaker notebooks to fine-tune state-of-the-art open-source models. We utilize Hugging Face’s parameter-efficient fine-tuning (PEFT) library and quantization techniques through bitsandbytes to support interactive fine-tuning of extremely large models using a single notebook instance. Specifically, we show how to fine-tune Falcon-40B using a single ml.g5.12xlarge instance (4 A10G GPUs), but the same strategy works to tune even larger models on p4d/p4de notebook instances . Typically, the full precision representations of these very large models don’t fit into memory on a single or even several GPUs. To support an interactive notebook environment to fine-tune and run inference on models of this size, we use a new technique known as Quantized LLMs with Low-Rank Adapters (QLoRA) . QLoRA is an efficient fine-tuning approach that reduces memory usage of LLMs while maintaining solid performance. Hugging Face and the authors of the paper mentioned have published a detailed blog post that covers the fundamentals and integrations with the Transformers and PEFT libraries. Using notebooks to fine-tune LLMs SageMaker comes with two options to spin up fully managed notebooks for exploring data and building machine learning (ML) models. The first option is fast start, collaborative notebooks accessible within Amazon SageMaker Studio , a fully integrated development environment (IDE) for ML. You can quickly launch notebooks in SageMaker Studio, dial up or down the underlying compute resources without interrupting your work, and even co-edit and collaborate on your notebooks in real time. In addition to creating notebooks, you can perform all the ML development steps to build, train, debug, track, deploy, and monitor your models in a single pane of glass in SageMaker Studio. The second option is a SageMaker notebook instance , a single, fully managed ML compute instance running notebooks in the cloud, which offers you more control over your notebook configurations. For the remainder of this post, we use SageMaker Studio notebooks because we want to utilize SageMaker Studio’s managed TensorBoard experiment tracking with Hugging Face Transformer’s support for TensorBoard. However, the same concepts shown throughout the example code will work on notebook instances using the conda_pytorch_p310 kernel. It’s worth noting that SageMaker Studio’s Amazon Elastic File System (Amazon EFS) volume means you don’t need to provision a preordained Amazon Elastic Block Store (Amazon EBS) volume size, which is useful given the large size of model weights in LLMs. Using notebooks backed by large GPU instances enables rapid prototyping and debugging without cold start container launches. However, it also means that you need to shut down your notebook instances when you’re done using them to avoid extra costs. 
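Before loading a model of this size, it can be worth confirming what the notebook kernel actually sees. The short snippet below is not part of the original walkthrough; it simply prints the GPUs visible to the kernel and their memory, which is a quick way to check that the instance offers enough aggregate GPU memory for 4-bit loading (an ml.g5.12xlarge should report four A10G devices).

import torch

# Quick sanity check of the GPUs available to the notebook kernel.
print(f"CUDA available: {torch.cuda.is_available()}")
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")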
Other options such as Amazon SageMaker JumpStart and SageMaker Hugging Face containers can be used for fine-tuning, and we recommend you refer to the following posts on the aforementioned methods to choose the best option for you and your team: Domain-adaptation Fine-tuning of Foundation Models in Amazon SageMaker JumpStart on Financial data Train a Large Language Model on a single Amazon SageMaker GPU with Hugging Face and LoRA Prerequisites If this is your first time working with SageMaker Studio, you first need to create a SageMaker domain . We also use a managed TensorBoard instance for experiment tracking , though that is optional for this tutorial. Additionally, you may need to request a service quota increase for the corresponding SageMaker Studio KernelGateway apps. For fine-tuning Falcon-40B, we use a ml.g5.12xlarge instance. To request a service quota increase, on the AWS Service Quotas console, navigate to AWS services , Amazon SageMaker , and select Studio KernelGateway Apps running on ml.g5.12xlarge instances . Get started The code sample for this post can be found in the following GitHub repository . To begin, we choose the Data Science 3.0 image and Python 3 kernel from SageMaker Studio so that we have a recent Python 3.10 environment to install our packages. We install PyTorch and the required Hugging Face and bitsandbytes libraries: %pip install -q -U torch==2.0.1 bitsandbytes==0.39.1 %pip install -q -U datasets py7zr einops tensorboardX %pip install -q -U git+https://github.com/huggingface/transformers.git@850cf4af0ce281d2c3e7ebfc12e0bc24a9c40714 %pip install -q -U git+https://github.com/huggingface/peft.git@e2b8e3260d3eeb736edf21a2424e89fe3ecf429d %pip install -q -U git+https://github.com/huggingface/accelerate.git@b76409ba05e6fa7dfc59d50eee1734672126fdba Next, we set the CUDA environment path using the installed CUDA that was a dependency of PyTorch installation. This is a required step for the bitsandbytes library to correctly find and load the correct CUDA shared object binary. # Add installed cuda runtime to path for bitsandbytes import os import nvidia cuda_install_dir = '/'.join(nvidia.__file__.split('/')[:-1]) + '/cuda_runtime/lib/' os.environ['LD_LIBRARY_PATH'] =  cuda_install_dir Load the pre-trained foundational model We use bitsandbytes to quantize the Falcon-40B model into 4-bit precision so that we can load the model into memory on 4 A10G GPUs using Hugging Face Accelerate’s naive pipeline parallelism. As described in the previously mentioned Hugging Face post , QLoRA tuning is shown to match 16-bit fine-tuning methods in a wide range of experiments because model weights are stored as 4-bit NormalFloat, but are dequantized to the computation bfloat16 on forward and backward passes as needed. model_id = "tiiuae/falcon-40b" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) When loading the pretrained weights, we specify device_map=”auto"  so that Hugging Face Accelerate will automatically determine which GPU to put each layer of the model on. This process is known as model parallelism . # Falcon requires you to allow remote code execution. This is because the model uses a new architecture that is not part of transformers yet. # The code is provided by the model authors in the repo. 
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config, device_map="auto") With Hugging Face’s PEFT library, you can freeze most of the original model weights and replace or extend model layers by training an additional, much smaller, set of parameters. This makes training much less expensive in terms of required compute. We set the Falcon modules that we want to fine-tune as target_modules in the LoRA configuration: from peft import LoraConfig, get_peft_model config = LoraConfig( r=8, lora_alpha=32, target_modules=[ "query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h", ], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, config) print_trainable_parameters(model) # Output: trainable params: 55541760 || all params: 20974518272|| trainable%: 0.2648058910327664 Notice that we’re only fine-tuning 0.26% of the model’s parameters, which makes this feasible in a reasonable amount of time. Load a dataset We use the samsum dataset for our fine-tuning. Samsum is a collection of 16,000 messenger-like conversations with labeled summaries. The following is an example of the dataset: { "id": "13818513", "summary": "Amanda baked cookies and will bring Jerry some tomorrow.", "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)" } In practice, you’ll want to use a dataset that has specific knowledge to the task you are hoping to tune your model on. The process of building such a dataset can be accelerated by using Amazon SageMaker Ground Truth Plus , as described in High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus . Fine-tune the model Prior to fine-tuning, we define the hyperparameters we want to use and train the model. We can also log our metrics to TensorBoard by defining the parameter logging_dir and requesting the Hugging Face transformer to report_to="tensorboard" : bucket = ” <YOUR-S3-BUCKET> ” log_bucket = f"s3://{bucket}/falcon-40b-qlora-finetune" import transformers # We set num_train_epochs=1 simply to run a demonstration trainer = transformers.Trainer( model=model, train_dataset=lm_train_dataset, eval_dataset=lm_test_dataset, args=transformers.TrainingArguments( per_device_train_batch_size=8, per_device_eval_batch_size=8, logging_dir=log_bucket, logging_steps=2, num_train_epochs=1, learning_rate=2e-4, bf16=True, save_strategy = "no", output_dir="outputs",  report_to="tensorboard", ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) Monitor the fine-tuning With the preceding setup, we can monitor our fine-tuning in real time. To monitor GPU usage in real time, we can run nvidia-smi directly from the kernel’s container. To launch a terminal running on the image container, simply choose the terminal icon at the top of your notebook. From here, we can use the Linux watch command to repeatedly run nvidia-smi every half second: watch -n 0.5 nvidia-smi In the preceding animation, we can see that the model weights are distributed across the 4 GPUs and computation is being distributed across them as layers are processed serially. To monitor the training metrics, we utilize the TensorBoard logs that we write to the specified Amazon Simple Storage Service (Amazon S3) bucket. 
We can launch our SageMaker Studio domain user's TensorBoard from the AWS SageMaker console. After loading, you can specify the S3 bucket that you instructed the Hugging Face transformer to log to in order to view training and evaluation metrics.

Evaluate the model
After our model is finished training, we can run systematic evaluations or simply generate responses:

tokens_for_summary = 30
output_tokens = input_ids.shape[1] + tokens_for_summary

outputs = model.generate(inputs=input_ids, do_sample=True, max_length=output_tokens)
gen_text = tokenizer.batch_decode(outputs)[0]
print(gen_text)

# Sample output:
# Summarize the chat dialogue:
# Richie: Pogba
# Clay: Pogboom
# Richie: what a s strike yoh!
# Clay: was off the seat the moment he chopped the ball back to his right foot
# Richie: me too dude
# Clay: hope his form lasts
# Richie: This season he's more mature
# Clay: Yeah, Jose has his trust in him
# Richie: everyone does
# Clay: yeah, he really deserved to score after his first 60 minutes
# Richie: reward
# Clay: yeah man
# Richie: cool then
# Clay: cool
# ---
# Summary:
# Richie and Clay have discussed the goal scored by Paul Pogba. His form this season has improved and both of them hope this will last long

After you are satisfied with the model's performance, you can save the model:

trainer.save_model("path_to_save")

You can also choose to host it in a dedicated SageMaker endpoint.

Clean up
Complete the following steps to clean up your resources:
1. Shut down the SageMaker Studio instances to avoid incurring additional costs.
2. Shut down your TensorBoard application.
3. Clean up your EFS directory by clearing the Hugging Face cache directory: rm -R ~/.cache/huggingface/hub

Conclusion
SageMaker notebooks allow you to fine-tune LLMs in a quick and efficient manner in an interactive environment. In this post, we showed how you can use Hugging Face PEFT with bitsandbytes to fine-tune Falcon-40B models using QLoRA on SageMaker Studio notebooks. Try it out, and let us know your thoughts in the comments section! We also encourage you to learn more about Amazon generative AI capabilities by exploring SageMaker JumpStart, Amazon Titan models, and Amazon Bedrock.

About the Authors
Sean Morgan is a Senior ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.
Lauren Mullennex is a Senior AI/ML Specialist Solutions Architect at AWS. She has a decade of experience in DevOps, infrastructure, and ML. She is also the author of a book on computer vision. Her other areas of focus include MLOps and generative AI.
Philipp Schmid is a Technical Lead at Hugging Face with the mission to democratize good machine learning through open source and open science. Philipp is passionate about productionizing cutting-edge and generative AI machine learning models. He loves to share his knowledge on AI and NLP at various meetups such as Data Science on AWS, and on his technical blog.
AWS Machine Learning Blog Introducing popularity tuning for Similar-Items in Amazon Personalize by Julia Clark , Branislav Kveton , Nihal Harish , and Yifei Ma | on 08 JUN 2023 | in Amazon Machine Learning , Amazon Personalize | Permalink | Comments |  Share Amazon Personalize now enables popularity tuning for its Similar-Items recipe ( aws-similar-items ). Similar-Items generates recommendations that are similar to the item that a user selects, helping users discover new items in your catalog based on the previous behavior of all users and item metadata. Previously, this capability was only available for SIMS , the other Related_Items recipe within Amazon Personalize. Every customer’s item catalog and the way that users interact with it are unique to their business. When recommending similar items, some customers may want to place more emphasis on popular items because they increase the likelihood of user interaction, while others may want to de-emphasize popular items to surface recommendations that are more similar to the selected item but are less widely known. This launch gives you more control over the degree to which popularity influences Similar-Items recommendations, so you can tune the model to meet your particular business needs. In this post, we show you how to tune popularity for the Similar-Items recipe. We specify a value closer to zero to include more popular items, and specify a value closer to 1 to place less emphasis on popularity. Example use cases To explore the impact of this new feature in greater detail, let’s review two examples. [1] First, we used the Similar-Items recipe to find recommendations similar to Disney’s 1994 movie The Lion King ( IMDB record ). When the popularity discount is set to 0, Amazon Personalize recommends movies that have a high frequency of occurrence (are popular). In this example, the movie Seven (a.k.a. Se7en), which occurred 19,295 times in the dataset, is recommended at rank 3.0. By tuning the popularity discount to a value of 0.4 for The Lion King recommendations, we see that the rank of the movie Seven drops to 4.0. We also see movies from the Children genre like Babe, Beauty and the Beast, Aladdin, and Snow White and the Seven Dwarfs get recommended at a higher rank despite their lower overall popularity in the dataset. Let’s explore another example. We used the Similar-Items recipe to find recommendations similar to Disney and Pixar’s 1995 movie Toy Story ( IMDB record ). When the popularity discount is set to 0, Amazon Personalize recommends movies that have a high frequency occurrence in the dataset. In this example, we see that the movie Twelve Monkeys (a.k.a. 12 Monkeys), which occurred 6,678 times in the dataset, is recommended at rank 5.0. By tuning the popularity discount to a value of 0.4 for Toy Story recommendations, we see that the rank of the Twelve Monkeys is no longer recommended in the top 10. We also see movies from the Children genre like Aladdin, Toy Story 2, and A Bug’s Life get recommended at a higher rank despite their lower overall popularity in the dataset. Placing greater emphasis on more popular content can help increase likelihood that users will engage with item recommendations. Reducing emphasis on popularity may surface recommendations that seem more relevant to the queried item, but may be less popular with users. You can tune the degree of importance placed on popularity to meet your business needs for a specific personalization campaign. 
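One practical way to see this effect on your own catalog is to train two solution versions with different popularity discounts, deploy each behind a campaign, and compare their Similar-Items output for the same query item. The sketch below is illustrative only: the campaign ARNs and item ID are hypothetical placeholders, and it assumes both campaigns are already active.

import boto3

personalize_runtime = boto3.client("personalize-runtime")

# Hypothetical campaigns for two solution versions of the Similar-Items recipe.
CAMPAIGNS = {
    "discount_0.0": "arn:aws:personalize:us-east-1:111122223333:campaign/similar-items-discount-0",
    "discount_0.4": "arn:aws:personalize:us-east-1:111122223333:campaign/similar-items-discount-04",
}

def compare_similar_items(item_id, num_results=10):
    """Print the top similar items returned by each campaign for one query item."""
    for label, campaign_arn in CAMPAIGNS.items():
        response = personalize_runtime.get_recommendations(
            campaignArn=campaign_arn,
            itemId=item_id,
            numResults=num_results,
        )
        print(label, [item["itemId"] for item in response["itemList"]])

compare_similar_items("123")  # placeholder item ID from your catalog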
Implement popularity tuning
To tune popularity for the Similar-Items recipe, configure the popularity_discount_factor hyperparameter via the AWS Management Console, the AWS SDKs, or the AWS Command Line Interface (AWS CLI). The following is sample code setting the popularity discount factor to 0.5 via the AWS SDK:

response = personalize.create_solution(
    name="movie_lens-with-popularity-discount-0_5",
    recipeArn="arn:aws:personalize:::recipe/aws-similar-items",
    datasetGroupArn=dsg_arn,
    solutionConfig={
        "algorithmHyperParameters": {
            # set the preferred value of popularity discount here
            "popularity_discount_factor": "0.50"
        }
    }
)

The following screenshot shows setting the popularity discount factor to 0.3 on the Amazon Personalize console.

Conclusion
With popularity tuning, you can now further refine the Similar-Items recipe within Amazon Personalize to control the degree to which popularity influences item recommendations. This gives you greater control over defining the end-user experience and what is included or excluded in your Similar-Items recommendations. For more details on how to implement popularity tuning for the Similar-Items recipe, refer to the documentation.

References
[1] Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages. DOI=http://dx.doi.org/10.1145/2827872

About the Authors
Julia McCombs Clark is a Sr. Technical Product Manager on the Amazon Personalize team.
Nihal Harish is a Software Development Engineer on the Amazon Personalize team.
Yifei Ma is a Senior Applied Scientist at AWS AI Labs working on recommender systems. His research interests lie in active learning, sequential modeling, and online decision making.
Branislav Kveton is a Principal Scientist at AWS AI Labs. He proposes, analyzes, and applies algorithms that learn incrementally, run in real time, and converge to near optimal solutions as the number of observations increases.
AWS Architecture Blog Introducing the latest Machine Learning Lens for the AWS Well-Architected Framework by Raju Patil, Ganapathi Krishnamoorthi, Michael Hsieh, Neil Mackin, and Dhiraj Thakur | on 05 JUL 2023 | in Amazon Machine Learning , Announcements , Architecture , AWS Well-Architected Framework | Permalink | Comments |  Share Today, we are delighted to introduce the latest version of the AWS Well-Architected Machine Learning (ML) Lens whitepaper . The AWS Well-Architected Framework provides architectural best practices for designing and operating ML workloads on AWS. It is based on six pillars: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and—a new addition to this revision—Sustainability. The ML Lens uses the Well-Architected Framework to outline the steps for performing an AWS Well-Architected review for your ML implementations. The ML Lens provides a consistent approach for customers to evaluate ML architectures, implement scalable designs, and identify and mitigate technical risks. It covers common ML implementation scenarios and identifies key workload elements to allow you to architect your cloud-based applications and workloads according to the AWS best practices that we have gathered from supporting thousands of customer implementations. The new ML Lens joins a collection of Well-Architected lenses that focus on specialized workloads such as the Internet of Things (IoT), games, SAP, financial services, and SaaS technologies. You can find more information in AWS Well-Architected Lenses . What is the Machine Learning Lens? Let’s explore the ML Lens across ML lifecycle phases, as the following figure depicts. Figure 1. Machine Learning Lens The Well-Architected ML Lens whitepaper focuses on the six pillars of the Well-Architected Framework across six phases of the ML lifecycle. The six phases are: Defining your business goal Framing your ML problem Preparing your data sources Building your ML model Entering your deployment phase Establishing the monitoring of your ML workload Unlike the traditional waterfall approach, an iterative approach is required to achieve a working prototype based on the six phases of the ML lifecycle. The whitepaper provides you with a set of established cloud-agnostic best practices in the form of Well-Architected Pillars for each ML lifecycle phase. You can also use the Well-Architected ML Lens wherever you are on your cloud journey. You can choose either to apply this guidance during the design of your ML workloads, or after your workloads have entered production as a part of the continuous improvement process. What’s new in the Machine Learning Lens? Sustainability Pillar : As building and running ML workloads becomes more complex and consumes more compute power, refining compute utilities and assessing your carbon footprint from these workloads grows to critical importance. The new pillar focuses on long-term environmental sustainability and presents design principles that can help you build ML architectures that maximize efficiency and reduce waste. Improved best practices and implementation guidance : Notably, enhanced guidance to identify and measure how ML will bring business value against ML operational cost to determine the return on investment (ROI). Updated guidance on new features and services : A set of updated ML features and services announced to-date have been incorporated into the ML Lens whitepaper. 
New additions include, but are not limited to, the ML governance features, the model hosting features, and the data preparation features. These and other improvements will make it easier for your development team to create a well-architected ML workloads in your enterprise. Updated links : Many documents, blogs, instructional and video links have been updated to reflect a host of new products, features, and current industry best practices to assist your ML development. Who should use the Machine Learning Lens? The Machine Learning Lens is of use to many roles, including: Business leaders for a broader appreciation of the end-to-end implementation and benefits of ML Data scientists to understand how the critical modeling aspects of ML fit in a wider context Data engineers to help you use your enterprise’s data assets to their greatest potential through ML ML engineers to implement ML prototypes into production workloads reliably, securely, and at scale MLOps engineers to build and manage ML operation pipelines for faster time to market Risk and compliance leaders to understand how the ML can be implemented responsibly by providing compliance with regulatory and governance requirements Machine Learning Lens components The Lens includes four focus areas: 1. The Well-Architected Machine Learning Design Principles A set of best practices that are used as the basis for developing a Well-Architected ML workload. 2. The Machine Learning Lifecycle and the Well Architected Framework Pillars This considers all aspects of the Machine Learning Lifecycle and reviews design strategies to align to pillars of the overall Well-Architected Framework. The Machine Learning Lifecycle phases referenced in the ML Lens include: Business goal identification – identification and prioritization of the business problem to be addressed, along with identifying the people, process, and technology changes that may be required to measure and deliver business value. ML problem framing – translating the business problem into an analytical framing, i.e., characterizing the problem as an ML task, such as classification, regression, or clustering, and identifying the technical success metrics for the ML model. Data processing – garnering and integrating datasets, along with necessary data transformations needed to produce a rich set of features. Model development – iteratively training and tuning your model, and evaluating candidate solutions in terms of the success metrics as well as including wider considerations such as bias and explainability. Model deployment – establishing the mechanism to flow data though the model in a production setting to make inferences based on production data. Model monitoring – tracking the performance of the production model and the characteristics of the data used for inference. The Well-Architected Framework Pillars are: Operational Excellence – ability to support ongoing development, run operational workloads effectively, gain insight into your operations, and continuously improve supporting processes and procedures to deliver business value. Security – ability to protect data, systems, and assets, and to take advantage of cloud technologies to improve your security. Reliability – ability of a workload to perform its intended function correctly and consistently, and to automatically recover from failure situations. 
Performance Efficiency – ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as system demand changes and technologies evolve.
Cost Optimization – ability to run systems to deliver business value at the lowest price point.
Sustainability – addresses the long-term environmental, economic, and societal impact of your business activities.

3. Cloud-agnostic best practices
These are best practices for each ML lifecycle phase across the Well-Architected Framework pillars, irrespective of your technology setting. The best practices are accompanied by:
Implementation guidance – the AWS implementation plans for each best practice, with references to AWS technologies and resources.
Resources – a set of links to AWS documents, blogs, videos, and code examples as supporting resources to the best practices and their implementation plans.

4. Indicative ML Lifecycle architecture diagrams
These illustrate the processes, technologies, and components that support many of these best practices.

What are the next steps?

The new Well-Architected Machine Learning Lens whitepaper is available now. Use the Lens whitepaper to confirm that your ML workloads are architected with operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability in mind. If you require support with the implementation or assessment of your Machine Learning workloads, please contact your AWS Solutions Architect or Account Representative.

Special thanks to everyone across the AWS Solution Architecture, AWS Professional Services, and Machine Learning communities who contributed to the Lens. These contributions encompassed diverse perspectives, expertise, backgrounds, and experiences in developing the new AWS Well-Architected Machine Learning Lens.

TAGS: machine learning, ML

Raju Patil
Raju Patil is a Data Scientist in AWS Professional Services. He builds and deploys AI/ML solutions to help AWS customers overcome business challenges, including computer vision, time-series forecasting, and predictive analytics use cases across financial services, telecom, and healthcare. He has led data science teams in advertising technology and in computer vision and robotics R&D initiatives. He enjoys photography, hiking, travel, and culinary exploration.

Ganapathi Krishnamoorthi
Ganapathi Krishnamoorthi is a Senior ML Solutions Architect at AWS. Ganapathi provides prescriptive guidance to startup and enterprise customers, helping them design and deploy cloud applications at scale. He specializes in machine learning and is focused on helping customers use AI/ML for their business outcomes. When not at work, he enjoys exploring the outdoors and listening to music.

Michael Hsieh
Michael Hsieh is a Principal AI/ML Specialist Solutions Architect. He solves business challenges using AI/ML for customers in the healthcare and life sciences industry. A Seattle transplant, he loves exploring the nature the city has to offer, such as the hiking trails, scenic kayaking in the SLU, and the sunset at Shilshole Bay. As a former long-time resident of Philadelphia, he has been rooting for the Philadelphia Eagles and the Philadelphia Phillies.

Neil Mackin
Neil Mackin is a Principal ML Strategist and leads the ML Solutions Lab team of strategists in EMEA. He works to help customers realize business value by deploying machine learning workloads into production, and guides customers on moving toward best practice with ML.
Dhiraj Thakur
Dhiraj Thakur is a Solutions Architect with Amazon Web Services. He works with AWS customers and partners to provide guidance on enterprise cloud adoption, migration, and strategy. He is passionate about technology and enjoys building and experimenting in the analytics and AI/ML space.
iptiQ Case Study.txt
AWS Customer Success Story: iptiQ by Swiss Re | Amazon Web Services

These days, it's not unusual to find yourself buying a mobile phone subscription from a grocery store, signing up for a credit card from your favorite sports team, or even getting home insurance when you buy furniture. Out-of-category purchasing, as it's called, is becoming more common, especially for financial services.

iptiQ, a B2B2C insurer and division of Swiss Re, provides white-label, digital insurance solutions built on AWS that help its consumer-brand partners sell insurance policies that are complementary to their core businesses.

In Europe, iptiQ launched its Property & Casualty insurance business entirely using Amazon Web Services (AWS), knowing that it needed to grow and develop its products at speed. "Using AWS has been pivotal to our success," says Claudio Pozzoli, chief technology officer (CTO) at iptiQ EMEA. "Our business is simplifying complex insurance practices, not IT maintenance. With our platform built on AWS, we can better support our partners, give them the products they need faster, and make the digital journey easier for their customers."

iptiQ uses a common code base for its European partners and has developed a single API to allow any of them to connect to its technology, regardless of the specific product and market combination. To accommodate individual requirements, iptiQ tailors what each partner gets and adapts its offering to the different categories of insurance that their partners' customers need. Using AWS, this flexibility is possible.

The scale-up relies on a number of services, including Amazon Relational Database Service (Amazon RDS), which helped it set up, operate, and scale its relational database in the cloud with just a few clicks, as well as Amazon Elastic Kubernetes Service (Amazon EKS), Amazon SageMaker, and AWS Lambda. Pozzoli also values the ability to easily draw on developer resources. "There's a large engineering pool with extensive experience on AWS, which allows you to ramp up your teams quickly," he says.

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Amazon SageMaker: build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

Data protection is critical in a highly regulated sector such as insurance. "Using AWS, we can easily comply with the security standards in our industry," says Pozzoli.
"We have peace of mind that our brand and reputation—and those of our partners—are fully protected."

About iptiQ

Swiss Re's iptiQ Helps its Partners Deliver Simple, Digital Insurance Solutions on AWS

iptiQ, a scale-up division of reinsurer Swiss Re, is making it easy for consumer brands to sell insurance to their customers. As a white-label insurance provider, iptiQ forms partnerships with insurance intermediaries and leading companies such as home furnishings retailer IKEA and real-estate marketplace ImmoScout24. "Today, more than 50 partners embed or integrate our insurance solutions into their products or customer journeys," says Andreas Schertzinger, chief executive officer (CEO) at iptiQ EMEA. "This means that more than 1.6 million consumers benefit from our affordable and convenient products."

This innovative model is known as business-to-business-to-consumer (B2B2C) insurance, and the market for such services is set to almost triple in size between 2020 and 2031. iptiQ makes it easier for brands to sell insurance that complements their core products—while giving those companies' customers a better insurance-buying experience.

Using AWS, iptiQ has the availability, speed, and flexibility to keep innovating. Its solution makes life easier for consumers, both when buying insurance and making claims. In addition, it has reduced partner onboarding time—from around 6–8 months, which is common with other insurers—to just a few weeks. Amazon EKS, a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers, is among the services underpinning the platform.

"With our platform built on AWS, we can better support our partners, give them the products they need faster, and make the digital journey easier for their customers."
Claudio Pozzoli, Chief Technology Officer, iptiQ EMEA

For iptiQ, delivering a great experience is as important as ensuring that security is covered. So, if you find that your life is being made a little bit easier by the convenience of buying insurance services from your preferred brand, it might well be iptiQ that's powering it. And as the company continues its rapid growth—Gross Written Premium grew 95 percent in 2021—in Europe it's using AWS to gain the speed and availability it needs to deliver innovative insurance purchasing options to brands and consumers.
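The case study describes the single partner-facing API only at a high level. Purely as an illustration of the kind of serverless pattern it suggests, the sketch below shows a hypothetical quote endpoint implemented as an AWS Lambda handler behind an API Gateway proxy integration; the product catalog, field names, and values are invented for the example and are not iptiQ's actual implementation.

```python
import json

# Hypothetical product/market catalog; a real system would look this up elsewhere.
PRODUCT_CATALOG = {
    ("home-contents", "DE"): {"base_premium": 9.50, "currency": "EUR"},
    ("life-term", "NL"): {"base_premium": 14.00, "currency": "EUR"},
}

def lambda_handler(event, context):
    # API Gateway proxy integrations deliver the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    product = body.get("product")
    market = body.get("market")
    offer = PRODUCT_CATALOG.get((product, market))
    if offer is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "unknown product/market combination"})}
    # Tailor the response per partner (branding, covers, pricing rules) before returning.
    quote = {
        "partnerId": body.get("partnerId"),
        "product": product,
        "market": market,
        "monthlyPremium": offer["base_premium"],
        "currency": offer["currency"],
    }
    return {"statusCode": 200, "body": json.dumps(quote)}
```

A single handler like this can serve every partner because the request itself carries the product and market combination, which matches the "one API, tailored offering" approach described above.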
Isetan Mitsukoshi System Solutions seamlessly migrates databases to Amazon Aurora using Amazon DMA _ Isetan Mitsukoshi System Solutions Case Study _ AWS.txt
Isetan Mitsukoshi System Solutions seamlessly migrates databases to Amazon Aurora using Amazon DMA

Learn how Isetan Mitsukoshi System Solutions (IMS) modernized to Amazon Aurora with the help of Amazon DMA. IMS used AWS Database Migration Service (AWS DMS), a managed migration and replication service that helps move workloads to AWS quickly with minimal downtime and zero data loss, to migrate from Amazon RDS for Oracle to Amazon Aurora PostgreSQL. Amazon RDS for Oracle is a fully managed commercial database that makes it easy to set up, operate, and scale Oracle deployments in the cloud. Amazon Database Migration Accelerator (Amazon DMA) is a solution that brings together AWS DMS, the AWS Schema Conversion Tool (AWS SCT), and AWS database experts to help customers migrate away from traditional commercial databases at fixed prices.

Solution | Achieving digital transformation through a phased cloud migration

As a customer of AWS, IMS was already accustomed to AWS cloud services. In phase one, the goal was to quickly reduce the administrative burden of self-managing its on-premises system by re-platforming the databases to the cloud. The company migrated its commercial databases to Amazon Relational Database Service (Amazon RDS) for Oracle.

IMS continued with a system for controlling customer service initiatives like in-store specials, events, and brand promotions. This system encompassed social media management, contact center assistance, and policy effectiveness evaluation. With the help of DMA, IMS migrated the database within budget and in the 19 weeks scheduled. "We had complex applications with syntax unique to the existing database, so we assumed convoluted modifications would be needed," says Kazumi Saito of the ICT Operation Service at IMS. "We were worried about our English skills, but thanks to excellent, friendly support from the AWS Japan team, there weren't any problems." According to Kazumi Saito, DMA's Japanese documentation on migration procedures and program modifications was a major benefit. This material also included thorough information on operating, maintaining, and enhancing the new system.

Outcome | Establishing a path to long-term cost savings and business agility

IMS' database migration project is just the beginning. With over 50 databases remaining—both on-premises and lift-and-shifted to the cloud—IMS plans to gradually migrate them to Amazon Aurora. IMS is delighted with Amazon DMA and plans to continue to use the DMA team for future migration efforts.

"Because our system is technologically advanced and many staff members have come and gone over its lifespan, we thought migration would be tough," says Masaki Saito, Manager of ICT Operation Service at IMS. "Completing the migration on schedule was exceptionally impressive.
DMA provided quick solutions to challenges with clear explanations of the root causes and techniques to resolve them. We're able to focus on our own work, with the system providing great performance and zero unplanned downtime since it launched." In addition, by moving from on-premises databases to Amazon Aurora, IMS was able to lower costs through performance efficiencies and by breaking free from expensive licensing fees.

About Isetan Mitsukoshi System Solutions

Isetan Mitsukoshi System Solutions oversees information strategies and provides an extensive range of IT services for all department stores and companies in the Isetan Mitsukoshi Group. The company aims to fuse customer service with digital technology to create the ultimate customer experience as the core of the group's department store DX initiatives.

Opportunity | Offloading legacy systems to concentrate on digital transformation

The Isetan Mitsukoshi Group, with a long history as one of the largest department store groups in Japan, recognized that digital transformation was needed to keep up with modern demands. In 2019, Isetan Mitsukoshi System Solutions (IMS), which supports all IT usage across the Isetan Mitsukoshi Group, embarked on a multi-phase database migration and modernization journey with Amazon Web Services (AWS) to transform its digital infrastructure, drive innovation, and deliver better value for its customers. Understanding the value cloud services offer, IMS has embraced a cloud-first strategy. In 2022, IMS procured the help of Amazon Database Migration Accelerator (Amazon DMA) to modernize its databases to Amazon Aurora.

According to Takeshi Karasawa, General Manager of the ICT Engineer Services Department at Isetan Mitsukoshi System Solutions Ltd., high-load legacy systems impeded the progression of DX. Historically, IMS operated on-premises commercial databases, but over time, the increasing cost of operating these databases became a major issue. Database licensing costs alone were a significant portion of the group's total IT expenses, requiring significant annual renewal spending and operational workload.

The end goal for IMS, however, was to modernize to the cloud-native database Amazon Aurora, which is designed for high performance and availability at global scale with full MySQL and PostgreSQL compatibility. Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. "Amazon Aurora's affordable high-performance databases reduce expensive licensing costs," says Karasawa. In addition, IMS chose Amazon Aurora PostgreSQL-Compatible Edition for its performance in high-concurrency environments, its lower conversion cost from Oracle PL/SQL stored procedures, and its ease of use.
The company therefore turned to AWS to help with its DX, shifting its databases to the cloud and alleviating the expense and time-consuming effort of self-managing databases.

Isetan Mitsukoshi System Solutions (IMS) develops IT strategies, provides solutions, and runs systems for the group's 44 department stores and companies and 17,000 employees. IMS enables the group's core department store business to receive mission-critical IT solutions like sales management, revenue control, and analytics, as well as digital transformation (DX) initiatives. "Our great variety of IT solutions ranges from operating business systems to using cutting-edge technology," says Karasawa. "With DX as one of our main focuses, we use digital technology to provide new value to customers, improve employee productivity, and preserve the heritage of our department stores. We're also committed to modernization to support smart devices, lowering operating workloads and expenses, and creating DX-friendly environments."

In phase two of its DX and modernization journey, IMS procured the help of Amazon Database Migration Accelerator (Amazon DMA) to accelerate its database migrations to AWS. With limited engineering resources and the added complexity of converting schemas and source code objects to be compatible with the target engine, DMA provided the technical expertise needed to quickly convert schemas and applications. Amazon DMA also provided a detailed playbook for IMS to migrate the databases to production. AWS Database Migration Service (AWS DMS) is a managed migration and replication service that helps move your database and analytics workloads to AWS quickly, securely, and with minimal downtime and zero data loss.

Key results: 19 weeks to migrate as planned; zero unplanned downtime; technical support with access to experts and documentation.

IMS first migrated its Electronic Data Interchange (EDI) system database, which controls transactions between department stores and trading partners. Although the system contained unique database engine code, the DMA team quickly resolved all problems to complete the migration in nine weeks as planned.
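The case study describes the migration tooling only at a high level. As a rough illustration of the kind of task AWS DMS runs in a migration like this, the following sketch uses the boto3 DMS API to start a full-load-plus-CDC replication task; the ARNs, schema name, and task identifier are placeholders, not values from the IMS project.

```python
import json
import boto3

dms = boto3.client("dms", region_name="ap-northeast-1")

# Placeholder ARNs for the Oracle source endpoint, the Aurora PostgreSQL
# target endpoint, and the replication instance that runs the task.
SOURCE_ENDPOINT_ARN = "arn:aws:dms:ap-northeast-1:123456789012:endpoint:SOURCE"
TARGET_ENDPOINT_ARN = "arn:aws:dms:ap-northeast-1:123456789012:endpoint:TARGET"
REPLICATION_INSTANCE_ARN = "arn:aws:dms:ap-northeast-1:123456789012:rep:INSTANCE"

# Illustrative table-mapping rules: replicate every table in an example schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "SALES", "table-name": "%"},
        "rule-action": "include",
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",   # initial bulk load, then ongoing changes
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])
```

The "full-load-and-cdc" migration type is what allows cutover with minimal downtime: the bulk copy runs first, then change data capture keeps the target in sync until applications switch over.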
Isha Foundation Delivers on its Mission for Millions by Transforming Content Delivery on AWS _ Case Study _ AWS.txt
In 2020, Isha Foundation moved most of its in-person programs, training, and events online. As a result, the foundation experienced an increase in the number of visitors attending events or watching videos of Sadhguru's teachings on its websites. The foundation also needed to support two million users globally who take part in online events during Maha Shivaratri, the most significant event in India's spiritual calendar.

Isha Foundation transformed its content delivery network using Amazon CloudFront, ensuring a reliable experience for its growing online user base, delivering highly available content to millions of users, and supporting its mission of helping them attain physical, mental, and spiritual well-being.

To enhance content delivery to a growing number of video subscribers on its websites, and to support thousands of additional concurrent users during events, the foundation migrated its on-premises IT environment to AWS. Isha Foundation's data center was supporting its websites, online educational resources, and an internal CRM solution. "Our data center limited our ability to scale in response to growth, and this negatively impacted video quality and website response times. We chose AWS for improved scalability and ease of integration," says Sivanesan Mathivanan, Delivery Manager–DevOps at Isha Foundation.

The foundation relies on Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers, to run and scale containerized Kubernetes applications, which eliminates the need for internal resources to manage Kubernetes clusters. Senthilkumar V, DevOps engineer at Isha Foundation, says, "Previously, we had to spend time and money upgrading hardware every few years, investing engineering and security resources into the data center, and managing the environment. Now, we can allocate more resources into enhancing our website and other applications instead." The move eliminated hardware upgrades and data center maintenance costs.

When Sadhguru introduced Conscious Planet, an initiative to create a world where humans act more consciously, the foundation's main website maintained strong performance throughout the multi-day campaign. "During this initiative, we streamed new videos and articles and hosted multiple events throughout the world without outages or issues," Mathivanan says. "This helped us achieve our goal of encouraging people to find out more about what the movement is about."

Outcome | Enhancing the Content Experience for Millions on AWS

With its websites, CRM, and CMS running on AWS, the foundation has expanded its various educational and outreach activities by offering daily events and programs. Scaling to support a surge in web traffic during special online events is no longer an issue for the organization, with millions of concurrent users during Maha Shivaratri, as well as special guided meditations occurring monthly during the full moon. "We can scale our application environment to manage 10 times more traffic during online events because of AWS," says Senthilkumar V. With Amazon CloudFront, Isha Foundation can deliver highly available content and achieve low latency, which is critical given the foundation is based in a remote area of India. "With AWS, we're not restricted by borders.
We want to reach more people worldwide, and AWS provides the high performance and availability we require," says Mathivanan.

About Isha Foundation

In 1992, Indian yoga teacher and spiritual leader Jagadish Vasudev, known popularly as Sadhguru, created a nonprofit organization called the Isha Foundation. The foundation is dedicated to raising human consciousness through yoga programs and inspiring projects for society, the environment, and education. What began as a grassroots organization grew into a worldwide movement, supported today by 11 million volunteers in 300 centers across the globe.

Isha Foundation is a non-profit organization offering in-person and online courses and events to a growing number of users globally. To support this growth and securely deliver content with low latency, Isha Foundation migrated its customer relationship management (CRM), content management system (CMS), and website application to AWS.

Solution | Scaling to Deliver Highly Available Content on Amazon CloudFront

Isha Foundation runs its CRM, CMS, websites, and an internal log system on Amazon Elastic Compute Cloud (Amazon EC2) instances. It chose Amazon CloudFront as the content delivery network for its websites and CMS. On AWS, Isha Foundation leveraged Amazon CloudFront, a content delivery network built for high performance and security, and Amazon Elastic Kubernetes Service (Amazon EKS) for scalability, scaling to support a 10x increase in web traffic. With these solutions, Isha Foundation is ensuring an improved online experience for its subscribers and supporting its mission of helping them attain overall well-being.

"Our focus at Isha Foundation is to engage with spiritual seekers, meditators, and volunteers in new ways as we grow. By leveraging Amazon CloudFront and new AWS technologies, we can constantly provide our users with a spiritual experience no matter where they are."
Sivanesan Mathivanan, Delivery Manager–DevOps, Isha Foundation

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.

Isha Foundation is now exploring additional AWS services, such as Amazon Polly, which turns text into lifelike speech. "We're looking at using text-to-speech in Amazon Polly to give everyone the experience of hearing Sadhguru's voice in their own native language, which is exciting," says Mathivanan.
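Amazon Polly is only being explored in the case study, so the following is merely a minimal sketch of what a text-to-speech call could look like with the boto3 Polly API; the sample text, voice ID, language, and output file are arbitrary example choices rather than anything the foundation has built.

```python
import boto3

# Minimal, illustrative text-to-speech request against Amazon Polly.
polly = boto3.client("polly", region_name="ap-south-1")

response = polly.synthesize_speech(
    Text="Welcome to the Isha Foundation online program.",  # example text only
    OutputFormat="mp3",
    VoiceId="Aditi",        # an Indian English / Hindi voice; any supported voice works
    LanguageCode="en-IN",
)

# The synthesized audio is returned as a streaming body that can be saved to a file.
with open("welcome.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```

Generating narration in multiple languages would simply mean repeating the call with different voice and language settings.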
The foundation's IT team can also better serve internal customers, including over 100 departments that often request software deployments. Resources can be deployed in minutes, whereas previously it took weeks to procure and install hardware or software for a department.

Isha Foundation, based in India, is a nonprofit organization dedicated to raising human consciousness. Guided by Sadhguru, the foundation offers a variety of programs that provide methods for anyone to attain physical, mental, and spiritual wellbeing. Its offerings allow participants to deepen their experience of life and reach their ultimate potential.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
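The case study does not describe how the foundation refreshes content at the edge. Purely as an illustration of working with CloudFront programmatically, the sketch below invalidates cached paths after a content update; the distribution ID and paths are placeholders.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder distribution ID; in practice this identifies the site's CloudFront distribution.
DISTRIBUTION_ID = "E1EXAMPLE12345"

# Invalidate updated pages so viewers receive fresh content from the origin.
response = cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/videos/latest/*"]},
        # CallerReference must be unique per request; a timestamp is a simple choice.
        "CallerReference": str(int(time.time())),
    },
)
print(response["Invalidation"]["Status"])  # typically "InProgress"
```

In day-to-day operation, most requests would still be served from the CloudFront cache; invalidations are only needed when content must be refreshed before its TTL expires.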
Jefferies Manages Packaged Applications at Scale in the Cloud through Amazon RDS Custom for Oracle _ Jefferies Case Study _ AWS.txt
Jefferies needed a way to automate time-consuming database administration tasks. "We operate in six regions, and we have over 50 accounts, so that's 300 different custom engine versions that we would have to manage," says Manish Mohite, senior vice president and global head of cloud engineering at Jefferies. The company saw an opportunity to automate and enhance the resilience and efficiency of its data infrastructure by developing packaged applications in the cloud, and turned to Amazon Web Services (AWS).

Jefferies began using Amazon Relational Database Service Custom (Amazon RDS Custom) for Oracle, a managed database service for applications that require privileged access to the underlying operating system and database environments. The company selected this service to achieve cloud scale for legacy, custom, and packaged applications that require licensing and security tooling. This effort took approximately 6–8 months. Other attractive features of Amazon RDS Custom for Oracle included Oracle licensing portability with bring your own license (BYOL), AWS-managed provisioning with shared responsibilities, automated backups and recoveries, and cloud scalability to quickly and simply adjust to business needs. As a financial services firm subject to many regulations, the integration with standard security and compliance was another important feature because it made managing the process easier. Being able to use Jefferies' existing tooling—such as the IBM Guardium agent, Oracle Unified Directory to centrally manage Oracle identities, and the Atlassian DevOps toolset—with Amazon RDS Custom for Oracle was critical. "Amazon RDS Custom for Oracle really did provide us significant value in proposition, especially with these packaged applications that we could build in the cloud," says Mohite.

The company also uses a host of AWS services within a sophisticated solution architecture that it built for the use case. Among them are Amazon Route 53, a highly available domain name system web service that connects user requests to internal applications, and Amazon Simple Storage Service (Amazon S3), a service for building fast, powerful cloud-native apps that scale automatically. These offerings store and route traffic to and from Jefferies' applications.

About Jefferies

Jefferies is a leading global, full-service investment banking and capital markets firm that provides advisory, sales and trading, research, and wealth and asset-management services. With more than 40 offices around the world, Jefferies offers insights and expertise to investors, companies, and governments.

Jefferies, a global investment banking firm, is modernizing its technology to advance innovation at the firm. By shifting from an application development to an application assembly and integration model, Jefferies' vision is to build highly agile teams that can deliver fast, customized insights to achieve better client outcomes.
Chief information officer at Jefferies, Vikram Dewan, says, "Our goal is for cloud-native platforms to serve as the foundation for more than 90 percent of new modernized workloads at the firm."

Jefferies' Solution

Jefferies Manages Packaged Applications at Scale in the Cloud through Amazon RDS Custom for Oracle

Jefferies, a leading global investment banking firm, selected Amazon RDS Custom for Oracle to automate database administration tasks for legacy, custom, and packaged applications. The company consolidated hundreds of custom engine versions into one Amazon S3 bucket. Amazon RDS for Oracle is a fully managed commercial database that makes it easy to set up, operate, and scale Oracle deployments in the cloud.

Benefits of Using Amazon RDS Custom for Oracle

"In the context of Amazon RDS Custom, we can now provision and lifecycle Amazon RDS Oracle databases in hours compared to weeks and months in the past," says Mohite. By managing all engines through the Amazon S3 bucket and automating database setup and scaling using Amazon RDS Custom for Oracle, Jefferies has freed up time and money for more strategic activities. "We really use AWS for all the undifferentiated heavy lifting, and we can focus on the things that matter most to us," says Mohite.

To verify that it was using the appropriate level of automation for its business needs, Jefferies used AWS Systems Manager, a management service that makes it simple to automatically collect software inventory, to implement its capabilities through automation and increase the value of AWS services. "For example, on those Amazon RDS Custom for Oracle instances, we don't want to just tag the database. We want to tag everything with something more relevant, meaningful for us at Jefferies. Amazon RDS Custom for Oracle actually does all that automation for us," says Mohite. Jefferies also uses AWS Service Catalog to abstract those automation documents and Amazon CloudWatch to monitor and audit automations and infrastructure at scale. This means that Jefferies is able to improve its client interactions with greater speed and additional features.
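The article mentions tagging RDS Custom instances through automation but does not show how. As a loose illustration only, and not Jefferies' actual tooling, the following boto3 sketch applies organization-specific tags to an RDS database instance ARN and reads them back; the ARN, tag keys, and values are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder ARN for an RDS Custom for Oracle instance.
DB_INSTANCE_ARN = "arn:aws:rds:us-east-1:123456789012:db:packaged-app-db-1"

# Example tags; real keys and values would come from the firm's tagging standard
# (application owner, environment, cost center, and so on).
rds.add_tags_to_resource(
    ResourceName=DB_INSTANCE_ARN,
    Tags=[
        {"Key": "application", "Value": "packaged-app"},
        {"Key": "environment", "Value": "production"},
        {"Key": "cost-center", "Value": "1234"},
    ],
)

# Tags can then be read back for inventory, chargeback, or compliance checks.
tags = rds.list_tags_for_resource(ResourceName=DB_INSTANCE_ARN)
print(tags["TagList"])
```

In a setup like the one described, a call of this kind would typically run from an automation document rather than by hand, so every newly provisioned instance is tagged consistently.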
Kee Wah Bakery Brings Timeless Baked Goods to Modern Shoppers with Eshop on AWS _ Kee Wah Bakery Case Study _ AWS.txt
To address these scalability and reliability issues, Kee Wah Bakery decided to migrate Eshop's on-premises servers to the cloud. The business engaged Amazon Web Services (AWS) in Hong Kong, who connected it with APN Premier Consulting Partner Nextlink Technologies (Nextlink) to support the migration.

As part of its ongoing development, Kee Wah Bakery had decided to introduce an ecommerce sales channel and launched its localized website, Eshop, in Q4 2022. The site gives customers in Hong Kong an avenue to conveniently order baked goods for home delivery.

Kee Wah Bakery transformed the scalability and performance of its ecommerce site Kee Wah Eshop by migrating to AWS, supporting spikes in traffic and delivering a consistent online experience.

Solution | Improving Eshop's Performance, Security, and Availability with AWS

To better handle traffic surges, Nextlink implemented Elastic Load Balancing (ELB), which dynamically scales Eshop's load balancer in response to fluctuations in volume, preventing any individual server instance from becoming overloaded. Furthermore, the partner replaced the previous content delivery network with Amazon CloudFront, resulting in a 12 percent decrease in site latency.

To safeguard against prevalent web exploits and bots that threaten security and performance, the Nextlink team deployed AWS WAF, a reliable web application firewall. Lau says, "We received exceptional support from Nextlink. The team demonstrated its proficiency and strong partnership with AWS in Hong Kong. Thanks to Nextlink's expertise and collaboration, we were able to complete the migration in under two months, which was a great accomplishment given our initial expectations for a much longer timeframe."

Since transitioning Eshop to AWS in October 2022, Kee Wah Bakery has experienced zero website crashes, even during peak traffic periods such as the Chinese New Year celebrations in January 2023, when daily site visits reached 20,000 and concurrent connections averaged around 300. Says Lau, "I've received positive feedback across the business and from customers on the improved performance of our Eshop. Our customers in Hong Kong hold high expectations, and our standards are equally demanding, so it's satisfying to meet and exceed those expectations."

For Kee Wah Bakery, enhancing personalization is part of a broader set of goals that include increasing the business's analytical capabilities. In the next six months, the company plans to migrate to a cloud-based SAP S/4HANA solution running on AWS. This move will provide the bakery with real-time operational reporting for the first time. It will also maximize production efficiency and offer more tailored promotional campaigns through online sales and tools that offer deep insight into customer buying patterns.

"Our online presence is entering a new era with AWS. We want to engage with customers more actively via the web and communicate with them in more personalized ways across digital channels."
Terry Lau, Marketing Manager, Kee Wah Bakery

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Soon after launching Eshop, the company experienced a surge in online orders leading up to holidays, especially the mid-autumn festival and Chinese New Year. Site traffic could surge by five times, with the number of daily site visits rising to around 20,000. These traffic surges often crashed Eshop, as its underlying IT infrastructure, partially on premises and partially in the cloud, was unable to scale sufficiently.

The downtimes were concerning for the business, not only because of lost revenue but also because of the potential reputational damage. Terry Lau, marketing manager at Kee Wah Bakery, explains, "We take pride in the quality of our products and strive to provide the best possible service to our customers. We couldn't let any issues with Eshop's performance undermine our hard work."

Kee Wah Bakery, a Hong Kong institution with almost 85 years of experience, migrated its ecommerce website—Kee Wah Eshop—to AWS to offer its customers consistent high-quality service both online and in-store. Nextlink worked with Kee Wah Bakery to develop a comprehensive plan to move the entire Eshop platform to AWS, which involved migrating both the servers and the Magento ecommerce software. After conducting a thorough assessment of Eshop, including an analysis of traffic volumes, Nextlink proceeded to build the core AWS infrastructure for Eshop. This included replacing on-premises servers with Amazon Elastic Compute Cloud (Amazon EC2) instances and adopting Amazon Route 53 to manage website traffic. Kee Wah leverages Amazon EC2 virtual server instances with Amazon Route 53 to manage traffic on Eshop, and Elastic Load Balancing (ELB) and Amazon CloudFront to support order spikes. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs). Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

Next, Kee Wah Bakery implemented Amazon Relational Database Service (Amazon RDS) and Amazon Elastic File System (Amazon EFS) to accelerate data read and write operations, boosting website performance by 900 percent. The bakery also utilized Amazon ElastiCache to swiftly retrieve frequently requested information and images. By transitioning its site to AWS, Kee Wah Bakery has enhanced its customers' online shopping experience while driving personalization.

Outcome | Driving Personalization and Global Expansion

According to Lau, Eshop is now better equipped to support the company's strategy of driving sales globally. Thanks to the scalable and reliable performance of AWS, Lau can confidently introduce new localized websites, such as the recently launched US website, even as the company expands its bricks-and-mortar stores. Lau adds, "We're all aware of the immense potential for global sales through ecommerce. Providing a consistent, top-notch online experience on AWS to customers of Kee Wah Bakery, regardless of location, will be key to our success."
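The Amazon ElastiCache usage mentioned above is described only at a high level. Purely as an illustration, here is a minimal cache-aside sketch in Python against a Redis-compatible ElastiCache endpoint; the endpoint, key names, and the fetch_product_from_db helper are hypothetical stand-ins rather than Kee Wah Bakery's actual code.

```python
import json
import redis

# Hypothetical ElastiCache (Redis) endpoint for the ecommerce site.
cache = redis.Redis(host="eshop-cache.example.internal", port=6379, decode_responses=True)

def fetch_product_from_db(product_id):
    # Placeholder for a query against the product database (e.g., Amazon RDS).
    return {"id": product_id, "name": "Mooncake Gift Box", "price_hkd": 428}

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                   # cache hit: skip the database
    product = fetch_product_from_db(product_id)     # cache miss: load from the database
    cache.setex(key, 300, json.dumps(product))      # keep it warm for 5 minutes
    return product

print(get_product("mooncake-classic-4pc"))
```

During holiday surges, a pattern like this keeps repeated reads of popular product pages off the database, which is the kind of load the case study says caching was introduced to absorb.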
About Kee Wah Bakery

Kee Wah Bakery is a household name and one of the biggest bakery brands in Hong Kong. The company produces a range of specialty baked goods including wedding cakes, mooncakes, and traditional Chinese pastries. Kee Wah Bakery is one of Hong Kong's oldest bakery businesses, well known for its Cantonese mooncakes, popular during the mid-autumn festival in September. The bakery, which first opened in 1938, has stores across Hong Kong and mainland China, Taiwan, Japan, and two locations in the United States.

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

Key results: 2 months to launch Eshop on AWS; 5x higher website traffic supported; 900% better site performance; 12% less network latency.

Following the migration of Eshop to AWS, Kee Wah Bakery is exploring opportunities to enhance its online sales channels, including integrating Eshop with popular messaging platforms like WhatsApp. Customers will be able to interact more easily with different stores and streamline processes like in-store order pickups. "Our online presence is entering a new era with AWS. We want to engage with customers more actively via the web and communicate with them in more personalized ways across digital channels," explains Lau.
Kioxia uses AWS for better HPC performance and cost savings in semiconductor memory development and manufacturing _ Case Study _ AWS.txt
Kioxia Uses AWS for Better HPC Performance and Cost Savings in Semiconductor Memory Development and Manufacturing

Kioxia, a world-leading semiconductor manufacturer, uses high performance computing (HPC) in its product development and manufacturing processes. When facing issues with resource flexibility during HPC usage peaks, the company turned to Amazon Web Services (AWS). With AWS Direct Connect, Kioxia securely connected its on-premises environment to AWS and distributed jobs according to needs and loads, cutting costs by around seven percent.

"The semiconductor market has experienced rapid technological innovations and massive waves of change," Kawabata continues. "We created the world's first NAND flash memory in 1987 under Toshiba, and in 2007, we were the first to announce 3D multilayer technology. Wherever we see a benefit, we strive to create through a bottom-up approach using systems that leverage advanced IT."

To manufacture semiconductor memory, photomask patterns are transferred to semiconductor wafers by shining ultraviolet light at ultra-high speeds, akin to developing photographs. Photomasks correct the original circuit design and enable accurate manufacturing, producing circuits of several nanometers (nm) that are thinner than the wavelength of UV light (approx. 300 nm). This design requires iterative simulations with the computational power of HPC.

Outcome | Rebalancing 1% of design jobs to unlock 7% cost savings

According to Takahashi, adopting AWS has allowed the company to respond to unexpected problems and factory requests calmly. In the past, a sudden request from a factory manager would take time and inter-departmental coordination to resolve. But with AWS resources, Kioxia can now make decisions and establish countermeasures on the spot. "When we scrutinized HPC jobs, we saw that just one percent of photomask design jobs determined overall specs," explains Takahashi. "We can optimize workload and cost by assigning this one percent to AWS. Rebalancing this portion has reduced costs by seven percent."

AWS Services Used
AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.

Kioxia turned to cloud services to solve the challenge.
When the company spun off from Toshiba, it began researching many cloud solutions, focusing on AWS, which was already a popular key service. AWS's components and APIs are similar to Solaris and UNIX, which have long been popular as semiconductor memory development environments. Takahashi decided the company's experienced engineers would find AWS familiar, allowing them to leverage their knowledge.

Security is another key factor in Kioxia's adoption of AWS. Kioxia takes security measures very seriously, as it handles highly sensitive design data. The company reviewed a 256-item checklist based on the AWS Well-Architected Framework and identified the essential points. "This confirmed that our established security measures and rules would work properly on the cloud and summarized the steps we needed to take to use the cloud with peace of mind," says Kawabata.

Solution | Mitigating fluctuating needs for HPC with the cloud

Kioxia connects a portion of its large-scale HPC environment to AWS via AWS Direct Connect to offload processing to AWS. Once a project plan is complete, jobs are sent to an on-premises job scheduler and allocated to on-premises HPC or AWS, according to size and need. Resources on AWS are set to launch automatically when jobs run, turning off when jobs are completed to minimize costs. Its own IT workers are also enjoying new value not possible in on-premises environments: using AWS makes it easy for them to experiment, letting them proactively design architecture and test environments.

Kioxia has a colossal on-premises HPC environment to meet a variety of computational needs. Through its many years of experience, the company has accumulated the knowledge to bolster resources to perfectly match market and technological needs. Despite this expertise, Kioxia still had difficulty preempting short-term peaks and problems by resourcing for them. "We planned well, prepared resources, and distributed them appropriately, but external factors still caused a few events per year where we had to pause projects or find another solution," says Masanori Takahashi, Chief Specialist of Memory Lithography, Kioxia. "We use HPC to replace human capabilities, reducing their workload and optimizing costs. However, when resources are in short supply, our workers must respond using their own knowledge, which increases labor costs." Basic engineering requires massive HPC computing power to correctly simulate running circuit designs or to replicate the manufacturing process to predict problems. HPC power is especially important for designing manufacturing components called photomasks.

About Kioxia Corporation

IT is as essential for designing and manufacturing semiconductor memory as it is in other manufacturing industries. Kioxia creates flash memory and SSD products to "uplift the world with memory." The company's plant in Yokkaichi city, Japan, is one of the largest and most productive in the industry. Famous as a smart factory that leverages AI and other advanced technology, the Yokkaichi plant has proudly delivered unrivaled productivity and efficiency for 30 years. "The semiconductor memory industry is fiercely competitive, so we challenge ourselves and maintain the momentum of a young venture company," says Toshiaki Kawabata, Chief Information and Security Officer and executive officer at Kioxia Holdings. This approach includes the cloud, as the company embraces new technology with a forward-thinking attitude. "We also rigorously scrutinize security," says Kawabata, emphasizing the importance of security measures.
Opportunity | Providing cutting-edge memory technologies and products

Computing power is especially essential in engineering, such as memory design and simulations. Due to the intricate design of semiconductor memory and the complex manufacturing process, Kioxia uses IT to save on labor and improve yields at every stage of the process.

For high-performance storage, Kioxia uses Amazon FSx for Lustre, a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. With the company's existing applications requiring high disk I/O speeds, Amazon FSx for Lustre's blazing throughput was an obvious choice.

Kioxia spun off from Toshiba in 2017, taking charge of Toshiba's memory business. The semiconductor manufacturer chiefly produces NAND flash memory, pursuing the potential of memory, creating new value, and changing the world with all-new experiences as part of the Kioxia group.

Kioxia wants to learn more about AWS to improve its knowledge and skills for high-level use. "Once you have addressed the security considerations, the cloud is an ideal environment," says Toshiaki Kawabata, Executive Officer and Chief Information and Security Officer, Kioxia Holdings Corporation. "AWS provided accurate, fast, and friendly support for our intricate questions and requests. Cloud is in high demand especially for running HPC, and it will fully unlock the power of HPC. We can get started quickly on the cloud, it expands your options, and it's also useful for business continuity planning."

Key results: 7% cost savings produced by rebalancing 1% of design jobs; numerous options and capabilities to handle unexpected factory requests; proactive cloud use by general IT workers as well as developers.

To learn more, visit https://aws.amazon.com/hpc/.
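The article says AWS resources launch automatically when jobs run and shut down when they finish, without giving implementation details. The sketch below is only an illustration of that burst pattern using the boto3 EC2 API; the AMI ID, instance type, tags, and job identifier are placeholders, and in a setup like Kioxia's this logic would typically sit behind the on-premises job scheduler described above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Launch compute for a queued simulation job (placeholder AMI and instance type).
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder HPC worker image
    InstanceType="c5.24xlarge",        # example compute-optimized size
    MinCount=1,
    MaxCount=4,                        # scale out to match the size of the job
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "job-id", "Value": "photomask-sim-0042"}],
    }],
)
instance_ids = [i["InstanceId"] for i in run["Instances"]]

# ... the scheduler dispatches the simulation to these instances and waits for completion ...

# Terminate the instances as soon as the job completes so cost stops accruing.
ec2.terminate_instances(InstanceIds=instance_ids)
```

Because the instances exist only for the duration of a job, the cloud portion of the HPC environment costs nothing while idle, which is the behavior the case study attributes its savings to.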
Kirana Megatara Reduces Procurement Costs by 10 Percent for Raw Rubber with Speedy Reporting on AWS _ Case Study _ AWS.txt
Kirana Megatara Reduces Procurement Costs by 10 Percent for Raw Rubber with Speedy Reporting on AWS

Kirana Megatara uses Amazon QuickSight to extract insights from data in SAP on AWS, providing buyers with daily production targets to improve the alignment of supply and demand and optimize expenditure.

About Kirana Megatara

Kirana Megatara is a world-class producer of rubber and a processor of crumb rubber, made from worn-out tires. It has 15 subsidiaries, including one of Indonesia's oldest rubber processing companies, PT Djambi Waras, which opened in 1964. Kirana Megatara is also a member of the Global Platform for Sustainable Natural Rubber, which promotes sustainable practices.

Solution | Providing Clear Insight Faster with Amazon QuickSight

Kirana Megatara chose Amazon QuickSight as a business intelligence tool to deliver a range of reports from SAP on AWS in hours and to develop custom applications supporting SAP. Data from these applications is also extracted into Amazon QuickSight for further analysis. Adinugraha recalls, "Amazon QuickSight was more cost effective than the other solution we were considering. We could start small and expand usage as the demand for dashboards increased."

To do this, Kirana Megatara deployed Amazon QuickSight, which gives an organized view of its business-critical data in SAP on AWS, running on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. As a result, the organization is optimizing procurement of raw rubber and building stronger relationships with its suppliers.

With Amazon QuickSight, Kirana Megatara can present the latest data in SAP in close to real time using interactive dashboards. As a result, its Sourcing Department can view the latest production targets per plant and know how much raw rubber to buy at the start of each day. Moreover, the department can identify the suppliers close to each plant that are consistently producing the amount of raw rubber it needs at the right price. Adds Adinugraha, "With Amazon QuickSight, we don't lose time with any programming. We just focus on the formulas and the display, and then it's drag and drop. That's never happened before."

"Maintaining a good relationship with suppliers is as important as getting the best prices. With Amazon QuickSight, we have a constantly refreshed picture of the suppliers we should be working with to maximize production efficiency."
Hendrik Iriawan Saputra, General Manager of IT, Kirana Megatara

Opportunity | Seeking Faster Insights for Improved Supplier Management and Sourcing

Kirana Megatara buys more than one thousand metric tons of raw rubber from Indonesian suppliers every day.
It ships the material to its 16 processing plants across the country to be processed into Standard Indonesian Rubber (SIR) for companies like Bridgestone, Goodyear, and Pirelli. In 2021, the plants produced 508,000 metric tons of SIR, worth a total of $857 million.

For consistent data on each plant's raw material usage and production figures, Kirana Megatara deployed SAP on premises in 2012. But after reliability and scalability issues, it migrated the system to Amazon Web Services (AWS) in 2021. Although Kirana Megatara gained better performance with SAP on AWS, the company wanted to improve report extraction. Its analytics team had to present data in Microsoft Excel spreadsheets, which was complex and time consuming. Narendra Adinugraha, head of analytics at Kirana Megatara, says, "We needed more than a day to import figures, process information, and create reports."

Kirana Megatara produces rubber for leading tire manufacturers globally. The company wanted to increase the speed of reporting business data to improve decision making and operations. Working with AWS Partner Technova, Kirana Megatara integrated Amazon QuickSight with SAP modules running on Amazon Elastic Compute Cloud (Amazon EC2). "We value our relationship with Technova; its AWS engineers are always available on short notice," says Adinugraha. Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization.

Furthermore, Adinugraha can easily meet the demand from the business for new reports. He says, "We are producing between 8 to 10 reports a month in Amazon QuickSight, so we're well on top of the requests coming in." Just one of these reports could take weeks using Excel, but with the speed of Amazon QuickSight, the analytics team can deliver one report every 2.5 days on average, allowing departments better control over their operations.

Outcome | Reducing Procurement Costs by 10 Percent for Raw Rubber

Using Amazon QuickSight, Kirana Megatara can ensure procurement is more precisely aligned with business needs, lowering the risk of oversupply. As a result, the Sourcing Department, which buys hundreds of thousands of metric tons of raw rubber each year, estimates it has reduced procurement costs by 10 percent using the reports. Plus, the department has the data to develop an effective loyalty program with suppliers across Indonesia. Hendrik Iriawan Saputra says, "We can build special relationships with our best suppliers and develop incentives, such as fertilizer funding, equipment for rubber cultivation, and training, so they continue to supply us with the raw rubber we need to drive business growth." In addition, Kirana Megatara could securely provide dashboard views of its business-critical data in SAP to employees.
Amazon QuickSight included end-to-end data encryption with row and column level security control. Hendrik Iriawan Saputra, general manager of IT at Kirana Megatara, says, “We could ensure that only authorized people had access to the reports.” Amazon QuickSight Looking ahead, Kirana Megatara is planning to use machine learning (ML) to extract more insight from supplier interactions and to predict changes in raw rubber prices and volumes with accuracy. “Our immediate step is to develop our competencies around ML and then see where it can add value to our analytics,” says Adinugraha. Português savings in procurement costs
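Near-real-time dashboards like the ones described above depend on the underlying QuickSight dataset being refreshed from the SAP source on a regular cadence. The sketch below shows how such a refresh could be triggered with the AWS SDK for Python (boto3); the account ID, dataset ID, and AWS Region are hypothetical placeholders, not details of Kirana Megatara's deployment.

```python
import time
import boto3

# Hypothetical identifiers -- substitute your own account and dataset.
ACCOUNT_ID = "111122223333"
DATASET_ID = "sap-production-targets"

quicksight = boto3.client("quicksight", region_name="ap-southeast-1")

def refresh_dataset():
    """Kick off a SPICE ingestion so dashboards show the latest SAP data."""
    ingestion_id = f"refresh-{int(time.time())}"
    quicksight.create_ingestion(
        AwsAccountId=ACCOUNT_ID,
        DataSetId=DATASET_ID,
        IngestionId=ingestion_id,
    )
    # Poll until the ingestion finishes (simplified; no retry or error handling).
    while True:
        status = quicksight.describe_ingestion(
            AwsAccountId=ACCOUNT_ID,
            DataSetId=DATASET_ID,
            IngestionId=ingestion_id,
        )["Ingestion"]["IngestionStatus"]
        if status in ("COMPLETED", "FAILED", "CANCELLED"):
            return status
        time.sleep(30)

if __name__ == "__main__":
    print(refresh_dataset())
```

A scheduler (for example, an Amazon EventBridge rule) could invoke a function like this every few minutes to keep the Sourcing Department's view close to real time.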
KTO Case Study.txt
KTO.com Reduces Costs, Improves Scaling for Latin America Betting Platform Using AWS

KTO.com provides an online sports betting and casino games platform for the Latin America market. The platform was created in 2018 by KTO Group, a software development company. KTO.com grew rapidly in 2022, with its active customers increasing by over 1,000 percent year on year. Having built its platform on Amazon Web Services (AWS) from the beginning, KTO.com turned to the cloud provider to help it deal with growth. With the soccer World Cup approaching in 2022, KTO.com needed a cost-effective way to support demand spikes for betting on this and other major sporting events. Using AWS, KTO.com can easily schedule compute resources to scale up and down to meet traffic spikes, providing customers with a more responsive experience. The project has resulted in many performance improvements, including reduced latency and winning bets being settled in near real time, a process that previously could take up to an hour.

"Our platform could take up to an hour to pay the winners and update all the accounts; now this process happens in seconds." — Jonathan Bonett, Chief Technical Officer, KTO.com

Opportunity: Migrating for Improved Performance and Personalization
The company chose AWS because of its diverse range of services. When its customer growth took off, KTO.com needed to expand and optimize its infrastructure to maintain a good customer experience. It also needed to prepare for the soccer World Cup in late 2022, when it expected a massive influx of traffic from new customers and payment transactions. "We have thousands of new registrations every day, and the number of bets placed on the platform has increased accordingly," says Jonathan Bonett, chief technical officer (CTO) at KTO.com.

Previously, scaling compute services was a manual and time-consuming process that required technical expertise. Bonett wanted to automate scaling based on a predefined schedule to simplify resource management as the company grew. In addition, he wanted to integrate a new customer relationship management (CRM) service to enable personalized website experiences and customized marketing campaigns. KTO.com worked with AWS Partner 56Bit to plan and implement the changes to its platforms.

Solution: Pre-Scheduling Infrastructure Scaling Using AWS Lambda and Amazon EC2
Non-technical employees at KTO.com can now scale compute resources using a simple scheduler, which lets them set the day and time of coming events that will increase betting volumes. Previously, this was a manual process that required involvement from the IT team. To scale in this way, the company uses Terraform and AWS Lambda, a serverless, event-driven compute service that lets it run code for virtually any type of application. It also uses Amazon EC2 Auto Scaling, which allows it to add or remove compute capacity to meet changing demands. "Instead of taking up to 2 hours for scaling, the new scheduling system can be set in a few minutes, even by someone without technical knowledge," says Bonett.

Because the platform always has the resources it needs, customers get a responsive experience and can seamlessly place bets based on the latest odds. Customers can also receive their payouts in seconds, a process that used to take much longer. This is because the KTO.com platform can now scale to manage the spikes in traffic when customers check their winnings after an event is over. "Previously, our platform could take 30 minutes or even an hour to pay out to the winners and update all the accounts; now this process happens in seconds," says Bonett.

Using AWS, KTO.com has also improved the focus and success of its marketing campaigns. KTO.com deployed a new CRM solution using Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it easy to ingest and process streaming data in real time. This solution allows the company to personalize campaigns using behavioral triggers, where specific customer actions on the platform determine which promotions and offers they receive. "Previously, our campaigns were based on assumptions about customer behavior, but now we run our campaigns based on triggered events that can happen in near real time," says Bonett. "This means that promotions and special offers can be tailored to each customer, which increases both engagement and loyalty to the brand."

To further improve customer experience, KTO.com uses Amazon CloudFront, a content delivery network (CDN), to securely deliver content with low latency and high transfer speeds. This minimizes latency issues that customers can experience when placing bets or playing online games. Security for KTO.com is also stronger now because it follows AWS best practices. "We've doubled our defenses with the new platform, because we now have redundancy safety nets in place," says Bonett.

Outcome: Digital Transformation Prepares KTO.com for Future Expansion
KTO.com is looking to expand further in the Latin America region and offer services in locations such as Canada, Chile, and Peru. "Using AWS, we've been able to cope with exponential growth while maintaining a good customer experience and optimizing security," says Bonett. "Now we're in a position to continue our expansion throughout the Latin America region, and into new parts of the world."

About KTO.com
KTO.com is an online gambling platform built by KTO Group. The company targets the Latin America market and focuses on sports betting.

AWS Services Used: AWS Lambda, Amazon EC2 Auto Scaling, Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon CloudFront
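To illustrate the pre-scheduled scaling approach described above, the sketch below shows an AWS Lambda handler that registers scheduled actions on an Amazon EC2 Auto Scaling group ahead of a major sporting event. It is a minimal sketch under stated assumptions: the Auto Scaling group name, event payload fields, and timing offsets are illustrative and are not taken from KTO.com's actual setup.

```python
import datetime
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group name.
ASG_NAME = "betting-platform-web"

def lambda_handler(event, context):
    """Schedule a temporary capacity boost around a big sporting event.

    `event` is assumed to carry the kick-off time (ISO 8601) and the peak
    capacity to scale to, e.g. supplied by a simple internal scheduler UI.
    """
    kickoff = datetime.datetime.fromisoformat(event["kickoff_time"])
    peak_capacity = int(event["peak_capacity"])

    # Scale up one hour before the event starts.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName=ASG_NAME,
        ScheduledActionName=f"scale-up-{kickoff:%Y%m%d%H%M}",
        StartTime=kickoff - datetime.timedelta(hours=1),
        DesiredCapacity=peak_capacity,
    )
    # Scale back down a few hours after kick-off, once payouts have settled.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName=ASG_NAME,
        ScheduledActionName=f"scale-down-{kickoff:%Y%m%d%H%M}",
        StartTime=kickoff + datetime.timedelta(hours=6),
        DesiredCapacity=int(event.get("baseline_capacity", 2)),
    )
    return {"status": "scheduled", "group": ASG_NAME}
```

Because the heavy lifting is two API calls, a non-technical operator only needs to supply an event time and a capacity figure through whatever front end invokes the function.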
LambdaTest Improves Software Test Insights and Cuts Dashboard Response Time by 33 Using Amazon Redshift _ Case Study _ AWS.txt
LambdaTest Improves Software Test Insights and Cuts Dashboard Response Time by 33% Using Amazon Redshift

LambdaTest is a cloud-based continuous quality testing platform that helps over 2 million developers and testers across 130+ countries ship code faster. To give customers quicker, better insights into software test results, the company worked with AWS Data Lab to build a new analytical dashboard solution on Amazon Redshift. The new dashboard, which LambdaTest designed and implemented in just 4 weeks, reduces dashboard response times by 33 percent and gives customers faster insights into test orchestration and execution results.

33% reduction in dashboard response time
4 weeks from POC to production
50 milliseconds response times for faster insights
1 million customers served

"Using the federated query capability in Amazon Redshift, our customers have less than 50 millisecond response times for their test analysis dashboards and an average data refresh cycle of less than five minutes. This means they can get faster insights into test orchestration and execution, and they can easily see if tests fail." — SS Rahman, Head of Technical Integration, LambdaTest

Opportunity | Seeking a Better View of Software Test Results
More than two million software developers and testers across the globe rely on LambdaTest, a continuous quality testing platform, to ensure quality code and ship their software to customers faster. The platform, which runs on Amazon Web Services (AWS), provides both manual and automated testing of web and mobile apps across more than 3,000 browsers, mobile devices, and operating systems. LambdaTest is used in over 130 countries and has hosted more than 200 million tests to date.

For the past several years, LambdaTest's enterprise clients have been seeking analytical dashboards where they can quickly view insights and reports on test orchestration, execution, and results. "Our customers didn't have a snapshot view of what tests had been run or what had failed," says SS Rahman, head of technical integration at LambdaTest. To address this, the company attempted to build a new analytics solution with MySQL as the data source. However, database queries often took up to 15 seconds to complete, and the solution couldn't meet the company's goal of providing response times under 10 seconds. The result was a poor customer experience, and one that could not scale easily to support the millions of new records coming in every year.

Solution | Working with AWS Data Lab to Build a New Analytical Dashboard
To design a new analytical dashboard solution, LambdaTest turned to the AWS Data Lab program, which offers accelerated, joint engineering engagements between customers and AWS technical resources to help customers speed up data and analytics modernization initiatives. Specifically, LambdaTest participated in the AWS Build Lab, an intensive multi-day engagement in which AWS Data Lab Solutions Architects and other AWS experts provide architectural guidance, share best practices, and remove technical roadblocks. "AWS has always been very available and helpful. When we discussed our latency and performance issues during the AWS Build Lab, AWS proposed the perfect solution," Rahman says.

The solution is based on Amazon Redshift, a cloud data warehouse that uses SQL to analyze both structured and semi-structured data. The AWS team helped LambdaTest create a proof of concept (POC) for a new customer-facing dashboard that queries data from Amazon Relational Database Service (Amazon RDS) and ingests it in Amazon Redshift. The test metadata includes pass, failure, and completion information for each test. The dashboards also feature a variety of trend graphs and charts to visualize the distribution of test results among browsers, operating systems, and apps. Working with the AWS Data Lab team, LambdaTest completed the dashboard POC in four weeks. "If we managed this project on our own instead of relying heavily on the expertise of AWS, we would have taken at least eight weeks," says Rahman.

Outcome | Reducing Response Time and Improving Test Insights
By implementing its new analytics platform on Amazon Redshift, LambdaTest has reduced the average response time by 33 percent, updating analytical dashboards in less than 10 seconds. "Using the federated query capability in Amazon Redshift, our customers have less than 50 millisecond response times for their test analysis dashboards and an average data refresh cycle of less than five minutes," says Rahman. "This means they can get faster insights into test orchestration and execution and can easily see if tests fail. Overall, Amazon Redshift helps us give our customers better, faster insights into software test performance."

The LambdaTest analytical platform on AWS can also scale seamlessly to support the ingestion of millions of data records annually. "Amazon Redshift is highly scalable, especially when we're doing federated queries and ingesting data from Amazon RDS instances," says Srivishnu Ayyagari, senior product manager at LambdaTest. "Even when more data comes onto our analytical platform, it continues to perform at a high level."

LambdaTest is currently implementing Amazon OpenSearch Service to manage data log analytics in the cloud. "AWS releases new services frequently, and we always evaluate those services for our business," Rahman says. "We're a growing company focused on innovating in the testing space, and we will continue to work together with AWS as we expand."

About LambdaTest
LambdaTest, based in San Francisco, California, is a continuous quality testing cloud platform that helps more than 2 million developers and testers across 130+ countries ship code faster. The company's browser and app testing cloud runs manual and automated tests of web and mobile apps across 3,000+ environments, including browsers, real devices, and multiple operating systems.

AWS Services Used: Amazon Redshift, AWS Data Lab, Amazon Relational Database Service (Amazon RDS)

To learn more, visit aws.amazon.com/redshift.
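The federated query capability Rahman describes lets Redshift query live data in Amazon RDS through an external schema, so a dashboard backend can combine warehoused history with fresh operational rows in one statement. Below is a minimal sketch of such a query issued through the Amazon Redshift Data API with boto3; the cluster identifier, database user, external schema, and table names are assumptions for illustration, not LambdaTest's real objects.

```python
import time
import boto3

# Hypothetical identifiers for illustration only.
CLUSTER_ID = "analytics-cluster"
DATABASE = "dev"
DB_USER = "dashboard_reader"

client = boto3.client("redshift-data")

SQL = """
SELECT browser, status, COUNT(*) AS runs
FROM rds_tests.test_runs          -- external schema federated to Amazon RDS
WHERE started_at > DATEADD(day, -1, GETDATE())
GROUP BY browser, status;
"""

def run_dashboard_query():
    """Run a federated query and wait for the result set."""
    stmt = client.execute_statement(
        ClusterIdentifier=CLUSTER_ID, Database=DATABASE, DbUser=DB_USER, Sql=SQL
    )
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(0.5)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(desc.get("Error", "query failed"))
    return client.get_statement_result(Id=stmt["Id"])["Records"]
```

A dashboard service could run a query like this on a short cycle and cache the aggregated rows, which is consistent with the sub-five-minute refresh cycle described in the case study.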
Largest metastatic cancer dataset now available at no cost to researchers worldwide _ AWS Public Sector Blog.txt
AWS Public Sector Blog
Largest metastatic cancer dataset now available at no cost to researchers worldwide
by Eric Oermann, Katie Link, Anthony Costa, and Erin Chu | 08 JUN 2023 | in Amazon Machine Learning, Announcements, Education, Nonprofit, Public Sector, Research

Metastasis derives from Greek words meaning removal or migration. Metastatic cancer, where tumor cells spread to sites far from the tissue of origin, accounts for over 90% of fatalities from cancer, the leading cause of death worldwide. Metastatic cancer presents a core challenge for modern oncology due to the high degree of variation that it can display on a genetic, molecular, or gross anatomic level compared to primary cancer, as well as the high degree of variation across patients in their disease presentation, progression, and outcome.

Treating metastatic cancer can involve surgery, radiation therapy, chemotherapy, immunotherapy, and other treatments. Treatment plans require recurring imaging studies and clinical visits so patients can track their cancer and its response to therapy. So how do we best record, model, and study this incredibly heterogeneous and lethal disease in order to develop treatment plans that save lives? The NYUMets team, led by Dr. Eric Oermann at NYU Langone Medical Center, is collaborating with Amazon Web Services (AWS) Open Data, NVIDIA, and the Medical Open Network for Artificial Intelligence (MONAI) to develop an open science approach that supports researchers and helps as many patients as possible.

NYUMets: Brain dataset now available for metastatic cancer research
With support from the AWS Open Data Sponsorship Program, the NYUMets: Brain dataset is now openly available at no cost to researchers around the world. NYUMets: Brain draws from the Center for Advanced Radiosurgery and constitutes a unique, real-world window into the complexities of metastatic cancer. NYUMets: Brain consists of data from 1,005 patients, 8,003 multimodal brain MRI studies, tabular clinical data from routine follow-up, and a complete record of prescribed medications, making it one of the largest datasets of cranial imaging in existence and the largest dataset of metastatic cancer. In addition, more than 2,300 images have been carefully annotated by physicians with segmentations of metastatic tumors, making NYUMets: Brain a valuable source of segmented medical imaging.

Extending the MONAI framework to longitudinal data for cancer dynamics research
In collaboration with NVIDIA, the NYUMets team is building tools to detect, automatically measure, and classify cancer tumors. The team used MONAI, co-founded by NVIDIA and King's College London, to build an artificial intelligence (AI) model for segmentation tasks, as well as a longitudinal tracking tool. Now, NYUMets: Brain can be used as a starting dataset for applying AI to recognize metastatic lesions in imaging studies.

Together with NVIDIA, the NYUMets team is extending the MONAI framework for working with metastatic cancer data. This data is most frequently longitudinal in nature, meaning many imaging studies are performed on the same patient to track their disease. This facilitates the study of metastatic cancer and cancer dynamics over time, more closely capturing how physicians study and patients experience cancer in the real world.

In addition, the NYUMets team built clinical measurements to augment the MONAI framework's existing metrics. These cover practical medical use cases of treatment response and progression. With clinical metrics, the team intends to bridge the gap between AI technologies used in research and the application of these technologies in the clinic. One such clinical measurement tracks the change in tumor volume between imaging studies taken at different points in time. This is a crucial measurement for a patient undergoing cancer treatment, and it could be applied to any disease where lesions change over time.

Get started with no-cost machine learning services to power metastatic cancer research
A preprint of the NYUMets flagship publication is available for review. The NYUMets: Brain dataset is available to access at no cost with support from the AWS Open Data Sponsorship Program. It is also available on the Registry of Open Data on AWS and in the AWS Data Exchange catalog. Users with AWS accounts can apply for access to the full dataset. Once approved, you can access the dataset in the Amazon Simple Storage Service (Amazon S3) bucket using an Amazon S3 Access Point. Documentation for bucket structure and naming conventions can be explored at nyumets.org, including the NYUMets MONAI Extension, and the entire MONAI framework is open to explore.

Read more about open data on AWS:
Creating access control mechanisms for highly distributed datasets
33 new or updated datasets on the Registry of Open Data for Earth Day and more
How researchers can meet new open data policies for federally-funded research with AWS
Accelerating and democratizing research with the AWS Cloud
Introducing 10 minute cloud tutorials for research

TAGS: AWS and open data, AWS Data Exchange, AWS Open Data Sponsorship Program, brain health, cancer, datasets, Machine Learning, NVIDIA, open data, Open Data for Public Good, Registry of Open Data on AWS

Eric Oermann
Eric Karl Oermann is an assistant professor of neurosurgery, radiology, and data science at NYU. He studied mathematics at Georgetown and philosophy with the President's Council on Bioethics, and abandoned graduate studies in group theory to study artificial intelligence (AI) in medicine and neurological surgery while completing a postdoctoral fellowship at Verily Life Sciences and serving as an advisor at Google X. He has published over one hundred manuscripts spanning machine learning, neurosurgery, and philosophy in journals ranging from The American Journal of Bioethics to Nature and is dedicated to studying human and artificial intelligence to improve human health.

Katie Link
Katie Link leads healthcare and life sciences applications of artificial intelligence as a machine learning engineer at Hugging Face. She is also a medical student at the Icahn School of Medicine at Mount Sinai in New York City. Prior to Hugging Face, she worked on artificial intelligence (AI) research applied to biomedicine at NYU Langone Health, Google X, and the Allen Institute for Brain Science, and studied neuroscience and computer science at Johns Hopkins University.

Anthony Costa
Anthony has been leading initiatives in biomedical technologies, data science, and artificial intelligence (AI) for more than a decade. On the faculty of the Mount Sinai Health System, he served as founding director of Sinai BioDesign and chief operating officer for AISINAI, building and leading successful teams focused on improving outcomes in medicine through a needs-based approach to technology development and machine intelligence. At NVIDIA, he serves as the global head of life sciences alliances, with a particular focus on large language models and generative AI. In this role, he heads developer relations and strategic partnerships, in addition to external research collaborations, between NVIDIA and healthcare and life sciences partners.

Erin Chu
Erin Chu is the life sciences lead on the Amazon Web Services (AWS) open data team. Trained to bridge the gap between the clinic and the lab, Erin is a veterinarian and a molecular geneticist, and spent the last four years in the companion animal genomics space. She is dedicated to helping speed time to science through interdisciplinary collaboration, communication, and learning.
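As a practical note on accessing the data, approved users reach the NYUMets: Brain objects through an Amazon S3 Access Point, which can be used in place of a bucket name in standard S3 API calls. The following sketch, using the AWS SDK for Python (boto3), shows how a researcher might list objects once access is granted; the access point ARN shown is a placeholder, not the real NYUMets resource, so substitute the ARN you receive after approval.

```python
import boto3

# Placeholder ARN -- use the S3 Access Point ARN granted after your data
# access request is approved; this is not the real NYUMets access point.
ACCESS_POINT_ARN = "arn:aws:s3:us-east-1:123456789012:accesspoint/nyumets-brain"

s3 = boto3.client("s3")

def list_imaging_objects(prefix="", max_keys=100):
    """List objects under the access point, e.g. to browse MRI studies."""
    response = s3.list_objects_v2(
        Bucket=ACCESS_POINT_ARN,  # access point ARNs can stand in for bucket names
        Prefix=prefix,
        MaxKeys=max_keys,
    )
    return [obj["Key"] for obj in response.get("Contents", [])]

if __name__ == "__main__":
    for key in list_imaging_objects():
        print(key)
```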
Learn how MediSys in healthcare transformed its IT operations using AWS Professional Services _ MediSys Case Study _ AWS.txt
MediSys Replicates Patient Records and Medical Images to AWS
Learn how MediSys, a healthcare network, transformed its IT operations using AWS Professional Services.

MediSys Health Network (MediSys) is transforming its IT operations and innovation capabilities by migrating its alternate production environment to Amazon Web Services (AWS), the first step in its cloud journey. The New York-based healthcare network went live on AWS in October 2022, improving data resiliency while maintaining high security and compliance. With a cloud-native alternate production environment in place, MediSys can focus less on data center management and more on improving the quality of care and outcomes for the communities it serves.

Facilitates high-quality patient care
Maintains security and compliance
Reduces operational costs
Improves data resiliency and business continuity

"On AWS, our system is available when we need it. It is simple for us to switch to a cloud environment and make sure that we can access the electronic health record." — Sami Boshut, Chief Information Officer, MediSys Health Network

Opportunity | Transforming EHR Operations
Since 2010, MediSys has used the Epic electronic health record (EHR) to deliver a high-quality provider and patient experience. "Epic is used everywhere in our organization," says Sami Boshut, chief information officer of MediSys. "It's very important that we support the continuous operation of our EHR production and alternate production environment." To support day-to-day operations, MediSys uses servers housed in an on-premises data center. The healthcare network migrated its EHR alternate production environment, used for disaster recovery, to AWS to improve availability, reduce operational costs, and maintain compliance with improved security.

Solution | Building Resiliency in the Cloud
MediSys engaged AWS Professional Services, a global team of experts that helps organizations realize their desired business outcomes when using AWS, to support the migration project. Working with a team from Epic, the AWS and MediSys teams strategized ways to optimally configure the EHR environment.

First, MediSys replicated millions of patient records and other data to its alternate production environment running on AWS. This migration included its EHR and GE Healthcare Picture Archiving and Communication System images for medical archiving. As part of its disaster recovery systems validation test, MediSys fully exercised its new disaster recovery environment by operating EHR production for 3 weeks on AWS. This test proved to be extremely successful, providing a day-to-day operating environment that outperformed the on-premises data center based on exception percentage and response time. Through this collaboration, the three teams completed the migration while meeting all applicable security and performance standards. MediSys also achieved its return-on-investment goals by migrating to the cloud and reducing traditional data center management costs.

MediSys's alternate production environment runs on AWS services to securely store data from across the organization. It continuously replicates data to the cloud, providing the organization with an up-to-date copy of vital information. AWS has more than 146 HIPAA-eligible services and holds certifications for global compliance standards, like HITRUST CSF. With the support of AWS Professional Services, MediSys has configured AWS services to meet its applicable compliance standards and safeguard protected health information. For example, MediSys deployed all services using the AWS Landing Zone Accelerator for Healthcare to support compliance with healthcare industry standards and policies.

Outcome | Facilitating High-Quality Patient Care
If the organization experiences an issue with its production system, it can quickly step into its highly available alternate production environment. MediSys, which oversees 750 hospital beds, can continue providing patient care without wasting valuable time. "Every second counts in patient care," says Boshut. "On AWS, our system is available when we need it. It is simple for us to switch to a cloud environment and make sure that we can access the EHR."

With this innovative approach to alternate production, MediSys is supporting organizational continuity for high-quality patient care. The migration has empowered the network to transform its infrastructure on AWS and adopt cloud technologies to support its services. With access to cloud-native tools and security and compliance controls on AWS, MediSys will continue to transform its healthcare IT operating environment while driving new experiences for healthcare providers and patients.

About MediSys Health Network
MediSys is a New York not-for-profit corporation. MediSys is a supporting organization to Jamaica Hospital Medical Center (JHMC) and Flushing Hospital Medical Center (FHMC). MediSys is also comprised of a multitude of entities and resources functioning within a complex integrated delivery system.

AWS Services Used: AWS Professional Services
LegalZoom AWS Local Zones Case Study.txt
LegalZoom Accelerates Innovation with Hybrid Cloud Migrations Using AWS Local Zones

Learn how LegalZoom migrated to the cloud quickly, without compromising agility or performance, using AWS Local Zones.

Accelerated migration to the cloud
Implemented complex migration of legacy applications with ease and zero downtime
Cut latency on network calls to 5 milliseconds, accelerating the cloud migration process
Increased reliability with migration and modernization of architecture

"Using AWS Local Zones truly accelerated the migration of a very complex application to AWS by helping us break it down into smaller components." — Jonathan Hutchins, Director, LegalZoom

About Company
An online legal technology company, LegalZoom helps its customers create legal documents without necessarily having to hire a lawyer. Its services cover business formation, estate planning, and taxes. Founded in 2001, LegalZoom offers legal services to US and global customers seeking help with business formation, intellectual property protection, and estate planning, among others. After helping over two million entrepreneurs start their businesses, LegalZoom launched LZ Tax, a LegalZoom company, to help people file and save on their taxes in 2020.

Opportunity | Using AWS Local Zones Helped LegalZoom Achieve Single-Digit Millisecond Latency
LegalZoom, an industry leader in online small business formations and a leading online platform for legal, compliance, and tax solutions, wanted to accelerate its pace of innovation by migrating its location-sensitive applications to the cloud. The challenge was that the company's entirely on-premises data center contained a mix of modern and legacy components. "The legacy components were blocking us from migrating to the cloud," says Jonathan Hutchins, director of engineering for site reliability engineering at LegalZoom. The engineering team had to find a way to migrate to the cloud as fast as possible, without compromising agility or performance. Hutchins and his team turned to Amazon Web Services (AWS) and discovered that a combination of solutions could help them meet this triple mandate. "The entire process of migrating from our data center to AWS has been seamless and painless, resulting in happier customers and happier engineers," says Hutchins.

An on-premises data center solution was enough for the first 19 years of LegalZoom's growth, but Hutchins and his team decided to migrate to the cloud as fast as possible in 2020. The mix of legacy and modern components in LegalZoom's data center posed a challenge for the company's engineering team. The team realized that it might have to re-engineer many legacy components to address potential latency issues before migrating to the cloud, and this re-engineering would have been an enormous task. In its efforts to avoid time-consuming re-engineering, LegalZoom discovered AWS Local Zones, a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers. By using this solution, LegalZoom migrated incrementally and with ease, all without compromising on performance. In fact, the AWS Local Zone in Los Angeles, California, which is located very close to LegalZoom's data center, offered lower latency than the company's on-premises solution.

Solution | Accelerating the Pace of Innovation Using AWS Local Zones
The LegalZoom engineering team moved carefully to migrate without impacting the customer experience. First, it containerized APIs and applications to avoid introducing latency issues when migrating components to the cloud. Then, it took advantage of Amazon Elastic Compute Cloud (Amazon EC2), a service that offers secure and resizable compute capacity for virtually any workload, for anything that wouldn't run as a container. This incremental approach helped the team find the right solution for each challenge that it faced.

The migration to AWS Local Zones has helped the LegalZoom engineering team shift its focus from refactoring to innovation. The company migrated complex applications to AWS Local Zones, starting with smaller components, and kept other components in its data center during migration. The ability to migrate without moving everything at once meant that LegalZoom could continue providing services to customers without interruption during the migration. "Using AWS Local Zones truly accelerated the migration of a very complex application to AWS by helping us break it down into smaller components," Hutchins says. "By migrating to AWS, we've been able to focus our engineering talent on building for our customers."

LegalZoom's use of AWS Direct Connect, a cloud service that delivers the shortest path to AWS resources, has made it simple to migrate data when the team is ready. The company used Direct Connect for the migration setup by connecting its data center to AWS to efficiently migrate the pieces of its complex applications. "Using AWS Direct Connect was absolutely crucial for us to be able to migrate to AWS Local Zones," says Hutchins, "and setting it up on the AWS side was very simple."

By migrating to the cloud, LegalZoom reduced latency on network calls to under 5 ms. In doing so, the company has not only avoided introducing customer experience issues but also enhanced the customer experience overall. "The infrastructure that our components are running on since the migration to AWS is so much faster than what we had in our data center, so we experience less downtime and fewer issues with latency," says Hutchins. "Our system is faster using AWS Local Zones."

Further, migrating to AWS has unlocked LegalZoom's ability to use a wide variety of AWS tools. The engineering team has found Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises, especially helpful. "Since we migrated to a microservices architecture, any issues within our APIs and the components that we're running are self-healing," Hutchins says. "It's seamless. The components will encounter an issue, but the solution will alert us that something happened and spin up a new component." Since LegalZoom's engineers were freed from manually intervening in the data center to resolve API issues, the improvement in team morale has been palpable. "By choosing AWS, we've been able to attract and retain stronger engineering talent," says Hutchins. "Engineers are much more excited to work now that it's so easy to spin up a new service."

Outcome | Completing a Complex Migration Using AWS Local Zones
LegalZoom kicked off the transition to AWS Local Zones in late 2020, and it expects to finish the migration by the end of 2022. The rest of the project will see LegalZoom engineers develop new tools and make use of additional AWS offerings. "We're often looking at building something ourselves to solve a customer problem," says Hutchins. "Then we'll get an announcement telling us that, oh wait, there's a new offering from AWS being launched that does exactly what we need."

While the company's cloud migration is still in progress, LegalZoom customers are already enjoying a streamlined experience. Using the AWS Local Zone in Los Angeles, LegalZoom is enjoying levels of agility and performance higher than it has ever experienced. As Hutchins says, "Using AWS Local Zones, we have not had to make any compromises."

AWS Services Used: AWS Local Zones, Amazon EC2, Amazon EKS, AWS Direct Connect
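Placing workloads in a Local Zone comes down to launching them into a subnet whose Availability Zone is the Local Zone itself (for Los Angeles, for example, us-west-2-lax-1a). The sketch below shows the idea with boto3; the subnet ID, AMI ID, and instance type are hypothetical placeholders rather than details of LegalZoom's environment, and instance type availability varies by Local Zone.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical IDs -- a subnet created in the Los Angeles Local Zone
# (availability zone us-west-2-lax-1a) and an AMI for a containerized API host.
LOCAL_ZONE_SUBNET_ID = "subnet-0abc1234def567890"
AMI_ID = "ami-0123456789abcdef0"

def launch_api_host():
    """Launch an instance close to the on-premises data center so calls
    between migrated and not-yet-migrated components stay low latency."""
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t3.medium",       # supported types differ per Local Zone
        MinCount=1,
        MaxCount=1,
        SubnetId=LOCAL_ZONE_SUBNET_ID,  # placing the instance in the Local Zone
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "api-localzone-host"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]
```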
Lendingkart _ Amazon Web Services.txt
Lendingkart Builds Digital Underwriting Platform with AWS SaaS Factory to Close MSME Credit Gap in India

About Lendingkart
Lendingkart is a fintech providing micro, small, and medium enterprises (MSMEs) in India with unsecured business loans through its machine learning-driven underwriting algorithm. Since launching in 2015, the company has disbursed over $1 billion in loans and serves MSMEs in 4,000 cities.

Benefits
Onboards new Lendingkart 2gthr customers in 2 weeks
Ensures reliability and uptime of SaaS platform
Accelerates time-to-market with expert guidance
Receives early guidance on shaping product development
Adopts structured, customer-first approach
Supports global business expansion

Diversifying Business with Scalable SaaS Product
Micro, small, and medium enterprises (MSMEs) in India, defined as businesses with investment limits of $128,000–$6.4 million, form the backbone of the Indian economy. However, these businesses often struggle to obtain financing to sustain or expand their operations due to a lack of banking history. Recent estimates peg the credit gap MSMEs face at around $240 billion.

Lendingkart is on a mission to harness technology to close the MSME credit gap. Abhishek Singh, chief business officer at Lendingkart, shares, "Our dream is to enable all Indian MSMEs to have the capital they need to fulfill their potential." Utilizing machine learning-driven underwriting, the company provides offers for unsecured business loans to MSMEs in just 72 hours. Lendingkart has disbursed nearly $1 billion in loans since its inception in 2015 and currently serves customers in over 4,000 cities and towns.

Lendingkart has been growing 100 percent year on year and recognized that demand for its services continued to rise. In 2020, the startup hatched an idea to share its digital underwriting expertise with the larger lending market. This would achieve a dual purpose, boosting domestic economic prosperity while monetizing the company's credit scoring models. Harshvardhan Lunia, founder and chief executive officer, insisted that the data and analytics platforms developed in-house should ideally be made available to the entire market, including banks and non-banking financial companies (NBFCs).

Lendingkart built its microservices architecture on Amazon Web Services (AWS) and consulted with its AWS account team on how to develop a software as a service (SaaS) for digital underwriting. "We asked AWS to help us understand how to diversify our original Lendingkart finance business into an independent, scalable SaaS product," says Singh.

Building Reliable, Multi-tenant Architecture on the Cloud
Lendingkart embarked on the AWS SaaS Factory Program to receive guidance on creating a secure platform that other lenders, competitors to Lendingkart's MSME lending business, would trust. "We needed to create a neutral third-party environment where financial services companies trusted that Lendingkart Finance wouldn't be able to access their customer data," Singh explains.

Reliability, uptime, transaction speed, and automation were also key elements of Lendingkart's SaaS vision. "We wanted all these elements taken care of by an experienced provider so we could focus on developing the offering itself. The journey with AWS SaaS Factory has been fantastic from both technical and business development perspectives," says Singh.

Lendingkart successfully launched its Lendingkart 2gthr SaaS platform in November 2020. Lendingkart 2gthr provides enhanced loan management capabilities for financial institutions, a specialized credit underwriting model, and the flexibility to configure specific policy rules to support all stages of loan processing. Lendingkart currently uses Amazon Elastic Compute Cloud (Amazon EC2) for secure, resizable compute capacity. It also uses Amazon Elastic Kubernetes Service (Amazon EKS) to manage containers and Amazon Relational Database Service (Amazon RDS) with Multi-AZ for fault-tolerant scaling and database administration.

Adopting a Structured Approach to Product Development
With the AWS SaaS Factory, Lendingkart adopted a structured approach to product development, considering key points such as pricing models and product journeys. It first clearly defined the problem statement: a lack of efficient digital credit scoring mechanisms for MSMEs among banks and NBFCs. Following that, the development team began building with potential customers in mind, anticipating their needs.

In addition to setting up an isolated SaaS environment on AWS, Lendingkart worked with the AWS SaaS Factory team to develop its go-to-market strategy. The company benefited from learning about other financial companies on AWS that have built similar SaaS products. "AWS played a large part in shaping our thought process and making sure we had the right direction early on for this project," Singh says. "AWS has served as a trusted advisor collaborating with our teams from the start, suggesting ways to optimize resources and minimize technology gaps," he adds.

"The AWS SaaS Factory accelerated the speed at which we were able to execute this project. The structures for building a SaaS were already in place for us to adopt and modify, which helped us to have more efficient, enriching conversations with our prospects," relates Singh.

Onboarding to Lendingkart 2gthr in 2 Weeks
With 20 banks and NBFCs already onboarded, such as Aditya Birla Finance Limited, Canara Bank, and Punjab National Bank, Lendingkart has disbursed over 2,000 crore rupees (US$307 million) in 2021. New customers can onboard quickly to Lendingkart 2gthr, without complex integrations or interfaces. "Within two weeks, a bank or NBFC can start using Lendingkart 2gthr to evaluate MSME candidates for loans. This short launch cycle empowers them to scale quickly with minimal resource investment, without delaying their internal initiatives. We're incredibly excited to offer such a short time-to-value for our customers," Singh says.

Leveraging AWS Global Infrastructure to Expand Internationally
Since its SaaS launch, Lendingkart has been working to refine its offering based on customer feedback. It is also considering listing Lendingkart 2gthr on AWS Marketplace. "By listing our product on AWS Marketplace, customers who are already on AWS can easily find us. We can add a lot more value in terms of security, ease of procurement, and available integrations," Singh explains.

The company also has plans to market its products outside India in the near future by taking advantage of the global network of AWS Regions and Availability Zones. Singh concludes, "As we expand into other countries, we'll be looking to AWS to help us continually improve our services. AWS Global Infrastructure gives us the confidence to go to market quickly for international launches."

AWS Services Used: Amazon EC2, Amazon EKS, Amazon RDS, AWS SaaS Factory Program

To learn more, visit aws.amazon.com/solutions/compute-networking.
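One element of the reliability and uptime goals mentioned above is Amazon RDS with Multi-AZ, where RDS keeps a synchronous standby in a second Availability Zone and fails over automatically. The sketch below shows how such an instance could be provisioned with boto3; the identifiers, instance class, engine choice, and storage size are hypothetical illustrations, not Lendingkart's actual configuration.

```python
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

def create_multi_az_database():
    """Provision a Multi-AZ PostgreSQL instance so a standby in a second
    Availability Zone can take over automatically during failures."""
    return rds.create_db_instance(
        DBInstanceIdentifier="saas-platform-db",   # hypothetical name
        Engine="postgres",
        DBInstanceClass="db.m6g.large",
        AllocatedStorage=100,
        MultiAZ=True,                      # synchronous standby in another AZ
        MasterUsername="admin_user",
        ManageMasterUserPassword=True,     # let RDS store the password in Secrets Manager
        StorageEncrypted=True,
    )

if __name__ == "__main__":
    print(create_multi_az_database()["DBInstance"]["DBInstanceStatus"])
```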
Lenme builds a secure and reliable lending platform with AWS _ Lenme Case Study _ AWS.txt
Lenme Builds a Secure and Reliable Lending Platform Using AWS

Explore how Lenme is revolutionizing lending with identity verification and subscriber authentication capabilities using Amazon Rekognition and AWS identity verification solutions.

80% of the risk associated with lending minimized
40% reduction in the cost of customer acquisition
34% improvement in customer conversion rate
Increased speed to verify customers' identities from days to seconds
Improved average default rate for lenders

"Our platform is now faster and more efficient, helping us verify and authenticate customers in three clicks and a few seconds. This helps us provide our lenders with more data and to reduce lending risks up to 80%." — Mark Maurice, CEO, Lenme

Overview
Lenme, a subscription-based service, has revolutionized the lending industry by leveraging Amazon Web Services (AWS) to automate a platform that is now solving longstanding challenges of acquiring, verifying, and evaluating borrowers. With over 500,000 active users, Lenme connects individual borrowers with financial institutions, businesses, individual lenders, and data providers, passing savings directly to users who have been traditionally underserved.

Opportunity | Reducing Cost and Increasing Lending Process Speed
With 138 million Americans struggling financially and in need of a short-term loan product, Lenme is committed to building an automated platform that scales as needed to serve this purpose. Customer acquisition, verification, and evaluation have always been a priority and a costly essential for lenders; the process is complex, time consuming, and often involves high costs and risks. "The AWS suite of artificial intelligence and machine learning services has enabled us to address the longstanding challenges of acquiring, then accurately verifying and evaluating, customers in the lending industry," says Mark Maurice, chief executive officer (CEO) of Lenme.

Lenme addressed this challenge by using AWS services to verify and qualify borrowers in just three clicks with the artificial intelligence (AI) capabilities in the Amazon Rekognition Identity Verification API, which helps Lenme verify customers with high accuracy within seconds. Amazon Rekognition is a fully managed AI service that offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from images and videos. Lenme's new technology is helping the company provide low-cost products while establishing itself as a trusted leader in the lending industry.

Lenme researched various cloud service providers and chose AWS because trust and reliability are key for mobile financial services. The capability and maturity of AWS services, regional availability, cost savings, and other considerations also played a crucial role in Lenme's decision. In addition, AWS offered a comprehensive suite of services and solutions that fit Lenme's needs.

Solution | Full Process Automation with AI and Machine Learning
Building on Amazon Rekognition, Amazon SageMaker, and Amazon OpenSearch Service, Lenme created a fully automated, now-standard suite of services, such as identity verification and the ability to qualify borrowers accurately while minimizing lending risks and improving default rates. Using Amazon SageMaker, a service to build, train, and deploy machine learning (ML) models for virtually any use case with fully managed infrastructure, tools, and workflows, Lenme created an ML algorithm deployed to the cloud, minimizing up to 80 percent of the risk associated with manual verification processes in lending. It also improved the average default rate for lenders using its data services.

The company also uses Amazon OpenSearch Service, a distributed, community-driven, 100 percent open-source search and analytics suite used for use cases like real-time application monitoring, log analytics, and website search, to analyze borrowers' banking data more accurately than ever before and to run queries on unstructured data in an easy and seamless way. Lenme also removed barriers for lenders, who can now fund loans and deploy services with their own algorithms and requirements on the Lenme platform using Lenme APIs. "Our business outlook and opportunities are positive with how we can scale with Amazon Rekognition as needed. We look forward to continuing our relationship with AWS and leveraging their technology to further revolutionize the lending industry," says Maurice.

Outcome | Speed, Accuracy, and Safety for Lenders and Borrowers
Lenme's platform, powered by AWS services and solutions, is transforming the lending industry. Lenme can now authenticate customers in three clicks and within seconds. The platform is fully automated, and customers can leverage it through APIs built on AWS. The AWS pay-as-you-go model also helps Lenme scale as needed to meet market demands. Its commercial customers benefit from up to 80 percent lower lending risk and up to 40 percent lower customer acquisition costs, while increasing the conversion rate of new customers by 34 percent. These cost savings and reduced risk are possible because of the automation with AI and ML on Lenme's lending platform. Lenme continues to drive toward its vision of a fully open platform where data providers, lenders, developers, and others can deploy funding and financial services. Building on AWS has helped Lenme increase its potential for growth and impact.

About Lenme
Lenme, a lending platform founded in 2018 and headquartered in San Francisco, connects people looking to borrow money with financial institutions, lending businesses, and individual investors looking to invest in the small amount loan market. Lenme's mission is to enable individuals to lend and borrow with confidence, at a lower cost, and on secure platforms.

AWS Services Used: Amazon Rekognition, Amazon SageMaker, Amazon OpenSearch Service
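To make the identity verification step concrete, the sketch below shows one common building block of such a flow: comparing the face on a submitted ID document with a live selfie using the Amazon Rekognition CompareFaces API. This is a simplified illustration rather than Lenme's implementation; the function name, parameters, and similarity threshold are assumptions.

```python
import boto3

rekognition = boto3.client("rekognition")

def verify_identity(id_document_bytes, selfie_bytes, threshold=90.0):
    """Compare the face on a government ID with a live selfie.

    Returns whether the best match clears the (illustrative) threshold.
    """
    response = rekognition.compare_faces(
        SourceImage={"Bytes": id_document_bytes},
        TargetImage={"Bytes": selfie_bytes},
        SimilarityThreshold=threshold,
    )
    matches = response.get("FaceMatches", [])
    if not matches:
        return {"verified": False, "similarity": 0.0}
    best = max(match["Similarity"] for match in matches)
    return {"verified": best >= threshold, "similarity": best}
```

In a production flow this check would typically be combined with liveness detection and document data extraction before a borrower is qualified.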
LetsGetChecked Case Study _ Amazon Connect _ AWS Lex.txt
LetsGetChecked Transforms Home Healthcare Using AWS

Benefits of AWS
Reduced agent calls by 50% by automating collection of test kits.
Automatically routes customers to the correct region through natural voice conversations.
Scaled to meet high demand during the COVID-19 pandemic.
Helped the company meet regulatory requirements for different territories.

Irish unicorn LetsGetChecked is an end-to-end global healthcare solutions company that helps people manage their health from home through direct access to diagnostic testing, virtual care, and medication delivery. With its core diagnostic testing business already established, LetsGetChecked was executing on a planned expansion into virtual care and medication delivery. LetsGetChecked was growing rapidly and needed to improve its call center systems to handle increasing numbers of customers and tests.

The company had two challenges. First, it needed to scale its systems to respond to the immediate demand—in particular, customer call management—created by COVID-19 testing. Second, it had to manage its long-term transformation to a full healthcare management business. Although this transformation was part of the company's plan for growth, and was well underway when the COVID-19 pandemic hit, the sudden 3x spike in user traffic meant the company had to manage more customer interactions than before amid the sudden demand for COVID-19 testing.

LetsGetChecked turned to Amazon Web Services (AWS) and chose Amazon Connect, an easy-to-use omnichannel cloud contact center, to manage its customer interactions and deliver a better service. "The COVID-19 pandemic did not pause our roadmap," says Colm Murphy, customer solutions technical manager at LetsGetChecked. "Quite the opposite. We knew more people would need at-home health support more than ever and used the opportunity to enable development."

Managing Patients' Healthcare Journeys Using AWS

Amazon Connect and other AWS services now underpin LetsGetChecked's operations. "AWS is at the heart of everything we do, whether it's on a data level or an integration level," says Murphy. "It would be a lot more expensive, a lot more difficult, and a lot more fragmented, if we were developing on different technology. AWS has the scope we need."

LetsGetChecked's business plan called for expanding services beyond testing to playing a broader part in its customers' healthcare, including managing interactions with health professionals and pharmacies. This meant building systems to support the business logic to deliver this and complying with general data protection and specific healthcare regulations about patient records. In the highly regulated telehealth market, patients must be served by people who are licensed in their particular region.

The first project was improving customer call management. LetsGetChecked had already decided to replace its original call management system with Amazon Connect to scale capacity and increase interoperability with internal systems. Working with VoiceFoundry, an AWS Partner and Amazon Connect specialist, the company migrated from its existing system. After the migration was completed seamlessly, LetsGetChecked had the confidence to continue development. The result was a call center system that can scale with new functionality, delivering immediate benefits through automation and integration. However, as a virtual healthcare solutions business working across geographical areas, this increase in business efficiency would only be acceptable if it complied with multiple regulatory environments.

Using Amazon Connect to Meet Regulatory Requirements

LetsGetChecked used Amazon Lex to build natural-language chatbots with conversational artificial intelligence to allow the automated routing of calls to appropriate regional queues. LetsGetChecked found that compliance was simplified through the integration of Amazon Connect with AWS best-practice data and account security models. A good example was the implementation of call recording, which helped LetsGetChecked implement access and auditing rules for different tasks such as quality assurance, query resolution, and freedom-of-information requests.

The company's second project was building a system for patient information management that goes beyond recording tests and results to capture a full history of patients' interactions with their health providers. This would help it achieve the business benefits of large-scale data ownership. For the company's own customers (and its customers' clients), this meant having a full view of all interactions and knowing which interaction types, delivered at what time, encourage higher levels of engagement and enable better patient outcomes. LetsGetChecked saw that the functionality it needed for a full view of its customers was possible with Amazon Connect, and suitable for European GDPR and US HIPAA regulatory compliance. After configuration, integration, and testing, the results could be presented in a form that was ready to be signed off by its compliance and information security teams. The result of the second project was to establish the foundation for the company's expansion into more areas of customer healthcare, without incurring major overheads due to regulatory requirements.

Reducing Agent Calls by 50% Using Voice Data and Amazon Connect

Another area where the company benefits from the combination of Amazon Connect and Amazon Lex is the transfer of completed home test kits to the lab. LetsGetChecked runs a huge operation and coordinates this by using its customer relationship management system to communicate directly with the dispatch system of its delivery firm. This was previously a manual process that required agents to arrange pickup of the testing kits, check tracking codes, and handle other logistics. By integrating with Amazon Connect data collected from client calls about these events—including times and addresses—LetsGetChecked has automated this process, reducing agent calls by up to 50 percent. Because this service generates the vast bulk of the company's telephone contacts, this means a reduction of 30 or 40 percent in agent costs, with future features planned to automate more tasks.

LetsGetChecked is also using Amazon Connect data to drive its analytics. Call data is streamed into Amazon Redshift, which can accelerate time to insights with fast, easy, and secure cloud data warehousing at scale. The company has expanded its data analysis team to extract and analyze this and other data in a single-source-of-truth model, which makes everything available to different business units across the company.

Amazon Connect is now a key part of LetsGetChecked's system. "Our unique advantage is that we own the entire chain from test production, deployment, and lab analysis," says Murphy. "As we develop our telehealth management system, we'll handle patients' journeys through healthcare, their medications, tests, and interactions with professionals. It's a unique mix of business-to-business and business-to-consumer, and with Amazon Connect, we have a system with the flexibility and capabilities to manage that effectively."

Few businesses can look back over the past 2 years and see them as fulfilling, but Murphy views healthcare as more than just a business. "Working for LetsGetChecked is a wonderful experience," he says. "Seeing the difference we can make in people's lives, just by doing our jobs. Look at the number of people who get a diagnosis earlier than they would—it's tremendous. We're providing healthcare first and we're a business second. We do things properly, to the highest possible standard. That's how we work now, that's how we're always going to work, and AWS helps us do that."

AWS Services Used
Amazon Connect: Provide superior customer service at a lower cost with an easy-to-use omnichannel cloud contact center.
Amazon Lex: Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications.
Amazon Redshift: Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

About LetsGetChecked
LetsGetChecked is a global healthcare solutions company that provides people with the tools to manage health from home through direct access to diagnostic testing, virtual care, and medication delivery for a wide range of health and wellness conditions. Its end-to-end model includes manufacturing, logistics, lab analysis, physician support, and prescription fulfilment. LetsGetChecked is available nationwide in the United States, the United Kingdom, and most EU countries. It is co-headquartered in Dublin and New York. Founded in 2015, the company empowers people with accessible health information and care to live longer, happier lives. The company's at-home diagnostic and care services experienced high demand during the COVID-19 pandemic. Moving its call centers to Amazon Connect not only provided scalable performance, but also allowed integration with other core systems. Using Amazon Connect, LetsGetChecked built a foundation for its business transformation to a full-spectrum telehealth management business.
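The regional routing that the case study describes (Amazon Lex capturing the caller's region so Amazon Connect can send the call to an appropriately licensed team) is only sketched at a high level above. Purely as an illustration, and assuming the standard event shape Amazon Connect passes to an invoked Lambda function, a routing helper could look like the following; the attribute names and queue mapping are hypothetical, not LetsGetChecked's actual configuration.

```python
# Illustrative sketch only: a Lambda function an Amazon Connect contact flow could
# invoke after an Amazon Lex bot captures the caller's region. The attribute name
# "caller_region" and the queue mapping are hypothetical placeholders.

REGION_TO_QUEUE = {
    "ireland": "IE_Licensed_Agents",
    "new_york": "US_NY_Licensed_Agents",
    "california": "US_CA_Licensed_Agents",
}

DEFAULT_QUEUE = "General_Triage"

def lambda_handler(event, context):
    # Amazon Connect passes contact data, including attributes set from Lex slots,
    # under Details.ContactData.Attributes.
    attributes = event.get("Details", {}).get("ContactData", {}).get("Attributes", {})
    region = attributes.get("caller_region", "").strip().lower().replace(" ", "_")

    queue_name = REGION_TO_QUEUE.get(region, DEFAULT_QUEUE)

    # Connect expects a flat dictionary of string values; the contact flow can then
    # read target_queue and use it in a "Set working queue" block.
    return {
        "target_queue": queue_name,
        "region_recognized": str(region in REGION_TO_QUEUE),
    }
```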
Leverage pgvector and Amazon Aurora PostgreSQL for Natural Language Processing Chatbots and Sentiment Analysis _ AWS Database Blog.txt
AWS Database Blog Leverage pgvector and Amazon Aurora PostgreSQL for Natural Language Processing, Chatbots and Sentiment Analysis by Shayon Sanyal | on 13 JUL 2023 | in Advanced (300) , Amazon Aurora , Generative AI , PostgreSQL compatible , Technical How-to | Permalink | Comments |  Share Generative AI – a category of artificial intelligence algorithms that can generate new content based on existing data — has been hailed as the next frontier for various industries, from tech to financial services, e-commerce and healthcare. And indeed, we’re already seeing the many ways Generative AI is being adopted . ChatGPT is one example of Generative AI, a form of AI that does not require a background in machine learning (ML); virtually anyone with the ability to ask questions in simple English can utilize it. The driving force behind the capabilities of generative AI chatbots lies in their foundation models . These models consist of expansive neural networks meticulously trained on vast amounts of unstructured, unlabeled data spanning various formats, including text and audio. The versatility of foundation models enables their utilization across a wide range of tasks, showcasing their limitless potential. In this post, we cover two use cases in the context of pgvector and Amazon Aurora PostgreSQL-Compatible Edition : First, we build an AI-powered application that lets you ask questions based on content in your PDF files in natural language. We upload PDF files to the application and then type in questions in simple English. Our AI-powered application will process questions and return answers based on the content of the PDF files. Next, we make use of the native integration between pgvector and Amazon Aurora Machine Learning . Machine learning integration with Aurora currently supports Amazon Comprehend and Amazon SageMaker . Aurora makes direct and secure calls to SageMaker and Comprehend that don’t go through the application layer. Aurora machine learning is based on the familiar SQL programming language, so you don’t need to build custom integrations, move data around or learn separate tools. Overview of pgvector and large language models (LLMs) pgvector is an open-source extension for PostgreSQL that adds the ability to store and search over ML-generated vector embeddings. pgvector provides different capabilities that let you identify both exact and approximate nearest neighbors. It’s designed to work seamlessly with other PostgreSQL features, including indexing and querying. Using ChatGPT and other LLM tooling often requires storing the output of these systems, i.e., vector embeddings, in a permanent storage system for retrieval at a later time. In the previous post, Building AI-powered search in PostgreSQL using Amazon SageMaker and pgvector , we provided an overview of storing vector embeddings in PostgreSQL using pgvector, and a sample implementation for an online retail store. Large language models (LLMs) have become increasingly powerful and capable. You can use these models for a variety of tasks, including generating text, chatbots, text summarization, image generation, and natural language processing capabilities such as answering questions. Some of the benefits offered by LLMs include the ability to create more capable and compelling conversational AI experiences for customer service applications or bots, and improving employee productivity through more intuitive and accurate responses. LangChain is a Python module that makes it simpler to use LLMs. 
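As a quick, illustrative aside (not part of the original walkthrough), calling a hosted model through LangChain's LLM interface can be as short as the snippet below; it uses the same flan-t5-xxl model the post relies on later and assumes a Hugging Face API token is available in the environment.

```python
# Minimal sketch of LangChain's LLM interface (the langchain version used in this post).
# Assumes HUGGINGFACEHUB_API_TOKEN is set; the prompt text is just an example.
import os
from langchain.llms import HuggingFaceHub

os.environ.setdefault("HUGGINGFACEHUB_API_TOKEN", "<your-hugging-face-token>")

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    model_kwargs={"temperature": 0.5, "max_length": 256},
)

# LLM objects are callable: pass a prompt string, get a completion back.
print(llm("Explain in one sentence what a vector embedding is."))
```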
LangChain provides a standard interface for accessing LLMs, and it supports a variety of LLMs, including OpenAI’s GPT series, Hugging Face, Google’s BERT, and Facebook’s RoBERTa. Although LLMs offer many benefits for natural language processing (NLP) tasks, they may not always provide factual or precisely relevant responses to specific domain use cases. This limitation can be especially crucial for enterprise customers with vast enterprise data who require highly precise and domain-specific answers. For organizations seeking to improve LLM performance for their customized domains, they should look into effectively integrating their enterprise domain information into the LLM. Solution overview Use case 1: Build and deploy an AI-powered chatbot application Prerequisites Aurora PostgreSQL v15.3 with pgvector support. Install Python with the required dependencies (in this post, we use Python v3.9). You can deploy this solution locally on your laptop or via Amazon SageMaker Notebooks . This solution incurs costs. Refer to Amazon Aurora Pricing to learn more. How it works We use a combination of pgvector, open-source foundation models ( flan-t5-xxl for text generation and all-mpnet-base-v2 for embeddings), LangChain packages for interfacing with its components and Streamlit for building the bot front end. LangChain’s Conversational Buffer Memory and ConversationalRetrievalChain allows chatbots to store and recall past conversations and interactions as well as to enhance our personalized chatbot by adding memory to it. This will enable our chatbot to recall previous conversations and contextual information, resulting in more personalized and engaging interactions. NLP question answering is a difficult task, but recent developments in transformer-based models have greatly enhanced its ease of use. Hugging Face’s Transformers library offers pre-trained models and tools that make it simple to do question-answering activities. The widely used Python module Streamlit is used to create interactive online applications, while LangChain is a toolkit that facilitates retrieving documentation context data based on keywords. The following diagram illustrates how it works: The application follows these steps to provide responses to your questions: The app reads one or more PDF documents and extracts their text content. The extracted text is divided into smaller chunks that can be processed effectively. The application utilizes a language model to generate vector representations (embeddings) of the text chunks and stores the embeddings in pgvector (vector store). When you ask a question, the app compares it with the text chunks and identifies the most semantically similar ones. The selected chunks are passed to the language model, which generates a response based on the relevant content of the PDFs. Environment setup To get started, we need to install the required dependencies. You can use pip to install the necessary packages either on your local laptop or via SageMaker Jupyter notebook : pip install streamlit langchain pgvector PyPDF2 python-dotenv altair huggingface-hub InstructorEmbedding sentence-transformers Create the pgvector extension on your Aurora PostgreSQL database (DB) cluster: CREATE EXTENSION vector; Note : When you use HuggingFaceEmbeddings , you may get the following error: StatementError: (builtins.ValueError) expected 1536 dimensions, not 768 . This is a known issue (see pgvector does not work with HuggingFaceEmbeddings #2219 ). 
You can use the following workaround: Update ADA_TOKEN_COUNT = 768 in local ( site-packages ) langchain/langchain/vectorstores/pgvector.py on line 22. Update the vector type column for langchain_pg_embedding table on your Aurora PostgreSQL DB cluster: alter table langchain_pg_embedding alter column embedding type vector (768); Import libraries Let’s begin by importing the necessary libraries: import streamlit as st from dotenv import load_dotenv from PyPDF2 import PdfReader from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.llms import HuggingFaceHub from langchain.vectorstores.pgvector import PGVector from langchain.memory import ConversationBufferMemory from langchain.chains import ConversationalRetrievalChain from htmlTemplates import css, bot_template, user_template from langchain.text_splitter import RecursiveCharacterTextSplitter import os To load the pre-trained question answering model and embeddings, we import HuggingFaceHub and HuggingFaceInstructEmbeddings from LangChain utilities. For storing vector embeddings, we import pgvector as a vector store, which has a direct integration with LangChain. Note that we’re using two additional important libraries –  ConversationBufferMemory , which allows for storing of messages, and ConversationalRetrievalChain , which allows you to set up a chain to chat over documents with chat history for follow-up questions. We use RecursiveCharacterTextSplitter to split documents recursively by different characters, as we’ll see in our sample app. For the purpose of creating the web application, we additionally import Streamlit. For the demo, we use a popular whitepaper as the source PDF document – Amazon Aurora: Design considerations for high throughput cloud-native relational databases . Create the Streamlit app We start by creating the Streamlit app and setting the header: st.header("GenAI Q&A with pgvector and Amazon Aurora PostgreSQL") user_question = st.text_input("Ask a question about your documents:") This line sets the header of our web application to “ GenAI Q&A with pgvector and Amazon Aurora PostgreSQL. ” Next, we take our PDFs as input and split them into chunks using RecursiveCharacterTextSplitter : def get_pdf_text(pdf_docs): text = "" for pdf in pdf_docs: pdf_reader = PdfReader(pdf) for page in pdf_reader.pages: text += page.extract_text() return text def get_text_chunks(text): text_splitter = RecursiveCharacterTextSplitter( separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""], chunk_size=1000, chunk_overlap=200, length_function=len ) chunks = text_splitter.split_text(text) return chunks Load the embeddings and LLM into Aurora PostgreSQL DB cluster Next, we load the question answering embeddings using the sentence transformer sentence-transformers/all-mpnet-base-v2 into Aurora PostgreSQL DB cluster as our vector database using the pgvector vector store in LangChain: CONNECTION_STRING = PGVector.connection_string_from_db_params( driver = os.getenv("PGVECTOR_DRIVER"), user = os.getenv("PGVECTOR_USER"), password = os.getenv("PGVECTOR_PASSWORD"), host = os.getenv("PGVECTOR_HOST"), port = os.getenv("PGVECTOR_PORT"), database = os.getenv("PGVECTOR_DATABASE") ) def get_vectorstore(text_chunks): embeddings = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") vectorstore = PGVector.from_texts(texts=text_chunks, embedding=embeddings,connection_string=CONNECTION_STRING) return vectorstore Note that pgvector needs the connection string to the database. 
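Before wiring the vector store into the app, it can be worth a quick standalone check (not part of the original post) that the database described by the PGVECTOR_* environment variables is reachable, that the vector extension is installed, and that the embedding column really is 768-dimensional after applying the workaround above. A minimal sketch using psycopg2:

```python
# Optional sanity check, assuming the same PGVECTOR_* environment variables used in this post.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.getenv("PGVECTOR_HOST"),
    port=os.getenv("PGVECTOR_PORT", "5432"),
    dbname=os.getenv("PGVECTOR_DATABASE"),
    user=os.getenv("PGVECTOR_USER"),
    password=os.getenv("PGVECTOR_PASSWORD"),
)

with conn, conn.cursor() as cur:
    # Confirm the pgvector extension is installed.
    cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'vector';")
    row = cur.fetchone()
    print("pgvector version:", row[0] if row else "NOT INSTALLED")

    # If the LangChain tables already exist, confirm the embedding column is vector(768)
    # (it will report vector(1536) if the workaround above has not been applied).
    cur.execute("""
        SELECT format_type(a.atttypid, a.atttypmod)
        FROM pg_attribute a
        JOIN pg_class c ON c.oid = a.attrelid
        WHERE c.relname = 'langchain_pg_embedding' AND a.attname = 'embedding';
    """)
    row = cur.fetchone()
    print("embedding column type:", row[0] if row else "table not created yet")

conn.close()
```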
We load it from the environment variables. Next, we load the LLM. We use Google’s flan-t5-xxl LLM from the HuggingFaceHub repository: llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature":0.5, "max_length":1024}) By default, LLMs are stateless, meaning that each incoming query is processed independently of other interactions. The only thing that exists for a stateless agent is the current input. There are many applications where remembering previous interactions is very important, such as chatbots. Conversational memory allows us to do that. ConversationBufferMemory and ConversationalRetrievalChain allow us to provide the user’s question and conversation history to generate the chatbot’s response while allowing room for follow-up questions: def get_conversation_chain(vectorstore): memory = ConversationBufferMemory( memory_key='chat_history', return_messages=True) conversation_chain = ConversationalRetrievalChain.from_llm( llm=llm, retriever=vectorstore.as_retriever(), memory=memory ) return conversation_chain # create conversation chain st.session_state.conversation = get_conversation_chain(vectorstore) User input and question answering Now, we handle the user input and perform the question answering process: user_question = st.text_input("Ask a question about your documents:") if user_question: handle_userinput(user_question) with st.sidebar: st.subheader("Your documents") pdf_docs = st.file_uploader( "Upload your PDFs here and click on 'Process'", accept_multiple_files=True) if st.button("Process"): with st.spinner("Processing"): # get pdf text raw_text = get_pdf_text(pdf_docs) # get the text chunks text_chunks = get_text_chunks(raw_text) Demonstration Streamlit is an open-source Python library that makes it simple to create and share beautiful, custom web apps for machine learning and data science. In just a few minutes you can build and deploy powerful data apps. Let’s explore a demonstration of the app. To install Streamlit: $ pip install streamlit $ streamlit run app.py The starting UI looks like the following screenshot: Follow the instructions in the sidebar: Browse and upload PDF files. You can upload multiple PDFs because we set the parameter accept_multiple_files=True for the st.file_uploader function. Once you’ve uploaded the files, click Process . You should see a page like the following: Start asking your questions in the search bar. For example, let’s start with a simple question – “ What is Amazon Aurora? ” The following response is generated: Let’s ask a different question, a bit more complex – “ How does replication work in Amazon Aurora? ” The following response is generated: Note here that the conversation history is preserved due to Conversational Buffer Memory . Also, ConversationalRetrievalChain allows you to set up a chain with chat history for follow-up questions. We can also upload multiple files and ask questions. Let’s say we uploaded another file “ Constitution of the United States ” and ask our app – “ What is the first amendment about? ” The following is the response: For full implementation details about the code sample used in the post, see the GitHub repo. Use Case 2: pgvector and Aurora Machine Learning for Sentiment Analysis Prerequisites Aurora PostgreSQL v15.3 with pgvector support. Install Python with the required dependencies (in this post, we use Python v3.9). Jupyter (available as an extension on VS Code or through Amazon SageMaker Notebooks ). AWS CLI installed and configured for use. 
For instructions, see Set up the AWS CLI . This solution incurs costs. Refer to Amazon Aurora Pricing to learn more. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. No prior machine learning experience is required. This example will walk you through the process of integrating Aurora with the Comprehend Sentiment Analysis API and making sentiment analysis inferences via SQL commands. For our example, we have used a sample dataset for fictitious hotel reviews. We use Hugging Face’s sentence-transformers/all-mpnet-base-v2 model for generating document embeddings and store vector embeddings in our Aurora PostgreSQL DB cluster with pgvector. Use Amazon Comprehend with Amazon Aurora Create an IAM role to allow Aurora to interface with Comprehend. Associate the IAM role with the Aurora DB cluster. Install the aws_ml and vector extensions. For installing the aws_ml extension, see Installing the Aurora machine learning extension . Setup the required environment variables. Run through each cell in the given notebook pgvector_with_langchain_auroraml.ipynb . Run Comprehend inferences from Aurora. 1. Create an IAM role to allow Aurora to interface with Comprehend aws iam create-role --role-name auroralab-comprehend-access \ --assume-role-policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"rds.amazonaws.com\"},\"Action\":\"sts:AssumeRole\"}]}" Run the following commands to create and attach an inline policy to the IAM role we just created: aws iam put-role-policy --role-name auroralab-comprehend-access --policy-name inline-policy \ --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"comprehend:DetectSentiment\",\"comprehend:BatchDetectSentiment\"],\"Resource\":\"*\"}]}" 2. Associate the IAM role with the Aurora DB cluster Associate the role with the DB cluster by using following command: aws rds add-role-to-db-cluster --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \ --role-arn $(aws iam list-roles --query 'Roles[?RoleName==`auroralab-comprehend-access`].Arn' --output text) --feature-name Comprehend Run the following command and wait until the output shows as available , before moving on to the next step: aws rds describe-db-clusters --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \ --query 'DBClusters[*].[Status]' --output text Validate that the IAM role is active by running the following command: aws rds describe-db-clusters --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \ --query 'DBClusters[*].[AssociatedRoles]' --output table You should see an output similar to the following: For more information or instructions on how to perform steps 1 and 2 using the AWS Console see: Setting up Aurora PostgreSQL to use Amazon Comprehend . 3. Connect to psql or your favorite PostgreSQL client and install the extensions CREATE EXTENSION IF NOT EXISTS aws_ml CASCADE; CREATE EXTENSION IF NOT EXISTS vector; 4. Setup the required environment variables We use VS Code for this example. Create a .env file with the following environment variables: HUGGINGFACEHUB_API_TOKEN=<<HUGGINGFACE-ACCESS-TOKENS>> PGVECTOR_DRIVER='psycopg2' PGVECTOR_HOST='<<AURORA-DB-CLUSTER-HOST>>' PGVECTOR_PORT='5432' PGVECTOR_DATABASE='<<DBNAME>>' PGVECTOR_USER='<<USERNAME>>' PGVECTOR_PASSWORD='<<PASSWORD>>' 5. 
Run through each cell in the given notebook pgvector_with_langchain_auroraml.ipynb Import libraries Begin by importing the necessary libraries: from dotenv import load_dotenv from langchain.document_loaders import CSVLoader from langchain.text_splitter import CharacterTextSplitter from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.vectorstores.pgvector import PGVector, DistanceStrategy from langchain.docstore.document import Document import os Use LangChain’s CSVLoader library to load CSV and generate embeddings using Hugging Face sentence transformers: os.environ["HUGGINGFACEHUB_API_TOKEN"] = os.getenv('HUGGINGFACEHUB_API_TOKEN') embeddings = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") connection_string = PGVector.connection_string_from_db_params( driver = os.environ.get("PGVECTOR_DRIVER"), user = os.environ.get("PGVECTOR_USER"), password = os.environ.get("PGVECTOR_PASSWORD"), host = os.environ.get("PGVECTOR_HOST"), port = os.environ.get("PGVECTOR_PORT"), database = os.environ.get("PGVECTOR_DATABASE") ) loader = CSVLoader('./data/test.csv', source_column="comments") documents = loader.load() If the run is successful, you should see an output as follows: /../pgvector-with-langchain-auroraml/venv/lib/python3.9/site-packages/InstructorEmbedding/instructor.py:7: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) from tqdm.autonotebook import trange load INSTRUCTOR_Transformer load INSTRUCTOR_Transformer max_seq_length 512 Split the text using LangChain’s CharacterTextSplitter function and generate chunks: text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) print(len(documents)) print(len(docs)) # Access the content and metadata of each document for document in documents: content = print(document.page_content) metadata = print(document.metadata) If the run is successful, you should see an output as follows: 10 10 <<Summarized output>> comments: great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, {'source': 'great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, ', 'row': 0} comments: horrible customer service hotel stay february 3rd 4th 2007my friend picked hotel monaco appealing website online package included champagne late checkout 3 free valet gift spa weekend, friend checked room hours earlier came later, pulled valet young man just stood, asked valet open said, pull bags didn__Ç_é_ offer help, got garment bag suitcase came car key room number says not valet, car park car street pull, left key working asked valet park car gets, went room fine bottle champagne oil lotion gift spa, dressed went came got bed noticed blood drops pillows sheets pillows, disgusted just unbelievable, called desk sent somebody 20 minutes later, swapped sheets left apologizing, sunday morning called desk speak management sheets aggravated rude, apparently no manager kind supervisor weekend wait monday morning {'source': 'horrible customer service hotel stay february 3rd 4th 2007my friend picked hotel monaco appealing website online package included champagne late checkout 3 free valet gift spa 
weekend, friend checked room hours earlier came later, pulled valet young man just stood, asked valet open said, pull bags didn__Ç_é_ offer help, got garment bag suitcase came car key room number says not valet, car park car street pull, left key working asked valet park car gets, went room fine bottle champagne oil lotion gift spa, dressed went came got bed noticed blood drops pillows sheets pillows, disgusted just unbelievable, called desk sent somebody 20 minutes later, swapped sheets left apologizing, sunday morning called desk speak management sheets aggravated rude, apparently no manager kind supervisor weekend wait monday morning', 'row': 1} . . . Create a table in Aurora PostgreSQL with the name of the collection. Make sure that the collection name is unique and the user has the permissions to create a table: collection_name = 'fictitious_hotel_reviews' db = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name=collection_name, connection_string=connection_string ) Run a similarity search using the similarity_search_with_score function from pgvector. query = "What do some of the positive reviews say?" docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query) for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print(doc.metadata) print("-" * 80) If the run is successful, you should see an output as follows: -------------------------------------------------------------------------------- Score: 0.9238530395691034 comments: nice hotel expensive parking got good deal stay hotel anniversary, arrived late evening took advice previous reviews did valet parking, check quick easy, little disappointed non-existent view room room clean nice size, bed comfortable woke stiff neck high pillows, not soundproof like heard music room night morning loud bangs doors opening closing hear people talking hallway, maybe just noisy neighbors, aveda bath products nice, did not goldfish stay nice touch taken advantage staying longer, location great walking distance shopping, overall nice experience having pay 40 parking night, {'source': 'nice hotel expensive parking got good deal stay hotel anniversary, arrived late evening took advice previous reviews did valet parking, check quick easy, little disappointed non-existent view room room clean nice size, bed comfortable woke stiff neck high pillows, not soundproof like heard music room night morning loud bangs doors opening closing hear people talking hallway, maybe just noisy neighbors, aveda bath products nice, did not goldfish stay nice touch taken advantage staying longer, location great walking distance shopping, overall nice experience having pay 40 parking night, ', 'row': 5} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.975017819981635 comments: great location need internally upgrade advantage north end downtown seattle great restaurants nearby good prices, rooms need updated literally thought sleeping 1970 bed old pillows sheets, net result bad nights sleep, stay location, staff friendly, {'source': 'great location need internally upgrade advantage north end downtown seattle great restaurants nearby good prices, rooms need updated literally thought sleeping 1970 bed old pillows sheets, net result bad nights sleep, stay location, staff friendly, ', 'row': 3} 
-------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 1.0084132474978011 comments: great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, {'source': 'great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, ', 'row': 0} -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 1.0180131593936907 comments: good choice hotel recommended sister, great location room nice, comfortable bed- quiet- staff helpful recommendations restaurants, pike market 4 block walk stay {'source': 'good choice hotel recommended sister, great location room nice, comfortable bed- quiet- staff helpful recommendations restaurants, pike market 4 block walk stay', 'row': 2} -------------------------------------------------------------------------------- Use the Cosine function to refine the results to the best possible match: store = PGVector( connection_string=connection_string, embedding_function=embeddings, collection_name='fictitious_hotel_reviews', distance_strategy=DistanceStrategy.COSINE ) retriever = store.as_retriever(search_kwargs={"k": 1}) retriever.get_relevant_documents(query='What do some of the positive reviews say?') If the run is successful, you should see an output as follows: [Document(page_content='comments: nice hotel expensive parking got good deal stay hotel anniversary, arrived late evening took advice previous reviews did valet parking, check quick easy, little disappointed non-existent view room room clean nice size, bed comfortable woke stiff neck high pillows, not soundproof like heard music room night morning loud bangs doors opening closing hear people talking hallway, maybe just noisy neighbors, aveda bath products nice, did not goldfish stay nice touch taken advantage staying longer, location great walking distance shopping, overall nice experience having pay 40 parking night,', metadata={'source': 'nice hotel expensive parking got good deal stay hotel anniversary, arrived late evening took advice previous reviews did valet parking, check quick easy, little disappointed non-existent view room room clean nice size, bed comfortable woke stiff neck high pillows, not soundproof like heard music room night morning loud bangs doors opening closing hear people talking hallway, maybe just noisy neighbors, aveda bath products nice, did not goldfish stay nice touch taken advantage staying longer, location great walking distance shopping, overall nice experience having pay 40 parking night, ', 'row': 5})] Similarly, you can test results with other distance strategies such as Euclidean or Max Inner Product. Euclidean distance depends on a vector’s magnitude whereas cosine similarity depends on the angle between the vectors. The angle measure is more resilient to variations of occurrence counts between terms that are semantically similar, whereas the magnitude of vectors is influenced by occurrence counts and heterogeneity of word neighborhood. Hence for similarity searches or semantic similarity in text, the cosine distance gives a more accurate measure. 6. 
Run Comprehend inferences from Aurora

Aurora has a built-in Comprehend function which can call the Comprehend service. It passes the inputs of the aws_comprehend.detect_sentiment function, in this case the values of the document column in the langchain_pg_embedding table, to the Comprehend service and retrieves sentiment analysis results (note that the document column is trimmed due to the long free-form nature of reviews):

select LEFT(document, 100) as document, s.sentiment, s.confidence from langchain_pg_embedding, aws_comprehend.detect_sentiment(document, 'en') s;

The query returns one row per document with sentiment and confidence columns. Together, these two columns give the inferred sentiment for the text in the document column along with the confidence score of the inference.

For full implementation details about the code sample used in the post, see the GitHub repo.

Conclusion

In this post, we explored how to build an interactive chatbot app for question answering using LangChain and Streamlit, and leveraged pgvector and its native integration with Aurora Machine Learning for sentiment analysis. With this sample chatbot app, users can input their questions and receive answers based on the provided information, making it a useful tool for information retrieval and knowledge exploration, especially in large enterprises with a massive knowledge corpus. Generating embeddings with LangChain and storing them in Amazon Aurora PostgreSQL-Compatible Edition with the pgvector open-source extension for PostgreSQL presents a powerful and efficient solution for many use cases such as sentiment analysis, fraud detection, and product recommendations.

Now Available

The pgvector extension is available on Aurora PostgreSQL 15.3, 14.8, 13.11, 12.15 and higher in AWS Regions including the AWS GovCloud (US) Regions. To learn more about this launch, you can also tune in to AWS On Air at 12:00pm PT on 7/21 for a live demo with our team! You can watch on Twitch or LinkedIn. If you have questions or suggestions, leave a comment.

About the Author

Shayon Sanyal is a Principal Database Specialist Solutions Architect and a Subject Matter Expert for Amazon's flagship relational database, Amazon Aurora. He has over 15 years of experience managing relational databases and analytics workloads. Shayon's relentless dedication to customer success allows him to help customers design scalable, secure and robust cloud native architectures. Shayon also helps service teams with design and delivery of pioneering features.
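One helper that the Streamlit walkthrough above calls but does not list is handle_userinput. As a closing aside, here is a minimal sketch of what it might look like; this is not the post's original implementation, and the template strings below are placeholders standing in for the css, user_template, and bot_template values imported from htmlTemplates.py earlier in the app.

```python
# Sketch of the handle_userinput helper referenced in the Streamlit app above.
# Assumes st.session_state.conversation was created by get_conversation_chain().
import streamlit as st

user_template = "<div class='chat user'>{{MSG}}</div>"  # placeholder markup
bot_template = "<div class='chat bot'>{{MSG}}</div>"    # placeholder markup

def handle_userinput(user_question):
    # The conversation chain only exists after the PDFs have been processed.
    if st.session_state.get("conversation") is None:
        st.warning("Upload and process your PDFs before asking questions.")
        return

    # ConversationalRetrievalChain returns the answer plus the accumulated chat history.
    response = st.session_state.conversation({"question": user_question})
    st.session_state.chat_history = response["chat_history"]

    # Render the history, alternating between user and bot messages.
    for i, message in enumerate(st.session_state.chat_history):
        template = user_template if i % 2 == 0 else bot_template
        st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True)
```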
LG AI Research Develops Foundation Model Using Amazon SageMaker _ LG AI Research Case Study _ AWS.txt
LG AI Research Develops Foundation Model Using Amazon SageMaker

LG AI Research, the artificial intelligence (AI) research hub of South Korean conglomerate LG Group, was founded to promote AI as part of its digital transformation strategy to drive future growth. The research institute developed its foundation model EXAONE engine within one year using Amazon SageMaker and Amazon FSx for Lustre.

35% reduction in cost of building AI engine
60% increase in data preparation speed
1 year to develop the EXAONE AI engine
Scalability that supports linear scaling

Opportunity | Developing a Super-Giant Multimodal AI

South Korean conglomerate LG Group collects vast amounts of data from its companies, which include home appliances, telecommunications, batteries, and pharmaceuticals. A key pillar of the group's digital transformation is developing AI technology and integrating AI into its products and services. The group established LG AI Research to harness the power of AI in its digital transformation strategy, develop better customer experiences, and solve common industry challenges.

When LG AI Research decided to develop its next-generation foundation model, which takes inspiration from how the human brain works and has an advanced capacity for learning and making judgments, it searched for the most efficient machine learning (ML) platform to handle vast amounts of data and large-scale training and inference. The foundation model needed to train on dozens of terabytes of data to make human-like deductions and comprehend texts and images. Moreover, the project required a high-performance compute infrastructure and the flexibility to increase the number of parameters to billions during training. Workflow automation was also important, as multiple models or downstream tasks needed to be completed simultaneously.

To meet these requirements, the institute looked at an on-premises infrastructure, but costs were too high, and it would have required 20 employees to configure and maintain the on-premises hardware. It would also have required upgrading the GPUs every year and adding more GPUs to support workload spikes. Considering all the challenges of an on-premises solution, LG AI Research decided that Amazon SageMaker was the best fit for this project.

Solution | Building the Foundation Model EXAONE Using Amazon SageMaker

LG AI Research successfully deployed its foundation model, EXAONE, to production in one year. EXAONE, which stands for "expert AI for everyone," is a 300-billion-parameter multi-modal model that uses both images and text data. Built on Amazon Web Services (AWS), the foundation model mimics humans as it thinks, learns, and takes actions on its own through large-scale data training. The multi-purpose foundation model can be employed in various industries to carry out a range of tasks.

LG AI Research used Amazon SageMaker to train its large-scale foundation model and Amazon FSx for Lustre to distribute data into instances to accelerate model training. By building on AWS, LG AI Research was able to resolve issues, implement checkpoints, fine-tune, and successfully deploy the model to production.

"By using Amazon SageMaker's high-performance distributed training infrastructure, researchers can focus solely on model training instead of managing infrastructure. In addition, by leveraging the parallel data library from Amazon SageMaker, we could obtain training results quickly as the number of GPUs and model parameters increased."
Kim Seung Hwan, Head of LG AI Research Vision Lab

LG AI Research reduced costs by approximately 35 percent by eliminating the need for a separate infrastructure management team. It also increased the data processing speed by about 60 percent using the Amazon SageMaker distributed data parallel library.

EXAONE's Architecture Diagram on AWS

Outcome | Offering New Possibilities for Expanding Fields by Using EXAONE

LG AI Research built EXAONE—a foundation model that can be used to transform business processes—using Amazon SageMaker, broadening access to AI in various industries such as fashion, manufacturing, research, education, and finance.

Using EXAONE, LG AI Research developed an AI virtual artist called Tilda. The fundamental power of Tilda's artistic qualities comes from EXAONE, which was trained using 600 billion pieces of artwork and 250 million high-resolution images accompanied with text. The virtual artist created 3,000 images and patterns for fashion designer Yoon-hee Park, who designed more than 200 outfits for the 2022 New York Fashion Week using Tilda's images and patterns.

Park's work with LG AI Research demonstrated the potential of expanding AI technology to the art industry, growing the AI ecosystem and fostering cross-industry collaboration. The company recently announced a partnership with Parsons School of Design in New York City to conduct joint research on advanced AI technologies to leverage in the fashion industry.

With Tilda, EXAONE has shown how foundation models can be used to transform a wide range of sectors, from manufacturing and research to education and finance. LG AI Research continues its work to make human life more valuable using its foundation model and looks forward to collaborating closely with AWS on future projects.

AWS Services Used
Amazon SageMaker: Amazon SageMaker is built on Amazon's two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
Amazon FSx for Lustre: Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

About LG AI Research
LG AI Research is an AI think tank dedicated to developing AI technology. The institute is expanding the AI ecosystem by encouraging cross-industry collaboration across fashion, manufacturing, research, education, and finance through EXAONE.
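The case study does not include code, but as a rough illustration of the distributed training setup it describes, enabling the SageMaker distributed data parallel library on a PyTorch training job looks roughly like the sketch below. All names, versions, instance counts, and hyperparameters are placeholders, not LG AI Research's actual configuration.

```python
# Illustrative sketch: launching a distributed PyTorch training job with the
# SageMaker data parallel library. Role ARN, script, bucket, and settings are placeholders.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = PyTorch(
    entry_point="train.py",            # your training script
    source_dir="src",
    role=role,
    framework_version="1.12",
    py_version="py38",
    instance_type="ml.p4d.24xlarge",   # GPU instances suited to very large models
    instance_count=8,                  # scale out as model and data grow
    # Enable the SageMaker distributed data parallel library.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    hyperparameters={"epochs": 3, "per_device_batch_size": 8},
    sagemaker_session=session,
)

# Training data staged on Amazon S3 (or Amazon FSx for Lustre for higher throughput,
# as the case study describes).
estimator.fit({"training": "s3://my-bucket/pretraining-data/"})
```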
LifeOmic Case Study _ AWS Lambda _ AWS.txt
LifeOmic Achieves up to 50% Cost Savings after Building Serverless Architecture on AWS

Software company LifeOmic knew that to improve health outcomes, researchers, clinicians, and device manufacturers in healthcare and biotech organizations needed a secure solution for interaction and data management. To build this solution quickly and cost efficiently, LifeOmic chose a serverless architecture on Amazon Web Services (AWS).

Benefits of AWS
Achieved HIPAA compliance quickly
Achieved Federal Risk and Authorization Management Program compliance in 1 year
Reduced costs by 30%–50%
Avoids infrastructure costs and capital expenses
Scales to meet peak demand
Makes an average of 100 production deployment updates per day
Supports a growing base of over four million users
Improved employee recruitment and retention

Achieving Scalable, HIPAA-Compliant Data Storage on AWS

Founded in 2016, LifeOmic has over 100 employees and a variety of healthcare software solutions. The company started by creating Precision Health Cloud, a secure cloud solution that integrates and indexes disparate data sources, including genomic, clinical, imaging, and population data. This system currently stores 400 million clinical data points and 500 billion genetic variants, including 55 billion unique genetic variants. In addition to supporting healthcare organizations, LifeOmic wanted to offer solutions to help consumers live healthier lives. After it had achieved a solution that was compliant with HIPAA and the HITRUST Alliance, LifeOmic developed mobile apps designed to empower individuals to manage their own health. "We can support all of these products on the same solution and reuse a lot of code, so we're able to achieve a lot and expand into new marketplaces with a relatively small team," says Anthony Roach, technical director at LifeOmic.

LifeOmic decided to build its solution from the ground up on AWS because AWS services like AWS Lambda—a serverless, event-driven compute service—make it simpler to process, store, and transmit protected health information, facilitating HIPAA compliance. "It can take years for startups to meet HIPAA compliance requirements," says Roach. "LifeOmic started under the assumption of meeting these requirements and more. We tackled and achieved the rigorous HITRUST CSF Certification in less than 6 months with zero corrective actions, and using AWS made it much easier."

Becoming multitenant to support everything from small clinical practices to large hospital systems was also an important goal for LifeOmic. To achieve this, the company needed scalable data stores, and it saw that AWS provided a variety of potential solutions. By using managed services like Lambda, LifeOmic could keep operational costs low and empower its team to focus on developing software, not running the backend. "Some companies try to do cloud-agnostic development, but they lose the benefits that a designated cloud vendor can provide," says Roach. "On AWS, we gain everything we need, from serverless code to data stores, so we don't have to worry about multiple vendors and compatibilities."

Scaling Healthcare Applications Using AWS Lambda

Initially, LifeOmic focused on building genomic pipelines using AWS services such as Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service. Early on, the company started building APIs and began using AWS Lambda to speed up API development processes as soon as the service became HIPAA eligible. By using AWS Lambda with a Hypertext Transfer Protocol interface layered on, LifeOmic's developers were able to write and deliver code with ease, even if they were unfamiliar with AWS Lambda.

In LifeOmic's pipeline, applications make code initiation requests through Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. The pipeline then uses AWS Lambda to run code to retrieve data from Amazon DynamoDB, a fast, flexible NoSQL database service. And to achieve smooth workflows, LifeOmic uses AWS Step Functions, a low-code, visual workflow service that developers can use to build distributed applications and automate IT and business processes. "Using AWS Step Functions, we can achieve long-running processes easily because everything is managed for us," says Roach.

When Amazon OpenSearch Service—which makes it easy to perform interactive log analytics, near-real-time application monitoring, and website searches—became HIPAA eligible, LifeOmic was able to add analytics and search features to its Precision Health Cloud. LifeOmic now uses OpenSearch Service as its biggest data store, housing 500 billion documents. Another milestone for LifeOmic was joining AWS Activate, a program that offers startups free tools, resources, and more to quickly get started on AWS. The program offered insights into the AWS road map, helping LifeOmic make its own decisions about its next steps.

In April 2020, LifeOmic sought to become compliant with the Federal Risk and Authorization Management Program, and it achieved this goal by April 2021. "We wouldn't have achieved these federal standards in 1 year if we weren't using AWS," says Hemp. "Using AWS, we were able to keep up with the requirements for documentation and security and have the support that we needed."

With its secure, scalable serverless architecture on AWS, LifeOmic is equipped to support the full continuum of healthcare, from research and preventive medicine to diagnosis and treatment management. "We wouldn't have had nearly as much breadth if we hadn't used AWS," says Roach. Four million users and counting have downloaded LifeOmic's mobile applications, which connect with wearable devices and pacemakers. The company can scale to meet demand—such as when New Year's resolutions led to a three-times increase in application sessions in January compared to December—using simple controls without needing to add new hardware.

The company has also realized business benefits, including faster time to market from using automation to make an average of 100 production deployment updates in 1 day. LifeOmic has also achieved cost savings of 30–50 percent by adopting Lambda, including using provisioned concurrency and Compute Savings Plans, a flexible pricing model that offers low prices on AWS Lambda usage. The company has also seen success recruiting and retaining employees, who are excited to use AWS services. Many participate in AWS Training and have either renewed their AWS Certifications or have achieved one for the first time.

Continuing to Grow and Innovate

Growing the company from the ground up on AWS has helped LifeOmic focus on innovation instead of infrastructure management. Next, the company is looking into using Amazon Timestream, a serverless time series database service, to add new features that call for continuous data, such as intraday heart rate and continuous glucose monitoring. LifeOmic also continues to expand its customer base and is seeing growing trust in the cloud. "Our customers are confident in the reliability of AWS," says Roach. "That and our ability to put out new features so quickly have created a winning combination."

"The ease and speed of serverless development using AWS has helped our small team deliver a large set of features in just a few months."
Chris Hemp, Vice President of Engineering, LifeOmic

AWS Services Used
AWS Lambda: AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
Amazon API Gateway: Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.
Amazon Elastic Container Service (Amazon ECS): Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. It deeply integrates with the rest of the AWS platform to provide a secure and easy-to-use solution for running container workloads in the cloud and now on your infrastructure with Amazon ECS Anywhere.
AWS Step Functions: AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.

About LifeOmic
LifeOmic has built a secure health solution that powers analytics, interventions, and engagement solutions for improving health outcomes across the continuum of care, from prevention and wellness to clinical care and research.
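The pipeline LifeOmic describes (Amazon API Gateway in front of AWS Lambda, with Amazon DynamoDB as the data store) maps to a very common serverless pattern. As a generic illustration only, with a hypothetical table name and key schema rather than LifeOmic's actual design, such a handler could be sketched as follows.

```python
# Generic sketch of the API Gateway -> Lambda -> DynamoDB pattern described above.
# Table name, key, and response shape are hypothetical placeholders.
import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "patient-records"))

def lambda_handler(event, context):
    # With an API Gateway proxy integration, path parameters arrive in the event.
    record_id = (event.get("pathParameters") or {}).get("id")
    if not record_id:
        return {"statusCode": 400, "body": json.dumps({"message": "missing id"})}

    result = table.get_item(Key={"id": record_id})
    item = result.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"message": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),
    }
```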
Lotte Data Communication Company Vietnam Simplifies API Integrations for Online Retailers on AWS _ Case Study _ AWS.txt
Lotte Data Communication Company Vietnam Simplifies API Integrations for Online Retailers on AWS

Lotte Data Communication Company Vietnam built Lotte API Transit Gateway using Amazon EC2 Auto Scaling with AWS WAF, simplifying API configuration for millions of transactions and ensuring data protection for its customers. Its solution uses Amazon EC2 Auto Scaling for elastic scaling and Amazon GuardDuty for intelligent threat protection. By building its API solution on AWS, LDCC VN can provide a highly available, secure tool that saves customers up to 50 percent in direct IT spending and labor costs.

99.999% solution uptime | 1.7 million monthly transactions supported for 1 customer | 1 day to 1 month to fully build API configurations | Up to 50% cost reduction for customers | Guards against common web exploits

About Lotte Data Communication Company Vietnam
Lotte Data Communication Company (LDCC), part of South Korea’s Lotte Group, was established in 1996 as a total IT solutions provider. The provider offers solutions centered on future core technologies such as the metaverse, mobility, and data. LDCC launched its Vietnam business in 2009 to provide locally optimized solutions in industries such as retail, finance, and manufacturing. Lotte Data Communication Company Vietnam (LDCC VN) is an IT solutions provider and member of Lotte Group, one of South Korea’s leading retail conglomerates. When LDCC VN began building its Lotte API Transit Gateway solution, it chose AWS to leverage high availability and cloud-native security tools.

Opportunity | Simplifying External API Integration for Online Retailers
Lotte Data Communication Company (LDCC), a division of South Korea’s Lotte Group conglomerate, aims to strengthen its B2B customers’ global competitiveness with proven IT solutions. Because Lotte Group also runs its own large-scale ecommerce business, it’s sensitive to the challenges online retailers face, such as the complexity of secure integrations with digital partners. Aiming to bring its expertise to international customers, LDCC established its first office abroad, Lotte Data Communication Company Vietnam (LDCC VN), in 2009. LDCC VN offers locally optimized solutions to customers in industries including retail, manufacturing, and finance.

A key challenge LDCC VN wants to address for its customers is the burden of configuring application programming interfaces (APIs) to link to multiple online partners. APIs are a core software communication intermediary among modern service platforms. Online retailers, for example, need to connect with payment providers such as banks and digital wallets, delivery services such as Grab and Gojek, and various loyalty programs. However, each of these external platforms requires unique API configurations that entail a high initial configuration cost and time-consuming maintenance. Insecurely configured API connections can also pose serious cybersecurity risks.

Solution | Scaling LATG to Process Millions of Transactions While Reducing Costs
To streamline API management, LDCC VN created Lotte API Transit Gateway (LATG), a single integration point that online businesses can use to link to any other platform. Moon Geun Jae, platform director at Lotte Data Communication Company Vietnam, says, “LATG provides a wide spectrum of services for payment, delivery, shopping, and membership platforms. Our customers have only one connection to manage and don’t need to modify the core of their IT system to perform smooth API integrations with third parties, which reduces security risk and resource costs.”

LDCC VN operates a hybrid IT environment, running some workloads out of its data center in Hanoi and using Amazon Web Services (AWS) for others, such as backup and storage. The company chose to build LATG on AWS to leverage fast deployment, on-demand scaling, and high service uptime. Moon says, “AWS provides high availability and scalability to accommodate any level of demand, whether it’s 10,000 or 1 billion transactions. We also value the strong cybersecurity capabilities AWS offers.”

Since launching, LATG has experienced 99.999 percent uptime. The solution was built using Amazon EC2 Auto Scaling to maintain application availability. LDCC VN also deployed several native AWS security tools to build a resilient solution for its customers, including Amazon GuardDuty for intelligent threat protection, AWS Shield for managed distributed denial of service (DDoS) protection, and AWS WAF – Web Application Firewall to guard against common web exploits. This assures LATG customers that confidential data such as personally identifiable information and financial details are protected on the AWS Cloud.

Among the first customers to try LATG was Vanila Studio (Vani Studio), a lifestyle and fintech platform in Vietnam. Vani Studio used LATG to integrate its app with a renowned global membership platform, adding and modifying APIs to improve integration flows and creating a monitoring dashboard. The dashboard alerts Vani Studio of any connection issues and offers a recommended action plan to resolve them. LATG now processes more than 1.7 million monthly transactions within the Vani Studio app.

Outcome | Reducing Infrastructure Complexity with Low Maintenance and Investment
In addition, LATG customers also save 20–50 percent of IT costs over 5 years compared to doing their own API configuration work. To illustrate, LDCC VN estimates that customers performing their own API configurations spend 2–4 months and $20,000–$250,000 on setting up direct API connections, plus up to $20,000 on annual management costs including server leasing. LATG customers, on the other hand, can complete configurations within one month and spend less than $20,000. Depending on configuration complexity, configurations can even be completed in as fast as two weeks and with zero cost.

By building LATG on AWS, LDCC VN has a reliable, scalable cloud infrastructure that reduces infrastructure complexity for customers. Low complexity is a key value proposition, because one of LATG’s target customer segments is non-technical companies who need API integrations without heavy IT investment. “Our customers benefit from enhanced security, cost savings, and a lowered requirement for headcount by using LATG,” explains Moon.

LDCC VN has plans to conduct accelerated go-to-market activities in 2023 to leverage economies of scale for LATG globally. But first, it’s focused on further developing its cloud expertise. “Our teams need to be confident and proficient at using the cloud. To achieve this, AWS has been helping us upskill and ensure we deliver high-quality service for our customers,” Moon says.

Several developers from LDCC VN have attended AWS Training and Certification courses, and members of the sales team learned from project-based sales training ahead of the LATG launch. More project-based and online training are planned for 2023. “We value the support we receive from AWS to enhance our confidence during the sales and project delivery process,” Moon emphasizes.

“AWS provides high availability and scalability to accommodate any level of demand, whether it’s 10,000 or 1 billion transactions.” — Moon Geun Jae, Platform Director, Lotte Data Communication Company Vietnam

AWS Services Used
Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
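As described above, LATG maintains availability under variable transaction load with Amazon EC2 Auto Scaling. The sketch below shows one common way to express that with a target tracking policy via boto3; the Auto Scaling group name and target value are hypothetical, not LDCC VN's actual configuration.

# Illustrative sketch only: attach a target tracking scaling policy to an
# existing Auto Scaling group so the fleet follows demand automatically.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="latg-api-fleet",          # hypothetical group name
    PolicyName="latg-cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,                        # keep average CPU near 60%
    },
)

A target tracking policy is a reasonable fit for a gateway workload like this because the service adds and removes instances on its own as traffic swings between small retailers and peak events, which matches the "10,000 or 1 billion transactions" range Moon describes.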
LTIMindtree Drives Digital Transformation for Global Customers with AWS Training and Certification.txt
LTIMindtree Drives Digital Transformation for Global Customers with AWS Training and Certification

By working with AWS Training and Certification, LTIMindtree upskills thousands of employees to attract more business opportunities, launch new solutions, and improve workforce retention. LTIMindtree enrolled over 4,600 employees in online AWS Skill Builder courses, virtual and in-person classroom training with hands-on labs, and AWS Certification exam readiness sessions. 18 months into the training, LTIMindtree is attracting new business opportunities and its sales team is more confident in proposing customized cloud solutions. It has also improved workforce retention and is attracting new talent.

4,600+ employees trained in 18 months | 6,200+ AWS Partner Accreditations | 450+ AWS Certifications | 9 AWS Competencies | 2x AWS business opportunities

About LTIMindtree
On November 14, 2022, Larsen & Toubro Infotech and Mindtree—consulting and digital solutions companies under the Larsen & Toubro Group—announced a merger, combining their strengths and unlocking the benefits of scale. The merged entity, LTIMindtree, now operates as a global technology consulting and digital solutions company helping more than 750 global enterprises proactively harness digital technologies. With operations in over 30 countries, LTIMindtree is now one of India’s largest IT services companies in terms of market capitalization. The data used in this story is based on the results of Larsen & Toubro Infotech’s partnership with AWS Training and Certification prior to the merger.

Opportunity | Upskilling Continuously for Creative Problem Solving
LTIMindtree is a digital solutions provider with more than 90,000 employees and a presence in over 30 countries. The company was formed via a merger on November 14, 2022 between former Larsen & Toubro Infotech (LTI) and Mindtree. LTIMindtree is committed to addressing its customers’ business challenges, as reflected in its tagline: ‘Getting to the future, faster. Together.’ To help its team members tackle customers’ challenges, LTIMindtree has an internal motto: shoshin, a Japanese concept that refers to having an attitude of openness, eagerness, and lack of preconceptions when studying a subject, also known as “beginner’s mind.” LTIMindtree continuously upskills employees so they can approach problems from all angles and develop innovative solutions.

Vijayakumar Pandian, associate vice president at LTIMindtree, says, “The cloud is spurring digital innovations across the industries we serve. It’s not a question of ‘can they build it,’ but rather, ‘how fast can they build it.’” Requests for cloud-based transformation projects are accelerating, and so is the demand for human resources who are certified in cloud operations. LTIMindtree’s goal is for every employee to have basic knowledge of AWS, with accreditation in business or technical areas. “Our customers want to do more with their data and are requesting trained engineers who are familiar with the AWS Well Architected principles,” Vijayakumar adds.

Prior to the merger, LTI had been an Amazon Web Services (AWS) Partner for over five years and acquired a business called Powerupcloud, an AWS Partner, in 2019. This was a catalyst for further engagement with AWS, to provide even more advanced technology consulting services. LTI entered into a three-year Strategic Collaboration Agreement (SCA) with AWS in March 2021. Part of the agreement included a commitment to help LTI’s customers harness the full potential of AWS, by training its employees with the help of the AWS Partner Training and Certification team. Concurrently, the business formed a separate business unit dedicated to the AWS Cloud.

Solution | Building a Cross-Functional, Flexible Program for Employees
Within one year, LTIMindtree trained about 4,600 technical and non-technical employees, with over 6,200 AWS Partner Accreditations and around 450 AWS Certifications achieved. Training was tailored to meet the needs of LTIMindtree’s complex organization structure, covering 9 business units, 15 industry verticals, and employees with different roles and skill sets spread across the globe.

One of the challenges LTIMindtree faced in designing a training program was its employees’ busy schedules and work commitments, which required a flexible approach to training. The AWS Training and Certification team offered a range of course formats, from digital to in-person classroom instruction, to help employees access the training anytime, in the format of their choice. The training program was organized across four learning pathways as defined by LTIMindtree: migration and modernization, SAP, Internet of Things (IoT), and data. The training plan prescribed three key training opportunities, including self-paced AWS Skill Builder courses, AWS Partner Courses with hands-on labs in a classroom setting, and AWS Certification exam readiness sessions.

In addition to technical training, the curriculum included seller enablement programs to help front-line employees—who might not have the right cloud knowledge to communicate the various use cases and challenges LTIMindtree can solve—understand the value of AWS Cloud solutions. “The seller enablement programs from AWS Training and Certification are powering our salespeople in specific verticals to engage in more meaningful cloud transformation conversations with customers,” says Vijayakumar.

Outcome | Doubling Sales Opportunities and Attracting New Talent
By pursuing a comprehensive AWS Training and Certification program, LTIMindtree has refined its expertise in assisting enterprises to achieve their cloud technology goals. In 2022, the provider won a contract with one of the largest banks in the United States to help the bank build an AWS-native data analytics stack. As a result of the training program, LTIMindtree has seen significant growth in the number of AWS business opportunities with new and existing customers. For example, from one financial quarter to the next, LTIMindtree doubled the number of sales opportunities related to AWS. “The service revenue from our AWS business has grown significantly, and the momentum continues to build,” says Vijayakumar.

The number of recognized technical initiatives undergone also elevates LTIMindtree in the eyes of its customers. After 18 months of AWS Training and Certification coursework, LTIMindtree has notched 9 AWS Competencies and is aiming for 15 by early 2023. The business has also achieved 12 Service Delivery designations for services such as Amazon EMR and AWS Database Migration Service (AWS DMS) and plans to achieve more relevant AWS Service Delivery designations in 2023 to showcase its deep expertise in AWS skills.

In addition, the training program is contributing to workforce retention and talent management. In response to specific requests from LTIMindtree’s leaders to attract and upskill fresh graduates, LTIMindtree worked with AWS to develop a new-hire training program. The program includes three dedicated days of training followed by two days of on-the-job coaching. “Cloud is going to be the fabric of everything graduates do in the future, and they recognize the value of training early in their careers. Programs such as AWS Training and Certification are helping us attract and retain employees, because they believe in an organization that continuously helps them upskill,” Vijayakumar says.

As part of its three-year investment in workforce development, LTIMindtree has committed to train an additional 5,000–8,000 people in the next 12 months. The business is more than halfway through its three-year training plan. Furthermore, innovation is on the rise because of the training program. LTIMindtree recently introduced three new solutions for customers in the insurance and media industries. “We’re able to innovate faster, launch new solutions, have more meaningful conversations with customers, and drive new business; it’s a snowball effect,” Vijayakumar concludes.

“The service revenue from our AWS business has grown significantly, and the momentum continues to build.” — Vijayakumar Pandian, Associate Vice President, LTIMindtree

AWS Services Used
AWS Skill Builder is an online learning center that offers one-of-a-kind digital training built by experts at AWS.
AWS Certification helps learners build credibility and confidence by validating their cloud expertise with an industry-recognized credential, and organizations identify skilled professionals to lead cloud initiatives using AWS.
AWS Training and Certification provides free digital AWS Partner Accreditation courses for individuals in business and technical roles. These courses give you a foundational understanding of AWS products and services, best practices, and APN programs so you can effectively address customer business and technical needs. AWS Partner Accreditation courses are available on demand and allow you to learn at your own pace. To learn more, visit aws.amazon.com/training.
Lucid Motors and Zerolight Case Study.txt
Lucid Motors and ZeroLight Host Virtual Car Launch on AWS, See 46% Higher Conversion Rate

Benefits of AWS: Increased conversion rate by 46% | Increased the revenue generated per session by 51% | Increased user engagement on 3D configurator by up to 47% | Doubled visitors’ duration time on website versus visits to other automakers’ sites | Enabled 430,000 configurator sessions for virtual car launch in 10 weeks | Handled peaks of 650 concurrent users | Multiplies the power of local devices by 10x

About ZeroLight
ZeroLight is an automotive visualization specialist that integrates cutting-edge technologies and personalized media into a single market-leading platform. Its automotive solutions enhance every stage of the vehicle-shopping journey by increasing engagement, delivering hyperpersonalization, and driving sales.

Even before the COVID-19 pandemic temporarily closed dealerships worldwide, the average car-shopping experience was trending from traditional showrooms to the internet: the average number of times a car buyer visits a dealership before a purchase has dropped from 7 to 1.5 in the past decade. In reaction, automotive visualization software specialist ZeroLight offers SpotLight Suite, a cloud-based platform that brands, agencies, and dealers use to customize sales and marketing to each shopper. SpotLight users create personalized sales materials with visual content production informed by the car models that shoppers build with ZeroLight’s Palette and Palette+ configurators. In 2020, nascent luxury electric carmaker Lucid Motors enlisted ZeroLight to differentiate itself ahead of the launch of its flagship vehicle, the Lucid Air sedan.

To offer customers a seamless experience, ZeroLight needs readily accessible compute power—so it turned to Amazon Web Services (AWS), which offers globally available GPU instances, low-latency content-delivery tools, and a large selection of advanced artificial intelligence services to help marketers find and engage with customers. ZeroLight implemented Amazon Elastic Compute Cloud (Amazon EC2) G4 Instances powered by NVIDIA T4 Tensor Core GPUs. They are the industry’s most cost-effective and versatile GPU instances for graphics-intensive applications such as remote graphics workstations and graphics rendering. Those G4 Instances were key to the success of the Lucid Air’s September 2020 online launch, which was moved online because of the COVID-19 pandemic.

Keeping Up with an Evolving Industry on AWS
Formed in 2014, ZeroLight works to fully integrate online and in-person car shopping. “We’re moving away from a linear buying funnel—where we take a customer from an advert to a website to a dealer—to a circular, more flexible journey where the customer chooses what they want to do,” explains Francois de Bodinat, chief product officer at ZeroLight. Using a digital twin model created with computer-aided design data from the car’s production, ZeroLight shows shoppers a photo-realistic rendering customized to their specifications. That personalized model informs every part of the customer journey from advertising to conversion, optimizing online retail advertising for automakers and better engaging with car shoppers: the goal is for every email, webpage, and ad they see to reflect their personalized car model rather than a generic car. “We want to make the customer the center of the sales process—not to feel like ‘I’m buying a Lucid,’ but to feel like ‘That’s my Lucid. And they know me,’” says Thomas Orenz, director of digital interactive marketing for Lucid Motors.

ZeroLight needs a lot of power to provide that level of graphical output and real-time computation. Before, that meant being tethered to a high-end physical computer, confining the company to working with dealerships. But that wasn’t sustainable for growth. In an increasingly remote world, customers want to benefit from quality wherever they are, and three-quarters of the car buyer’s journey happens online. Using Amazon EC2 G4 Instances, ZeroLight can offer its vehicle configurator to end users on their own devices. “We needed to bring that physical machine to the cloud and keep the power needed to serve high-quality content,” says de Bodinat. “On AWS, we have more capabilities in the cloud than we would have with physical machines.” Rather than having to be in store to use ZeroLight’s configurator, shoppers can now access it on a smartphone; ZeroLight can use AWS to deliver an iPhone 11 viewing experience that is 10 times more powerful.

Hosting a Successful Virtual Launch Using Amazon EC2 G4 Instances
Lucid had planned to launch the Air at the 2020 New York Auto Show. When the COVID-19 pandemic dashed those plans, the automaker decided that a fully online launch created as many opportunities as challenges. “Wherever you can engage with the customer, you should,” Orenz says.

Lucid planned a virtual launch for the Air, and ZeroLight built the company a website to facilitate customer engagement and mimic the in-person shopping experience. Customers and journalists could navigate around the vehicle as if in a showroom and inspect every detail—from home. At peak traffic, 650 users concurrently configured their own Air model using the interactive 3D experience—a number enabled by ZeroLight’s ability, derived from AWS, to elastically provision more and then release unneeded instances to cost-effectively meet demand. Visitors’ sessions lasted twice as long as visits to other automakers’ sites. Though other launches are lucky to see a 10 percent conversion for reservations, Lucid saw 17 percent through the configurator.

Using the scalable compute power of AWS, ZeroLight gives its customers free rein to create a personalized car-shopping experience for end users. “I don’t know where ZeroLight would be if we had to manage a farm of servers as assets,” admits de Bodinat. “The credibility of AWS in the market helps to gain trust with the customer to say, ‘Hey, it’s powered by AWS. You’re safe.’” Shoppers can configure the car to meet their preferences using ZeroLight’s Palette+, powered by Amazon EC2 G4 Instances. When visitors reach the Lucid website, AWS needs just 5 seconds to find their location across the United States, Europe, or the United Arab Emirates; trigger the engine on ZeroLight; begin 3D streaming; and deliver the first live image. Each session is assigned a dedicated EC2 instance, enabling Lucid to deliver immersive, 360-degree visualizations. These feature world-first volumetric-video environments brought to Lucid by ZeroLight and the AWS team, which are enhanced by another world first: real-time, cloud-rendered ray tracing, a technique that realistically re-creates the way light interacts with physical objects—enabled by NVIDIA GPUs, which power the Amazon EC2 G4 Instances.

In the first 10 weeks after the Lucid Air’s debut, more than 436,000 sessions were recorded. Compared to an image-based experience in A/B testing, Lucid has seen a 46 percent increase in car reservations from visitors who engage with the fully interactive configurator, and the revenue generated per session has increased by 51 percent. It also saw increased user engagement on the 3D configurator by up to 47 percent.

“I’ve never seen so much engagement on a single website at launch,” Orenz says. “There were other major launches around the same time; we totally overperformed those numbers in terms of sessions, engagement, and concurrent users on the site and the configurator—and in making reservations. And it’s stable—whatever we did, we couldn’t break it.”

Continuously Improving the Customer Experience on AWS
ZeroLight plans to increase the configurator’s capabilities by integrating with other platforms such as Salesforce and Facebook. The company recently announced the reveal of the 2022 Mitsubishi Outlander directly on an Amazon Live landing page using ZeroLight Palette+ live configurator technologies. Lucid looks forward to a ZeroLight-built virtual reality experience using only NVIDIA CloudXR and AWS.

“We needed to bring that physical machine to the cloud and keep the power needed to serve high-quality content. On AWS, we have more capabilities in the cloud than we would have with physical machines.” — Francois de Bodinat, Chief Product Officer, ZeroLight

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering.
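Each configurator session is assigned a dedicated Amazon EC2 G4 instance. As a rough illustration of that per-session model, the sketch below provisions and tags one G4 instance with boto3; ZeroLight's actual provisioning and pooling logic is not public, so the AMI ID, region, instance size, and tags are hypothetical placeholders.

# Illustrative sketch only: launch a single GPU instance for one rendering
# session and tag it so it can be released when the session ends.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical rendering AMI
    InstanceType="g4dn.xlarge",        # NVIDIA T4 GPU instance family
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "session-id", "Value": "example-session"}],
    }],
)
print(response["Instances"][0]["InstanceId"])

In practice a pool of pre-warmed instances would likely sit in front of this to hit the 5-second first-image target described above; the sketch only shows the elastic provision-and-release idea in its simplest form.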
Lyell GxP Compliance _ Case Study _ AWS.txt
Lyell Reduces Time to Validate GxP Compliance from Weeks to Minutes Using AWS

Learn how Lyell Immunopharma automates continuous GxP compliance and deploys system changes and upgrades faster using AWS.

Minutes to run validation processes versus 2–3 weeks | Deployed validation tests automatically | More time generated to focus on other business areas, by eliminating manual workflows | Reduced manual errors

About Lyell Immunopharma
Lyell Immunopharma is a clinical-stage T-cell reprogramming company headquartered in South San Francisco, California, dedicated to developing curative cell therapies for patients with solid-tumor cancer.

Overview
For Lyell Immunopharma (Lyell), an immuno-oncology company with a mission to cure solid tumor cancers, it is critical to validate the systems and applications for its T-cell reprogramming workflows to comply with US Food and Drug Administration (FDA) regulations. Previously, these validations were done manually, which was expensive, time-consuming, and prone to potential errors. To facilitate compliance and meet the FDA’s computer software assurance (CSA) guidelines, Lyell needed a more efficient validation process.

Lyell turned to Amazon Web Services (AWS) and built Rapid Q, a solution that automatically validates FDA compliance and documents changes made to an environment or application. Now, the company can validate compliance in minutes instead of weeks and deploy new updates and systems at a faster pace.

Opportunity | Reducing the Time to Validate GxP Compliance from Weeks to Minutes
With a mission to use autologous cell therapy to cure solid-tumor cancer, Lyell uses reprogrammed T cells to develop potential new therapies. Once extracted, these T cells are processed at Lyell’s manufacturing facility and then infused back into the patient. To mitigate process deviations that could lead to negative patient outcomes, it is critical to validate the environment in which all the manufacturing systems are running. The data generated by the system also needs to have robust integrity and accuracy so that it can be used for analytics downstream. However, Lyell’s manual validation process was slow and not scalable. “We would compare screenshots to assert that each environment matched our specifications. This was laborious and prone to human error,” says Adin Stein, head of IT, cloud infrastructure, and cybersecurity at Lyell. “The process would take anywhere from 2 to 3 weeks.”

Lyell wanted to increase the efficiency and reliability of its compliance validation workflows using automation, not only for the initial implementation but also for periodic system updates. This was important so that Lyell could gain the agility that it needed to adopt new technologies and make frequent upgrades, without the barriers created by manual validation reporting. An AWS customer since 2018, the company turned to the range of curated industry solutions on AWS to streamline this labor-intensive process. It worked with AWS Professional Services, a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. “What sets AWS apart is its breadth and depth of services and expertise and its commitment to the healthcare and life sciences industry,” says Stein.

Solution | Building Rapid Q on AWS to Automate Compliance Validation
Working with its internal quality team, Lyell built Rapid Q, an automated reporting solution for compliance validation, to assess and document every code change made to its infrastructure. “With Rapid Q, we automated not only the specifications that define each environment or application but also the validation testing,” says Stein. “As we make changes, we can run tests with the push of a button, decreasing the time that it takes to validate compliance from weeks to minutes. We can also generate reports for our quality team automatically.”

When Lyell makes any change to the software code base on its systems, a continuous integration (CI) workflow is initiated and runs a series of tests to qualify the installation. These test results are posted to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Each incoming test result initiates workflows to generate Rapid Q reports, powered by AWS Lambda, a serverless, event-driven compute service that lets organizations run code without provisioning or managing servers. To verify that none of the messages are lost and prevent the system from becoming overwhelmed with multiple incoming changes, Lyell relies on Amazon Simple Queue Service (Amazon SQS), an automatic, fully managed message queuing service.

The Rapid Q system parses the data from the test results to generate automated compliance reports and confirm that the installation meets compliance specifications. Lyell also uses Amazon Simple Notification Service (Amazon SNS), a fully managed messaging service for both application-to-application and application-to-person communication, to send out notifications each time a new Rapid Q report is generated or alerts if an issue arises.

With this automation in place, Lyell can spend more time writing test cases and less time documenting changes, identifying areas of compliance risks, and performing exploratory analytics. To complete the auditing process, it uses Amazon DynamoDB, a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability, to store data and re-create compliance documentation. Multiple systems use Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, to connect to Rapid Q for validation, including Lyell’s commercial environment monitoring and endotoxin testing systems. With Rapid Q, Lyell has achieved significant time savings and gained a scalable, paperless environmental monitoring solution that is future proof.

Lyell is also aligning with the FDA’s new CSA guidance, which encourages manufacturers to spend 80 percent of their time on critical thinking and applying testing to higher-risk activities and the remaining 20 percent on documenting IT environments and applications. Because Rapid Q automatically documents any changes, Lyell no longer needs to create reports manually. “This has freed up our resources so that we can focus on other critical aspects of the business,” says Stein. “Now, we can spend more time building solutions that help interpret data coming from manufacturing facilities and clinical sites.”

Outcome | Future Proofing GxP Validation on AWS
Using Rapid Q, Lyell has significantly reduced the time and cost involved in validating compliance for its systems, which has improved its agility to deploy new features and upgrades at a faster pace. More importantly, Lyell can remain in a state of reporting compliance whenever changes are made to its underlying processes through automation, saving time and reducing human error. “Every time we perform an upgrade or implement a new system that needs to be validated, we realize the immediate benefits of Rapid Q,” says Stein. “We can deliver new solutions to the business faster and at a lower cost. We can spend more time interpreting and building solutions to better understand our manufacturing data in a richer, more accelerated way.”

On AWS, Lyell has reduced manual effort for compliance and can focus more on innovation. In the future, it will use Rapid Q to run all its cloud workloads that require validation. To support these initiatives, Lyell will continue to build on AWS. “AWS brings a lot to the table in terms of opportunities,” says Stein. “We want to take full advantage of them.”

“What sets AWS apart is its breadth and depth of services, its expertise, and its commitment to the healthcare and life sciences industry.” — Adin Stein, Head of IT, Cloud Infrastructure, and Cybersecurity, Lyell Immunopharma

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
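The event-driven flow described above—test results landing in Amazon S3, AWS Lambda generating a Rapid Q report, Amazon DynamoDB storing it, and Amazon SNS notifying the quality team—can be sketched as a single Lambda handler. This is a hedged illustration, not Lyell's implementation: the bucket, table, and topic names are hypothetical placeholders, and the Amazon SQS buffering step mentioned in the case study is omitted for brevity.

# Illustrative sketch only: Lambda handler invoked by an S3 object-created
# event; it parses the test result, records it, and publishes a notification.
import json
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table(os.environ.get("REPORT_TABLE", "rapid-q-reports"))
sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "REPORT_TOPIC_ARN",
    "arn:aws:sns:us-west-2:123456789012:rapid-q-reports",  # hypothetical topic
)

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        result = json.loads(body)  # parsed CI test result (assumed JSON)
        table.put_item(Item={"report_id": key, "status": result.get("status", "unknown")})
        sns.publish(TopicArn=TOPIC_ARN, Message=f"Rapid Q report generated for {key}")

Pushing each report through a handler like this is what keeps the documentation step automatic: every code change that produces a test result also produces a stored, auditable report and a notification, without anyone assembling evidence by hand.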
MARVEL SNAP_ How Second Dinner and Nuverse Built and Scaled the Mobile Game of the Year Using AWS for Games _ Case Study _ AWS.txt
MARVEL SNAP: How Second Dinner and Nuverse Built and Scaled the Mobile Game of the Year Using AWS for Games

Learn how Second Dinner and Nuverse used AWS managed services to build a scalable architecture that supports millions of players worldwide.

Millions of players worldwide | Reduced time to market for new game features | 20 full-time engineering jobs saved from backend management

About Second Dinner
Based in California, Second Dinner is a startup independent game studio founded in 2018. Its first game, MARVEL SNAP, won Mobile Game of the Year within 4 months of its release.

About Nuverse
Nuverse is the gaming division of the Chinese internet technology company ByteDance and a game development and publishing brand for players and developers around the world.

Overview
The founders of Second Dinner had an ambitious vision: for its small team of engineers to develop and maintain a free-to-play online game for millions of users worldwide. The company wanted to launch quickly and free developers to work on game features rather than maintain infrastructure. In collaboration with its publisher, Nuverse, Second Dinner built an innovative serverless architecture that quickly scaled to millions of players using managed solutions from Amazon Web Services (AWS). Within 4 months of its release, the game became one of the most popular and critically acclaimed games in the world and won the Mobile Game of the Year award.

Opportunity | Increasing Game Development Speed and Flexibility Using AWS for Games
Second Dinner founders were behind the successful digital card game Hearthstone, which had gained 10 million player accounts within 1 month of its release in 2014. As a newly formed independent game studio in 2019, Second Dinner secured a license from Marvel Entertainment and began to develop a game based on Marvel characters. At an industry event, the team by chance met representatives from Nuverse, the gaming division of ByteDance, who were looking to collaborate with experienced studios with global ambitions. Second Dinner engineers showed the Nuverse team a prototype of MARVEL SNAP, in which players compete in an online Marvel universe with digital decks of cards that contain special powers.

“Nuverse brings scale to developers, including access to key capabilities that indie studios don’t have in house, such as marketing resources and investments,” says Tom van Dam, head of the Nuverse global business development team. “We also are responsible for the backend infrastructure, which gives autonomy and creative freedom to the US developers.” Additionally, Second Dinner and Nuverse gain greater insights into infrastructure costs, and they avoid operating under the burden of financial commitments to hardware and software that they had to build themselves. “What was important for us from the beginning was the cost aspect,” says van Dam. “We’ve also been able to conquer time zones and language barriers. We work alongside AWS teams in multiple locations, supporting an infrastructure that doesn’t require a lot of time away from focusing on development of core features.” The architecture’s support for match play across regions facilitates the implementation of new features. For example, the Battle Mode game feature allows players to compete live against their friends in addition to anonymous players on the internet. MARVEL SNAP launched in October 2022 and rapidly scaled to millions of global players in a few months. Early stress tests had pushed concurrency levels to 140,000 games per minute without interruptions, giving the team confidence that it could handle massive numbers of users. “Second Dinner engineers have been through many game launches before and, to a person, we felt like this was the smoothest, most successful launch technically that we’d ever experienced,” says Brunstetter. “Without a doubt, our reasons for that were the choices we made and the services provided by AWS.”

Solution | Building a Fully Managed Serverless Architecture for Developers to Focus on Game Features
Traditionally, similar games run on a single server in a data center or in the cloud, but Second Dinner had committed to a serverless architecture using solutions from AWS for Games, which helps customers to build, run, and grow their games with purpose-built cloud services and solutions. “We adopted AWS early on and identified a set of services that could help us accomplish our goal,” says Aaron Brunstetter, Second Dinner’s vice president of engineering. “We realized that we could just use AWS and focus on things that we could do uniquely and powerfully.” Second Dinner developed the game under its own AWS account, then migrated the architecture to Nuverse’s AWS account for stress testing and deployment. Teams from Second Dinner and Nuverse worked alongside AWS technical account managers to complete the transfer in 3 weeks. “On our own, it would have taken us about 6 months,” says Brunstetter. “The near-immediate turnaround was essential to a successful launch.” The fully managed serverless architecture means that engineers can focus on game features, not infrastructure. “The support from AWS has helped our organization to learn quickly,” says van Dam. “The essentially problem-free launch of MARVEL SNAP speaks for itself.”

MARVEL SNAP accommodates millions of players across its six global regions. A player’s mobile device calls a game client that connects to Amazon API Gateway, a fully managed service that makes it simple to create, publish, maintain, monitor, and secure APIs. Amazon API Gateway invokes functions of AWS Lambda, a serverless, event-driven compute service that helps organizations run code for virtually any type of application or backend service without provisioning or managing servers. Second Dinner built its serverless architecture around AWS Lambda functions that integrate with other AWS services within Nuverse’s account for stable online user experiences.

To further build resilience into the architecture, Second Dinner uses Amazon EventBridge, a serverless event bus that helps to receive, filter, transform, route, and deliver events. For example, events from Amazon EventBridge can trigger AWS Lambda to update player data stored in Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database. “We didn’t want to build a backend for the game,” says Moore. “We were building the actual game, and that’s where we want to spend all our time.” In fact, Second Dinner saves the equivalent of up to 20 additional engineers who otherwise would have needed to focus completely on running servers and managing the backend infrastructure.

An important feature of MARVEL SNAP is matchmaking: the evaluation and selection of compatible players for card battles in seconds. As its in-house matchmaking solution reached scalability limits, Second Dinner turned to a feature of Amazon GameLift, which provides dedicated server management for session-based multiplayer games. The company used the feature Amazon GameLift FlexMatch as a stand-alone matchmaking service that it customized to MARVEL SNAP’s needs. Second Dinner’s use of Amazon GameLift FlexMatch resulted in the highest volume of matches ever for a game using the service. “The stand-alone Amazon GameLift FlexMatch feature slotted right in, fitting the event-driven serverless architecture that we had already embraced,” says Brenna Moore, Second Dinner senior software engineer. “It provided configurable rule sets and let us do what we needed to get a quality match make.”

Outcome | Scaling Smoothly to Millions of Players Worldwide
In 2022, MARVEL SNAP won Best Mobile Game at The Game Awards. Second Dinner continues to push new features as the game continues to rise in popularity, aiming to serve millions more players around the world concurrently. “MARVEL SNAP is a great flagship product,” says van Dam. “The Second Dinner team has the ambition of getting to a really big user base worldwide, and we’re delivering at scale. We want to replicate what we did for MARVEL SNAP with a lot more developers.”

“To a person, we felt like this was the smoothest, most successful launch technically that we’d ever experienced. Without a doubt, our reasons for that were the choices we made and the services provided by AWS.” — Aaron Brunstetter, Vice President of Engineering, Second Dinner

AWS Services Used
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
Amazon EventBridge makes it easier to build event-driven applications at scale using events generated from your applications, integrated SaaS applications, and AWS services.
Amazon GameLift deploys and manages dedicated game servers hosted in the cloud, on-premises, or through hybrid deployments. Amazon GameLift provides a low-latency and low-cost solution that scales with fluctuating player demand.
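The stand-alone matchmaking flow described above can be illustrated with a single FlexMatch call. This is a hedged sketch, not Second Dinner's code: the matchmaking configuration name and player attributes are hypothetical placeholders standing in for whatever rule set the game actually uses.

# Illustrative sketch only: submit one player to a stand-alone Amazon GameLift
# FlexMatch matchmaking configuration and read back the ticket status.
import uuid

import boto3

gamelift = boto3.client("gamelift")

ticket = gamelift.start_matchmaking(
    TicketId=str(uuid.uuid4()),
    ConfigurationName="snap-ranked-queue",        # hypothetical configuration
    Players=[{
        "PlayerId": "player-123",
        "PlayerAttributes": {"rank": {"N": 42}},  # numeric attribute the rule set can match on
    }],
)
print(ticket["MatchmakingTicket"]["Status"])

In an event-driven backend like the one described, the match result would typically arrive asynchronously (for example, as a FlexMatch event routed through Amazon EventBridge to a Lambda function) rather than by polling the ticket.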
Maxar Case Study.txt
Maxar Uses AWS to Deliver Forecasts 58% Faster Than Weather Supercomputer

Benefits of AWS: Generates weather forecasts 58% faster | Decreases compute costs by 45% | Reduces required server instances by 33% | Automatically spins 156 server instances up and down | Provides clients with more time to react to extreme weather

About Maxar Technologies
Maxar delivers Earth Intelligence and space infrastructure and currently has more than 90 geo-communication satellites in orbit and five robotic arms on Mars. The company collects data across more than 3 million square kilometers of satellite imagery per day and has an archive of over 110 petabytes of satellite images spanning the globe.

When weather threatens drilling rigs, refineries, and other energy facilities, oil and gas companies want to move fast to protect personnel and equipment. And for firms that trade commodity shares in oil, precious metals, crops, and livestock, the weather can significantly impact their buy-sell decisions. To limit damage, these companies need the earliest possible notice before a major storm strikes. That’s the challenge Maxar Technologies set out to solve.

Accelerating Forecast Delivery
Historically, many industries have relied on reports generated by the on-premises supercomputer operated by the National Oceanic and Atmospheric Administration (NOAA). However, the weather predictions take an average of 100 minutes to process global data. Over time, many companies began to realize they would require much faster weather warnings to protect their interests. Similar to how NASA has expanded its partnerships with private firms to acquire commercial space hardware and services, the processing and delivery of critical weather data products could also be effectively commercialized.

To resolve this issue, Maxar sought to significantly reduce the time needed to generate numerical weather predictions. Its data scientists, engineers, and DevOps team decided to build a high performance computing (HPC) solution to deliver forecasts in half the time of the NOAA supercomputer. “We first considered an effort that would involve building the system in an on-premises data center,” says Travis Hartman, director of analytics and weather at Maxar. “But we realized we needed a cloud environment to build a cost-effective solution that our DevOps team could easily manage and which would allow us to significantly reduce our timeline to get the results to market.”

So Maxar turned to Amazon Web Services (AWS). “We knew HPC on AWS could provide an environment that balances performance, cost, and manageability,” Hartman says. “The key AWS capabilities we wanted to leverage for our numerical weather prediction application included automatic environment builds and shutdowns, elastic compute resources, the necessary networking bandwidth to crunch the numbers quickly, and the ability to do so with the velocity required by our business and customer goals.”

Cloud HPC Achieves the “Impossible”
Maxar worked with AWS to create an HPC solution that includes four key technologies. The company relies on Amazon Elastic Compute Cloud (Amazon EC2) for highly secure, resizable compute resources and the ability to configure capacity with minimal friction. Maxar also uses the Elastic Fabric Adapter (EFA) network interface to run its application with a hardware bypass interface that speeds up inter-instance communications. To complement the enhanced computing and networking, the application uses Amazon FSx for Lustre to accelerate the read/write throughput of the application. Maxar also takes advantage of AWS ParallelCluster, an open source cluster management tool that makes it easy to deploy HPC clusters with a simple text file that automatically models and provisions resources.

“Prior to using AWS, no one thought any cloud environment was capable of outperforming an on-premises supercomputer in generating numerical weather predictions,” says Stefan Cecelski, a data scientist at Maxar. “But with the fast networking speed provided by AWS, we accomplished what many IT experts considered impossible.”

Optimizing Compute Costs to Compete against a Free Service
Initially, Maxar designed a cloud HPC cluster with 234 Amazon EC2 instances capable of producing a numerical weather prediction forecast in roughly 53 minutes, just about half the 100 minutes that the NOAA supercomputer takes to complete the same forecast. This accomplished Maxar’s initial performance goal, so the team set its eyes on enhancing the design to reduce cost.

Having achieved its performance goal, Maxar next focused on delivering the service profitably. Maxar needed to keep the cost of its weather application as low as possible to compete with the free, yet slower, service that NOAA provides. Maxar realized this objective by reducing the number of servers and optimizing the cost of the system—without negatively impacting performance. By using AWS ParallelCluster with Amazon EC2 C5n instances and EFA, Maxar generates the same computing power while decreasing the number of clustered servers by 33 percent.

Using EFA networking, Maxar reduced that cluster from 234 c5.18xlarge instances to just 156 c5n.18xlarge instances, which was driven by the ability of the C5n instances to communicate at 100 Gbps network speeds. The EFA interconnect made it possible to outperform the NOAA supercomputer, shortening the forecast time even further—from 53 to 42 minutes, a 22 percent decrease. The team’s new configuration can now produce a forecast 58 percent faster than NOAA’s supercomputer. Additional testing and optimization with AWS revealed Maxar could complete a forecast in under 30 minutes. With further system tuning, Maxar projects it can cut its processing time by an additional 25 percent. The environment automatically spins up when weather data becomes available and then quickly shuts down until a new dataset is available, using numerous AWS services to orchestrate a highly scalable, redundant, and fault-tolerant workflow. The overall cost-optimization measures applied by AWS—including the integration of Amazon EC2 C5n instances with EFA—have enabled Maxar to reduce compute cost by approximately 45 percent. “We need the AWS compute resources for only about 45 minutes each day to run our numerical weather prediction application, so it is a huge benefit to have an AWS environment that we can use only when required,” says Cecelski.

Thanks to the success of the application, Maxar clients can now take proactive measures earlier when assets and personnel are threatened by extreme weather. “Our clients can better protect equipment and evacuate personnel sooner,” says Hartman. “And if weather threatens a commodity, our financial clients now have more time to make buy-sell decisions.”

Shaping the Future of High Performance Computing
The comprehensive tools, utilities, and the overall AWS technology stack not only allowed Maxar to optimize the solution for cost and performance, but also to get to market more quickly. “In the past, it was typically cost-prohibitive for any non-government or non-academic entity to go through the procurement and investment activities to research, buy, build, configure, and then set up a traditional on-premises, bare-metal HPC environment,” says Hartman. “However, with AWS, the barrier for commercial solutions has truly been eliminated. Plus, given the experience our team has gained through setting up our cloud HPC programs and offerings, we are well-positioned to help numerical weather prediction users—and even the core authors of numerical weather prediction models like NOAA and ECMWF (European Centre for Medium-Range Weather Forecasts)—better understand and leverage commercial solutions for numerical weather prediction applications as well as other HPC needs for all areas of Earth Intelligence.”

In addition, Hartman says, “There are a number of new programs and funding vehicles being appropriated by the US government as well as international organizations that want to leverage HPC in the cloud. We believe Maxar’s experience and recent achievements should allow us to extend this technology into these same organizations.”

Cecelski concludes, “We look forward to taking advantage of new services as AWS continues to expand its offerings, shapes the future of HPC in the cloud, and helps enable us to deliver high-performing, cost-effective services to our clients.”

“With the fast networking speed provided by AWS, we accomplished what many IT experts considered impossible.” — Stefan Cecelski, Data Scientist, Maxar Technologies

To learn more, visit aws.amazon.com/hpc.

AWS Services Used
Amazon EC2 C5 instances deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS.
Amazon FSx for Lustre makes it easy and cost effective to launch and run the world’s most popular high-performance file system.
AWS ParallelCluster is an AWS-supported open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.
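Maxar drives its cluster builds through AWS ParallelCluster, which provisions the EFA-enabled C5n nodes described above from a simple text file. As a rough illustration of what that provisioning amounts to, the sketch below launches EFA-attached c5n.18xlarge instances into a cluster placement group directly with boto3; the AMI, subnet, security group, and placement group names are hypothetical placeholders, not Maxar's configuration.

# Illustrative sketch only: launch a small group of 100 Gbps C5n nodes with an
# Elastic Fabric Adapter attached, co-located in a cluster placement group.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # hypothetical HPC AMI
    InstanceType="c5n.18xlarge",                # 100 Gbps networking
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-pg"},  # cluster placement group for low latency
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
        "InterfaceType": "efa",                 # Elastic Fabric Adapter
    }],
)

The same spin-up-on-data-arrival, shut-down-when-done pattern the case study describes is what keeps the daily compute bill limited to roughly 45 minutes of cluster time per forecast run.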
Measurable-AI-case-study.txt
Amazon Simple Storage Service Leveraging Managed Services to Simplify Scaling and Control Overhead MailTime currently has 1.4 million users and Measurable AI processes more than 10 million emails each day to extract granular, itemized insights. These actionable insights are used by digital economy companies, consultancies, academia, and financial institutions to better predict revenues, and gain an in-depth understanding of their customer purchasing behavior and competitive intel. The company is currently the largest provider of e-receipt data across emerging markets, with a dominant position in Southeast Asia, the Middle East, Latin America, and India.   Français To offload the burden of database administration, Measurable AI is also using Amazon Relational Database Service (Amazon RDS) for MySQL. Gary Lau, cofounder and CTO of Measurable AI, says, “We determined that AWS managed services, such as Amazon EKS and Amazon RDS, would simplify scaling while controlling cost and overheads. This is important as we’re still a small team.” The startup currently has 20 employees in Hong Kong and the UK. Transferring Data Securely via Amazon S3 Buckets Español Reducing Query Times from Hours to Minutes Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises. Gary Lau Cofounder & CTO, Measurable AI The results have been impressive. Since adopting Amazon OpenSearch Service, Measurable AI has reduced average query times from hours to minutes, meaning customers can obtain actionable consumer insights at a faster speed. Furthermore, developers now utilize built-in dashboards for monitoring instead of building their own. The startup is saving at least 20 percent of developers’ time previously spent on monitoring and maintenance. “Amazon OpenSearch Service has delivered faster search and query performance with rich client libraries for easy integration. Plus, it’s freed up more time for us to focus on developing,” Lau says. 日本語 Contact Sales Alternative data is all about speed. Freeing up our developers’ time to deliver insights to the market faster is key, and managed services from AWS allows us to do that."   한국어 Lau says, “Amazon S3 is an industry standard for secured and convenient data sharing. The solution provides managed, secure, and scalable data storage with low latency. Another advantage is we can create a temporary link for customers to download data directly from Amazon S3 rather than our own servers, offsetting some bandwidth from our compute requirements.” Measurable AI can also transfer data via restful application programming interfaces (APIs) for customers that don’t have a data pipeline or prefer an alternative method to Amazon S3 buckets. Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. Measurable AI is a B2B provider of aggregated, anonymous data insights for digital economy companies, financial institutions, and researchers. Based in Hong Kong, its data coverage spans emerging markets in Southeast Asia, Latin America, and the Middle East. Benefits Get Started AWS Services Used In 2018, Measurable AI migrated to Amazon Web Services (AWS) from another cloud provider. Among other reasons, it sought to leverage the rich features available in Amazon Elastic Kubernetes Service (Amazon EKS), such as customized node groups to improve scalability, a feature not available with the company’s previous provider. 
To offload the burden of database administration, Measurable AI is also using Amazon Relational Database Service (Amazon RDS) for MySQL. Gary Lau, cofounder and CTO of Measurable AI, says, "We determined that AWS managed services, such as Amazon EKS and Amazon RDS, would simplify scaling while controlling cost and overheads. This is important as we're still a small team." The startup currently has 20 employees in Hong Kong and the UK.

Reducing Query Times from Hours to Minutes

After migrating to AWS, Measurable AI looked for other ways to improve operations with managed services on AWS. One of its focus areas is query performance, a key success criterion for the startup. In typical use cases, Measurable AI customers query the startup's data sets to explore and parse information about their own customers or markets.

Initially, Measurable AI deployed the open-source Elasticsearch engine on Amazon EKS. However, its developers were spending too much time maintaining infrastructure, and complex queries could take hours to run. The startup switched to Amazon OpenSearch Service, a managed analytics suite, to perform queries on the 70 TB of email data currently stored in Amazon Simple Storage Service (Amazon S3). Developers also appreciate the ease with which they can upgrade instance types without managing additional storage requirements and configuration changes. "If we need to improve query performance, we simply upgrade the instance and the attached storage is managed by Amazon OpenSearch Service," explains Lau.

The results have been impressive. Since adopting Amazon OpenSearch Service, Measurable AI has reduced average query times from hours to minutes, meaning customers can obtain actionable consumer insights at a faster speed. Furthermore, developers now use built-in dashboards for monitoring instead of building their own. The startup is saving at least 20 percent of the developer time previously spent on monitoring and maintenance. "Amazon OpenSearch Service has delivered faster search and query performance with rich client libraries for easy integration. Plus, it's freed up more time for us to focus on developing," Lau says.
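The article does not show what these customer queries look like. Purely as an illustration of the kind of aggregation a customer might run against an e-receipt index, here is a sketch using the opensearch-py client library. The domain endpoint, credentials, index name, and field names are hypothetical and do not reflect Measurable AI's actual schema.

```python
# Illustrative sketch: an aggregation query against a hypothetical e-receipt index
# in Amazon OpenSearch Service, using the opensearch-py client library.
# Endpoint, credentials, index, and field names are placeholders only.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-receipts-example.ap-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("example-user", "example-password"),  # SigV4 request signing is another option
    use_ssl=True,
)

# Weekly spend per merchant: filter to the last 7 days, bucket results by merchant,
# and sum the order totals within each bucket.
query = {
    "size": 0,
    "query": {"range": {"order_date": {"gte": "now-7d/d"}}},
    "aggs": {
        "by_merchant": {
            "terms": {"field": "merchant.keyword", "size": 10},
            "aggs": {"total_spend": {"sum": {"field": "order_total"}}},
        }
    },
}

response = client.search(index="e-receipts", body=query)
for bucket in response["aggregations"]["by_merchant"]["buckets"]:
    print(bucket["key"], bucket["doc_count"], bucket["total_spend"]["value"])
```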
Transferring Data Securely via Amazon S3 Buckets

To receive weekly insights from Measurable AI, most customers request data transfers via Amazon S3 buckets. Measurable AI defines read-only permission settings, grants access rights using AWS Identity and Access Management (IAM), and customers then receive data in Amazon S3 direct to their own data pipelines.

Lau says, "Amazon S3 is an industry standard for secured and convenient data sharing. The solution provides managed, secure, and scalable data storage with low latency. Another advantage is we can create a temporary link for customers to download data directly from Amazon S3 rather than our own servers, offsetting some bandwidth from our compute requirements." Measurable AI can also transfer data via RESTful application programming interfaces (APIs) for customers that don't have a data pipeline or prefer an alternative method to Amazon S3 buckets. (A sketch of the temporary-link pattern appears at the end of this page.)

Freeing Up Developers with Serverless Technology

Last year, Measurable AI introduced RewardMe, a cashback app that rewards individual users for contributing anonymous data points. Consumers sign up for RewardMe, link the app to their credit card or email account, and automatically earn cryptocurrency or cash back with every purchase they make across 100 merchants worldwide. To reduce time-to-market, Measurable AI used AWS Fargate as a serverless compute engine to launch RewardMe.

The startup is growing its customer base for both its B2C and B2B operations and is prepared to scale with an agile foundation on AWS. Lau concludes, "Alternative data is all about speed. Freeing up our developers' time to deliver insights to the market faster is key, and managed services from AWS allows us to do that."

Learn More

To learn more, visit aws.amazon.com/solutions/analytics.

AWS Services Used

Amazon Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.

Amazon OpenSearch Service
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more.

AWS Fargate
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.

Amazon Simple Storage Service
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
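The case study does not specify how Measurable AI generates the temporary download links Lau mentions in the data-transfer section above, but a common way to implement that pattern is an Amazon S3 presigned URL. The sketch below uses boto3; the bucket name, object key, and expiry period are hypothetical.

```python
# Hedged sketch: generating a time-limited (presigned) download link for an object
# in Amazon S3 with boto3, one common way to implement the "temporary link" pattern
# described above. Bucket name, object key, and expiry are placeholders.
import boto3

s3 = boto3.client("s3", region_name="ap-east-1")

# The link grants read access to this single object and expires automatically,
# so no long-lived credentials need to be shared with the customer.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-insights-bucket", "Key": "weekly/2022-week-30.csv.gz"},
    ExpiresIn=7 * 24 * 3600,  # link valid for 7 days, the maximum for SigV4 presigned URLs
)
print(url)
```

For recurring weekly deliveries straight into a customer's own pipeline, the read-only IAM access the article describes avoids minting links at all; presigned URLs suit the ad hoc download case.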