Mediality Leverages Automation to Deliver Racing Data Faster on AWS _ Case Study _ AWS.txt
Mediality Racing worked with AWS Partner Cevo to migrate from legacy Microsoft Windows workloads and develop a cloud-native, serverless data framework using AWS Amplify and AWS Lambda. With Cevo's help, Mediality has automated several formerly manual workflows. Efficiency has risen sharply, and employees can redirect their attention to higher-value tasks such as product development. Employee satisfaction has likewise increased because monotonous, time-consuming tasks have been removed from daily workflows. "We can use our resources and in-depth racing knowledge better to our competitive advantage," McLean explains. The increase in automation across all data processes has drastically improved operation-wide efficiency. Mediality also has greater visibility into workflows on the AWS Cloud, so it can see where further automation could be introduced. Its teams are currently putting the finishing touches on a public API, which will be a first for the business.

AWS Landing Zone is a solution that helps customers set up a secure, multi-account AWS environment based on AWS best practices more quickly. Given the large number of design choices, setting up a multi-account environment can take a significant amount of time, involve the configuration of multiple accounts and services, and require a deep understanding of AWS services.

Formed after the 2020 separation of Australian Associated Press (AAP), Mediality Pty Ltd offers diverse media and publishing solutions, including the country's premier press release distribution network. Its Mediality Racing division, formerly AAP Thoroughbred Information Services and then AAP Racing, has decades of experience delivering data on thoroughbred horses to clients such as wagering operators, horse owners, and individual punters.
Mediality, a company formed of business units previously known as the Australian Associated Press, provides modern media and publishing solutions for businesses of all sizes. To offer faster, more flexible data delivery, its Mediality Racing division decided to migrate from Microsoft Windows and older legacy workloads in the data center to open-source alternatives on the AWS Cloud.

Amazon DocumentDB (with MongoDB compatibility) is a fully managed native JSON document database that makes it easy and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software-as-a-service (SaaS) applications, and you pay only for what you use.

Opportunity | Modernizing 40-Year-Old Data Center Architecture

Mediality has highly skilled developers on staff, but most of their experience before this project was with the .NET framework, and they were struggling to keep up with the company's growth. To build upon its developers' expertise, the business chose to work with Cevo, an Amazon Web Services (AWS) Partner. Mediality had other workloads on AWS and wanted to execute the data project on a trusted platform following cloud best practices. The company has an ongoing relationship with Cevo and valued its deep knowledge and experience in developing solutions for customers, including those in the racing industry, using AWS NoSQL and serverless technologies.
Because racing workflows are cyclical and prone to spikes just before events, Cevo recommended that Mediality use a serverless, pay-per-use approach for data transfers. Mediality now uses AWS Lambda serverless code to check for and automatically retrieve input data as it is updated. Data retrieval and ingestion are fully automated, event-driven processes. Many files that formerly required manual transfer are now sent immediately to customers, saving about 10–15 minutes per event. Previously, Mediality Racing's account manager would spend at least 2 hours a day preparing and loading files for each race. "This project will finally allow our account manager to focus on business and product development," explains Philip McLean, managing director at Mediality Racing.

Solution | Developing User-Friendly, Cloud-Native Data Workflows

After analyzing how data was flowing in and out of its core database, Cevo helped Mediality migrate from Microsoft SQL Server, a relational database hosted in a managed data center, to Amazon DocumentDB, a fully managed non-relational database service.

Mediality, formed after the Australian Associated Press (AAP) was restructured in 2020, provides modern media and publishing solutions for businesses of all sizes. Its Mediality Racing division (formerly AAP Racing) has been supplying accurate, updated horse racing data used in form guides for nearly four decades.
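The case study does not publish Mediality's code, but the event-driven pattern it describes (a Lambda function fires when an updated racing file arrives and forwards it to interested customers) can be sketched minimally. Everything here is hypothetical: the object-key prefixes, feed names, and routing table are invented for illustration, and the real delivery step would use boto3 or a delivery API rather than just building a plan.

```python
import urllib.parse

# Hypothetical routing table: which downstream customer feeds care about
# which kinds of racing files. Prefixes and feed names are invented.
CUSTOMER_ROUTES = {
    "form-guides/": ["newspaper-feed", "wagering-api"],
    "results/": ["wagering-api"],
}

def route_for_key(key: str):
    """Return the downstream feeds interested in this object key."""
    return [feed for prefix, feeds in CUSTOMER_ROUTES.items()
            if key.startswith(prefix) for feed in feeds]

def lambda_handler(event, context):
    """Parse an S3 put-object event and decide where each file should go.

    In a real deployment the inner loop would forward the file (for
    example with boto3's s3.copy_object or an HTTP delivery call);
    here it only builds the delivery plan so the sketch stays runnable.
    """
    deliveries = []
    for record in event.get("Records", []):
        # S3 event keys arrive URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        for feed in route_for_key(key):
            deliveries.append({"key": key, "feed": feed})
    return {"statusCode": 200, "deliveries": deliveries}
```

Wiring this handler to an S3 event notification (or an EventBridge rule) is what removes the manual transfer step the account manager used to perform.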
Mediality Racing had attempted a piecemeal approach to modernization, but this ended up adding rather than reducing workflow complexity. Meanwhile, several of its customers were asking for more modern data delivery formats, including application programming interfaces (APIs). A company restructuring in 2021 provided the opportunity to streamline. Tim Mansour, technology initiatives manager at Mediality Racing, explains, "We decided to move forward with a greenfield approach to redesign our data platform to be cloud native, leaving the past behind and deploying modern technologies to boost workflow efficiency."

Mediality Racing now uses AWS Amplify as a user-friendly development framework, AWS Lambda to drive event-driven automation, and Amazon DocumentDB as a fully managed database service. The company can offer customers an API for faster data delivery and consumption, freeing employees from the many file management tasks that once filled their workdays. By modernizing its data platform on the AWS Cloud, Mediality can offer customers a flexible API that facilitates faster retrieval of time-sensitive racing data. "The faster our customers can get their products to market—products that rely on our data—the more likely they are to capture the punter's dollar," McLean explains.

Mediality Racing plans to release its public API in 2023, and the company anticipates the move will open the door to a whole new set of use cases for its customers, including bespoke racing app development. "Having a public API transforms the way we can deliver our product and ultimately the way customers consume our data. The enhanced platform will enrich our existing customer relationships and provide a future-proofed foundation for new business opportunities," McLean concludes.
The company had been delivering racing data via large XML files for many years. When Mediality was spun off from AAP, the business and its subsidiaries such as Mediality Racing inherited legacy data center and application architecture, with Windows-based workloads that were initially built nearly 40 years ago. Mediality recognized the need to modernize but lacked the investment capital to move to an open-source architecture on the cloud.

AWS Amplify is a set of tools and services that can be used together or on their own to help front-end web and mobile developers build scalable full-stack applications powered by AWS.

Outcome | Eliminating Technical Debt with a Flexible API Solution

With the API, Mediality expects to see even greater efficiencies in file transfer timelines. Currently, employees take 7–8 minutes to review updated racing files and validate the data before sending updates to customers. Luke Donnelley, operations manager at Mediality Racing, says, "We're expecting to see a significant uptick—up to 5 minutes—in the speed that we can deliver data. Five minutes is very significant in the corporate online book-making industry in Australia, which has become ultra-competitive. It's a race for information."

Mediality has also boosted resilience and future-proofed its operation through the migration by eliminating the technical debt associated with running legacy on-premises applications. Mansour elaborates, "We have very loyal staff that have been with us for 20-plus years and knew how to run our on-premises SQL database well. But that came with a significant business continuity risk, as that knowledge resided with just a few individuals.

To learn more, visit aws.amazon.com/solutions/migration.
People just aren't learning those types of legacy workflows and programming languages like COBOL anymore."

Specifically, Mediality Racing wanted to shift from bespoke Windows applications to web interfaces. Its primary database, built on Microsoft SQL Server, stores horse racing data going back to the 1980s and is the core of the business. Mediality Racing supplies Australia's major newspapers with information for form guides and has a long-standing reputation for data accuracy. Ensuring the integrity of its data during the planned migration was critical.

Cevo quickly began helping Mediality Racing develop cloud-native data workflows, setting up an AWS Landing Zone and using AWS Amplify as a user-friendly development framework. Mansour says, "AWS Amplify has been incredibly useful because it allows us to deploy very quickly and easily, pushing code changes to new environments in about 10 minutes." This faster deployment directly accelerates Mediality's development process by cutting testing time in half, Mansour explains. AWS Amplify also detects when parts of the code are broken and prevents deployment in such cases, thwarting potential errors in racing data due to breaks in code.

With the implementation of Amazon DocumentDB, Mediality has a lower total cost of ownership with a fully managed database that eliminates undifferentiated management tasks and licensing fees.
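The move from SQL Server to Amazon DocumentDB implies reshaping row-oriented data into self-contained documents. Mediality's actual schema is not public, so the sketch below uses an invented racing schema purely to illustrate the pattern: a one-to-many join (race plus runners) collapses into a single document that a MongoDB-compatible client such as pymongo could insert into DocumentDB.

```python
# Illustrative only: folding relational rows into a document-model record.
# All field names are hypothetical; no database connection is made here.

def rows_to_race_document(race_row, runner_rows):
    """Fold a race row and its child runner rows (a classic one-to-many
    join in SQL Server) into one self-contained document."""
    return {
        "_id": race_row["race_id"],
        "track": race_row["track"],
        "start_time": race_row["start_time"],
        # Child rows become an embedded array instead of a separate table,
        # so a single read returns the whole race.
        "runners": [
            {"number": r["number"], "horse": r["horse"], "barrier": r["barrier"]}
            for r in runner_rows
        ],
    }

race = {"race_id": "R2023-0412-07", "track": "Flemington", "start_time": "14:35"}
runners = [
    {"race_id": "R2023-0412-07", "number": 1, "horse": "Example Star", "barrier": 4},
    {"race_id": "R2023-0412-07", "number": 2, "horse": "Sample Lad", "barrier": 9},
]
doc = rows_to_race_document(race, runners)
```

Embedding the child rows trades join flexibility for read simplicity, which suits a workload like form guides where a race is always consumed whole.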
Mercks Manufacturing Data and Analytics Platform Triples Performance and Reduces Data Costs by 50 on AWS _ Case Study _ AWS.txt
Merck's Manufacturing Data and Analytics Platform Triples Performance and Reduces Data Costs by 50% on AWS

Learn how Merck scaled data storage by migrating its legacy manufacturing data platform to AWS.

About Merck
Merck (known as MSD outside of the United States and Canada) is a global healthcare company that delivers innovative health solutions through its prescription medicines, vaccines, biologic therapies, and animal health products.

Opportunity | Using AWS to Build a Scalable Platform for Manufacturing Data at Merck

Global biopharmaceutical company Merck uses the power of leading-edge science to save and improve lives around the world. To enhance the efficiency of its global manufacturing operations, it needed complete visibility across production lines and sites, along with robust data and analytics capabilities, to identify areas in need of improvement. With Merck's manufacturing data growing in volume and complexity, its legacy data platform was constantly challenged by performance and scalability thresholds. Furthermore, it was increasingly expensive to manage a full-stack platform on premises.

For over 130 years, Merck has developed important medicines and vaccines to prevent and treat diseases in people and animals. In 2017, the company's IT team developed MANTIS, a centralized data and analytics platform, to help store, visualize, and analyze global manufacturing data in an effective, efficient, secure, and reliable manner. The platform was initially built on premises. "MANTIS not only streamlines manufacturing operations and helps achieve our strategic goals but also helps us become a more data-driven organization," says Ram Silai, director in the Digital Manufacturing organization at Merck.

Due to exponential growth and the increasing variety of data, MANTIS constantly hit its performance and scalability limits. With data from over 120 source systems and thousands of users, Merck needed a more scalable and reliable system that provided maximum efficiency and reduced operating costs. "We wanted speed and efficiency to develop applications on a data platform," says Silai. "Plus, because we had a range of technology solutions with multiple vendors, it was cumbersome to move, share, and analyze data."

To overcome this challenge, Merck's IT team used Amazon Web Services (AWS) to build and implement a holistic platform, MANTIS, to bring data and analytics capabilities to the heart of decision-making for manufacturing. The platform unifies data originating from over 120 manufacturing systems and external parties, providing over 3,000 users with a simpler and more cost-effective way to access and analyze data. Using MANTIS, Merck's manufacturing division can achieve its strategic goals and ensure that life-saving medications make it to the right place at the right time with the highest levels of quality.

Solution | Creating a Scalable Data Lake and Warehouse and Saving 50 Percent in Operating Costs

In 2019, Merck migrated MANTIS to the cloud. It chose AWS for the flexibility of its services, the ability to run programs at global scale, and the combination of low-cost storage with high-speed data processing capabilities. Moreover, AWS has been an important component of Merck's enterprise cloud journey. "AWS provides key enterprise services for Merck and supports our cloud-first strategy at every touchpoint across the organization," says Silai. "We engage with the AWS team on a constant basis so we can align our road map, improve our capabilities, and become more efficient for our users and businesses."

MANTIS unifies data across business units and makes it ready for analysis and decision-making to unlock business value. To share raw and aggregate data, Merck paired Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve any amount of data from anywhere, with Amazon Redshift, which uses SQL to analyze structured and semi-structured data and model large datasets. This makes it simple for thousands of engineers, supply chain managers, and process engineers to create and consume data models. "MANTIS is using AWS services to develop reusable solutions using both 'lake house' and 'data warehouse' architectures to offer the flexibility and agility required by users," says Silai. Using Amazon S3, Merck unifies data silos and increases data availability at low cost while providing the highest levels of security and reliability, enhancing data availability for users while lowering storage costs and time to market. The team further complements Amazon S3 with AWS Glue, a serverless data integration service that simplifies discovering, preparing, migrating, and integrating data from multiple sources for analytics.

Using AWS tools like Amazon CloudWatch, which collects and visualizes near-real-time logs and metrics, Merck monitors its collection and use of data, notes problems as they arise, and maintains compliance. The platform has a single access management governance framework based on different data domains. In addition to Amazon CloudWatch, Merck uses AWS CloudTrail, which monitors and records account activity across AWS infrastructure, to gain more control over storage, analysis, and remediation. "AWS CloudTrail is very important to our approach because we want to have a clear audit trail to meet Good Manufacturing Practice requirements," says Silai.

This architecture simplifies and democratizes data usage for everyone at Merck through powerful data visualizations and user-friendly applications built on top of the AWS-powered data lake. Using this platform, stakeholders can get a holistic and near-real-time view of Merck's manufacturing operations and supply chain. They can also run advanced analytics to optimize manufacturing processes, reduce operational risks, and drive meaningful outcomes. "The solution helps teams spend less time searching and moving data and more time using it for meaningful patient and business outcomes," says Silai.

Since implementing AWS solutions, operating costs have fallen 50 percent and performance has improved threefold compared to the legacy on-premises solution. Merck has also seen a significant decrease in the time to ingest data for developing solutions, an improved compliance posture, and increased supply chain visibility. MANTIS stores roughly 400 TB of data, adding about 1 TB each day. "What's also significant is that the new platform has made it simpler to develop and implement solutions that are required to follow Good Manufacturing Practices requirements," says Silai.

Outcome | Pursuing Data Innovation Using AWS Services

By using AWS services, Merck's Digital Manufacturing organization is effectively overcoming the challenges of implementing and sustaining a huge, complex data platform. The company provides data analytics solutions and capabilities to thousands of users across the globe. Looking ahead, Merck will focus on scaling the platform for low-latency data availability, virtualization, and no-code self-service.

With MANTIS and other data platforms within Merck adopting similar AWS-based architectures, the company can better unify and share data across business units and divisions. "We will be able to share data between research, manufacturing, commercial, and global support functions seamlessly," says Silai. "Using AWS capabilities, we're truly bringing data to the heart of decision-making at Merck. And it's just the beginning of what is possible."

AWS Services Used
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.
AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
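The case study does not show MANTIS internals, but the "lake house" pattern it names conventionally relies on Hive-style partitioned object keys in Amazon S3, which AWS Glue crawlers can catalog as partition columns and Amazon Redshift (via Spectrum) can prune at query time. The sketch below is an assumption-laden illustration: the prefix, partition columns, and bucket name are invented, not Merck's.

```python
from datetime import date

def partitioned_key(prefix: str, site: str, day: date, filename: str) -> str:
    """Build an S3 object key whose path segments double as Hive-style
    partition columns (site, year, month, day) that Glue can discover."""
    return (f"{prefix}/site={site}"
            f"/year={day.year:04d}/month={day.month:02d}/day={day.day:02d}"
            f"/{filename}")

key = partitioned_key("manufacturing/batch-records", "plant-07",
                      date(2023, 4, 12), "batches.parquet")
# A real pipeline would then upload the object with boto3, for example:
#   boto3.client("s3").put_object(Bucket="example-data-lake", Key=key, Body=...)
```

Laying out daily drops this way lets a warehouse query restrict its scan to one site and date range, which matters at the roughly 1 TB-per-day ingest rate the article cites.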
Midtrans Case Study _ Amazon Web Services.txt
Midtrans (GoTo Financial) Collaborates with AWS to Drive Digital Transformation for SMBs through Its Pojokusaha.com Online Portal

Many small and medium businesses (SMBs) that want to move to the cloud never get off the ground. Midtrans, an Indonesia-based epayment gateway and subsidiary of digital payment technology organization GoTo Financial (formerly GoJek Group), wanted to solve this problem by helping SMBs gain easier access to cloud technology and services. "The cloud has accelerated innovation and led to digital transformation for many enterprises. But many Indonesian SMBs face business challenges when trying to digitize their infrastructures," says Eizel Mauldy Muhammad, project manager for Pojok Usaha. "For example, traditional merchants, such as small shop owners with no website or social media presence, sometimes lack the technical resources to support digital transformation." Midtrans wanted to make it easier for these companies to use the cloud to change the way they sell products and services.

The portal contains over 30 cloud-based products from both AWS partners and Midtrans customers. SMBs can purchase the products through the Midtrans payment gateway. These offerings include web development services, chat bots, and point-of-sale applications. The portal is designed to assist two types of SMBs: those with no digital footprint, and those with digital services seeking to expand their customer base. By using the portal, businesses can connect with sales teams for applications and services, facilitating a simpler approach to onboarding.

To learn more, visit aws.amazon.com/campaigns/small-medium-businesses.
Some SMBs let concerns around maintenance, security, and costs prevent them from fully adopting cloud services.

Driving Digital Transformation for Indonesian Merchants

Aside from merchants, AWS Partners are also using the portal to connect seamlessly with merchants. One partner is Jurnal by Mekari, which provides a cloud-based accounting application for its customers through the portal, with the goal of increasing technology adoption and sophistication among Indonesian SMBs. "Through the Pojok Usaha portal on AWS, we are giving partners a way to provide their digital products to merchants in one place to help them grow their business faster," says Eizel.

The Pojok Usaha portal makes it simpler for SMBs across Indonesia to quickly find and procure cloud services and solutions from AWS and its partners. "By working with AWS to create this portal, we're serving traditional businesses lacking the technical resources to tap into the digital world," says Eizel. "By accessing the portal, they can simply click and sign up for a new application or service and begin using cloud solutions without building and maintaining their own software."

About Midtrans
Midtrans, based in Indonesia, provides complete digital payment solutions for enterprises, startups, and small and medium businesses. More than 500,000 businesses use the Midtrans payment gateway for electronic payments, and the platform processes over 20 million transactions every month.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Creating a Digital Portal in 7 Months
To achieve its goals, Midtrans engaged with Amazon Web Services (AWS) to build a solution that extends its payment gateway to SMBs across Indonesia. The two organizations conducted joint planning and "working backwards" sessions, a product development approach in which companies start from the ideal customer end state and work backwards to align business priorities. Following these sessions, the two companies agreed to collaborate on a new digital portal for SMBs. AWS supported Midtrans with financial support and technical expertise from a local AWS partner. Eizel adds, "We collaborated closely to create the portal, from strategizing to implementation." This joint effort resulted in Midtrans completing the project, from ideation to design to launch, within seven months.

Midtrans and AWS will continue to collaborate to offer additional applications and services through the portal, including a broader suite of AWS-native services alongside seamless payment capabilities via GoPay, a digital wallet for online payments. Working together, the two companies will also grow the portal via the new AWS Asia Pacific (Jakarta) Region. With three Availability Zones, AWS customers and partners have a wider ability to process and store data locally. "The AWS Asia Pacific (Jakarta) Region will help us reach more SMBs in Indonesia," says Eizel. "We hope to attract more than 10,000 businesses through the portal and help them create new efficiencies and drive innovation in the cloud."
The outcome was Pojok Usaha ("the Business Corner" in Bahasa Indonesia), an online portal that acts as a centralized hub for SMBs. The portal runs on Amazon Elastic Compute Cloud (Amazon EC2) instances and relies on additional services including Amazon Relational Database Service (Amazon RDS) and Amazon Simple Storage Service (Amazon S3) for data storage.

Through the Pojok Usaha portal, Midtrans and AWS are reaching their goal of helping SMBs in Indonesia digitize their businesses and ultimately accelerate their cloud journeys. One merchant taking advantage of the portal to drive digital transformation is Mutia Karya, a food supplier that developed a platform, Mikrolet, to connect food stall operators with suppliers. Another company, Livina Global Teknologi, is using Pojok Usaha to sell an application called Mostore, which allows food and beverage companies to promote their products digitally.

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Migrating Large-Scale SAP Workloads Seamlessly to AWS with Sony _ Sony Case Study _ AWS.txt
Additionally, Sony GISC-IN used Amazon Elastic File System (Amazon EFS), a serverless, fully elastic file storage service, for its main SAP directories in a high-availability cluster. AWS Enterprise Support worked with Sony GISC-IN to optimize its Amazon EFS usage. By configuring Amazon EFS throughput, incorporating lifecycle policies to migrate infrequently accessed data to an infrequent-access tier, and optimizing mounts to use the recommended parameters for best performance, Sony reduced its Amazon EFS costs by 40 percent.

Key highlights of SAP West Platform included that the platform is a multitenant environment serving the following Sony business units: Sony Europe, Sony North America, Sony Interactive Entertainment Europe, Sony Corporation of America, Sony Global Treasury Services PLC, Sony Russia, Sony Ukraine, Sony Overseas AG, Sony Turkey, Professional Services Middle East and Africa (Sony Dubai), Sony Semiconductor Solutions, and Hawk-Eye Innovations.

Sony migrated SAP West Platform to the cloud to address multiple drivers, including return on investment, cost reduction, technology refresh, service improvement, agility, and preparation for its migration to SAP S/4HANA on AWS, which helps companies achieve faster time to value with AWS on-demand infrastructure.

As it moves forward, Sony plans to develop advanced solutions to help business users work faster and smarter. These solutions include dynamic pricing strategies, self-management applications, and ML models. The possibilities are virtually endless, and Sony is excited to explore the potential of its new AWS infrastructure.

AWS Enterprise Support provides you with concierge-like service whose main focus is helping you achieve your outcomes and find success in the cloud.
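The EFS lifecycle tuning described above maps to the EFS PutLifecycleConfiguration API. As a hedged sketch: the file system ID below is a placeholder and the 30-day threshold is illustrative rather than Sony's actual setting; the function only builds the request parameters, keeping the example runnable without AWS credentials.

```python
# Sketch of an EFS lifecycle configuration of the kind the case study
# describes: untouched files move to the cheaper Infrequent Access (IA)
# tier. Thresholds and the file system ID are placeholders.

def efs_lifecycle_request(file_system_id: str) -> dict:
    """Build the parameters for the EFS PutLifecycleConfiguration call."""
    return {
        "FileSystemId": file_system_id,
        "LifecyclePolicies": [
            # Files not accessed for 30 days transition to the IA tier.
            {"TransitionToIA": "AFTER_30_DAYS"},
            # Files move back to standard storage on their first access.
            {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
        ],
    }

params = efs_lifecycle_request("fs-0123456789abcdef0")
# With credentials configured, this would be applied as:
#   boto3.client("efs").put_lifecycle_configuration(**params)
```

Pairing the IA transition with a transition back on first access keeps hot SAP directories on standard storage while cold data accrues the lower IA rate.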
Migrating Large-Scale SAP Workloads Seamlessly to AWS with Sony

About Sony Electronics
Sony Electronics is a multinational conglomerate corporation headquartered in Tokyo, Japan.

Across Sony, the SAP West Platform migration has set standards for building resilient workloads, migrating large-scale SAP Business Warehouse systems, managing information security, and achieving redundant network connectivity through integration with network hub and active directory services. For other business units, it serves as a blueprint for implementing a successful migration project while maintaining workload security and resilience, achieving redundant network connectivity, and avoiding cost overruns.

Sony GISC-IN also developed a serverless solution to automate SAP refresh, removing the need for manual refresh processes. Overall, these efforts resulted in improved backup and refresh capabilities with reduced costs for business units running SAP workloads on AWS.

Efficiency and innovation are part of Sony's DNA. As a longtime AWS customer, it knows the advantages of AWS services, including cost reduction, better performance, and access to cutting-edge capabilities like machine learning (ML). With an eye on the future, Sony chose to migrate SAP West Platform to AWS and embrace the cloud's operational benefits.

When the new infrastructure was ready, Sony migrated SAP applications from on-premises data centers to AWS. The teams ran the migration in the US East (Northern Virginia) Region and distributed traffic across two Availability Zones. With this approach, if one Availability Zone were to fail, the other would take over, minimizing disruption to the business. As a result, Sony completed the migration while maintaining high resilience and availability.
"Sony is already an AWS enterprise customer and has many workloads on AWS and many other enterprise apps running in AWS," says Umesh Kesavan, associate director at Sony. "So, it was easy to choose AWS rather than migrate to another cloud provider."

Globally, over 6,000 Sony users rely on SAP West Platform for business-critical activities, from demand planning to warehouse management. When Sony embarked on a journey to improve agility, cost efficiency, and technological modernization, SAP West Platform became a key focus. The scope of the project included: migrating SAP application infrastructure from a traditional on-premises data center to AWS; modernizing SAP Business Warehouse by upgrading to a new version and replacing Business Intelligence Accelerator with an SAP HANA database; and modernizing the legacy IBM mainframe to a Linux x86 model on AWS while rearchitecting on-premises solutions, such as SAP Master Data Management, IBM InfoPrint, and Business Warehouse Accelerator, for the cloud and SAP HANA. The scope also included demonstrating the ability to continue business transformation projects without delays or additional costs while adhering to project timelines and business service-level agreements; avoiding functional changes that would require extensive testing, to expedite user acceptance testing; improving service, agility, and sustainability for infrastructure services; and achieving service and operational improvements, increasing service scalability, and implementing reliable high-availability disaster recovery.

Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).
In July 2021, Sony’s SAP cloud migration successfully went live, with very smooth support during the hypercare period. The project delivered several noteworthy accomplishments, including the promised cost savings, reduced downtime, and other benefits related to agility, transparency, and modernization.

Opportunity | Using AWS Services to Modernize SAP West Platform for Sony

As one of the world’s largest companies, Sony Electronics (Sony) oversees a diverse range of business units with thousands of employees. Given its intricate nature, the company’s technology estate is equally complex.

Sony worked with AWS Enterprise Support, which provides 24/7 technical support from high-quality engineers, tools, and technology, to achieve its objectives and carry out the project successfully. The close collaboration between Sony and AWS Enterprise Support team members, as well as smooth communication and coordination, resulted in a seamless process. Throughout the migration, Sony’s technical account manager provided architectural and operational guidance to help the company achieve the greatest possible value from its AWS migration. The benefits delivered were significant, including cost reductions and increased agility, transparency, and modernization.

The service supports 6,000 corporate users for Sony across multiple regions, which comprise SAP West. Users rely on many SAP application products, including SAP Enterprise Resource Planning Central Component, SAP Business Warehouse, and SAP Supplier Relationship Management.
Although the core applications support all tenants, the noncore applications serve specific tenants or regions. The service manages and stores more than 100 TB of application data.

Outcome | Improving Performance by 40% While Reducing Data Footprint by 30%

The migration delivered significant cost savings, which could be reallocated to other areas of the business. Over 200 compute instances supporting Sony’s SAP landscape were migrated to the cloud, and the company reduced its data footprint by 30 percent. The project also resulted in a 40 percent runtime performance improvement across all applications. Additionally, the migration was fully managed by Sony’s Global Information Security and Communication (Sony GISC-IN) teams with minimal business intervention.

To keep the migration under budget, Sony participated in the AWS Enterprise Discount Program and the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud-migration program. The credits provided by these programs helped mitigate expenses. Sony also collaborated with the AWS Enterprise Support team to choose the right version of Savings Plans, a flexible pricing model that can help companies reduce their bills by up to 72 percent compared to On-Demand prices.
Solution | Successfully Migrating Business Users across Regions to the Cloud

With 6,000 users across 200 locations in 50 countries, the migration was no small feat. The project involved migrating 15 SAP applications to AWS, decommissioning 3 applications to upgrade the SAP Business Warehouse cloud, and modernizing from SAP NetWeaver Business Warehouse Accelerator to SAP S/4HANA on AWS. It also needed to be completed under a tight budget, within a short timeframe, and with minimal disruption to business operations.

The teams began by building new AWS infrastructure for both SAP and non-SAP workloads. Then, they participated in an AWS Well-Architected review, which assists cloud architects in building secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads. By taking part in these sessions, Sony made sure that its infrastructure met best practices for architecture, scalability, resiliency, and security.

Sony GISC-IN adopted AWS Backint Agent, an SAP-certified backup and restore solution for SAP HANA workloads, to back up its database to Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. Using this solution, the team backed up 4 TB of data in less than 1 hour.

Postmigration, Sony GISC-IN worked with the AWS Enterprise Support team to optimize Amazon EBS volumes by rightsizing and converting io1 volumes to gp3 based on volume activity. It also migrated more volumes from gp2 to gp3. These optimization efforts resulted in an 84 percent reduction in Amazon EBS storage expenses.
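An io1/gp2-to-gp3 conversion of this kind is typically scripted against the EC2 `ModifyVolume` API. The sketch below is illustrative only; the IOPS and throughput defaults are the gp3 baselines, not Sony's actual settings. It builds the request as a plain dict so it can be reviewed before an injected boto3 client applies it:

```python
def gp3_request(volume_id, iops=3000, throughput_mibps=125):
    """Build the parameters for an EC2 ModifyVolume call that converts
    a gp2 or io1 volume to gp3 with explicit IOPS and throughput."""
    return {
        "VolumeId": volume_id,
        "VolumeType": "gp3",
        "Iops": iops,                    # gp3 baseline: 3,000 IOPS
        "Throughput": throughput_mibps,  # gp3 baseline: 125 MiB/s
    }


def convert_volume(ec2_client, volume_id, iops=3000, throughput_mibps=125):
    """Apply the modification. The boto3 EC2 client is injected so this
    module imports cleanly without AWS credentials."""
    return ec2_client.modify_volume(**gp3_request(volume_id, iops, throughput_mibps))
```

Keeping the request builder pure makes it easy to batch over volumes returned by `describe_volumes` and to review the planned changes before applying them.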
After the migration, Sony collaborated with AWS Enterprise Support to further optimize its usage of AWS services. For example, Sony GISC-IN initially used gp2 volumes on Amazon Elastic Block Store (Amazon EBS), a scalable, high-performance block-storage service, as its primary storage during the migration. Later, it switched to gp3 volumes for the ability to provision input/output operations per second and throughput independently without increasing storage size, resulting in up to 20 percent lower costs per gigabyte compared with gp2 volumes.

Sony also took advantage of AWS Infrastructure Event Management (AWS IEM), a program that offers architecture and scaling guidance and operational support for planned events, such as migrations. By participating in AWS IEM, Sony quickly detected and responded to events that had the potential to disrupt its applications. This helped improve operational efficiency and further minimize downtime.

In April 2020, Sony had begun to migrate SAP West Platform to Amazon Web Services (AWS), all within an aggressive timeline and budget. The migration showcased the strength of Sony GISC-IN, demonstrating its ability to deliver complex and time-sensitive projects with precision and excellence. Managing such a large-scale migration project while minimizing disruption to business operations is a testament to Sony GISC-IN’s capabilities. In fact, Sony’s chief information officer awarded the Sony GISC-IN team a gold medal in recognition of this project’s success.

With the help of AWS Enterprise Support, Sony GISC-IN also optimized its Amazon S3 usage and reduced costs by 20 percent by implementing lifecycle policies, setting up Amazon S3 tiering, and adopting Amazon S3 Glacier Instant Retrieval, the lowest-cost archive storage with milliseconds retrieval for rarely accessed data.
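Lifecycle policies like those described above are expressed as bucket-level rules. The following sketch is a hedged illustration (the rule name and transition ages are hypothetical, not Sony's actual policy) of a configuration that tiers objects into S3 Intelligent-Tiering and then into S3 Glacier Instant Retrieval:

```python
def lifecycle_configuration(days_to_it=30, days_to_glacier_ir=180):
    """Build an S3 lifecycle configuration that transitions objects to
    Intelligent-Tiering, then to Glacier Instant Retrieval."""
    return {
        "Rules": [
            {
                "ID": "tier-then-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": days_to_it, "StorageClass": "INTELLIGENT_TIERING"},
                    {"Days": days_to_glacier_ir, "StorageClass": "GLACIER_IR"},
                ],
            }
        ]
    }


def apply_lifecycle(s3_client, bucket):
    """Apply the policy. The boto3 S3 client is injected so no AWS
    credentials are needed just to import or inspect this module."""
    return s3_client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=lifecycle_configuration(),
    )
```

The transition ages are the tunable knobs: objects still read frequently after `days_to_it` days incur no retrieval penalty in Intelligent-Tiering, while the later Glacier Instant Retrieval transition captures rarely accessed data at archive pricing.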
Mobileye Cuts Costs Using Amazon EC2 _ Case Study _ AWS.txt
Opportunity | Determining the Need for Increased Compute Power at a Reduced Cost

The REM team updates the map in near real time: accessing, changing, rebuilding, and stitching together more than 2 million kilometers of drivable paths with detail down to the level of a single stop sign. Each map in development is saved to Amazon Aurora, which is designed for high performance and availability at a global scale with full MySQL and PostgreSQL compatibility. “We chose Aurora because it gave us the ability to work at a large scale without having to deal with a lot of maintenance or trying to optimize it ourselves,” says Reisman. “We get excellent performance out of the box.”

Mobileye is now able to use a single, highly scalable, self-managed Apache Spark cluster to map the entirety of Europe, using crowdsourced RSD that is tailored to the functionality of autonomous vehicles. Crowdsourced data is stored in Amazon Simple Storage Service (Amazon S3), an object storage service offering high scalability, data availability, security, and performance. “Our DevOps team worked alongside the AWS team to figure out how to store huge datasets on Amazon S3 in the most cost-effective way, giving developers access to an almost infinite number of scenarios while not breaking the bank,” says Reisman. The REM team has also begun using the Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) storage class, which delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead. “Within Mobileye, Amazon S3 Intelligent-Tiering has been used for quite some time and has shown significant cost reductions,” says Reisman.
“From the deep analysis we did alongside the AWS team, it looks like REM will be substantially reducing costs by using this as well.”

Mobileye Optimizes Ability to Build Crowdsourced HD Maps and Cut Costs Using Amazon EC2 Spot Instances

2022

Solution | Optimizing Costs for Compute and Storage

Working alongside AWS subject matter experts, the REM team planned a load test to address the scalability issue of a single cluster. The load test would attempt to map significant parts of Germany using the company’s actual operational code and real RSD information fed into a single cluster of Apache Spark, an open-source, distributed processing system used for big data workloads. The team started small, tweaking the parameters and improving any bottlenecks. The load test involved several stages, gradually increasing the compute until it peaked at 1,300 parallel cells running on 250,000 vCPUs on a single Apache Spark cluster without issue, a significant improvement over REM’s previous maximum capacity of 60,000 vCPUs. Mobileye could map the entire country of Germany in just 2–4 days running on 200,000 vCPUs. “Using AWS, the same map was considerably cheaper to create than before, and it took less than half the time to complete the same area,” says Pini Reisman, director of REM cloud application at Mobileye. “This was achieved by trying to push the envelope and figuring out what was limiting us from running this at the scale that we wanted in one Apache Spark cluster.”

In 2022, the company plans to map the entirety of Europe, which will require the system to scale up to 200,000 concurrent vCPUs for 20 days, or 96 million vCPU hours in total. “It’s not that our architecture has changed,” says Reisman.
“It’s that we managed to break the boundaries that we had before.”

Outcome | Expanding REM Functionality Further

As a leading supplier of technologies for driving systems, Mobileye needed a way to create high-definition (HD) maps that provided a full set of features for driving-assist technologies and self-driving cars at an affordable cost. The creation of HD driving maps for an entire continent requires enormous compute power that must simultaneously collect data from vehicles and continuously update existing maps, a process that can quickly become unwieldy with soaring costs.

Mobileye’s Road Experience Management (REM) group, which is responsible for the creation of its HD maps, addressed these challenges by developing a complex microservices architecture using Amazon Web Services (AWS). The solution is powered by Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. Using a suite of managed services from AWS, Mobileye could simplify its infrastructure, reduce operational overhead, and scale to more than 250,000 virtual CPUs (vCPUs) running concurrently at a fraction of the cost.

Founded in 1999, Mobileye develops technology for advanced driver assistance and autonomous driving systems. The company collects data for its mapping by crowdsourcing: vehicles navigating the roads send back road segment data (RSD) that the system ingests and processes. Mobileye extracts only the valuable information from the RSD, a process that minimizes the size and processing cost of the data. By early 2019, the REM team started receiving millions of RSD files daily, which was too much data to run on one compute cluster. As a result, the team had to split the continent of Europe into four disjointed areas and scale, debug, and monitor each one.
The overhead of running four clusters contributed to a significant operational challenge that added to the cost and required the team to stitch the clusters together to achieve full functionality.

Mobileye develops technology for advanced driver assistance and autonomous driving systems. The company was founded in Israel in 1999 and is a leading provider of both camera-based driving-assist systems and solutions for self-driving systems.

To manage the cost of running hundreds of thousands of vCPUs, the company used Amazon EC2 Spot Instances, which let companies take advantage of unused Amazon EC2 capacity and receive up to a 90 percent discount compared with On-Demand prices. Because AWS can reclaim Spot Instances when it needs the capacity in exchange for those steep discounts, Mobileye runs its fleet of Spot Instances across many Availability Zones, one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Additionally, the fleet consists of many Amazon EC2 instance types to diversify traffic and minimize interruptions, with priority given to the largest machines within a single Availability Zone. The solution uses primarily R-instance types for optimal CPU-to-memory rationing and cost. It prioritizes 24xlarge instances within the R-instance family before using 16xlarge, then 8xlarge, and so forth, before opening a new Availability Zone. “Using Spot Instances, we have a very big discount in our enterprise account,” says Ofer Eliassaf, Mobileye’s cloud infrastructure group lead.
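A diversified, prioritized Spot fleet of this kind maps naturally onto the EC2 `CreateFleet` API with the capacity-optimized-prioritized allocation strategy, where a lower `Priority` value marks a more preferred instance type. The sketch below is a hedged illustration, not Mobileye's actual configuration: the launch template ID, the specific r5 sizes, and the target capacity are assumptions, and per-Availability-Zone subnets would be supplied as additional overrides.

```python
def spot_fleet_request(launch_template_id, target_capacity):
    """Build an EC2 CreateFleet request that prefers the largest
    R-family Spot instances first (lower Priority = more preferred)."""
    overrides = [
        {"InstanceType": itype, "Priority": float(rank)}
        for rank, itype in enumerate(
            ["r5.24xlarge", "r5.16xlarge", "r5.8xlarge", "r5.4xlarge"]
        )
    ]
    return {
        "SpotOptions": {"AllocationStrategy": "capacity-optimized-prioritized"},
        "LaunchTemplateConfigs": [
            {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateId": launch_template_id,
                    "Version": "$Latest",
                },
                "Overrides": overrides,
            }
        ],
        "TargetCapacitySpecification": {
            "TotalTargetCapacity": target_capacity,
            "DefaultTargetCapacityType": "spot",
        },
        "Type": "maintain",  # EC2 replaces reclaimed Spot capacity
    }


def create_fleet(ec2_client, launch_template_id, target_capacity):
    """Submit the fleet request via an injected boto3 EC2 client."""
    return ec2_client.create_fleet(**spot_fleet_request(launch_template_id, target_capacity))
```

With `Type: maintain`, EC2 honors the priority order on a best-effort basis while favoring the Spot pools with the deepest capacity, which mirrors the "largest machines first, then open a new Availability Zone" behavior described above.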
Mobileye Improves Deep Learning Training Performance and Reduces Costs Using Amazon EC2 DL1 Instances _ Mobileye Case Study _ AWS.txt
Mobileye Improves Deep Learning Training Performance and Reduces Costs Using Amazon EC2 DL1 Instances

Opportunity | Using Amazon EC2 DL1 Instances to Cost-Effectively Train DL Models that Improve Driver Safety

As they sought to solve tasks in detection, tracking, and segmentation, Mobileye teams had been working independently to train the computationally heavy DL models that were deployed on EyeQ. In 2021, Mobileye began a project to improve performance while lowering the cost of DL by consolidating models, what the company calls “squeezing.” This involved creating a common backbone so that all the tasks could share compute resources. To train these DL models while keeping price down, the company needed cloud-based compute powered by accelerators that could run the largest number of samples per dollar. It began comparing instances of Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload.

Headquartered in Israel, Mobileye develops self-driving technology and advanced driver-assistance systems using cameras, computer chips, and software.
More than 50 original equipment manufacturers have adopted Mobileye’s solutions in more than 800 vehicle models, running on a proprietary driver-assistance chip called EyeQ. The company has sold more than 100 million EyeQ chips, which are designed to deploy and run DL models in near real time, processing hundreds of images per second to solve many computer vision problems simultaneously. For example, autonomous vehicles use object-detection algorithms to accurately see pedestrians, other vehicles, and traffic signals. Tracking algorithms follow the trajectory of such objects. And segmentation involves the collection and ingestion of individual pixels to feed DL models that attempt to re-create real-time road conditions.

Together, the AWS, Habana, and Mobileye teams tested Amazon EC2 DL1 Instances for several use cases. Mobileye was able to use Amazon EC2 DL1 Instances to implement distributed training, where one DL training workload was distributed across several instances. The company used Amazon EC2 DL1 Instances within its existing architecture on Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service. “We built the automatic scaling groups, created the virtual private cloud, and facilitated communication among different instances with support from Amazon EKS solution architects,” Shitrit says.

While Mobileye off-loads DL to Amazon EC2 DL1 Instances, it meets the compute needs of its Amazon EKS workflows using Amazon EC2 R5 Instances, which accelerate performance for workloads that process large datasets in memory. In short, the workflow determines the instance configuration. Using a heterogeneous compute structure, Mobileye speeds its development cycles and improves time to market. It runs more than 250 production workloads daily, scaling to more than 3,500 nodes on Amazon EKS. “By setting up our deep learning training batch workflows using Amazon EC2 DL1 Instances, we’re training more and spending less,” says Shitrit.
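A heterogeneous cluster like this is commonly expressed as separate EKS managed node groups, one per workload class, with Kubernetes labels letting training jobs target the Gaudi-based DL1 nodes and pipeline jobs target the R5 nodes. The sketch below is illustrative only: the cluster name, IAM role, subnets, and labels are hypothetical, not Mobileye's setup.

```python
def nodegroup_request(cluster, name, instance_types, node_role_arn, subnets,
                      labels=None, capacity_type="ON_DEMAND",
                      min_size=0, max_size=100):
    """Build an EKS CreateNodegroup request. The labels allow pods to
    target this node group via a Kubernetes nodeSelector."""
    return {
        "clusterName": cluster,
        "nodegroupName": name,
        "scalingConfig": {"minSize": min_size, "maxSize": max_size,
                          "desiredSize": min_size},
        "instanceTypes": instance_types,
        "subnets": subnets,
        "nodeRole": node_role_arn,
        "labels": labels or {},
        "capacityType": capacity_type,
    }


# One node group per workload class (all names/ARNs are hypothetical):
# Gaudi-based DL1 instances for training, memory-optimized R5 on Spot
# for the general pipeline.
ROLE = "arn:aws:iam::123456789012:role/eksNodeRole"
SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222"]

training = nodegroup_request(
    "vision-cluster", "dl1-training", ["dl1.24xlarge"], ROLE, SUBNETS,
    labels={"workload": "training"},
)
general = nodegroup_request(
    "vision-cluster", "r5-general", ["r5.4xlarge", "r5.8xlarge"], ROLE, SUBNETS,
    labels={"workload": "pipeline"}, capacity_type="SPOT",
)
# To create: boto3.client("eks").create_nodegroup(**training)
```

Scaling each group from `minSize=0` lets a cluster autoscaler spin up accelerator nodes only while training batches are queued, which is one way to keep the cost of the expensive instances proportional to use.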
On the research side, several Mobileye developers had been working with Habana Labs, a company that is part of Intel, an AWS Partner. Habana Labs had developed a Gaudi accelerator designed to optimize deep neural networks and power purpose-built instances for DL. After the Mobileye research teams’ success, other Mobileye teams began testing Amazon EC2 DL1 Instances, which deliver low cost-to-train DL models for natural language processing, object detection, and image-recognition use cases. Mobileye collaborated with teams from Habana Labs and AWS so that its custom models could be trained on Amazon EC2 DL1 Instances. “With efficient training, we can run large numbers of experiments, find the best model, and improve our accuracy,” says Ohad Shitrit, Mobileye’s senior director of AI engineering and algorithms. “Then our product will be better, which means that the driver will be safer.”

The solution also works seamlessly alongside Argo Workflows, the open-source container-native workflow engine the company uses to orchestrate parallel jobs on Kubernetes and observe model deployment and release. Mobileye benefited from the simple integration of solutions and overall ease of use. “You need very few changes in the code to run your network using Amazon EC2 DL1 Instances,” Shitrit says. “It’s straightforward.
A talented developer can do it in a few hours.”

For example, one use case took Mobileye just 2 weeks to scale training workloads across eight Amazon EC2 DL1 Instances, and the company saw near-linear improvement as the number of instances increased. For model training, the company improved price performance by as much as 40 percent on Amazon EC2 DL1 Instances compared to the same number of instances using NVIDIA-based accelerators. To further save money on its DL workflows, Mobileye used Amazon EC2 Spot Instances, which let companies take advantage of unused Amazon EC2 capacity in the cloud at up to a 90 percent discount compared to On-Demand Instances, which Mobileye primarily used for its NVIDIA-based GPUs.

Alongside AWS and Habana teams, Mobileye is continuing to optimize the use of Amazon EC2 DL1 Instances for model training and is starting to deploy them to production, with plans to deliver to its clients soon. The company also plans to adopt Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that customers use to run applications requiring high levels of internode communications at scale on AWS. “Amazon EC2 DL1 is powerful hardware with a relatively low price,” says Shitrit.
“When we train cost effectively, we can deploy better models to mobilize and improve our products.”

Outcome | Improving Products for Customers by Deploying Better Models

Mobileye develops innovative autonomous vehicle technologies and powers its solutions with deep learning (DL) models. The company is constantly optimizing the price performance of its custom computer vision models, which are critical to building autonomous driving solutions that can adapt to ever-changing road conditions. To train these custom computer vision models, Mobileye turned to compute solutions in the cloud from Amazon Web Services (AWS). The company developed a heterogeneous compute cluster that included a novel Gaudi accelerator developed specifically for DL workloads. Mobileye’s solution facilitated more than 250 production workloads daily, delivered 40 percent better price performance, and accelerated the company’s DL development cycle.

About Mobileye

Based in Jerusalem, Mobileye develops autonomous driving technologies and advanced driver-assistance systems using cameras, computer chips, and software. More than 800 vehicle models use its technology, with more than 100 million chips sold.
Mobiuspace delivers up to 40 improved price-performance using Amazon EMR on EKS and Graviton instance _ Mobiuspace Case Study _ AWS.txt
2023

Mobiuspace, a global internet technology company, wanted to optimize its content recommendation algorithm more effectively with big data. Aiming to provide a personalized entertainment experience, Mobiuspace has rolled out a line of products to cater to global users’ need for discovering, exploring, consuming, and creating pan-entertainment content. Mobiuspace has over 200 million monthly active users across over 100 countries and regions, including emerging markets such as Latin America, the Middle East, and North Africa. It processes 100,000 QPS and billions of user behavioral events at peak. Looking to provide better, more localized, and more personalized video streaming services, Mobiuspace decided that by adopting Amazon Web Services (AWS), it could improve content recommendation, shorten model iteration, and optimize its recommendation algorithm.

As the growing business placed increasing demands on its architecture, Mobiuspace underwent a data modernization effort and containerization transformation led by its big data team. Mobiuspace migrated its big data operation from Amazon EMR on EC2 to a fully managed Kubernetes container platform, Amazon Elastic Kubernetes Service (Amazon EKS). With Amazon EMR on EKS, Mobiuspace integrated its big data and front-end applications to enable a microservice-based, containerized, and highly automated system with simpler operations and maintenance (O&M) management. In addition, Amazon EMR on EKS uses containers instead of virtual machines as the smallest resource unit, allowing finer management and better utilization of resources.
Mobiuspace Delivers up to 40% Improved Price-Performance Using Amazon EMR on EKS

About Mobiuspace

Shenzhen Mobiuspace Technology Co., Ltd. (“Mobiuspace”) is a global internet technology company committed to inspiring every corner of the world through technology.

Its expanding services and customer base had also significantly driven up data operation costs. Its front-end servers were processing as many as 100,000 QPS at peak hours and billions of users’ behavioral events. Mobiuspace wanted a cost-effective solution to address its massive data processing needs. It decided to improve the performance and efficiency of its big data operation by using AWS. This would help Mobiuspace keep pace with its rapid growth and boost business development through rapid cost reduction and continuous optimization.

Solution | Reducing Costs and Enhancing Agility

Building on the modern data architecture of AWS, Mobiuspace uses Amazon SageMaker, a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning models quickly, to recommend video content based on users’ interests. In addition, Amazon SageMaker comes with commonly used machine learning algorithms built in and optimized, saving users from spending excessive time on algorithm selection and frameworks.
Using Amazon SageMaker, Mobiuspace effectively shortened the cycles of continuous model iteration and updates to its optimized recommendation algorithm, improving user experience and customer satisfaction.

For better virtual machine scheduling on Amazon EKS, Mobiuspace made full use of AWS best practices: it runs Spot Instances and Amazon EC2 instances powered by AWS Graviton processors to further reduce the virtual machine costs of its pod pools. Amazon EC2 Spot Instances allow users to tap into unused EC2 capacity in the AWS Cloud. Available at up to a 90 percent discount compared to On-Demand prices, Spot Instances are suitable for container and big data workloads, and Amazon EMR on EKS facilitates easy, seamless scheduling of and access to Spot resources. In 2020, Amazon EC2 instances powered by AWS Graviton processors were released, and Mobiuspace’s testing on its containerized Java back-end services shows that Amazon EC2 M6g instances deliver 40 percent better price performance over M5 instances. “With Amazon EMR on EKS and the ARM-based AWS Graviton 2 instances, we improved the overall performance of our big data operations by 30 percent and reduced cost by 20 percent,” says Li Rui, vice president of technology at Mobiuspace.

Opportunity | Optimizing Big Data Operations to Enhance the User Experience

Already running on Amazon EMR and Amazon Elastic Compute Cloud (Amazon EC2), Mobiuspace intended to better use these services to improve cluster resource utilization and gain more flexibility across the AWS global infrastructure.
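On Amazon EMR on EKS, steering Spark executors onto Graviton (arm64) Spot capacity of the kind described above can be expressed through the `emr-containers` StartJobRun API combined with Spark's Kubernetes node-selector configuration. The sketch below is illustrative only; the virtual cluster ID, IAM role, job name, release label, and S3 script path are all hypothetical:

```python
def spark_job_request(virtual_cluster_id, role_arn, script_uri):
    """Build an emr-containers StartJobRun request that pins Spark pods
    to arm64 (Graviton) Spot nodes via Kubernetes node selectors."""
    spark_params = " ".join([
        "--conf spark.executor.instances=10",
        # Standard node labels: architecture and (for EKS managed node
        # groups) the capacity type of the underlying instances.
        "--conf spark.kubernetes.node.selector.kubernetes.io/arch=arm64",
        "--conf spark.kubernetes.node.selector.eks.amazonaws.com/capacityType=SPOT",
    ])
    return {
        "name": "behavioral-events-batch",   # hypothetical job name
        "virtualClusterId": virtual_cluster_id,
        "executionRoleArn": role_arn,
        "releaseLabel": "emr-6.9.0-latest",  # assumed EMR release
        "jobDriver": {
            "sparkSubmitJobDriver": {
                "entryPoint": script_uri,
                "sparkSubmitParameters": spark_params,
            }
        },
    }


def submit(emr_containers_client, virtual_cluster_id, role_arn, script_uri):
    """Submit via an injected boto3 'emr-containers' client, so the
    module imports without AWS credentials."""
    return emr_containers_client.start_job_run(
        **spark_job_request(virtual_cluster_id, role_arn, script_uri)
    )
```

Because the job is just a pod spec to Kubernetes, swapping Spot for On-Demand or arm64 for x86 is a one-line node-selector change rather than a cluster rebuild, which is where much of the scheduling flexibility of EMR on EKS comes from.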
Learn how Mobiuspace adopted a modern data architecture with Amazon EMR on EKS.

With video streaming becoming the mainstay of mobile internet consumption, many users want to consume culturally relevant content and find easier ways to access such information online. However, it was not easy, especially for users in Latin America and other emerging markets, to find localized and personalized content. Mobiuspace made it a priority to analyze and learn user behavior based on users’ media consumption, cultural, and national backgrounds to provide relevant video recommendations. This would lead to better localized and personalized video streaming services.

Founded in 2016, Mobiuspace is a global internet technology company that provides a diversified product portfolio for users to discover, explore, consume, and create pan-entertainment content, making for a personalized experience anytime, anywhere. Mobiuspace deployed all its businesses and systems on AWS, composed of three major parts. First, its online service system supports service requests from all products running on different operating systems (Android/iOS/web). These requests include user center, in-feed video recommendation, channel recommendation, follows, video resolution, short URL sharing, push notification, and upgrade services. Second, its big data system collects behavioral data from the client software, provides raw data for analysis and recommendation, and processes billions of behavioral events daily. Finally, its video recommendation system runs on Amazon SageMaker, capturing user activity data and using machine learning models to recommend video content based on users’ interests.
mod.io Provides Low Latency Gamer Experience Globally on AWS _ Case Study _ AWS.txt
Given that most of its gaming community is in the United States and Europe, burstable scaling was often needed during game launches and updates, particularly when games landed on subscription services and reached millions of new players. High availability and autoscaling during these spikes and sustained periods of growth were also essential. Patrick Sotiriou, co-founder and vice president of technology at mod.io, says, "Having the agility to spin up resources instantly in another global region is critical to our business."

Solution | Leveraging Managed Services to Relieve Infrastructure Burden

mod.io had been using Amazon Web Services (AWS) since its launch, deploying resources such as Amazon Simple Storage Service (Amazon S3) to store images and mod files. It chose to migrate fully from on premises to the AWS Cloud in 2021, leveraging managed services such as AWS Lambda that would ease its infrastructure "heavy lifting" burden. With the migration, mod.io began breaking up its monolithic database and supporting architecture, prioritizing cloud-native services wherever possible. Since completing its migration to AWS, mod.io has expanded its international presence with a platform that's highly scalable and more responsive to its users.
"AWS and our cloud migration effectively unlocked the ability for us to scale globally in seconds," says Macsok. Platform performance and reliability have increased significantly, and mod.io no longer needs to spend unnecessary time and money on hardware maintenance.

The company briefly considered other cloud providers but chose AWS because of its positive experience with AWS subject matter experts and familiarity with the platform. Greg Macsok, vice president of infrastructure at mod.io, says, "The near real-time support we've received from AWS, from a technical and account management perspective, was a major driver in our decision. We also appreciate how we've been able to continue developing at speed during the migration thanks to the ease of using the AWS platform." Since day one, mod.io has focused on continually adding features and functionalities to its product, so this aspect was an important consideration.

mod.io is a middleware platform that powers user-generated content for video games. Trusted by more than 14 million users for successful integration with over 130 games, mod.io can be utilized across PCs, consoles, mobiles, and virtual-reality devices.
In September 2021, when beginning its cloud migration journey, mod.io had a daily active user base of 240,000. By November 2022, that figure had more than doubled to 530,000. Despite the massive increase in users, mod.io did not need to drastically scale its engineering team to support new users. "Being on AWS means that no matter how much or how fast our business grows, we don't need to scale human resources 1:1," says Macsok.

Overview

Modding—the modification of video games through user-generated content (UGC)—has become an integral way of connecting game studios with their communities. mod.io is a middleware provider whose platform powers UGC within games such as SnowRunner. Operating out of Australia, mod.io boasts over 14 million users and integrations with more than 130 games. To support rapid growth and reduce its manual infrastructure burden, mod.io migrated to AWS, rapidly scaling its database and multi-region architecture using Amazon Aurora and AWS Elastic Beanstalk, enlarging its global footprint, and reducing latency for gamers globally.

Opportunity | Seeking Better Support and Instant Global Scaling

mod.io also implemented Amazon Aurora as a fully managed database service available across three AWS Regions and multiple Availability Zones. Before the migration, mod.io had servers in the US West (Northern California) Region; it has since expanded to Frankfurt and Singapore. mod.io has set up redundant database replicas around the world to better support gamers in any location and reduced its platform's global latency from 700 milliseconds to 250 milliseconds on AWS. To rapidly autoscale its web applications, the company is using AWS Elastic Beanstalk.
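The multi-region replica setup reduces latency because reads are served from the replica closest to the player. A minimal sketch of that routing idea follows; the latency table is invented for illustration and is not measured mod.io data.

```python
# Illustrative routing of a read request to the nearest regional replica.
# The latency table below is made up for the example; real systems would
# measure this (for instance via DNS latency-based routing or client probes).

REPLICA_LATENCY_MS = {
    "us-west-1": {"NA": 40, "EU": 150, "APAC": 180},
    "eu-central-1": {"NA": 120, "EU": 25, "APAC": 160},
    "ap-southeast-1": {"NA": 190, "EU": 170, "APAC": 30},
}

def nearest_replica(player_geo):
    """Pick the replica region with the lowest latency for a player's geo."""
    region = min(REPLICA_LATENCY_MS,
                 key=lambda r: REPLICA_LATENCY_MS[r][player_geo])
    return region, REPLICA_LATENCY_MS[region][player_geo]

for geo in ("NA", "EU", "APAC"):
    region, ms = nearest_replica(geo)
    print(f"{geo}: route reads to {region} (~{ms} ms)")
```

With only one region, every player pays the round trip to that region; with replicas in three regions, each player's reads stay close to home, which is the mechanism behind the 700 ms to 250 ms improvement described above.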
Elastic architecture ensures smooth responses to spikes in mod.io traffic during game releases. In one situation, the number of application programming interface (API) requests to the mod.io platform doubled overnight, and the system had no issues or downtime while processing the increased load. Since migrating to AWS, mod.io has experienced no major outages.

To leverage the data accumulated in Amazon Aurora and optimize performance using the right tools for the right job, mod.io is now finalizing a bespoke analytics pipeline using Amazon Redshift and Amazon Managed Streaming for Apache Kafka (Amazon MSK). It plans to use behavioral analytics to generate valuable insights that would benefit the game companies it works with, alongside loyal modders on the mod.io platform.

On-demand scaling, particularly during prime gaming hours or around major game releases, was a priority that became increasingly challenging with its data center. When a new game or a new version of a popular game is released, mod.io requires increased compute resources in a short span of time. Since launching in 2018, mod.io has experienced phenomenal growth, averaging 250 percent year-on-year in mods downloaded. The business quickly realized, however, that its bare-metal servers could not keep pace with this growth rate over the long term. Plus, the company had difficulty getting real-time support from its data center and hardware vendors. When experiencing data center failures, mod.io would typically suffer outages while awaiting hardware vendor support.

Outcome | Expanding Reach with Highly Responsive Platform

The company uses AWS Elastic Beanstalk to autoscale its applications, Amazon Aurora as a multi-region managed database, and Amazon Redshift as a data warehouse.
On AWS, mod.io can support limitless expansion with high availability architecture that scales on demand and has embarked on a data and analytics journey to improve end users' gaming experience.

Sotiriou says, "There's so much potential for us to scale in several areas. I doubt there's a use case we'd want to tackle that we couldn't achieve with the multitude of services AWS offers." Aside from its current analytics project, mod.io plans to evaluate containerization and the creation of a data lake. "We're looking very far into the future and constantly comparing what we want to do at the product level with how AWS can help us achieve it at a technical level," Sotiriou concludes. mod.io is now exploring the AWS Partner Network to jointly pursue new business opportunities within the AWS global gametech customer community.
Modern Electron Case Study.txt
Using Amazon Web Services (AWS) to simulate and optimize the technology, Modern Electron has run tens of thousands of complex simulations on compute-optimized Intel-based Amazon Elastic Compute Cloud (Amazon EC2) C5 Instances. When AWS launched 64-bit Amazon EC2 C6g Instances, powered by Arm-based AWS Graviton2 processors, Modern Electron adopted the new technology to achieve better price performance. The savings enabled engineers to iterate faster at a 50 percent lower cost.

Founded in 2015, Modern Electron is an energy technology company developing deep tech for distributed energy generation that is greener, cheaper, and climate resilient. Modern Electron is developing technology to enable hundreds of millions of homeowners worldwide to save money on energy while reducing carbon emissions that degrade the environment. The company is working with heating appliance manufacturers to integrate new technology into the next generation of home heating systems. The technology is a new way to approach combined heat and power, converting a portion of the heat into high-efficiency electricity to increase a home's energy efficiency and heating reliability while reducing its reliance on grid electricity.

Achieving Cost Reductions and Better Performance

The team also uses AWS Batch, a service that provisions compute resources and optimizes job distribution based on the volume and resource requirements of the batch jobs submitted. Most Modern Electron simulations run on a single node, which means less worry about networking performance. "Our use of AWS Batch lets us worry a lot less about the infrastructure because AWS spins up the exact nodes we need as we need them," says Scherpelz. The team's local scripts submit runs to AWS Batch to explore specific sets of parameters. AWS Batch automatically boots up a compute node with the right resources and then launches the job. As each job finishes, AWS Batch shuts down that node.
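That workflow, local scripts fanning a parameter sweep out as one job per parameter combination, can be sketched as follows. The queue name, job definition, and parameter names are hypothetical, and the actual submission call is stubbed out so the sketch stays self-contained.

```python
# Illustrative parameter-sweep fan-out for a batch scheduler such as AWS Batch.
# Queue/definition/parameter names are hypothetical; in a real script each
# spec would be passed to boto3's batch client via submit_job(**spec).
import itertools

JOB_QUEUE = "simulation-queue"        # hypothetical queue name
JOB_DEFINITION = "thermionic-sim:1"   # hypothetical job definition

def build_sweep(temperatures_k, gap_sizes_um):
    """Build one job spec per (temperature, gap) combination."""
    specs = []
    for temp, gap in itertools.product(temperatures_k, gap_sizes_um):
        specs.append({
            "jobName": f"sim-T{temp}-gap{gap}",
            "jobQueue": JOB_QUEUE,
            "jobDefinition": JOB_DEFINITION,
            "parameters": {"temperature_k": str(temp), "gap_um": str(gap)},
        })
    return specs

specs = build_sweep(temperatures_k=[1400, 1600, 1800], gap_sizes_um=[5, 10])
print(f"{len(specs)} jobs to submit, e.g. {specs[0]['jobName']}")
# for spec in specs:
#     boto3.client("batch").submit_job(**spec)  # submission step, stubbed out
```

Each spec becomes an independent single-node job, so the scheduler can boot exactly the nodes the sweep needs and shut them down as jobs finish, matching the behavior described above.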
By migrating from Amazon EC2 C5 Instances to Amazon EC2 C6g Instances, Modern Electron reduced compute costs by an additional 50 percent. Combining this with the savings from Spot Instances, the company achieved an overall cost reduction of more than 75 percent. These savings enable the company to invest in running more simulations.

Exploring High Performance Computing on AWS

In 2018, Modern Electron began running simulations involving large clusters of Amazon EC2 C5 Instances, powered by Intel x86 processors. Capacity fluctuated depending on how many simulations it had to run, so the company opted for Amazon EC2 Spot Instances—spare Amazon EC2 capacity offered at discounted rates. This pricing option saved the company 50 percent compared to the cost of using Amazon EC2 On-Demand Instances for its simulations. Modern Electron then decided to explore the new Amazon EC2 C6g Instances, released in July 2020. The technology is powered by AWS Graviton processors, custom built by AWS using 64-bit Arm Neoverse cores to deliver better price performance for cloud workloads running on Amazon EC2. The funded startup is bringing a commercial product to market with appliance manufacturing partners around the world.
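The "more than 75 percent" figure follows from compounding the two 50 percent savings, which a quick arithmetic check confirms:

```python
# Check the compounded savings: 50% from Spot pricing, then a further
# 50% from migrating C5 -> C6g (Graviton2), applied to what remains.
spot_discount = 0.50
graviton_discount = 0.50

remaining_cost = (1 - spot_discount) * (1 - graviton_discount)
overall_reduction = 1 - remaining_cost

print(f"Remaining cost: {remaining_cost:.0%} of the original")
print(f"Overall reduction: {overall_reduction:.0%}")
# → Overall reduction: 75%
```

Because the discounts apply to different factors (pricing model and instance family), they multiply rather than add; two 50 percent savings leave 25 percent of the original bill.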
The technology requires optimized designs for a range of different products and models in some of the world's most demanding conditions, including extreme temperature, lifecycle, and reliability requirements. Thermionic converters have existed for decades and were historically used to power satellites. But engineers at Modern Electron have made breakthroughs in the technology and materials to optimize that technology for use in terrestrial appliances for the first time. Optimization requires powerful compute to run complex simulations, and the infrastructure became available only recently. "We often simulate tens of millions of particles," says Peter Scherpelz, senior computational physicist at Modern Electron. "We track how each particle moves and simulate that over millions of time steps—that's trillions of calculations. A desktop computer won't suffice."

Using Amazon EC2 C6g Instances has put the company on a faster path to an optimized product. With Modern Electron's technology, consumers worldwide will be able to squeeze both electricity and heat out of fuel, thus saving money and reducing carbon emissions. "On AWS, we have access to the right computing resources for the science we need to do," says Scherpelz. "The solutions are there for us to use." The company also gained elasticity.
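To make the scale Scherpelz describes concrete, a toy time-stepping loop (made-up constants and a simple Euler integrator, not anything Modern Electron actually uses) shows why particle count multiplied by step count drives the operation count into the trillions:

```python
# Toy charged-particle pusher illustrating why tracking millions of
# particles over millions of steps means trillions of operations.
# Constants and the uniform field are made up; real codes use far
# more sophisticated integrators and self-consistent fields.

def push_particles(n_particles, n_steps, dt=1e-12, accel=1e15):
    """Advance each particle with simple Euler steps in a uniform field."""
    positions = [0.0] * n_particles
    velocities = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            velocities[i] += accel * dt          # update velocity
            positions[i] += velocities[i] * dt   # update position
    return positions

# Tiny demo run; production runs scale both axes by roughly a million.
final = push_particles(n_particles=100, n_steps=1000)
ops_demo = 100 * 1000
ops_production = 10_000_000 * 1_000_000  # 10 trillion particle-steps
print(f"demo particle-steps: {ops_demo:,}; production scale: {ops_production:,}")
```

Every particle is touched at every step, so the work grows as the product of the two counts: ten million particles over a million steps is already ten trillion particle-step updates, which is why this workload needs cluster-scale compute rather than a desktop.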
"Elasticity to scale is crucial for us because we're a fairly small computational team and have spiking compute demands," says Roelof Groenewald, computational physicist at Modern Electron. "In the first week of a month, we might run 1,000 simulations, then not run any in the second week. Having the exact resources available that we need at any time is important to us." Now Modern Electron's design team can quickly simulate the detailed electron physics in its technology architectures, enabling it to iterate rapidly and improve its design. Ultimately, Modern Electron expects its device will bring efficient electricity and cost savings to hundreds of millions of consumers regardless of whether they're connected to the power grid. Modern Electron plans to run more extensive simulations based on hundreds of millions of particles rather than on the 10-million-particle range explored so far. The team is working on using multiple nodes to run larger parallel jobs and establish the infrastructure required to submit these simulations to the cloud and get results quickly.

Additionally, the company's engineers have used AWS compute resources to continually optimize code speed, enabling much larger simulations, especially on Amazon EC2 C6g Instances, which have a large number of cores per node. And by running more extensive simulations, the time to solution scales accordingly. "Aside from lower costs, the real payoff of Amazon EC2 C6g Instances is in speed to solution," says Scherpelz. "When we save 10 percent, we can do 10 percent more runs or harder and bigger runs.
Now we can get solutions in a reasonable time."

About Modern Electron

Founded in 2015, Modern Electron has since grown to 32 employees. The company's vision is to minimize carbon emissions by developing a thermionic converter that uses high-temperature heat from combustion already in household boilers and furnaces to generate power that is up to 5 times cheaper and much less carbon intensive than the electricity most homes can purchase from the grid. The device has no moving parts and delivers electricity more efficiently than the power grid, reducing household energy costs and carbon footprints. The technology provides new features such as blackout-proof heating, enabling homeowners to run the heat even when the power grid is down. "Recent winter weather disasters created widespread grid outages in Texas and other states, causing millions to lose power and heat," says Justin Ashton, vice president of product at Modern Electron. "Having efficient, blackout-proof heating is more relevant than ever. Any home with a gas appliance already has half a power plant in place. Our thermionic technology is the missing piece." The heating appliances enhanced by Modern Electron's technology are future compatible with renewable fuels, such as green gas and hydrogen, lowering society's cost on the environment and speeding up decarbonization.
Moderna Drives Commercial Innovation Using Amazon Connect and AI _ Moderna Case Study _ AWS.txt
Moderna, a digital biotechnology company, is best known for the mRNA vaccine it developed during the COVID-19 pandemic. With several other therapeutics in the pipeline, the Massachusetts-based innovator is changing the world of medicine by harnessing the power of mRNA. It is exploring new frontiers while focusing on digitization and making systems modular, agile, and extensible by integrating them.

AWS re:Invent 2022 - Commercial innovation at Moderna using Amazon Connect and AI (LFS201)

Moderna's goal of commercial excellence hinges on top-notch, fully integrated customer relationship management to power exceptional experiences. With OC3, the company is building a future-ready, modular infrastructure that gives a 360-degree view of the customer in a dynamic landscape. "Machine learning was key to bringing Moderna's mRNA products to market, so it was natural to extend its use to commercial efforts," says Salami. As it pivots to becoming a commercial organization, Moderna is using Amazon Web Services (AWS) to build personalized experiences for all stakeholders—patients, customers, agents, and supervisors. Its omnichannel cloud contact center delivers a consistent experience for users in all their interactions with the company while furthering Moderna's vision to be a data-driven organization.
Moreover, Moderna can better meet the changing needs of the broader healthcare community, including regulatory bodies and governments. Moderna is a global biotechnology company whose mission is to deliver the greatest impact to people through mRNA medicines.

Solution | Deploying Machine Learning to Power Exceptional Customer Experiences

OC3 is intuitive, powered by a humanized, conversational artificial intelligence (AI) engine. Using Amazon Lex, a fully managed AI service with advanced natural language models, Moderna builds chatbots with AI that understand intent, maintain context, and automate simple tasks across languages. It also uses Amazon Polly to deploy high-quality, natural-sounding human voices in dozens of languages. In 2022, Moderna piloted a bot library with different personas for different functions and a single desktop to help make agents' work more accessible.

The growing scope of Moderna's work using AWS is based on successful past collaborations. "AWS has deep cross-industry expertise, which helps us be future ready, innovate continuously, and scale with agility," Salami says. "AWS continues to disrupt itself and be a leader, and as AWS learns, we learn." Because of a global strategic partnership between AWS and Salesforce, Moderna's engineers can innovate faster with pre-built applications.
And using Amazon DynamoDB, a fast, flexible NoSQL database service, Moderna delivers its apps with nearly unlimited throughput, storage, and replication. "Our aim is to deliver seamless, integrated, personalized experiences with the agility to match the changing needs of patients and the broader system," says Bhowmick. "That means we have no silos, and we build dynamic cross-solution systems."

The contact center is currently running in four regions to comply with local compliance and regulatory needs, with standardized workflows across markets and lines of business. Added agility helps Moderna transition from geography-based vendors to a centralized cloud approach so that it can fully control its contact center. Using Amazon Connect—which helps set up a contact center in minutes—Moderna quickly set up its simple-to-use cloud contact center and onboarded agents to provide superior customer service at a lower cost. "The platform is vendor-agnostic, allowing us to deploy it across regions seamlessly," says Salami.

The ecosystem is connected to various other downstream systems for adverse event reporting and triaging of quality cases. "AWS shares Moderna's DNA," says Bhowmick. "Using AWS, we built an operationally efficient solution while providing the best experience for our customers and patients, and we can scale with agility and extensibility. AWS collaboration has been fundamental to this entire journey."
Outcome | Personalizing Healthcare through Digitalization and Innovation

Moderna is currently piloting several new projects to better serve its customers with an integrated global experience, like bot libraries. In addition, it is working to make agents' work easier through simplified, standardized user interfaces and workflows and is exploring different models for commercialization by incorporating best practices from other industries, like fintech. This amalgamation of science and technology is driving its progress toward personalized medicine so that patients can get the right information, the right access, and the right therapy at the right time.

Given its global aspirations and a drive for commercial excellence, Moderna needed a robust, automated customer-management solution. Its omnichannel cloud contact center (OC3) platform, built on AWS, helps the company provide a streamlined, personalized customer interaction experience in every touchpoint, across all lines of business and markets.

Moderna's choice to use AWS to build OC3 was driven by a shared culture of innovation, iteration, and improvement. "AWS was a natural fit from a technology standpoint. Both Moderna and AWS are digital-first and share a mindset of delivering data-driven value for external stakeholders, from patients to governments to healthcare providers," says Barbara Salami, vice president of digital for commercial at Moderna. "Our relationship with AWS is 10 years strong and spans across the company from genomics to manufacturing. It's more than technology; it's about the art-of-the-possible thinking."
To build OC3, the team worked backward, starting with an ideal customer journey and streamlining operations toward that end. The platform handles inquiry, intake, interaction, and support, with built-in capabilities to support communications through customers' preferred channels, like voice, chat, email, web, and SMS. Customer service agents get intelligent content routed to their screens in near real time through a few clicks to address customer queries, additionally supported by a keyword-based search engine so that they don't have to scramble for information to help customers. Built-in self-service capabilities further improve the customer experience, while integration with Moderna's customer relations management system unlocks a 360-degree view of the customer. "Everything is integrated, modular, and cloud-based to support scaling and agility," says Arpita Bhowmick, senior director, omnichannel contact center products for Moderna. "What's unique is that the platform can scale to serve the entire gamut of business functions while following the compliance guardrails."

Founded in 2010, Moderna aims to deliver the greatest possible impact to people through its pioneering mRNA technology. With a robust technology platform as its backbone, it started the digital production and commercialization of its COVID-19 vaccine in 2021 and has delivered over 900 million doses thus far. Today, Moderna has 3,800 employees worldwide and 46 products in the pipeline, 31 of which are in clinical trials. Furthermore, the company is committed to having a diverse workforce and achieving carbon neutrality by 2030.
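The keyword-based routing of content described above can be sketched as a simple overlap scorer. This illustrates the idea only: the topics and keywords are invented, and a production system such as one built on Amazon Lex would use trained intent models rather than raw keyword matching.

```python
# Illustrative keyword-based routing of an inquiry to a content topic.
# Topics and keywords are invented; a real deployment (e.g., one built on
# Amazon Lex) would use trained intent models instead of keyword overlap.

TOPIC_KEYWORDS = {
    "adverse-event": {"reaction", "side", "effect", "symptom"},
    "scheduling": {"appointment", "schedule", "dose", "booster"},
    "product-info": {"storage", "ingredients", "shelf", "expiry"},
}

def route_inquiry(text):
    """Return the topic whose keyword set best overlaps the inquiry text."""
    words = set(text.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "agent-fallback"

print(route_inquiry("Can I schedule a booster appointment next week"))
print(route_inquiry("hello there"))
```

An inquiry with no keyword overlap falls back to a human agent, mirroring how self-service hands off to agents when automation cannot resolve a query.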
Modernizing FINRA Data Collection with Amazon DocumentDB _ FINRA Case Study _ AWS.txt
With the new solution, translation is no longer needed between code and storage. Because Amazon DocumentDB natively stores data in JSON, it is simpler for FINRA to query and index data, reducing development cycles by 50 percent and extending the usability of data by seamlessly working with other systems that use JSON. This reduction in development time helps FINRA spend more time on innovation. "We no longer need to create one data model for the backend and another for the API layer," says Elghoul. "We can take advantage of the development time that we're saving to be more innovative and focus on the real business problems that we are solving."

The data that FINRA ingests must be secure. Amazon DocumentDB was an effective choice because it integrates with other AWS services used to deliver strict network isolation—services such as Amazon Virtual Private Cloud (Amazon VPC), used to define and launch AWS resources in a logically isolated virtual network. All data is encrypted at rest using AWS Key Management Service (AWS KMS), used to create and control keys to encrypt or digitally sign data. Encryption in transit is provided with Transport Layer Security. Using Amazon DocumentDB, FINRA can automatically monitor and back up data to Amazon Simple Storage Service (Amazon S3), object storage built to store and retrieve any amount of data from anywhere.

The migration to Amazon DocumentDB also simplified the management of data versioning. Because filings and industry needs evolve over time, it is critical for FINRA to support and adapt to these changes. Using its legacy relational database, FINRA would have to track changes to its data using complex logic. Using Amazon DocumentDB, the service automatically publishes change events. Data collection and availability was the first piece of the puzzle for FINRA.
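The advantage of the document model described above is that a filing is stored and queried in the same JSON shape the application uses, with no object-to-relational mapping in between. A minimal sketch of the idea follows; the field names are invented, and plain Python dicts stand in for Amazon DocumentDB documents.

```python
# Illustrative document-model storage: filings stay in the JSON shape the
# application uses, so queries address fields directly. Field names are
# invented; dicts stand in for Amazon DocumentDB documents, and Django-style
# double-underscore paths stand in for MongoDB's dotted field paths.

filings = [
    {"filing_id": 1, "form": "U4", "version": 2,
     "registrant": {"name": "A. Broker", "state": "NY"}},
    {"filing_id": 2, "form": "U4", "version": 1,
     "registrant": {"name": "B. Dealer", "state": "CA"}},
]

def find(collection, **criteria):
    """Match documents on (possibly nested) field paths."""
    def get_path(doc, path):
        for part in path.split("__"):
            doc = doc[part]
        return doc
    return [d for d in collection
            if all(get_path(d, k) == v for k, v in criteria.items())]

ny_filings = find(filings, form="U4", registrant__state="NY")
print([d["filing_id"] for d in ny_filings])
# In a relational/XML design, the same query needs joins over mapped tables
# plus XML parsing; here the document is queried in its native shape.
```

The same single data model serves both storage and the API layer, which is the duplication Elghoul describes eliminating.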
Outcome | Providing Analytics and Investigating Bad Actors Using AWS

Important goals for FINRA are making the data gathered in Amazon DocumentDB available for analytics, working alongside AWS to find the right services to help investigators find bad actors in the industry, and continuing to innovate. By achieving these goals, the organization will continue to improve on fulfilling its mission to protect investors by using data analysis. "To build products to support the future, we use services built for the future, providing capabilities at a pace our users and stakeholders expect," says Elghoul.

Opportunity | Improving Query and Indexing Performance for Regulatory Documents Using Amazon DocumentDB

FINRA wanted to reduce time to market, the development time required to build new regulatory filings, and the time to migrate existing files to JSON format. FINRA considered alternative database solutions and selected Amazon DocumentDB (with MongoDB compatibility), a fully managed native JSON database designed for scaling enterprise workloads, which the organization found to be a good fit for its use case. The organization has been using AWS since 2013 and began working on proofs of concept for Amazon DocumentDB in 2019. FINRA migrated to Amazon DocumentDB in early 2020 and delivered the Form U4 (Uniform Application for Securities Industry Registration or Transfer), used to register broker-dealers and investment advisers, in October 2020. Using AWS, FINRA has also simplified the storage process and improved its business across multiple vectors. "We are removing limits and moving faster. If we had to build all the services ourselves, it would have taken years to get where we are," says Elghoul.
In addition to Amazon DocumentDB, the organization uses Amazon OpenSearch Service, which facilitates interactive log analytics, near-real-time application monitoring, website search, and more, for advanced full-text search across the multiple databases it maintains for different use cases.

About FINRA

FINRA works under the supervision of the US Securities and Exchange Commission to write and enforce rules governing brokerage firms that do business with the public in the United States. FINRA examines firms for compliance, fosters market transparency, and educates investors.

For cost optimization, FINRA uses AWS Graviton2 instances for Amazon DocumentDB, custom built by AWS using 64-bit Arm Neoverse cores to deliver optimal price performance. "We saved over 50 percent month over month by migrating to the new instance type and resizing the Amazon DocumentDB cluster to reduce the number of instances used and to gain better performance," says Elghoul.

Modernizing FINRA Data Collection with Amazon DocumentDB

As of January 2023, FINRA has collected about 2.5 million filings since the inception of the new framework.
With the migration to Amazon DocumentDB, FINRA simplified its data collection applications and decreased development times by reducing the code necessary to map objects to relational tables. "We wanted to reduce getting involved in tweaking services or maintaining code. That's why we prefer to use fully managed services from AWS," says Mohammed Elghoul, senior principal architect of regulatory operations and registration platforms technology at FINRA.

AWS Graviton2 instances provide up to 30 percent better price performance for Amazon DocumentDB than Intel-based instances, depending on database size and workload characteristics. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

FINRA is a not-for-profit organization that writes and enforces the rules governing brokers and broker-dealer firms in the United States. FINRA's overarching goal is to protect investors and safeguard market integrity, and it chose to build on AWS to fulfill this mission. The organization needs efficient data collection that is accurate and consistent. FINRA's legacy database solution for data collection was a relational database that stored data in XML format. The organization decided to shift to JSON format, improving query and indexing performance for regulatory documents while reducing storage space.
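The difference is easy to see in miniature: a filing that arrives as JSON can be stored and queried as-is, with no layer that maps objects onto relational tables. The stdlib-only sketch below mimics a MongoDB-style equality filter; the document fields and the `matches` helper are hypothetical illustrations, not part of any FINRA system or of the DocumentDB API.

```python
import json

def matches(doc, query):
    """Minimal MongoDB-style equality filter: every key in the query
    must equal the corresponding (possibly nested, dot-separated) field."""
    for path, expected in query.items():
        node = doc
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                return False
            node = node[key]
        if node != expected:
            return False
    return True

# Filings arrive as JSON and are stored as-is -- no translation layer
# that splits one filing across several relational tables.
raw = '{"filingId": "U4-0002", "registrant": {"state": "NY", "role": "broker"}}'
doc = json.loads(raw)

broker_in_ny = matches(doc, {"registrant.state": "NY", "registrant.role": "broker"})
```

Because the stored shape and the queried shape are the same JSON document, there is no second data model to keep in sync with the API layer.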
Solution | Shortening Development Cycles and Achieving 50% Cost Savings Using AWS

The Financial Industry Regulatory Authority (FINRA) wanted to improve data collection and data usability by switching from XML to JSON format across its entire data collection framework. FINRA collects data from several thousand providers, such as investment advisers and stock exchanges, and it tracks, aggregates, and analyzes market events to protect investors, making data usability critical. To improve the accuracy, reliability, and consistency of the information collected and disseminated, FINRA used Amazon Web Services (AWS) for its solution. The organization accelerated development time, reduced ongoing maintenance costs, and strengthened data security.

Learn how FINRA in the financial services industry reduced development times and ongoing maintenance costs using Amazon DocumentDB (with MongoDB compatibility) for its data collection framework.
Modernizing Infrastructure to Improve Reliability Using Amazon EC2 with Loacker _ Case Study _ AWS.txt
Even though Loacker was new to the cloud, it migrated its primary SAP application to AWS quickly, cutting infrastructure costs by 32 percent. Loacker first contacted AWS in March 2020, and its new solution went into deployment in June 2021, after a 5-month migration to the cloud. Loacker began its modernization by migrating its SAP application to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, giving it a secure location to store and access its data.

Overview | Using AWS to Modernize Infrastructure for Loacker

A. Loacker Spa/AG (Loacker), a South Tyrolean company that leads the international wafer market and specializes in chocolate confections, wanted to modernize its infrastructure capacity and scalability so that it could increase the agility, availability, and resiliency of the systems that its manufacturing processes rely on.

Outcome | Using AWS Solutions to Innovate

Now that the migration of the SAP application is complete, Loacker has seen a reduction in costs and an improvement in reliability, including zero unavailability events from 2021 to 2023 thanks to its use of cloud resources. It will migrate more of its on-site applications to the cloud. In the future, Loacker will evaluate machine learning services from AWS that interact with on-site production data to enhance quality control and improve production processes.
AWS has provided proofs of concept regarding business intelligence and remote workplaces. The ability to try different solutions has helped to boost innovation within the company. Ultimately, Loacker hopes to continue its journey of using AWS as it moves toward a software-as-a-service solution. Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Loacker keeps its SAP disaster recovery environment aligned using AWS DataSync, a secure online service that automates and accelerates migrating data between on-premises and AWS storage services. It also uses this service to back up some large onsite file services that it was unable to back up using its previous software. Additionally, Loacker hosts its business-to-business website using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. In addition to using AWS services, Loacker used the nearby AWS Europe (Milan) Region to host its solutions. Using this AWS Region, which has three Availability Zones, Loacker can reliably spread applications across multiple data centers, adding even greater reliability and business continuity and eliminating any network latency considerations. Loacker has always had a special connection to the mountains, where it creates high-quality wafer and chocolate products. Its Italian and Austrian production plants are surrounded by a natural Alpine landscape, which helps it to focus on the respect of nature and the environment and use optimal, genuine ingredients. Founded in 1925 as a small pastry shop in Bolzano, Italy, Loacker now sells products in more than 100 countries. 
The on-site location of Loacker's hardware and software often led to system access issues because of hardware limitations and sometimes because of extreme weather. Loacker decided to use Amazon Web Services (AWS) and to migrate the most important piece of its infrastructure, its SAP application, to the cloud. Because of this migration, the company increased system reliability while reducing costs.

About A. Loacker Spa/AG

A. Loacker Spa/AG is a South Tyrolean company that leads the international wafer market and specializes in chocolate confections. Loacker products are manufactured in the heart of the Alps and inspire people in over 100 countries.

Opportunity | Reducing Cost and Improving Availability

Loacker had no experience using AWS or the cloud before its migration, but it considers itself determined, disciplined, and open to new technologies. Because the migration from on-site to cloud solutions is a significant change, a successful migration requires transforming the mindset of the entire company. Loacker was fully committed to the cloud transition and invested in training and upskilling its workforce so that its employees could directly manage the solution. As a 24/7 production factory, the company depends on an IT infrastructure that is the basis of production. Before its migration to the cloud, Loacker used two sites to host its business resources to provide high availability in case of failure of one site.
However, the remote mountain location and associated extreme weather, especially in winter, could still result in issues with accessing its on-site hardware. Loacker needed to improve the reliability of its systems, and it looked to AWS.

Solution | Improving System Reliability and Reducing Infrastructure Costs by 32%

Loacker chose AWS because of its efficient cloud-based architecture, how well its services interact, and its cost advantages. "AWS provides a very good set of services in terms of availability, stability of services, and documentation," says Natale. Loacker considers the reliability and availability of AWS services its most important benefit. The new cloud-based solutions have eliminated availability lapses and the corresponding interruptions in production. Loacker has also replaced on-site file servers and physical tapes with virtual tape libraries using AWS Storage Gateway, a set of hybrid cloud storage services that provides on-premises access to virtually unlimited cloud storage. The company also uses this service as network file system storage for its Linux machines.

The company also uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance, to store long-term and SAP backups. Previously, using on-site hardware, Loacker experienced situations in which it was unable to retrieve its data, resulting in disruption of its business continuity. Since Loacker's migration to the cloud in June 2021, the infrastructure has had zero downtime, with no associated production losses. By migrating its SAP application to AWS, Loacker reduced its infrastructure costs by 32 percent.
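Long-term backups of this kind are commonly aged out of the standard storage class with an S3 lifecycle configuration. The fragment below is a generic sketch, not Loacker's actual policy; the bucket prefix and retention periods are illustrative assumptions.

```json
{
  "Rules": [
    {
      "ID": "ArchiveSapBackups",
      "Filter": { "Prefix": "sap-backups/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A policy like this can be attached with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json`, after which S3 transitions and expires the backup objects automatically.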
"We are a manufacturing company, not a technology company, so adoption of new technologies is a bit challenging," says Santo Natale, IT infrastructure field manager. "AWS provided us with a lot of training resources, and one of the reasons we chose AWS was the very high quality of the support." The expertise that AWS brings to the process has helped smooth the transition. "We performed an SAP technology upgrade, from R/3 to SAP HANA, and that was a big accomplishment for us. We did not have any delays or issues in the migration of the infrastructure to AWS. Everything was great," says Manfred Mayr, head of IT organization at Loacker.

Learn how Loacker modernized its manufacturing infrastructure using Amazon EC2.
Money Forward Increases Development Velocity 3x Working with AWS Training and Certification _ Case Study _ AWS.txt
When Japanese financial services provider Money Forward used Amazon Web Services (AWS) to rapidly scale for expansion, it realized that success would take more than improving its technological infrastructure. The company also needed to train its employees and increase the number of engineers with AWS expertise. Money Forward has upskilled nearly 200 of its engineers by working with AWS Training and Certification, which helps individuals build and validate their skills to get more out of the cloud. By boosting engineers' knowledge of and confidence with AWS through training, Money Forward has significantly increased the development speed and product release cadence of its services.

Money Forward worked with AWS Training and Certification to provide training in AWS services for its engineers. The company wanted to increase the number of engineers who could use services like Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises. The company viewed the training as an investment in the business as well as in human potential. "If you let employees experience things, you can expect growth from them," says Yosuke Suzuki, general manager of the service infrastructure division at Money Forward. "We wanted our engineers to experience things that lead to growth." Company operations have improved after AWS Training. Previously, everything from adding middleware and capacity planning for media exposure to scaling up and scaling out had to go through the central infrastructure team; now the operations teams can complete these tasks themselves.
This has led to faster product releases and more service offerings for customers. Backend engineers who participated in the training have also been adding and using AWS-managed middleware. Developers who previously took 30 minutes to deploy new features now take only 10 minutes, and the volume of infrastructure changes has accelerated threefold.

Solution | Enhancing the Autonomy of the Application Team

With different levels of AWS knowledge among its in-house engineers, Money Forward chose two AWS Training courses. Developers who were new to AWS took Architecting on AWS, which teaches learners to identify services and features to build resilient, secure, and highly available IT solutions in the AWS Cloud. This introductory training lowered the hurdles to using AWS and helped engineers learn the fundamentals of building IT infrastructure on AWS. Engineers already familiar with AWS took Running Containers on Amazon EKS, an intermediate course aimed at helping engineers learn container management and orchestration for Kubernetes using Amazon EKS, to promote use of the company's in-house infrastructure built on AWS and Amazon EKS. With the help of the AWS training team, the course was customized to teach the tools of the in-house infrastructure to make sure the training was practical.

Established in 2012, Tokyo-based Money Forward, with its mission "Money Forward. Move your life forward.", has developed various businesses in the financial technology and software-as-a-service (SaaS) domain for corporations, individuals, and financial institutions.
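With this kind of training in place, an application team can ship a service to Amazon EKS on its own using a standard Kubernetes Deployment manifest like the one below. This is a generic example, not Money Forward's configuration; the service name and container image are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api          # hypothetical service name
  labels:
    app: billing-api
spec:
  replicas: 3                # team-controlled scaling, no central ticket needed
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
        - name: billing-api
          image: example.registry/billing-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```

Applying it with `kubectl apply -f deployment.yaml` rolls the service out, and the team can scale it later simply by editing `replicas`, which is the kind of self-service autonomy the training was meant to enable.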
Money Forward provides more than 200,000 billing companies with services, including Money Forward Cloud, which uses SaaS solutions for back-office optimization in areas such as accounting and finance, personnel, and legal affairs. The company also provides more than 12.8 million users with asset-management services, such as Money Forward ME, to solve personal money issues. "We wish to deliver even more and greater value to our users," says Yosuke Tsuji, CEO of Money Forward. "We have achieved only 1 percent of our vision."

The demand for AWS Training courses exceeded the company's expectations. Between November 2020 and April 2022, 260 engineers took the AWS Training and Certification courses, which received a 4.9/5.0 score in the post-training survey. "We have had engineers with skills and knowledge on AWS but who have not been able to train other engineers systematically. This training became a catalyst to spark a wide range of interest in AWS. The training also gave engineers a common language when using AWS, and our in-house infrastructure has been very effective," says Junya Ogasawara, chief technology officer of Money Forward Home Company, a consumer company within Money Forward.

Money Forward soon faced challenges amid this rapid growth and active use of AWS. The expansion stretched Money Forward's central infrastructure team too thin, and the company knew this could slow its business growth. Money Forward developed a framework to speed up service improvement by allowing the application development teams to build and operate infrastructure and provide services autonomously.
The goal was to help service teams manage their own infrastructure, which meant scaling the organization and the business. For this culture to work at Money Forward, its developers and systems engineers, who at the time had different levels of knowledge about AWS and Kubernetes, needed upskilling and an in-depth understanding of AWS services.

By upskilling employees through courses offered by AWS Training and Certification, financial services provider Money Forward improved the speed of its product releases and increased the number of products and services it offers to customers.

Outcome | Improving Company Culture and Profits by Optimizing Team Structure

Money Forward hopes to further optimize its newly established DevOps system. The application development teams will continue to be involved in operations, and software engineers will continue to use the AWS infrastructure. Money Forward believes it is essential to help more engineers learn AWS and to continue to release stable services faster. As a result of AWS Training and Certification, the company has improved not only its service to customers but also its culture. As Suzuki says, "Our message to potential hires is that you'll be able to grow as an engineer and individual employee by joining Money Forward."

Opportunity | Solving a Bottleneck to Spur Business Growth

In Money Forward's early days, a central infrastructure team handled building and operating the infrastructure portion of all products in an on-premises environment. With the growth of its existing services and the expansion of new ones, there was an increasing requirement to respond more quickly to users' needs. To make the infrastructure more robust and scalable, Money Forward started using AWS.
In 2017, it started building new services on AWS, and it moved existing on-premises products, such as Money Forward Cloud Payroll and Money Forward ME, to the AWS Cloud in 2020 and 2021, respectively. Money Forward now has services with more than 12.8 million users operating on AWS.

AWS Training has boosted the use of AWS within the company, and application developers have been able to take over system-setting authority from the infrastructure engineers. "If we were still in a traditional on-premises environment where only infrastructure engineers could touch AWS, the current growth of Money Forward might have been slower," says Ogasawara. "But now, even with the in-house infrastructure, the percentage of application teams that can use and operate by themselves is increasing, which has improved the release speed and productivity of our services."

About Money Forward

Established in 2012, Tokyo-based Money Forward is a fintech company that delivers tools to visualize and improve the financial health of individuals and small to midsize organizations. It serves its customers with a variety of personal finance apps and software-as-a-service solutions.
myposter Case Study.txt
Migrating to the cloud has helped myposter innovate faster and also launch a second business, Kartenliebe, which makes personalized stationery and cards for weddings, birthdays, religious festivals, and other occasions.

AWS designed and implemented solutions for myposter's storage and compute needs, using Amazon Simple Storage Service (Amazon S3), an object storage service offering scalability, data availability, security, and performance. The company also uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for myposter's fluctuating and varied workloads, including the high levels of automation needed for customizing products in its web shop. The IT team now spends less time on maintenance, so it can focus on delivering value through service innovation. This has helped myposter launch and support Kartenliebe by deploying another Kubernetes cluster. "Kubernetes has the best and widest range of tools and libraries," says Tafelmayer, "which means the team can program in any language and pick the most appropriate features of each package to develop the business."

Adopting AWS has also been beneficial when hiring talent. "We're a fully digital business, with ambitions to grow," says Tafelmayer. "The people we want to attract expect to work with the latest tools, so they have the opportunity to learn and grow themselves."

More Flexible Storage and Improved Agility

myposter is an ecommerce and photo production business based in Munich, Germany. Customers upload photos to create personalized photobooks, greeting cards, calendars, posters, and other printed items. myposter also licenses its platform to third parties.
Scaling Automatically to Meet Demand

myposter decided to migrate to AWS in 2018, after the company had tried to create its own storage system using an open-source solution. Not satisfied with that storage environment's reliability and stability, myposter turned to AWS to modernize its setup. "It was quickly clear to us that AWS was the best fit for our business in terms of storage, and for wider operations too," says Max Tafelmayer, chief technology officer (CTO) at myposter. "It had all the services and flexibility we could ever need."

With the new environment, myposter has resolved capacity issues during times of peak demand and reduced cost per customer order by 5 percent. In addition, replacing the open-source storage cluster with Amazon S3 has provided a more stable and reliable environment and freed up time for myposter's IT team to focus on product development.

No longer tied to rigid and time-consuming processes, myposter now has the freedom to do what it wants. "AWS offers a great menu of different services to choose from," says Tafelmayer, "and the opportunity for our people to develop and learn along the way."

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.
Using Amazon RDS for MySQL, myposter can easily set up and operate relational databases in the cloud.

myposter is an ecommerce and photo production business based near Munich, Germany. Its customers upload photos to create personalized photobooks, greeting cards, calendars, posters, and other printed items. myposter turned to AWS to create an infrastructure that could scale in times of high demand, such as during Black Friday sales promotions and over the run-up to Christmas. At these times, the strain on its system could be up to 400 percent higher than during other periods of the year. Using AWS has helped the company innovate faster and also launch a second business, Kartenliebe, which makes personalized cards for weddings, birthdays, religious festivals, and other social occasions.

myposter has 100 employees and operates in the competitive market of digital image editing and printing, where a fast and efficient service for customers is essential. In addition to its digital printing operations, myposter rents its web shop infrastructure to third parties that require high-end visual processing and production services.
To achieve the agility required to offer this service, myposter chose Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud.

Server Setup Now Takes 5 Minutes Instead of a Week

With the previous infrastructure, it took the myposter IT team a week to set up a new server. Now it takes just 5 minutes to add or remove a server to match fluctuating demand. The company believes that images are safer and more retrievable on AWS using Amazon S3. Issues that used to impede myposter's operations, such as databases going out of sync, simply do not happen anymore.

By its nature, myposter's business experiences uneven demand, with spikes and troughs of activity throughout the year and at different times of the week. Managing the company's on-premises infrastructure represented a significant operational overhead, and myposter was concerned about the availability of customer images stored on its servers.
N KID Group Case Study Amazon Web Services (AWS).txt
"Our vision is always to have a robust system that can serve our customers in the best way possible. We are expanding fast, and being on the AWS Cloud has given us a lot more flexibility and scalability," Khoa comments.

Picking the Right Cloud Partner

With AWS Elastic Beanstalk, N KID engineers now conduct multiple deployments during the day, using a continuous integration/continuous delivery (CI/CD) approach, to improve functionality. At night, instances are scheduled to scale down, which has cut operational costs by 30 percent. "Developers now have peace of mind, and we are all more relaxed because we can deploy automatically using AWS Elastic Beanstalk with no constraints," Khoa says.

Benefits of AWS
- Develops and executes promotions 3 times faster

Having successfully standardized its digital operations on AWS, N KID began working toward its next goal: providing a consistent customer experience. Payment at tiNiWorld indoor playgrounds is mostly digital, with visitors using the N KID mobile app or branded Near Field Communication (NFC) cards. However, with the group's on-premises system, crashes frequently occurred during school holidays and on weekends when traffic spiked. This resulted in customers being limited to cash payments and employees needing to manually record transactions, which incurred a high risk of errors and dissatisfaction due to wait times.

"With managed services like Amazon RDS for SQL Server, our developers can conduct performance checks on their own, and we can take advantage of native features of Amazon RDS for SQL Server, such as backups and snapshots, to upgrade our database without a dedicated DBA. Furthermore, we have reduced risk to better serve our growing customer base," Khoa says. Since migrating to AWS, N KID has doubled the number of tiNiWorld centers from 30 to 60 and branded retail outlets from 10 to 42.
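The nightly scale-down described above is the kind of thing Elastic Beanstalk expresses as scheduled actions in an `.ebextensions` configuration file, using the `aws:autoscaling:scheduledaction` namespace. The sketch below is illustrative only; the action names, instance counts, and cron times are assumptions, not N KID's actual settings.

```yaml
# .ebextensions/scheduled-scaling.config  (illustrative values; times are UTC)
option_settings:
  - namespace: aws:autoscaling:scheduledaction
    resource: NightlyScaleDown
    option_name: MinSize
    value: '1'
  - namespace: aws:autoscaling:scheduledaction
    resource: NightlyScaleDown
    option_name: MaxSize
    value: '1'
  - namespace: aws:autoscaling:scheduledaction
    resource: NightlyScaleDown
    option_name: Recurrence
    value: '0 22 * * *'
  - namespace: aws:autoscaling:scheduledaction
    resource: MorningScaleUp
    option_name: MinSize
    value: '2'
  - namespace: aws:autoscaling:scheduledaction
    resource: MorningScaleUp
    option_name: Recurrence
    value: '0 6 * * *'
```

Deployed with the application bundle, a configuration like this shrinks the Auto Scaling group each night and restores capacity each morning, which is how a scheduled scale-down can translate directly into lower operational costs.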
Prior to running its payment processing and Windows-based workloads on AWS, updates were conducted weekly and a server restart was performed overnight to avoid affecting N KID customers. “If we had to fix an urgent bug, we could deploy immediately, but with a lot of anxiety, because we were afraid of the system going down,” Khoa recalls.

Shorter Time-to-Market with Serverless

Saves on headcount by offloading database administration
Cuts operational costs by 30%

For more than 10 years, N KID Group has been operating indoor playgrounds under its flagship brand, tiNiWorld, to give children a space to safely run, play, and explore. In 2016, the group introduced a mobile app and began a digital transformation to enrich its offline experience with online touchpoints. Its renewed vision is to be the top children’s platform in Vietnam.

AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications.

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.
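The overnight scale-down that cut N KID’s operational costs by 30 percent can be expressed as Elastic Beanstalk scheduled actions in the `aws:autoscaling:scheduledaction` option namespace. The sketch below is an illustration only: the action names, recurrence times (UTC), and capacities are assumptions, not N KID’s actual configuration.

```python
# Hypothetical sketch of scheduled scale-down/scale-up for an Elastic
# Beanstalk environment, in the spirit of the cost saving described above.
# Action names, times, and capacities are illustrative assumptions.

def scheduled_scaling_options(night_min: int, day_min: int) -> list:
    """Option settings for the aws:autoscaling:scheduledaction namespace."""
    def option(action, name, value):
        return {
            "Namespace": "aws:autoscaling:scheduledaction",
            "ResourceName": action,  # identifies the named scheduled action
            "OptionName": name,
            "Value": str(value),
        }
    return [
        option("ScaleDownAtNight", "MinSize", night_min),
        option("ScaleDownAtNight", "Recurrence", "0 16 * * *"),  # 23:00 ICT
        option("ScaleUpInMorning", "MinSize", day_min),
        option("ScaleUpInMorning", "Recurrence", "0 0 * * *"),   # 07:00 ICT
    ]

options = scheduled_scaling_options(night_min=1, day_min=4)
# With boto3 (not executed here), these settings would be applied via:
#   boto3.client("elasticbeanstalk").update_environment(
#       EnvironmentName="tiniworld-payments", OptionSettings=options)
print(len(options))
```

Defining the schedule as configuration, rather than restarting servers by hand, is what lets instances scale down automatically every night without anyone staying up to watch them.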
N KID Group Modernizes Child’s Play on AWS

About N KID Group
Vietnam and its children are in serious need of more green spaces to run and move freely. In Ho Chi Minh City, public parks cover only 0.55 square meters per citizen. This is a far cry from neighboring countries such as Singapore, where 8 square meters of land per citizen are reserved for parks and trees. N KID Group was founded in 2009 in Vietnam with a vision to become a leader in children’s entertainment. The group operates 60 tiNiWorld play centers and 42 tiNiStore outlets, and provides digital engagement platforms for kids and parents.

More and more businesses are moving away from reliance on traditional hosting centers, and N KID did not want to be left behind. N KID recognized the benefits of cloud computing and the automation opportunities afforded to businesses on the cloud. “For us, it was never about whether or not we would move to the cloud, but rather when we would move,” explains Do Bui Anh Khoa, chief technology officer of N KID Group.

Auto scales during peak periods to prevent system crashes
Doubles number of play centers and retail outlets in 2 years

The Innovation Journey Continues
N KID continues to explore new opportunities for innovation on the cloud with Renova and AWS. Currently, the group is working on containerizing all of its services for a fully container-based architecture. Additionally, N KID plans to reduce its Windows workloads from 40 to 20 percent to better support Kubernetes integration and its CI/CD approach.

Adopting a Stress-Free Approach to Deployment
The root of the problem was a massive server, used prior to AWS, that N KID’s lead developer used to manually deploy resources.
When that lead developer left the company, leaving no documentation in his wake, N KID took the opportunity to automate. The company applied AWS Elastic Beanstalk to its transaction processing application, a .NET workload on Windows that is key to avoiding service interruption on the ground—or, in N KID’s case, on a bouncy rubber mat. Since implementing AWS Elastic Beanstalk, the group has not experienced any major instances of downtime, much to the relief of its customer service employees.

The group wanted to embark on its cloud journey with an experienced consultant, which it found in Renova Cloud, an Amazon Web Services (AWS) Advanced Consulting Partner. The first step in N KID’s cloud journey was standardizing operations across its digital platforms. The group engaged Renova in 2017 to start migrating non-critical workloads, such as its website, to the AWS Cloud. Motivated by the positive experience, N KID decided to go all-in on AWS. Khoa says, “Renova played, and continues to play, an important role in N KID’s journey of modernizing our technology stack to provide a robust, ever-evolving experience for our customers.”

As the next step in the group’s modernization, N KID implemented serverless features executed with AWS Lambda code to automate scheduled tasks and break down its monolithic architecture. This has resulted in tighter integration with distributors and retail partners through shared APIs and container-based services orchestrated with Kubernetes. The onus of backend design work for regular promotions can now be shared with N KID’s partners. With database management offloaded to AWS and serverless architecture in place, N KID engineers have more time to write quality code and build new features that bridge the online and offline N KID experience.

SQL Server is a relational database management system developed by Microsoft. Amazon RDS for SQL Server makes it easy to set up, operate, and scale SQL Server deployments in the cloud.
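As a sketch of the pattern described above, a scheduled task carved out of a monolith becomes a small AWS Lambda handler, typically invoked by an Amazon EventBridge schedule rule. The handler, payload fields, and promotion logic below are hypothetical, not N KID’s actual code.

```python
import json

# Hypothetical sketch of a scheduled task extracted from a monolith into
# AWS Lambda. An EventBridge rule (e.g., cron(0 16 * * ? *)) would invoke
# this handler nightly; names and payload fields are illustrative only.

def handler(event, context):
    """Expire yesterday's promotion codes (illustrative logic)."""
    detail = event.get("detail", {})
    promo_ids = detail.get("promo_ids", [])
    expired = [p for p in promo_ids if p.startswith("PROMO-")]
    # A real deployment would update a database (e.g., Amazon RDS) and
    # notify partner APIs; here we just report what would be expired.
    return {
        "statusCode": 200,
        "body": json.dumps({"expired": expired, "count": len(expired)}),
    }

# Local invocation with a sample EventBridge-style event:
result = handler({"detail": {"promo_ids": ["PROMO-1", "X-2", "PROMO-3"]}}, None)
print(result["body"])
```

Because each task is a self-contained function behind a shared API, partners can plug into the same workflow without touching the rest of the stack.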
An example of a new feature that N KID is launching is crossover holiday promotions, where tiNiWorld visitors can enjoy discounts on the group’s ecommerce sites. “Being on AWS allows us to execute new ideas quickly, and marketing promotions can be developed and executed three times faster as a result,” Khoa says.

After migrating its website and customer-facing assets to the cloud, N KID began modernizing its backend. Databases were the first in line for an upgrade. N KID was prompted to use Amazon Relational Database Service (Amazon RDS) for SQL Server by the departure of one of its database administrators (DBAs), and the switch to a managed database service has further reduced maintenance overhead.

N KID is also using AWS Step Functions to visualize workflows and better target the source of any issues that arise during promotions. As a recent example, N KID sent coupon codes to members’ phones and emails but noticed that several members didn’t receive them. Engineers were able to easily trace and repair the errors.
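To illustrate the kind of workflow described above, a coupon-delivery flow in Amazon States Language might chain an SMS step and an email step, with Catch transitions so failed deliveries surface as a distinct state in the Step Functions console. This is a simplified, assumed example; the state names, Lambda ARNs, and error handling are illustrative, not N KID’s actual state machine.

```python
import json

# Hypothetical Amazon States Language (ASL) definition for a coupon-delivery
# workflow. States, resources, and retry policy are illustrative assumptions.
definition = {
    "Comment": "Send coupon codes by SMS and email, surfacing failures",
    "StartAt": "SendSms",
    "States": {
        "SendSms": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:send-sms",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "RecordFailure"}],
            "Next": "SendEmail",
        },
        "SendEmail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:send-email",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "RecordFailure"}],
            "End": True,
        },
        "RecordFailure": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:record-failure",
            "End": True,
        },
    },
}
# This JSON document is what would be passed to create_state_machine.
print(definition["StartAt"])
```

Executions that land in a failure state are immediately visible in the workflow graph, which is how a team can trace exactly which members never received their coupon codes.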
Naranja X Modernizes Financial Services More Efficiently with SaaS Solutions in AWS Marketplace _ Naranja X Case Study _ AWS.txt
Providing excellent service to millions of customers across more than 180 bank branches and a mobile app does not happen in a single transaction, especially as Naranja X continues along its journey to become a digital banking ecosystem. IT teams rely on quick, cloud-native improvements to support a seamless, cross-channel customer experience and optimize evolving business processes.

After validating a POC for a cloud solution, IT teams have the information they need to build a business case for senior leadership in order to acquire approval and budget for implementation. For example, when internal business requests for data modeling were taking up too much development time, the Naranja X IT team looked for a way to let different departments access analytics centrally and complete data modeling faster. Searching AWS Marketplace led the team to a free trial of Matillion Data Productivity Cloud, an enterprise tool that enables codeless data transformation.

Naranja X business teams could quickly configure the solution via a web-based user interface, test it, and provide feedback. When the solution successfully helped shrink time to insight from weeks to days, Naranja X didn’t have to delay for further contract negotiations. Its teams just continued using the SaaS solution while the company paid monthly in AWS Marketplace, knowing it could easily reassess and revise the agreement as needed in the future.

“Before, many different steps were required to assess solutions for security, functionality, UX, and integration capability with existing tools. In AWS Marketplace, we can simply choose one, opt for a SaaS free trial, and keep moving.”

Outcome | Delivering a More Data-Centric Company Culture
Discovering new software is so efficient in AWS Marketplace that Naranja X IT leaders can hear about a new ISV cloud solution at an AWS Summit, look it up in AWS Marketplace, message team members a link, and even test-drive it immediately to understand the expected return on investment. “There are thousands of amazing cloud solutions from various vendors in AWS Marketplace,” says Pablo Adrián Mlynkiewicz, chief data and analytics officer at Naranja X. “But it’s the confidence it gives me that keeps me coming back.”

Improved Solution | Subtracting Complexity with Consolidated Billing

Giving teams free rein to try different SaaS solutions simultaneously may sound like an invoicing headache, but for Naranja X finance teams, consolidated billing in AWS Marketplace quiets the noise. All AWS Marketplace purchases and agreements can be managed in Naranja X’s AWS account, where managers can quickly reference current spending and future commitments.

Established pricing models in AWS Marketplace don’t have to disrupt or limit existing business relationships. Private offers help Naranja X continue existing relationships with preferred vendors, with the added convenience of procurement in AWS Marketplace. And procurement strategies can always evolve. For example, when its pay-as-you-go model reached maturity with Snowflake, which helps organizations mobilize data with Data Cloud, Naranja X reached out directly to its trusted advisor SEIDOR, which offered contract conditions for procuring Snowflake services in AWS Marketplace that served the company better at the time.

About Naranja X
Naranja X is a FinTech enterprise modernizing banking and credit card services for nearly 5 million customers across Argentina.
The company migrated to Amazon Web Services (AWS) to connect customers with more convenient products, services, and benefits that support financial health.

AWS Marketplace is a curated digital catalog enabling customers to quickly find, test, buy, deploy, and manage the third-party software, data, and professional services necessary to build solutions and run their business. Procurement teams leverage AWS Marketplace to accelerate innovation and enable cloud users to deploy solutions rapidly and securely, while reducing total cost of ownership and improving operational oversight.

Naranja X Modernizes Financial Services More Efficiently with SaaS Solutions in AWS Marketplace

20% faster invoice management

Opportunity | Adding Confidence with Frictionless Deployment

“Previously, it could take months to set up suppliers in our system and conduct POCs. With AWS Marketplace SaaS free trials and flexible pricing options, our teams can test three or four ISV SaaS solutions in days and decide which is the best fit for our needs. This makes the overall procurement process so much faster.”

But Naranja X teams can’t always do it alone. Working with independent software vendors (ISVs) to deploy readymade software-as-a-service (SaaS) solutions can enable Naranja X developers to build and solve at speed. But managers must also protect against accelerating costs or security risks. When leaders at Naranja X procure in AWS Marketplace, they have access to thousands of third-party cloud solutions that can be deployed almost instantly with little to no upfront commitment and are supported by powerful cost-control tools. And Naranja X doesn’t have to leave any preferred ISVs behind.
Previously, when Naranja X needed a cloud solution, it contacted vendors one by one. Those vendors could provide ample documentation and demos, but IT leaders couldn’t say firsthand whether the solutions worked well in their own environment. AWS Marketplace SaaS free trials allow Naranja X teams to get hands-on experience with ISV cloud solutions and create proofs of concept (POCs) before procuring them—without compromising on security.

Flexible payment methods are another important benefit for Naranja X. The pay-as-you-go option can help launch shorter-term projects faster. For example, when Naranja X needed a Palo Alto Networks firewall solution to support data migration between AWS Regions, the procurement process didn’t slow the team down. Naranja X obtained licenses almost immediately and realized the benefits of the solution an estimated 20 percent faster, compared with previous procurements that required emailing back and forth to develop and finalize proposals—all while developers waited for a green light.

Procuring SaaS solutions in AWS Marketplace has not only helped Naranja X get through the procurement process an estimated 20 percent faster, but it has also democratized access to data and delivered other efficiencies across the company. Where it once took weeks to assemble the right team and agree on business priorities to shape data modeling, for example, business teams are now using Matillion Data Productivity Cloud to create data models themselves within 3 days, without asking IT teams for help.

Such efficiencies contribute to building a stronger data culture at Naranja X, meaning more team members are equipped to make data-driven decisions. And as more teams use these solutions, time to value shrinks and the possibilities for new customer solutions grow.
Cristian Deferrari, Head of Infrastructure, Naranja X

Naranja X is an Argentine FinTech working to make people’s financial lives simpler. In addition to issuing over 10 million credit cards, the company has become a platform for access to financial products and services, giving opportunities to millions more people who are left out of the traditional financial system.

Centralizing vendor management in AWS Marketplace also helps Naranja X finance teams conduct better forecasting, because what, how, and when to pay is within the company’s control. Before using AWS Marketplace, Naranja X consistently spent around 50 percent of vendor onboarding time discussing how Argentina’s unpredictable currency exchange rates might dramatically change the dynamics of a contract with an ISV. AWS Marketplace offers a more consistent process around billing and invoicing, so Naranja X can dedicate more time to agile innovation instead of lengthy negotiation.
NBCUniversal Case Study _ Advertising _ AWS.txt
NBCU Uses AWS to Build First-Party Data Solution within Its One Platform Technology Stack

One Platform relies on AWS ephemeral compute solutions such as Amazon EMR, a big data solution for petabyte-scale data processing, interactive analytics, and machine learning in the cloud. NBCU uses machine learning to tailor its 200 jobs to 8,000 servers and cost-efficiency models built around Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload.

Using One Platform, media buyers can plan effectively regardless of whether viewers are consuming content through a streaming service or traditional television. NBCU automates buying through demand-side platforms and through APIs that democratize access.

NBCUniversal’s Solution
In order to more effectively manage big data workloads, NBCU migrated 4 PB of data into its data lake on AWS. “We’ve worked with the AWS team to reformat these data pipelining activities for this big data and synthesize it into our forecasting across linear and digital to help with our planning across these holistic media plans,” says Jeff Pinard, NBCU’s senior vice president of ad technology.

Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Industry Challenge
Within an ever-changing industry landscape, NBCUniversal (NBCU) sought to facilitate how advertisers reach target audiences across television, streaming services, and mobile apps. It needed to unite siloed linear and digital media planning and monetization while maintaining the privacy of viewers’ data. NBCU also needed a more flexible and scalable solution for managing large volumes of data for holistic forecasting.

NBCU used Amazon Web Services (AWS) to build a first-party data solution within One Platform to help manage and process large volumes of data effectively and synthesize it for forecasting across linear and digital. To further efficiency in a pipeline that takes in 15.4 TB of interactive reporting data in near real time, NBCU uses AWS Lambda, a serverless, event-driven compute service that lets companies run code for virtually any type of application or backend service without provisioning or managing servers. The company was also able to pivot its analysis of viewer patterns from outdated rating methods to near-real-time insights from big data.

Benefits of Using AWS
The migration to AWS enabled data workloads to become more flexible, efficient, and cost-effective. NBCU estimates that its migration to AWS will save more than $35 million over 10 years—a 40 percent reduction.

“NBCUniversal’s One Platform is an industry first, combining years of world-class digital and linear expertise with the benefits of big tech: first-party data, precision targeting, automated buying, and outcome-based measurement.”
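As a generic illustration of the near-real-time pattern described above (not NBCU’s actual pipeline), an AWS Lambda function can be triggered by Amazon S3 object-created events and normalize each incoming reporting file for downstream analytics. The bucket and key names below are hypothetical.

```python
import json
import urllib.parse

# Generic sketch of an event-driven reporting-ingest step on AWS Lambda,
# triggered by S3 object-created notifications. Bucket/key names are
# illustrative; this is not NBCU's actual code.

def handler(event, context):
    """Extract bucket/key pairs from an S3 event for downstream processing."""
    records = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        records.append({
            "bucket": s3["bucket"]["name"],
            # S3 event keys are URL-encoded; decode before use.
            "key": urllib.parse.unquote_plus(s3["object"]["key"]),
            "size_bytes": s3["object"].get("size", 0),
        })
    # A real pipeline would transform and load each object (e.g., into a
    # data lake table); here we just return the normalized manifest.
    return {"ingested": records}

sample_event = {"Records": [{"s3": {
    "bucket": {"name": "reporting-landing"},
    "object": {"key": "linear/2021/day%3D01/metrics.parquet", "size": 1024},
}}]}
print(json.dumps(handler(sample_event, None)["ingested"][0]))
```

Because each new object invokes a fresh function, throughput scales with the event rate instead of with a fixed pool of provisioned servers.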
Using its data solution built on AWS, NBCU has also become more agile and reactive to business needs. Data volumes surrounding the Olympic Games, for example, increased to 7 GB for the Tokyo games in 2021. With the AWS data infrastructure in place, NBCU was able to scale and meet the needs of its customers without experiencing latency. NBCU also achieved near-real-time reporting and delivery analysis, helping the business manage or redirect buying patterns quickly and analyze its delivery day over day. “We would have incurred a huge cost to be able to get the server power to do that in any on-premises environment,” says Pinard. “We’ll do it on AWS in a very cost-effective way and provide the business near-real-time data that is exponentially increasing year over year.”

Using NBCUniversal’s One Platform, media buyers can plan effectively regardless of whether viewers are consuming content through a streaming service or traditional television. NBCU automates buying through demand-side platforms and through APIs that democratize access, and by building on AWS, NBCU estimates that it will save more than $35 million over 10 years.

About NBCU
With content reaching a billion people monthly, NBCUniversal is a global media company that includes broadcast and streaming channels, cable television, theme parks, and a movie studio.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
NeuroPro Case Study.txt
NeuroPro is Changing the Way Brain-Related Diseases are Diagnosed Using AWS

NeuroPro is a Swiss-based digital health solutions company that aims to solve data challenges in healthcare around diagnosing and treating brain-related diseases. Its VMLpro platform provides physicians with access to the data and tools they need to diagnose patients quickly and accurately.

NeuroPro aims to improve treatments and outcomes for patients with brain diseases by reducing misdiagnoses. One reason for misdiagnoses is siloed, static, and incomplete data sources, which make it difficult for doctors to access the information they need to make quick and accurate diagnoses. Due to a skills shortage, the level of specialist knowledge needed to draw conclusions from the data is also not always a given. VMLpro uses Amazon Web Services (AWS) to process large volumes of patient data in real time and facilitates collaboration among healthcare professionals, who can connect via the platform to get a second opinion when they need it most.

The company’s real-time collaboration and remote diagnostics platform helps any healthcare provider quickly and easily access cloud-based resources to get a full picture of a patient’s brain function. Hospitals have reported that the VMLpro platform has reduced the time it takes to share medical data and make a diagnosis from weeks to minutes.

Compliance with regulations in different territories

When doctors need a second opinion, VMLpro supports collaboration among physicians located across the globe. They can quickly and easily collaborate, accessing multiple data sources and files, to come to a solution.
“With VMLpro, a doctor in Switzerland can quickly communicate with an expert in Australia, who is immediately able to see a live picture of a patient’s journey,” says Dr. El-Imad. “That means physicians have extra support, confidence, and guidance in their decision making.”

Using AWS, the company encrypts electroencephalogram (EEG), magnetic resonance imaging (MRI), and computed tomography (CT) scan datasets before storing them in the cloud. Because encrypting large volumes of data is resource intensive, NeuroPro uses AWS on-demand compute power to perform these tasks quickly and cost efficiently, which means each hospital is not limited by its own resources to protect its data.

Using AWS, the NeuroPro team has the time to focus on customer experience and innovation, because it manages infrastructure maintenance with just one full-time employee instead of the four it would take without AWS managed services. The team saves time and effort by automating tasks such as backup and recovery, queue management, lifecycle management, and system monitoring.

Manages infrastructure maintenance with one full-time staff member instead of four

About NeuroPro
NeuroPro, a Swiss-based digital health company, has created the first cloud-based collaboration platform for remote diagnostics of complex neurological cases using Amazon Web Services.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and more than 100 AWS services.
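Encrypting large scan files before upload typically follows the envelope pattern: request a one-time data key from AWS KMS, encrypt the file locally with it, and store only the KMS-wrapped copy of the key alongside the ciphertext. The sketch below shows how such a `GenerateDataKey` request might be assembled; the key alias and encryption-context fields are illustrative assumptions, not NeuroPro’s actual configuration.

```python
# Hypothetical sketch of how a platform like VMLpro might request a data key
# from AWS KMS for envelope-encrypting a large EEG/MRI/CT file before upload.
# The alias and encryption-context fields are illustrative assumptions.

def data_key_request(key_alias: str, study_id: str) -> dict:
    """Build parameters for kms:GenerateDataKey (boto3: generate_data_key)."""
    return {
        "KeyId": key_alias,              # customer-managed key alias
        "KeySpec": "AES_256",            # 256-bit data key for client-side AES
        "EncryptionContext": {           # bound to the ciphertext; the same
            "study": study_id,           # context must be supplied to decrypt
            "purpose": "medical-imaging",
        },
    }

params = data_key_request("alias/hypothetical-vmlpro-data", "study-001")
# With boto3 (not executed here):
#   resp = boto3.client("kms").generate_data_key(**params)
#   resp["Plaintext"] encrypts the file locally and is then discarded;
#   only resp["CiphertextBlob"] is stored with the encrypted object.
print(params["KeySpec"])
```

Because only the wrapped key is persisted, a stored scan is unreadable without a fresh KMS decrypt call, which is also where access can be audited per study.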
“Maintaining our infrastructure is like magic,” says Dr. Badran. “Without AWS, it would take at least four people, but we can do it with one full-time member of staff. This means we have more resources to engage with our customers and make sure our platform is intuitive for them.”

NeuroPro is confident that AWS will continue to support its growth and innovation as it looks to offer its solutions to more physicians across the world. “We want to help healthcare providers deliver the best possible treatment to brain disease patients, and those with other complex medical conditions,” says Dr. El-Imad. “Using AWS, we have the flexibility and power to achieve this.”

NeuroPro’s VMLpro helps any healthcare provider quickly and easily access cloud-based resources to get a full picture of a patient’s brain function. Running diagnostic algorithms on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for any workload, increases the speed and accuracy of diagnoses. Hospitals have reported that the VMLpro platform has reduced the time it takes to share medical data and make a diagnosis from weeks to minutes. Close collaboration with specialized partners remotely means that referrals can be made more quickly and in a more targeted manner. In many cases, reliable diagnosis and prompt treatment are crucial, and the effective sharing of findings enables timely initiation of necessary treatments.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

To diagnose brain diseases, experts must analyze large volumes of patient health data, which is extracted from a variety of hospital monitoring equipment. Much of this must happen in real time. “We’re talking about terabytes of data,” says Dr.
Jamil El-Imad, chief scientist at NeuroPro. “It’s simply not possible for some organizations to deal with data on this scale without our platform.”

Swiss-based NeuroPro aims to improve the diagnosis of brain diseases and other medical conditions with a digital solution that provides simple access to the tools and data that physicians need. Aimed at busy physicians who don’t have the time to learn complex new systems, the company’s solutions must be easy to learn and use. This was the premise for NeuroPro’s Virtual Mobile Laboratory for Professionals (VMLpro).

Diagnosing Brain Disease in Hours, Not Days

6 days to 6 hours
Secure patient data to enable global health expert collaboration with peace of mind

Speeding Up Time to Innovation
With data stored securely on AWS, NeuroPro can be confident about compliance with data regulations in different territories—it can share data with other institutions around the world with confidence. This means that any clinic or hospital—big or small—has remote access to the information they need to provide correct diagnoses for patients faster. This can ultimately improve care, outcomes, and doctor expertise.

Securing Medical Data Using AWS
Using the content delivery network Amazon CloudFront, patient videos are transcoded and served over secure channels to physicians, who can use them to help diagnose patients. “Running on AWS, our platform has the flexibility to support any data format,” says Dr. Teresa Sollfrank, chief product officer at NeuroPro.
“Doctors can upload the relevant data and set permissions to streamline and speed up collaboration right from their desks.”

For patients with brain diseases such as epilepsy and multiple sclerosis, misdiagnosis can mean years of taking the wrong medication and lead to other serious health problems. This is a sad reality for many. For example, some researchers suggest that one in three epilepsy patients is misdiagnosed.

Securing and protecting data is a top priority for NeuroPro, which must meet the highest Advanced Encryption Standard (AES) specifications and be compliant with regional regulations such as the EU General Data Protection Regulation (GDPR). “We’re dealing with medical data, which is highly sensitive, so it’s essential that it’s secure. It assures our customers that patient data is safe while being shared on the platform,” says Dr. Abbas Badran, head of development. “Additionally, AWS offers high levels of encryption for all stored data.”

Breaking Down Healthcare Silos to Enable Global Expert Collaboration
The platform also brings together other necessary diagnostic resources from caregivers at all stages of the patient journey, including test results, clinician notes, and video files. On VMLpro, even large files, such as videos of patient symptoms, are easy to access and share from any location.

Reduces healthcare providers’ diagnosis times for brain disease
NodeReal case study.txt
NodeReal is a blockchain infrastructure and services provider that offers one-stop blockchain infrastructure services, including full-fledged node services, blockchain as a service, and blockchain application tools and application programming interfaces (APIs). Founded in 2021, NodeReal onboarded around 10,000 developers within its first 12 months, including projects such as BNB Chain, Aptos, CoinMarketCap, CertiK, Galxe, Trust Wallet, and ApeSwap.

“Thanks to the high-performance global network and cloud services from AWS, NodeReal has achieved its vision ‘Make Your Web3 Real’ and built the fastest and most reliable blockchain infrastructure for Web3 builders across the world,” says Jimmy Zhao, technology solutions director at NodeReal.

NodeReal will next introduce a one-stop blockchain platform to help its customers build their own chains, as well as Layer-2 blockchains to support high-speed transactions. The company will also aim to build an open and community-driven API marketplace for Web3 developers.

NodeReal uses AWS Graviton2-based Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS Managed Services for better price performance. NodeReal was also able to save money and resources by building on the AWS Cloud, as it did not have to secure physical servers and storage.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
NodeReal Provides Scalable Infrastructure Solutions With Strong Price Performance for Web3 Development

Striking a Balance Between Performance, Stability, and Scalability

NodeReal deploys Amazon Aurora to automatically scale its database across multiple regions to support its global customers, and Amazon Elastic Kubernetes Service (Amazon EKS) to manage its container-based applications running on the Kubernetes open-source orchestration system. Combined with AWS Global Accelerator, which improves global application availability and performance, NodeReal maintains consistently low latencies for its customers and end users. On AWS, NodeReal’s customers can deliver faster blockchain transactions for their end users with more responsive applications.

Amazon EKS is a managed Kubernetes service for running Kubernetes in the AWS Cloud and in on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.

Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.

About NodeReal: NodeReal is a blockchain infrastructure and services provider that offers full-fledged node services as well as blockchain application tools and APIs. Founded in 2021, NodeReal registered over 10,000 users and 1,000 weekly active users within its first 12 months, including BNB Chain, ApeSwap, Project Galaxy, and Trust Wallet.
Solution Overview

By running on AWS, NodeReal provides its customers with a high-performing, stable, and scalable environment to build Web3-based applications. This has helped grow its customer base to over 10,000 worldwide within the first 12 months of its founding in September 2021.

● 700,000 QPS: the number of queries NodeReal can handle per second
● 26 ms: the average latency achieved by deploying on the AWS Cloud
● 700,000 per second: the API call rate the company can scale to support within 30 minutes

NodeReal is fully built and deployed on Amazon Web Services (AWS), which helps maintain the performance, stability, and scalability of its blockchain infrastructure. The company now handles 700,000 API requests per second from its Web3 customers. Furthermore, it supports 70 percent of all public API requests for the BNB Chain, making it the leading blockchain infrastructure provider for Web3 companies on the BNB Chain.

Creating a Conducive Environment for Web3 Development

Most of NodeReal’s Web3 customers develop decentralized, throughput-intensive applications for end users, such as non-fungible tokens (NFTs), decentralized finance (DeFi) wallets, and play-to-earn blockchain games (GameFi). One such customer is Trust Wallet, a multi-chain universal crypto wallet with over 5,000,000 weekly active users.

Find out how NodeReal came to support about 70 percent of all public Remote Procedure Calls for BNB Chain, a Layer-1 blockchain supporting leading cryptocurrency exchanges and other Web3 applications, within 12 months of its founding.
AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.

Outcome

As such, NodeReal built its blockchain infrastructure on the AWS Cloud, which can scale to deliver robust performance and reliability for high-throughput requirements.
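To illustrate the kind of traffic NodeReal serves, a Web3 application typically queries a node provider over JSON-RPC. The sketch below builds such a request with the Python standard library; the endpoint URL is a placeholder rather than a real NodeReal address, and the actual HTTP call is left commented out because it needs a live, authenticated endpoint.

```python
import json
import urllib.request

def make_rpc_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body of the kind a Web3 app
    sends to a node provider."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    }

# Ask the node for the latest block number on a chain.
payload = make_rpc_request("eth_blockNumber")
body = json.dumps(payload).encode("utf-8")

# Hypothetical endpoint; a real provider issues a per-project URL.
endpoint = "https://example-node-provider.invalid/v1/my-api-key"
req = urllib.request.Request(
    endpoint, data=body, headers={"Content-Type": "application/json"}
)
# response = urllib.request.urlopen(req)  # requires a live endpoint
print(json.dumps(payload))
```

At the request rates cited above, the interesting engineering is not in the payload but in serving hundreds of thousands of such calls per second at low latency, which is where the managed scaling described in this section comes in.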
Novo Nordisk Uses ML for Computer Vision to Optimize Pharmaceutical Manufacturing on AWS _ Novo Nordisk Case Study _ AWS.txt
Amazon SageMaker helps you build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

● Automates quality-assurance tasks
● Improves time to market
● Scales to support other quality-assurance use cases
● Deploys ML models at scale to different edge devices
● Monitors ML models in production

Novo Nordisk has successfully built an automated pipeline to deploy ML models at scale to different edge devices. The company is turning the cartridge-counting proof of concept into a production-grade solution and will continue to build the proof of concept for its agar plate use case. These solutions will significantly impact Novo Nordisk’s efficiency, improving its time to market and reducing manual labor so that its team can focus on innovation.

About Novo Nordisk

Novo Nordisk A/S is a multinational pharmaceutical company based in Denmark. Founded in 1923, the organization makes and markets pharmaceutical products with a focus on diabetes care and hormone therapy. For the past 100 years, Novo Nordisk has developed innovative products to treat chronic diseases like diabetes, endocrine disorders, and rare blood conditions. More than 34 million patients use its diabetes-care products globally, and the company constantly seeks new digital technologies to optimize its processes for the benefit of its customers. It strives to get medicines to the people who need them at a faster pace and lower price while ensuring compliance.

Solution | Automating Key Quality-Assurance Tasks with ML and Computer Vision
On AWS, Novo Nordisk created an automated ML pipeline that covers all the steps involved in the ML development process, from deployment to monitoring, while optimizing for scalability, customization, cost, and traceability. It used Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery service for ML, to create each specific step in the pipeline and combine the steps into a complete, interconnected solution. The pipeline used prelabeled images stored in Amazon Simple Storage Service (Amazon S3), an industry-leading object storage service. It then resizes, labels, processes, and splits the images into three datasets: training, validation, and testing.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

“Through our engagement with the AWS team, we proved to ourselves and our company that we could take a computer-vision use case, put it into the cloud, and build a working pipeline,” says Jonas Vejlgård Kristensen, solutions architect at Novo Nordisk. “And we can do it in a fast and scalable way.”

Overview

Novo Nordisk A/S (Novo Nordisk) supplies nearly 50 percent of the world’s insulin. Digital technologies are critical to optimizing the company’s manufacturing operations, enhancing quality, improving yield, and decreasing costs. To this end, Novo Nordisk is using computer vision combined with machine learning (ML) to automate key tasks on manufacturing lines, like cartridge counting and anomaly detection for agar plates, to reduce manual labor.

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale.

AWS IoT Greengrass is an open-source edge runtime and cloud service for building, deploying, and managing device software.
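The three-way split the pipeline performs on prelabeled images can be sketched in plain Python. This is an illustrative stand-in, not Novo Nordisk’s SageMaker Pipelines code; the split ratios and file names are assumptions.

```python
import random

def split_dataset(image_keys, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a list of prelabeled image keys (e.g., Amazon S3 object
    keys) and split them into training, validation, and test sets,
    mirroring the pipeline's three-way split."""
    keys = list(image_keys)
    random.Random(seed).shuffle(keys)
    n_train = int(len(keys) * train_frac)
    n_val = int(len(keys) * val_frac)
    return {
        "training": keys[:n_train],
        "validation": keys[n_train:n_train + n_val],
        "testing": keys[n_train + n_val:],
    }

# Hypothetical object keys for labeled cartridge-box images.
images = [f"labeled/cartridge_{i:04d}.png" for i in range(100)]
datasets = split_dataset(images)
print({name: len(keys) for name, keys in datasets.items()})
# → {'training': 70, 'validation': 15, 'testing': 15}
```

Fixing the shuffle seed makes the split reproducible, which matters for the traceability requirements mentioned above.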
On Amazon Web Services (AWS), Novo Nordisk has created a prototyping solution that effectively trains, deploys, and monitors its ML models and manages the datasets resulting from the pipelines. Alongside the AWS team, the company has built a workflow in which a robotic arm places a box full of drug cartridges on a platform; a camera rig takes images of the box; ML inference is performed on an edge device; and the final results are displayed on a dashboard powered by Amazon QuickSight, which provides unified business intelligence at hyperscale.

After the data is processed, the pipeline passes it to either model training, where it is trained with predefined parameters, or model tuning, where it is run through different parameters to find the optimal combination. Then, Novo Nordisk uses the test dataset to generate an evaluation report and determine whether the model is ready for deployment. After registering the model, it compiles and packages the model for deployment using Amazon SageMaker Edge, which makes it simple to operate ML models running on edge devices. The company also uses Amazon SageMaker Edge Manager, which provides model management for edge devices, to perform ML inference on each image. Next, Novo Nordisk uses AWS IoT Greengrass, an open-source edge runtime and cloud service, to deploy the ML model and serve as the core software for the edge device. “We use AWS services to optimize our ML model for a specific edge device,” says Codina.
“When we have the model up and running, every time that we make a prediction, we process the data and send it to the cloud to perform model monitoring.” Novo Nordisk monitors its ML models in production using Amazon QuickSight and Amazon Timestream, a fast, scalable, and serverless time-series database. With these monitoring capabilities, it can detect anomalies and identify inaccurate results. For example, if a hand or object is covering a box of cartridges, Novo Nordisk can find the issue on an Amazon QuickSight dashboard, review the analyzed image, and correct the error. Moreover, the company has complete traceability of the ML model in production, a necessity in the highly regulated pharmaceutical industry.

After building out the pipeline to run its cartridge-counting model, Novo Nordisk wanted to see whether it could repurpose the pipeline for a different use case to prove scalability. During the last 2 weeks of the prototyping engagement, the company configured the pipeline to detect bacteria growth on agar plates, thousands of which are manually analyzed every day. “We didn’t need to change much,” says Jonas Vejlgård Kristensen, solutions architect at Novo Nordisk. “We simply took a new dataset and used a different ML model. Then, we employed an anomaly-detection approach and adjusted the camera settings.”

Opportunity | Using Amazon SageMaker Pipelines to Deploy ML Models at Scale

Novo Nordisk had explored ML to automate time-consuming, manual tasks, but many of its processes were disconnected and difficult to scale. “We had all the parts of the ML-development process running locally on individual machines, from data processing to model training and even the manual transfer of the model to the edge devices,” says Carlos Ribera Codina, ML engineer at Novo Nordisk.
“They were not interconnected, so this process could become quite difficult, especially when we had to deploy the models at scale and maintain them in production.” The team chose to migrate because it could use AWS services to create a pipeline that would run all these models automatically and interconnect them to expedite the development process.

Novo Nordisk entered into a 6-week prototyping engagement with the AWS team to train and deploy an ML model that uses computer vision to count the number of drug cartridges in a box, a task that was previously performed manually and was time and resource intensive. The new process involved capturing images of cartridge boxes from above, using pretrained models to detect cartridges, and counting the number of locations where a cartridge is identified in an image.
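The monitoring check described earlier, flagging predictions that deviate from the expected cartridge count (for instance, when a hand covers part of the box), can be sketched as a simple rule. The field names, counts, and threshold below are assumptions for illustration, not Novo Nordisk’s production logic.

```python
def flag_anomalies(predictions, expected_count):
    """Flag inference results whose cartridge count deviates from the
    expected box size, the kind of anomaly a monitoring dashboard
    would surface for review."""
    anomalies = []
    for pred in predictions:
        if pred["cartridge_count"] != expected_count:
            anomalies.append({
                "image_id": pred["image_id"],
                "count": pred["cartridge_count"],
                "reason": "count deviates from expected box size",
            })
    return anomalies

# Hypothetical inference results from the edge device.
results = [
    {"image_id": "img-001", "cartridge_count": 60},
    {"image_id": "img-002", "cartridge_count": 57},  # e.g., hand in frame
    {"image_id": "img-003", "cartridge_count": 60},
]
flagged = flag_anomalies(results, expected_count=60)
print(flagged)
```

In the case study, each prediction is also sent to the cloud, so a flagged result can be traced back to the analyzed image on the dashboard.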
NTT DOCOMO builds a new data analysis platform for 9000 workers with AWS attracting 13 times more users and invigorating data use _ NTT Docomo Case Study _ AWS.txt
However, as cloudification progressed, cost consideration became even more important. With on-premises environments, users can operate data platforms provided by the Information Systems department without worrying about cost. On the cloud, the cost rises with more users and longer usage time. To alleviate the issue and raise cost awareness in IT and user departments, NTT DOCOMO ran a FinHack AWS Cloud Financial Management workshop.

NTT DOCOMO’s use of data has burgeoned since the new service opened. And with the platform now firmly entrenched in user divisions, the enterprise is planning more ways to incorporate data into the business and expand its range of use, including a verification sandbox that will make it easier for users to try new tools. The NTT DOCOMO Group is also aiming to expand the service to its new subsidiary, NTT Communications.

The first initiative was providing separate analysis environments focused on users. The Data Analysis Lab provides functions like machine learning and visualization environments, which companies can pay for with their own AWS accounts. NTT DOCOMO predicts that use of these analytics environments will expand and benefit businesses. The company has also bolstered in-house training so user departments can build their own analytics environments, and it provides an a la carte service allowing users to select and combine AWS tools as needed, with the option of combining these with tools that the Information Systems department provides.

About NTT DOCOMO, INC.

NTT DOCOMO provides services for telecommunications and smart lifestyles as the parent company of the NTT DOCOMO Group. As of the end of fiscal year 2021, the enterprise serviced 84 million mobile phone users and 89 million d Point Club subscribers.
Opportunity | Cloudifying the data platform to grow and ingrain data-driven management

The second initiative was data catalogs. Data catalogs are itemized forms summarizing the location and contents of data. The enterprise had previously used Excel to create similar resources, but this presented serious challenges: employees had to decipher scattered information. Creating data catalogs enables workers to check unified sets of data when needed.

However, its on-premises data platform prevented quick infrastructure scaling and use of the latest tools. As NTT DOCOMO added services to its lineup, data became increasingly decentralized and difficult for company departments to use properly.

NTT DOCOMO started on July 1, 1992, with the NTT Group’s NTT Communications and NTT Comware becoming subsidiaries in May 2022. These three companies work together as the NTT DOCOMO Group to expand business, strengthen the competitiveness of its network, create and develop services, and promote digital transformation.

AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet.

Solution | Accelerating business use with distinct user-based analytics environments

Shifting to the cloud saw an explosion in departmental use of the data platform. User numbers for the analytics environment increased 10-fold within a year of the July 2021 release.
The number of users of accounts paid for by the Information Systems department rose 13-fold, and data catalog monthly active user numbers increased by a factor of 2.4.

NTT DOCOMO began building its data infrastructure in January 2021. The platform was completed and available to users in July. Alongside the cloud shift, the Information Systems department challenged itself with two transformational initiatives.

Overview

Mobile telecommunications carrier NTT DOCOMO migrated its on-premises data platform to Amazon Web Services (AWS) in just seven months. The company switched from a one-size-fits-all analytics platform to environments tailored to the needs of individual organizations while establishing data catalogs for easier analytics. The move saw analytics accounts multiply by 13 times in under a year, 10 times more analysis environment builds, and a boost in internal data user numbers.

● 7 months: new data platform construction period, from on premises to cloud
● 13x: increase in user accounts
● 10x: enhanced analytics environments
● 2.4x: increase in data catalog monthly active users

Outcome | Changing mindset as a provider for more user-focused development

“Moving to the cloud to evolve our data use, changing our IT staff’s mindset, and increasing our cost awareness will generate more satisfaction for analytics environment users and increase customer value,” says Syusaku Ijiri, general manager of the Information Systems Department. After evaluating several cloud services, NTT DOCOMO selected AWS for its popularity among the DOCOMO Group, its low learning curve, its ease of linking between systems, and its comprehensive service lineup.
According to Kouji Yamamoto, an assistant manager in the Data Platform Group, “Our concept was to use the new data platform for environments where users could choose the right tools, instead of solutions provided by the Information Systems department. AWS was superior to other services because it ensured security while enabling us to build a reliable environment with plenty of flexibility.”

“Shared cost awareness lets our IT and user departments easily reach mutual understanding,” says Hikage. “After operating the platform for a year, the cost is 30 percent less than at its peak, thanks to running the FinHack event with Information Systems departments and system integrators.”

The enterprise was also highly impressed with AWS’s comprehensive cloud skills training, friendly support from dedicated AWS teams, and cloud economics and cost management tools. “AWS was the perfect partner to guide us in our data expansion,” says Hikage.

“We’ve been able to cut service delivery times from six months on premises to about three months on the cloud, and our business speed is steadily accelerating,” says Honoka Kudo of the Data Platform Group. “Shifting to the cloud eliminated the need to come to the office, and working from home during the COVID-19 pandemic was effortless. User departments can directly refer to data catalogs and build analytics environments with plenty of flexibility. As internal use of the new data platform grew, we received requests from multiple departments to expand functionality, and they can now use their preferred analytics tools more freely.
Because we can pay for AWS accounts for any project wanting to employ the new platform and users can visualize expenses, cost awareness has increased throughout the company.”

According to Jun Kobayashi, a manager of the Data Platform Group, “Shifting to the cloud means we don’t have to build servers based on demand forecasts as with on-premises solutions, and it’s easier to scale up and out. We can control costs by raising our own awareness. As the Information Systems department providing analytics environments to user departments, we had become accustomed to scratch development, but we’re now more conscious of system-based fit-to-standard. We’ve realized the importance of developing from the perspective of data platform users and making them familiar with information through data catalogs and Q&A sites.”

Says Hikage, “We’ll expand the new platform to more departments, quantify the relative value of data, and select and collect data needing refinement for a better-managed Group.”

“This step allowed us to cut the time our users require to decipher catalogs and accumulate knowledge,” says Yamamoto. “Data catalogs organize and visualize our Information Systems department’s knowledge and information about mission-critical systems.”

To solve this, the company shifted its data platform to the cloud. “We decided to transform our organization and shift to the cloud to obtain a clearer, data-driven understanding of our customers and offer superior services,” says Hirotaka Hikage, senior manager of the Data Platform Group, Information Systems Department.
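The data catalog initiative described in this section, an itemized record of where each dataset lives and what it contains in place of scattered Excel sheets, can be illustrated with a minimal sketch. The dataset names, locations, and fields below are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One itemized record: where a dataset lives and what it holds."""
    name: str
    location: str          # e.g., an S3 path or a database table
    description: str
    owner: str
    columns: list = field(default_factory=list)

# Hypothetical entries of the kind a telecom data catalog might hold.
catalog = {
    entry.name: entry
    for entry in [
        CatalogEntry(
            name="subscriber_usage",
            location="s3://example-bucket/usage/",
            description="Daily mobile data usage per subscriber",
            owner="Information Systems",
            columns=["subscriber_id", "date", "bytes_used"],
        ),
        CatalogEntry(
            name="dpoint_activity",
            location="s3://example-bucket/dpoint/",
            description="d Point Club member activity",
            owner="Marketing",
            columns=["member_id", "event", "timestamp"],
        ),
    ]
}

# A user department looks up a dataset directly instead of
# deciphering scattered spreadsheets.
entry = catalog["subscriber_usage"]
print(entry.location, entry.owner)
```

The point of the structure is exactly what Yamamoto describes: unifying location, ownership, and contents in one place so users no longer have to accumulate that knowledge themselves.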
Numerix Scales HPC Workloads for Price and Risk Modeling Using AWS Batch _ Numerix Case Study _ AWS.txt
The Numerix team found a way to avoid these costs and increase efficiency by migrating its HPC analytics solution to Amazon Web Services (AWS) and using AWS Batch, which provides fully managed batch processing at any scale. Now, instead of asking its clients to invest in CPU cores, Numerix can offer access to an environment that is not limited by the amount of hardware on hand. “What AWS has afforded us is like what streaming has done for entertainment,” says Jim Jockle, chief marketing officer at Numerix. “Using AWS, we can run calculations that used to take a month in under 40 minutes, which is near real time for trade and risk management.”

“The cloud has been an inevitable journey for Numerix to provide efficiency and availability,” says Jockle. Numerix began undertaking some software-as-a-service projects in the cloud in 2012. In 2019, the migration to AWS accelerated as engineers started using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity in the cloud, to run its HPC solutions. Numerix started using AWS Batch in 2021 to dynamically provision the optimal quantity and type of compute resources on Amazon EC2. With the new approach, analytics performance has improved by 180 times.

More importantly, by using dynamic resource allocation on AWS, Numerix can meet demanding client constraints more effectively. “Using AWS Batch, we meet service-level agreements of 40 minutes or less on portfolios with tens of thousands of trades,” says Jockle. “That’s absolutely unheard of.” Engineers stage information using Amazon Simple Storage Service (Amazon S3), cloud object storage built to retrieve any amount of data from anywhere. The increased memory and storage capacity on AWS have reduced bottlenecks across the analytics process. Now, Numerix is much better prepared to take on larger portfolios.
Instead of telling clients that they will have to wait several months to purchase, receive, and install servers each time they scale up, Numerix can help them respond to sizing changes in days or hours. “Just being able to adapt quickly is a huge win,” says Humphrey.

Outcome | Reaching Virtually Limitless Scalability at Limited Cost Using AWS

Many of Numerix’s clients have appreciated the transition to a cloud-first mindset. “In the cloud model, clients no longer need a very large IT department to run our HPC solutions,” Humphrey says. Instead of buying more servers every time they scale up, organizations can adapt to sizing changes in the cloud in a matter of hours. Numerix also makes extensive use of Amazon EC2 Spot Instances, which help users run fault-tolerant workloads for up to 90 percent off Amazon EC2 On-Demand pricing. By using Amazon EC2 Spot Instances and serverless technology, Numerix has experienced significant cost savings.

Numerix is eager to transition more of its clients to the cloud and is working to expand its software-as-a-service model as a key delivery and operational framework. “AWS provides such a huge range of services and capabilities,” says Humphrey. Instead of preparing hardware for the worst possible case, clients pay for computing power as they go.

Numerix provides its analytics software to more than 250 global clients, including banks, regulators, and insurance companies. Its extensive mathematical models price deals against a wide variety of market states to simulate the likely effects if stock prices took a tumble. Financial institutions rely on this data to make decisions with billion-dollar implications, and they require the most advanced analytics available.
Further complicating matters, financial markets have been in unprecedented territory since the early days of the COVID-19 pandemic. Trade and risk management information is especially valuable in this time of instability. “We have clients that are doing portfolios of 20,000 trades,” says Jockle. And those portfolios are only growing larger as firms embrace risk analytics in an attempt to shield themselves from vulnerability.

These technical enhancements have a real-world impact. “Our clients are using our risk analytics to avoid billion-dollar losses,” says Jockle. “The introduction of near-real-time analytics with the virtually limitless scalability of AWS has been a real game changer.”

Figure 1: Advanced Analytics Architecture

Opportunity | Using AWS Batch to Increase Analytics Performance for Numerix

Numerix, a financial technology company, needed to find a way to scale its high performance computing (HPC) solution as client portfolios ballooned in size. Its institutional customers require insight into thousands of possible market scenarios to avoid being dangerously vulnerable to market changes. The rapidly increasing complexity of these capital markets meant that risk and pricing models were consuming costly and unwieldy computing resources.
Financial organizations like Numerix and its customers had to invest in expensive on-premises computing infrastructure for HPC.

About Numerix

Founded in 1996, Numerix is a financial technology company headquartered in New York City, with 16 offices in 16 countries. It provides analytics software for more than 250 global clients, including banks, regulators, and insurance companies.

Numerix leaders agree that adopting a cloud-native orchestrator and serverless architecture has been the key to taking advantage of the full elasticity of the cloud. Although Numerix used a lift-and-shift approach in the early stages of the migration, the full migration to a serverless model was a milestone. “The serverless model is exactly what we need so that we don’t have expensive resources running all the time,” says Humphrey. “We submit these workloads to AWS Batch, which orchestrates compute resources by provisioning the right Amazon EC2 instances for the jobs submitted, runs these jobs, and then shuts the instances down when the work is completed, and we’re charged for only the actual seconds of use.” Numerix uses AWS Step Functions, a low-code, visual workflow service for modern applications, to run its serverless capabilities.

AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
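The submit-and-shut-down workflow Humphrey describes maps onto the AWS Batch SubmitJob API. Below is a sketch of the request a pricing workload might send; the job names, queue, and environment variables are hypothetical, and the payload is only constructed here, not submitted.

```python
import json

def build_batch_job_request(portfolio_id, num_scenarios, job_queue, job_definition):
    """Build the parameters for an AWS Batch SubmitJob call that fans a
    risk-pricing run out over managed compute. Names here are invented
    for illustration."""
    return {
        "jobName": f"price-portfolio-{portfolio_id}",
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
        # One child job per scenario batch via an array job.
        "arrayProperties": {"size": num_scenarios},
        "containerOverrides": {
            "environment": [
                {"name": "PORTFOLIO_ID", "value": str(portfolio_id)},
            ]
        },
    }

request = build_batch_job_request(
    portfolio_id=20000,
    num_scenarios=1000,
    job_queue="risk-hpc-queue",
    job_definition="pricing-model:1",
)
# With boto3 this dict would be passed as:
#   boto3.client("batch").submit_job(**request)
print(json.dumps(request, indent=2))
```

AWS Batch then provisions instances for the array of jobs and terminates them when the work completes, which is the pay-per-second behavior the quote highlights.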
● 180x improvement in analytics performance
● Unlocked near-real-time analytics
● Scaled financial analytics
● Decreased bottlenecks in analytics
● Enhanced risk management

The complexity of this increase in trading and analytics volume is an immense mathematical challenge that requires a lot of compute power. Bill Humphrey, chief technology officer at Numerix, says, “For clients to run our solutions on premises, we have to tell them, ‘This is how many CPU cores you need to have in your data center when you install our software and run it every day. And you’ll have to buy even more next year because your portfolio is growing.’” That startup cost has been a barrier to the adoption of Numerix tools.

AWS Batch lets developers, scientists, and engineers efficiently run hundreds of thousands of batch and ML computing jobs while optimizing compute resources, so you can focus on analyzing results and solving problems.

Solution | Reaching Virtually Limitless Scalability at Limited Cost Using AWS
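The Spot Instance economics mentioned in this case study can be sketched with back-of-envelope arithmetic. The hourly price and run length below are made-up figures, and the 90 percent figure is the "up to" discount the case study cites, not a guaranteed rate.

```python
def spot_cost(on_demand_hourly, hours, discount=0.90):
    """Rough cost of a fault-tolerant batch run on Spot capacity versus
    On-Demand, at a given fractional discount. Inputs are illustrative."""
    on_demand_total = on_demand_hourly * hours
    spot_total = on_demand_total * (1 - discount)
    return on_demand_total, spot_total

# Hypothetical: 500 instance-hours at $2.00/hour On-Demand.
on_demand, spot = spot_cost(on_demand_hourly=2.00, hours=500)
print(f"On-Demand: ${on_demand:.2f}, Spot: ${spot:.2f}")
```

In practice Spot prices fluctuate and instances can be reclaimed, which is why this approach suits fault-tolerant batch workloads like the pricing runs described above.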
Oportun Increases the Accuracy of Sensitive-Data Discovery by 95 Using Amazon Macie _ Oportun Case Study _ AWS.txt
● 95% accuracy of discovering sensitive data
● 100x improvement in speed-to-discovery
● 99% reduction in cost

To accomplish its security goals—in addition to satisfying regulatory mandates and member demands for privacy—Oportun needed a solution that would not burden its security team with false positives as it scanned data. Other solutions Oportun tried required significant technology investments and still failed to achieve accuracy goals. “Accuracy is key,” says Carlos. “And we’ve found that Amazon Macie is 95 percent accurate for the critical attributes that we scan for, including social security numbers and tax identification numbers.”

Solution | Communicating Business Impact Using Amazon QuickSight

Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats.

Oportun is continually developing innovative data protection solutions as it seeks to remain ahead of both threats and competitors. Next, the company will use AWS capabilities to complement its current pipeline and add features, like observability and alerting, to improve risk monitoring and response. In addition to developing new tools, the team will drive optimization to reduce its total cost of ownership. Due to the rapidly changing nature of member data, Oportun’s data security efforts have far-reaching effects. “When we started using Amazon Macie, scanning time went from days or weeks to hours, even hitting 30 minutes for smaller Amazon S3 buckets under 1 TB,” says Carlos.
“And we saw that these findings were valid.”

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Learn how fintech Oportun, a neobank lender, achieved 95 percent data discovery accuracy using Amazon Macie.

It’s vital that Oportun’s technical teams can articulate the financial impact of risk issues to a nontechnical audience. To that end, the company uses a combination of AWS services to identify, assess, and communicate risk across the enterprise. Oportun uses Amazon Macie to identify sensitive data, and then uses Amazon Athena, an interactive query service that makes it simple to analyze data in Amazon S3 using standard SQL, to evaluate it. “We scan Amazon S3 buckets with Amazon Macie, send the results back to Amazon S3, and use Amazon Athena to read that result,” says Cruz. “Then, we use internal tools to identify unique records across many files to calculate data risk.”

Within its new solution, Oportun makes heavy use of Amazon Macie automated data discovery to identify Amazon S3 buckets with potential PII in a cost-effective and scalable way. With automated data discovery, Oportun doesn’t have to scan every single Amazon S3 bucket completely. Instead, it can identify and prioritize which Amazon S3 buckets must be remediated to accelerate risk reduction. The data security organization works with a heat map of priority buckets to remediate. Based on the heat map, the data security team engages other teams in agile sprints to rapidly remediate potentially risky data. Increased visibility into exposure has made it easier to align the organization around data security.
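As a toy illustration of the pipeline described above (sqlite3 stands in for Amazon Athena here, and the bucket names and finding counts are invented, not Oportun's data), findings written back to storage can be aggregated per bucket to drive a remediation heat map:

```python
import sqlite3

# In the real pipeline, Macie findings land in Amazon S3 and Athena
# queries them with standard SQL; this in-memory table is a stand-in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE findings (bucket TEXT, pii_type TEXT)")
conn.executemany(
    "INSERT INTO findings VALUES (?, ?)",
    [("logs-bucket", "SSN"), ("logs-bucket", "TAX_ID"),
     ("exports-bucket", "SSN"), ("logs-bucket", "SSN")],
)

# Rank buckets by number of sensitive findings -- the basis of a heat map
# that tells the team which buckets to remediate first.
heat_map = conn.execute(
    "SELECT bucket, COUNT(*) AS findings FROM findings "
    "GROUP BY bucket ORDER BY findings DESC"
).fetchall()
print(heat_map)  # most at-risk bucket first
```

The same GROUP BY query runs unchanged against Athena over the Macie results prefix; only the connection differs.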
The team also uses Amazon QuickSight, a service that powers data-driven organizations with unified business intelligence at hyperscale, to make it simple for everyone in the organization to understand the data.

The company is comfortable leading the way with new ideas. “We’re happy to collaborate with the AWS team on proof-of-concept work for new technologies,” says Carlos. “We want to do more, and using Amazon Macie is making that simpler.”

2022

Overview

Oportun is a mission-driven organization that provides responsible and affordable financial services, at scale, to millions of people in the United States who are often poorly served by traditional financial services companies. At the core of its advanced credit decisioning engine is Oportun’s ability to process and interpret large volumes of consumer data, including PII, from disparate sources. The security and integrity of that data are absolutely essential. Oportun’s data security organization spends a great deal of time and money working cross-functionally with other teams to raise awareness around PII data security and remediate issues when they find them. Still, the team is always on the lookout for better tools to help reduce Oportun’s risk. That’s how Oportun discovered Amazon Macie in late 2021.

Opportunity | Using Amazon Macie to Automatically Scan TB of Data for Oportun

Over the past 8 years, Oportun has built several solutions on Amazon Web Services (AWS) and stored a considerable amount of data using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. So, when the Oportun data security team started looking for a new data discovery offering for use with Amazon S3 buckets, it considered staying on AWS using Amazon Macie, which automates sensitive data discovery at scale. After initial testing indicated high speed and accuracy, Oportun implemented this solution.
“Using Amazon Macie, we’re seeing a 100 times improvement on both speed to scan and time to discovery,” says Oswaldo Cruz, data security engineer at Oportun.

Outcome | Building a Comprehensive Data Protection Offering Using AWS Services

Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect your sensitive data. Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale.

A primary goal was to reduce risk as much as possible so that member PII is safer in the event of inadvertent access. Oportun is proud of the work that it has done to achieve that goal. “Using Amazon Macie, I think we’re pushing the envelope for the fintech space,” says Carlos. “We have a better sense of where our data is across a number of sources.”

Oportun, a fintech lender and neobank with 1.9 million members, needed a better way to quickly identify and remediate potential security risks to its members’ personally identifiable information (PII). Other solutions Oportun tried could take weeks or months to scan data and identify exposed PII, making it difficult for company leaders to reduce risk. “We knew that there was a lot of PII in our systems,” says Carlos Carlos, director of data security at Oportun. “But we wanted to have a good sense of where that data was at virtually any moment.”

Oportun is an AI-powered digital banking solution that has provided more than $12 billion in responsible and affordable credit.
The company is certified as a Community Development Financial Institution.
Optimize software development with Amazon CodeWhisperer _ AWS DevOps Blog.txt
AWS DevOps Blog

Optimize software development with Amazon CodeWhisperer
by Dhaval Shah, Nikhil Sharma, and Vamsi Cherukuri | on 30 MAY 2023 | in Amazon CodeWhisperer

Businesses differentiate themselves by delivering new capabilities to their customers faster. They must leverage automation to accelerate their software development by optimizing code quality, improving performance, and ensuring their software meets security and compliance requirements. Trained on billions of lines of Amazon and open-source code, Amazon CodeWhisperer is an AI coding companion that helps developers write code by generating real-time whole-line and full-function code suggestions in their IDEs. Amazon CodeWhisperer has two tiers: the individual tier is free for individual use, and the professional tier provides administrative capabilities for organizations seeking to grant their developers access to CodeWhisperer. This blog provides a high-level overview of how developers can use CodeWhisperer.

Getting Started

Getting started with CodeWhisperer is straightforward and documented here. After setup, CodeWhisperer integrates with the IDE and provides code suggestions based on comments written in the IDE. Use TAB to accept a suggestion, ESC to reject it, and ALT+C (Windows) or Option+C (macOS) to trigger a suggestion manually; use the left and right arrow keys to switch between suggestions. CodeWhisperer supports code generation for 15 programming languages and can be used in IDEs such as Amazon SageMaker Studio, Visual Studio Code, AWS Cloud9, the AWS Lambda console, and many JetBrains IDEs. Refer to the Amazon CodeWhisperer documentation for the latest updates on supported languages and IDEs.

Contextual Code Suggestions

CodeWhisperer continuously examines code and comments for contextual code suggestions. It will generate code snippets using this contextual information and the location of your cursor.
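To make this concrete, here is a hypothetical example of the comment-driven flow (not an actual CodeWhisperer transcript): a developer types only the comment, and a whole-function suggestion along these lines can be accepted with TAB.

```python
# Write a function that counts how often each word appears in a string,
# ignoring case, and returns a dictionary mapping word -> count.
def count_words(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts


print(count_words("The quick brown fox jumps over the lazy dog The end"))
```

The suggestion is derived from the comment plus surrounding code, so renaming the function or tightening the comment steers what gets proposed.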
Illustrated below is an example of a code suggestion from inline comments in Visual Studio Code that demonstrates how CodeWhisperer can provide context-specific code suggestions without requiring the user to manually replace variables or parameters. In the comment, the file and Amazon Simple Storage Service (Amazon S3) bucket are specified, and CodeWhisperer uses this context to suggest relevant code.

CodeWhisperer also supports declarative and procedural code, such as shell scripting and query languages. The following example shows how CodeWhisperer recommends the blocks of code in a shell script to loop through servers, execute the hostname command, and save the responses to an output file. In the following example, based on the comment, CodeWhisperer suggests Structured Query Language (SQL) code that uses a common table expression.

CodeWhisperer works with popular Integrated Development Environments (IDEs); for more information on supported IDEs, refer to the CodeWhisperer documentation. Illustrated below is CodeWhisperer integrated with the AWS Lambda console. Amazon CodeWhisperer is a versatile AI coding assistant that can aid in a variety of tasks, including AWS-related tasks and API integrations, as well as external (non-AWS) API integrations. For example, illustrated below is CodeWhisperer suggesting code for Twilio’s APIs.

Now that we have seen how CodeWhisperer can help with writing code faster, the next section explores how to use AI responsibly.

Use AI responsibly

Developers often leverage open-source code; however, they run into license attribution challenges, such as attributing the original authors or maintaining the license text. The challenge lies in properly identifying and attributing the relevant open-source components used within a project. With the abundance of open-source libraries and frameworks available, it can be time-consuming and complex to track and attribute each piece of code accurately.
Failure to meet the license attribution requirements can result in legal issues, violation of intellectual property rights, and damage to a developer’s reputation. CodeWhisperer’s reference tracking continuously monitors suggested code for similarities with known open-source code, allowing developers to make informed decisions about incorporating it into their project and ensuring proper attribution.

Shift left application security

CodeWhisperer can scan code for hard-to-find vulnerabilities such as those in the Open Web Application Security Project (OWASP) top ten, or those that don’t meet crypto library best practices, AWS internal security best practices, and others. As of this writing, CodeWhisperer supports security scanning in Python, Java, and JavaScript. Below is an illustration of identifying well-known CWEs (Common Weakness Enumerations), along with the ability to dive deep into the problematic line of code with the click of a button. In the following example, CodeWhisperer provides file-by-file analysis of CWEs and highlights the top 10 OWASP CWEs, such as unsanitized input run as code, cross-site scripting, resource leaks, hardcoded credentials, SQL injection, OS command injection, and insecure hashing.

Generating Test Cases

A good developer always writes tests. CodeWhisperer can help suggest test cases and verify the code’s functionality. CodeWhisperer considers boundary values, edge cases, and other potential issues that may need to be tested. In the example below, a comment referring to using the fact_demo() function leads CodeWhisperer to suggest a unit test for fact_demo() while leveraging contextual details. Also, CodeWhisperer can simplify creating repetitive code for unit testing. For example, if you need to create sample data using INSERT statements, CodeWhisperer can generate the necessary inserts based on a pattern.
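As a hedged sketch of the kind of unit test such a comment could produce (the fact_demo implementation and test body here are illustrative, not taken from the blog's screenshots), note how the suggestion covers the boundary value 0 alongside ordinary inputs:

```python
def fact_demo(n):
    """Return n! for a non-negative integer n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result


# A comment such as "write a unit test for fact_demo" might yield a
# suggestion along these lines, exercising boundary values and edge cases.
def test_fact_demo():
    assert fact_demo(0) == 1   # boundary value: 0! is 1
    assert fact_demo(1) == 1
    assert fact_demo(5) == 120
    assert fact_demo(10) == 3628800


test_fact_demo()
```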
CodeWhisperer with Amazon SageMaker Studio and Jupyter Lab

CodeWhisperer works with SageMaker Studio and Jupyter Lab, providing code completion support for Python in code cells. To utilize CodeWhisperer, follow the setup instructions to activate it in Amazon SageMaker Studio and Jupyter Lab. To begin coding, see User actions. The following illustration showcases CodeWhisperer’s code recommendations in SageMaker Studio. It demonstrates the suggested code based on comments for loading and analyzing a dataset.

Conclusion

In conclusion, this blog has highlighted the numerous ways in which developers can leverage CodeWhisperer to increase productivity, streamline workflows, and ensure the development of secure code. By adopting CodeWhisperer’s AI-powered features, developers can experience enhanced productivity, accelerated learning, and significant time savings. To take advantage of CodeWhisperer and optimize your coding process, here are the next steps:

1. Visit the feature page to learn more about the benefits of CodeWhisperer.
2. Sign up and start using CodeWhisperer.
3. Read about CodeWhisperer success stories.

About the Authors

Vamsi Cherukuri
Vamsi Cherukuri is a Senior Technical Account Manager at Amazon Web Services (AWS), leveraging over 15 years of developer experience in analytics, application modernization, and data platforms. With a passion for technology, Vamsi takes joy in helping customers achieve accelerated business outcomes through their cloud transformation journey. In his free time, he finds peace in the pursuits of running and biking, frequently immersing himself in the thrilling realm of marathons.

Dhaval Shah
Dhaval Shah is a Senior Solutions Architect at AWS, specializing in Machine Learning. With a strong focus on digital native businesses, he empowers customers to leverage AWS and drive their business growth. As an ML enthusiast, Dhaval is driven by his passion for creating impactful solutions that bring positive change.
In his leisure time, he indulges in his love for travel and cherishes quality moments with his family.

Nikhil Sharma
Nikhil Sharma is a Solutions Architecture Leader at Amazon Web Services (AWS) where he and his team of Solutions Architects help AWS customers solve critical business challenges using AWS cloud technologies and services.

TAGS: codewhisperer, Developer Tools, DevOps
Optimizing Fast Access to Big Data Using Amazon EMR at Thomson Reuters _ Case Study _ AWS.txt
Thomson Reuters is a leading provider of business information services. Its products include highly specialized information software and tools for legal, tax, accounting, and compliance professionals combined with the global news service Reuters.

2023

Outcome | Streamlining Data Accessibility to Drive Company-Wide Innovation

The team also uses AWS CloudFormation to automate deployment of other resources. AWS CloudFormation manages artifacts generated from AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. These artifacts are used at later steps in the pipeline as part of an automated process that reduces manual errors so that the big data team iterates faster. It deploys workflows using AWS CodePipeline, a fully managed continuous delivery service that organizations use to automate their release pipelines for fast and reliable application and infrastructure updates. Instead of staggering workflows over specific times, each step now automatically initiates the next step. “I can’t imagine prioritizing our resources and getting near-real-time updates with our previous architecture,” says Scott Berres, lead developer at TR. “Using Amazon EMR ephemeral clusters, we can go as big as we want at near real time.”

Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.

Using Amazon EMR, TR’s solution automatically adjusts to a fluctuating number of core nodes, from about 200 to more than 10,000 cores per hour.
Amazon EMR clusters are right-sized and created automatically through AWS Step Functions, a visual workflow service for developers who are using AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and ML pipelines. The team deploys AWS Step Functions through AWS CloudFormation, which organizations use to model, provision, and manage AWS and third-party resources by treating infrastructure as code.

After 7 years of big data workflows, the team had increasingly complex business requirements that constantly required new hardware for resource-intensive jobs. The team had been running its 300 workflows on premises using a multitenant single cluster of Apache Hadoop, an open-source framework that is used to store and process large datasets efficiently. For greater stability, the team created a second Apache Hadoop cluster that ran the same code, doubling costs and taking months to coordinate, schedule, and test upgrades. TR wanted to replace its higher-latency computing solution, which was designed for efficient batch processing, with a workflow that could handle the near-real-time data that its demanding business use cases increasingly required.

Rather than running all its workflows on a single Apache Hadoop cluster, TR runs each Apache Spark job on an ephemeral Amazon EMR cluster, which closes out after completion of the job.
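A hedged sketch of how such an ephemeral cluster might be requested; the cluster name, instance sizes, release label, and S3 log path below are illustrative, not TR's actual configuration. The function builds parameters for the EMR RunJobFlow API so the cluster is right-sized per job and shuts itself down when its Spark step finishes:

```python
# Illustrative only: builds RunJobFlow parameters for a transient EMR
# cluster that runs one Spark step and terminates itself afterward.

def build_ephemeral_cluster_request(job_name, spark_args, core_nodes=10):
    return {
        "Name": job_name,
        "ReleaseLabel": "emr-6.9.0",           # example release label
        "Applications": [{"Name": "Spark"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": core_nodes},  # right-sized per job
            ],
            # Ephemeral behavior: shut the cluster down once steps finish.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "Steps": [{
            "Name": "spark-job",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {"Jar": "command-runner.jar",
                              "Args": ["spark-submit"] + spark_args},
        }],
        "LogUri": "s3://example-bucket/emr-logs/",  # hypothetical path
    }

request = build_ephemeral_cluster_request(
    "nightly-ingest", ["s3://example-bucket/jobs/ingest.py"], core_nodes=20)
# The actual submission (requires AWS credentials) would be:
# import boto3
# boto3.client("emr").run_job_flow(**request)
```

In a Step Functions state machine, the equivalent request is expressed as the parameters of an EMR CreateCluster task state, which is how the right-sizing described above is automated per workflow.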
To manage datasets, the solution uses Apache Hudi on Amazon EMR, an open-source data management framework used to simplify incremental data processing and data pipeline development. As a result, TR has reduced cluster runtime by 48 percent. Instead of writing results to the Hadoop Distributed File System, Apache Hudi writes datasets to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, high availability, security, performance, and durability.

With TR’s decision to modernize its technologies and migrate its solutions to the cloud, the big data environment needed a plan. The team started with a small proof of concept around different compute solutions in the cloud. Ultimately, the team chose Amazon EMR, a cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning (ML) using open-source frameworks Apache Spark, Apache Hive, Presto, and more. Every other week throughout the migration, the TR team met with AWS engineers who made suggestions, set up working sessions, and even examined TR’s Apache Spark logs to find answers for any glitches. The team completed its migration of 3,000 Apache Spark jobs to AWS in 18 months. “The overall migration went about as smoothly as it could go,” says John Engelhart, associate architect at TR.
Using Amazon Web Services (AWS), TR’s big data team built a solution that streamlined and standardized its development processes in the cloud. The new solution provided seamless orchestration for TR’s 300 workflows, improved time to market for new features, and simplified access to TR’s big data assets, spurring innovation.

Opportunity | Using Amazon EMR to Build an Elastic Compute Solution for Thomson Reuters

AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.

Solution | Automating Workflows in the Cloud

Teams throughout TR have benefited from the ability of the big data team to provide more streamlined, accessible data. For example, TR has merged its big data tech stack with ML applications within the company. Research and development teams simply read data from Amazon S3 and use it to develop and productionize ML models for other internal teams, speeding innovation and facilitating the release of new products. “Other teams create custom business features, and that wasn’t the case when we were on premises,” says Engelhart. “Now lots of teams can find our data. They ask for it, and with justification and approval, we simply grant access. It’s spreading like wildfire through the company.”

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

In September 2022, TR launched Westlaw Precision, a new version of TR’s online research service and proprietary database for legal professionals. Using TR’s improved workflow built on AWS, Westlaw Precision doubles the speed at which lawyers conduct research, and it improves the quality of searches, reducing the risk of missing relevant cases.
“Using Amazon EMR, we spin up more resources and run our workflows more frequently,” says Engelhart. “That is a huge win. We can provide content updates every 1 hour instead of every 24 hours.”

Learn how Thomson Reuters built scalable, simplified workflows for big data using Amazon EMR.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.
Optimizing Storage Cost and Performance Using Amazon EBS _ Devo Case Study _ AWS.txt
“We use terabytes and even petabytes of storage space,” says Miguel Martín, VP of product operations at Devo. “So the migration was a no-brainer from the financial side after the technical side had been validated.” Devo uses Amazon EBS to improve margins and flexibility, and the savings can be invested elsewhere, such as into value-adding innovation. By managing its infrastructure with block storage, Devo also saves 30–40 percent of the time it would otherwise have to spend on compliance.

Today, in 2022, Devo serves customers among the Fortune 2000. Firms look to Devo to ingest their log data and manage it securely. As part of the SIEM process, Devo provides near-real-time analysis of alerts, which is crucial in an interconnected world with more pathways for security events to occur. “You can think of SIEM as a barrier, like our ozone layer,” says Tony Le, director of cloud partnerships at Devo. “It helps mitigate threats to customers’ networks.”

Optimizing Storage Cost and Performance Using Amazon EBS with Devo

Solution | Providing Scalability and Speed Using Amazon EBS While Saving 20% on Storage

Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).

When company teams need a consultation, Devo turns to AWS Enterprise Support, a 24/7 technical concierge service with high-quality engineers, tools, and technology. In weekly catch-up calls with a Technical Account Manager, Devo works toward cost optimization, operational efficiency, and new projects. Initiatives include a plan to use artificial intelligence and machine learning to automate up to 95 percent of security operations. Devo draws on AWS expertise to make the most of the services it uses while driving the pace of innovation and increasing Devo’s visibility in the marketplace.
Throughout the 3-month migration, Devo’s top concern was serving its customers, so data processes continued to work seamlessly. “The most important factor was migrating without impacting our service availability,” says Martín. “There was zero downtime because we made the changes live. The customers didn’t even notice.”

Devo is a cloud-native logging and security analytics company. Devo empowers global organizations to optimize the value of their security and operational data by providing solutions for near-real-time visibility and insight. Devo needed powerful storage capacity and scalable grid capacity for high-speed response. Older systems can struggle to keep up with new challenges and security events. But using capabilities native to AWS, Devo helps companies with legacy systems achieve next-generation SIEM seamlessly.

Learn how Devo used Amazon EBS to improve profit margins, performance, and competitive flexibility.

In July 2022, Devo became an AWS Partner. Since 2020, Devo has accelerated sales cycles by participating in AWS ISV Accelerate, a co-sell program for organizations that provide software solutions that run on or work alongside AWS.
Founded in Madrid, Spain, in 2011, Devo chose to build its infrastructure on AWS for its maturity and performance. Devo uses AWS to create cybersecurity solutions for organizations, helping transform security operations centers to empower investigation efforts and turning legacy and microservices-based applications into scalable, cloud-based solutions.

2022

Overview

Fast queries, near-real-time alerts, and data security are essential to migrating and managing large volumes of data for high-profile organizations and companies. “Working alongside AWS has helped us grow from a five-person startup to a truly global company,” says Le. “There wouldn’t be Devo without AWS.”

The AWS ISV Accelerate Program is a co-sell program for organizations that provide software solutions that run on or integrate with AWS. The program helps you drive new business and accelerate sales cycles by connecting participating independent software vendors (ISVs) with the AWS Sales organization.

Outcome | Achieving Optimal Cost Structure for Enhanced Storage Performance and Flexibility

Devo, a global cloud-native logging and security analytics company, needed to optimize storage cost and performance for its customers while limiting downtime. As a security information and event management (SIEM) company, Devo is entrusted with mission-critical workloads, so the company cannot afford errors or breaks in availability. It also needs to be scalable because the amount of cybersecurity data a user ingests expands over time.
Needing a way to store large amounts of data with powerful grid capacity for fast query response times, Devo turned to Amazon Web Services (AWS) to optimize cost performance and innovation with zero downtime for customers.

Opportunity | Boosting Devo’s Security Analytics Solutions Using AWS

Devo centralizes customers’ raw data, configuring alerts and dashboards so that customers can rapidly identify malicious activity and unauthorized access. Customers can also take advantage of the powerful analytics that Devo provides on the backend for actionable insights. For example, customers might use these insights to create self-protection strategies for the future. And using the scale and speed of AWS, Devo can respond to queries at submillisecond speeds. To match workloads, the Devo analytics cluster uses Amazon Elastic Compute Cloud (Amazon EC2), secure and resizable compute capacity for virtually any workload, relying on nonvolatile memory express drives to write and replicate ephemeral data quickly. To dynamically increase performance with minimal downtime, Devo uses Amazon Elastic Block Store (Amazon EBS), an easy-to-use, scalable, high-performance block-storage service designed for Amazon EC2. Using Amazon EBS, Devo handles critical workloads, providing reliable storage and processing frequently accessed data while optimizing costs and accommodating customers’ daily needs, whether that’s 500 GB or 10 TB per day. Devo backs up customers’ data using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. The company also maintains archive data for customers who need 3–7 years’ worth of data for compliance purposes.
Devo meets its customers’ high expectations using a combination of current-generation instance types with EBS gp3 and st1 volumes for optimized compute, memory, and storage. In 2021, Devo migrated to Amazon EBS General Purpose gp3 volumes for data replication, realizing a 20 percent cost saving while maintaining performance. Amazon EBS gp3 volumes are general purpose solid state drive–based Amazon EBS volumes that can be used to provision performance independent of storage capacity in peak hours. For nonpeak hours, the data is written to Amazon EBS st1 volumes. With this strategy, Devo can quickly scale its solution capacity, tune performance, and change the type of live volumes with zero interruption to workload. By scaling input/output operations per second and throughput without additional block storage, Devo pays only for the storage it needs.
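The live volume-type change described above maps to the EC2 ModifyVolume API, which changes a volume's type and performance while it stays attached and in use. A minimal sketch, with the volume ID and performance figures invented for illustration and the actual boto3 call left commented out:

```python
# Hypothetical sketch of a live gp2 -> gp3 migration via ModifyVolume;
# the request parameters are built separately so they can be inspected.

def build_modify_volume_request(volume_id, iops=3000, throughput=125):
    """Build parameters for ec2.modify_volume(): switch a live volume to
    gp3 and provision IOPS/throughput independently of its size."""
    return {
        "VolumeId": volume_id,
        "VolumeType": "gp3",
        "Iops": iops,              # gp3 provisions IOPS independently of size
        "Throughput": throughput,  # MiB/s, also independent of size
    }

params = build_modify_volume_request("vol-0123456789abcdef0")
# The real call (requires AWS credentials) needs no detach and no downtime:
# import boto3
# boto3.client("ec2").modify_volume(**params)
print(params["VolumeType"])
```

Because the modification runs while the volume remains attached, the workload keeps reading and writing throughout, which is consistent with the zero-downtime migration the case study describes.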
Optoma-customer-references-case-study.txt
Turning to the Cloud for Agile Software Development

MariaDB is a popular open source relational database created by the original developers of MySQL. Optoma built Creative Board using Amazon Relational Database Service (Amazon RDS) for MariaDB with multi-AZ architecture to ensure service availability for its global customers. It uses Amazon CloudFront for low-latency data transmission, which is essential to support the real-time interaction component of Creative Board. It also employs Amazon ElastiCache for Redis to power real-time applications with sub-millisecond latency.

Additionally, Optoma leverages Amazon Simple Storage Service (Amazon S3) for data storage and retrieval. “Amazon S3 has 99.999999999 percent data durability, which reduces our risk of service interruption by providing high stability to our customers,” Tsuei says. Optoma recently concluded a technical review of its Creative Board build with the AWS team, learning from and applying the principles of the AWS Well-Architected tool. “AWS helped evaluate our architecture design to make sure our service is robust enough to meet the real-time demands of educators and students around the world,” adds Tsuei.

Tarcy Y.M. Tsuei, chief digital officer at Optoma, says, “We knew AWS would provide us with the flexibility to scale dynamically based on actual usage during development and production.” Optoma’s core values include reliability, innovation, and customer focus, and the AWS Cloud supports all three of these elements. Tsuei says, “Using AWS as a platform as a service has helped us provide a more reliable, stable, and secure service offering compared to managing these aspects on our own.
We can focus on business logic and trust AWS for the rest.”

Optoma’s application engineering team is focusing on enhancing Creative Board and collecting user feedback to improve the product. Next, it will evaluate how artificial intelligence (AI) can be applied for further innovation in Creative Board or other education technology applications. “We’re considering how to help teachers determine how effective a class was by measuring participation rates or interaction with the board,” explains Tsuei. “Or how AI could improve students’ concentration and ability to absorb the information shared on Creative Board.”

Stiff competition and long innovation cycles have led many equipment manufacturers to start developing more integrated solutions. Successful manufacturers are using their application and process expertise to create holistic hardware-plus-software solutions tailored to their customers’ needs. This approach has proven sustainable and profitable. Manufacturers that are further ahead in this transformation cycle delivered higher total shareholder returns over the past three years than peers that are just beginning to offer integrated solutions.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

Optoma Facilitates Virtual Collaboration with Hybrid Learning Platform on AWS
Another division within Optoma, the internal IT team, is also taking advantage of the AWS Cloud and Amazon Elastic Container Service (Amazon ECS) to develop a market intelligence platform. The platform will collect data from internal sales and external sources such as social media to stay aligned with sentiment and developments in Optoma’s target markets.

The company recently launched the Creative Board hybrid learning platform, its latest foray into IoT innovation. Creative Board allows users to simultaneously work or learn on Optoma’s interactive panel displays by providing a connected whiteboard with embedded annotation tools. Teachers, students, and corporate employees can use their computer or smartphone browsers to participate in classes or collaborate in brainstorming sessions.

Until its pivot to software-driven innovation, Optoma relied on on-premises infrastructure for its IT requirements. However, when its software team was formed, the company turned to cloud computing for faster software development. Optoma had been using Amazon Web Services (AWS) to run its website and chose AWS as its application development platform. Optoma launched Creative Board 56 percent faster on the AWS Cloud compared to previous application launches on premises. Its engineers can create development infrastructure in as little as one week, whereas the on-premises infrastructure procurement cycle could take three months.

About Optoma
Optoma is a global leader in display technologies such as projectors and interactive flat-panel displays. Its interactive solutions are currently used by corporate and education customers, plus individual consumers, in 159 countries.

In 2017, Optoma introduced Internet of Things (IoT) technology to enable the remote management of its devices.
It first launched the Optoma Connect app for consumers to control projectors in the home. Optoma Connect uses the MQTT IoT messaging protocol running on Amazon Elastic Compute Cloud (Amazon EC2) instances, and it relies on Amazon Alexa to enable voice-activated commands.

Making Virtual Collaboration Easy with Creative Board

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Launching Products 56% Faster at a 36% Lower Cost

Optoma is a leading provider of large format display solutions for large-venue installations, businesses, educators, and consumers. Since establishing its brand in 2000, the company has aspired to captivate, inspire, and help its customers connect via its comprehensive display offerings, from award-winning projectors to interactive flat panels and direct-view indoor LED displays. In 2016, Optoma began building proprietary software solutions to facilitate presentations, collaboration, and communication for remote and hybrid work environments.

To learn more, visit aws.amazon.com/solutions/iot.

In addition to speed of iteration and development, Optoma has found building on the AWS Cloud more cost-efficient. “We estimate a 36 percent cost savings by adopting AWS services because we are saving on the purchase of hardware and software licenses,” Tsuei says. Previously, Optoma would buy and renew licenses for security software, for example, as part of its application stack. On AWS, however, the company benefits from security by design, a foundational concept behind every AWS service.
Tsuei concludes, “AWS continues to support our teams and innovation mindset to bring new and reliable products to market faster.”

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.
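The data layer described for Creative Board (Amazon RDS for MariaDB with Multi-AZ for availability) can be sketched in code. The identifier, instance class, and storage size below are assumptions for illustration, not Optoma's actual configuration; the dict is the shape boto3's rds.create_db_instance() accepts:

```python
# Hypothetical sketch of a Multi-AZ Amazon RDS for MariaDB instance like the
# one the case study describes. All names and sizes are placeholders.
def mariadb_multi_az_params(instance_id, instance_class="db.r5.large",
                            storage_gib=100):
    """Return kwargs for boto3's rds.create_db_instance()."""
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": "mariadb",
        "DBInstanceClass": instance_class,
        "AllocatedStorage": storage_gib,  # GiB
        # Multi-AZ keeps a synchronous standby replica in a second
        # Availability Zone, so the service stays available through
        # instance or AZ failures.
        "MultiAZ": True,
    }

# Usage (placeholder identifier):
#   import boto3
#   boto3.client("rds").create_db_instance(
#       **mariadb_multi_az_params("creative-board-db"),
#       MasterUsername="admin", ManageMasterUserPassword=True)
```

The Multi-AZ flag is the piece that delivers the "service availability for its global customers" the article attributes to the architecture.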
Paige Case Study _ AWS.txt
To run its ML training workloads, Paige uses Amazon EC2 P4d Instances, powered by NVIDIA A100 Tensor Core GPUs, which deliver high performance for ML training and HPC applications in the cloud. Paige uses these instances to queue orchestrated ML jobs, optimized to avoid paying for the idle time between jobs and providing fit-for-purpose compute across its two compute environments. “Using Amazon EC2 P4d Instances, we increased our compute capacity while balancing costs across our on-premises and cloud environments,” says Razik Yousfi, vice president of engineering at Paige. “We didn’t have to come up with a substantial amount of capital to improve the performance of our HPC clusters.”

Architecture diagram: Paige’s Compute Environments.

Paige is using the power of AI to drive a new era of cancer discovery and treatment. To improve the lives of patients with cancer, Paige has created a cloud-based platform that transforms pathologists’ workflow and increases diagnostic confidence as well as productivity.

In 2021, Paige created a proof of concept to determine which cloud services would best suit its HPC needs and work alongside its existing solutions, including PyTorch, which it uses as its ML framework. “The AWS team was great in connecting us with subject matter experts,” says Fleishman. “Those subject matter experts helped us evolve our proof of concept without wasting resources and successfully pitch using AWS to leadership.” With the information it gleaned from this test, Paige decided to replicate its on-premises workflow in the cloud, using AWS to expand its compute resources for intensive ML workloads.
Now that Paige has built an ML workflow in the cloud, it will continue exploring more of the latest cloud technologies to find new ways to innovate and deliver more value to life sciences and healthcare organizations. “We’ve used AWS services to deploy a workflow that looks like what we have on premises with additional flexibility and scalability,” says Sarte. “On AWS, we can test out new cloud services more efficiently and find purpose-built solutions to support our ML training.”

Amazon EC2 P4d instances deliver the highest performance for machine learning (ML) training and high performance computing (HPC) applications in the cloud.

To overcome this challenge, Paige turned to Amazon Web Services (AWS) and adopted a hybrid infrastructure model for running its PyTorch-based ML workloads and managing its growing data footprint. To improve the runtime performance of its software, the company adopted Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Paige has replicated its on-premises workflows in the cloud, giving it the ability to use its on-premises and cloud environments in parallel through similar user interfaces. Additionally, the company can access compute capacity in bursts, helping it scale up and down as required by its ML workloads.
This scalability helps Paige minimize operational overhead, reduce compute costs, and improve staff productivity.

With its hybrid cloud architecture, the Paige development team doesn’t have to manually run every ML workload. “On AWS, our developers can queue up our software and run our ML workloads without having to keep their hands on their keyboards,” says Matthew Sarte, senior systems engineer for HPC at Paige. Now that the company has streamlined its internal workflows to save time and improve productivity, the Paige team can focus on training more ML models and driving innovation.

About Paige
Founded in 2017, Paige strives to transform cancer diagnostics by developing clinical-grade AI solutions to extract key insights from digital slides, such as large-size pathology images. Using ML, Paige can assist pathologists in the diagnosis of cancer and unlock hidden insights that are not visible to the naked eye, helping advance drug discovery and clinical breakthroughs.

In 2019, Paige adopted Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. Based on its experience using this service, the company wanted to deepen its use of AWS so it could maintain consistency across its cloud technologies. “Amazon S3 simplified our data management,” says Brandon Rothrock, director of AI science at Paige.
“This service gave us the ability to use common interfaces and deep integration with our data platform, annotation platform, HPC compute, and many other applications that surround AI development operations.”

Paige uses Elastic Fabric Adapter—which facilitates HPC and ML applications at scale—to distribute training workloads across multiple servers and accelerate training large ML models. To host its imaging and slide data, Paige uses Amazon FSx for Lustre, fully managed shared storage built on a popular high-performance file system. The company connected this service with some of its Amazon S3 buckets, which helps its development teams address petabytes of ML input data without manually prestaging data on high-performance file systems. “By connecting Amazon FSx for Lustre to Amazon S3, we can train on 10 times the amount of data that we have ever tried in the on-premises infrastructure without any trouble,” says Alexander van Eck, staff AI engineer at Paige. The company manages assets that need to be visible both in the cloud and on premises using AWS Storage Gateway, which provides on-premises applications with access to virtually unlimited cloud storage.

Biotechnology company Paige develops complex, advanced machine learning (ML) applications that support healthcare professionals in delivering precision diagnoses and treatment plans, helping improve their quality of care and patient outcomes. Because of its innovative approach to cancer detection, Paige became the first company to receive U.S. Food and Drug Administration approval for using artificial intelligence (AI) in the field of pathology. The company had built an on-premises solution, with a high performance computing (HPC) cluster powered by NVIDIA GPUs for running its ML workloads. Because Paige wanted to continue expanding its operations and developing more ML models, it needed to update its infrastructure given its growing computational requirements.
To meet this need, Paige wanted to use cost-effective, scalable HPC resources in the cloud.

To support its operations, Paige requires a robust infrastructure that can handle the complexity of its training codebase and amount of training data. Before building its cloud infrastructure, the company developed its ML models natively on PyTorch and deployed its software using an HPC cluster that it had built using on-premises hardware. As Paige expanded its product and scientific pipeline, the company needed to scale its compute resources to match the increased demand. “Our on-premises solutions were maxed out,” says Mark Fleishman, senior director of infrastructure at Paige. “Our main goal is to train AI and ML models to help with cancer pathology. And the more compute capacity we have, the faster we can train our models and help solve diagnostic problems.”

Learn how Paige in the life sciences industry accelerates PyTorch-based ML model training using Amazon EC2 P4d Instances powered by NVIDIA.

Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.

Paige Furthers Cancer Treatment Using a Hybrid ML Workflow Built with Amazon EC2 P4d Instances
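The FSx for Lustre–to–S3 link that lets Paige train on petabytes without prestaging data can be sketched as follows. The bucket name, capacity, subnet, and deployment type are assumptions for illustration, not Paige's actual setup; the dict matches the shape boto3's fsx.create_file_system() accepts:

```python
# Illustrative sketch: parameters for an Amazon FSx for Lustre file system
# linked to an S3 bucket, so training jobs see bucket objects as files and
# load their contents lazily on first access. All identifiers are placeholders.
def lustre_from_s3_params(s3_bucket, capacity_gib=1200,
                          subnet_id="subnet-EXAMPLE"):
    """Return kwargs for boto3's fsx.create_file_system()."""
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,  # GiB; assumed minimum-size scratch FS
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            # ImportPath links the file system to the bucket's objects.
            "ImportPath": f"s3://{s3_bucket}",
            "DeploymentType": "SCRATCH_2",  # assumed; persistent types also exist
        },
    }

# Usage (placeholder bucket):
#   import boto3
#   boto3.client("fsx").create_file_system(**lustre_from_s3_params("slide-data"))
```

Because the file system imports metadata from S3 rather than copying every object up front, the training cluster can address far more data than it could stage locally, which is the effect the article describes.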
PayEye Launches POC for Biometric Payments in 5 Months Using AWS _ Amazon EKS.txt
Because PayEye uses individuals’ personal biometric information to authenticate payments, security and data protection are major concerns. To gain approval to launch its service, it needed to ensure compliance with the EU General Data Protection Regulation (GDPR) and demonstrate to the Polish Financial Supervision Authority that it could ensure high levels of security for its users. “Security is crucial to our service,” says Łyczba. “Using tools available from AWS, we are satisfied that we have achieved the high regulatory standard required.”

PayEye’s secure biometrics technology converts facial and iris features into unique patterns to authenticate payments. Consumers can use the technology to make biometrically authenticated purchases at shops, restaurants, and sports clubs after a very short registration on a mobile application, using point-of-sale devices called the eyePOS.

The startup uses AWS for many aspects of its solution. “From security and databases to configuration, deployment, and caching, AWS was critical to developing our biometrics technology,” says Łukasz Łyczba, chief technology officer (CTO) at PayEye. “Most of our solution relies on it.”

PayEye realized further cost savings by following suggestions from its AWS account team on ways to optimize its services. “We were able to precisely track our budget to ensure we could launch our proof of concept without seeking additional funding,” says Łyczba.

Startup PayEye, founded in Poland in 2019, developed a biometrics payment service that uses a person’s iris and face to authenticate purchases. The company needed to act fast to secure funding, gain regulatory approvals, and win over retail partners before launching its solution.
PayEye built its platform on AWS and completed a proof of concept for its biometric authentication technology in 5 months. Assisted by the tools and services available from AWS, it navigated security and data protection regulations, and launched a complete and secure payment ecosystem soon after the initial proof of concept. PayEye also uses AWS to analyze real-time data on device performance and user numbers to improve customer experience. PayEye uses Amazon QuickSight, a cloud-native, serverless business intelligence service. “From Amazon QuickSight dashboards we’re able to see which units are the most profitable and prioritize any tweaks that need to be made to functionality—this maximizes uptime for key revenue generators,” says Łyczba.

PayEye has a vision for a future where customers can authenticate purchases using their iris and face. Founded in 2019, the Polish startup knew that it needed to quickly demonstrate the technology for its biometric payment service to secure funding and win over retail and ecommerce partners.

About PayEye
PayEye, assisted by the tools and services available from AWS, navigated security and data protection regulations and has processed over 10,000 commercial transactions. The company also uses AWS to provide data-driven insights that help it to improve customer experience and support the international rollout of its payment service.

Building a Secure Iris-Recognition Payment System on AWS

PayEye has more than 150 retail partners and has logged over 2,000 verified users for its payment service. This early success is due in part to the company monitoring and analyzing real-time device performance and customer usage. From this analysis, it gains insights into how it can improve its platform and customer experience. “With hardware it’s crucial to know how the devices are operating and which are most profitable,” says Łyczba.
“This dictates how we prioritize maintenance and development.”

Using Amazon Web Services (AWS), the company launched a proof of concept within 5 months and soon after conducted the first commercial transaction in June 2020. PayEye customers can now authenticate payments from 150 point-of-sale devices installed in retail shops, restaurants, and sports clubs in the Polish city of Wrocław.

Speeding up Development, Saving Costs, and Clearing Regulatory Hurdles

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

PayEye built its solution on Amazon Elastic Kubernetes Service (Amazon EKS), which makes it easy to deploy, manage, and scale containerized applications using Kubernetes. It also uses Amazon MQ, which reduces operational responsibilities by managing the provisioning, setup, and maintenance of message brokers.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events.

PayEye sped up the development process and entered the production phase within just a few months, using out-of-the-box AWS services. This approach reduced the time and effort needed to find and hire talent, and has freed up PayEye’s team to focus on developing its core offering while being supported by just one cloud architect, lead DevOps engineer Lukasz Garncarz. “Using AWS is like having an in-house team,” says Łyczba. “We’ve saved money on recruitment, and we didn’t have to sink time into a lengthy hiring process.”

PayEye has created a biometric payment system that authenticates purchases through biometrics recognition. Founded in 2019, the Polish company provides its proprietary eyePOS terminals to retailers and restaurants and its mobile application to end users.

PayEye has just launched the next generation of its eyePOS devices and plans to launch its new biometric technology internationally in the coming months. The company expects it will be easy to recruit new team members as they continue to grow. “Everyone wants to work for a company that is changing global trends,” says Łyczba. “AWS supports us in this.”
PayEye Launches Proof of Concept for Biometric Payments in 5 Months Using AWS

Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning.
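The case study says PayEye relies on Amazon MQ for managed message brokering but does not say which engine it runs, so the sketch below arbitrarily assumes RabbitMQ; the broker name, version, instance type, and credentials are all placeholders. It builds the parameter dict boto3's mq.create_broker() accepts:

```python
# Hypothetical sketch of an Amazon MQ broker like the one the case study
# alludes to. Engine choice, version, sizes, and credentials are assumptions.
def rabbitmq_broker_params(name):
    """Return kwargs for boto3's mq.create_broker()."""
    return {
        "BrokerName": name,
        "EngineType": "RABBITMQ",
        "EngineVersion": "3.10",            # assumed version
        "HostInstanceType": "mq.m5.large",  # assumed size
        # Multi-AZ cluster deployment for availability of the payment flow.
        "DeploymentMode": "CLUSTER_MULTI_AZ",
        "PubliclyAccessible": False,
        "AutoMinorVersionUpgrade": True,
        # Placeholder credentials; Amazon MQ requires a 12+ character password.
        "Users": [{"Username": "app-user", "Password": "CHANGE-ME-12chars"}],
    }

# Usage (placeholder name):
#   import boto3
#   boto3.client("mq").create_broker(**rabbitmq_broker_params("payeye-broker"))
```

Keeping the broker private (PubliclyAccessible=False) fits the strict security posture the article describes for handling biometric data.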
Postis Case Study.txt
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.

Postis is pursuing further international expansion to scale up its territorial coverage, the number of retailers it serves, and to optimize and increase the volume of deliveries. Postis is using local AWS compute resources to run its ML models.

Postis wants to help retailers and delivery companies master the last mile of the journey to their customers. The fast-growing Romanian startup provides a real-time digital platform for logistics automation, optimization, and tracking that ensures an excellent service experience across the entire consumer journey, from ordering all the way through to receiving goods.

Postis is off to a strong start, rapidly expanding its customer base and tripling revenues every year since its inception. “We’re now prepared to offer our services in all of Europe and to continue adding features to increase our ecosystem’s reach,” says Bulgarov. “Building our products on AWS has helped us achieve a lot in a short amount of time.” Retailers that use Postis can now offer new features to end users, thanks to speedy access to delivery data on the platform. For example, buyers receive the cost of their selected shipping option in real time, so retailers can provide the exact delivery cost while a buyer is still placing an order.

To do this, Postis uses machine learning (ML) to help sellers find the most suitable and cost-effective delivery solution for every type of product, customer journey, or destination. The company used Amazon Web Services (AWS) to create a scalable system with the power to run heavy ML workloads and support its global growth.
This means Postis’ customers can then offer deliveries in new areas without the need to adjust their IT systems. “Our customers can quickly get set up to accept orders from new countries,” says Florin. “We have all of the infrastructure and data ready for them, so they just need to sign contracts with local couriers.”

Using Amazon SageMaker to Quickly Train ML Models

Looking for a more efficient solution, the company began using Amazon SageMaker to build, train, and deploy its ML model. After that model started producing good results in a timely manner, Postis used Amazon Kinesis—which makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights—to create easy-to-use dashboards to track the progress of deliveries in real time. It shares these dashboards with all internal departments to quickly identify bugs and to streamline customer service processes.

Using AWS, the company, founded in 2017, now serves more than 200 customers in 25 countries across retail, ecommerce, logistics, and transportation—including big names such as Ikea, Carrefour, Auchan, and Intersport. It works to help customers provide efficient deliveries and make smarter strategic decisions.

Amazon SageMaker lets you build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

When customers use Postis, the system analyzes their delivery operations and provides insights on the real-life behavior of their actual buyers. They can then use this information to make better strategic decisions. For instance, retailers can identify last-mile delivery failures and other common buyer experience issues, and then implement new policies to remedy problems. With historical data, retailers can analyze and compare performance and quality across their entire pool of carriers, improving their selection and contract negotiation.
Data-driven decisions are also taken in real time, choosing the best solution based on more than 100 criteria in under 20 milliseconds.

Postis Simplifies International Deliveries Using ML and Amazon SageMaker

Postis is a fast-growing tech startup from Romania that provides a real-time digital platform for logistics automation, optimization, and tracking. Its software-as-a-service offering helps retailers and other businesses improve the efficiency of their delivery systems using machine learning. In just 3 years, Postis has expanded to manage orders in 25 European countries.

Tracking data has also reduced order refusal rates—how often buyers don’t accept their delivery at their home—by 20 percent. “Some of our customers save hundreds of thousands of euros annually because our system reduces their refusal rates,” says Bulgarov.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.

Its databases scale automatically to meet variable demand using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for any workload, and Amazon Relational Database Service (Amazon RDS), which allows it to set up, operate, and scale relational databases in the cloud. “We don’t have to intervene when traffic spikes. Everything scales automatically,” says Bulgarov. “This means we’re confident we’re providing a reliable service and our IT teams can focus on other tasks.”

Using APIs built on AWS, Postis can send real-time alerts on the progress of deliveries back to retailers or directly to consumers who have ordered goods. Providing consumers with direct access to tracking details helps Postis customers reduce the load on their customer service teams. Retailers received 25 percent fewer calls to their contact centers after they began using Postis’ real-time updates system to provide consumers with SMS or email alerts.

Because it works with retailers, Postis needs to handle spikes in demand during busy shopping periods such as Black Friday and the Christmas season. It handles 7–10 times more orders during these peak times.

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Postis knew that by using ML it could provide the most efficient delivery options to its customers. But training an ML model requires vast amounts of data about transport and logistics operations, including how long deliveries take, customer delivery preferences, the best alternatives between fulfilment points and delivery places, the performance of local couriers, and how often deliveries are rejected by recipients.

Postis spent a year collecting this data from customers and manually creating statistical formulas to produce useful insights. The process helped the team realize which data points were most valuable for training the model it now uses. “Our initial model ran too slowly on our on-premises resources, but the process was useful, because that’s when we started to understand the different factors that affect deliveries,” says Florin Bulgarov, chief data scientist at Postis.
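The real-time carrier selection described above — scoring every option against many criteria within a tight latency budget — can be illustrated with a toy weighted-scoring function. The criteria, weights, and courier names below are invented for illustration; Postis's actual model is ML-based and uses over 100 criteria, not a hand-set weighted sum:

```python
# Toy illustration of multi-criteria carrier selection (not Postis's model).
# Each option maps criterion -> normalized value in [0, 1]; lower is better.
def pick_carrier(options, weights):
    """Return the carrier option with the lowest weighted score."""
    def score(opt):
        return sum(weights[c] * opt[c] for c in weights)
    return min(options, key=score)

# Hypothetical couriers and weights:
options = [
    {"name": "courier_a", "cost": 0.4, "delay_risk": 0.2, "refusal_rate": 0.1},
    {"name": "courier_b", "cost": 0.3, "delay_risk": 0.6, "refusal_rate": 0.3},
]
weights = {"cost": 0.5, "delay_risk": 0.3, "refusal_rate": 0.2}

best = pick_carrier(options, weights)  # -> the courier_a option (score 0.28 vs 0.39)
```

A pure weighted sum over a handful of options runs in microseconds, which hints at why even a far richer learned scoring function can stay well under a 20-millisecond decision budget.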
Power recommendation and search using an IMDb knowledge graph Part 1 _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Power recommendation and search using an IMDb knowledge graph – Part 1
by Gaurav Rele, Soji Adeshina, Divya Bhargavi, Karan Sindwani, Vidya Sagar Ravipati, and Matthew Rhodes | on 20 DEC 2022 | in Advanced (300), Amazon ML Solutions Lab, Amazon Neptune, Amazon OpenSearch Service, Amazon SageMaker, AWS Data Exchange

The IMDb and Box Office Mojo Movies/TV/OTT licensable data package provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million movie, TV, and entertainment titles; and global box office reporting data from more than 60 countries. Many AWS media and entertainment customers license IMDb data through AWS Data Exchange to improve content discovery and increase customer engagement and retention. In this three-part series, we demonstrate how to transform and prepare IMDb data to power out-of-catalog search for your media and entertainment use cases. In this post, we discuss how to prepare IMDb data and load the data into Amazon Neptune for querying. In Part 2, we discuss how to use Amazon Neptune ML to train graph neural network (GNN) embeddings from the IMDb graph. In Part 3, we walk through a demo application for out-of-catalog search that is powered by the GNN embeddings.

Solution overview

In this series, we use the IMDb and Box Office Mojo Movies/TV/OTT licensed data package to show how you can build your own applications using graphs. This licensable data package consists of JSON files with IMDb metadata for more than 9 million titles (including movies, TV and OTT shows, and video games) and credits for more than 11 million cast, crew, and entertainment professionals. IMDb’s metadata package also includes over 1 billion user ratings, as well as plots, genres, categorized keywords, posters, credits, and more.
IMDb delivers data through AWS Data Exchange, which makes it incredibly simple for you to access data to power your entertainment experiences and seamlessly integrate with other AWS services. IMDb licenses data to a wide range of media and entertainment customers, including pay TV, direct-to-consumer, and streaming operators, to improve content discovery and increase customer engagement and retention. Licensing customers also use IMDb data to enhance in-catalog and out-of-catalog title search and power relevant recommendations.

We use the following services as part of this solution:

- AWS Lambda
- Amazon Neptune
- Amazon Neptune ML
- Amazon OpenSearch Service
- AWS Glue
- Amazon SageMaker notebooks
- Amazon SageMaker Processing
- Amazon SageMaker Training

The following diagram depicts the workflow for Part 1 of this three-part series. In this post, we walk through the following high-level steps:

1. Provision Neptune resources with AWS CloudFormation.
2. Access the IMDb data from AWS Data Exchange.
3. Clone the GitHub repo.
4. Process the data in Neptune Gremlin format.
5. Load the data into a Neptune cluster.
6. Query the data using the Gremlin query language.

Prerequisites

The IMDb data used in this post requires an IMDb content license and paid subscription to the IMDb and Box Office Mojo Movies/TV/OTT licensing package in AWS Data Exchange. To inquire about a license and access sample data, visit developer.imdb.com. Additionally, to follow along with this post, you should have an AWS account and familiarity with Neptune, the Gremlin query language, and SageMaker.

Provision Neptune resources with AWS CloudFormation

Now that you've seen the structure of the solution, you can deploy it into your account to run an example workflow. You can launch the stack in AWS Region us-east-1 on the AWS CloudFormation console by choosing Launch Stack. To launch the stack in a different Region, refer to Using the Neptune ML AWS CloudFormation template to get started quickly in a new DB cluster.
The following screenshot shows the stack parameters to provide. Stack creation takes approximately 20 minutes. You can monitor the progress on the AWS CloudFormation console. When the stack is complete, you're now ready to process the IMDb data. On the Outputs tab for the stack, note the values for NeptuneExportApiUri and NeptuneLoadFromS3IAMRoleArn. Then proceed to the following steps to gain access to the IMDb dataset.

Access the IMDb data

IMDb publishes its dataset once a day on AWS Data Exchange. To use the IMDb data, you first subscribe to the data in AWS Data Exchange, then you can export the data to Amazon Simple Storage Service (Amazon S3). Complete the following steps:

1. On the AWS Data Exchange console, choose Browse catalog in the navigation pane.
2. In the search field, enter IMDb.
3. Subscribe to either IMDb and Box Office Mojo Movie/TV/OTT Data (SAMPLE) or IMDb and Box Office Mojo Movie/TV/OTT Data.
4. Complete the steps in the following workshop to export the IMDb data from AWS Data Exchange to Amazon S3.

Clone the GitHub repository

Complete the following steps:

1. Open the SageMaker instance that you created from the CloudFormation template.
2. Clone the GitHub repository.

Process IMDb data in Neptune Gremlin format

To add the data into Amazon Neptune, we process the data into Neptune Gremlin format. From the GitHub repository, we run process_imdb_data.py to process the files. The script creates the CSVs to load the data into Neptune. Upload the data to an S3 bucket and note the S3 URI location. Note that for this post, we filter the dataset to include only movies. You need either an AWS Glue job or Amazon EMR to process the full data. To process the IMDb data using AWS Glue, complete the following steps:

1. On the AWS Glue console, in the navigation pane, choose Jobs.
2. On the Jobs page, choose Spark script editor.
3. Under Options, choose Upload and edit existing script and upload the 1_process_imdb_data.py file.
4. Choose Create.
5. On the editor page, choose Job Details.
On the Job Details page, add the following options: For Name , enter imdb-graph-processor . For Description , enter processing IMDb dataset and convert to Neptune Gremlin Format . For IAM role , use an existing AWS Glue role or create an IAM role for AWS Glue . Make sure you give permission to your Amazon S3 location for the raw data and output data path. For Worker type , choose G 2X . For Requested number of workers , enter 20. Expand Advanced properties . Under Job Parameters , choose Add new parameter and enter the following key value pair: For the key, enter --output_bucket_path . For the value, enter the S3 path where you want to save the files. This path is also used to load the data into the Neptune cluster. To add another parameter, choose Add new parameter and enter the following key value pair: For the key, enter --raw_data_path . For the value, enter the S3 path where the raw data is stored. Choose Save and then choose Run . This job takes about 2.5 hours to complete. The following table provide details about the nodes for the graph data model. Description Label Principal cast members Person Long format movie Movie Genre of movies Genre Keyword descriptions of movies Keyword Shooting locations of movies Place Ratings for movies rating Awards event where movie received an award awards Similarly, the following table shows some of the edges included in the graph. There will be in total 24 edge types. Description Label From To Movies an actress has acted in casted-by-actress Movie Person Movies an actor has acted in casted-by-actor Movie Person Keywords in a movie by character described-by-character-keyword Movie keyword Genre of a movie is-genre Movie Genre Place where the movie was shot Filmed-at Movie Place Composer of a movie Crewed-by-composer Movie Person award nomination Nominated_for Movie Awards award winner Has_won Movie Awards Load the data into a Neptune cluster In the repo, navigate to the graph_creation folder and run the 2_load.ipynb . 
To load the data to Neptune, use the %load command in the notebook, and provide your AWS Identity and Access Management (IAM) role ARN and the Amazon S3 location of your processed data:

role = '<NeptuneLoadFromS3IAMRoleArn>'
%load -l {role} -s <s3_location> --store-to load_id

The following screenshot shows the output of the command. Note that the data load takes about 1.5 hours to complete. To check the status of the load, use the following command:

%load_status {load_id['payload']['loadId']} --errors --details

When the load is complete, the status displays LOAD_COMPLETED, as shown in the following screenshot. All the data is now loaded into graphs, and you can start querying the graph.

Fig: Sample knowledge graph representation of movies in the IMDb dataset. Movies "Saving Private Ryan" and "Bridge of Spies" have common connections like actor and director as well as indirect connections through movies like "The Catcher was a Spy" in the graph network.

Query the data using Gremlin

To access the graph in Neptune, we use the Gremlin query language. For more information, refer to Querying a Neptune Graph. The graph consists of a rich set of information that can be queried directly using Gremlin. In this section, we show a few examples of questions that you can answer with the graph data. In the repo, navigate to the graph_creation folder and run the 3_queries.ipynb notebook. The following section goes over all the queries from the notebook.

Worldwide gross of movies that have been shot in New Zealand, with minimum 7.5 rating

The following query returns the worldwide gross of movies filmed in New Zealand with a minimum rating of 7.5:

%%gremlin --store-to result
g.V().has('place', 'name', containing('New Zealand')).in().has('movie', 'rating', gt(7.5)).dedup().valueMap(['name', 'gross_worldwide', 'rating', 'studio','id'])

The following screenshot shows the query results.
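The next example combines three stored Gremlin result sets using Pandas. As a toy sketch of that merge step (made-up valueMap-style rows and hypothetical titles standing in for the real stored results), inner merges keep only the movies common to all three sets:

```python
import pandas as pd

# Toy stand-ins for three Gremlin result sets. valueMap wraps each property
# value in a list, which is mimicked here.
result_action = [{"name": ["The Dark Knight"], "year": [2008]},
                 {"name": ["Inception"], "year": [2010]}]
result_drama = [{"name": ["The Dark Knight"], "year": [2008]},
                {"name": ["Forrest Gump"], "year": [1994]}]
result_actors = [{"name": ["The Dark Knight"], "year": [2008]}]

def to_df(result):
    # Unwrap the single-element lists that valueMap returns.
    return pd.DataFrame([{k: v[0] for k, v in row.items()} for row in result])

# Inner merges keep only the movies present in all three result sets.
merged = (to_df(result_action)
          .merge(to_df(result_drama), on=["name", "year"])
          .merge(to_df(result_actors), on=["name", "year"]))
print(merged)
```

With these toy inputs, only "The Dark Knight" survives all three merges; the notebook applies the same idea to the stored query results.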
Top 50 movies that belong to action and drama genres and have Oscar-winning actors

In the following example, we want to find the top 50 movies in two different genres (action and drama) with Oscar-winning actors. We can do this by using three different queries and merging the information using Pandas:

%%gremlin --store result_action
g.V().has('genre', 'name', 'Action').in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

%%gremlin --store result_drama
g.V().has('genre', 'name', 'Drama').in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

%%gremlin --store result_actors --silent
g.V().has('person', 'oscar_winner', true).in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

The following screenshot shows our results.

Top movies that have common keywords "tattoo" and "assassin"

The following query returns movies with the keywords "tattoo" and "assassin":

%%gremlin --store result
g.V().has('keyword','name','assassin').in("described-by-plot-related-keyword").where(out("described-by-plot-related-keyword").has('keyword','name','tattoo')).dedup().limit(10).valueMap(['name', 'poster','year'])

The following screenshot shows our results.

Movies that have common actors

In the following query, we find movies that feature both Leonardo DiCaprio and Tom Hanks:

%%gremlin --store result
g.V().has('person', 'name', containing('Leonardo DiCaprio')).in().hasLabel('movie').out().has('person','name', 'Tom Hanks').path().by(valueMap('name', 'poster'))

We get the following results.

Conclusion

In this post, we showed you the power of the IMDb and Box Office Mojo Movies/TV/OTT dataset and how you can use it in various use cases by converting the data into a graph and querying it with Gremlin. In Part 2 of this series, we show you how to create graph neural network models on this data that can be used for downstream tasks.
For more information about Neptune and Gremlin, refer to Amazon Neptune Resources for additional blog posts and videos.

About the Authors

Gaurav Rele is a Data Scientist at the Amazon ML Solution Lab, where he works with AWS customers across different verticals to accelerate their use of machine learning and AWS Cloud services to solve their business challenges.

Matthew Rhodes is a Data Scientist I working in the Amazon ML Solutions Lab. He specializes in building Machine Learning pipelines that involve concepts such as Natural Language Processing and Computer Vision.

Divya Bhargavi is a Data Scientist and Media and Entertainment Vertical Lead at the Amazon ML Solutions Lab, where she solves high-value business problems for AWS customers using Machine Learning. She works on image/video understanding, knowledge graph recommendation systems, and predictive advertising use cases.

Karan Sindwani is a Data Scientist at Amazon ML Solutions Lab, where he builds and deploys deep learning models. He specializes in the area of computer vision. In his spare time, he enjoys hiking.

Soji Adeshina is an Applied Scientist at AWS where he develops graph neural network-based models for machine learning on graphs tasks with applications to fraud & abuse, knowledge graphs, recommender systems, and life sciences. In his spare time, he enjoys reading and cooking.

Vidya Sagar Ravipati is a Manager at the Amazon ML Solutions Lab, where he leverages his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption.
Power recommendations and search using an IMDb knowledge graph Part 3 _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Power recommendations and search using an IMDb knowledge graph – Part 3

by Divya Bhargavi, Soji Adeshina, Gaurav Rele, Karan Sindwani, Vidya Sagar Ravipati, and Matthew Rhodes | on 06 JAN 2023 | in Amazon ML Solutions Lab, Amazon Neptune, Amazon OpenSearch Service, Amazon SageMaker, Customer Solutions, Data Science & Analytics for Media, Media & Entertainment, Technical How-to

This three-part series demonstrates how to use graph neural networks (GNNs) and Amazon Neptune to generate movie recommendations using the IMDb and Box Office Mojo Movies/TV/OTT licensable data package, which provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million movie, TV, and entertainment titles; and global box office reporting data from more than 60 countries. Many AWS media and entertainment customers license IMDb data through AWS Data Exchange to improve content discovery and increase customer engagement and retention. The following diagram illustrates the complete architecture implemented as part of this series.

In Part 1, we discussed the applications of GNNs and how to transform and prepare our IMDb data into a knowledge graph (KG). We downloaded the data from AWS Data Exchange and processed it in AWS Glue to generate KG files. The KG files were stored in Amazon Simple Storage Service (Amazon S3) and then loaded in Amazon Neptune. In Part 2, we demonstrated how to use Amazon Neptune ML (in Amazon SageMaker) to train the KG and create KG embeddings. In this post, we walk you through how to apply our trained KG embeddings in Amazon S3 to out-of-catalog search use cases using Amazon OpenSearch Service and AWS Lambda. You also deploy a local web app for an interactive search experience.
All the resources used in this post can be created using a single AWS Cloud Development Kit (AWS CDK) command as described later in the post.

Background

Have you ever inadvertently searched for a content title that wasn't available in a video streaming platform? If so, you will find that instead of facing a blank search result page, you see a list of movies in the same genre, or with the same cast or crew members. That's an out-of-catalog search experience! Out-of-catalog search (OOC) is when you enter a search query that has no direct match in a catalog. This event frequently occurs in video streaming platforms that constantly purchase a variety of content from multiple vendors and production companies for a limited time. The absence of relevancy or mapping from a streaming company's catalog to large knowledge bases of movies and shows can result in a sub-par search experience for customers who query OOC content, thereby lowering the interaction time with the platform. This mapping can be done by manually mapping frequent OOC queries to catalog content or can be automated using machine learning (ML). In this post, we illustrate how to handle OOC by utilizing the power of the IMDb dataset (the premier source of global entertainment metadata) and knowledge graphs.

OpenSearch Service is a fully managed service that makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. OpenSearch Service offers the latest versions of OpenSearch, support for 19 versions of Elasticsearch (1.5 to 7.10 versions), as well as visualization capabilities powered by OpenSearch Dashboards and Kibana (1.5 to 7.10 versions). OpenSearch Service currently has tens of thousands of active customers with hundreds of thousands of clusters under management processing trillions of requests per month.
OpenSearch Service offers kNN search, which can enhance search in use cases such as product recommendations, fraud detection, and image and video search, as well as semantic scenarios like document and query similarity. For more information about the natural language understanding-powered search functionalities of OpenSearch Service, refer to Building an NLU-powered search application with Amazon SageMaker and the Amazon OpenSearch Service KNN feature.

Solution overview

In this post, we present a solution to handle OOC situations through knowledge graph-based embedding search using the k-nearest neighbor (kNN) search capabilities of OpenSearch Service. The key AWS services used to implement this solution are OpenSearch Service, SageMaker, Lambda, and Amazon S3. Check out Part 1 and Part 2 of this series to learn more about creating knowledge graphs and GNN embeddings using Amazon Neptune ML.

Our OOC solution assumes that you have a combined KG obtained by merging a streaming company KG and the IMDb KG. This can be done through simple text processing techniques that match titles along with the title type (movie, series, documentary), cast, and crew. Additionally, this joint knowledge graph has to be trained to generate knowledge graph embeddings through the pipelines mentioned in Part 1 and Part 2. The following diagram illustrates a simplified view of the combined KG.

To demonstrate the OOC search functionality with a simple example, we split the IMDb knowledge graph into customer-catalog and out-of-customer-catalog. We mark the titles that contain "Toy Story" as an out-of-customer-catalog resource and the rest of the IMDb knowledge graph as customer catalog. In a scenario where the customer catalog is not enhanced or merged with external databases, a search for "toy story" would return any title that has the words "toy" or "story" in its metadata, with the OpenSearch text search.
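As a toy, self-contained illustration of this split-catalog setup, the following sketch performs a text match against an external catalog and then ranks in-catalog titles by embedding similarity. The titles and embedding vectors are made up, and brute-force cosine similarity stands in for the OpenSearch text and kNN indexes used in the actual solution.

```python
import math

# Made-up data: an external (IMDb-like) catalog with an embedding per title,
# and the subset of titles that exist in the customer catalog.
external_catalog = {
    "Toy Story": [0.9, 0.1, 0.0],
    "Toy Story 2": [0.85, 0.15, 0.0],
    "A Bug's Life": [0.8, 0.2, 0.1],
    "Saving Private Ryan": [0.0, 0.9, 0.4],
}
customer_catalog = {"A Bug's Life", "Saving Private Ryan"}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def ooc_search(query, num_hits=1, num_recs=2):
    # Step 1: text match against the external catalog (the role the
    # OpenSearch text index plays in the solution).
    hits = [t for t in external_catalog if query.lower() in t.lower()][:num_hits]
    # Step 2: for each hit, rank in-catalog titles by embedding similarity
    # (the role the OpenSearch kNN index plays).
    return {
        hit: sorted(
            (t for t in customer_catalog if t != hit),
            key=lambda t: cosine(external_catalog[hit], external_catalog[t]),
            reverse=True,
        )[:num_recs]
        for hit in hits
    }

print(ooc_search("toy story"))
# -> {'Toy Story': ["A Bug's Life", 'Saving Private Ryan']}
```

Here the out-of-catalog query "toy story" matches "Toy Story" in the external catalog, and the in-catalog recommendations are ordered by how close their embeddings sit to that title.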
If the customer catalog were mapped to IMDb, it would be easier to glean that the query "toy story" doesn't exist in the catalog and that the top matches in IMDb are "Toy Story," "Toy Story 2," "Toy Story 3," "Toy Story 4," and "Charlie: Toy Story" in decreasing order of relevance with text match. To get within-catalog results for each of these matches, we can retrieve the five closest movies in the customer catalog through kNN embedding similarity (based on the joint KG embeddings) in OpenSearch Service. A typical OOC experience follows the flow illustrated in the following figure.

The following video shows the top five (number of hits) OOC results for the query "toy story" and relevant matches in the customer catalog (number of recommendations). Here, the query is matched to the knowledge graph using text search in OpenSearch Service. We then map the embeddings of the text match to the customer catalog titles using the OpenSearch Service kNN index. Because the user query can't be directly mapped to the knowledge graph entities, we use a two-step approach to first find title-based query similarities and then items similar to the title using knowledge graph embeddings.

In the following sections, we walk through the process of setting up an OpenSearch Service cluster, creating and uploading knowledge graph indexes, and deploying the solution as a web application.

Prerequisites

To implement this solution, you should have an AWS account, familiarity with OpenSearch Service, SageMaker, Lambda, and AWS CloudFormation, and have completed the steps in Part 1 and Part 2 of this series.

Launch solution resources

The following architecture diagram shows the out-of-catalog workflow. You will use the AWS Cloud Development Kit (AWS CDK) to provision the resources required for the OOC search applications. The code to launch these resources performs the following operations:

- Creates a VPC for the resources.
- Creates an OpenSearch Service domain for the search application.
- Creates a Lambda function to process and load movie metadata and embeddings to OpenSearch Service indexes (-LoadDataIntoOpenSearchLambda-).
- Creates a Lambda function that takes as input the user query from a web app and returns relevant titles from OpenSearch (-ReadFromOpenSearchLambda-).
- Creates an API Gateway that adds an additional layer of security between the web app user interface and Lambda.

To get started, complete the following steps:

1. Run the code and notebooks from Part 1 and Part 2.
2. Navigate to the part3-out-of-catalog folder in the code repository.
3. Launch the AWS CDK from the terminal with the command bash launch_stack.sh.
4. Provide the two S3 file paths created in Part 2 as input:
   - The S3 path to the movie embeddings CSV file.
   - The S3 path to the movie node file.
5. Wait until the script provisions all the required resources and finishes running.
6. Copy the API Gateway URL that the AWS CDK script prints out and save it. (We use this for the Streamlit app later.)

Create an OpenSearch Service domain

For illustration purposes, you create a search domain on one Availability Zone in an r6g.large.search instance within a secure VPC and subnet. Note that the best practice would be to set up on three Availability Zones with one primary and two replica instances.

Create an OpenSearch Service index and upload data

You use Lambda functions (created using the AWS CDK launch stack command) to create the OpenSearch Service indexes. To start the index creation, complete the following steps:

1. On the Lambda console, open the LoadDataIntoOpenSearchLambda Lambda function.
2. On the Test tab, choose Test to create and ingest data into the OpenSearch Service index.
The code of this Lambda function can be found in part3-out-of-catalog/cdk/ooc/lambdas/LoadDataIntoOpenSearchLambda/lambda_handler.py:

embedding_file = os.environ.get("embeddings_file")
movie_node_file = os.environ.get("movie_node_file")

print("Merging files")
merged_df = merge_data(embedding_file, movie_node_file)
print("Embeddings and metadata files merged")

print("Initializing OpenSearch client")
ops = initialize_ops()
indices = ops.indices.get_alias().keys()
print("Current indices are :", indices)

# This will take 5 minutes
print("Creating knn index")
# Create the index using knn settings. Creating OOC text is not needed
create_index('ooc_knn', ops)
print("knn index created!")

print("Uploading the data for knn index")
response = ingest_data_into_ops(merged_df, ops, ops_index='ooc_knn', post_method=post_request_emb)
print(response)
print("Upload complete for knn index")

print("Uploading the data for fuzzy word search index")
response = ingest_data_into_ops(merged_df, ops, ops_index='ooc_text', post_method=post_request)
print("Upload complete for fuzzy word search index")

# Create the response and add some extra content to support CORS
response = {
    "statusCode": 200,
    "headers": {
        "Access-Control-Allow-Origin": '*'
    },
    "isBase64Encoded": False
}

The function performs the following tasks:

- Loads the IMDb KG movie node file that contains the movie metadata and its associated embeddings from the S3 file paths that were passed to the stack creation file launch_stack.sh.
- Merges the two input files to create a single dataframe for index creation.
- Initializes the OpenSearch Service client using the Boto3 Python library.
- Creates two indexes for text (ooc_text) and kNN embedding search (ooc_knn) and bulk uploads data from the combined dataframe through the ingest_data_into_ops function.

This data ingestion process takes 5–10 minutes and can be monitored through the Amazon CloudWatch logs on the Monitoring tab of the Lambda function.
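The create_index helper itself isn't reproduced in the post. For orientation, an OpenSearch index that supports kNN search is generally created with a body along the following lines; the field name embedding and the dimension 128 are illustrative assumptions, not values taken from the repo.

```python
# Sketch of a kNN-capable OpenSearch index definition. The settings flag
# enables the k-NN plugin for the index; knn_vector fields hold embeddings.
# Field name and dimension are illustrative assumptions.
KNN_INDEX_BODY = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {"type": "knn_vector", "dimension": 128},
            "name": {"type": "text"},
        }
    },
}

# With an opensearch-py client, this body would be passed as:
#   client.indices.create(index="ooc_knn", body=KNN_INDEX_BODY)
print(KNN_INDEX_BODY["mappings"]["properties"]["embedding"])
```

The text index (ooc_text) needs no kNN settings; a plain text mapping on the title field is enough for fuzzy word search.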
You create two indexes to enable text-based search and kNN embedding-based search. The text search maps the free-form query the user enters to the titles of the movies. The kNN embedding search finds the k closest movies to the best text match from the KG latent space to return as outputs.

Deploy the solution as a local web application

Now that you have a working text search and kNN index on OpenSearch Service, you're ready to build an ML-powered web app. We use the streamlit Python package to create a front-end illustration for this application. The IMDb-Knowledge-Graph-Blog/part3-out-of-catalog/run_imdb_demo.py Python file in our GitHub repo has the required code to launch a local web app to explore this capability. To run the code, complete the following steps:

1. Install the streamlit and aws_requests_auth Python packages in your local virtual Python environment with the following commands in your terminal:

pip install streamlit
pip install aws-requests-auth

2. Replace the placeholder for the API Gateway URL in the code with the one created by the AWS CDK:

api = '<ENTER URL OF THE API GATEWAY HERE>/opensearch-lambda?q={query_text}&numMovies={num_movies}&numRecs={num_recs}'

3. Launch the web app with the command streamlit run run_imdb_demo.py from your terminal.

This script launches a Streamlit web app that can be accessed in your web browser. The URL of the web app can be retrieved from the script output, as shown in the following screenshot. The app accepts new search strings, a number of hits, and a number of recommendations. The number of hits corresponds to how many matching OOC titles we should retrieve from the external (IMDb) catalog. The number of recommendations corresponds to how many nearest neighbors we should retrieve from the customer catalog based on kNN embedding search.
See the following code:

search_text = st.sidebar.text_input("Please enter search text to find movies and recommendations")
num_movies = st.sidebar.slider('Number of search hits', min_value=0, max_value=5, value=1)
recs_per_movie = st.sidebar.slider('Number of recommendations per hit', min_value=0, max_value=10, value=5)
if st.sidebar.button('Find'):
    resp = get_movies()

This input (query, number of hits, and number of recommendations) is passed to the -ReadFromOpenSearchLambda- Lambda function created by the AWS CDK through the API Gateway request. This is done in the following function:

def get_movies():
    result = requests.get(api.format(query_text=search_text, num_movies=num_movies, num_recs=recs_per_movie)).json()

The output results of the Lambda function from OpenSearch Service are passed to API Gateway and displayed in the Streamlit app.

Clean up

You can delete all the resources created by the AWS CDK with the command npx cdk destroy --app "python3 appy.py" --all in the same instance (inside the cdk folder) that was used to launch the stack (see the following screenshot).

Conclusion

In this post, we showed you how to create a solution for OOC search using text and kNN-based search with SageMaker and OpenSearch Service. You used custom knowledge graph model embeddings to find the nearest neighbors in your catalog to IMDb titles. You can now, for example, search for "The Rings of Power," a fantasy series developed by Amazon Prime Video, on other streaming platforms and see how they could have optimized the search result. For more information about the code sample in this post, see the GitHub repo. To learn more about collaborating with the Amazon ML Solutions Lab to build similar state-of-the-art ML applications, see Amazon Machine Learning Solutions Lab. For more information on licensing IMDb datasets, visit developer.imdb.com.
Prima Group Case Study.txt
Solution | Ensuring Customer Satisfaction and Supporting Streaming Platform Subscriber Growth Using AWS, Prima Group has been able to scale its iPrima platform by a factor of 10, helping it increase content availability and serve more users at the same time. “If users note any issues with our streaming service, we get complaints, plus we risk upsetting our advertisers,” says Marek Kouřimský, chief of development at Prima Group. “We now have a stable and reliable service and feel assured we can grow without issues.” AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Learn more » Français Español Prima Group is the first commercial TV station in the Czech Republic. Launched in 1993, it now comprises 10 terrestrial TV channels in Czech Republic and one channel in Slovakia. Its streaming service, iPrima, became available in 2012 and offers a rich diversity of content, including in-house-produced Prima ORIGINALS programming, movies, TV series, sports, and documentaries. It offers both ad-supported, free-to-view services, and ad-free subscriber services, and holds streaming rights for popular international TV shows and movies. iPrima operates on two models: a free-to-view, advertising-supported model and an ad-free subscription model. Content for iPrima is taken from parent company Prima Group’s terrestrial channels, including popular in-house-produced programming—Prima ORIGINALS—and international content. A third-tier subscription model with online-only premium content is in the works to boost platform growth and offer a rival to well-known streaming brands. Learn how »  日本語 2023 Get Started 한국어 Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. 
Learn more » Overview | Opportunity | Solution | Outcome | AWS Services Used The company is now working on its next stage of modernization to connect internal and external services and minimize manual processes. For this, it is using Amazon Managed Streaming for Apache Kafka (Amazon MSK) to securely stream data with a fully managed, highly available Apache Kafka service. “We can now manage new subscribers more easily,” says Kouřimský. “Instead of manually creating an event when a new user comes on board, the system change is automatic.” Outcome | Ongoing Modernization to Minimize Manual Processes Amazon MSK makes it easy to ingest and process streaming data in real time with fully managed Apache Kafka. Learn more » Amazon Fargate Opportunity | Improving Scaling to Maintain TV Content Availability AWS Services Used Amazon MSK 中文 (繁體) Bahasa Indonesia Television company Prima Group, based in the Czech Republic, has migrated its iPrima streaming service platform to AWS to boost its ability to scale, and support the growth of its subscriber base. Using AWS, it has improved scaling by a factor of 10, compared to its previous hosting company, and reduced the size of its IT staff by 50 percent. To support platform stability, it uses Amazon Elastic Kubernetes Service (Amazon EKS). After the migration, it was able to develop Kubernetes clusters in just 14 days, compared to the 2 years it took previously. Ρусский To keep customers satisfied, and support growth of its iPrima streaming service, Prima Group decided to move to a cloud setup built on Amazon Web Services (AWS). By doing this, it could benefit from managed services, modernize its monolithic applications, and improve scaling. iPrima’s previous IT setup relied on a hosting service using a small number of bare-metal servers. With future growth in mind, the company chose AWS to help it increase the stability and availability of its streaming platform. 
This was essential to Prima Group, so it could scale to meet traffic peaks, provide a good customer experience, and support the introduction of new subscriber models.

Prima Group also uses AWS for its news website—operated in partnership with CNN—to speed up image loading times, making the process 1.5 times faster than before. It uses Amazon CloudFront, which securely delivers content with low latency and high transfer speeds. “We’ve been very impressed with this service, and we are now looking at using it for video files,” adds Kouřimský.

By building on AWS, Prima Group was able to get its Kubernetes clusters up and running in just 14 days, compared with the 2 years it took previously. “Our hosting company spent a long time getting Kubernetes clusters into production—even then, they weren’t in proper working order,” says Kouřimský. “It’s hard to start from scratch when developing Kubernetes, and there are not many people who are proficient in it. With AWS, you can simply click and have clusters available quickly—plus, as a managed service, all the painful administration parts are managed automatically.”

Prima Group Boosts Streaming Uptime and Creates Platform for Growth on AWS

Using AWS, Prima Group’s streaming platform is set for growth. “Managing our infrastructure is much easier, plus now we scale infinitely and keep our streaming customers happy,” says Kouřimský. “This gives us total peace of mind for the future of our streaming service.”

Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.

Prima Group began by moving its streaming platform databases and applications to AWS as part of a two-phase project, completing the migration in 5 months.
The new setup meant 50 percent fewer people were needed for infrastructure maintenance, so more of the IT team could focus on higher-value tasks such as product development.

Content uptime is important for Prima Group’s streaming platform. The platform regularly streams selected Prima ORIGINALS a week ahead of the official broadcast dates and commonly sees a surge in viewers when the most popular shows are aired. The company started to notice scaling issues during peak prime-time periods, especially when two of the Czech Republic’s most beloved TV series—ZOO and Slunečná—were shown. With availability starting to become an issue, Prima Group needed to address its ability to scale and grow.

About Prima Group

Established in 1993 as the first commercial channel in the Czech Republic, Prima Group has grown steadily and today broadcasts 10 terrestrial television channels nationwide. It also offers a streaming service, iPrima. Launched in 2012, the service offers a mix of original TV series, movies, sports, news, and documentary content.

Solution | Fast Kubernetes Development and Automated Cluster Management

The first part of Prima Group’s modernization process has been to containerize applications to increase platform stability and support further scaling. For this, it is using Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises.
Processing Data 10x Faster Using Amazon Redshift Serverless with BlocPower _ BlocPower Case Study _ AWS.txt
The BlocPower team worked alongside the AWS team to create a proof of concept to see how Amazon Redshift Serverless would affect the performance and handling of the increased data volume for BlocMaps. “We performed benchmark tests with BlocMaps, which is what really raised our eyebrows,” says Davis. “Our application performed so much better, and our billing benefited from Amazon Redshift Serverless.” Specifically, the startup could process and query its data in minutes—10 times faster compared with its previous architecture.

BlocPower’s mission is to make buildings in the United States smarter, greener, and healthier. The company has successfully implemented electrification, solar, and other energy-efficiency measures in more than 4,000 buildings to date.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Solution | Processing Data 10x Faster to Deliver Actionable Energy Analytics

Since 2016, BlocPower has been building its data processing pipeline on AWS, adopting several cloud-based compute solutions, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Initially, its DevOps team scaled its data processing pipeline by selecting different Amazon EC2 instances for running its clusters, which could take 2–3 hours to complete. “As we were gaining more customers on BlocMaps and working with more data, we were having to scale our cluster horizontally,” says Ankur Garg, director of data architecture and analytics at BlocPower.
Climate technology leader BlocPower wanted to improve the user experience of its flagship product, BlocMaps—a software-as-a-service (SaaS) solution that provides actionable insights for building decarbonization to municipalities and utility companies—so that it could more effectively support its customers in their efforts to reduce greenhouse gas emissions in their buildings. With clean power at the core of its mission, BlocPower built a high-performance compute environment on Amazon Web Services (AWS). BlocPower can now minimize its own carbon footprint while processing data from over 100 million energy profiles of buildings across the United States.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Opportunity | Using Amazon Redshift Serverless to Improve Data Warehousing for BlocPower

Afterward, BlocPower decided to adopt Amazon Redshift Serverless. In doing so, the company reduced the amount of time that its DevOps engineers spent on scaling its clusters. Additionally, by implementing Amazon Redshift Serverless alongside Amazon S3 and Amazon Redshift, BlocPower gained the ability to query its data across numerous data sources, including Amazon S3 buckets and data pulled with remote APIs through AWS Glue, which helps companies discover, prepare, and integrate all their data at virtually any scale. BlocPower intermittently runs processes to merge data sources and perform data transformations. Then, the team loads the results into Amazon Redshift.
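Queries against a Redshift Serverless workgroup can be driven programmatically through the Redshift Data API, which submits SQL without managing connections or clusters. The sketch below is hypothetical: the workgroup name, database, and `building_profiles` schema are invented for illustration and are not BlocPower's actual setup.

```python
def build_profile_query(city: str, min_year: int) -> str:
    """Build SQL for one BlocMaps-style rollup (hypothetical table and columns)."""
    # Naive guard for the example; real code should use parameterized queries.
    assert city.replace(" ", "").isalpha()
    return (
        "SELECT building_id, est_co2_tons, retrofit_score "
        "FROM building_profiles "
        f"WHERE city = '{city}' AND built_year >= {int(min_year)} "
        "ORDER BY retrofit_score DESC LIMIT 100"
    )

def run_serverless_query(sql: str, workgroup: str = "blocpower-wg",
                         database: str = "analytics"):
    """Submit via the Redshift Data API and poll until done (needs AWS credentials)."""
    import time
    import boto3
    client = boto3.client("redshift-data")
    stmt = client.execute_statement(WorkgroupName=workgroup,
                                    Database=database, Sql=sql)
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(desc.get("Error", desc["Status"]))
    return client.get_statement_result(Id=stmt["Id"])
```

Because the workgroup scales automatically, the same call pattern works whether the query touches one city or the full 100-million-profile dataset.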
After introducing Amazon Redshift Serverless clusters that automatically scale to usage spikes, BlocPower improved its runtime performance by a factor of 10. “We can query our data in near real time,” says Davis. “We also saw an improvement in our APIs. Those two factors made using Amazon Redshift Serverless a no-brainer.”

Processing Data 10x Faster Using Amazon Redshift Serverless with BlocPower

The company had also migrated its data to a combination of cloud-based data storage solutions, including Amazon Redshift, which is a fast, simple, and widely used cloud data warehouse. BlocPower stores the data that it gathers from 100 million building profiles in Amazon Simple Storage Service (Amazon S3), which offers object storage that is built to retrieve any amount of data from anywhere. As the complexity of BlocPower’s data profiles grew, the company wanted to increase access to more compute resources and resource management options for its teams. The startup was interested in the benefits of Amazon Redshift Serverless and engaged the AWS team. “The AWS team gave us an introduction to Amazon Redshift Serverless, which was very helpful and resolved any kind of apprehension that we had with using it moving forward,” says Sean Davis, data architect at BlocPower.

These performance gains on the backend of its BlocMaps application have rendered a smoother user experience for BlocPower’s customers.
By using Amazon Redshift, the startup has also reduced latency on the front end of its application, which is critical when demonstrating the application to new customers. Customers can view, filter, and visualize decarbonization metrics for buildings in specific geographic locations faster than before. Under its previous model, the BlocMaps application could take 20–30 seconds to load building profiles for its customers. Now, the application delivers these insights in under 5 seconds—an improvement that has resulted in positive customer feedback. “The performance of our BlocMaps applications is one of our top priorities from a revenue standpoint,” says Garg. “Good word of mouth helps us enter into new markets and new cities.”

Outcome | Investing in a Serverless-First Approach to Support Social Equity

BlocPower will continue to investigate AWS serverless solutions to improve the performance of its products. Based on its experience with this project, the company plans to migrate the Internet of Things data that it collects to Amazon Redshift Serverless as well. “The amount of time that it would’ve taken us to deliver insights from raw data would’ve been unimaginable if we had tried to set up our infrastructure on premises,” says Garg. “Working on AWS has been a huge advantage for us. The amount of time and money that we save helps us deliver energy insights to additional low- and moderate-income households.”

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Amazon Redshift Serverless makes it easier to run and scale analytics without having to manage your data warehouse infrastructure.
As the number of energy profiles grew, BlocPower needed a data warehouse that would automatically meet its workload-performance requirements and reduce the administrative burden. In July 2022, BlocPower learned about one of the latest AWS product offerings, Amazon Redshift Serverless, which companies use to get insights from their data in seconds without having to manage data warehouse infrastructure. BlocPower decided to test Amazon Redshift Serverless in its AWS environment, and it experienced a decrease in processing times by 90 percent while optimizing compute costs. These performance gains positioned the startup to streamline its DevOps workflows, allowing it to focus more on its decarbonization efforts.

About BlocPower

Founded in 2014, BlocPower is a Brooklyn-based leader that focuses on making American cities greener, smarter, and healthier. With a diverse, inclusive workforce that consists of 60 percent minorities and 30 percent women, the BlocPower team provides energy analytics to building managers and property owners in over 10 cities, helping them understand the potential of retrofitting their buildings with renewable energy sources. As of 2022, BlocPower successfully implemented electrification, solar, and other energy-efficiency measures in over 4,000 buildings.

Not only has BlocPower increased its revenue opportunities, but the startup has also optimized its compute costs. Having adopted Amazon Redshift Serverless, BlocPower no longer pays for its clusters’ idle time. “The serverless model has been perfect for us,” says Davis. “We pay less for our processes, and we get more compute resources when we need it. Overall, it’s been a very positive experience.”
Purple Technology Case Study _ AWS Step Functions.txt
In addition to better compliance with regulations through improved transparency, the Purple IT team has improved and accelerated software development processes using AWS.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

The Czech-based company builds apps that complement online trading platforms and support the changing and demanding needs of brokers. Purple’s solution enables tens of thousands of clients to trade many billions of dollars of assets each month.

In addition, brokers and traders must comply with rules that change from country to country. These rules are subject to sudden changes in regulation—and even to evolving legal interpretations.

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools.

Purple Technology Responds Rapidly to Changing Regulations and Customer Needs Using AWS

Purple needed a more transparent and effective way to manage the complex ruleset that governed customer onboarding and allow it to respond more quickly to changing rules. It found the solution it needed using AWS Step Functions, a low-code, visual workflow service that developers can use to build applications.
“Onboarding involves complex processes that we have to be able to understand and update easily,” says Jan Červinka, director of engineering at Purple Technology. “We can now map and design all of these processes using AWS Step Functions.”

To register new trading accounts with brokers, users need to go through a number of steps to qualify. The registration process checks many conditions, some via API, to confirm that the new customer is not disqualified from trading. Purple’s onboarding process also supports Know Your Customer (KYC) user verification and anti-money laundering (AML) processes.

Purple wanted greater transparency and control to improve its services and reduce the in-house resources required to maintain its applications. Using Amazon Web Services (AWS), Purple found a way to easily manage changes to the backend ruleset and make the ruleset more transparent to internal and external stakeholders.

While the Purple application has a user-friendly front end, the backend was a complex code base. Changes to the rules required developers to delve into the code to make amendments and make sure the app was compliant. Questions from product managers about the rules and processes required developers to create diagrams that would quickly become outdated.

To simplify this maintenance process further, Purple built a Slack extension to allow rules to be repaired and amended from the messaging platform. This also means customer service teams at brokers can operate the tool and provide a responsive service to their own customers. “Using AWS we have significantly improved the self-service capabilities of the customer support teams,” says Červinka. “That leads to a much faster time to resolution of certain issues customers may encounter.”
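A registration flow like the one described (identity verification, AML screening, then an eligibility decision) can be expressed in Amazon States Language, the JSON dialect that AWS Step Functions executes and visualizes. The state names, branching logic, and Lambda ARN placeholders below are illustrative assumptions, not Purple's actual workflow.

```python
import json

def onboarding_state_machine(kyc_lambda_arn: str, aml_lambda_arn: str) -> str:
    """Return a simplified trader-onboarding workflow as an ASL definition."""
    definition = {
        "Comment": "Simplified onboarding sketch: KYC, then AML, then a decision.",
        "StartAt": "VerifyIdentity",
        "States": {
            "VerifyIdentity": {
                "Type": "Task",
                "Resource": kyc_lambda_arn,  # Lambda doing the KYC check
                "Next": "ScreenForAML",
                "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            },
            "ScreenForAML": {
                "Type": "Task",
                "Resource": aml_lambda_arn,  # Lambda doing the AML screening
                "Next": "IsEligible",
            },
            "IsEligible": {
                "Type": "Choice",
                "Choices": [
                    {"Variable": "$.eligible", "BooleanEquals": True,
                     "Next": "ApproveAccount"}
                ],
                "Default": "RejectAccount",
            },
            "ApproveAccount": {"Type": "Succeed"},
            "RejectAccount": {"Type": "Fail", "Error": "NotEligible",
                              "Cause": "Applicant failed KYC/AML checks"},
        },
    }
    return json.dumps(definition, indent=2)
```

Because the definition is declarative JSON, a rule change for one territory becomes a small, reviewable edit to a Choice state rather than a dive into application code, which is the transparency gain the article describes.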
Responding to Regulatory Changes

On AWS, Purple also has greater freedom to innovate. Using AWS infrastructure as code means that developers can spin up test environments to work on new features. These test environments consume fewer resources than production sites. “Using AWS we can experiment and play with new ideas. And we have the confidence that we can stay on top of the changes to regulations through better control and transparency with AWS services,” says Pýrek.

Using AWS Step Functions, the company always has up-to-date product documentation, as it is automatically generated. Now if a regulator or legal counsel asks to review the application’s processes, Purple can share the documentation to demonstrate how it complies. “It’s much easier to produce visual reports and diagrams for our compliance stakeholders,” says Červinka. “That frees up IT teams from having to provide complex, time-intensive—and not really fun—support so they can instead focus on building new features.”

FinTech company Purple Technology builds applications and services for brokerage firms to onboard customers efficiently. End users self-manage their accounts and portfolios, which leaves brokerages free to focus on core functions such as client services and risk management. Users creating new trading accounts with brokers need to follow a stringent onboarding process that complies with complex rules and regulations to verify their identities. Using AWS, Purple has simplified the way these rulesets are coded into the app, making it easier for non-technical employees to manage the application and keep on top of changing regulations. But managing those rules was a complex and time-consuming process, often requiring developer resources that would be better spent on product innovation, not maintenance.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Developers can also devote more time to improving the platform rather than troubleshooting issues, because AWS Step Functions has reduced the time required for debugging. This has increased the team’s speed of development.

Trading and investing online relies on transparency, and on trust in the platform, in the brokers, and in the identity of traders. Purple Technology helps build that trust.

“Using AWS, we have greater visibility into our complex processes, making them simple to visualize, manage, and update. This means we can be more responsive to any new or changing regulations and to customer needs,” says Jan Červinka, director of engineering at Purple Technology.

Communication with business decision-makers on new software features is now more productive. “It’s easy to read and modify AWS Step Functions,” says Filip Pýrek, serverless architect at Purple Technology. “We use it for prototyping when designing features, so non-technical colleagues can understand and discuss new processes.”

About Purple Technology

Based in the Czech Republic, Purple Technology is a financial technology company founded in 2011. It provides an online trading platform for brokerages and their clients around the world.

Faster Development and Product Maintenance

As a FinTech company, Purple’s solution has to comply with a huge number of legal and regulatory rules that vary from territory to territory and are subject to constant change. Purple’s solution needs to accurately capture these rules to run checks during new trader account registrations.
AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. Workflows manage failures, retries, parallelization, service integrations, and observability so developers can focus on higher-value business logic.

Using AWS Step Functions, Purple Technology maps out the workflows for each process so that it can easily fix any issues and demonstrate to regulators how customer checks are carried out. In addition, rather than drawing on developers to make changes, Purple can use trained, non-technical people to carry out maintenance.
Queensland University of Technology Advances Global Research on Rare Diseases Using the AWS Cloud.txt
Bellgard shared that the TRRF is applicable across all clinical care settings. As the digital research platform is cloud-based, it acts as a central coordinating data repository, which can be accessed by patients, clinicians, and researchers from any device. This allows for current and convenient information sharing, closer engagement between patients and clinicians, and ultimately, improved patient care.

“AS is a complex neurogenetic condition with multiple genotypes and phenotypes,” states Megan Cross, chairperson of the Foundation for Angelman Syndrome (FAST). “Prior to the creation of this platform, there was no capacity to collect, collate, and disseminate patient-reported data on a global scale. The TRRF has allowed parents of patients to engage with research, empowering their journey with AS.”

eResearch@QUT selected Amazon Relational Database Service (Amazon RDS) with Amazon Aurora Serverless to deliver high availability, compute performance, and scalability of its databases. For added efficiencies, the eResearch@QUT team deployed Amazon Elastic Container Service (Amazon ECS) on AWS Fargate, which helped eliminate the management of virtual machines and made it easier for the team to focus on application development and automate the deployment of new registries. Development of the TRRF system has received funding from both nonprofit organizations and national competitive funding schemes, including MTPConnect and the National Health and Medical Research Council. Operating on a strict budget requires the eResearch@QUT team to cost-effectively manage the TRRF’s scalability, security, and compute capacity to support the global AS registry.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.
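Automating the deployment of a new registry on ECS with Fargate could, for example, start from a task definition registered per registry. This is a sketch under assumptions: the container image, CPU and memory sizes, port, and naming scheme are illustrative and are not QUT's actual configuration.

```python
def registry_task_definition(registry_name: str, image: str) -> dict:
    """Build an ECS task definition for one disease registry (Fargate launch type)."""
    return {
        "family": f"trrf-{registry_name}",      # hypothetical naming convention
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",                # each task gets an ENI inside the VPC
        "cpu": "512",
        "memory": "1024",
        "containerDefinitions": [
            {
                "name": registry_name,
                "image": image,                 # e.g. a TRRF application image
                "essential": True,
                "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
                "environment": [{"name": "REGISTRY_NAME", "value": registry_name}],
            }
        ],
    }

def deploy(task_def: dict):
    """Register the definition with ECS (requires AWS credentials)."""
    import boto3
    ecs = boto3.client("ecs")
    return ecs.register_task_definition(**task_def)
```

With the definition templated this way, standing up a registry for a new disease is a parameter change rather than a fresh server build, consistent with the hours-not-days deployment times the article reports.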
AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).

Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora. It automatically starts up, shuts down, and scales capacity up or down based on your application’s needs.

To learn more, visit aws.amazon.com/education.

Solution | Delivering an Efficient, Highly Available and Secure Platform

The Office of eResearch in Queensland University of Technology supports QUT researchers and external stakeholders, using innovative end-to-end digital and data solutions and strategies, to deliver real-world impact.

The platform has also opened the possibility of launching registries for other rare diseases, faster. Bellgard, who also currently chairs the Asia Pacific Economic Community Rare Disease Network, mentioned that the eResearch@QUT DevOps team can now deploy a complete registry for other research projects within hours, compared to days prior to working with AWS.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Professor Matthew Bellgard, director of eResearch at QUT and TRRF project lead, said, “Working with Amazon Web Services (AWS) to host our open-source digital platform has helped the team to continually apply optimizations reducing the total cost of ownership and improve the overall security posture of the platform.”

Queensland University of Technology Advances Global Research on Rare Diseases Using the AWS Cloud

The Office of eResearch at the Queensland University of Technology developed an open-source digital platform to collect and analyze health data in an Amazon Virtual Private Cloud (Amazon VPC) to secure patient information.

“With multiple layers of security and, in particular, AWS’s service level agreement of up to 99.99% uptime, we are assured of protecting the patient data we collect and meeting the stringent data handling and governance guidelines of Australia and other countries,” shared Bellgard.

Opportunity | Setting Up a Digital Health Framework for Clinical Research

The Office of eResearch at the Queensland University of Technology (eResearch@QUT) designs digital platforms to support and facilitate research projects from around the world. It helps researchers to apply real-world applications of digital technology, including custom cloud solutions and quantitative research methods with machine learning, to promote data-driven discoveries. To aid rare disease research, the eResearch@QUT team has developed the Trial Ready Registry Framework (TRRF), an open-source digital platform to collect and analyze health data. The TRRF has been deployed for Angelman Syndrome (AS), a rare neurodevelopmental disorder. Through this cloud-based platform, individuals and their parents or guardians living with AS from around the world can self-register and share patient-reported information to accelerate clinical research on the natural progression of the disease and facilitate clinical trial participation.
Outcome | Expanding the Use of the TRRF for Other Diseases and Clinical Care Settings

Most recently, the TRRF has been deployed to establish the first Australian patient and clinical Motor Neurone Disease (MND) registry through the MiNDAUS partnership. This is a national collaboration of clinicians and scientists, consumer advocacy groups, and consumers to improve person-centered care for people living with MND by providing data-driven policy direction in health care and research. Associate Professor Paul Talman, clinical lead of the MiNDAUS Registry, shared, “The TRRF allows us to move from a rather static state where researchers obtain snapshots of data at any given timepoint to a more dynamic health care tool that the patients control.”

Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

eResearch@QUT housed the AS registry in an Amazon Virtual Private Cloud (Amazon VPC) to secure patient information. AWS WAF was added to monitor and block web traffic that may pose a threat to the platform.
Query Response Time Improved Using Amazon Redshift Serverless _ Playrix Case Study _ AWS.txt
“We have a long-term relationship with AWS and use AWS solutions everywhere—in our games, development, researching, and more,” says Ivanov. “Adding Amazon Redshift Serverless to our solution has been another win.”

Since adopting Amazon Redshift Serverless, Playrix has improved its ability to rapidly analyze near-real-time player data and allocate marketing spend as part of its demand-generation activities. Handling spikes in user queries is no longer a problem. The company is also better equipped to perform research using historical player data to identify and reengage inactive gamers. In the past, running queries on old data risked disrupting other critical processes, so the Playrix team avoided doing so.

Query Response Time Improved Using Amazon Redshift Serverless with Playrix

Ireland-based Playrix is one of the largest gaming companies in Europe and is among the top three most successful mobile developers in the world. Every month, more than 100 million people play the company’s popular games, which include Gardenscapes, Fishdom, Manor Matters, Homescapes, Wildscapes, and Township. Part of Playrix’s marketing strategy is to analyze past player data to identify inactive players, reengage them, and inspire them to start gaming again. To do so, it needed to efficiently analyze a massive quantity of player data, dating back 4–5 years, without disrupting other compute processes. In addition, Playrix wanted to achieve more predictable response times when providing one-time analytics to help allocate marketing spend. “Our stakeholders want to see dashboards with data from the previous day, including financial data used for quick decision-making,” says Igor Ivanov, technical director at Playrix.
“So, it’s important for us to avoid any delays in the data.”

The company used Amazon Redshift to achieve these aims, eventually upgrading to three nodes of Amazon Redshift to meet its scaling needs. However, the company still had 600 TB of data remaining to migrate to Amazon Redshift and realized that three nodes weren’t enough. When Amazon Redshift Serverless became available, Playrix knew that it was the right solution to house the company’s data and to meet its needs during times when higher performance is necessary. “Amazon Redshift Serverless is great for achieving the on-demand high performance that we need for massive queries,” says Ivanov.

Playrix began implementing Amazon Redshift Serverless in April 2022 and finished in July of that year. Initially, as a proof of concept, Playrix had upgraded its cluster from 3 to 12 nodes and saw how much more efficiently its teams could perform complicated analyses. When Amazon Redshift Serverless became available, Playrix was one of the first companies to pilot the service. The company migrated its remaining 600 TB of data from the past 4–5 years into an Amazon Redshift cluster, where it can also be accessed using Amazon Redshift Serverless—no need to store two copies of the data. Using Amazon Redshift Serverless, Playrix can query its historical data without disruption to regular analytics jobs. Playrix added Amazon Redshift Serverless to its provisioned cluster using the data-sharing feature, so unpredictable one-time queries and regular queries can access the same data—resulting in cost savings for Playrix. Using Amazon Redshift Serverless, the company can not only rapidly run queries on past data but has also decreased its response times to 4–5 minutes.
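The data-sharing arrangement described, where a provisioned producer cluster exposes one copy of the data to a serverless consumer, is configured with a handful of Redshift SQL statements. The share, schema, and table names below are illustrative assumptions, and the namespace GUIDs are placeholders, not Playrix's actual values.

```python
# Placeholders for the producer cluster's and consumer workgroup's namespace GUIDs.
PRODUCER_NAMESPACE = "producer-guid-placeholder"
CONSUMER_NAMESPACE = "consumer-guid-placeholder"

def producer_statements(share: str = "history_share") -> list:
    """SQL run on the provisioned cluster that owns the historical data."""
    return [
        f"CREATE DATASHARE {share}",
        f"ALTER DATASHARE {share} ADD SCHEMA analytics",
        f"ALTER DATASHARE {share} ADD TABLE analytics.player_events",
        # Grant the serverless workgroup's namespace read access to the share.
        f"GRANT USAGE ON DATASHARE {share} TO NAMESPACE '{CONSUMER_NAMESPACE}'",
    ]

def consumer_statements(share: str = "history_share") -> list:
    """SQL run in the serverless workgroup; queries read the single shared copy."""
    return [
        f"CREATE DATABASE history FROM DATASHARE {share} "
        f"OF NAMESPACE '{PRODUCER_NAMESPACE}'",
        # A one-time research query (e.g. find lapsed players) now runs on
        # serverless compute without touching the cluster's regular jobs.
        "SELECT player_id, MAX(event_ts) AS last_seen "
        "FROM history.analytics.player_events "
        "GROUP BY player_id "
        "HAVING MAX(event_ts) < DATEADD(day, -90, GETDATE())",
    ]
```

The design point is that the share grants access to live data in place, so unpredictable historical queries get on-demand compute while the provisioned cluster keeps serving the daily dashboards.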
“For analysts, it’s very important to be able to use the history of our games for decision-making,” says Ivanov. “Now that we’re using Amazon Redshift Serverless to more efficiently analyze results from the past 4 years, we can develop more accurate machine learning models.”

Mobile gaming company Playrix, which had already been using solutions from Amazon Web Services (AWS), wanted to advance its use of Amazon Redshift, the fastest and most widely used cloud data warehouse, to enhance the analytics it uses to market to players. The company had successfully used Amazon Redshift and other AWS services for 5 years but wanted to scale its data analytics needs without disrupting other systems and processes, particularly when analyzing past player data.

Now that it uses Amazon Redshift Serverless as part of its solution for analyzing player data, Playrix is equipped to run massive queries on player data more cost-effectively and without downtime, helping the company get more value out of its historical data. The resulting analytics drive marketing strategies to reengage inactive players and generate sales revenue. Due to its ongoing success using AWS solutions, Playrix plans to continue using AWS for data analysis and other business needs.

In 2022, Playrix began using Amazon Redshift Serverless, a service that makes it easier for companies to run and scale analytics without having to manage data warehouse infrastructure, alongside Amazon Redshift. Since adopting Amazon Redshift Serverless, Playrix has improved response times for queries on massive amounts of historical data, improved its use of marketing analytics to increase game sales, and reduced its monthly costs by 20 percent.
About Playrix

Based in Ireland, Playrix is one of the largest gaming companies in Europe and is among the top three most successful mobile developers in the world. Each month, over 100 million people play the company’s games, which include hits such as Gardenscapes, Fishdom, and Manor Matters.

Playrix has also achieved significant cost savings now that it uses Amazon Redshift Serverless as part of a more flexible architecture featuring fixed clusters. The company saves 20 percent of the cost of its marketing stack and has decreased its cost of customer acquisition. In addition, analysts at the company now work more productively and save time when performing complex operations. “We now have more time for experimenting, developing solutions, and planning new research,” says Ivanov.

AWS Services Used

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Amazon Redshift Serverless makes it easier to run and scale analytics without having to manage your data warehouse infrastructure.
Rackspace Automates Infrastructure Management across Cloud Providers Using AWS Systems Manager _ Rackspace Case Study _ AWS.txt
SmartTickets—a component in VM Management and other Rackspace services that performs automatic remediation and gathers data in response to monitoring events in customer systems—handled more than 38,000 incidents across all of Rackspace’s managed products in just 2 months, between August and September 2021. Of those incidents, Rackspace used AWS Systems Manager to send 10,660 automated responses, which not only saved 1,480 labor hours and reduced costs but also drove faster response times for customers. Overall, Rackspace automated 70 percent of manual remediation, and it uses AWS Systems Manager to automatically resolve many of those issues. “Now we can provide services to customers at more economical rates,” says Prewitt.

Manually managing hundreds of thousands of compute instances across multicloud and hybrid environments is a tremendous challenge—not to mention one that can become expensive. Technology services company Rackspace Technology (Rackspace) set out to resolve that dilemma for its customers by building a solution on Amazon Web Services (AWS).

On AWS Systems Manager, Rackspace’s VM Management reduces complexity for customers by providing a single-pane view of their environments, even hybrid and multicloud ones. “More or less everything that AWS Systems Manager can do is exposed through an API,” says Gignac. That capability means Rackspace can automatically aggregate all the infrastructure data on AWS Systems Manager and expose it to customers through a user-friendly control panel. Previously, compiling data on disparate systems was challenging for customers. “Using the consistent dashboard improves customers’ security and peace of mind because they better understand what is powering their applications,” says Prewitt. With that visibility, decision makers can be agile and quickly adapt to industry changes to pursue business goals.
Learn how Rackspace Technology used AWS Systems Manager to automate management of multicloud and hybrid infrastructures, saving hundreds of labor hours monthly, cutting costs, and reducing complexity.

Solution | Supporting Automation, Staff Productivity, and Transparency on AWS

Rackspace also uses Amazon CloudWatch, a monitoring and observability service, to support VM Management and other core offerings. The Amazon CloudWatch agent on the VMs performs monitoring and alerting based on the events happening in customers’ infrastructure. During the same 2-month span in 2021, Rackspace used Amazon CloudWatch to ingest 14,670 alarm events across all its products that use the AWS service. Rackspace also used AWS Systems Manager to automate more than 150 runbooks on its Advanced Monitoring & Resolution solution, which provides real-time monitoring and alerts for customers’ infrastructure. Each runbook performs diagnostics and troubleshooting on a specific issue detected using Amazon CloudWatch. “Instead of having to manually gather that information, Rackspace employees can see it right there,” says Prewitt.

Rackspace plans to work with customers to develop custom runbooks instead of generic ones. “In some cases, we’ll use AWS Systems Manager to automate and orchestrate the response and resolution of those runbooks,” says Gignac.

Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
Managing multicloud environments at scale reliably and cost-effectively was a challenge because organizations had to manually perform activities across a fleet of hundreds of thousands of different compute instances. If the Rackspace team detected a security vulnerability on a customer’s system or a customer requested a patching activity, a Rackspace employee had to log in to the customer’s infrastructure, investigate and troubleshoot the issue, and perform manual patching. “Having humans doing that one by one on a large scale is not sustainable,” says Brad Gignac, principal engineer at Rackspace. “It also delays resolution time.”

AWS Systems Manager is a secure end-to-end management solution for resources on AWS, on premises, and on other clouds.

Called VM Management, the solution supports Rackspace in managing customers’ virtual machines (VMs) across AWS or other cloud providers and multicloud environments. It runs on AWS Systems Manager, which supports managing servers running on AWS and in a user’s on-premises data center through a single interface. Using AWS Systems Manager to support VM Management and several other of its managed services, Rackspace transformed its core offerings from manual, resource-intensive processes to highly scalable, automated, simple solutions that reduce labor and decrease costs for Rackspace and its customers.
Opportunity | Finding Scalability on AWS Systems Manager

VM Management automates the traditionally manual management of VMs or bare-metal infrastructure. Historically, organizations have each needed a large information technology team to complete time-consuming tasks such as patching, agent distribution, server diagnostics, and issue remediation. “AWS Systems Manager has been a cornerstone of the automation and capabilities that we’ve built,” says Prewitt. Now customers can outsource that responsibility to Rackspace and eliminate the cost and complexity of patching their own infrastructure. The automation also improves security by avoiding errors associated with manual tasks.

In 2015, Rackspace began taking advantage of AWS Systems Manager for various products, and in 2019 it extended its use of AWS services to other cloud environments. Since 2019, Rackspace has run VM Management on AWS Systems Manager to power patching activities across all the major cloud providers it supports. Using AWS Systems Manager, Rackspace performs mass patching at scale, covering more than 62,000 VMs across all its managed services. The company also reduced overhead and improved support efficiency by using a single solution.

About Rackspace Technology

Founded in 1998, Rackspace Technology is a global cloud solutions and services company that specializes in creating and managing multicloud solutions across infrastructure, applications, data, and security. It serves customers in 120 countries.

Rackspace helps organizations across 120 countries adopt modern technologies and intelligently manage and optimize them.
The company specializes in creating solutions for hybrid and multicloud environments. “Many customers want us to shepherd them through the complexity and help them best take advantage of the technology,” says Josh Prewitt, chief product officer at Rackspace. Since first using AWS in 2015, Rackspace has transformed from building and running many of its applications internally to building them on AWS and is now an AWS Partner. On AWS, Rackspace solved a major industry challenge with a solution that saved time, cut costs, and reduced complexity for its customers and itself. “When things go wrong, customers expect Rackspace to step in and act swiftly to solve their problem,” says Prewitt. “Using AWS Systems Manager, we can do that much more quickly.”

Rackspace needed a solution that could run both on premises and on the cloud. “We wanted one tool to use across the full suite of solutions that Rackspace manages,” says Gignac. AWS Systems Manager met that requirement and offered programmability. “That’s a key differentiator of AWS: we can use AWS Systems Manager to run shell scripts on individual VMs and do advanced orchestration,” Gignac continues.
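Gignac’s point about running shell scripts on individual VMs maps to Systems Manager Run Command. A minimal sketch of that pattern, assuming invented instance IDs and commands, is below; the actual dispatch requires AWS credentials and SSM-managed instances, so it is deferred to a separate helper.

```python
def build_run_command(instance_ids: list, commands: list) -> dict:
    """Build kwargs for SSM send_command using the built-in
    AWS-RunShellScript document, which runs a shell script on each
    managed instance in the list."""
    return {
        "InstanceIds": list(instance_ids),
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {"commands": list(commands)},
        "Comment": "Automated remediation (illustrative)",
    }

def send_to_fleet(request: dict) -> str:
    """Dispatch the command; needs credentials and SSM-managed instances."""
    import boto3  # deferred so the builder above stays usable offline
    ssm = boto3.client("ssm")
    return ssm.send_command(**request)["Command"]["CommandId"]
```

The same request shape scales from one VM to a fleet, which is the core of how one tool can replace per-machine manual logins.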
Razer Deepened Gamer Engagement using Amazon Personalize _ Video Testimonial _ AWS.txt
Razer Deepened Gamer Engagement Using Amazon Personalize

Razer, the world’s leading lifestyle brand for gamers, wanted to provide personalized hardware recommendations across a number of different applications and data domains to deepen engagement across its growing number of gamers. The company was keen to test the possibilities of machine learning (ML). However, as a small team, maintaining the infrastructure that supports and scales the resources for training and serving a recommendation model—while keeping recommendations accurate and applicable across multiple business domains—posed a challenge. Razer turned to Amazon Web Services (AWS) for a solution and used the intelligent user segmentation and advanced filtering features of Amazon Personalize. Click-through rates for Razer Synapse, its unified cloud-based hardware configuration tool, were 10x better than industry standards using Amazon Personalize, generating additional revenue for the business.

Amazon Personalize allows developers to quickly build and deploy curated recommendations and intelligent user segmentation at scale using machine learning (ML).

“Implementing personalized recommendations in Razer Synapse has enabled us to see a click-through rate 10x better than industry standards, generating additional revenue for the business,” says Hong Jie Wee, big data lead at Razer, Inc.
“Leveraging ML and Amazon Personalize made it easier and more convenient for us to maintain a personalization system,” Wee says.

Learn how Razer built and maintained a robust personalization engine to keep gamers engaged using Amazon Personalize.

About Razer

Razer is a leading lifestyle brand for gamers. With a fan base that spans every continent, the company has designed and built a gamer-focused marketplace of hardware, software, and services.
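The advanced-filtering feature mentioned above can be sketched as a request to a Personalize campaign. This is an illustrative sketch, not Razer’s code: the campaign and filter ARNs, the user ID, and the “exclude items already owned” filter are all invented placeholders.

```python
def build_recommendation_request(campaign_arn: str, user_id: str,
                                 filter_arn: str = None,
                                 num_results: int = 10) -> dict:
    """Build kwargs for personalize-runtime's get_recommendations call.
    filterArn applies an advanced filter (for example, excluding hardware
    the gamer already owns)."""
    request = {
        "campaignArn": campaign_arn,
        "userId": user_id,
        "numResults": num_results,
    }
    if filter_arn:
        request["filterArn"] = filter_arn
    return request

def recommend(request: dict) -> list:
    import boto3  # requires AWS credentials and a deployed campaign
    runtime = boto3.client("personalize-runtime")
    response = runtime.get_recommendations(**request)
    return [item["itemId"] for item in response["itemList"]]
```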
Reaching Remote Learners Globally Using Amazon CloudFront _ Doping Hafiza Case Study _ AWS.txt
Learn how Doping Hafiza transformed its educational technology services on AWS with the help of Sufle, an AWS Partner.

“We can upload a video and convert, transport, and distribute it automatically to our content delivery network using AWS services,” says Habil Bozali, head of software architecture at Doping Hafiza.

Opportunity | Using AWS Services to Deliver Educational Video Content for Doping Hafiza

As an AWS Advanced Tier Services Partner, Sufle has supported organizations in their digital transformations for over 10 years. The company won AWS Partner of the Year for Turkey in 2022, a recognition of its success and expertise in the field. After participating in a webinar hosted by Sufle, Doping Hafiza saw an opportunity to modernize its content delivery network. “Doping Hafiza wanted a solution to host all its content, optimized for millions of users throughout Turkey and beyond,” says Gür. “The video-based learning environment required minimal latency and conversion because students might not have a very good internet connection or speed at home. After this engagement, we started developing and designing the new infrastructure together.”

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It allows you to easily create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale.
Reaching Remote Learners Globally Using Amazon CloudFront with Doping Hafiza

To address these challenges, Doping Hafiza migrated to Amazon Web Services (AWS) with the help of Sufle, an AWS Partner. The company centralized its data on the cloud and adopted 32 AWS services to improve its content delivery capabilities, improving speed, latency, and scalability. Through this engagement, Doping Hafiza has vastly enhanced its service quality and is now better equipped to expand its offerings and serve learners worldwide.

Solution | Increasing Content Delivery Speed by 5x Using Amazon CloudFront

With help from Sufle, Doping Hafiza migrated its data from multiple on-premises data centers to AWS. It uses 32 AWS services to host and deliver its content; in particular, the company has centralized its educational media on Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. “Doping Hafiza relied on different solutions to host paid and free content. Before, it was not possible to accomplish this on a shared system,” says Gür. “On Amazon CloudFront, we were able to centralize the videos and make it possible to stream both types of content from one place.”

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer experience.

“On AWS, we helped Doping Hafiza transform its content delivery service in a short amount of time,” says Gür. “With managed AWS services, the operational cost was minimal, and with the right cloud architecture, it was simple for Doping Hafiza to migrate.
Everything related to content delivery was made possible in one place.”

Outcome | Reaching Remote Learners on a Global Scale with Cloud Infrastructure

Founded in 2011, Doping Hafiza is an educational technology company that provides video-based learning environments for Turkish primary, middle, and secondary school students. Before migrating to AWS, Doping Hafiza relied on several on-premises systems to store its data and delivered educational content to students using multiple websites and video players. “Doping Hafiza uploaded all its video content to public streaming services and embedded those videos on its website,” says Gizem Gür, senior solutions architect and cofounder of Sufle. “One of those public providers asked for thousands of dollars because these videos generated a large amount of traffic. This is when Doping Hafiza engaged Sufle.”

With the global scalability of Amazon CloudFront, Doping Hafiza increased the speed of content delivery by five times. Additionally, the company has not seen any interruptions or downtime since migrating to AWS, which has improved service quality. “Before the migration, we had some problems with availability and latency. Sometimes, our service crashed,” says Habil Bozali, head of software architecture at Doping Hafiza. “In the first 6 months on AWS, we saw no issues with Amazon CloudFront or any other AWS media service.” All Doping Hafiza’s data is stored, encrypted, and versioned using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance.
Using Amazon S3, Doping Hafiza can manage and search for content in a centralized location rather than multiple systems, reducing the time and effort to access storage by 95 percent.

During the COVID-19 pandemic, demand for online learning services increased exponentially around the world, and educational technology providers like Doping Hafiza needed to quickly adapt to the new reality. To continue providing advanced learning tools, the company needed scalable infrastructure that could deliver video content to remote learners at low latency. However, this was not a simple task. Doping Hafiza needed to migrate a vast amount of data from multiple on-premises systems and third-party providers so that learners could enjoy a seamless experience across different channels.

Now that all Doping Hafiza’s media services are hosted on AWS, Sufle is helping the company migrate the last of its applications to AWS. Once the migration is complete, Doping Hafiza’s next step is to expand its services to learners on a global scale—with the speed, cost effectiveness, and scalability of the cloud.

About Doping Hafiza

Founded in 2011, Doping Hafiza is an educational technology company that provides video-based learning environments for Turkish primary, middle, and secondary school students. Its advanced technologies empower millions of learners.

Before migrating to AWS, Doping Hafiza relied on a costly, nonoptimized third-party solution to transcode and host some of its videos. Now, it uses AWS Elemental MediaConvert, a file-based video transcoding service with broadcast-grade features, to transform video content to different output options and deliver adaptive streams to users with Amazon CloudFront. Using this service, the company can automatically convert all high-quality video streams to different bit rates, including low-level qualities for students who do not have a good internet connection at home.
“Before, we would need to download the video, convert it, and upload it to a different backup service and to different content delivery networks, which was time-consuming,” says Bozali. “Now, we can upload a video and convert, transport, and distribute it automatically to our content delivery network using AWS services.” By adopting this solution, Doping Hafiza has reduced its processing, storage, and delivery costs by 30 percent.
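The adaptive-bitrate conversion described above can be sketched as a bitrate ladder: one uploaded video becomes several renditions so students on slow connections still get a watchable low-bitrate stream. The rung values and the simplified output shape below are illustrative assumptions, not Doping Hafiza’s actual MediaConvert settings.

```python
# Illustrative adaptive-bitrate ladder: (height_px, video_bitrate_kbps).
LADDER = [
    (1080, 5000),
    (720, 3000),
    (480, 1200),
    (360, 600),  # low-quality rung for poor home connections
]

def build_outputs(ladder=LADDER) -> list:
    """Shape each rung roughly like one Output entry in a MediaConvert
    adaptive-streaming job (heavily simplified for the sketch)."""
    return [
        {
            "NameModifier": f"_{height}p",
            "VideoDescription": {
                "Height": height,
                "CodecSettings": {
                    "Codec": "H_264",
                    # MediaConvert bitrates are in bits per second.
                    "H264Settings": {"Bitrate": kbps * 1000},
                },
            },
        }
        for height, kbps in ladder
    ]
```

Each upload then yields one set of renditions that the CDN can serve, replacing the old manual download/convert/re-upload loop.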
Read Innovates Video Call Transcription Using Amazon EC2 G5 Instances Powered by NVIDIA _ Read Case Study _ AWS.txt
When Transcription 2.0 is integrated into videoconferencing software, like Zoom, Microsoft Teams, and Google Meet, Read can measure the effectiveness of an organization’s meetings over the course of a month and make specific recommendations to improve the quality of the meetings. After that, Read can continue monitoring meetings to make sure that its customers achieve their goals.

Read uses Amazon Web Services (AWS) to host its solution on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. To power its transcription tool, the company also used NVIDIA Riva, a GPU-accelerated speech artificial intelligence software development kit from NVIDIA, an AWS Partner. Using Riva on Amazon EC2, Read improved the performance of its transcription tool while keeping costs low.

Solution | Saving up to 30% on Costs Using Amazon EC2 G5 Instances and NVIDIA Riva

Read runs Riva on Amazon EC2 G5 Instances to deliver highly accurate transcription in near real time. In addition to this natural language processing use case, Read also uses Amazon EC2 G5 Instances for training and deploying its video models. Within 6 weeks of adopting Riva and Amazon EC2 G5 Instances, Read deployed a solution that minimizes costs and maximizes performance. “Deploying Riva on Amazon EC2 G5 Instances was very easy,” says Dillon Dukek, Read’s senior software engineer. “We didn’t have to train any of our own acoustic or language models to convert audio to text. It’s a bundled solution that can just be rolled out.” Finding highly performant and cost-effective technology was the driving force behind Read’s decision to choose an AWS solution.
The high performance of Amazon EC2 G5 Instances powered by NVIDIA A10G Tensor Core GPUs makes this solution a particularly cost-efficient choice for making ML inferences and training moderately complex ML models, like those needed for natural language processing. In fact, Amazon EC2 G5 Instances offer anywhere between 15 and 40 percent better price performance compared with the previous generation of GPU-based instances. “We significantly improved costs per meeting hour,” says Rob Williams, vice president of engineering at Read. After transitioning to Amazon EC2 G5 Instances, Read saw a 20–30 percent reduction in costs.

Amazon EC2 G5 Instances are the latest generation of NVIDIA GPU-based instances that can be used for a wide range of graphics-intensive and machine learning use cases.

Read’s solution also led to faster response times for users. Dukek says that, with Read’s old tools, the real-time meeting reports and feedback showed up after about 30–60 seconds. Such high latency wasn’t effective at helping presenters course-correct their meetings when quality and engagement dropped. “Now, we have that down to the 1-second range,” he says. “We’re providing feedback on a quick basis, and people can see a near-real-time view of how their meetings are going.” Williams adds, “We view the ability to have these effective metrics in response to the ongoing conversation as a critical part of our value offering.” Now, Read can deliver its feedback and meeting reports to more clients much faster than it could before.

Learn how software company Read reduced costs by 20–30 percent using Amazon EC2 G5 Instances.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Opportunity | Building Voice-to-Text Transcription Using Services from AWS and NVIDIA

Read, a videoconferencing software startup, needed to reduce costs to sustain its growing business. The company relies on an always-on automatic speech recognition service to provide near-real-time augmented transcriptions of video meetings. When Read’s customer base grew suddenly, Read began looking for a more cost-effective solution to support its new customers. Read originally used CPUs to process audio and video and provide augmented transcripts to its clients. However, in Read’s unique use case, which requires always-on audio streaming, a quick explosion of growth made its tools too cost prohibitive. In late 2021, Read executives decided to move away from the original transcription tool. After researching options and creating a successful proof of concept, the company switched to Riva and ran it on Amazon EC2 G5 Instances—high-performance GPU-based instances for graphics-intensive applications and ML inference.

Using Riva and Amazon EC2 G5 Instances, Read improved costs and performance. In pursuit of the company mission to make virtual human interactions better and smarter, Read expects to continue scaling up.
As Read expands, the company will continue to deploy sophisticated ML models on Amazon EC2 G5 Instances powered by NVIDIA GPUs to meet its growing needs. Williams says, “Using AWS, we have the ability to scale and extend our quotas and the resources to support our business.”

The AWS Nitro System is the underlying platform for our next generation of EC2 instances that enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits like increased security and new instance types.

Using Amazon EC2 G5 Instances also led to multiple performance benefits. Amazon EC2 G5 Instances are built on the AWS Nitro System to maximize resource efficiency through a combination of dedicated hardware and a lightweight hypervisor, facilitating faster innovation and enhanced security. On its previous CPUs, Read saw only about 0.2 streams per machine, but using Riva on Amazon EC2 G5 Instances, it can process about 30 concurrent streams per machine with only 40–50 milliseconds of latency per request.

About Read

Read is a Seattle-based videoconferencing software company founded in 2021. It offers an innovative transcription tool that augments near-real-time text transcription with information on listener sentiment and engagement to make meetings better.

Founded in mid-2021, Read meets the needs of today’s hybrid and remote working environments. As the number and frequency of online meetings increased, so did the need for innovative near-real-time voice-to-text transcription. One part of Read’s services is the innovative tool Transcription 2.0. In addition to automatic transcriptions of meetings, the tool uses machine learning (ML) to offer insights about audience sentiment and engagement. It also identifies impactful statements throughout the meeting.
This allows meeting hosts—such as managers, professors, recruiters, and presenters—to adjust content around what participants focus on and what they ignore.
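The throughput figures reported for Read’s migration—roughly 0.2 concurrent streams per CPU-only machine versus about 30 per G5 machine—imply a large fleet consolidation. The 600-stream workload below is an invented example used only to illustrate the arithmetic, not Read’s real fleet size.

```python
import math

def machines_needed(streams: int, streams_per_machine: float) -> int:
    """Machines required to host a given number of always-on audio streams."""
    return math.ceil(streams / streams_per_machine)

cpu_machines = machines_needed(600, 0.2)   # CPU-only fleet
gpu_machines = machines_needed(600, 30)    # G5 fleet
consolidation = cpu_machines / gpu_machines  # how many times fewer machines
```

For this hypothetical workload the GPU fleet is two orders of magnitude smaller, which is consistent with the article’s 20–30 percent cost reduction once per-machine GPU pricing is factored in.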
Realizing the Full Value of EHR in a Digital Health Environment on AWS with Tufts Medicine _ Tufts Medicine Case Study _ AWS.txt
Learn how Tufts Medicine implemented its EHR in the cloud and migrated 42 applications in 14 months using AWS.

Tufts Medicine wanted to modernize its healthcare technology to provide better care for patients by leaving traditional data centers and liberating the organization from technical debt. It decided to implement Epic as its electronic health record (EHR) system and migrate 42 integrated third-party applications to Amazon Web Services (AWS).

Tufts Medicine began laying the groundwork for its digital transformation in 2020, extensively evaluating cloud providers. “Our goal was to get out of the data center entirely,” says Jeremy Marut, chief of digital modernization at Tufts Medicine. “We wanted to implement a single EHR in the cloud and migrate all critical applications so that we could take advantage of high availability and modern technologies.”

The team at Tufts Medicine compressed a 6-month on-premises hardware procurement-and-deployment process into a 4-week cloud deployment for its AWS landing zone and Epic build environment. Tufts Medicine also migrated 42 business-critical third-party applications to AWS in just 9 months, with a go-live by the end of March 2022. “We’re in the business of saving lives,” says Marut. “We’re not here to run data centers, and using AWS frees us from the rote, mundane work that is typically required, freeing up funds and minds to change healthcare for the better.”

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

AWS Professional Services’ offerings use a unique methodology based on Amazon’s internal best practices to help you complete projects faster and more reliably, while accounting for evolving expectations and dynamic team structures along the way.
Tufts Medicine deployed its entire EHR environment—including production systems, disaster recovery, and training—using AWS infrastructure in 14 months. The organization stands out as the first health system to implement a full Epic environment on AWS. Through this migration, Tufts Medicine consolidated technology stacks, modernized its applications, optimized costs, and, most important, delivered new and improved services for patients and care providers.

Tufts Medicine has also improved its security and monitoring. “Security is critical,” says Dr. Rab. “As part of this implementation, we made sure everything was encrypted, both in motion and at rest. It’s amazing how many legacy applications were not supporting these best practices.” Tufts Medicine uses AWS Control Tower—a service to set up and govern a secure multiaccount AWS environment—to automate alerting and monitoring. In addition, Tufts Medicine has implemented Amazon CloudWatch to collect and visualize near-real-time logs, metrics, and event data in automated dashboards, streamlining infrastructure and application maintenance. Using AWS Control Tower and Amazon CloudWatch, Tufts Medicine has deployed canaries to test infrastructure operations and can automatically launch remediation efforts based on best practices. “When things aren’t working, we’re not only alerting, but we’re also autohealing and autofixing,” says Marut. “We’ve improved safety and security by using these tools.”

Previously known as Wellforce, Tufts Medicine is an integrated health system in Massachusetts comprising three hospitals, a home-healthcare network, and more than 15,000 healthcare providers.
Outcome | Serving Patients and Redefining the Provider Experience Using AWS

Tufts Medicine chose AWS as its cloud provider for the elastic compute, processing, and memory capabilities to handle its EHR implementation. In February 2021, Tufts Medicine started working alongside AWS Professional Services, a global team of experts who help organizations achieve their desired business outcomes using AWS. Tufts Medicine used AWS Professional Services to build out the cloud infrastructure and configure the required cloud services to operate its healthcare environment.

Through the end of 2023, Tufts Medicine is working to rationalize and migrate its 800-application portfolio to AWS, a move that the company expects will save millions of dollars per year. Tufts Medicine is also adding seven more languages to myTuftsMed, working to open a virtual pharmacy, and developing machine learning capabilities to provide precision therapies to patients. “Using AWS, our goal at Tufts Medicine is not only to redefine healthcare but to reinvent the way that it is delivered,” says Dr. Rab.

As part of its cloud migration, Tufts Medicine consolidated 109 patient portals into a single portal, myTuftsMed. Through the portal, patients communicate with caregivers, request prescriptions, access test results, and manage appointments in three languages. To streamline virtual care, Tufts Medicine used Amazon Connect to set up a contact center that can scale to support millions of patients. Tufts Medicine also built chatbots using Amazon Lex, a fully managed artificial intelligence service with advanced natural language models, so that patients can get answers and access services simply and quickly.
“Our goal was to remove the barriers for patients and consumers to access the healthcare that they need,” says Dr. Shafiq Rab, chief data officer, system chief information officer, and executive vice president at Tufts Medicine.

The systems migration to AWS has improved user response time for Tufts Medicine’s care providers and its IT department. The system delivers submillisecond speeds, whether providers are accessing it from the clinic, the hospital, or a home office. “Our physicians and our nursing leaders are expressing delight. Now, caregivers can quickly access the EHR and supporting information that they need,” says Dr. Rab. “Because of the architecture that we defined to deploy Epic on AWS, we are seeing very fast response times.” Additionally, IT personnel can focus on innovations rather than routine maintenance. “It opens our teams up to do things that will drive healthcare innovation,” says Marut. “The morale is through the roof.”

Tufts Medicine is a healthcare system comprising three hospitals, a home-healthcare network, and a large clinical integrated network in eastern Massachusetts. The organization serves four million patients and involves 18,000 healthcare workers and employees in providing care. Before the migration, its portfolio consisted of more than 800 applications with duplicative licensing across hospitals, and each hospital maintained an independent IT department.

By migrating to AWS, Tufts Medicine has saved significant costs while increasing the speed of innovation across the health system. For example, it defines all systems architecture using infrastructure-as-code templates in AWS CloudFormation, a service for users to model, provision, and manage AWS and third-party resources.
Using AWS CloudFormation, Tufts Medicine can spin up an environment in less than 6 minutes. When it needed to ingest document images for four million patients, initial estimates indicated that the process would take 200 days. Instead, Tufts Medicine completed the process in 72 hours. In all, four million patient records were transferred to initialize the EHR in the cloud.

Solution | Building the Cloud Technology Stack and Data Estate for Tufts Medicine in 14 Months

In only 14 months, Tufts Medicine deployed a new EHR implementation entirely on AWS infrastructure, across two independent AWS Regions with three independent Availability Zones per Region. This deployment provides Tufts Medicine with multitiered disaster recovery: failure of a data center in the primary production Region is quickly addressed by requesting additional capacity in the remaining Availability Zones.
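The infrastructure-as-code workflow described above can be illustrated with a brief, hypothetical sketch: launching a templated environment as a CloudFormation stack from Python. The stack name, parameters, and helper functions here are illustrative assumptions, not Tufts Medicine's actual templates.

```python
def to_cfn_parameters(params: dict) -> list:
    """Convert a plain dict into CloudFormation's Parameters shape."""
    return [
        {"ParameterKey": key, "ParameterValue": value}
        for key, value in params.items()
    ]

def create_environment(stack_name: str, template_body: str, params: dict):
    """Launch a templated environment as a CloudFormation stack and
    block until creation completes."""
    import boto3  # deferred so the sketch can be read without the SDK
    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=to_cfn_parameters(params),
        Capabilities=["CAPABILITY_NAMED_IAM"],  # template may create IAM roles
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```

Because the whole environment is expressed in the template, tearing one down and recreating it becomes a single call rather than a hardware procurement cycle.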
Recommend and dynamically filter items based on user context in Amazon Personalize _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Recommend and dynamically filter items based on user context in Amazon Personalize

By Gilles-Kuessan Satchivi, Aditya Pendyala, and Prabhakar Chandrasekaran | 29 JUN 2023 | Amazon Personalize, Intermediate (200), Technical How-to

Organizations continuously invest time and effort in developing intelligent recommendation solutions that serve customized, relevant content to their users. The goals can be many: transform the user experience, generate meaningful interactions, and drive content consumption. Some of these solutions use common machine learning (ML) models built on historical interaction patterns, user demographic attributes, product similarities, and group behavior. Beyond these attributes, context at the time of interaction (such as weather or location) can influence users’ decisions while navigating content.

In this post, we show how to use the user’s current device type as context to enhance the effectiveness of your Amazon Personalize-based recommendations. In addition, we show how to use such context to dynamically filter recommendations. Although this post uses a video on demand (VOD) use case, Amazon Personalize can be applied the same way across many industries.

What is Amazon Personalize?

Amazon Personalize enables developers to build applications powered by the same type of ML technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize is capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product reranking, and customized direct marketing. Additionally, as a fully managed AI service, Amazon Personalize accelerates customer digital transformations with ML, making it easier to integrate personalized recommendations into existing websites, applications, email marketing systems, and more.
Why is context important?

Using a user’s contextual metadata, such as location, time of day, device type, and weather, provides personalized experiences for existing users and helps improve the cold-start phase for new or unidentified users. The cold-start phase refers to the period when your recommendation engine provides non-personalized recommendations due to the lack of historical information about the user. In situations where there are additional requirements to filter and promote items (say, in news and weather), adding the user’s current context (season or time of day) helps improve accuracy by including and excluding the right recommendations.

Let’s take the example of a VOD platform recommending shows, documentaries, and movies to the user. Based on behavior analysis, we know that VOD users tend to consume shorter-length content like sitcoms on mobile devices and longer-form content like movies on their TV or desktop.

Solution overview

Expanding on the example of considering a user’s device type, we show how to provide this information as context so that Amazon Personalize can automatically learn the influence of a user’s device on their preferred types of content. We follow the architecture pattern shown in the following diagram to illustrate how context can automatically be passed to Amazon Personalize. Context is derived automatically from Amazon CloudFront headers, which are included in requests to a REST API in Amazon API Gateway that calls an AWS Lambda function to retrieve recommendations. Refer to the full code example available at our GitHub repository. We provide an AWS CloudFormation template to create the necessary resources. In the following sections, we walk through how to set up each step of the sample architecture pattern.

Choose a recipe

Recipes are Amazon Personalize algorithms that are prepared for specific use cases. Amazon Personalize provides recipes based on common use cases for training models.
For our use case, we build a simple Amazon Personalize custom recommender using the User-Personalization recipe. It predicts the items that a user will interact with based on the interactions dataset. Additionally, this recipe uses the items and users datasets to influence recommendations, if provided. To learn more about how this recipe works, refer to the User-Personalization recipe documentation.

Create and import a dataset

Taking advantage of context requires specifying context values with interactions so recommenders can use context as features when training models. We also have to provide the user’s current context at inference time. The interactions schema (see the following code) defines the structure of historical and real-time users-to-items interaction data. The USER_ID, ITEM_ID, and TIMESTAMP fields are required by Amazon Personalize for this dataset. DEVICE_TYPE is a custom categorical field that we are adding for this example to capture the user’s current context and include it in model training. Amazon Personalize uses this interactions dataset to train models and create recommendation campaigns.

```python
interactions_schema = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "DEVICE_TYPE", "type": "string", "categorical": True},
        {"name": "TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
}
```

Similarly, the items schema (see the following code) defines the structure of product and video catalog data. The ITEM_ID field is required by Amazon Personalize for this dataset. CREATION_TIMESTAMP is a reserved column name, but it is not required. GENRE and ALLOWED_COUNTRIES are custom fields that we are adding for this example to capture the video’s genre and the countries where the video is allowed to be played. Amazon Personalize uses this items dataset to train models and create recommendation campaigns.

```python
items_schema = {
    "type": "record",
    "name": "Items",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "ITEM_ID", "type": "string"},
        {"name": "GENRE", "type": "string", "categorical": True},
        {"name": "ALLOWED_COUNTRIES", "type": "string", "categorical": True},
        {"name": "CREATION_TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
}
```

In our context, historical data refers to end-user interaction history with videos and items on the VOD platform. This data is usually gathered and stored in the application’s database. For demo purposes, we use Python’s Faker library to generate test data mocking the interactions dataset with different items, users, and device types over a 3-month period.

After the schemas and the input file locations are defined, the next steps are to create a dataset group, include the interactions and items datasets within the dataset group, and finally import the training data into the datasets, as illustrated in the following code snippets:

```python
create_dataset_group_response = personalize.create_dataset_group(
    name="personalize-auto-context-demo-dataset-group"
)

create_interactions_dataset_response = personalize.create_dataset(
    name="personalize-auto-context-demo-interactions-dataset",
    datasetType="INTERACTIONS",
    datasetGroupArn=interactions_dataset_group_arn,
    schemaArn=interactions_schema_arn,
)

create_interactions_dataset_import_job_response = personalize.create_dataset_import_job(
    jobName="personalize-auto-context-demo-dataset-import",
    datasetArn=interactions_dataset_arn,
    dataSource={"dataLocation": "s3://{}/{}".format(bucket, interactions_filename)},
    roleArn=role_arn,
)

create_items_dataset_response = personalize.create_dataset(
    name="personalize-auto-context-demo-items-dataset",
    datasetType="ITEMS",
    datasetGroupArn=items_dataset_group_arn,
    schemaArn=items_schema_arn,
)

create_items_dataset_import_job_response = personalize.create_dataset_import_job(
    jobName="personalize-auto-context-demo-items-dataset-import",
    datasetArn=items_dataset_arn,
    dataSource={"dataLocation": "s3://{}/{}".format(bucket, items_filename)},
    roleArn=role_arn,
)
```

Gather historical data and train the model

In this step, we define the chosen recipe and create a solution and a solution version referring to the previously defined dataset group. When you create a custom solution, you specify a recipe and configure training parameters. When you create a solution version for the solution, Amazon Personalize trains the model backing the solution version based on the recipe and training configuration. See the following code:

```python
recipe_arn = "arn:aws:personalize:::recipe/aws-user-personalization"

create_solution_response = personalize.create_solution(
    name="personalize-auto-context-demo-solution",
    datasetGroupArn=dataset_group_arn,
    recipeArn=recipe_arn,
)

create_solution_version_response = personalize.create_solution_version(
    solutionArn=solution_arn
)
```

Create a campaign endpoint

After you train your model, you deploy it into a campaign. A campaign creates and manages an auto-scaling endpoint for your trained model that you can use to get personalized recommendations through the GetRecommendations API. In a later step, we use this campaign endpoint to automatically pass the device type as context and receive personalized recommendations. See the following code:

```python
create_campaign_response = personalize.create_campaign(
    name="personalize-auto-context-demo-campaign",
    solutionVersionArn=solution_version_arn,
)
```

Create a dynamic filter

When getting recommendations from the created campaign, you can filter results based on custom criteria. For our example, we create a filter to satisfy the requirement of recommending only videos that are allowed to be played in the user’s current country. The country information is passed dynamically from the CloudFront HTTP header.
```python
create_filter_response = personalize.create_filter(
    name="personalize-auto-context-demo-country-filter",
    datasetGroupArn=dataset_group_arn,
    filterExpression="INCLUDE ItemID WHERE Items.ALLOWED_COUNTRIES IN ($CONTEXT_COUNTRY)",
)
```

Create a Lambda function

The next step in our architecture is to create a Lambda function to process API requests coming from the CloudFront distribution and respond by invoking the Amazon Personalize campaign endpoint. In this Lambda function, we define logic to analyze the following CloudFront request HTTP headers and query string parameters to determine the user’s device type and user ID, respectively:

CloudFront-Is-Desktop-Viewer
CloudFront-Is-Mobile-Viewer
CloudFront-Is-SmartTV-Viewer
CloudFront-Is-Tablet-Viewer
CloudFront-Viewer-Country

The code to create this function is deployed through the CloudFormation template.

Create a REST API

To make the Lambda function and Amazon Personalize campaign endpoint accessible to the CloudFront distribution, we create a REST API endpoint set up as a Lambda proxy. API Gateway provides tools for creating and documenting APIs that route HTTP requests to Lambda functions. The Lambda proxy integration feature allows CloudFront to call a single Lambda function that abstracts requests to the Amazon Personalize campaign endpoint. The code to create this resource is deployed through the CloudFormation template.

Create a CloudFront distribution

When creating the CloudFront distribution, because this is a demo setup, we disable caching using a custom caching policy, ensuring that the request goes to the origin every time. Additionally, we use an origin request policy specifying the required HTTP headers and query string parameters that are included in an origin request. The code to create this resource is deployed through the CloudFormation template.
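Tying these pieces together, the handler logic could look roughly like the following. This is a hedged sketch rather than the repository's exact code: the ARNs are placeholders, and the header-to-DEVICE_TYPE mapping is an assumption chosen to match the demo dataset's values.

```python
import json

# CloudFront boolean viewer headers mapped to the DEVICE_TYPE values
# used in the interactions dataset (this mapping is an assumption).
DEVICE_HEADERS = {
    "CloudFront-Is-SmartTV-Viewer": "SmartTV",
    "CloudFront-Is-Mobile-Viewer": "Phone",
    "CloudFront-Is-Tablet-Viewer": "Tablet",
    "CloudFront-Is-Desktop-Viewer": "Desktop",
}

def device_type_from_headers(headers: dict) -> str:
    """Derive the viewer's device type from CloudFront request headers."""
    for header, device in DEVICE_HEADERS.items():
        if headers.get(header, "false").lower() == "true":
            return device
    return "Desktop"  # default when no viewer header is present

def handler(event, context):
    """Lambda proxy handler: pass device type and country as context."""
    import boto3  # deferred so the sketch can be read without the SDK
    headers = event.get("headers") or {}
    params = event.get("queryStringParameters") or {}
    country = headers.get("CloudFront-Viewer-Country", "US")
    runtime = boto3.client("personalize-runtime")
    result = runtime.get_recommendations(
        campaignArn="arn:aws:personalize:REGION:ACCOUNT:campaign/NAME",  # placeholder
        userId=params.get("userId", "0"),
        context={"DEVICE_TYPE": device_type_from_headers(headers)},
        filterArn="arn:aws:personalize:REGION:ACCOUNT:filter/NAME",      # placeholder
        filterValues={"CONTEXT_COUNTRY": f'"{country}"'},  # values must be quoted
        numResults=7,
    )
    return {"statusCode": 200, "body": json.dumps(result["itemList"])}
```

Note that filter parameter values such as CONTEXT_COUNTRY are passed as quoted strings, per the Personalize filter expression syntax.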
Test recommendations

When the CloudFront distribution’s URL is accessed from different devices (desktop, tablet, phone, and so on), we can see personalized video recommendations that are most relevant to each device. If a cold user is presented, recommendations tailored to that user’s device are still returned. In the following sample outputs, videos are represented by genre (a proxy for their typical runtime) to make the results easy to interpret.

In the following output, a known user who loves comedy based on past interactions and is accessing from a phone is presented with shorter sitcoms:

```
Recommendations for user: 460

ITEM_ID  GENRE    ALLOWED_COUNTRIES
380      Comedy   RU|GR|LT|NO|SZ|VN
540      Sitcom   US|PK|NI|JM|IN|DK
860      Comedy   RU|GR|LT|NO|SZ|VN
600      Comedy   US|PK|NI|JM|IN|DK
580      Comedy   US|FI|CN|ES|HK|AE
900      Satire   US|PK|NI|JM|IN|DK
720      Sitcom   US|PK|NI|JM|IN|DK
```

The same known user is presented with feature-length films when accessing from a smart TV device, based on past interactions:

```
Recommendations for user: 460

ITEM_ID  GENRE    ALLOWED_COUNTRIES
780      Romance  US|PK|NI|JM|IN|DK
100      Horror   US|FI|CN|ES|HK|AE
400      Action   US|FI|CN|ES|HK|AE
660      Horror   US|PK|NI|JM|IN|DK
720      Horror   US|PK|NI|JM|IN|DK
820      Mystery  US|FI|CN|ES|HK|AE
520      Mystery  US|FI|CN|ES|HK|AE
```

A cold (unknown) user accessing from a phone is presented with shorter but popular shows:

```
Recommendations for user: 666

ITEM_ID  GENRE    ALLOWED_COUNTRIES
940      Satire   US|FI|CN|ES|HK|AE
760      Satire   US|FI|CN|ES|HK|AE
160      Sitcom   US|FI|CN|ES|HK|AE
880      Comedy   US|FI|CN|ES|HK|AE
360      Satire   US|PK|NI|JM|IN|DK
840      Satire   US|PK|NI|JM|IN|DK
420      Satire   US|PK|NI|JM|IN|DK
```

A cold (unknown) user accessing from a desktop is presented with top science fiction films and documentaries:

```
Recommendations for user: 666

ITEM_ID  GENRE            ALLOWED_COUNTRIES
120      Science Fiction  US|PK|NI|JM|IN|DK
160      Science Fiction  US|FI|CN|ES|HK|AE
680      Science Fiction  RU|GR|LT|NO|SZ|VN
640      Science Fiction  US|FI|CN|ES|HK|AE
700      Documentary      US|FI|CN|ES|HK|AE
760      Science Fiction  US|FI|CN|ES|HK|AE
360      Documentary      US|PK|NI|JM|IN|DK
```

Finally, a known user accessing from a phone receives recommendations filtered by location (US):

```
Recommendations for user: 460

ITEM_ID  GENRE    ALLOWED_COUNTRIES
300      Sitcom   US|PK|NI|JM|IN|DK
480      Satire   US|PK|NI|JM|IN|DK
240      Comedy   US|PK|NI|JM|IN|DK
900      Sitcom   US|PK|NI|JM|IN|DK
880      Comedy   US|FI|CN|ES|HK|AE
220      Sitcom   US|FI|CN|ES|HK|AE
940      Sitcom   US|FI|CN|ES|HK|AE
```

Conclusion

In this post, we described how to use the user’s device type as contextual data to make your recommendations more relevant. Using contextual metadata to train Amazon Personalize models helps you recommend products that are relevant to both new and existing users, based not only on profile data but also on the browsing device platform. Beyond device type, context such as location (country, city, region, postal code) and time (day of the week, weekend, weekday, season) opens up the opportunity to make recommendations even more relatable to the user. You can run the full code example by using the CloudFormation template provided in our GitHub repository and cloning the notebooks into Amazon SageMaker Studio.

About the Authors

Gilles-Kuessan Satchivi is an AWS Enterprise Solutions Architect with a background in networking, infrastructure, security, and IT operations. He is passionate about helping customers build Well-Architected systems on AWS. Before joining AWS, he worked in ecommerce for 17 years. Outside of work, he likes to spend time with his family and cheer on his children’s soccer team.

Aditya Pendyala is a Senior Solutions Architect at AWS based out of NYC. He has extensive experience in architecting cloud-based applications. He is currently working with large enterprises to help them craft highly scalable, flexible, and resilient cloud architectures, and guides them on all things cloud.
He has a Master of Science degree in Computer Science from Shippensburg University and believes in the quote “When you cease to learn, you cease to grow.”

Prabhakar Chandrasekaran is a Senior Technical Account Manager with AWS Enterprise Support. Prabhakar enjoys helping customers build cutting-edge AI/ML solutions on the cloud. He also works with enterprise customers, providing proactive guidance and operational assistance and helping them improve the value of their solutions when using AWS. Prabhakar holds six AWS and six other professional certifications. With over 20 years of professional experience, Prabhakar was a data engineer and a program leader in the financial services space prior to joining AWS.
Red Canary Architects for Fault Tolerance and Saves up to 80 Using Amazon EC2 Spot Instances _ Red Canary Case Study _ AWS.txt
Red Canary Architects for Fault Tolerance and Saves up to 80% Using Amazon EC2 Spot Instances

Founded in 2014, Red Canary is a cybersecurity company providing managed detection and response services. Its mission is to create a world where every company can make its greatest impact without fear of damage from cyberthreats.

Since early 2020, Red Canary has further optimized costs by using Savings Plans, a flexible pricing model that reduces costs by up to 72 percent compared with On-Demand prices in exchange for a 1-year or 3-year hourly spend commitment. The company’s Compute Savings Plan covers the compute demand for additional services that Red Canary hosts to run a third-party product for customers, which is not as flexible as its own MDR solution. In December 2021, Red Canary also began using AWS Graviton processors, designed by AWS to deliver the best price performance for cloud workloads running on Amazon EC2. Using AWS Graviton processors, the company achieves an additional 30 percent of savings on top of the savings realized from using Spot Instances, while achieving processing speeds equivalent to what it experienced using x86 processors.

“We’re investing our effort into making sure that we’re the experts and can help customers protect their cloud environments,” says Rothe. “We will use AWS in the future to make sure that when unauthorized users get ahold of access keys that they shouldn’t have, we can detect them and shut them down before they cause any damage.”

Red Canary uses containerization to manage the scaling of its solution. In 2020, Red Canary migrated its containers to Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service.
In Amazon EKS, each of the processing components can be scaled individually using automatic scaling functions, making it much simpler to manage the MDR solution as workloads scale from 500 to 1,000 nodes throughout the day. Additionally, using Amazon EKS, Red Canary has more flexibility to use different types of instances, making it simpler to take advantage of Spot Instances. “Before, running our own Kubernetes clusters meant that we had to be experts on all things Kubernetes. Now, using Amazon EKS, we don’t have to manage cluster maintenance, and we have near zero operational issues,” says Rothe.
Now, Red Canary is working alongside AWS Enterprise Support—which provides customers with concierge-like service focused on helping customers achieve outcomes and find success in the cloud—to perform a review of its architecture using the AWS Well-Architected Framework. This framework lays out architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud.

Using AWS, Red Canary’s solution is highly reliable. “The design tenets that we used when we built these engine components give us the confidence that, even when we make a mistake, we know how to recover from it,” says Davis. The MDR solution is built to be thorough—to make sure that every piece of data gets processed—with a service-level objective to get data through the detection pipeline and in front of a detection engineer in 15 minutes. “We don’t have to detect and stop unauthorized users in seconds; it takes them time, so it’s more important for our system to be durable and to make sure all the data gets processed,” says Rothe.

In 2016, Red Canary migrated to AWS and rebuilt its architecture to be highly fault tolerant. This architecture made it possible for Red Canary to benefit from more cost-effective instances on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Using Amazon EC2 Spot Instances to take advantage of unused Amazon EC2 capacity at a discount, Red Canary built a durable, scalable, cost-effective solution to monitor client workloads and protect them from unauthorized users.

On any given day, Red Canary might ingest and run analytics on over 1 PB of telemetry data from third-party products or directly from customer environments.
The company reduced costs by running its data processing pipeline on Spot Instances. “Amazon EC2 Spot Instances give us cost-effective compute to process massive amounts of data,” says Brian Davis, principal engineer at Red Canary. “Our infrastructure is mature enough to tolerate the dynamic nature of Spot Instances.” Red Canary estimates that it saves 65–80 percent per instance by using Spot Instances.

To use Spot Instances, Red Canary built its architecture to handle having compute instances removed in the middle of processing. Red Canary’s MDR solution ingests data from customer environments into Amazon Simple Storage Service (Amazon S3), an object storage service, for analysis. At each step in the analysis, the component that is processing the data picks up a file from an Amazon S3 bucket, applies its analytics, and then writes the result to the next bucket down the chain. Each Amazon S3 bucket is connected to Amazon Simple Notification Service (Amazon SNS), a fully managed pub/sub service for application-to-application messaging. Amazon SNS sends a message to the next component, which picks up the message using Amazon Simple Queue Service (Amazon SQS), a service for sending, storing, and receiving messages between software components. In Red Canary’s solution, when a compute instance drops out while a component is processing a file, the job returns to the Amazon SQS queue, and the system spins up a new replica of the component to run that job. “We take pride in the fact that all the data that we’re meant to process gets processed and that we don’t miss threats to our customers,” says Rothe.
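The at-least-once pattern described above (delete the SQS message only after the file has been fully processed, so an interrupted Spot node simply lets the job reappear) can be sketched roughly as follows. This is an illustrative sketch, not Red Canary's code, and it assumes raw S3 event notifications are delivered on the SNS-to-SQS subscription:

```python
import json

def parse_s3_event(message_body: str) -> list:
    """Extract (bucket, key) pairs from an S3 event notification
    delivered through SNS to SQS (standard S3 event record shape)."""
    records = json.loads(message_body).get("Records", [])
    return [
        (record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
        for record in records
    ]

def run_worker(queue_url: str, process):
    """Consume messages in a loop; acknowledge only after success so an
    interrupted instance returns the job to the queue automatically."""
    import boto3  # deferred so the sketch can be read without the SDK
    sqs = boto3.client("sqs")
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for message in response.get("Messages", []):
            for bucket, key in parse_s3_event(message["Body"]):
                process(bucket, key)  # idempotent analysis step
            # Ack only after success; a crash before this line leaves the
            # message to reappear after the visibility timeout expires.
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
            )
```

The key design choice is that the processing step must be idempotent, because a Spot interruption can cause the same file to be analyzed twice.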
“We use Amazon S3—with its legendary availability and performance—as a core part of our data processing pipeline because we want durability.”

Cybersecurity company Red Canary needed a reliable, scalable solution to process over 1 PB of data daily while optimizing costs. The company offers managed detection and response (MDR) services, continually monitoring customer environments for potential cyberthreats. As the company grew, its previous solution could not provide the amount of compute power that Red Canary required at a low enough price for the company to stay competitive.

Red Canary was founded in 2014 with the vision of creating a world where every company can make its greatest impact without fear of damage from cyberthreats. To support that vision, Red Canary’s MDR solution provides 24/7 monitoring to 800 companies across multiple industries—including financial services, social media, healthcare, and manufacturing—and helps these companies respond to cyberthreats when needed. (See Figure 1: Red Canary Platform Diagram.) When Red Canary migrated to AWS in 2016, it sought ways to reduce costs in its new architecture. “We needed to find a way to perform threat detection across this massive flood of data and do it within a cost envelope that fit the profile of our industry,” says Chris Rothe, chief technology officer at Red Canary. “We wanted to focus on detecting threats for our customers and keeping them safe, not on being infrastructure experts.”
Reducing Adverse-Event Reporting Time for Its Clients by 80 _ Indegene Case Study _ AWS.txt
Indegene Reduces Adverse-Event Reporting Time for Its Clients by 80% Using AWS

2022

Learn how Indegene helps life sciences companies streamline and scale adverse event reporting while generating efficiencies and cost savings, using its solution built on AWS.

The pharmacovigilance (PV) process for life sciences companies still relies heavily on inefficient and manual operations. Indegene, a technology-led healthcare solutions provider, sought to transform this process to help its clients drive efficient, meaningful PV outcomes. Using Amazon Web Services (AWS), Indegene built a modern, agile, efficient, and compliant solution for pharmaceutical safety case processing: the NEXT Adverse Event Management System (NAEM). NAEM helps pharmaceutical companies reduce turnaround time for case reporting while improving quality, traceability, reconciliation, and cost efficiency. Using NAEM, organizations have boosted efficiencies by 60 percent using artificial intelligence (AI) and advanced analytics, delivering effective outcomes for patients in over 50 countries.

About Company
Founded in India in 1998, Indegene is a technology-led healthcare solutions provider. Now in 15 offices worldwide, the company helps its clients with digital transformation, from research and development to management to commercial applications.

Opportunity | Improving Adverse Event Management Process Efficiency

Most pharmaceutical companies process over 50 percent of PV cases manually to record adverse events, enter them into a specialized safety database, reconcile with corresponding medications, and submit data to health authorities using industry-standard E2B protocols. About 75 percent of cases require follow-up days or weeks later. Ultimately, data elements are compiled into a loose-text format, known as a narrative, which articulates the case’s disposition. This process is inefficient and diminishes the potential value of analytics. Using its automated workflows, Indegene can extract structured and unstructured data and send it to the client’s enterprise environment for submission and downstream analytics. Pharmaceutical companies can produce safer medicines with fewer side effects, supporting a healthier population. “AWS is already well respected in the life sciences industry,” says Tarun Mathur, Chief Technology Officer at Indegene. “Many of the big pharmaceutical companies use AWS, so many issues related to IT approvals and certifications are accelerated when you’re deploying your solution to the AWS environment.”

Indegene began using AWS in the early 2000s, when it adopted Amazon Elastic Compute Cloud (Amazon EC2) to provide secure and resizable compute capacity for its workloads. This relationship has strengthened, and today, Indegene is an AWS Partner. “Our mission is to help pharmaceutical organizations be future ready and drive business transformation by using technology in an agile, efficient way,” says Mathur. “Keeping up with all the new AWS services and capabilities has been a good challenge, and the variety of training programs and great technical support is a bonus. AWS is leading the pack in innovation.”

Solution | Extracting Insights from Adverse-Events Data Using AWS Services

The NAEM solution built on AWS helps reduce the average processing time of adverse events from over 90 minutes to under 15 minutes, achieving over 80 percent time savings. Clients use an electronic data interchange built on AWS to send adverse-event report files to a system that initiates the NAEM workflows. These reports get stored securely using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Automation services pick up files from Amazon S3 and route them to an AI-augmented application that supports clinical experts, who complete and review cases. Then, they deliver the files back to the client as industry-standard E2B (R2) or E2B (R3) files.

Indegene’s NAEM uses AI to help agents automate the reporting of adverse events, product-quality complaints, and allied medical information. Its intelligent call flow assists with automatic capture and population of adverse event data, which makes the process faster and more accurate. “Using AWS, our users can make judgment decisions readily, perform duplicate checks, and accurately triage and validate cases,” says Vladimir Penkrat, Indegene’s Practice Head of Safety and Regulatory Affairs.

In 2021, a global pharmaceutical company asked Indegene for help addressing a sudden increase in case volume after a product launch. The solution needed to work with its enterprise environment to properly exchange files and leave full audit trails. The company implemented an upgraded version of NAEM, which uses Amazon Comprehend, a natural-language processing service that uses machine learning (ML) to uncover valuable insights and connections in text. NAEM also uses a related service, Amazon Comprehend Medical—which uses ML that has been pre-trained to understand and extract health data from medical text—to extract information from doctors’ notes and clinical trial reports. The solution has scaled to process about half a million cases, automates over 400 rules, and uses AI to improve overall processing efficiency by 60 percent.

Indegene also uses AWS Lambda, a serverless, event-driven compute service, to direct files into its database, which is built using Amazon Relational Database Service (Amazon RDS), a collection of managed services that make it simple to set up, operate, and scale databases in the cloud. For security, the company uses AWS services to implement predefined actions, such as how long the system retains certain pieces of information. Indegene uses encryption certificates for data in transit and at rest, and clients can access a virtual private cloud through the AWS Client VPN.

Using AWS Auto Scaling, which monitors applications and automatically adjusts capacity, Indegene can scale on demand to serve clients of virtually all sizes without having to provision physical infrastructure and servers. “AWS is our go-to cloud infrastructure,” Mathur says. “We have had virtually no downtime. Even with spikes or surges in volumes, our systems are fully available. The cost savings, innovation, security, compliance, and reliability are unparalleled.”

The solution has also reduced the number of follow-ups by 50 percent. “Our clients can look at a patient’s case and make the right judgment based on the patient’s risk,” says Penkrat. “They can effectively make use of high-throughput activity that is compliant and that sometimes needs to be processed in 1 day. Our clients use the dashboards and the intelligence that the system provides to properly prioritize case types.”

Outcomes | Contributing to a Healthier Patient Population Using Solutions Built on AWS

Indegene is growing its AI and ML capabilities—expanding the intake channels and formats the system can ingest—to include much greater unstructured capability. The company plans to incorporate more automation into the user interface with smarter intake functionality. The next generation of NAEM will be even more scalable by using Amazon ElastiCache for Redis—an in-memory data store that provides sub-millisecond latency to power internet-scale near-real-time applications. This upgrade will substantially reduce turnaround time while maintaining quality.

“On AWS, we are more efficiently capturing data about medicines and maintaining full compliance with global regulations, which results in a much healthier patient population,” says Sameer Lal, Indegene’s senior vice president. “And that’s what we are hoping for in the end: delivery to a much healthier world.”

AWS Services Used
- AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
- Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning that has been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses.
- Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
- Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
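The case study notes that NAEM uses Amazon Comprehend Medical to pull health data out of free-text narratives. As an illustration only, the sketch below filters a `DetectEntitiesV2`-style response down to confident medication and condition mentions; the helper function, threshold, and sample text are hypothetical, and in production the response dict would come from `boto3.client("comprehendmedical").detect_entities_v2(Text=narrative)`.

```python
def extract_case_terms(response, min_score=0.8):
    """Hypothetical helper: keep only the high-confidence drug and
    condition mentions a case processor would triage."""
    terms = {"medications": [], "conditions": []}
    for entity in response.get("Entities", []):
        if entity["Score"] < min_score:
            continue  # drop low-confidence detections
        if entity["Category"] == "MEDICATION":
            terms["medications"].append(entity["Text"])
        elif entity["Category"] == "MEDICAL_CONDITION":
            terms["conditions"].append(entity["Text"])
    return terms

# Canned response in the shape Comprehend Medical returns; the values
# here are made up for the example.
sample_response = {
    "Entities": [
        {"Text": "ibuprofen", "Category": "MEDICATION", "Score": 0.99},
        {"Text": "nausea", "Category": "MEDICAL_CONDITION", "Score": 0.95},
        {"Text": "daily", "Category": "MEDICATION", "Score": 0.40},
    ]
}
print(extract_case_terms(sample_response))
# {'medications': ['ibuprofen'], 'conditions': ['nausea']}
```

A real pipeline would feed the extracted terms into the duplicate-check and triage steps the article describes rather than printing them.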
Reducing Costs of Cryo-EM Data Storage and Processing by 50 Using AWS _ Vertex Pharmaceuticals Case Study.txt
Vertex Pharmaceuticals Reduces Costs of Cryo-EM Data Storage and Processing by 50% Using AWS

2022

Learn how Vertex Pharmaceuticals accelerates drug discovery by running its cryo-EM workflows on AWS.

Vertex Pharmaceuticals (Vertex) is a global biotechnology company that invests in scientific innovation to create transformative medicines for people with serious diseases. Vertex uses cryogenic electron microscopy (cryo-EM) to generate sophisticated images and insights into a protein’s 3D structure and the structure of potential drug targets. Through that process, the company’s chemists can design better drug molecules by optimizing their structure to bind to their targets.

However, cryo-EM workflows require a huge amount of compute and storage resources. Scientists doing analyses across multiple research sites generate petabytes of data. Vertex needed to make its infrastructure scalable to support its growing needs while providing adequate processing power to accelerate the research.

Vertex migrated its data storage and processing to Amazon Web Services (AWS). The company used several AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity to support virtually any workload. Vertex improved the performance of its high-performance computing (HPC) workloads, accelerated data analyses, and made its system scalable while reducing overall storage and compute costs by over 50 percent.

About Vertex Pharmaceuticals
Vertex is a pharmaceutical company headquartered in Boston that studies complex molecules and researches treatments for serious diseases using the latest microscopy technologies around the world.

Opportunity | Accelerating the Processing Performance of Cryo-EM Workflows to Generate Insights Faster

Vertex uses cryo-EM to discover treatments for diseases by analyzing the molecular structure of potential drug targets. “Cryo-EM helps us get sufficient resolution for deeper insights into protein structures that we were unable to study only a few years ago,” says David Posson, principal research scientist for Vertex Pharmaceuticals.

However, while this advanced technology has unlocked the potential for new discoveries and treatments, the need for storage and compute capacity has also increased. “Running a microscope for cryo-EM generates terabytes of data every day,” says Roberto Iturralde, senior director of software engineering for Vertex Pharmaceuticals. “It’s common to generate 1 PB of data in 1 year.” Further, scientists need insights fast. Vertex’s on-premises infrastructure for running its cryo-EM workloads was struggling to keep pace with its rapidly growing compute and storage demands.

Vertex initially had to transfer all the data from microscopes in external facilities to its data center using hard disks, which took weeks. When new data came in, the company’s on-premises HPC clusters couldn’t efficiently handle the bursts in activity. They also couldn’t scale down during periods of low activity.

Storing data long term presented another challenge. After a few weeks, scientists rarely accessed the older microscope data. However, Vertex’s on-premises environment wasn’t optimized to save costs based on usage and access patterns. With the domain evolving quickly, it was becoming expensive to keep up with the continuous hardware, software, networking, and security upgrades needed to manage the cryo-EM infrastructure on premises. In early 2022, Vertex realized it needed a more elastic solution with better performance.

Solution | Reducing Data Storage Costs and Accelerating Processing Using AWS ParallelCluster

Vertex had already been using AWS since 2015 for different workloads. Inspired by new features launched at AWS re:Invent 2021, Vertex redesigned its entire cryo-EM workload and migrated it to AWS. The company prototyped the new architecture in just 3 months. “AWS has the broadest and deepest set of cloud-native technologies that we want to use at Vertex,” says Iturralde. “Using AWS, we quickly switched to a new design that better met the evolving requirements of our scientists.”

By migrating to AWS, Vertex moved its workloads closer to where the data arrived in Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. Vertex also uses Amazon FSx for Lustre, a fully managed shared storage built on one of the world’s most popular high-performance file systems, to give scientists exactly the amount of storage resources that they need during active analysis.

To manage compute for data processing, Vertex uses AWS ParallelCluster, an open-source cluster management tool that makes it straightforward to deploy and manage elastic HPC clusters on AWS. It will spin HPC nodes up and down based on the demands of the analysis software. “When they’re done, we can go back to paying almost zero,” says Iturralde. “We don’t have to worry that the pace of science is going to overwhelm our resources or divert our attention toward maintaining the infrastructure.”

After processing, Vertex sends the data back to Amazon S3. The company sorts data efficiently using Amazon S3 Lifecycle policies, sets of rules that define actions that Amazon S3 applies to a group of objects. “Using Amazon S3 Lifecycle policies, we can put data into different tiers to lower the cost of storage,” says Iturralde. The company can also scale its storage seamlessly, limiting data center overhead.

Vertex added native single sign-on support using Amazon Cognito, which businesses can use to add sign-up, sign-in, and access control to web and mobile apps quickly and easily. “Using Amazon Cognito gives us that additional comfort that only the appropriate employees have access to the software,” says Iturralde. Alongside this, Vertex uses Application Load Balancer—which load balances HTTP and HTTPS traffic with advanced request routing targeted at the delivery of modern applications—to secure its networking.

Outcome | Accelerating Data Processing to Speed Up Research Using Amazon EC2

By matching its compute costs to workload demands, Vertex has reduced costs by 50 percent. Further, it has achieved two times better performance than its previous architecture. And Vertex has removed the bottlenecks its cryo-EM team faced in the on-premises environment when sharing resources with other groups, which it often did. “Previously, it took several weeks to analyze cryo-EM data, even when no one else was using resources,” says Posson. “Now, we can reliably deliver data in under 1 week using AWS.”

On AWS, Vertex has made its processes efficient, scalable, and cost effective while reducing manual maintenance. Building on AWS also means that the company has access to the latest compute and GPU resources without the months-long lead time associated with procuring data center hardware. For example, Vertex is running Amazon EC2 G5 instances, which deliver a powerful combination of CPU, host memory, and GPU capacity. By performing cryo-EM processes in the cloud, scientists can do near-real-time analysis. Vertex uses expensive microscope time more efficiently and facilitates scientific breakthroughs.

Vertex has already reduced the time needed for delivering analysis results, and it hopes to accelerate it further. “With live processing, we could jump-start analysis just as data comes off the microscope,” says Posson. “We might be able to cut our 1-week timeline in half.”

Vertex also plans to continue making its HPC infrastructure more elastic and cloud native to save costs. “By working on AWS, we’re able to spend more time focusing on how we can innovate,” says Iturralde. “We can be creative and take advantage of the cloud to accelerate our science.”

AWS Services Used
- AWS ParallelCluster is an open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.
- Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable compute capacity for virtually any workload.
- Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.
- Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
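The tiering that Iturralde describes, moving rarely touched microscope data into cheaper storage classes after a few weeks, is expressed as an S3 Lifecycle configuration. Below is a hedged sketch: the rule builder, day thresholds, key prefix, and bucket name are all assumptions for illustration, not Vertex's actual policy.

```python
def cryoem_lifecycle_rules(ia_after_days=30, glacier_after_days=90):
    """Build a hypothetical S3 Lifecycle configuration that tiers raw
    cryo-EM data to STANDARD_IA after a month and to GLACIER after
    three months, matching the access pattern in the case study."""
    return {
        "Rules": [
            {
                "ID": "tier-cryoem-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "cryoem/raw/"},  # assumed key layout
                "Transitions": [
                    {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

config = cryoem_lifecycle_rules()
assert config["Rules"][0]["Transitions"][1]["StorageClass"] == "GLACIER"

# Applying it would look like (bucket name hypothetical):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="vertex-cryoem-data",
#     LifecycleConfiguration=config,
# )
```

Once such a policy is attached, Amazon S3 performs the transitions automatically, which is why the article can claim storage tiering without operational overhead.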
Reducing Failover Time from 30 Minutes to 3 Minutes Using Amazon CloudWatch _ Thomson ReutersCase Study _ AWS.txt
Reducing Failover Time from 30 Minutes to 3 Minutes Using Amazon CloudWatch with Thomson Reuters

2023

Learn how global content-driven technology company Thomson Reuters bolstered availability using Amazon CloudWatch.

Amid efforts to boost its operational efficiency, global content-driven technology company Thomson Reuters needed a secure and highly available identification solution for its international workforce. The manual failover process from its legacy on-premises solution left employees locked out of company systems for as long as 30 minutes. “Single sign-on (SSO) is highly critical, and not only from the revenue perspective,” says Bhavin Vyas, lead systems engineer at Thomson Reuters. “If our authentication service is not working, there will be a huge internal impact.” As part of its broader cloud strategy, the company decided to build a new solution on Amazon Web Services (AWS) to deliver highly available SSO authentication.

About Thomson Reuters
Thomson Reuters is a global provider of business information services. Its products include highly specialized information-facilitated software and tools for legal, tax, accounting, and compliance professionals, combined with the renowned news service, Reuters.

Opportunity | Prioritizing SSO as Part of a Broad Cloud-Migration Strategy

Thomson Reuters operates in more than 100 countries and has over 38,000 employees. Those employees need to authenticate themselves and securely sign in to company systems no matter where they are. The need for a new SSO solution was part of a broader shift toward cloud development. Thomson Reuters committed to its cloud strategy in 2016 as part of its customer-focused mindset, and it has launched many migration projects since then, moving toward cloud-native architecture to establish a foundation for future innovation. “As part of our strategic direction, we wanted to use a hybrid solution to unlock cloud offerings, save costs, and automate deployments,” says Zafar Khan, architect with the platform engineering department at Thomson Reuters. Because Thomson Reuters has considerable experience on AWS, it was a natural choice for the build of its new SSO solution.

Solution | Using Amazon Route 53 and Amazon CloudWatch to Apply Health Checks and Reduce the Recovery Point Objective from 2 Hours to 30 Minutes

To overcome the authentication challenges that its employees faced and to harden its security posture, Thomson Reuters selected Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service that runs Kubernetes on AWS and on-premises data centers. “We use Amazon EKS to deliver an automated solution that offers resilience and scalability on an as-needed basis,” says Khan. As a result, Thomson Reuters reduced both manual effort and recovery time. On Amazon EKS, the company also gained high availability and a wide range of features, including Amazon EKS control pane audit logs for simplifying cluster management.

To create an identity solution used by the company’s applications within its internal network that would achieve reliability goals while meeting security constraints, Thomson Reuters built a failover solution that uses AWS Lambda, a serverless, event-driven compute service, to monitor application health. The solution also uses Amazon CloudWatch, which collects and visualizes near-real-time logs, metrics, and event data in automated dashboards. An Amazon CloudWatch alarm is automatically initiated when metrics indicate poor application health. Health alerts unlock a more granular approach to application monitoring, freeing up engineering resources for value-added projects. “Using AWS, we have health alerts in place to address our enhancement goals in alignment with our long-term strategy of moving from a holding company to an operating company,” says Khan.

Should two health checks fail, Thomson Reuters uses Amazon Route 53, a highly available and scalable Domain Name System web service, to automatically forward traffic to the closest AWS Region to minimize latency. Once the route is fixed, traffic reverts to the original AWS Region. Having automated the failover process using Amazon Route 53 health checks and Amazon CloudWatch, Thomson Reuters has seen failover time drop from 30 minutes to 3 minutes. Recovery point objective time has improved as well. “We want to avoid any manual intervention when we have an incident, and the automated process to achieve the failover has reduced our recovery point objective from 2 hours to 30 minutes,” says Vyas. Thomson Reuters expects to see availability improvements from the team’s implementation of nearest-available, latency-based routing using Amazon Route 53.

The company also used additional AWS services with security in mind. Thomson Reuters used AWS Secrets Manager to centrally manage the lifecycle of secrets using AWS Key Management Service (AWS KMS) to create and control keys used to encrypt data. Using these solutions helps Thomson Reuters adapt to best practices without impeding employee access to company assets.

With its identity solution in place, Thomson Reuters feels confident that its global workforce will have secure and easy access to company systems. “Our project using AWS services is one of the success stories of hybrid solutions,” says Khan.

Outcome | Preparing for Continued Cloud Migration on AWS

Thomson Reuters wants to achieve more on the cloud than just strengthening the resiliency and scalability of its authentication solution. “Since we started our journey to use the cloud in 2016, we’ve believed that cloud-native architecture delivers the most value for our company,” says Matt Dimich, vice president, enablement in platform engineering at Thomson Reuters. From 2020 to 2022, the company launched a change program that combined both lift-and-shift and cloud-native elements, ultimately migrating multiple products to AWS. This project is slated to be three to four times the size of prior migrations. Thomson Reuters will use distributed microservices architecture for the projects that it can migrate directly to cloud-native services, which will facilitate the adoption of DevOps best practices and containerization benefits. Meanwhile, the company sees its lift-and-shift projects as a stepping stone to later modernization, keeping with customer needs.

AWS Services Used
- Amazon Elastic Kubernetes Service (Amazon EKS) automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.
- AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and more than 100 AWS services.
- Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 connects user requests to internet applications running on AWS or on-premises.
- Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
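The failover behavior described here, serve from the primary Region while its health checks pass, fall back to the lowest-latency healthy Region when they fail, and revert once the route is fixed, can be modeled as a small routing decision. The sketch below is an illustration of that decision logic, with made-up Region latencies; the real system delegates this to Route 53 health checks and latency-based routing rather than application code.

```python
def pick_region(health, latencies, primary="us-east-1"):
    """Hypothetical failover decision mirroring the article's setup:
    keep traffic on the primary Region while it is healthy; on failure,
    fall back to the lowest-latency healthy Region; return None only
    if nothing is healthy."""
    if health.get(primary):
        return primary
    healthy = [region for region, ok in health.items() if ok]
    return min(healthy, key=lambda r: latencies[r]) if healthy else None

# Example: primary fails two health checks and is marked unhealthy.
health = {"us-east-1": False, "us-west-2": True, "eu-west-1": True}
latencies = {"us-east-1": 10, "us-west-2": 70, "eu-west-1": 90}  # ms, illustrative
assert pick_region(health, latencies) == "us-west-2"

# Once the route is fixed, traffic reverts to the original Region.
health["us-east-1"] = True
assert pick_region(health, latencies) == "us-east-1"
```

In the AWS-managed version, the "health" inputs are Route 53 health checks (optionally driven by CloudWatch alarms), and the latency comparison is performed by Route 53's latency-based routing records.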
Reducing Infrastructure Costs by 66 by Migrating to AWS with SilverBlaze _ SilverBlaze Case Study _ AWS.txt
Solution | Cutting Infrastructure Costs by 66% and Improving Scalability on AWS Français Reducing Infrastructure Costs by 66% by Migrating to AWS with SilverBlaze 2023 AWS Application Migration Service and performance Español After comparing several options, SilverBlaze chose AWS for the high performance and low cost that it offered. SilverBlaze did not have experience using AWS, but other businesses within Harris did, which gave SilverBlaze further confidence in choosing AWS. The company also decided to work alongside an AWS Partner to facilitate the migration and chose Atayo because of its proven track record of migrating customers to AWS. Saved employee time 日本語 to customers Outcome | Investing in the Future on AWS Get Started 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Opportunity | Using AWS Application Migration Service to Migrate to AWS in 2 Months for SilverBlaze Improved scalability AWS Services Used reduction in annual infrastructure costs 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский previously spent on troubleshooting عربي Minimized disruptions 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Adam Smith Senior Vice President, SilverBlaze Overview AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also simplifies application modernization with built-in and custom optimization options. Now on AWS, SilverBlaze employees redirect time that they previously spent troubleshooting to working on more important tasks. Additionally, the company can add new features using advanced AWS tools and services. SilverBlaze has recommended AWS and Atayo to other businesses within Harris. 
SilverBlaze, a subsidiary of Harris Computer (Harris), provides software to over 100 utility companies with millions of end customers. SilverBlaze’s applications offer a self-service portal for consumers of electricity, water, gas, and telecommunications services to track their consumption and manage payments. The company had been using a colocation data center to host its solutions for 10 years, but the host was small and couldn’t offer SilverBlaze the scalability and performance that it needed to meet its service-level agreements with customers. To avoid renewing an expensive contract with the data center, SilverBlaze began looking for a cloud provider. In addition to cost savings, SilverBlaze has also improved performance and scalability using AWS. In the colocation data center, the company experienced some performance issues and could scale up only by giving advanced notice to the hosting provider. Now on AWS, SilverBlaze can scale as needed, as well as take advantage of built-in security features that reduce the amount of time that SilverBlaze employees spend managing security and compliance. “Using AWS, we can scale—increasing or decreasing our size—which we couldn’t easily do before. We’ve realized an increase in security, and we can provide our customers with better disaster recovery and high availability that we couldn’t do before,” says Smith. “We’re exceeding our service-level agreements with customers, and the customers are happy.”   66% 2 months Türkçe Using AWS Application Migration Service, SilverBlaze rehosted 45 servers to AWS. SilverBlaze installed agents on its source servers that performed a block-level replication of the servers to AWS in near real time and kept the replicas up to date—with a recovery point objective of seconds—during the whole migration process. One of the biggest benefits of using the service was quickly launching new test environments. 
When SilverBlaze launched a test server using AWS Application Migration Service, the service continued to sync with the original machines; therefore, SilverBlaze could relaunch new test environments without resyncing the replicas each time. “AWS Application Migration Service reduces the time between test cycles significantly,” says Fonseca. English Now that the SilverBlaze application is running on AWS, infrastructure costs have decreased by 66 percent, which equates to hundreds of thousands of dollars in savings per year. “Every month going forward from this point on, we continue realizing those savings,” says Smith. “It’s a huge benefit to our business. We can focus our funds on innovation and technology and building out our products.” SilverBlaze further cost-optimized by rightsizing its instances and by choosing instance types that better fit its use cases. SilverBlaze, a software innovation, development, and consulting firm for utility companies, wanted to reduce infrastructure costs and better meet fluctuating demand by migrating from a colocation data center to the cloud. As usage of its applications increased, SilverBlaze had to pay higher prices to the data center to scale its capacity. “The costs kept increasing, and we weren’t seeing the value of the increase,” says Adam Smith, senior vice president at SilverBlaze. “We knew that we needed to go to one of the large cloud providers.” When I look at the money that we could have saved, I realize that we should have migrated sooner. Now on AWS, we can take advantage of all the features that we couldn’t before.” Customer Stories / Professional Services The migration was completed with minimal disruption to customers. The cutover window was less than 1 hour, with a few additional hours of testing to verify that everything was running smoothly. Because users might access the application at any time of day to view their utility consumption, this quick cutover was important to SilverBlaze and its customers. 
SilverBlaze needed to migrate quickly before the end of its contract and wanted to minimize disruption to customers, so Atayo proposed that SilverBlaze use AWS Application Migration Service. “We’ve used AWS Application Migration Service a lot in the past and have had fantastic success with it,” says Luis Fonseca, solution architect at Atayo. “Not only did the service meet the requirements for what SilverBlaze was trying to accomplish by migrating in a particular timeline, but it also just makes the process of migrating and doing lift-and-shift operations incredibly simple.” The migration began in February 2022 and concluded in April 2022, 1 week before the deadline.

“When I look at the money that we could have saved, I realize that we should have migrated sooner,” says Smith. “Now on AWS, we can take advantage of all the features that we couldn’t before. From a performance perspective, a scalability perspective, and a reliability perspective, we believe that we’re on one of the best solutions out there: AWS.”

About SilverBlaze
SilverBlaze, a subsidiary of Harris Computer, offers software that helps utility consumers make informed decisions for a sustainable future while helping providers reduce costs, drive innovation, and improve the health of their business and the planet.

SilverBlaze chose Amazon Web Services (AWS) as its cloud provider and worked with Atayo Group Inc. (Atayo), an AWS Partner with experience migrating customers to AWS. To complete the migration in a short time, SilverBlaze used AWS Application Migration Service (formerly CloudEndure Migration), which minimizes time-intensive, error-prone manual processes by automating the conversion of source servers to run natively on AWS.
Using AWS Application Migration Service, SilverBlaze migrated 45 servers quickly and simply to AWS with minimal disruption to customers. Now on AWS, SilverBlaze has cut its infrastructure costs, has improved its performance and staff productivity, and can access greater functionality using other AWS services.
Reducing Log Data Storage Cost Using Amazon OpenSearch Service with CMS | Case Study | AWS
The Centers for Medicare & Medicaid Services (CMS) is a federal agency under the US Department of Health & Human Services. CMS administers Medicare to more than 83 million people, effectively making it the United States’ largest health insurer.

The process of designing, developing, and implementing CMS’s new system was quick, going from idea to product in 6 months. CMS had worked alongside AWS for about 10 years prior to the beginning of this project, so the agency already had a system for approving projects being developed on AWS. Additionally, CMS was able to implement the new system so quickly because Amazon OpenSearch Service was simple and intuitive. Unlike the old system, which required expertise to use properly, CMS employees have had a much easier time adopting Amazon OpenSearch Service. “We didn’t have to send engineers to get training,” says Spitz. “The ease of use of Amazon OpenSearch Service has made it so much simpler for our security operations center to very quickly build dashboards and do forensics.”

Outcome | Increasing Efficiency and Savings for the Future
Ultimately, the project pressed CMS to consider how it can use all types of log data more efficiently and in more out-of-the-box ways. “Because we’re using Amazon OpenSearch Service, we’ve been able to redirect resources to other missions. Instead of spending millions of dollars on repeatable security functions, we can invest that money toward needs like Medicare modernization,” says Spitz.
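The kind of forensic search a security operations center might build over flow logs can be sketched as an OpenSearch query DSL body. This is illustrative only: the index pattern and field names (`srcaddr`, `@timestamp`) are hypothetical, not CMS’s actual schema.

```python
# Hedged sketch: an OpenSearch query DSL body for forensics over flow logs.
# Field and index names are placeholders, not the agency's real scheme.

def forensics_query(src_addr, start, end, size=100):
    """Build a query for events from one source address in a time window."""
    return {
        "size": size,
        "sort": [{"@timestamp": "desc"}],
        "query": {
            "bool": {
                "filter": [
                    {"term": {"srcaddr": src_addr}},
                    {"range": {"@timestamp": {"gte": start, "lte": end}}},
                ]
            }
        },
    }

# The body would be POSTed to the domain endpoint, e.g. /vpc-flow-logs-*/_search.
body = forensics_query("203.0.113.7", "2023-01-01T00:00:00Z", "2023-01-02T00:00:00Z")
```

The same query shape can back a saved dashboard panel or an alerting monitor, which is what makes the “build dashboards and do forensics” workflow quick to stand up.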
Opportunity | Using Amazon OpenSearch Service to Reduce Data Log Storage Costs for CMS
CMS is one of the largest purchasers of healthcare in the world. Medicare, Medicaid, and CHIP provide healthcare for one in four Americans. Medicare enrollment has increased from 19 million beneficiaries in 1966 to approximately 64 million beneficiaries, and Medicaid enrollment has increased from 11 million beneficiaries in 1966 to about 83 million. Administering these programs amounts to CMS ingesting 14–15 TB of log data every single day. Over the years, storage on the old system became increasingly expensive because the massive amounts of log data that ran through CMS only grew. CMS needed to reduce the costs of its log data storage system, and it also wanted a cost-effective solution to perform log data analysis and to respond to security issues more quickly.

Solution | Cutting Log Data Storage Costs by 67% and Accessing New Features
Now, using Amazon OpenSearch Service, CMS saves 67 percent of the cost of its previous log data storage solution. The solution ingests 2 TB of log flow data daily, which is stored in buckets in Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve any amount of data from anywhere.
“Amazon S3 plays a huge role in the overall solution, keeping costs down but also making the data readily available and simple to consume using Amazon OpenSearch Service,” says Spitz. The solution then uses AWS Lambda, a serverless, event-driven compute service, to sort the data and send it to the appropriate Amazon OpenSearch Service repositories. “Being able to use Amazon OpenSearch Service and Amazon S3 significantly reduces our costs,” says Spitz.

Overview
The Centers for Medicare & Medicaid Services (CMS), the largest purchaser of healthcare in the United States, had to reduce the cost of its log data storage. The agency produces enormous amounts of log data, most of which is stored and reviewed only when issues occur. Paying for storage with its centralized logging system was becoming cost prohibitive. CMS began working out an alternative using Amazon Web Services (AWS) cloud-native services. In just 6 months, CMS developed a proof of concept, obtained approval, developed, finalized, and deployed a new cloud-based log data storage system on AWS that costs 67 percent less and makes data analysis simpler.
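The sorting step described above, a Lambda function deciding which Amazon OpenSearch Service index each log object belongs in, could look roughly like the following. The S3 key layout (`<log-type>/<yyyy>/<mm>/<dd>/...`) and the daily index naming are hypothetical, not CMS’s actual scheme, and the bulk writer is injected so the sketch stays free of AWS dependencies.

```python
# Hedged sketch of an S3-event-driven routing Lambda. Key layout and index
# naming are assumptions for illustration, not the agency's real scheme.

def index_for_key(s3_key):
    """Map an S3 object key to a daily OpenSearch index name."""
    log_type, yyyy, mm, dd = s3_key.split("/")[:4]
    return f"{log_type}-{yyyy}.{mm}.{dd}"

def handler(event, send_bulk):
    """Route each object in an S3 event notification to its index.
    `send_bulk` stands in for an OpenSearch client's bulk-ingest call."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        send_bulk(index=index_for_key(key), s3_key=key)

# Example S3 event notification carrying one flow-log object:
event = {"Records": [{"s3": {"object": {"key": "vpcflow/2023/06/01/part-00.gz"}}}]}
routed = []
handler(event, lambda **kw: routed.append(kw))
```

Keeping the raw objects in Amazon S3 and routing only references through Lambda is also what makes the replay described later cheap: re-ingesting a time window is just re-driving the same handler over the stored keys.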
Figure 1: CMS’s serverless virtual private cloud flow log ingestion pipeline and Amazon OpenSearch Service log analytics solution

CMS chose to use Amazon OpenSearch Service, which securely unlocks near-real-time searching, monitoring, and analysis of business and operational data for use cases such as application monitoring, log analytics, observability, and website search. Using Amazon OpenSearch Service presented a low-cost alternative for log ingestion and storage that would be simple to use compared with other possible solutions, including open-source options, which would be costly to develop, build, and maintain. “We weren’t looking at it just as a base to store data,” says Bob Spitz, founder of alignIT and consultant for CMS. “We made sure that Amazon OpenSearch Service would meet all our needs: quick data ingesting, low amounts of data copying, and rapid data insights.”

The agency’s online systems face constant security threats from international and domestic actors. CMS primarily uses Amazon OpenSearch Service to quickly identify what data has been affected during a security issue. Before reimagining its logging system, CMS would effectively lose the logging data that could show the agency what had happened, and it would have to manually pull missing datasets. Now, the system automatically saves historical data and can queue the data for reingestion if needed. This means CMS can use Amazon OpenSearch Service to automatically replay data from the system’s virtual private cloud flow logs that the system created before and during the issue. Instead of taking 2 weeks for two engineers to find what data was lost, CMS can let the system self-fix. CMS also uses AWS tools to provide near-real-time monitoring and analysis.
The agency builds dashboards in Amazon OpenSearch Service to better process data and sets automatic alerts in case of security issues. CMS further increases data security by using access management and security features in Amazon S3 to restrict access to data and keep it secure when it is shared between systems.

CMS has no plans for slowing down in its quest for efficiency. Currently, the log data storage system is used mostly by CMS’s security operations team. Because the system is so effective and simple to use, CMS plans to spread the technology to other application teams by making the data available as a shared service. “By using AWS, we can plan for the future and make sure that CMS IT systems are effective, efficient, and secure,” says Spitz.

About the Centers for Medicare & Medicaid Services
CMS administers Medicare, Medicaid, the Children’s Health Insurance Program (CHIP), and the Clinical Laboratory Improvement Amendments of 1988 program. The passage of the Patient Protection and Affordable Care Act led to the expansion of CMS’s role in the healthcare arena beyond its traditional role of administering Medicare, Medicaid, and CHIP. Over the last 50 years, CMS evolved into the largest purchaser of healthcare and now maintains the nation’s largest collection of healthcare data.
Reducing Time to Results, Carbon Footprint, and Cost Using AWS HPC | Baker Hughes Case Study | AWS
Baker Hughes Reduces Time to Results, Carbon Footprint, and Cost Using AWS HPC

Baker Hughes is also benefiting from Amazon’s path to powering its operations with 100 percent renewable energy as part of The Climate Pledge. The company has reduced the carbon footprint of its HPC workloads by 99 percent compared with on premises, based on the AWS customer carbon footprint tool, which uses simple-to-understand data visualizations to help customers review, evaluate, and forecast emissions. Baker Hughes plans to continue its digital transformation, focusing on efficiency as a way to reduce emissions. By using advanced AWS technology, Baker Hughes optimizes its HPC applications while supporting the company’s long-term strategic vision to facilitate the global energy transition.

The solution went live in the fourth quarter of 2021. Now more than 150 TPS engineers in Italy, India, and the United States run as many simulations as needed prior to physical tests, leading to better accuracy with fewer test iterations. Plus, Baker Hughes onboards multiple users every month without impacting HPC job performance. “We were initially planning to migrate the equivalent compute capacity of 100 teraflops to AWS, but by giving engineers the possibility to scale, the consumption spiked by four times within 3 months of go-live,” says Yogesh Kulkarni, senior director, CTO India at Baker Hughes.
To run CFD simulations, Baker Hughes uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. The solution accelerates HPC by attaching Intel-based Amazon EC2 instances to Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that supports applications requiring high levels of internode communication at scale. EFA offers dedicated throughput of 100 gigabits per second per HPC job, whereas the traditional network interface offers 300 gigabits per second of throughput shared across multiple HPC jobs. As a result, HPC jobs using EFA have lower latency than jobs on the traditional network interface, at a fraction of the cost. To further improve performance and reduce network latency, Baker Hughes deploys Amazon EC2 fleets of instances in placement groups, one per HPC job, based on the shared-nothing architecture principle. Amazon EC2 spreads new instances across the underlying hardware as they launch, and placement groups influence the placement of interdependent instances to meet the throughput needs of the workload. By running on AWS, Baker Hughes avoids the hardware lock-in inherent to an on-premises HPC solution. “For Ansys jobs, we now have the ability to use the best price-performance compute instances and continually onboard the latest generation processors as soon as they are available,” says Yogesh.

Baker Hughes migrated its computational fluid dynamics applications to AWS, cutting gas turbine design cycle time, saving 40 percent on HPC costs, and reducing its carbon footprint by 99 percent.
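The per-job cluster placement group plus EFA setup described above can be expressed as the parameter dictionaries that would be passed to boto3’s `ec2.create_placement_group(**...)` and `ec2.run_instances(**...)`. This is a sketch under stated assumptions: the instance type, AMI, subnet, and fleet size are placeholders, not Baker Hughes values.

```python
# Hedged sketch: per-HPC-job placement group and EFA-enabled launch params.
# All concrete IDs below are placeholders for illustration only.

def placement_group_params(job_id):
    # One cluster placement group per HPC job packs that job's instances
    # close together on the network, following the shared-nothing principle.
    return {"GroupName": f"hpc-job-{job_id}", "Strategy": "cluster"}

def run_instances_params(job_id, count, ami_id, instance_type, subnet_id):
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "Placement": {"GroupName": f"hpc-job-{job_id}"},
        # Attach an Elastic Fabric Adapter as the primary network interface
        # for low-latency internode MPI traffic.
        "NetworkInterfaces": [
            {"DeviceIndex": 0, "InterfaceType": "efa", "SubnetId": subnet_id}
        ],
    }

pg = placement_group_params("cfd-001")
run = run_instances_params("cfd-001", 16, "ami-placeholder",
                           "c5n.18xlarge", "subnet-placeholder")
```

Creating a fresh placement group per job, rather than sharing one, is what gives each solver run its own tightly packed, dedicated network neighborhood.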
“Running Ansys simulations on AWS helps TPS to accelerate its engineering schedules and achieve a faster time to market.”

Solution | Simplifying Customer Experience and Improving Efficiency of HPC Jobs Using Amazon EC2
For more than 100 years, Baker Hughes has been a global leader in industrial turbomachinery and innovation through its Turbomachinery and Process Solutions (TPS) Research Center. Based in Florence, Italy, TPS provides the turbine, compressor, and pump technology that is currently used by the energy industry. Its NovaLT gas turbines set new standards in greenhouse gas emissions, efficiency, and reliability.

Baker Hughes uses several storage options on AWS for its CFD workloads. To store and protect data, Baker Hughes uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon S3 works natively alongside Amazon FSx for Lustre, which provides fully managed shared storage with the scalability and performance of the popular Lustre file system and handles the company’s most input- and output-intensive workloads. When linked to an Amazon S3 bucket, an FSx for Lustre file system transparently presents Amazon S3 objects as files and lets engineers write results back to Amazon S3. Baker Hughes streamlines its pipeline for continuous integration and continuous delivery through automated deployments using AWS CodePipeline, a fully managed continuous delivery service that helps organizations automate release pipelines. And engineers can log in and run HPC jobs from any secure connection using Amazon WorkSpaces, a fully managed desktop virtualization service that provides secure, reliable, and scalable access from any location.

Outcome | Reducing Wait Time and Carbon Footprint by over 90% and Cost by 40% on AWS
TPS engineers can run the most resource-intensive Ansys jobs with 98 percent less wait time and 26 percent faster runtimes, using the same license pool on AWS as with the on-premises HPC solution, reducing the time to results. The engineers can now run design simulations in parallel on AWS, whereas they ran sequentially on premises. Plus, the most complex simulations, with specific memory requirements that could not run on premises, can now run on AWS. The use of AWS cost-optimization levers—AWS MAP, Savings Plans, and the AWS Enterprise Discount Program—helped Baker Hughes reduce its HPC spend by 40 percent. The collaboration between the globally distributed Baker Hughes teams and the AWS network of experts was instrumental to these outcomes.

Using the runtime performance of an Ansys Fluent job as a proof of concept, Baker Hughes compared cloud providers in early 2021.
AWS Professional Services, a global team of experts that helps organizations realize desired business outcomes when using AWS, delivered the proof of concept within weeks and on budget, proving the best runtime performance. To accelerate its cloud migration and modernization journey, Baker Hughes used the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud migration program based upon the experience of AWS in migrating thousands of enterprise customers to the cloud. Baker Hughes used AWS MAP to optimize its cloud spend alongside the company’s use of Savings Plans and the AWS Enterprise Discount Program, flexible and custom-tailored pricing models for AWS services.

Opportunity | Seeking an Elastic HPC Solution
To run simulations for designing gas turbines, TPS engineers had been using on-premises HPC solutions for CFD applications from Ansys, an AWS Partner. These included Ansys Fluent for fluid simulation, Ansys CFX for turbomachinery applications, and Ansys Mechanical for structural engineering. Resource capacity bottlenecks allowed only limited simulations, with long wait and run times for the engineers prior to running expensive and burdensome physical tests. “To remove this bottleneck and better manage the peaks, we needed to expand capacity to 400 teraflops, but we didn’t want to pay for peak capacity yearlong,” says David Meyer, director of digital operations for HPC and remote visualization for Baker Hughes. “We needed an elastic solution for an optimal total cost of ownership.”

About Baker Hughes
Baker Hughes is a leading energy technology company with approximately 54,000 employees operating in over 120 countries. It designs, manufactures, and services transformative technologies to help take energy forward.

Engineers at Baker Hughes were using an on-premises high performance computing (HPC) solution to simulate gas turbine designs, but it couldn’t scale due to resource capacity bottlenecks. Engineers faced long simulation wait and run times with an increased need for physical prototypes.
Baker Hughes chose to migrate its computational fluid dynamics (CFD) applications from on premises to Amazon Web Services (AWS). As a result, the company saved 40 percent on HPC costs and reduced wait time by 98 percent, run time by 26 percent, and the carbon footprint of the HPC solution by 99 percent, helping the company achieve a faster time to results.
Reinventing the data experience: Use generative AI and modern data architecture to unlock insights | AWS Machine Learning Blog
AWS Machine Learning Blog

Reinventing the data experience: Use generative AI and modern data architecture to unlock insights
by Navneet Tuteja and Sovik Nath | on 13 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, Generative AI, Technical How-to

Implementing a modern data architecture provides a scalable method to integrate data from disparate sources. By organizing data by business domains instead of infrastructure, each domain can choose tools that suit its needs. Organizations can maximize the value of their modern data architecture with generative AI solutions while innovating continuously. Natural language capabilities allow non-technical users to query data in conversational English rather than complex SQL. However, realizing the full benefits requires overcoming some challenges. The AI and language models must identify the appropriate data sources, generate effective SQL queries, and produce coherent responses with embedded results at scale. They also need a user interface for natural language questions. Overall, implementing a modern data architecture and generative AI techniques with AWS is a promising approach for gleaning and disseminating key insights from diverse, expansive data at an enterprise scale.

The latest offering for generative AI from AWS is Amazon Bedrock, a fully managed service and the easiest way to build and scale generative AI applications with foundation models. AWS also offers foundation models through Amazon SageMaker JumpStart as Amazon SageMaker endpoints. The combination of large language models (LLMs), including the ease of integration that Amazon Bedrock offers, and a scalable, domain-oriented data infrastructure positions this as an intelligent method of tapping into the abundant information held in various analytics databases and data lakes.
In this post, we showcase a scenario where a company has deployed a modern data architecture with data residing on multiple databases and APIs: legal data on Amazon Simple Storage Service (Amazon S3), human resources data on Amazon Relational Database Service (Amazon RDS), sales and marketing data on Amazon Redshift, financial market data on Snowflake, a third-party data warehouse solution, and product data exposed as an API. This implementation aims to enhance the productivity of the enterprise’s business analysts, product owners, and business domain experts. All of this is achieved through the use of generative AI in this domain mesh architecture, which enables the company to achieve its business objectives more efficiently. The solution has the option to include LLMs from JumpStart as a SageMaker endpoint as well as third-party models. We provide enterprise users with a medium for asking fact-based questions without requiring underlying knowledge of data channels, thereby abstracting the complexities of writing simple to complex SQL queries.

Solution overview
A modern data architecture on AWS applies artificial intelligence and natural language processing to query multiple analytics databases. By using services such as Amazon Redshift, Amazon RDS, Snowflake, Amazon Athena, and AWS Glue, it creates a scalable solution to integrate data from various sources. Using LangChain, a powerful library for working with LLMs, including foundation models from Amazon Bedrock and JumpStart in Amazon SageMaker Studio notebooks, a system is built where users can ask business questions in natural English and receive answers with data drawn from the relevant databases.

The following diagram illustrates the architecture. The hybrid architecture uses multiple databases and LLMs, with foundation models from Amazon Bedrock and JumpStart for data source identification, SQL generation, and text generation with results.
The following diagram illustrates the specific workflow steps for our solution. The steps are as follows:

1. A business user provides an English question prompt.
2. An AWS Glue crawler is scheduled to run at frequent intervals to extract metadata from databases and create table definitions in the AWS Glue Data Catalog. The Data Catalog is input to Chain Sequence 1 (see the preceding diagram).
3. LangChain, a tool to work with LLMs and prompts, is used in Studio notebooks. LangChain requires an LLM to be defined.
4. As part of Chain Sequence 1, the prompt and Data Catalog metadata are passed to an LLM, hosted on a SageMaker endpoint, to identify the relevant database and table using LangChain.
5. The prompt and the identified database and table are passed to Chain Sequence 2.
6. LangChain establishes a connection to the database and runs the SQL query to get the results.
7. The results are passed to the LLM to generate an English answer with the data.
8. The user receives an English answer to their prompt, querying data from different databases.

The following sections explain some of the key steps with associated code. To dive deeper into the solution and code for all the steps shown here, refer to the GitHub repo. The following diagram shows the sequence of steps followed:

Prerequisites
You can use any databases that are compatible with SQLAlchemy to generate responses from LLMs and LangChain. However, these databases must have their metadata registered with the AWS Glue Data Catalog. Additionally, you will need access to LLMs through either JumpStart or API keys.

Connect to databases using SQLAlchemy
LangChain uses SQLAlchemy to connect to SQL databases. We initialize LangChain’s SQLDatabase function by creating an engine and establishing a connection for each data source.
The following is a sample of how to connect to an Amazon Aurora MySQL-Compatible Edition serverless database and include only the employees table:

```python
# connect to AWS Aurora MySQL
cluster_arn = <cluster_arn>
secret_arn = <secret_arn>
engine_rds = create_engine('mysql+auroradataapi://:@/employees', echo=True,
    connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))
dbrds = SQLDatabase(engine_rds, include_tables=['employees'])
```

Next, we build prompts used by Chain Sequence 1 to identify the database and the table name based on the user question.

Generate dynamic prompt templates
We use the AWS Glue Data Catalog, which is designed to store and manage metadata information, to identify the source of data for a user query and build prompts for Chain Sequence 1, as detailed in the following steps.

We build a Data Catalog by crawling through the metadata of multiple data sources using the JDBC connection used in the demonstration. With the Boto3 library, we build a consolidated view of the Data Catalog from multiple data sources. The following is a sample of how to get the metadata of the employees table from the Data Catalog for the Aurora MySQL database:

```python
# retrieve metadata from glue data catalog
glue_tables_rds = glue_client.get_tables(DatabaseName=<database_name>, MaxResults=1000)
for table in glue_tables_rds['TableList']:
    for column in table['StorageDescriptor']['Columns']:
        columns_str = columns_str + '\n' + ('rdsmysql|employees|' + table['Name'] + "|" + column['Name'])
```

A consolidated Data Catalog has details on the data source, such as schema, table names, and column names. The following is a sample of the output of the consolidated Data Catalog:

```
database|schema|table|column_names
redshift|tickit|tickit_sales|listid
rdsmysql|employees|employees|emp_no
....
s3|none|claims|policy_id
```
We pass the consolidated Data Catalog to the prompt template and define the prompts used by LangChain:

```python
prompt_template = """
From the table below, find the database (in column database) which will contain the data (in corresponding column_names) to answer the question {query} \n
""" + glue_catalog + """
Give your answer as database == \n
Also, give your answer as database.table ==
"""
```

Chain Sequence 1: Detect source metadata for the user query using LangChain and an LLM
We pass the prompt template generated in the previous step to the prompt, along with the user query, to the LangChain model to find the best data source to answer the question. LangChain uses the LLM model of our choice to detect source metadata. Use the following code to use an LLM from JumpStart or third-party models:

```python
# define your LLM model here
llm = <LLM>
# pass prompt template and user query to the prompt
PROMPT = PromptTemplate(template=prompt_template, input_variables=["query"])
# define llm chain
llm_chain = LLMChain(prompt=PROMPT, llm=llm)
# run the query and save to generated texts
generated_texts = llm_chain.run(query)
```

The generated text contains information such as the database and table names against which the user query is run. For example, for the user query “Name all employees with birth date this month,” generated_text has the information database == rdsmysql and database.table == rdsmysql.employees. Next, we pass the details of the human resources domain, Aurora MySQL database, and employees table to Chain Sequence 2.

Chain Sequence 2: Retrieve responses from the data sources to answer the user query
Next, we run LangChain’s SQL database chain to convert text to SQL and implicitly run the generated SQL against the database to retrieve the results in simple, readable language.
We start by defining a prompt template that instructs the LLM to generate SQL in a syntactically correct dialect and then run it against the database:

```python
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Only use the following tables:

{table_info}

If someone asks for the sales, they really mean the tickit.sales table.

Question: {input}"""

# define the prompt
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE)
```

Finally, we pass the LLM, database connection, and prompt to the SQL database chain and run the SQL query:

```python
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT)
response = db_chain.run(query)
```

For example, for the user query “Name all employees with birth date this month,” the answer is as follows:

```
Question: Name all employees with birth date this month
SELECT * FROM employees WHERE MONTH(birth_date) = MONTH(CURRENT_DATE());
User Response:
The employees with birthdays this month are:
Christian Koblick
Tzvetan Zielinski
```

Clean up
After you run the modern data architecture with generative AI, make sure to clean up any resources that won’t be used. Shut down and delete the databases used (Amazon Redshift, Amazon RDS, Snowflake). In addition, delete the data in Amazon S3 and stop any Studio notebook instances so you don’t incur further charges. If you used JumpStart to deploy an LLM as a SageMaker real-time endpoint, delete the endpoint through either the SageMaker console or Studio.

Conclusion
In this post, we integrated a modern data architecture with generative AI and LLMs within SageMaker. This solution uses various text-to-text foundation models from JumpStart as well as third-party models. This hybrid approach identifies data sources, writes SQL queries, and generates responses with query results. It uses Amazon Redshift, Amazon RDS, Snowflake, and LLMs.
To improve the solution, you could add more databases, a UI for English queries, prompt engineering, and data tools. This could become an intelligent, unified way to get insights from multiple data stores. To dive deeper into the solution and the code shown in this post, check out the GitHub repo. Also, refer to Amazon Bedrock for use cases on generative AI, foundation models, and large language models.

Appendix

Example prompts

- Domain: Sales & Marketing | Database/API: Amazon Redshift
  Prompt: How many ticket sales are there?
  SQL (generated by LLM): SELECT COUNT(*) AS total_sales FROM tickit.sales;
  Output: There are 172,456 ticket sales.

- Domain: Sales & Marketing | Database/API: Amazon Redshift
  Prompt: What was the total commission for the ticket sales in the year 2008?
  SQL (generated by LLM): SELECT SUM(commission) AS total_commission FROM tickit.sales WHERE EXTRACT(YEAR FROM saletime) = 2008
  Output: The total commission for ticket sales in the year 2008 was $16,614,814.65.

- Domain: Legal | Database/API: S3
  Prompt: How many frauds happened in the year 2023?
  SQL (generated by LLM): SELECT count(*) FROM claims WHERE extract(year from write_time) = 2023 AND fraud = 1;
  Output: There were 164 fraud claims in 2023.

- Domain: Legal | Database/API: S3
  Prompt: How many policies were claimed this year?
  SQL (generated by LLM): SELECT count(*) FROM claims;
  Output: There were 5000 claims made this year.

- Domain: Human Resources | Database/API: Amazon Aurora MySQL
  Prompt: Name all employees with birth date this month
  SQL (generated by LLM): SELECT * FROM employees WHERE MONTH(birth_date) = MONTH(CURRENT_DATE());
  Output: The employees with birthdays this month are: Christian Koblick, Tzvetan Zielinski, Kazuhito Cappelletti, Yinghua Dredge

- Domain: Human Resources | Database/API: Amazon Aurora MySQL
  Prompt: How many employees were hired before 1990?
  SQL (generated by LLM): SELECT COUNT(*) AS 'Number of employees hired before 1990' FROM employees WHERE hire_date < '1990-01-01'
  Output: 29 employees were hired before 1990.

- Domain: Finance and Investments | Database/API: Snowflake
  Prompt: Which stock performed the best and the worst in May of 2013?
  SQL (generated by LLM): SELECT name, MAX(close) AS max_close, MIN(close) AS min_close FROM all_stocks_5yr WHERE date BETWEEN '2013-05-01' AND '2013-05-31' GROUP BY name ORDER BY max_close DESC, min_close ASC
  Output: The stock that performed the best in May 2013 was AnySock1 (ASTOCK1) with a maximum closing price of $842.50. The stock that performed the worst was AnySock2 (ASTOCK2) with a minimum closing price of $3.22.

- Domain: Finance and Investments | Database/API: Snowflake
  Prompt: What is the average volume of stocks traded in July of 2013?
  SQL (generated by LLM): SELECT AVG(volume) AS average_volume FROM all_stocks_5yr WHERE date BETWEEN '2013-07-01' AND '2013-07-31'
  Output: The average volume of stocks traded in July 2013 was 4,374,177.

- Domain: Product | Database/API: Weather API
  Prompt: What is the weather like right now in New York City in degrees Fahrenheit?

About the Authors

Navneet Tuteja is a Data Specialist at Amazon Web Services. Before joining AWS, Navneet worked as a facilitator for organizations seeking to modernize their data architectures and implement comprehensive AI/ML solutions. She holds an engineering degree from Thapar University, as well as a master’s degree in statistics from Texas A&M University.

Sovik Kumar Nath is an AI/ML solution architect with AWS. He has extensive experience designing end-to-end machine learning and business analytics solutions in finance, operations, marketing, healthcare, supply chain management, and IoT. Sovik has published articles and holds a patent in ML model monitoring. He holds double master’s degrees from the University of South Florida and the University of Fribourg, Switzerland, and a bachelor’s degree from the Indian Institute of Technology, Kharagpur. Outside of work, Sovik enjoys traveling, taking ferry rides, and watching movies.
Relay Therapeutics Case Study.txt
Since deploying the AWS high-performance computing solution, Relay Therapeutics has run multiple screens of five billion compounds. Because of the scalability offered by AWS, scientists can run the screens on multiple snapshots of the same moving protein target.

Typically, in traditional IT environments, pharmaceutical companies virtually screen a few million compounds at a time. Relay Therapeutics was determined to scale that number into the billions and turned to Amazon Web Services (AWS) to solve the challenge. “The major factor in selecting AWS over other cloud providers is the support we received from the start,” says Pat Walters, senior vice president of computation at Relay Therapeutics. “And it has continued to help us make our processes work more efficiently.”

Pierce estimates that Amazon EC2 Spot Instances reduce compute costs by 50 percent compared to conducting virtual screening on premises. AWS and Relay Therapeutics also built parameter checks into the process to keep analysis costs from exceeding the budgeted amount. “We get alerted if a job will go beyond a set expense threshold,” Walters explains. “That tells us a parameter is off so we can terminate the job or make an adjustment on the fly.”

Benefits of AWS

By accessing close to 100,000 CPUs on AWS, the Relay Therapeutics team is able to perform the analysis of billions of compounds in one day. It solved the CPU cost challenge by capitalizing on the elastic capacity of Amazon EC2 Spot Instances, which scale compute resources as required for each analysis job.

On AWS, the company also simplified virtual screening so scientists can use open source scripts to kick off analysis on AWS Batch.
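The case study doesn't show those scripts, but a screen over billions of compounds maps naturally onto an AWS Batch array job. The sketch below (hypothetical names and sizes, not Relay's code) computes the array size and per-task environment that would be handed to a Batch job submission:

```python
# Hypothetical sketch: split a virtual screen into an AWS Batch array
# job, where each child task screens one shard of the compound library.

def plan_array_job(total_compounds: int, compounds_per_task: int) -> dict:
    """Return the Batch array size and per-task shard parameters."""
    tasks = -(-total_compounds // compounds_per_task)  # ceiling division
    return {
        "arrayProperties": {"size": tasks},
        # Each child reads AWS_BATCH_JOB_ARRAY_INDEX and screens
        # compounds [index * per_task, min((index + 1) * per_task, total)).
        "containerOverrides": {
            "environment": [
                {"name": "COMPOUNDS_PER_TASK", "value": str(compounds_per_task)},
                {"name": "TOTAL_COMPOUNDS", "value": str(total_compounds)},
            ]
        },
    }

job = plan_array_job(5_000_000_000, 1_000_000)
print(job["arrayProperties"]["size"])  # 5000
# In practice this dict would be passed to boto3, roughly:
#   batch.submit_job(jobName="virtual-screen", jobQueue=..., jobDefinition=..., **job)
```

Array jobs let Batch fan the shards out across however many Spot CPUs are available, which matches the bursty scaling the case study describes.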
Relay Therapeutics Uses AWS to Accelerate Drug Discovery

Processing Billions of Molecules in 24 Hours

The scientists then rapidly analyze the data by taking advantage of Amazon Athena, a serverless query service with no infrastructure to manage, which can be spun up and turned off as needed.

- Reduces compute resource costs by 50%
- Analyzes 5 billion molecular compounds in 1 day vs. months
- Enables scientists to easily run complex analysis

In the future, Relay Therapeutics anticipates scientists may be able to virtually screen commercially available libraries of 10 billion compounds, which will require integrating machine learning to control the costs. Amazon EMR will be among the important components of this effort.

Simplified Process for Scientists

By relying on AWS Batch—a cloud-native orchestration service—in conjunction with Spot Instances, Relay Therapeutics easily scales to the required number of CPUs for each virtual screen. AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.

AWS Services Used

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.

About Relay Therapeutics

Based in Massachusetts, Relay Therapeutics is committed to creating medicines that have a transformative impact on patients. The company combines unprecedented computational power with leading-edge experimental approaches across structural biology, biophysics, chemistry, and biology.

Achieving the Impossible
A few years ago, the Relay Therapeutics team did not think it was possible to run virtual screening at the scale the company has now achieved, with scientists analyzing tables with a billion rows. “Even sorting a table with billions of rows is not a trivial exercise,” Walters emphasizes. “By using AWS technologies, we can deal with all that information efficiently, which helps us strive toward our ultimate goal—getting medicines to patients faster than we previously thought possible.”

Relay Therapeutics is a precision medicine company transforming the drug discovery process by leveraging unparalleled insights into protein motion. Prior to testing promising compounds in the lab, scientists have to consider a molecular universe of available starting points numbering close to 10 billion compounds. They need to filter this extensive set down to the 100–200 compounds most likely to bind to the biological target. By analyzing more compounds, scientists increase the chances they will find the right molecules to test in the lab. In a typical on-premises data center, with thousands of CPUs, the analysis of a billion compounds could take months. Deploying sufficient CPUs in an on-premises data center would also be cost-prohibitive, particularly due to the “bursty” nature of the analyses.

On the Horizon: Processing 10 Billion Compounds

Relay Therapeutics leverages unused Amazon EC2 capacity in the AWS Cloud at up to a 90 percent discount compared to pricing for On-Demand Instances. Scientists don’t have to worry about complex programming, so they have more time to analyze results and optimize the drug discovery process.

- Validates analysis parameters to avoid cloud cost overruns

2020

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
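The screening funnel described above, narrowing billions of starting points to the 100–200 most promising, is essentially a top-k selection over a huge score table. As a toy illustration only (random scores stand in for docking results; nothing about Relay's actual pipeline is implied), top-k can be done without fully sorting the table:

```python
import heapq
import random

# Toy top-k selection: keep the 200 best-scoring compounds out of a
# large pool without sorting the whole table.
random.seed(0)
scores = ((f"compound-{i}", random.random()) for i in range(1_000_000))

# heapq.nlargest maintains a 200-element heap while streaming the pool
top_200 = heapq.nlargest(200, scores, key=lambda pair: pair[1])

print(len(top_200), top_200[0][1] >= top_200[-1][1])  # 200 True
```

At billion-row scale the same idea runs distributed (for example, as a SQL `ORDER BY ... LIMIT` in Athena), but the selection logic is the same.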
“Orchestrating that many jobs manually in a traditional system is a nightmare,” says Levi Pierce, director of computation at Relay Therapeutics. “But using AWS Batch saves us a lot of time.”
Resilience Builds a Global Data Mesh for Lab Connectivity on AWS _ Case Study _ AWS.txt
2023

Opportunity | Automating and Accelerating Data Transfer for Resilience

In less than 3 months, Resilience’s Digital Research & Development organization, working closely with its data engineering and networking teams, built AWS infrastructure to power its globally connected system. The solution uses AWS DataSync, a secure, online service that automates and accelerates data transfer, to migrate data from its on-premises systems to the AWS Cloud. This data is transferred securely using AWS PrivateLink, which establishes connectivity between virtual private clouds and AWS services without exposing data to the internet. The data is then stored on Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere, and can be accessed by both scientific and business users across Resilience’s organization. “With a centrally managed system for data storage on AWS, we can seamlessly integrate with other applications and analytics software, whether they are third-party software-as-a-service solutions or internally developed,” says Mendez.

To date, Resilience has uploaded more than 75 TB of research data from over 100 lab devices to Amazon S3.

AWS Services Used

AWS Cloud Development Kit (AWS CDK) accelerates cloud development using common programming languages to model your applications.

Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
Scientific and business users across Resilience can now review, process, and analyze their instrument data on Amazon S3 to achieve their research and development goals. The company relies on AWS Internet of Things services such as AWS IoT Greengrass, an open-source edge runtime and cloud service, to automatically invoke the migration tasks on demand, providing scientists with access to their data on the cloud in under 5 minutes. By using AWS Cloud Development Kit (AWS CDK), which accelerates cloud development using common programming languages, to model its applications, Resilience can onboard new devices and bring entire sites online in a matter of days. With its infrastructure-as-code approach, Resilience is helping dozens of research teams expedite their work. “By facilitating near-real-time data upload from each of our sites, we can provide strong data backup while helping teams use insights in a cross-functional, cross-site manner,” says Jonathan Rivernider, lab systems engineer at Resilience. “This puts data into the hands of scientists faster to accelerate learning cycles.”   On the cloud, Resilience’s lab data needed to be organized in a way that aligns with how scientists use their data. To accomplish this, the team designed an Amazon S3 data lake using the AWS Prescriptive Guidance for Data Lake Architectures and engaged Quilt Data, an AWS Partner, to assign governance controls. These controls turn the instrument datasets into data packages, an immutable record of raw lab data, analyzed data, and associated lab files, including graphs and PowerPoints. Now, as data moves through analysis stages by scientists, data packages are maintained on Amazon S3 with versioning, metadata, and lineage information. This data is searchable in a user portal for authorized lab and business users and integrates with their electronic library notebooks.   
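The data-package idea, an immutable bundle of raw lab data, analysis outputs, and metadata with versioning and lineage, can be sketched with a content-addressed manifest. This mimics the concept only; it is not Quilt's actual API, and all names and sample data are invented:

```python
import hashlib
import json

# Illustrative "data package": hash every file, then derive a top-level
# version hash from the manifest, so any change yields a new version.

def make_package(name: str, files: dict[str, bytes], metadata: dict) -> dict:
    entries = {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}
    top_hash = hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()
    return {"name": name, "metadata": metadata, "entries": entries, "version": top_hash}

pkg = make_package(
    "bioreactor/run-042",
    {"raw/telemetry.csv": b"t,ph\n0,7.1\n", "analysis/summary.txt": b"pH stable"},
    {"site": "lab-06", "instrument": "bioreactor"},
)
print(len(pkg["entries"]))  # 2
```

Because the version is derived from content, identical inputs always produce the same version, and editing any file produces a new one, which is the property that makes packages a reliable record as data moves through analysis stages.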
Using Amazon CloudWatch, a monitoring service that provides operational insights for various AWS resources, the teams were also able to build a robust logging system for all data transfer tasks. Now, Resilience can verify that proper alerts are in place to confirm the operational health of the system and each lab instrument. “Given the sensitive nature of the research data, security of this system is paramount,” says Jiro Koga, senior systems engineer at Resilience. “By incorporating strict network firewall rules, client certificates, and secure endpoints using AWS PrivateLink, all data is safely transferred with encryption in flight and at rest.”

Outcome | Continuing to Accelerate Learning Cycles for Drug Development

By connecting laboratory instruments to AWS, Resilience has accelerated the transfer of key data for its research, manufacturing, and product development workflows. Scientists and business users alike have reliable access to the data they need to make key decisions, and the company intends to scale this solution further to support more research sites and instruments.

- 75+ TB uploaded to Amazon S3 to date

Overview

Despite the scientific advancements propelling cell and gene therapy development, the manufacturing technology behind these complex medicines hasn’t kept pace. Resilience is addressing this gap. The biomanufacturing company offers customized and scalable solutions that aim to produce these complex medicines faster, with less risk and increased flexibility. By centralizing vast amounts of data from diverse product areas and laboratory instruments across production sites and analyzing them for insights, Resilience is discovering ways to produce novel therapies safely and at scale.

Solution | Connecting More than 100 Laboratory Instruments from Six Research Sites to the Cloud
Resilience Builds a Global Data Mesh for Lab Connectivity on AWS

- 100+ laboratory instruments from six sites connected
- < 5 mins for scientists to access their data on the cloud
- < 3 months to build the infrastructure for data to be available in the cloud
- Encrypts data at rest and in transit

“By creating a reusable pattern that can be used across any site, we demonstrated how to connect different AWS services to build an entire data management system,” says Brian McNatt, global head for digital research and development at Resilience. “We fully intend to continue expanding our AWS data network as Resilience’s manufacturing footprint continues to grow across more sites and more key research devices.”

Using a range of offerings from Amazon Web Services (AWS), Resilience has built a globally connected system for uploading, storing, managing, and finding data from each of its research and manufacturing sites securely in the cloud. With a network of over 100 cloud-connected lab devices across six company sites, Resilience has reduced the turnaround time between experiments and insights while helping customers accelerate production.

AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises and AWS Storage services.

About Resilience

Resilience is a technology-focused biomanufacturing company dedicated to broadening access to complex medicines. Founded in 2020, the company is building a sustainable network of high-tech, end-to-end manufacturing solutions to ensure the treatments of today and tomorrow can be made quickly, safely, and at scale.

Founded in 2020, Resilience is driving innovative biomanufacturing.
It offers a range of scalable, off-the-shelf biomanufacturing modalities for gene therapies, nucleic acid synthesis, protein purification, and more for leading pharmaceutical and biotechnology companies. It also oversees a large network of instruments, including bioreactors, flow cytometers, microscopes, and genomic sequencers.

To accelerate production and decrease the time between performing experiments and generating insights, Resilience needed to build connectivity from each of its research and manufacturing sites to the cloud. With such a vast volume and diversity of data, however, building a connected data network was no simple task. “We have lots of product areas, which require an equally wide range of laboratory instruments to develop them. This creates a high degree of data heterogeneity,” says Adam Mendez, associate director for data engineering at Resilience. “We needed a robust system for data transfer that was agnostic to the data type and could quickly and securely upload the data from all lab devices to the cloud.” The company identified AWS as the optimal solution for the project due to its secure, scalable infrastructure and powerful Internet of Things (IoT) capabilities.

Learn how biomanufacturing innovator Resilience revolutionizes the way novel medicines are produced with a connected network for data transfer on AWS.

AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet.
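McNatt's "reusable pattern that can be used across any site" can be pictured as infrastructure-as-code where a new site is just new data. The sketch below is purely illustrative; the field names are invented and are not Resilience's actual schema:

```python
# Hypothetical per-site onboarding pattern: one function emits the
# transfer configuration for every instrument at a site, so bringing
# a new site online means supplying a site name and instrument list.

def site_transfer_configs(site: str, instruments: list[str], bucket: str) -> list[dict]:
    return [
        {
            "site": site,
            "instrument": inst,
            "source_path": f"/labdata/{site}/{inst}/",
            "destination": f"s3://{bucket}/{site}/{inst}/",
            "encrypt_in_transit": True,  # mirrors the PrivateLink/TLS setup
        }
        for inst in instruments
    ]

configs = site_transfer_configs(
    "site-06", ["bioreactor-01", "cytometer-02"], "resilience-lab-data"
)
print(configs[0]["destination"])  # s3://resilience-lab-data/site-06/bioreactor-01/
```

In an AWS CDK app, each emitted config would drive one stack's worth of resources (a DataSync task, alarms, and bucket prefixes), which is what makes onboarding "a matter of days" rather than bespoke work per site.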
ResMed Case Study _ AWS AppSync _ AWS.txt
- Optimized ResMed staff’s time and energy
- Improved data latency from 7 minutes to 10 seconds

After completing a proof of concept alongside the AWS team, ResMed decided to completely rearchitect its environment for myAir using cloud-native services with enhanced security features. ResMed began the implementation of AWS AppSync and additional AWS solutions in March 2020 and initiated its first rollout to the AWS Asia Pacific Region in January 2021. The company continued to implement the serverless solution in select Regions before concluding the project in July 2021, when it migrated its largest user base in the United States. From start to finish, the implementation went smoothly with support from AWS.

ResMed offers digital health solutions like AirView and myAir, which give healthcare providers and device users the ability to remotely self-monitor positive airway pressure (PAP) and ventilator treatment. Monitoring PAP and ventilator usage can help improve users’ adherence as well as clinicians’ patient management efficiency. As of December 31, 2021, myAir had over 4 million registered users who can receive personalized support, tailored coaching tips, access to therapy data, and nightly sleep scores that help them get a better night’s sleep. Additionally, over 18.5 million PAP users were remotely monitored in ResMed’s AirView solution for clinicians.

Seeking Greater Scalability with Serverless Architecture

Since then, ResMed has increased its productivity and accelerated its time to market for digital solution launches and upgrades while using AWS AppSync. “Our biggest reason for using AWS AppSync was the synchronization infrastructure that it provided,” says Stanley Kurdziel, senior engineering manager at ResMed. Now, data updates more seamlessly for users using the myAir app on multiple devices.
Using AWS AppSync, the ResMed team can be more responsive and make quick, same-day changes that would have previously taken weeks to enact, reducing the time to deploy new code by 90 percent. “Speed is a key benefit,” says Kurdziel. “We want the ability to change something quickly without difficulty. Using AWS AppSync, now we have that capacity.” After implementing AWS AppSync and other AWS solutions with myAir, ResMed has built two additional serverless products that have recently gone live. It plans to continue a serverless-first approach with all new projects in the future.

- Provides deeper insights and analytics on user engagement
- Reduced operational overhead by 80%

ResMed pioneers innovative solutions that empower people to lead healthier, higher-quality lives. Its digital health technologies and cloud-connected medical devices transform care for people with sleep apnea, COPD, and other chronic diseases.

Implementing AWS AppSync

To further accelerate its time to market, ResMed built a continuous integration and continuous delivery pipeline using AWS CodePipeline, a fully managed continuous delivery service that helps users automate their release pipelines for fast and reliable application and infrastructure updates, and AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. Implementing these fully managed solutions means that ResMed has increased staff productivity. “The amount of time and labor we’ve saved on operations means that we’ve been able to increase the number of people working on the app,” says Hickey.
“Now, everyone gets to work on building new things for the product, things that customers and users get to see and experience, rather than spending all their time on operations.”

AWS Services Used

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

- Provides more accurate insights to users using machine learning

ResMed Improves Agility and User Satisfaction Using AWS AppSync

Before adopting AWS AppSync, ResMed ran its myAir application as a monolithic application using on-premises servers. Under this model, the company faced two key challenges: the existing data center could not handle the company’s quickly growing user base, and the software that it had been using had aged poorly, creating challenges and stress for ResMed’s development and operations teams. The company believed that migrating to the cloud in a serverless architecture would provide significant benefits to its business and users.

ResMed employed AWS Lambda, a serverless, event-driven compute service that lets users run code for virtually any type of application or backend service without provisioning or managing servers. The company also adopted Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale.
ResMed selected both these solutions because they are fully managed, which empowers its developers to devote their time to innovating new features rather than troubleshooting operational issues. By reducing the server management workload of ResMed’s development team, the company can now achieve more with less effort. “Serverless solutions are really powerful and really simple to use, deploy, and manage,” says Hickey. Moreover, the company could reduce its operational overhead cost by approximately 80 percent compared with its legacy system.

ResMed turned to Amazon Web Services (AWS) solutions to scale to support more device users globally, reduce application latency, and deploy new features more quickly. To develop its myAir application, ResMed selected AWS AppSync, a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications. In conjunction with a suite of other AWS solutions, the company could use AWS AppSync to reduce operational overhead, improve the user experience, and provide more accurate and valuable insights by using machine learning.

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.

About ResMed

Digital health leader ResMed is one of the leading global providers of cloud-connected solutions for people with sleep apnea, COPD, asthma, and other chronic conditions. In 2021, ResMed helped improve the lives of over 133 million people in over 140 countries. Now, ResMed has a goal to improve 250 million lives in 2025, and it needs an agile, serverless solution to increase user satisfaction and achieve greater scalability.

Working on AWS, ResMed has improved latency for its users. “Data that used to take 7 minutes to show up for a user now arrives in less than 10 seconds,” says Hickey.
Its users not only get data more quickly, but they also have access to more of it. Using its new serverless architecture, ResMed can now perform microexperiments and determine what features and data are most beneficial to users.

“We needed the basic agility of serverless architecture as well as the ability to integrate with other services in the cloud,” says Brian Hickey, director of engineering, patient experience at ResMed. “We wanted to take advantage of those simple integrations and innovate rapidly and efficiently.”

- Reduced new code deployment time by 90%

2022

“We have years and years of runway benefit with this solution,” says Hickey. “Using AWS AppSync, we invested up front, and now we can turn around products much faster and much more economically. It was a no-brainer for us.”

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like Amazon DynamoDB, AWS Lambda, and more.
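To make the serverless backend concrete, here is a Lambda-style handler computing a nightly sleep score from device telemetry, the kind of resolver an AppSync field could invoke. The event shape and scoring weights are invented for illustration; they are not ResMed's actual algorithm or API:

```python
# Hypothetical Lambda-style resolver: compute a 0-100 nightly sleep
# score from therapy telemetry. Weights are made up for the sketch.

def handler(event: dict, context=None) -> dict:
    usage_hours = event["usageHours"]          # therapy time last night
    mask_seal = event["maskSealPct"]           # 0-100 leak-free percentage
    events_per_hour = event["eventsPerHour"]   # residual apnea events

    score = 0.0
    score += min(usage_hours / 8.0, 1.0) * 70          # usage dominates
    score += (mask_seal / 100.0) * 20
    score += max(0.0, 1.0 - events_per_hour / 10.0) * 10
    return {"sleepScore": round(score)}

print(handler({"usageHours": 8, "maskSealPct": 100, "eventsPerHour": 0}))
# {'sleepScore': 100}
```

Because such handlers are stateless, Lambda can scale them to millions of users without server management, which is the operational benefit the case study highlights.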
Respond.io Scales Its Messaging Platform and Connects 10000 Companies with Customers on AWS _ Respond.io Case Study _ AWS.txt
About Respond.io

Hassan says, “Our workflows require a sophisticated architecture, with thousands of executions running per minute. With AWS Lambda and AWS Fargate, we can manage this seamlessly, without worrying about security patching and server maintenance.”

2023

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

Respond.io is a software as a service (SaaS) platform that helps companies manage all their customer messaging in one place. For example, a retailer may receive customer support requests and sales inquiries through a variety of messaging channels. These messages are filtered into the Respond.io platform, where customer support and sales staff can address them in an organized and efficient manner.

Furthermore, users can conduct extensive searches within chat logs to understand their customers’ needs and challenges and obtain comprehensive reports that guide strategy. Business decision-makers can explore factors like average response times, problem resolution success rates, peak hours for sales inquiries, and other business-critical data.

- Customizable automations to handle communication workflows

Respond.io also provides its customers with extensive reporting features that help them glean powerful insights from the vast amount of data created through customer messaging. Its reporting module is powered by Amazon OpenSearch Service. This means customers can obtain reports in milliseconds and analyze variables, like which messaging channels their customers prefer, peak messaging times, and additional insights that guide operations and marketing strategies.
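As a toy stand-in for the kind of aggregation the OpenSearch-backed reporting module answers (preferred channels, peak messaging times), the message records below are invented sample data, not Respond.io's schema:

```python
from collections import Counter

# Invented sample messages; in production these aggregations run as
# OpenSearch queries over the full message history.
messages = [
    {"channel": "whatsapp", "hour": 9},
    {"channel": "whatsapp", "hour": 14},
    {"channel": "webchat", "hour": 14},
    {"channel": "instagram", "hour": 20},
    {"channel": "whatsapp", "hour": 21},
]

by_channel = Counter(m["channel"] for m in messages)            # channel preference
peak_hour, _ = Counter(m["hour"] for m in messages).most_common(1)[0]  # peak time

print(by_channel.most_common(1)[0][0], peak_hour)  # whatsapp 14
```

OpenSearch performs the equivalent with terms and histogram aggregations at scale, which is what keeps report latency in the millisecond range.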
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch.

Based in Kuala Lumpur, Malaysia, with offices in Hong Kong, Respond.io is comprehensive customer conversation management software that facilitates seamless marketing, sales, and support communications across instant messaging, web chat, and email.

- 2+ Billion messages stored in Amazon DynamoDB

Opportunity | Enhancing and Scaling a Powerful Business Messaging Platform

Respond.io was running its platform on a serverless architecture from a different cloud provider, but its founders quickly realized that the platform was not equipped to scale. In fact, users were experiencing chat log search latencies and system delays. To scale and continue to serve enterprises like Toyota, McDonald’s, and Decathlon, the company needed a reliable, flexible, and robust cloud provider.

Outcome | A Cutting-edge Product and a Rapidly Growing Business

Since adopting AWS, Respond.io is managing over 100 million messages per month for more than 10,000 customers, serving a range of multinational corporations. In September 2022, the company received $7 million in Series A venture funding, and executives see no limits to continued expansion.

Respond.io currently stores over 1.5 TB from 2.6 billion messages in Amazon DynamoDB, a fully managed, serverless NoSQL database.
The platform also uses Amazon Simple Storage Service (Amazon S3) with Amazon Athena to export these messages, facilitating efficient retrieval and minimal search latency. This gives users a way to quickly search and access customer chat logs, follow up on previous customer exchanges, and effectively manage marketing, sales, and support communications.

Hassan concludes, “We couldn’t have grown and survived without AWS, considering the complexity and the sheer volume of data we handle today.”

Respond.io continues to develop an innovative SaaS product that stands out in the marketplace. The product delivers efficient communication across 15 different messaging channels, and its user-friendly dashboard and customizable, no-code workflows make it easy for companies to handle multiple inquiries. The platform’s extensive reporting features and low-latency chat capabilities bring additional value, handing Respond.io’s customers a competitive edge.

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.

Respond.io is a Malaysia-based SaaS company whose business messaging platform helps organizations seamlessly manage customer communications. To enhance its platform and scale in response to growth, Respond.io migrated to AWS. Respond.io transformed its business messaging platform with AWS Serverless, expanding search capabilities to efficiently handle over 100 million messages per month.
Respond.io Scales Its Messaging Platform and Connects 10,000+ Companies with Customers on AWS

We live in an era in which consumer-facing companies cannot survive, much less thrive, without a strategic approach to communicating with customers via WhatsApp, Instagram, and other messaging applications. Companies capable of handling marketing and 1:1 conversation across major messaging channels have a strong edge over those with limited options.

Respond.io’s founders had experience with Amazon Web Services (AWS) and decided to migrate to AWS. Hassan Ahmed, CTO and cofounder of Respond.io, says, “We’re expanding our platform’s features and our user base is growing rapidly. Considering the extensive infrastructure that AWS offered, we were confident in AWS’ ability to help us scale.”

Solution | Supporting a Massive Expansion in Product Features and Customer Volume

Today, Respond.io has migrated 90 percent of its workloads to run on a serverless architecture on AWS Lambda. As a result, the companies it serves can now customize and automate workflows. Administrators can create a simple, no-code workflow that triggers an automatic response to incoming messages containing specific keywords, and they can create rules for assigning support tickets based on staff availability, time-in-queue, and many other factors. Respond.io leverages Amazon Elastic Container Service (Amazon ECS), AWS Fargate, Amazon OpenSearch Service, and Amazon DynamoDB to build a low-latency, scalable platform with robust search and reporting capabilities.
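As a toy illustration of the keyword-triggered automations and assignment rules described above (the keywords, replies, and shortest-queue rule here are invented for illustration, not Respond.io’s actual logic):

```python
# Toy sketch of a keyword-triggered workflow: auto-reply on matching
# keywords, then assign the ticket to the agent with the shortest queue.
AUTO_REPLIES = {
    "refund": "Our billing team will get back to you within 24 hours.",
    "pricing": "You can compare our plans on the pricing page.",
}

def route_message(text, agent_queues):
    """Return (auto_reply_or_None, assigned_agent) for an incoming message."""
    lowered = text.lower()
    reply = next((r for kw, r in AUTO_REPLIES.items() if kw in lowered), None)
    # Stand-in for availability/time-in-queue rules: fewest open tickets wins.
    agent = min(agent_queues, key=agent_queues.get) if agent_queues else None
    return reply, agent

reply, agent = route_message("Can I get a refund?", {"ana": 2, "ben": 0})
# agent == "ben" (shortest queue); reply is the refund auto-response
```

A real platform would evaluate these rules inside its workflow engine; the point is that both triggers and assignment reduce to simple, testable functions.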
Retain original PDF formatting to view translated documents with Amazon Textract Amazon Translate and PDFBox _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Retain original PDF formatting to view translated documents with Amazon Textract, Amazon Translate, and PDFBox

by Anubha Singhal and Sean Lawrence | on 03 JUL 2023 | in Amazon Textract, Amazon Translate, Technical How-to

Companies across various industries create, scan, and store large volumes of PDF documents. In many cases, the content is text-heavy, written in a different language, and requires translation. To address this, you need an automated solution to extract the contents within these PDFs and translate them quickly and cost-efficiently. Many businesses have diverse global users and need to translate text to enable cross-lingual communication between them. Doing this manually is slow and expensive, so there is a need for a scalable, reliable, and cost-effective solution that translates documents while retaining the original document formatting. For verticals such as healthcare, regulatory requirements mean that translated documents need an additional human in the loop to verify the validity of the machine translation. If the translated document doesn’t retain the original formatting and structure, it loses its context, which can make it difficult for a human reviewer to validate and make corrections.

In this post, we demonstrate how to create a new translated PDF from a scanned PDF while retaining the original document structure and formatting, using a geometry-based approach with Amazon Textract, Amazon Translate, and Apache PDFBox.

Solution overview

The solution presented in this post uses the following components:

Amazon Textract – A fully managed machine learning (ML) service that automatically extracts printed text, handwriting, and other data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
Amazon Textract can detect text in a variety of documents, including financial reports, medical records, and tax forms.

Amazon Translate – A neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate provides high-quality on-demand and batch translation capabilities across more than 2,970 language pairs, while decreasing your translation costs.

PDF Translate – An open-source library written in Java and published on AWS Samples in GitHub. This library contains logic to generate translated PDF documents in your desired language with Amazon Textract and Amazon Translate. It also uses the open-source Java library Apache PDFBox to create PDF documents. There are similar PDF processing libraries available in other programming languages, for example Node PDFBox.

While performing machine translations, you may have situations where you wish to preserve specific sections of text from being translated, such as names or unique identifiers. Amazon Translate allows tag modifications, which let you specify which text should not be translated. Amazon Translate also supports formality customization, which allows you to customize the level of formality in your translation output.

For details on Amazon Textract limits, refer to Quotas in Amazon Textract. The solution is restricted to the languages that can be extracted by Amazon Textract, which currently supports English, Spanish, Italian, Portuguese, French, and German. These languages are also supported by Amazon Translate. For the full list of languages supported by Amazon Translate, refer to Supported languages and language codes.

We use the following PDF to demonstrate translating the text from English to Spanish. The solution also supports generating the translated document without any formatting; the position of the translated text is maintained. The source and translated PDF documents can also be found in the AWS Samples GitHub repo.
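Amazon Textract reports each detected line’s bounding box as ratios of the page size (Left, Top, Width, Height, each between 0 and 1), with Top measured from the top edge. The geometry-based approach hinges on converting those ratios to absolute page coordinates. A minimal sketch of that conversion (in Python for brevity; the post’s own code is Java, and PDF drawing puts the origin at the bottom-left):

```python
def bbox_to_page_coords(bbox, page_width, page_height):
    """Convert a Textract-style normalized bounding box to absolute
    coordinates with a bottom-left origin, as PDF drawing expects."""
    left = bbox["Left"] * page_width
    width = bbox["Width"] * page_width
    height = bbox["Height"] * page_height
    # Textract's Top is measured from the page's top edge; PDF y grows upward.
    bottom = page_height - (bbox["Top"] * page_height + height)
    return left, bottom, width, height

# A line near the top-left of a US Letter page (612 x 792 points)
x, y, w, h = bbox_to_page_coords(
    {"Left": 0.1, "Top": 0.2, "Width": 0.5, "Height": 0.05}, 612, 792
)
```

The same arithmetic appears in the Java snippets later in the post, where `cline.left * width` and `height - height * cline.top - textHeight` position each translated line.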
In the following sections, we demonstrate how to run the translation code on a local machine and look at the translation code in more detail.

Prerequisites

Before you get started, set up your AWS account and the AWS Command Line Interface (AWS CLI). For access to any AWS services such as Amazon Textract and Amazon Translate, appropriate IAM permissions are needed. We recommend using least-privilege permissions. To learn more about IAM permissions, see Policies and permissions in IAM, as well as How Amazon Textract works with IAM and How Amazon Translate works with IAM.

Run the translation code on a local machine

This solution focuses on the standalone Java code to extract and translate a PDF document. This allows easier testing and customization to get the best-rendered translated PDF document. The code can then be integrated into an automated solution to deploy and run in AWS. See Translating PDF documents using Amazon Translate and Amazon Textract for a sample architecture that uses Amazon Simple Storage Service (Amazon S3) to store the documents and AWS Lambda to run the code.

To run the code on a local machine, complete the following steps. The code examples are available on the GitHub repo.

1. Clone the GitHub repo:

   git clone https://github.com/aws-samples/amazon-translate-pdf

2. Change into the project directory:

   cd amazon-translate-pdf

3. Run the following command to translate from English to Spanish:

   java -jar target/translate-pdf-1.0.jar --source en --translated es

Two translated PDF documents are created in the documents folder, with and without the original formatting (SampleOutput-es.pdf and SampleOutput-min-es.pdf).

Code to generate the translated PDF

The following code snippets show how to take a PDF document and generate a corresponding translated PDF document. It extracts the text using Amazon Textract and creates the translated PDF by adding the translated text as a layer to the image.
It builds on the solution shown in the post Generating searchable PDFs from scanned documents automatically with Amazon Textract.

The code first gets each line of text with Amazon Textract. Amazon Translate is used to get translated text and save the geometry of the translated text.

    Region region = Region.US_EAST_1;
    TextractClient textractClient = TextractClient.builder()
            .region(region)
            .build();

    // Get the input Document object as bytes
    Document pdfDoc = Document.builder()
            .bytes(SdkBytes.fromByteBuffer(imageBytes))
            .build();

    TranslateClient translateClient = TranslateClient.builder()
            .region(region)
            .build();

    DetectDocumentTextRequest detectDocumentTextRequest = DetectDocumentTextRequest.builder()
            .document(pdfDoc)
            .build();

    // Invoke the Detect operation
    DetectDocumentTextResponse textResponse =
            textractClient.detectDocumentText(detectDocumentTextRequest);

    List<Block> blocks = textResponse.blocks();
    List<TextLine> lines = new ArrayList<>();
    BoundingBox boundingBox;

    for (Block block : blocks) {
        if ((block.blockType()).equals(BlockType.LINE)) {
            String source = block.text();

            TranslateTextRequest requestTranslate = TranslateTextRequest.builder()
                    .sourceLanguageCode(sourceLanguage)
                    .targetLanguageCode(destinationLanguage)
                    .text(source)
                    .build();

            TranslateTextResponse resultTranslate = translateClient.translateText(requestTranslate);

            boundingBox = block.geometry().boundingBox();
            lines.add(new TextLine(boundingBox.left(),
                    boundingBox.top(),
                    boundingBox.width(),
                    boundingBox.height(),
                    resultTranslate.translatedText(),
                    source));
        }
    }
    return lines;

The font size is calculated as follows and can easily be configured:

    int fontSize = 20;
    float textWidth = font.getStringWidth(text) / 1000 * fontSize;
    float textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;

    if (textWidth > bbWidth) {
        while (textWidth > bbWidth) {
            fontSize -= 1;
            textWidth = font.getStringWidth(text) / 1000 * fontSize;
            textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;
        }
    } else if (textWidth < bbWidth) {
        while (textWidth < bbWidth) {
            fontSize += 1;
            textWidth = font.getStringWidth(text) / 1000 * fontSize;
            textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;
        }
    }

The translated PDF is created from the saved geometry and translated text. Changes to the color of the translated text can easily be configured.

    float width = image.getWidth();
    float height = image.getHeight();

    PDRectangle box = new PDRectangle(width, height);
    PDPage page = new PDPage(box);
    page.setMediaBox(box);
    this.document.addPage(page); // org.apache.pdfbox.pdmodel.PDDocument

    PDImageXObject pdImage;
    if (imageType == ImageType.JPEG) {
        pdImage = JPEGFactory.createFromImage(this.document, image);
    } else {
        pdImage = LosslessFactory.createFromImage(this.document, image);
    }

    PDPageContentStream contentStream = new PDPageContentStream(document, page,
            PDPageContentStream.AppendMode.OVERWRITE, false);

    contentStream.drawImage(pdImage, 0, 0);
    contentStream.setRenderingMode(RenderingMode.FILL);

    for (TextLine cline : lines) {
        String clinetext = cline.text;
        String clinetextOriginal = cline.originalText;

        FontInfo fontInfo = calculateFontSize(clinetextOriginal,
                (float) cline.width * width, (float) cline.height * height, font);

        // config to include original document structure - overlay with original
        contentStream.setNonStrokingColor(Color.WHITE);
        contentStream.addRect((float) cline.left * width,
                (float) (height - height * cline.top - fontInfo.textHeight),
                (float) cline.width * width, (float) cline.height * height);
        contentStream.fill();

        fontInfo = calculateFontSize(clinetext,
                (float) cline.width * width, (float) cline.height * height, font);

        // config to include original document structure - overlay with translated
        contentStream.setNonStrokingColor(Color.WHITE);
        contentStream.addRect((float) cline.left * width,
                (float) (height - height * cline.top - fontInfo.textHeight),
                (float) cline.width * width, (float) cline.height * height);
        contentStream.fill();

        // change the output text color here
        fontInfo = calculateFontSize(clinetext.length() <= clinetextOriginal.length()
                        ? clinetextOriginal : clinetext,
                (float) cline.width * width, (float) cline.height * height, font);
        contentStream.setNonStrokingColor(Color.BLACK);
        contentStream.beginText();
        contentStream.setFont(font, fontInfo.fontSize);
        contentStream.newLineAtOffset((float) cline.left * width,
                (float) (height - height * cline.top - fontInfo.textHeight));
        contentStream.showText(clinetext);
        contentStream.endText();
    }
    contentStream.close();

The following image shows the document translated into Spanish with the original formatting (SampleOutput-es.pdf). The following image shows the translated PDF in Spanish without any formatting (SampleOutput-min-es.pdf).

Processing time

The employment application PDF took about 10 seconds to extract, process, and render as a translated PDF. Processing a text-heavy document such as the Declaration of Independence PDF took less than a minute.

Cost

With Amazon Textract, you pay as you go based on the number of pages and images processed. With Amazon Translate, you pay as you go based on the number of text characters that are processed. Refer to Amazon Textract pricing and Amazon Translate pricing for actual costs.

Conclusion

This post showed how to use Amazon Textract and Amazon Translate to generate translated PDF documents while retaining the original document structure. You can optionally postprocess Amazon Textract results to improve the quality of the translation; for example, extracted words can be passed through ML-based spellchecks such as SymSpell for data validation, or clustering algorithms can be used to preserve reading order.
You can also use Amazon Augmented AI (Amazon A2I) to build human review workflows where you can use your own private workforce to review the original and translated PDF documents to provide more accuracy and context. See Designing human review workflows with Amazon Translate and Amazon Augmented AI and Building a multi-lingual document translation workflow with domain-specific and language-specific customization to get started.

About the Authors

Anubha Singhal is a Senior Cloud Architect at Amazon Web Services in the AWS Professional Services organization.

Sean Lawrence was formerly a Front End Engineer at AWS. He specialized in front end development in the AWS Professional Services organization and the Amazon Privacy team.
Return Entertainment Case Study.txt
Return Entertainment was founded by gaming industry veterans in 2019. Then entire nations shut down in response to the COVID-19 pandemic, and the startup had to adjust like everyone else. “When everybody suddenly had to stay home and travel was no longer possible, we still needed to demonstrate our games to investors and partners,” says Tuomas Paavola, chief technology officer. Return Entertainment began sending links to potential partners so that they could try its games in the cloud and discovered that it led to increased productivity, collaboration, and innovation. “That’s when we thought, Why not go fully cloud native ourselves? It got us thinking about creating games that could only exist in the cloud, that could fully use cloud-native possibilities—things that wouldn’t be possible with existing services,” says Paavola.

Also crucial to Return Entertainment’s development are the monitoring and observation capabilities that AWS services provide. Using Amazon CloudWatch, a service that collects data in the form of logs, metrics, and events, the company can monitor its applications and optimize its resource use. “This is a new field, and no one knows yet how players behave in this environment,” says Juha Suihkonen, lead architect at Return Entertainment. “It’s critical for us to get data. But it’s data that nobody has, so we have to forge our own path.”

Using the power of AWS, Return Entertainment is working to lower the boundaries of gaming. Its cloud-native games are designed to be played with others regardless of distance, system, or device—all through one click of a link. The variety, versatility, and reliability of AWS offerings, combined with collaborative support from an AWS solutions architect, empower the startup to explore new horizons in cloud-native gaming.
2022

Return Entertainment Speeds Up Development of Cloud-Native Games Using AWS

Opportunity | Building Innovative Cloud-Native Gaming

Running on the cloud makes interactivity simple, but all the cloud computing that the company required needed a powerful server. “We started out with dedicated services at a local hosting company,” says Paavola, “but we quickly figured out that we needed something scalable.” That was when Return Entertainment chose AWS to turn its goals of cloud-native gaming into reality.

Solution | Building Innovative Cloud-Native Gaming

Migrating to AWS was a practical choice for Return Entertainment for several reasons. First was familiarity: the whole Return Entertainment team already knew AWS, having had positive experiences with its offerings through previous work at other companies. AWS also offered the coverage that the startup needed. Amazon EC2 could provide the scalability that Return Entertainment needed to make cloud gaming accessible worldwide. “AWS has global reach, so we could get GPU machines close to our players,” says Paavola. “Using AWS is cost effective because we can serve the games up fast to the players when they want them, whether in the daytime or evening, on weekdays or weekends.”

Optimizes resource use through Amazon CloudWatch

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Return Entertainment can deliver global gaming efficiently because the startup has no servers to maintain on its own.
For custom backend functionalities in its games, the company uses Amazon DynamoDB, a fully managed, serverless database service, and AWS Lambda, a serverless, event-driven compute service that can run code for virtually any type of application or backend service without provisioning or managing servers. Using these serverless solutions makes Return Entertainment’s game development faster and its operations simpler. Further, the company is ready to scale up or down as needed with ease.

Outcome | Exploring the Potential of the Cloud Using AWS

“It’s been very helpful to have a solutions architect who can double-check our designs and our configurations of products and services to make it all work together,” says Emil Kaidesoja, an engineer at Return Entertainment. Based on feedback from the solutions architect, Return Entertainment chose serverless architecture and developed the first version of its cloud-native gaming infrastructure in just a few months.

The support provided by AWS was another important factor in Return Entertainment’s choice to adopt AWS solutions. The company’s engineers work alongside an AWS game tech solutions architect who can quickly respond to the team’s needs, give prescriptive guidance or demonstrations, and collaborate to help accelerate game development.

Shortly after its founding, Helsinki-based gaming developer Return Entertainment realized the potential of going fully cloud native. Rather than developing games for existing cloud gaming services as originally intended, the company became one of the first to design innovative games directly in the cloud. The founders recognized this would be the best way to test the cloud’s limits, harness its powers instantly, and make new forays in gaming development.
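The serverless backend pattern described earlier, AWS Lambda fronting game logic with DynamoDB for state, can be sketched with a minimal handler. This is an illustrative toy, not Return Entertainment’s code; a real handler would persist the record to DynamoDB rather than echo it back:

```python
import json

def lambda_handler(event, context):
    """Toy game-backend endpoint: validate and acknowledge a score
    submission. Illustrative only; a real handler would write the
    record to DynamoDB."""
    body = json.loads(event.get("body") or "{}")
    player = body.get("player")
    score = body.get("score", 0)
    if not player:
        return {"statusCode": 400,
                "body": json.dumps({"error": "player is required"})}
    return {"statusCode": 200,
            "body": json.dumps({"player": player, "score": score})}

# Invoke locally with an API Gateway-style event
resp = lambda_handler({"body": json.dumps({"player": "p1", "score": 42})}, None)
```

Because the handler is a plain function of an event dict, it can be exercised locally before deployment, which suits the rapid iteration the startup describes.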
About Return Entertainment

Helsinki-based cloud-native gaming company Return Entertainment aims to transform the gaming industry through its use of the cloud. Its games can be played on any device and require no downloads or installations, making them accessible, simple, and fun for everyone.

Low latency is also key to Return Entertainment’s ability to deliver a seamless gaming experience. Using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience, Return Entertainment can serve cloud content with low latency to create an ideal experience for gamers.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Securely deliver content with low latency and high transfer speeds.

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.

As the startup pursued innovative cloud-native game development, it looked to Amazon Web Services (AWS) for the tools that it needed.
Return Entertainment could build cloud-native games using GPU instances from Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable capacity for virtually any workload. With Amazon EC2 and powerful components from a host of other AWS offerings, Return Entertainment saved both time and money. These savings gave Return Entertainment’s designers the freedom to focus on creating the interactive games that people want to play.

When everything works well, all this cloud gaming power might go unnoticed by gamers. “In the end, users don’t care much about what’s underneath the game, as long as the game works,” says Antti Sartanen, CEO of Return Entertainment. “You shouldn’t even know or care that the games are in the cloud. They’ll just be the most shareable, fun games you can play.” But for designers, harnessing the power of AWS for next-generation game experiences makes all the difference.

Learn how Return Entertainment built cloud-native gaming infrastructure in a few months, and reduced time and cost using AWS.
Revive lost revenue from bad ecommerce search using Natural Language Processing _ AWS for Industries.txt
AWS for Industries

Revive lost revenue from bad ecommerce search using Natural Language Processing

by Aditya Pendyala and Siddharth Pasumarthy | on 30 MAY 2023 | in Amazon Comprehend, Amazon Kendra, Amazon OpenSearch Service, Amazon Textract, CPG, Industries

Ecommerce sites are supposed to be prompt, precise, and above all, user-friendly. Yet their search performance history reveals an unsatisfactory reality for shoppers and retailers. According to Baymard Institute, “61% of all ecommerce sites show search results that are misaligned to users’ searches,” forcing shoppers to either enter a new search or abandon their old one entirely. “The frustration involved in the overall product search experience results in an unacceptable level of churn and burn, about 68%,” says Forrester. With Gen Z demanding faster (and more accurate) search results, ecommerce companies are feeling the pressure to modernize their search, but few are choosing to act on it. Those who make this mistake run the serious risk of falling behind their competitors, not just in innovation, but in sales too. In this blog, we’ll discuss why keyword-based searches are burning a hole in retailers’ pockets and how Amazon Web Services (AWS) can help ecommerce companies earn it back with natural language processing (NLP).

Challenges with keyword-based searches

Not all online shoppers will use the search bar during their shopping experience, but nearly fifty percent do. In its 2022 roadmap report “Must-Have E-Commerce Features,” Forrester found that “43% of users on retail websites go directly to a search bar when they first land on a website.” This makes prioritizing search results even more important when keeping a customer engaged. Doing so is a lot easier said than done, because most search engines don’t understand natural language.

Let’s say you’re looking for a red dress shirt. You pull up your favorite website and type “men’s red dress shirt” into the search bar.
Once you do this, the search engine works to understand what you’ve just written. However, because keyword-based search engines only understand keywords as individual terms, any input outside of this can trigger a misaligned search result. Instead of getting results for a red dress shirt, the search engine might return results for dresses or shirts, not a “dress shirt.” For this to change, the search engine needs to understand the search as one term. In other words, it needs to understand the intent of the user.

Common challenges for keyword-based searches are typos, synonyms and regional dialects, feature-based searches, filter-based searches, context-based searches, and thematic searches.

Typos: This is when someone accidentally misspells a word in their search. For example, entering “sweeter” as opposed to “sweater.”

Synonyms and regional dialects: This is when a user searches for a word that can have a different, regional meaning. For example, someone might search “shades” instead of “sunglasses” and get completely different results.

Example: multi-billion-dollar retailer – search results for searching “mens shades” instead of “mens sunglasses”

Feature-based search: This is when a user wants to search for a product with a specific feature. For example, one might search “strap sandal.” Keyword-based search engines can only understand keywords, not the intent of the user. Even though sandal and strap are used in the product description, the search engine doesn’t identify the search and returns zero results.

Filter-based search: This is when a user is looking for a particular quality in an item. For example, earrings under 30, blue socks, polyester upholstery covers, and more.

Example: multi-billion-dollar retailer – search results showing unrelated items from a search request for “Earrings Under 30”

Context-based search: This is when a user searches for something based on context, not a specific product. For example, someone might search “drafty window fix” or “cold remedy” to see what products come up within the search. Context-based searches are the most challenging for retailers because, oftentimes, users are searching for keywords that don’t even exist, resulting in zero returns or zero relevant returns.

Thematic search: This is when a user is searching for a product within a thematic category. For example, someone looking for a specific type of rug might search “hallway rug,” as opposed to simply “rug.”

Example: multi-million-dollar retailer – search results showing unrelated items from searching “hallway rug” instead of “rug”

“From a user’s point of view, these everyday descriptions are just as correct as the industry jargon, and most of the participants during large-scale testing never thought of trying another synonym when they received poor search results,” states Baymard Institute. “Instead, participants simply assumed that the poor or limited results were the site’s full selection for such products.”

Don’t burn a hole in your pocket

For shoppers and retailers, these issues are frustrating and taint the overall quality of a shopping experience. However, for retailers, the impacts of these issues are two-fold, negatively affecting both their customers’ experiences and their company’s financials. If shoppers can’t find the product they’re looking for, retailers can lose out on revenue, a lot of revenue. Just look at the numbers. According to a study by Econsultancy, the average ecommerce conversion rate is 2.77%. But when shoppers use the search bar and find what they are looking for, the average conversion increases to a rate of 4.63%. That’s nearly double the average ecommerce conversion rate. If searched on Amazon.com, this number increases even more. Every time someone searches on Amazon.com and finds what they’re looking for, the conversion rate increases by 6x. So, what was once a conversion rate of 2% becomes 12%.
If we translate these percentages into revenue, this is a huge financial jump for ecommerce companies. How can AWS help refine your ecommerce search? AWS offers artificial intelligence and machine learning (AI/ML) services like Amazon Comprehend, Amazon Kendra, Amazon Textract and Amazon OpenSearch Service that together can be used to improve ecommerce search capabilities. Amazon Comprehend is a natural language processing service that uses machine learning to find meaning, insights and connections in text. This service equips your search engine to index key phrases, entities and sentiment to improve search performance. Amazon Comprehend learns over time, uncovering valuable insights from text in documents, customer support tickets, product reviews, emails, and social media feeds. With Amazon Comprehend, users can: Mine business and call center analytics: Extract insights from customer surveys to improve your products. Index and search product reviews: Focus on context by equipping your search engine to index key phrases, entities, and sentiment, not just keywords. Amazon Kendra is an ML based intelligent search engine that understands natural language. This intelligent enterprise search service helps you search across different content repositories with built-in connectors, giving users highly accurate answers without the need for machine learning expertise. Amazon Textract is a ready-to-use ML service that automatically and accurately extracts text, handwriting and data from scanned documents with no manual effort. Across industries, Amazon Textract can be used to keep data organized and in its original context, as well as eliminate manual review of output. Amazon OpenSearch Service is an open source, distributed search and analytics suite that enables you to perform interactive log analytics, near real-time application monitoring, and website search. 
With OpenSearch Service, users can quickly find relevant data with a fast, personalized search experience within your applications, websites, and data lake catalogs.

Conclusion

Even with billions of dollars in sales, retailers are still losing out on revenue thanks to poor search performance capabilities. However, it doesn’t have to be that way. When used together, AWS services like Amazon Comprehend, Amazon Kendra, Amazon Textract, and Amazon OpenSearch Service can help eliminate this problem. They can create a powerful, improved search experience so retailers can finally focus on lifting revenue, not lowering it.

Discover ways you can improve retail search performance and start boosting revenue with AWS AI/ML services. Learn more about AWS for consumer packaged goods (CPG) or contact an AWS Representative.

Further Reading

Building Blocks for Modern Retail Ecommerce and Media Search with AWS
Tech Analysis with Amazon OpenSearch Service and Amazon Comprehend
Building an NLU-powered search application with Amazon SageMaker and Amazon OpenSearch Service KNN feature

TAGS: aws, eCommerce, Natural Language Processing (NLP)

Aditya Pendyala
Aditya is a Senior Solutions Architect at AWS based out of NYC. He has extensive experience in architecting cloud-based applications. He is currently working with large enterprises to help them craft highly scalable, flexible, and resilient cloud architectures, and guides them on all things cloud. He has a Master of Science degree in Computer Science from Shippensburg University and believes in the quote “When you cease to learn, you cease to grow.”

Siddharth Pasumarthy
Siddharth is a Solutions Architect based out of New York City. He works with enterprise retail customers in the fashion and apparel industry, to help them migrate to cloud and adopt cutting edge technologies. He has a B.S. in Architecture from the Indian Institute of Technology and an M.S. in Information systems from Kelley School of Business.
In addition to keeping up-to-date with technology, he is passionate about the arts, and creates still life acrylic paintings in his free time.
Revolutionizing Manufacturing with Sphere and Amazon Lookout for Visions XR and AI Integration _ AWS Partner Network (APN) Blog.txt
AWS Partner Network (APN) Blog

Revolutionizing Manufacturing with Sphere and Amazon Lookout for Vision’s XR and AI Integration
by Arun Nallathambi, Colin Yao, and Alexandra Corey | on 13 JUL 2023 | in Amazon Lookout for Vision, Artificial Intelligence, AWS Marketplace, AWS Partner Network, Case Study, Customer Solutions, Industries, Intermediate (200), Manufacturing, Thought Leadership

By Arun Nallathambi, Sr. Partner Solutions Architect – AWS
By Colin Yao, CTO – Sphere
By Alexandra Corey, Head of Marketing – Sphere

Sphere and Amazon Lookout for Vision are revolutionizing the way that high-value equipment and machines are assembled, maintained, and operated. By combining extended reality (XR) with artificial intelligence (AI), the integration gives manufacturing customers a cutting-edge tool to uncover process issues, identify missing components, detect damaged parts, and more. In this post, we will explore use cases in which the enhanced training procedures and advanced analytics afforded by Sphere and Amazon Lookout for Vision can be applied to real-world scenarios.

Sphere is an AWS Partner and AWS Marketplace Seller that’s an immersive collaboration developer and provider, supporting enterprise teams in boosting their bottom line through XR. Sphere is used by leading businesses that are looking to increase productivity, optimize supply chain operations, connect workers worldwide, and reduce errors, safety risks, and environmental footprints.

Sphere Overview

Sphere is device-agnostic, working with the market’s widest range of augmented, virtual, and assisted reality headsets. It also operates on smartphones, tablets, and PCs. In addition, it’s agnostic across conferencing tools, as well as leading enterprise resource planning (ERP), product lifecycle management (PLM), and customer relationship management (CRM) software.
Heavily adopted by the manufacturing, automotive, healthcare, and defense sectors, Sphere’s turnkey solution provides tools for workforce collaboration, enhanced training, access to remote experts, and holographic build planning. Each of Sphere’s add-on packages, including Sell, Connect, Build, and Train, is offered in a single, streamlined platform. Sphere’s integration with Amazon Lookout for Vision is an extension to the company’s Train package.

Sphere Train enables immersive guidance in the training, operation, and maintenance of critical equipment and machines. Workflows included in the package consist of a sequence of steps that each contain text instruction, along with optional spatial indicators featuring reference assets and operator actions. Sphere supports 60+ file types, enabling users to bring any media content into XR. These include CAD models, multiple document types, video and audio files, and more. Workflows are automatically saved, generating a report that provides valuable operational insight.

Figure 1 – Operator connecting and collaborating to get expert help in XR environment.

Benefits of Amazon Lookout for Vision

Amazon Lookout for Vision is a cloud-based machine learning (ML) service offered by Amazon Web Services (AWS) that enables you to create and train computer vision models to analyze images. Customers use these models to detect anomalies at scale, such as detecting damaged parts, identifying missing components, uncovering process issues in a setup, and using these visuals to take corrective actions. Amazon Lookout for Vision enables customers to easily and quickly create ML models with the goal of preventing avoidable downtime and reducing supply chain disruptions. Organizations in manufacturing, healthcare, and more use Amazon Lookout for Vision to build efficient image-based inspection processes that are more scalable, reliable, faster, and reduce manual labor dependency.
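A Sphere Train workflow, as described earlier, is a sequence of steps, each carrying a text instruction plus optional reference assets, and each workflow generates an operational report when saved. A minimal sketch of how such a structure might be modeled (the class and field names here are illustrative assumptions, not Sphere’s actual schema):

```python
# Hypothetical model of a guided-workflow step sequence, as described above.
# All class and field names are illustrative; this is not Sphere's real schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WorkflowStep:
    instruction: str                       # text shown to the operator
    reference_asset: Optional[str] = None  # e.g. a CAD model or video file
    requires_photo_check: bool = False     # step verified via image analysis
    completed: bool = False

@dataclass
class Workflow:
    name: str
    steps: List[WorkflowStep] = field(default_factory=list)

    def complete_step(self, index: int) -> None:
        self.steps[index].completed = True

    def report(self) -> dict:
        """Workflows are saved automatically, generating an operational report."""
        done = sum(1 for s in self.steps if s.completed)
        return {"workflow": self.name, "steps_total": len(self.steps),
                "steps_completed": done}

wf = Workflow("Mount setup", [
    WorkflowStep("Place pins on the holding apparatus",
                 reference_asset="mount.cad", requires_photo_check=True),
    WorkflowStep("Run the measurement procedure"),
])
wf.complete_step(0)
print(wf.report())  # {'workflow': 'Mount setup', 'steps_total': 2, 'steps_completed': 1}
```

Marking a step with a photo-check flag is where an image-analysis service such as Amazon Lookout for Vision would plug in, as the rest of this post describes.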
Powering Precision with Sphere and Amazon Lookout for Vision

Sphere’s integration with Amazon Lookout for Vision amplifies critical XR use cases to support machine maintenance, uptime, and worker effectiveness. The platform is deployed in real-world environments, generating return on investment (ROI) through manufacturing risk reduction using XR combined with AI functionality. By contrasting expected results with actual outputs during Sphere-powered workflows, the integration enables enterprises to move from a retroactive review of completed work to on-demand feedback and verification. Real-time error avoidance saves Sphere customers millions of dollars annually.

Example: Combining XR with AI

Let’s review an example to help illustrate the integration of Sphere and Amazon Lookout for Vision. As part of the mounting procedure for a precision measurement machine, pins must be placed in extremely specific positions on the holding apparatus. Like all applied AI/ML applications, the solution begins with data. Specifically, we use image data of “normal” expected results, as well as images of “defects” or “anomalies.” Image samples are collected featuring both normal and anomalous cases, and then fed into Amazon Lookout for Vision. In this context, training a model is simple and requires a limited sample to get started.

Figure 2 – Mount piece for precision measurement machine.

Amazon Lookout for Vision allows us to train models for specific scenarios in a powerful way. Not only can customers create models that recognize whether the pins are in the correct place, they can also extend them to tell them specifically which pins are misplaced. Amazon Lookout for Vision allows users to create classification models that determine whether an anomaly is present in the input image. This scenario can be thought of as a straightforward pass or fail.
However, this can be taken a step further by training image segmentation models, which give the location of an anomaly in the image through semantic segmentation. Although segmentation requires more input data and training, the contextual information can be extremely useful.

Once a model is trained, it can be reused continuously to help technicians and operators increase the accuracy of their work. Onsite employees can put on their XR headset and begin the step-by-step procedure that guides them through the setup for the precision measurement machine. With Sphere’s XR solution, the user is spatially guided through the process and receives cues as to where they need to take action, as well as key points of interest to keep in mind.

Figure 3 – Operator following instruction and workflow in XR environment.

The operator arrives at a step that requires them to set up the mounting apparatus. Once they feel the work has been conducted correctly, they can capture a photo using Sphere which, together with Amazon Lookout for Vision, automatically verifies whether the step was precisely completed. Sphere allows all of the above to be conducted safely and efficiently, while remaining hands-free and unencumbered.

What Amazon Lookout for Vision provides is a confidence interval, which can be combined with Sphere to build complex workflows with configurable conditions for acceptable quality. If the setup is done correctly, the operator can move forward with running the measurement procedure. If not, and the confidence is low, Sphere will prompt the user to double-check pin placement and otherwise provide guidance as to which pins are specifically misaligned. Alternatively, if confidence lies in a gray zone, it may suggest the operator use Sphere to call a remote expert and get a second opinion before continuing.

Figure 4 – Amazon Lookout for Vision powers Sphere to conduct quality check on XR space.
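The confidence-based branching described above (proceed on high confidence, recheck on low, call a remote expert in the gray zone) can be sketched as a simple decision rule. The thresholds below are hypothetical illustrations; in practice they would be configured per workflow:

```python
# Hypothetical three-zone decision rule over an anomaly-detection confidence
# score (0.0-1.0), mirroring the pass / gray-zone / fail branching described
# above. The 0.9 and 0.6 thresholds are illustrative, not product defaults.

PASS_THRESHOLD = 0.90    # high confidence the setup is correct: proceed
EXPERT_THRESHOLD = 0.60  # gray zone: suggest calling a remote expert

def next_action(normal_confidence: float) -> str:
    if normal_confidence >= PASS_THRESHOLD:
        return "proceed"             # run the measurement procedure
    if normal_confidence >= EXPERT_THRESHOLD:
        return "call_remote_expert"  # get a second opinion before continuing
    return "recheck_pins"            # prompt the operator to fix placement

print(next_action(0.95))  # proceed
print(next_action(0.75))  # call_remote_expert
print(next_action(0.30))  # recheck_pins
```

The same structure extends naturally to per-region results from a segmentation model, so guidance can name which pins are misaligned rather than just failing the step.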
Through the standard usage of Sphere, combined with Amazon Lookout for Vision, these recognition models improve over time with increased input. Verification attempts are reused to offer more training data beyond the initial training dataset. By creating this continuous feedback loop, Sphere allows companies to further refine the models, adapt them to their changing requirements, and account for temporal deviations that may present themselves.

Case Study: Micron’s Deployment of the Solution

Micron Technology, a Sphere customer as well as investor, uses the platform to provide frontline workers the necessary tools for improving business efficiency. For Micron, access to digitized training functionality with paperless reporting is a step in the right direction when it comes to standard operating procedure (SOP) compliance traceability. However, work performance oversight is just one piece of the puzzle, as it doesn’t prevent process mistakes in the first place. Errors are often paired with costly consequences requiring rework and retroactive corrections, all of which is avoidable if flagged sooner.

Sphere has allowed Micron to increase machine availability by 2% and save over 3,000 hours of machine downtime annually. With Sphere plus Amazon Lookout for Vision, Micron gains real-time insight into whether a job is being performed correctly, allowing operators to act immediately if something goes wrong.

“For Micron, Sphere is a critical component of business continuation,” says Ning Khang Lee, Director of Smart Manufacturing and AI at Micron. “We use Sphere to connect multinational teams, effectively train workers, and give ourselves an operational edge in the competitive semiconductor market.”

Many of Micron’s procedures require complete hands-free usage, making Sphere’s XR solution a natural fit. For example, complex machine maintenance involves many physical steps which must be conducted by a technician in the correct order.
Moving away from the machine to check instructions in a booklet or on a computer is inefficient, unsafe, and can easily lead to errors that result in significant disruptions to the supply chain. Sphere’s Train package allows the technician to remain focused on the task as they’re guided by detailed, holographic workflow steps that are anchored to the appropriate region of the machine. Amazon Lookout for Vision harnesses AI to add an even further layer of risk reduction.

Conclusion

The manufacturing industry is being revolutionized by the introduction of extended reality (XR) and AI technologies, which have brought about numerous benefits in terms of efficiency and risk reduction. By combining Sphere’s productivity and collaboration platform with Amazon Lookout for Vision’s ability to train and continuously reuse models, the integration provides a streamlined solution for customers to improve SOP compliance, reduce machine downtime, and eliminate costly errors. You can learn more about Sphere in AWS Marketplace.

Sphere – AWS Partner Spotlight

Sphere is an AWS Partner and immersive collaboration developer and provider which supports enterprise teams in boosting their bottom line through extended reality (XR).

Contact Sphere | Partner Overview | AWS Marketplace
Rivian Case Study _ Automotive _ AWS.txt
Rivian Executes Vision of Agile Engineering on AWS (2021)

About Rivian
Rivian is an electric vehicle maker and automotive technology company. It designs and manufactures vehicles and offers services related to sustainable transportation.

Benefits of AWS
- Increased software speed by up to 66%
- Improved availability of compute resources
- Enabled collaboration through shared storage
- Reduced need for physical prototypes

To meet accelerated engineering schedules and reduce the need for physical prototypes, electric vehicle manufacturer Rivian pushes the pace of automotive innovation with AWS. In 2020, Rivian found that its on-premises research and development information technology infrastructure could not keep up with its performance needs. Resource bottlenecks affected product lifecycle management, computer-aided design, and computer-aided engineering, so Rivian began using Amazon Web Services (AWS) to architect an agile engineering environment.

Accelerating Innovation with Efficient Compute
Rivian depends on computer-aided engineering tools to extend vehicles’ range and maintain high safety standards. But in early 2020, one of the company’s on-premises high-performance computing clusters failed, reducing its compute capacity by half. Rivian looked to the cloud to overcome this challenge. “Our engineers expected the fix to take 6 months,” says Madhavi Isanaka, Rivian’s chief information officer. Instead, Rivian built a new compute cluster on AWS. “In 3 weeks, we had a working proof of concept on AWS,” says Isanaka. After that success, the company migrated its production environments. In the cloud, Rivian’s engineers can access and automate resources on demand.

Rivian relies on advanced modeling and simulation techniques. Using high compute capacity, simulations enable engineers to test new concepts and bring their designs to market quickly. On AWS, the speed of Rivian’s software tools has improved by up to 66 percent, and Rivian can load a full vehicle bill of materials in 22 minutes. The company uses Amazon EC2 C5 Instances, which deliver cost-effective high performance at a low price per compute ratio. By using Amazon EC2 C5n Instances and Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances, Rivian’s engineers can scale out to a larger number of cores. Using Amazon FSx for Lustre, a fully managed storage service, Rivian can access shared storage quickly.

Using the Breadth of AWS Services
After consulting AWS Professional Services, a global team of experts, Rivian improved data availability using Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale a relational database in the cloud. Backup synchronization, which before took up to 1 day, now takes less than 1 hour. “As Rivian grows at a rapid pace, we need a highly scalable system,” says Surendra Balu, Rivian’s 3DExperience technical lead. “Changes that took 5 days now occur within minutes.” And using AWS CloudFormation, which enables users to speed up cloud provisioning, Rivian can deploy automatically through continuous integration / continuous delivery. On AWS, interaction with product lifecycle management has increased 66 percent. Rivian also improved failover using Amazon EC2 Auto Scaling, which helps users maintain application availability.

Optimizing for Efficiency and Innovation
X-ISS provides system and application technical support to Rivian’s computer-aided engineering team for Scale-Out Computing on AWS, which helps customers deploy and operate multiuser environments. “In early product development stages, we don’t have many physical vehicles, so we use AWS to bring the design space to life,” says Isanaka.

Rivian plans to continue migrating workloads to AWS, enabling more seamless postprocessing and visualization. “People who were skeptical about high-performance computing in the cloud are more open minded after seeing our results on AWS,” says Isanaka. “This is accelerating adoption across the board.”

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data through high-performance shared storage.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. We work together with your team and your chosen member of the AWS Partner Network (APN) to execute your enterprise cloud computing initiatives.
Rumah Siap Kerja (RSK) Case Study - Amazon Web Services (AWS).txt
Rumah Siap Kerja Pivots to a Cloud-based E-Learning Platform in 2 Months on AWS (2022)

About Rumah Siap Kerja (RSK)
Rumah Siap Kerja (RSK) is an Indonesia-based social enterprise that provides professional and entrepreneurship training, and career coaching services. Founded in 2019, RSK was established wholly offline with face-to-face trainings, working with more than 50 trainers across a range of skill sets, expertise, and industries.

Benefits of AWS
- Designed, built, and deployed a cloud-based LMS within 2 months
- Can automatically resize compute capacity to handle up to 20,000 concurrent users per day
- Reduced data transfer charges by 50 percent

Supporting RSK’s Pivot to Cloud-based Training
During the COVID-19 pandemic, Indonesia experienced its highest level of unemployment in nearly a decade, and demand for online professional training surged. RSK realized it needed to pivot its business and build an e-learning platform to deliver its courses and programs. RSK turned to the cloud to flexibly adapt to changing pandemic conditions without overcommitting budgets. It chose to work with Amazon Web Services (AWS) as the organization already had a good experience hosting its website on the AWS Cloud. In 2020, RSK began working with Elitery, an AWS Advanced Tier Services Partner, to set up its e-learning platform on the AWS Cloud.

RSK was able to design, build, and deploy a full-fledged Learning Management System (LMS) in just 2 months. RSK successfully grew its user base by up to 300 percent within a year, and delivered more than 3,700,000 hours of training to at least 500,000 users. As of 2022, RSK has over 1,496 courses on the platform, with 2,000 training videos equivalent to a total of over 3,700,000 viewing hours.

With its LMS in the cloud, RSK could utilize pre-recorded videos and video conferencing apps to train its members virtually. The new LMS features built-in assessment tools, which give RSK’s trainers and users a comprehensive view of the entire learning journey. RSK also uses the LMS to centrally manage training, tracking students, and reporting analytics, saving them up to 10 hours/week on administrative tasks.

Going Serverless to Improve End-User Experience
RSK deployed its LMS on Amazon Elastic Compute Cloud (Amazon EC2) and Amazon EC2 Auto Scaling to grow or shrink compute capacity depending on demand. While the platform handles about 500 concurrent users on average, it can sometimes reach as many as 20,000 concurrent users. With a scalable cloud-based infrastructure, RSK can deliver consistent, high-quality training video content, even during traffic spikes.

RSK runs Amazon Relational Database Service (Amazon RDS) to automate time-consuming administration tasks, such as hardware provisioning, patching, and backups. This means RSK’s IT team now spends less time on infrastructure maintenance and can redirect its focus to developing new products and improving features. These include RSK’s mobile app, which was launched in 2022. The app complements its online LMS, allowing users to attend training sessions on the go, or to seek career coaching services. In 2022, RSK also introduced an entrepreneurship training course via its mobile app that helps aspiring entrepreneurs start their own businesses.

Achieving Cost Reductions and Improved Customer Service
In 2021, on AWS’s recommendation, RSK integrated Amazon CloudFront with Amazon Simple Storage Service (Amazon S3) to deliver its video content. Built for high performance and security, the content delivery network service has helped RSK halve data transfer charges.

RSK also uses Amazon Simple Email Service (Amazon SES) to support its user registration process and marketing campaigns. Previously, the system was unable to process more than 5,000 registrations/day, leading to user validation errors during registration. Using Amazon SES, RSK can quickly scale and has not encountered issues with validation errors since.

“With AWS, everything from security to scalability is built-in and fully managed in the cloud. This lets us focus on delivering high-quality, high-value education to our users. AWS and Elitery have supported us in completely transforming our business model during the pandemic, as well as sustaining our subsequent business growth,” shared Risyad, head of IT at RSK.

Looking ahead, RSK plans to adopt Amazon Aurora to power its performance-intensive applications in a serverless, fully managed database environment. This hands-off approach to capacity management will allow RSK to focus on expanding its suite of products and features, thus creating a more engaging and comprehensive learning experience for its users.

AWS Services Used
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
Amazon Simple Email Service (SES) lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system.

To learn more, visit https://aws.amazon.com/education.
Run Jobs at Scale While Optimizing for Cost Using Amazon EC2 Spot Instances with ActionIQ _ ActionIQ Case Study _ AWS.txt
Headquartered in New York City, ActionIQ operates a CDP for business, marketing, and analytics that operates on a software-as-a-service model. It helps companies derive business intelligence using data that they already own to improve customer engagement and drive revenue. Previously, ActionIQ ran its solution using Amazon EC2 Reserved Instances, which provide a significant discount compared with On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. “This compute system is used by every team in the company for data processing,” says Mitesh Patel, tech lead at ActionIQ. “If our system is not running, our teams cannot meet their customers’ SLAs.” Français Optimizes Using this solution, ActionIQ has significantly optimized its compute costs. The hourly price for Spot Instances is $1.93 per hour compared to the cost of Reserved Instances, which was $3 per hour. ActionIQ runs anywhere between 10–500 machines at any given time, and by adopting Spot Instances, it has unlocked significant cost savings. Like with On-Demand Instances, ActionIQ pays only for the capacity it uses when using Spot Instances, instead of having infrastructure always running. This benefit has further optimized its costs. Additionally, ActionIQ has an AWS Savings Plan in place, which reduces costs for workloads that cannot be interrupted. 2023 ActionIQ saw an opportunity to optimize for both scale and cost by choosing a different pricing option on Amazon Web Services (AWS). The company adopted Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which run fault-tolerant workloads at up to a 90 percent discount compared with Amazon EC2 On-Demand Instances, which let companies pay for compute capacity by the hour or second. By making this change, ActionIQ has reduced its compute costs and positioned its business for future growth. 
Español Outcome | Helping Businesses Derive Better Insights for Customer Engagement on AWS Learn more » 日本語 Contact Sales Get Started 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used AWS Services Used 中文 (繁體) Bahasa Indonesia analytics capabilities Expands customers' Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Ρусский Customer Stories / Software & Internet عربي Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. runs hundreds of parallel jobs per customer 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Amazon EC2 ActionIQ empowers everyone to be a customer experience champion. Its solutions give business teams the freedom to explore and act on customer data while helping technical teams better manage data governance, costs, and performance. Learn more » On-Demand Instances let you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments. Overview Amazon EC2 On-Demand Instances Learn how ActionIQ is powering its enterprise customer data platform using Spot Instances. compute costs In about 6 months, ActionIQ transitioned its Reserved Instances to Spot Instances. The company can now run thousands of customer workloads in a way that meets the time constraints set by its SLAs, benefiting customers and internal teams alike. “We had to build on top of Spot Instances to achieve our SLAs, making changes like building resilience across Availability Zones,” says Patel. 
“We’ve made a lot of progress and have gotten to a stage where we do not need to tune out clusters. We can now predict how they are going to behave at any point, given some traffic.” in concurrency for customer workloads For ActionIQ, deriving fast insights is critical. The software-as-a-service (SaaS) company operates a powerful customer data platform (CDP) that helps large enterprises better understand their customers and improve their experiences. To help its enterprise customers run more workloads in parallel and meet its service-level agreements (SLAs), ActionIQ wanted to improve the scalability and cost-effectiveness of its system. Türkçe English Run Jobs at Scale while Optimizing for Cost Using Amazon EC2 Spot Instances with ActionIQ Because ActionIQ can scale to run more workloads, its customers no longer experience long wait times or backlogs when they need to use the platform. They can add more data to the system, run jobs, and receive results much faster, which improves their speed of innovation. “Before we adopted Spot Instances, our customers regularly had to wait because their jobs were placed in a queue,” says Patel. “Now, there isn’t any backlog anymore because we can scale, and we have constructed our automatic scaling algorithms to prevent these wait times.” About ActionIQ of jobs at scale Opportunity | Using Amazon EC2 Spot Instances to Reduce Costs for ActionIQ Runs thousands By adopting Spot Instances, ActionIQ has opened up a world of opportunities for its business. In the future, the company plans to optimize its machines based on job types and build out its HybridCompute composable architecture feature, which will help customers connect their own datasets from other systems to the ActionIQ platform. “Our competitors can’t effectively derive business value from such a large dataset in a way that could make it truly usable,” says Joffe. 
“Our system’s ability to handle the size and complexity of the datasets that we work with is a key differentiating factor, and we can accomplish this by using Spot Instances.” Because of the scale and the flexibility that we have gained by using Spot Instances, we can handle larger and more complex workloads than ever before.” Deutsch Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. Learn more » ActionIQ found Reserved Instances to be highly reliable for running its customers’ workloads. However, the volatile nature of workload demand meant there wasn’t a steady, simple-to-predict compute resource requirement. This resulted in ActionIQ paying for Reserved Instances even when workloads were not running, which incurred unnecessary costs. In 2019, ActionIQ chose to adopt Spot Instances so that it could optimize for cost and scale more effectively in response to its customers’ needs. “Spot Instances was a two-for-one solution,” says Patel. “We could achieve the scalability that we wanted because we did not need to prepay for machines in advance.” Amazon EC2 Reserved Instances Tiếng Việt Nitay Joffe Chief Technology Officer, ActionIQ Italiano ไทย Cost-effectively Using Spot Instances, ActionIQ has achieved greater scalability and can run highly complex workloads for its customers. Customers can build segments that are much more complex and run 100 times more workloads in parallel than they could previously. As a result, customer workloads have become 10 times more complex. Its enterprise CDP customers can derive even more value from their data without having to worry about whether the solution can handle their requests. ActionIQ is well positioned for future growth because it can scale more effectively to meet its customers’ compute demands. 
Solution | Scaling Cost-Effectively to Run Hundreds of Concurrent Jobs per Customer

With Spot Instances, ActionIQ can scale to run 50,000 workloads and counting without needing to define a long-term commitment for its compute capacity needs. The company can onboard new customers and datasets quickly and as needed. ActionIQ can also run thousands of concurrent jobs per customer in a much more cost-effective way compared with Reserved Instances. As a result, its customers can expand their analytics capabilities. “Because of the scale and the flexibility that we have gained by using Spot Instances, we can handle larger and more complex workloads than ever before,” says Nitay Joffe, chief technology officer at ActionIQ. “We can scale our storage and query capabilities across massive datasets, and we know that we are backed by Amazon EC2.”

100x increase in concurrency for customer workloads
Rush University System for Health Creates a Population Health Analytics Platform on AWS _ Rush Case Study _ AWS.txt
Building on its highly successful COVID-19 analytics hub with support from Amazon Web Services (AWS), RUSH developed the Health Equity Care & Analytics Platform (HECAP). This platform transforms, aggregates, and harmonizes data from different sources to reflect the complex interplay of clinical and social factors on patient health. HECAP uses advanced analytics to provide actionable insights for patients and providers, which RUSH is using to enhance care outcomes and reduce health inequities in Chicago’s West Side.

2023

RUSH runs analytics models using Amazon SageMaker, a service that lets users build, train, and deploy machine learning models for any use case. Using Amazon SageMaker, RUSH can identify different factors that could influence health outcomes and generate a risk stratification score, which it uses to identify the most at-risk patients. RUSH queries data using Amazon Athena, an interactive query service that makes it simple to analyze data directly from Amazon HealthLake. Amazon Athena also integrates with Amazon SageMaker so that data scientists can prepare data for machine learning. “One of the biggest challenges that data scientists face is that models are complex, and joining data from multiple sources can be cumbersome,” says Saldanha. “With the low-code environment on Amazon SageMaker, we can simplify healthcare data analysis and also minimize errors, which is very important.” RUSH can then present data to providers using dashboards on Amazon QuickSight, a service that powers data-driven organizations with unified business intelligence at hyperscale. Using this information, providers can make critical decisions about each patient’s care and connect them with important resources like food banks, support for utility payments, and transportation.
About Rush University System for Health

Established in 1837, RUSH is a leading academic healthcare system that encompasses three major hospitals and numerous outpatient care facilities. The system primarily serves Chicago’s West Side residents, who have a lower life expectancy than residents of wealthier sections of the city. “Our patients who live in the most disadvantaged neighborhoods are living 16 years less than our patients from more affluent areas,” says Dr. Michael Cui, internal medicine physician and associate chief medical informatics officer at RUSH. “Our goal with HECAP is to improve these documented, long-standing healthcare disparities.”

Using HECAP, RUSH can aggregate all available data about a patient and run analytics models and tools to help guide healthcare decisions. The solution collects data from several sources, including the Epic electronic health record (EHR), blood pressure readings, social determinant of health surveys, and claims history. The platform uses Amazon HealthLake, a HIPAA-eligible service offering healthcare and life sciences companies a unified view of individual and population data to inform analysis and intervention at scale. Amazon HealthLake supports Amazon Comprehend Medical, a HIPAA-eligible natural language processing service that extracts key information from text such as physician’s notes and discharge summaries in the EHR. Using this service, RUSH can transcribe and link important data, such as medications and procedures, to standardized medical terminologies, like ICD-10-CM and RxNorm. HECAP can then extract relevant information from this data to derive further insights. “When we are successfully bringing data from multiple sources and we have identified the appropriate machine learning models, we do something called risk stratification,” says Saldanha. “Using these results, we can identify actionable interventions for health equity.
Our clinicians and support staff can intervene and make changes to care delivery and other services so that we can improve patient outcomes.”

Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.

“We have a great opportunity to start bringing in more data from different sources and use the power of AWS to scale massively across our system, significantly benefiting the care of our patients in Chicago.”

Produces risk score using clinical, social, and patient-generated data

Outcome | Advancing Health Equity in the United States through Data Interoperability and Advanced Analytics

Solution | Developing a Comprehensive Picture of Patient Risk Using Amazon HealthLake

Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning that has been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses.

In addition to medical conditions and lifestyle behaviors, certain factors such as housing, transportation, and access to food, known as the social determinants of health, help healthcare providers understand differences in health status. Patient data can be difficult to capture because it is often siloed across different providers and service organizations. Some data points are often unstructured, such as patient-generated data. Other information is sometimes unavailable, such as employment and neighborhood safety data. Clinicians at RUSH sought to identify the breadth of issues that contribute to the life expectancy gap, so they embarked on a project to make patient data more accurate and actionable. “First, we built a solution on AWS to bring data from multiple sources into a single pane of glass.
We successfully enhanced citywide coordination for the COVID-19 pandemic response,” says Anil Saldanha, chief innovation officer of RUSH. “When the Robert Wood Johnson Foundation gave us an additional grant, we expanded the platform capabilities to develop and launch HECAP, with the support of AWS and its Health Equity Initiative.”

Opportunity | Using AWS Services to Identify Health Disparities and Advance Health Equity

Rush University System for Health (RUSH) is a nationally recognized health system leader in quality and health equity. The hospital network is committed to addressing the underlying causes of the 16-year life expectancy gap among minority and lower-income residents of Chicago’s West Side. RUSH sought to build a comprehensive analytics solution to identify and inform scalable interventions for equitable healthcare based on clinical, cardiometabolic, and social needs.

Advances health equity for minority and underserved patient populations

Builds a complete patient profile

RUSH HECAP Architecture (architecture diagram)

Rush University System for Health (RUSH) is an academic healthcare system based in Chicago, Illinois. RUSH comprises three major hospitals, a wide network of medical providers, and numerous outpatient care facilities. “We have a great opportunity to start bringing in more data from different sources and use the power of AWS to scale massively across our system, significantly benefiting the care of our patients in Chicago,” says Saldanha.
“We want to make HECAP a blueprint that we hope other organizations will use to advance health equity across the United States.”

Anil Saldanha, Chief Innovation Officer, Rush University System for Health

RUSH is continuing to build out HECAP by adding more functionality to the provider dashboard, such as enhancing risk prediction modeling and implementing additional tools to enhance care for underserved populations. Using the methodology and architecture that it developed on AWS, RUSH hopes to expand the solution to support other healthcare organizations and improve outcomes for patients everywhere.

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale.

Amazon HealthLake is a HIPAA-eligible service offering healthcare and life sciences companies a chronological view of individual or patient population health data for query and analytics at scale.

Using HECAP on AWS, RUSH can provide its clinicians with a complete picture of their patients and provide patients with tools for better health. “As a clinician, it is incredibly important to see patient data from multiple sources,” says Cui. “Being able to bring in machine learning tools from AWS to analyze this data is a game changer. As a healthcare system, we can take better care of our patients and access a new and richer data source than we currently have access to.”

Learn how Rush University System for Health is using AWS to identify disparities and advance health equity.

Aggregates data from multiple sources using HIPAA-eligible services

Rush University System for Health Creates a Population Health Analytics Platform on AWS
Safe image generation and diffusion models with Amazon AI content moderation services _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Safe image generation and diffusion models with Amazon AI content moderation services

by Lana Zhang, James Wu, John Rouse, and Kevin Carlson | on 28 JUN 2023 | in Advanced (300), Amazon Comprehend, Amazon Rekognition, Amazon SageMaker JumpStart, Generative AI

Generative AI technology is improving rapidly, and it’s now possible to generate text and images based on text input. Stable Diffusion is a text-to-image model that empowers you to create photorealistic applications. You can easily generate images from text using Stable Diffusion models through Amazon SageMaker JumpStart. The following are examples of input texts and the corresponding output images generated by Stable Diffusion. The inputs are “A boxer dancing on a table,” “A lady on the beach in swimming wear, water color style,” and “A dog in a suit.”

Although generative AI solutions are powerful and useful, they can also be vulnerable to manipulation and abuse. Customers using them for image generation must prioritize content moderation to protect their users, platform, and brand by implementing strong moderation practices to create a safe and positive user experience while safeguarding their platform and brand reputation. In this post, we explore using the AWS AI services Amazon Rekognition and Amazon Comprehend, along with other techniques, to effectively moderate Stable Diffusion model-generated content in near-real time. To learn how to launch and generate images from text using a Stable Diffusion model on AWS, refer to Generate images from text with the stable diffusion model on Amazon SageMaker JumpStart.

Solution overview

Amazon Rekognition and Amazon Comprehend are managed AI services that provide pre-trained and customizable ML models via an API interface, eliminating the need for machine learning (ML) expertise. Amazon Rekognition Content Moderation automates and streamlines image and video moderation.
Amazon Comprehend utilizes ML to analyze text and uncover valuable insights and relationships. The following reference architecture illustrates the creation of a RESTful proxy API for moderating Stable Diffusion text-to-image model-generated images in near-real time. In this solution, we launched and deployed a Stable Diffusion model (v2-1 base) using JumpStart. The solution uses negative prompts and text moderation solutions such as Amazon Comprehend and a rule-based filter to moderate input prompts. It also utilizes Amazon Rekognition to moderate the generated images. The RESTful API will return the generated image and the moderation warnings to the client if unsafe information is detected.

The steps in the workflow are as follows:

1. The user sends a prompt to generate an image.
2. An AWS Lambda function coordinates image generation and moderation using Amazon Comprehend, JumpStart, and Amazon Rekognition:
   a. Apply a rule-based condition to input prompts in the Lambda function, enforcing content moderation with forbidden word detection.
   b. Use the Amazon Comprehend custom classifier to analyze the prompt text for toxicity classification.
   c. Send the prompt to the Stable Diffusion model through the SageMaker endpoint, passing both the prompts as user input and negative prompts from a predefined list.
   d. Send the image bytes returned from the SageMaker endpoint to the Amazon Rekognition DetectModerationLabel API for image moderation.
   e. Construct a response message that includes image bytes and warnings if the previous steps detected any inappropriate information in the prompt or generated image.
3. Send the response back to the client.

The following screenshot shows a sample app built using the described architecture. The web UI sends user input prompts to the RESTful proxy API and displays the image and any moderation warnings received in the response. The demo app blurs the actual generated image if it contains unsafe content.
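The Lambda coordination in step 2 can be sketched as a plain Python function. This is an illustrative sketch, not code from the post: the function and parameter names are assumptions, and the moderation and generation steps are injected as callables so the flow can be exercised without AWS access.

```python
def moderate_and_generate(prompt, keyword_filter, toxicity_classifier,
                          generate_image, image_moderator):
    """Coordinate prompt moderation, image generation, and image moderation.

    Each argument after `prompt` is a callable, so the same flow can run
    inside a Lambda handler (wired to Comprehend, SageMaker, and
    Rekognition) or in a test with stubs.
    """
    warnings = []
    # Step 2a: rule-based forbidden-word check on the input prompt
    if keyword_filter(prompt):
        warnings.append("prompt contains forbidden words")
    # Step 2b: ML-based toxicity check on the input prompt
    if toxicity_classifier(prompt):
        warnings.append("prompt classified as toxic")
    # Step 2c: generate the image (negative prompts applied inside)
    image_bytes = generate_image(prompt)
    # Step 2d: moderate the generated image
    labels = image_moderator(image_bytes)
    if labels:
        warnings.append("image flagged: " + ", ".join(labels))
    # Step 2e: return the image plus any warnings
    return {"image": image_bytes, "warnings": warnings}

# Exercising the flow with stubs in place of the AWS calls
result = moderate_and_generate(
    "a dog in a suit",
    keyword_filter=lambda p: "naked" in p.lower(),
    toxicity_classifier=lambda p: False,
    generate_image=lambda p: b"fake-image-bytes",
    image_moderator=lambda img: [],
)
print(result["warnings"])  # []
```

Injecting the moderation steps as callables keeps the orchestration logic testable and makes it easy to swap in stricter behavior, such as rejecting the request outright instead of returning a warning.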
We tested the app with the sample prompt “A sexy lady.” You can implement more sophisticated logic for a better user experience, such as rejecting the request if the prompts contain unsafe information. Additionally, you could have a retry policy to regenerate the image if the prompt is safe, but the output is unsafe.

Predefine a list of negative prompts

Stable Diffusion supports negative prompts, which let you specify prompts to avoid during image generation. Creating a predefined list of negative prompts is a practical and proactive approach to prevent the model from producing unsafe images. By including prompts like “naked,” “sexy,” and “nudity,” which are known to lead to inappropriate or offensive images, the model can recognize and avoid them, reducing the risk of generating unsafe content. The implementation can be managed in the Lambda function when calling the SageMaker endpoint to run inference of the Stable Diffusion model, passing both the prompts from user input and the negative prompts from a predefined list. Although this approach is effective, it could impact the results generated by the Stable Diffusion model and limit its functionality. It’s important to consider it as one of the moderation techniques, combined with other approaches such as text and image moderation using Amazon Comprehend and Amazon Rekognition.

Moderate input prompts

A common approach to text moderation is to use a rule-based keyword lookup method to identify whether the input text contains any forbidden words or phrases from a predefined list. This method is relatively easy to implement, with minimal performance impact and lower costs. However, the major drawback of this approach is that it’s limited to only detecting words included in the predefined list and can’t detect new or modified variations of forbidden words not included in the list. Users can also attempt to bypass the rules by using alternative spellings or special characters to replace letters.
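A minimal version of this rule-based lookup, with light normalization to catch simple bypass attempts, might look like the following sketch. The word list and character-substitution map here are illustrative, not from the post:

```python
import re

# Illustrative forbidden-word list and common character substitutions
FORBIDDEN_WORDS = {"naked", "nudity", "sexy"}
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def contains_forbidden_words(prompt):
    """Return True if any forbidden word appears in the prompt.

    Lowercases the text, maps common character substitutions back to
    letters, and splits on non-letter characters before the lookup, to
    catch simple bypass attempts like 'n@ked' or 's3xy'.
    """
    normalized = prompt.lower().translate(SUBSTITUTIONS)
    tokens = re.findall(r"[a-z]+", normalized)
    return any(token in FORBIDDEN_WORDS for token in tokens)

print(contains_forbidden_words("A n@ked figure"))  # True
print(contains_forbidden_words("A dog in a suit"))  # False
```

Even with normalization, this remains a deny-list check, which is why the post pairs it with ML-based toxicity detection rather than relying on it alone.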
To address the limitations of rule-based text moderation, many solutions have adopted a hybrid approach that combines rule-based keyword lookup with ML-based toxicity detection. The combination of both approaches allows for a more comprehensive and effective text moderation solution, capable of detecting a wider range of inappropriate content and improving the accuracy of moderation outcomes. In this solution, we use an Amazon Comprehend custom classifier to train a toxicity detection model, which we use to detect potentially harmful content in input prompts in cases where no explicit forbidden words are detected. With the power of machine learning, we can teach the model to recognize patterns in text that may indicate toxicity, even when such patterns aren’t easily detectable by a rule-based approach. With Amazon Comprehend as a managed AI service, training and inference are simplified. You can easily train and deploy Amazon Comprehend custom classification with just two steps. Check out our workshop lab for more information about the toxicity detection model using an Amazon Comprehend custom classifier. The lab provides a step-by-step guide to creating and integrating a custom toxicity classifier into your application. The following diagram illustrates this solution architecture.

This sample classifier uses a social media training dataset and performs binary classification. However, if you have more specific requirements for your text moderation needs, consider using a more tailored dataset to train your Amazon Comprehend custom classifier.

Moderate output images

Although moderating input text prompts is important, it doesn’t guarantee that all images generated by the Stable Diffusion model will be safe for the intended audience, because the model’s outputs can contain a certain level of randomness. Therefore, it’s equally important to moderate the images generated by the Stable Diffusion model.
In this solution, we utilize Amazon Rekognition Content Moderation, which employs pre-trained ML models, to detect inappropriate content in images and videos. We use the Amazon Rekognition DetectModerationLabel API to moderate images generated by the Stable Diffusion model in near-real time. Amazon Rekognition Content Moderation provides pre-trained APIs to analyze a wide range of inappropriate or offensive content, such as violence, nudity, hate symbols, and more. For a comprehensive list of Amazon Rekognition Content Moderation taxonomies, refer to Moderating content.

The following code demonstrates how to call the Amazon Rekognition DetectModerationLabel API to moderate images within a Lambda function using the Python Boto3 library. This function takes the image bytes returned from SageMaker and sends them to the Image Moderation API for moderation.

```python
import base64

import boto3

# Initialize the Amazon Rekognition client object
rekognition = boto3.client('rekognition')

# Call the Rekognition Image moderation API and store the results
response = rekognition.detect_moderation_labels(
    Image={
        'Bytes': base64.b64decode(img_bytes)
    }
)

# Print out the API response
print(response)
```

For additional examples of the Amazon Rekognition Image Moderation API, refer to our Content Moderation Image Lab.

Effective image moderation techniques for fine-tuning models

Fine-tuning is a common technique used to adapt pre-trained models to specific tasks. In the case of Stable Diffusion, fine-tuning can be used to generate images that incorporate specific objects, styles, and characters. Content moderation is critical when training a Stable Diffusion model to prevent the creation of inappropriate or offensive images. This involves carefully reviewing and filtering out any data that could lead to the generation of such images.
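Such a review can be automated. The following sketch batch-checks candidate training images with the same DetectModerationLabels API; the confidence threshold, helper names, and lazy boto3 import are assumptions for illustration, and the pure helper is separated out so the filtering logic can be verified without AWS credentials.

```python
def unsafe_labels(moderation_response, min_confidence=60.0):
    """Extract moderation label names at or above a confidence threshold
    from a DetectModerationLabels API response dict."""
    return [
        label["Name"]
        for label in moderation_response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

def filter_training_images(image_paths, min_confidence=60.0):
    """Yield (path, labels) for images flagged by Rekognition.

    Requires AWS credentials; boto3 is imported lazily so the pure
    helper above stays usable offline.
    """
    import boto3  # assumed available in the batch environment
    rekognition = boto3.client("rekognition")
    for path in image_paths:
        with open(path, "rb") as f:
            response = rekognition.detect_moderation_labels(
                Image={"Bytes": f.read()}, MinConfidence=min_confidence
            )
        labels = unsafe_labels(response, min_confidence)
        if labels:
            yield path, labels  # flag or remove before training

# Offline check of the pure helper with a canned API-style response
sample = {"ModerationLabels": [
    {"Name": "Explicit Nudity", "Confidence": 98.2},
    {"Name": "Suggestive", "Confidence": 41.0},
]}
print(unsafe_labels(sample))  # ['Explicit Nudity']
```

Flagged paths can then be excluded from the S3 training directory before the fine-tuning job starts, so questionable images never reach the model.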
By doing so, the model learns from a more diverse and representative range of data points, improving its accuracy and preventing the propagation of harmful content. JumpStart makes fine-tuning the Stable Diffusion model easy by providing the transfer learning scripts using the DreamBooth method. You just need to prepare your training data, define the hyperparameters, and start the training job. For more details, refer to Fine-tune text-to-image Stable Diffusion models with Amazon SageMaker JumpStart.

The dataset for fine-tuning needs to be a single Amazon Simple Storage Service (Amazon S3) directory including your images and an instance configuration file dataset_info.json, as shown in the following code. The JSON file will associate the images with the instance prompt like this: {'instance_prompt':<<instance_prompt>>}.

```
input_directory
|---instance_image_1.png
|---instance_image_2.png
|---instance_image_3.png
|---instance_image_4.png
|---instance_image_5.png
|---dataset_info.json
```

Obviously, you can manually review and filter the images, but this can be time-consuming and even impractical when you do this at scale across many projects and teams. In such cases, you can automate a batch process to centrally check all the images against the Amazon Rekognition DetectModerationLabel API and automatically flag or remove images so they don’t contaminate your training.

Moderation latency and cost

In this solution, a sequential pattern is used to moderate text and images. A rule-based function and Amazon Comprehend are called for text moderation, and Amazon Rekognition is used for image moderation, both before and after invoking Stable Diffusion. Although this approach effectively moderates input prompts and output images, it may increase the overall cost and latency of the solution, which is something to consider.

Latency

Both Amazon Rekognition and Amazon Comprehend offer managed APIs that are highly available and have built-in scalability.
Despite potential latency variations due to input size and network speed, the APIs used in this solution from both services offer near-real-time inference. Amazon Comprehend custom classifier endpoints can offer a response time of less than 200 milliseconds for input text sizes of less than 100 characters, while the Amazon Rekognition Image Moderation API serves approximately 500 milliseconds for average file sizes of less than 1 MB. (The results are based on the test conducted using the sample application, which qualifies as a near-real-time requirement.) In total, the moderation calls to Amazon Rekognition and Amazon Comprehend add up to 700 milliseconds per API call. It’s important to note that the Stable Diffusion request usually takes longer depending on the complexity of the prompts and the underlying infrastructure capability. In the test account, using an instance type of ml.p3.2xlarge, the average response time for the Stable Diffusion model via a SageMaker endpoint was around 15 seconds. Therefore, the latency introduced by moderation is approximately 5% of the overall response time, a minimal impact on the overall performance of the system.

Cost

The Amazon Rekognition Image Moderation API employs a pay-as-you-go model based on the number of requests. The cost varies depending on the AWS Region used and follows a tiered pricing structure. As the volume of requests increases, the cost per request decreases. For more information, refer to Amazon Rekognition pricing. In this solution, we utilized an Amazon Comprehend custom classifier and deployed it as an Amazon Comprehend endpoint to facilitate real-time inference. This implementation incurs both a one-time training cost and ongoing inference costs. For detailed information, refer to Amazon Comprehend Pricing. JumpStart enables you to quickly launch and deploy the Stable Diffusion model as a single package.
Running inference on the Stable Diffusion model will incur costs for the underlying Amazon Elastic Compute Cloud (Amazon EC2) instance as well as inbound and outbound data transfer. For detailed information, refer to Amazon SageMaker Pricing.

Summary

In this post, we provided an overview of a sample solution that showcases how to moderate Stable Diffusion input prompts and output images using Amazon Comprehend and Amazon Rekognition. Additionally, you can define negative prompts in Stable Diffusion to prevent generating unsafe content. By implementing multiple moderation layers, the risk of producing unsafe content can be greatly reduced, ensuring a safer and more dependable user experience. Learn more about content moderation on AWS and our content moderation ML use cases, and take the first step towards streamlining your content moderation operations with AWS.

About the Authors

Lana Zhang is a Senior Solutions Architect on the AWS WWSO AI Services team, specializing in AI and ML for content moderation, computer vision, and natural language processing. With her expertise, she is dedicated to promoting AWS AI/ML solutions and assisting customers in transforming their business solutions across diverse industries, including social media, gaming, e-commerce, and advertising & marketing.

James Wu is a Senior AI/ML Specialist Solutions Architect at AWS, helping customers design and build AI/ML solutions. James’s work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. Prior to joining AWS, James was an architect, developer, and technology leader for over 10 years, including 6 years in engineering and 4 years in the marketing and advertising industries.

Kevin Carlson is a Principal AI/ML Specialist with a focus on Computer Vision at AWS, where he leads Business Development and GTM for Amazon Rekognition.
Prior to joining AWS, he led Digital Transformation globally at the Fortune 500 engineering company AECOM, with a focus on artificial intelligence and machine learning for generative design and infrastructure assessment. He is based in Chicago, where outside of work he enjoys spending time with his family and is passionate about flying airplanes and coaching youth baseball.

John Rouse is a Senior AI/ML Specialist at AWS, where he leads global business development for AI services focused on content moderation and compliance use cases. Prior to joining AWS, he held senior-level business development and leadership roles at cutting-edge technology companies. John is working to put machine learning in the hands of every developer with the AWS AI/ML stack. Small ideas bring about small impact; John’s goal is to empower customers with big ideas and opportunities that open doors so they can make a major impact with their customers.
Samsung Electronics Improves Demand Forecasting Using Amazon SageMaker Canvas _ Samsung Electronics Case Study _ AWS.txt
Samsung Electronics Improves Demand Forecasting Using Amazon SageMaker Canvas

Samsung Electronics is a multinational company based in South Korea that provides customers around the world with access to technology, such as mobile phones, computers, and smart devices.

Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying your ML models, improving data science team productivity by up to 10x.

Increased forecasting accuracy

Saved time for the data science team to focus on advanced models

From days to hours to generate insights

“Using Amazon SageMaker Canvas is simple, and the interface is user friendly. Even a business analyst like me can analyze data and get insights using machine learning.”

If the marketing intelligence group does need assistance with a model, it can collaborate with the data science team using AWS services. Business analysts using Amazon SageMaker Canvas can share the same model with data scientists who use Amazon SageMaker Studio, an integrated development environment that provides a single web-based visual interface for data scientists to access tools to perform all ML development steps. Using Amazon SageMaker Studio, data scientists can evaluate model results and parameters. “The data science team is small and has a lot of responsibilities analyzing advanced models,” says Lee.
“It makes sense to have business analysts working with simpler models because we can still collaborate with the data science team if we encounter challenges.”

Dooyong Lee, Manager of Marketing Intelligence, Samsung Electronics

Solution | Increasing Forecasting Accuracy While Reducing the Time to Receive Results by 1–2 Days

Outcome | Encouraging Other Teams to Use Amazon SageMaker Canvas for Additional Use Cases

By equipping business analysts with the skills to use Amazon SageMaker Canvas, Samsung saves time for both business analysts and data scientists. The marketing intelligence group meets weekly to analyze future demand for the company’s resources. In the past, it couldn’t determine how a particular factor would impact demand on its own. “Using Amazon SageMaker Canvas, we can quickly see how a factor will affect the model,” says Lee. “Previously, we had to ask our data science team for help and would typically wait for 1–2 days. Now, we can save time by getting the answer using Amazon SageMaker Canvas in 1–2 hours.” The data science team can then focus on working with more advanced models, which is a better use of its expertise.

Forecasting PC set demand and shipments is a small portion of the forecasting that Samsung Electronics does as a large, multinational company. The marketing intelligence group plans to train other members of the team to use Amazon SageMaker Canvas in the future. It is also encouraging other teams to start using the service for additional use cases, such as analyzing mobile, server, and automotive demand. “Using Amazon SageMaker Canvas is simple, and the interface is user friendly,” says Lee.
“Even a business analyst like me can analyze data and get insights using ML.”

Amazon SageMaker Canvas expands access to machine learning (ML) by providing business analysts with a visual interface that allows them to generate accurate ML predictions on their own, without requiring any ML experience or having to write a single line of code.

About Samsung Electronics

Based in South Korea, Samsung Electronics is a global company offering people around the world access to technology, such as mobile phones, computers, and smart devices. The Samsung Device Solutions division of the company focuses on the inner workings of electronic devices to provide maximum performance, reliability, and longevity.

AWS Data Lab offers accelerated, joint engineering engagements between customers and AWS technical resources to create tangible deliverables that accelerate databases, analytics, artificial intelligence/machine learning (AI/ML), application & infrastructure modernization, and DataOps initiatives.

Learn how Samsung Electronics equipped business analysts to forecast demand using Amazon SageMaker Canvas without writing code.

Within the Samsung Device Solutions division, the Memory Marketing team analyzes memory needs for electronics produced by the multinational company. It previously forecasted memory chip demand based on customer preferences, external research, and simple regression. However, these inputs were sometimes volatile, inaccurate, and didn’t account for new factors. For example, with new applications and devices on the market and environmental factors like the COVID-19 pandemic impacting business, it became difficult to determine the inflection point by solely looking at previous trends. To overcome these challenges, Samsung Electronics sought a new methodology for demand forecasting.
Rather than increasing the workload of its data science team, the company wanted to empower business analysts with no ML or coding experience to inform data-driven decision-making using Amazon SageMaker Canvas, which provides a visual interface for generating accurate ML predictions on their own, without writing code.

Opportunity | Employing No-Code ML for Demand Forecasting Using Amazon SageMaker Canvas

Samsung Electronics kicked off the project in April 2022. Then, in August 2022, it started training business analysts from the marketing intelligence group, a portion of the Memory Marketing division, through AWS Data Lab, which offers accelerated, joint engineering engagements between customers and AWS technical resources to create tangible deliverables. Five members of the team went through 5 days of training to learn how to use Amazon SageMaker Canvas. By September 2022, the business analysts were using Amazon SageMaker Canvas to analyze data and forecast demand over the next eight quarters for PC shipments.

To forecast demand, business analysts imported data from various sources, including internal data and external data from third-party sources, into Amazon SageMaker Canvas. After importing the data and selecting the values to predict, Samsung Electronics could automatically prepare the data, explore it, and quickly build ML models. “All of these steps are done with a click, so business analysts can easily use the tool,” says Dooyong Lee, manager of marketing intelligence at Samsung Electronics. After building a demand forecast model using Amazon SageMaker Canvas, Samsung Electronics is seeing highly accurate predictions. “Using Amazon SageMaker Canvas, we can continuously advance the forecast accuracy over time,” says Lee.

Digital devices are everywhere: in homes, offices, and people’s pockets.
To keep up with the increasing complexity of digital devices on the market and efficiently meet customer needs, Samsung Electronics needed a better way to predict demand for memory hardware. The company wanted to empower business analysts without coding experience to glean data-driven insights using machine learning (ML), so it sought a solution using Amazon Web Services (AWS). Using features of Amazon SageMaker—fully managed infrastructure, tools, and workflows for building, training, and deploying ML models for any use case—Samsung Electronics enhanced forecasting accuracy while saving time for both its business and data science teams.
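As a rough illustration of the simple-regression baseline the team previously relied on, the sketch below fits a linear trend to quarterly shipment figures and extrapolates it eight quarters ahead, the horizon mentioned in the case study. This is not SageMaker Canvas code, and all shipment numbers are invented for illustration.

```python
# Hypothetical baseline: ordinary least-squares trend fit over past quarterly
# shipments, extrapolated eight quarters ahead. All figures are invented.

def linear_trend_forecast(history, horizon):
    """Fit y = a + b*t by least squares and extrapolate `horizon` steps."""
    n = len(history)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(history) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, history)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return [a + b * t for t in range(n, n + horizon)]

past_quarters = [100, 104, 109, 113, 118, 121]   # invented PC-set shipments
forecast = linear_trend_forecast(past_quarters, horizon=8)
print([round(v, 1) for v in forecast])
```

A trend-only baseline like this is exactly what struggles at inflection points, which is why the team moved to models that can incorporate additional factors.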
Samsung Electronics Uses Amazon Chime SDK to Deliver a More Engaging Television Experience for Millions of Viewers _ Samsung Case Study _ AWS.txt
By working closely with AWS teams from initial design discussions right through to development, the service benefited from a short development timeframe. “With AWS, we were able to quickly roll out Live Chatting, the world’s first live television and text chat service,” says Seokjae Oh, platform service part lead of the visual display division at Samsung Electronics. Without collaborating with AWS teams, Samsung would have needed additional internal resources to create its new messaging services.

Taking advantage of this solution, Samsung customers can view chat interfaces on their Samsung smart televisions, write messages using their remote control or mobile phone, and chat by converting short voice messages to text on the same devices. Interactive chat functions include emojis and recommended messages based on program genres. Additionally, the live chatting solution can scale automatically to support millions of users by relying on Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration technology.

Samsung relies on the Amazon Chime SDK to deliver a television live chat solution in months, give viewers a more engaging experience, and scale to support chat services on televisions produced from 2020 to the present.

Outcome | Giving Television Viewers a More Interactive and Engaging Experience

Samsung Electronics, based in South Korea, is the country’s leading electronics company.
Samsung produces consumer devices including televisions, LCD panels, and printers; semiconductors; and communications devices such as smartphones and networking gear. The company consists of nearly 230 subsidiaries across the globe.

Samsung is currently working to commercialize its new service to target the wider Korean market and continues to work with AWS to improve maintenance and service functions. “We look forward to helping Samsung innovate rapidly to meet evolving demands for new entertainment services,” says Ham.

The application increases viewer engagement by giving customers the ability to view chat interfaces on their smart television and write messages using their remote control or mobile phone.

Solution | Creating an Interactive Chatting Solution Using Amazon Chime SDK

The Samsung Visual Display Division used the Amazon Chime SDK, a service that enables embedded real-time communication, to create a new live chatting solution. The Amazon Chime SDK makes it easy for developers to add real-time voice, video, and messaging powered by machine learning to their applications. With the Amazon Chime SDK, as well as AWS developer tools and databases, Samsung built a live television chat service on AWS, with messaging capabilities integrated into live chatting functionality.
Opportunity | Meeting the Demand for Interactive Chat Features

Samsung, based in Korea, is a global electronics company and the world’s largest manufacturer of televisions and smartphones. Samsung produces a range of consumer and industry electronics, including appliances, digital media devices, semiconductors, and memory chips. The company’s mission is to create a better future for consumers by using sustainable products.

Using Amazon ECS to facilitate automatic scalability, the Samsung live chatting solution scales across multiple channels during spikes in demand from larger groups of viewers during major sports events or popular television show finales. “Viewers across South Korea are looking for new ways to engage with their favorite TV programs. With the agility of AWS, we can scale this new Samsung viewing experience to customers countrywide, driving brand loyalty,” Ham says.

Samsung has combined messaging and entertainment to give viewers a way to share their thoughts, reactions, and emotions during live television shows. The AWS-powered live chatting technology helps users watching the same program enter and chat in a single chat room, interacting with each other through messages displayed in a panel on the right side of the television screen. Interactivity across all viewing networks and cross-device connectivity makes text chat easy and accessible to South Korean viewers. “Samsung Electronics is making television more engaging with cloud technology,” said Kee Ho Ham, managing director of AWS Korea. “Using AWS, Samsung Electronics brings interactive live television chat services to global customers for the first time.”

To address its business requirements, Samsung wanted to use a cloud-based solution for scalability and agility. Because the company had previous experience running workloads on AWS, it selected AWS again due to the strong support it had already received.
Samsung Electronics (Samsung) is the world’s largest television manufacturer. To meet customer demand, the company decided to build on Amazon Web Services (AWS) and used the Amazon Chime SDK to create a new live television chat service. In recent years, Samsung has seen rising demand from consumers for easy-to-use interactive chat features during television shows and movies. Through its own analysis, the company discovered that customers are seeking a sharing experience built into the television to make watching shows more engaging overall. To meet this customer demand, Samsung wanted to develop a new solution that would integrate messaging capabilities into live chatting. The company also wanted to implement the solution quickly to meet the needs of its customers.
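To make the single-room-per-program idea concrete, here is a minimal, hypothetical sketch of the room-assignment logic such a live TV chat service might use, with viewers of the same program grouped into shared rooms. The sharding rule, capacity figure, and naming scheme are invented and are not Samsung’s implementation.

```python
# Hypothetical sketch: viewers tuned to the same program land in the same chat
# room; rooms shard once they hit an invented per-room capacity.

from collections import defaultdict

class ChatRooms:
    def __init__(self, max_per_room=10_000):
        self.max_per_room = max_per_room      # invented capacity per room
        self.rooms = defaultdict(list)        # room id -> list of member ids

    def join(self, viewer_id, program_id):
        """Place a viewer in the first non-full room for their program."""
        shard = 0
        while len(self.rooms[f"{program_id}#{shard}"]) >= self.max_per_room:
            shard += 1
        room = f"{program_id}#{shard}"
        self.rooms[room].append(viewer_id)
        return room

rooms = ChatRooms(max_per_room=2)
print(rooms.join("alice", "ep-finale"))   # ep-finale#0
print(rooms.join("bob", "ep-finale"))     # ep-finale#0
print(rooms.join("carol", "ep-finale"))   # ep-finale#1
```

In the real service this grouping runs behind managed messaging infrastructure, which is what lets it absorb demand spikes during finales and sports events.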
Saving 80 on Costs While Improving Reliability and Performance Using Amazon Aurora with Panasonic Avionics _ Panasonic Avionics Case Study _ AWS.txt
Panasonic has delivered over 15,000 in-flight entertainment systems and over 3,400 in-flight connectivity solutions to airlines around the world. Its in-flight entertainment systems capture data about passengers’ activities while onboard an airplane, such as their music and movie preferences. Airlines want this information so that they can make quick decisions based on current data to capture optimal incremental revenue opportunities. Panasonic’s previous on-premises system for collecting this data included a self-managed MySQL database as the backend that had limited flexibility and was difficult to maintain. To provide data to airlines more efficiently, Panasonic sought to improve the scalability, availability, and overall resiliency of its in-flight entertainment applications, reduce the heavy lifting of maintenance work, improve database replication performance, and optimize costs.

Panasonic Avionics Corporation is a supplier of in-flight entertainment and communications systems on commercial airlines. It has delivered over 15,000 in-flight entertainment systems and over 3,400 in-flight connectivity solutions to airlines around the world.

Pursuing these objectives led the company to migrate to a cloud-based architecture using a suite of AWS services. “For the heavy-duty data work we need to do, AWS is definitely the best choice for us,” says Edwin Woolf, cloud development team manager at Panasonic. To modernize its legacy database, Panasonic decided to use Amazon Aurora, a relational database service built for the cloud with full MySQL and PostgreSQL compatibility, as its storage engine. Panasonic used Amazon Aurora MySQL-Compatible Edition for its various data marts to develop a new data lake—a centralized repository that supports data storage at virtually any scale—at its core for archiving.
Amazon CloudWatch alarms, used with Aurora’s built-in monitoring metrics, also mean that Panasonic does not have to run third-party monitoring systems.

Panasonic can now provide the data that airlines want while making flight time more enjoyable for travelers. It can collect, analyze, and store data more efficiently at scale and deliver the data to airlines in near real time. This data provides additional insight into content usage patterns and helps Panasonic improve product offerings and customer experience.

Using Aurora Database Cloning to quickly create duplicates of production databases gives Panasonic a way to reduce costs and improve flexibility when working with its databases. Faster and more efficient than physically copying the data, Aurora Database Cloning supports the creation of a new cluster that uses the same Aurora cluster volume and has the same data as the original.

To help improve system reliability, Panasonic incorporates machine learning on Amazon SageMaker, which can be used to build, train, and deploy machine learning models for virtually any use case with fully managed infrastructure, tools, and workflows. Using machine learning, Panasonic has started to predict and identify potential failures of aircraft antennae (needed for passengers to connect to the internet).
Outcome | Building a Data-Driven Mindset

“Using the Amazon Aurora clusters has had a huge impact not just on cost-effectiveness but on operations as well, because there have been huge improvements in performance and, even more significantly, in reliability—less burden on the development team.”
Jeremy Welch, Cloud Development Data Software Engineer, Panasonic Avionics Corporation

After preparing its on-premises databases for migration, Panasonic used AWS Database Migration Service (AWS DMS), which is used to migrate databases to AWS quickly and securely, to handle the replication of its smaller databases from onsite to the cloud. Using AWS DMS, Panasonic could migrate databases with minimal downtime by keeping the source database fully operational. For larger databases, not wanting to saturate its available AWS Direct Connect bandwidth, Panasonic used Percona XtraBackup to back up source databases and transfer them to Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—before restoring the databases to target Aurora MySQL clusters. Teams at Panasonic also use Amazon Athena, an interactive query service that makes it simple to analyze data in Amazon S3 using standard SQL, to run data analytics queries and extract relevant information from the databases. Because Amazon Athena is serverless, there is no infrastructure to manage, reducing system overhead requirements. When staff can quickly query data without having to set up and manage servers or data warehouses, they can focus on value-adding tasks instead.

Panasonic Avionics Corporation (Panasonic) needed to modernize its architecture to keep pace with its day-to-day operations.
The commercial airline in-flight entertainment and communications systems supplier wanted to improve the reliability and redundancy of its databases, which were backed by an onsite infrastructure that presented storage and scalability challenges. Looking for a solution to expand its capacity, modernize its infrastructure, and migrate 10 TB of data to the cloud, Panasonic selected Amazon Web Services (AWS). Since migrating, the company can collect, analyze, and store data more efficiently at scale and provide reliable services to its customers to accomplish its primary goal of making flight time as enjoyable as possible for personal and business travelers.

Opportunity | Using Amazon Aurora to Modernize Data Storage and Management

Solution | Cutting Query Time up to 20% Using Amazon Aurora While Saving 80% on Costs

Although migrating Panasonic systems to the cloud was complex and involved 10 TB of data, the company could work with the AWS Database Specialist Solutions Architecture team to determine and implement solutions that accomplished Panasonic’s business goals. “It’s been a breath of fresh air to be able to speak to the AWS developers directly. That personal contact is worth a lot,” says Woolf.
Learn how Panasonic Avionics Corporation migrated its database environment to the cloud using AWS.

By migrating its databases to a managed cloud-native database service like Aurora, Panasonic has saved an estimated 80 percent on costs compared with its previous onsite environment. Additionally, replication lag has been reduced significantly. “Using our on-premises system under heavy loads, the databases experienced up to a 10-to-15-second replication delay between writer and reader. The equivalent database running on Aurora MySQL sees at most a 0.3-second delay, meaning that data is available in near real time,” says Jeremy Welch, cloud development data software engineer at Panasonic, who led the migration effort. Panasonic has also seen an approximately 18–20 percent improvement in query time. Reliable operation and less customer exposure to technical issues are a big plus.

Moving forward, Panasonic wants to develop a data-driven mindset to support access to data so that internal teams can optimize how they use that data within their respective business units. After the success it has seen by migrating to AWS, the company wants to expand its data lake and provide cataloging as a means for data discovery. “Migrating to AWS has been a huge win,” says Woolf.
Saving time with personalized videos using AWS machine learning _ AWS Machine Learning Blog.txt
AWS Machine Learning Blog

Saving time with personalized videos using AWS machine learning

by Humphrey Chen and Aaron Sloman | on 28 JAN 2021 | in Amazon Comprehend, Amazon DynamoDB, Amazon OpenSearch Service, Amazon Rekognition, Amazon SageMaker, Artificial Intelligence

CLIPr aspires to help save 1 billion hours of people’s time. We organize video into a first-class, searchable data source that unlocks the content most relevant to your interests using AWS machine learning (ML) services. CLIPr simplifies the extraction of information in videos, saving you hours by eliminating the need to skim through them manually to find the most relevant information. CLIPr provides simple AI-enabled tools to find, interact with, and share content across videos, uncovering your buried treasure by converting unstructured information into actionable data and insights.

How CLIPr uses AWS ML services

At CLIPr, we’re leveraging the best of what AWS and the ML stack offer to delight our customers. At its core, CLIPr uses the latest ML, serverless, and infrastructure as code (IaC) design principles. AWS allows us to consume cloud resources just when we need them, and we can deploy a completely new customer environment in a couple of minutes with just one script. The second benefit is scale. Processing video requires an architecture that can scale vertically and horizontally by running many jobs in parallel. As an early-stage startup, time to market is critical. Building models from the ground up for key CLIPr features like entity extraction, topic extraction, and classification would have taken us a long time to develop and train. We quickly delivered advanced capabilities by using AWS AI services for our applications and workflows.
We used Amazon Transcribe to convert audio into searchable transcripts, Amazon Comprehend for text classification and organizing by relevant topics, Amazon Comprehend Medical to extract medical ontologies for a health care customer, and Amazon Rekognition to detect people’s names, faces, and meeting types for our first MVP. We were able to iterate fairly quickly and deliver quick wins that helped us close our pre-seed round with our investors. Since then, we have started to upgrade our workflows and data pipelines to build in-house proprietary ML models, using the data we gathered in our training process. Amazon SageMaker has become an essential part of our solution. It’s a fabric that enables us to provide ML in a serverless model with unlimited scaling. The ease of use and flexibility to use any ML and deep learning framework of choice was an influencing factor. We’re using TensorFlow, Apache MXNet, and SageMaker notebooks. Because we used open-source frameworks, we were able to attract and onboard data scientists to our team who are familiar with these platforms and quickly scale it in a cost-effective way. In just a few months, we integrated our in-house ML algorithms and workflows with SageMaker to improve customer engagement. The following diagram shows our architecture of AWS services. The more complex user experience is our Trainer UI, which allows human reviews of data collected via CLIPr’s AI processing engine in a timeline view. Humans can augment the AI-generated data and also fix potential issues. Human oversight helps us ensure accuracy and continuously improve and retrain models with updated predictions. An excellent example of this is speaker identification. We construct spectrographs from samples of the meeting speakers’ voices and video frames, and can identify and correlate the names and faces (if there is a video) of meeting participants. 
The Trainer UI also includes the ability to inspect the process workflow, and issues are flagged to help our data scientists understand what additional training may be required. A typical example of this is the visual clues to identify when speaker names differ in various meeting platforms.

Using CLIPr to create a personalized re:Invent video

We used CLIPr to process all the AWS re:Invent 2020 keynotes and leadership sessions to create a searchable video collection so you can easily find, interact with, and share the moments you care about most across hundreds of re:Invent sessions. CLIPr became generally available in December 2020, and today we launched the ability for customers to upload their own content. The following is an example of a CLIPr-processed video of Andy’s keynote. You get to apply filters to the entire video to match topics that are auto-generated by CLIPr ML algorithms. CLIPr dynamically creates a custom video from the keynote by aggregating the topics and moments that you select. Upon choosing Watch now, you can view your video composed of the topics and moments you selected. In this way, CLIPr is a video enrichment platform. Our commenting and reaction features provide a co-viewing experience where you can see and interact with other users’ reactions and comments, adding collaborative value to the content. Back in the early days of AWS, low-flying-hawk was a huge contributor to the AWS user forums. The AWS team often sought low-flying-hawk’s thoughts on new features, pricing, and issues we were experiencing. Low-flying-hawk was like having a customer in our meetings without actually being there. Imagine what it would be like to have customers, AWS service owners, and presenters chime in and add context to the re:Invent presentations at scale. Our customers very much appreciate the Smart Skip feature, where CLIPr gives you the option to skip to the beginning of the next topic of interest.
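The custom-clip behavior described above boils down to selecting time segments by topic and merging adjacent keeps into one playback timeline. The following is a hypothetical sketch of that logic, not CLIPr’s actual code; the segment data and merging rule are invented for illustration.

```python
# Hypothetical sketch of assembling selected topics into a custom clip
# timeline, in the spirit of CLIPr's "Watch now" feature.

def build_clip(segments, wanted_topics):
    """Keep segments whose topic was selected, merging back-to-back keeps."""
    keep = sorted((s, e) for s, e, topic in segments if topic in wanted_topics)
    merged = []
    for start, end in keep:
        if merged and start <= merged[-1][1]:      # touches previous segment
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

keynote = [                       # (start_s, end_s, topic) — invented data
    (0, 120, "welcome"),
    (120, 900, "SageMaker"),
    (900, 1500, "compute"),
    (1500, 2100, "SageMaker"),
]
print(build_clip(keynote, {"SageMaker"}))   # [(120, 900), (1500, 2100)]
```

Smart Skip is essentially the inverse of the same segment index: jumping the playhead to the start of the next kept segment instead of cutting the video.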
We built a natural language query and search capability so our customers can find moments easily and fast. For instance, you can search “SageMaker” in CLIPr search. We do a deep search across our entire media assets, ranging from keywords, video transcripts, topics, and moments, to present instant results. In a similar search (see the following screenshot), CLIPr highlights Andy’s keynote sessions, and also includes specific moments when SageMaker is mentioned in Swami Sivasubramanian and Matt Wood’s sessions. CLIPr also enables advanced analytics capabilities using knowledge graphs, allowing you to understand the most important moments, including correlations across your entire video assets. The following is an example of the knowledge graph correlations from all the re:Invent 2020 videos filtered by topics, speakers, or specific organizations. We provide a content library of re:Invent sessions, with all the keynotes and leadership sessions, to save you time and make the most out of re:Invent. Try CLIPr in action with re:Invent videos, see how CLIPr uses AWS to make it all happen. Conclusion Create an account at www.clipr.ai and create a personalized view of re:Invent content. You can also upload your own videos, so you can spend more time building and less time watching! About the Authors Humphrey Chen ‘s experience spans from product management at AWS and Microsoft to advisory roles with Noom, Dialpad, and GrayMeta. At AWS, he was Head of Product and then Key Initiatives for Amazon’s Computer Vision. Humphrey knows how to take an idea and make it real. His first startup was the equivalent of shazam for FM radio and launched in 20 cities with AT&T and Sprint in 1999. Humphrey holds a Bachelor of Science degree from MIT and an MBA from Harvard. Aaron Sloman is a Microsoft alum who launched several startups before joining CLIPr, with ventures including Nimble Software Systems, Inc., CrossFit Chalk, and speakTECH. 
Aaron was recently the architect and CTO for OWNZONES, a media supply chain and collaboration company, using advanced cloud and AI technologies for video processing.
Scaling Authentic Educational Games Using Amazon GameLift with Immersed Games _ Case Study _ AWS.txt
By leaving the infrastructure to AWS, Immersed Games freed up time for higher-value work. Amazon GameLift automatically manages the scaling up of a collection of Amazon EC2 servers on the backend to take on player loads. “As a small team, using AWS means we don’t have to deal with all of the knowledge and capability to manage infrastructure servers manually,” says Trussell. When the company still managed infrastructure manually, a developer had to wait in the office on Friday night until every student had signed off before updating the game. That manual effort has been automated along with server management using AWS.

Immersed Games is an educational video game studio that builds immersive learning experiences that are standards-aligned. The company’s game, Tyto Online, teaches students scientific problem solving.

Using Amazon GameLift, Immersed Games can scale to accommodate more students than it could when it was manually managing its servers. Now, any time students sign on, Amazon GameLift automatically spins up additional servers as needed, giving the company confidence that it can provide a seamless learning experience to students at any moment. “We’re no longer in panic mode, unsure if we can handle the load of several classes coming online at the same time. Amazon GameLift is taking care of it all,” says Kyle Trussell, technical director at Immersed Games.
Immersed Games used AWS Business Support, which offers technical support and architectural guidance, to hold an AWS Immersion Day that introduced many of its young developers to cloud fundamentals. Saving money on AWS is now a whole-company effort. “We all want to be aware of what’s happening on AWS and how much money that is saving,” says Tropf. The development team started counting dollars saved as “pizza points,” saving up for pizza parties when it delivers significant cost savings. In the last year, the company has seen a 70 percent decline in technology spending, even as it improves the gaming experience for students and achieves scalability. Most importantly, Immersed Games can offer thousands of students compelling and authentic problem-solving experiences that impart real-world thinking skills.

“Using AWS, we can spend more time on what makes us unique: creating immersive educational games.”

With its original infrastructure, Tyto Online could host only a maximum of 150 concurrent players, and the team had to manually scale and balance the server loads. The company also struggled to develop an immersive, 3D game that could work in schools that often were unwilling to install an app and wanted to run the game from a web browser. “It is a massive challenge to build a 3D game that runs in the web browsers of cheap, 4-year-old laptops in schools,” says Tropf. As school districts kept registering new students to play and learn, Immersed Games knew it needed to find a new way to manage, develop, and scale its game.
Those challenges led Immersed Games to Amazon GameLift, a managed game server hosting solution, in October 2021. The service not only hosts game servers but also manages load balancing and networking.

As the company expands to new school districts, Immersed Games is also looking at implementing Amazon Cognito—a tool offering secure and frictionless customer identity and access management that scales—to meet security standards. “We don’t want to spend all our time rebuilding the wheel with the same services,” says Tropf. “Using AWS, we can spend more time on what makes us unique: creating immersive educational games.”

Immersed Games, an education technology (EdTech) startup, needed scalable infrastructure to host its science education game, Tyto Online. The company faced the added challenge of running games seamlessly in schools that have limited equipment and strict protocols for internet access. After experimenting with many solutions, Immersed Games faced high and uncertain hosting costs, making it difficult for the company to build engaging games and constraining the scale of its operations.

Solution | Improving Server Management and Decreasing Technology Costs by 70% Using Amazon GameLift

Seeing the success that it has had so far, Immersed Games has high ambitions. “Our goal is to let other people start building games on our solution in the future,” says Tropf.
“We want companies to make other equally compelling game-based learning content.” By proving that it is possible to deliver an immersive educational gaming experience in a web browser, Immersed Games hopes to spearhead a new wave of innovation. Türkçe English Achieved About Immersed Games In 2019, Immersed Games chose to offload its infrastructure to Amazon Web Services (AWS) to help it develop and scale games effectively. “Using AWS means that we can spend more time on the things that are important to us: designing an amazing educational experience and focusing on students and teachers,” says Lindsey Tropf, founder and chief executive officer of Immersed Games. Now using AWS, Immersed Games can scale simply, reduce costs, and free developers to focus on developing features for the game instead of managing infrastructure. Outcome | Starting a Wave of Educational Game Development using AWS Deutsch Tiếng Việt reduction in overall tech costs Italiano ไทย Opportunity | Using Amazon GameLift to Scale Tyto Online for Immersed Games Contact Sales Learn how Immersed Games in EdTech delivered 70 percent cost savings using Amazon GameLift. Learn more » Amazon EC2 Immersed Games is an educational video game studio headquartered in Buffalo, New York. The idea for the company came when Tropf was working on a PhD in education and saw the parallels between learning theory and the kind of authentic problem-solving scenarios that happen in gaming. Immersed Games launched in 2015, but with funding sparse, the company built its cloud infrastructure using free credits from a variety of hosting providers. It eventually settled on AWS because of the support the cloud provider offers EdTech companies. “The fact that I had dedicated AWS contacts who understood the education market meant a lot, especially because I couldn’t get a hold of a real person at the companies we used previously,” says Tropf. 
The company used Amazon Elastic Compute Cloud (Amazon EC2), a cloud solution offering secure and resizable compute capacity for virtually any workload, to host the game servers. Português Amazon GameLift
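As a rough illustration of the managed hosting described above, starting a hosted session on an Amazon GameLift fleet comes down to a single API call; GameLift itself handles server placement, load balancing, and networking. The fleet ID and player limit below are illustrative assumptions, not Immersed Games' actual configuration.

```python
# Hedged sketch: create a GameLift-hosted game session for one classroom.
# Fleet ID and player cap are placeholder values.

def session_request(fleet_id, max_players=150):
    """Build a CreateGameSession request for one classroom's session."""
    return {"FleetId": fleet_id, "MaximumPlayerSessionCount": max_players}

def create_session(fleet_id, max_players=150):
    import boto3  # deferred so the request builder stays dependency-free
    gamelift = boto3.client("gamelift")
    return gamelift.create_game_session(
        **session_request(fleet_id, max_players)
    )["GameSession"]
```

Separating the pure request builder from the API call keeps the capacity policy easy to test without touching AWS.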
Scaling Data Pipeline from One to Five Satellites Seamlessly on AWS _ Axelspace Case Study _ AWS.txt
Axelspace Scales Data Pipeline from One to Five Satellites Seamlessly on AWS

2022

Space technology company Axelspace has made satellite imagery and data more accessible for its global customer base by using microsatellites. Because the company handles both the manufacturing and operation of these satellites, along with the processing and analysis of satellite data, it needed a robust compute infrastructure that could dynamically scale to support all its operations, especially as it began sending more microsatellites into space.

From the beginning, Axelspace chose Amazon Web Services (AWS) as the cloud service provider for its custom, event-based scaling system using a combination of AWS services, including AWS Lambda, which gives companies the ability to run code without thinking about servers or clusters. By automating the provisioning of its infrastructure based on workloads, Axelspace instantly scaled its data-processing operations to support additional data from four new satellites while optimizing compute costs and running increasingly complex algorithms on its satellite data.

Under 5 hours to deliver data to customers | One to five satellites seamlessly scaled data pipelines | Virtually unlimited number of modules simultaneously processing data | Deploys resources according to demand | Saves on storage costs by lifecycling data

About Axelspace

Axelspace manufactures both satellite hardware and compatible software. The company has produced nine microsatellites, including five GRUS satellites, and it provides an Earth-observation platform, AxelGlobe, and a one-stop service for microsatellite missions, AxelLiner.

Opportunity | Expanding Its Fleet of Satellites

Axelspace specializes in manufacturing both satellite hardware and compatible software, such as AxelGlobe, a subscription-based platform that gives customers the ability to access satellite imagery from anywhere. Since the launch of its first GRUS microsatellite in 2018, the company has rapidly expanded its fleet of remote sensing satellites to five, which it uses to capture Earth-observation data. Its customers can use this data across a wide variety of different applications, including land monitoring, disaster prevention, city planning, and more.

As the company continued to grow, it looked to AWS for solutions that would facilitate innovation within its data-processing pipeline and free up time for its team of developers to focus on testing new algorithms. Axelspace was also searching for a cost-effective solution that would help it deliver data to its customers at the lowest possible cost. “One of our key differentiators is affordability,” says Jay Pena, senior product manager at Axelspace. “It’s our goal to provide satellite imagery to everyone.”

Solution | Building a Custom, Scalable Data Pipeline on AWS

Axelspace began building its custom, scalable data pipeline in 2019, with the intention of using fully managed services to automate as many steps in its process as possible and alleviate the operational burden on its development team. In general, the pipeline works as an intermediary between the satellites and AxelGlobe. First, the company downlinks data from its satellites. Then, the data proceeds through a series of modules, which represent different processing steps. For storing processing metadata and capture information, Axelspace adopted Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale a relational database in the cloud.

While designing its custom scaling system, Axelspace also wanted to provide an environment for monitoring that would remain secure. So the company implemented Amazon CloudWatch, which provides companies with observability of their AWS resources and applications on AWS and on premises. Using Amazon CloudWatch, Axelspace receives near-immediate notifications of system anomalies through internal notification channels. “We can better sleep at night using AWS services, knowing that our data is in a controlled environment,” says Pena.

Axelspace also focused on increasing its cost savings by innovating its use of Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from virtually anywhere. Instead of storing its data in one Amazon S3 class, the company cycles its intermediary data for either removal or migration into lower Amazon S3 classes, helping it save tens of thousands of dollars on storage costs.

Axelspace uses AWS Lambda to kick-start the processing and determine which AWS compute service is appropriate for the job. “Our workloads are variable but predictable,” says Amber Fechko, cloud engineering unit leader at Axelspace. “By building a custom scaling system, we can provision our resources on demand according to the processing requirements of our individual modules.” Depending on the size and type of module, Axelspace uses either Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload; Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it easy for companies to deploy, manage, and scale containerized applications; or AWS Fargate, a serverless compute service for containers. With its custom-built data pipeline in place, Axelspace can process data in a virtually unlimited number of modules simultaneously. “It doesn’t matter if we have 10 captures processing or 100,” says Fechko. “We’ve been able to scale from one satellite to five seamlessly.”

Outcome | Scaling Its Global Operations

Throughout this project, Axelspace’s global team accessed multilingual documentation on AWS for technical support and cloud best practices. Using its custom-built data pipeline, the company can deliver data to its customers in under 5 hours. This speed is especially crucial in emergency cases, such as satellite imagery of natural disasters. These innovations have also given Axelspace’s development teams the ability to focus on improving the overall quality of its satellite imagery and operations. For instance, Axelspace has deployed additional custom tasking features that give its customers the ability to choose the capture frequency and term of any given satellite. “We love the fully managed solutions on AWS,” says Fechko. “They help our teams focus on algorithm development instead of infrastructure maintenance.”
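The dispatch step described in this case study, a Lambda function that routes each processing module to Amazon EC2, Amazon ECS, or AWS Fargate, could be sketched as follows. The module names, size thresholds, and event shape are illustrative assumptions, not Axelspace's actual configuration.

```python
# Hypothetical sketch: a Lambda handler inspects each processing module and
# picks the compute service it should run on. Thresholds are placeholders.

def choose_compute_target(module_name, input_size_gb):
    """Pick a compute service based on the module's resource profile."""
    if input_size_gb > 50:
        return "ec2"      # heavyweight, long-running processing
    if input_size_gb > 5:
        return "ecs"      # mid-sized containerized modules
    return "fargate"      # small modules run serverless

def handler(event, context=None):
    """Lambda entry point: one event per module of a downlinked capture."""
    target = choose_compute_target(event["module"], event["size_gb"])
    # The real pipeline would submit the job here, e.g. via ecs.run_task(...)
    # or an EC2-backed job queue (omitted in this sketch).
    return {"module": event["module"], "target": target}
```

Keeping the routing decision in a pure function makes the "variable but predictable" workload policy easy to tune and test independently of the Lambda runtime.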
To process data from its satellites, Axelspace has built a data-processing pipeline on AWS, which runs advanced algorithms that produce clear, accurate images for its customers. Each satellite capture produces tens of gigabytes of data. As the company launched more satellites into space and increased its capture frequency, the demand on its data-processing pipeline increased tenfold. “Our data-processing pipeline is our heaviest usage of AWS,” says Amber Fechko, cloud engineering unit leader at Axelspace.

Because Axelspace has built a scalable, event-based infrastructure, it’s now undertaking an expansion of its global operations. With a well-established customer base in Japan, the company is looking at building its portfolio overseas. Axelspace is also exploring the possibility of increasing the resiliency of its processing operations by deploying across multiple AWS Regions. “I have nothing but wonderful things to say about the AWS team,” says Fechko. “AWS is an incredible asset to us at Axelspace.”
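The storage lifecycling described in this case study, cycling intermediary data into lower-cost Amazon S3 classes or removing it, is typically expressed as an S3 lifecycle configuration. The prefix and day counts below are illustrative assumptions, not Axelspace's actual policy.

```python
# Minimal sketch: transition intermediary objects to a cheaper S3 storage
# class after 30 days and expire them after 90. Values are placeholders.

def build_lifecycle_rules(transition_days=30, expire_days=90):
    """S3 lifecycle configuration for intermediary pipeline data."""
    return {
        "Rules": [{
            "ID": "cycle-intermediary-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "intermediary/"},
            "Transitions": [{"Days": transition_days,
                             "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": expire_days},
        }]
    }

def apply_lifecycle(bucket):
    import boto3  # deferred so the rule builder stays dependency-free
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=build_lifecycle_rules()
    )
```

Once applied, S3 enforces the transitions automatically, so no pipeline code needs to track object age.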
Scaling Sustainability Solutions for Buildings Using AWS with BrainBox AI _ Case Study _ AWS.txt
Scaling Sustainability Solutions for Buildings Using AWS with BrainBox AI

2023

Canadian technology scale-up BrainBox AI is helping building owners reduce emissions and energy consumption using cloud-based artificial intelligence (AI) and machine learning (ML) on Amazon Web Services (AWS). Using AWS, BrainBox AI can deliver deep learning solutions with low latency to multiple regions and scale quickly to meet the demand for a growing number of building owners who want to reduce their emissions. BrainBox AI scaled its autonomous energy management solution to new regions using AWS, reducing the carbon emissions of the buildings that it is installed in by up to 40 percent.

Up to 40% reduction in building HVAC emissions | Up to 25% reduction in HVAC energy costs | Scaled out to 20 countries | Manages hundreds of buildings 24/7

About BrainBox AI

Headquartered in Montreal, BrainBox AI is a decarbonization technology company that provides cloud-based AI/ML solutions to decrease the emissions and improve the energy efficiency of buildings in over 20 countries.

Opportunity | Using AWS to Expand Service for BrainBox AI

BrainBox AI’s autonomous decarbonization technology connects to existing building management systems or cloud-connected thermostats, gathers data, and uses ML to determine optimal settings for the heating, ventilation, and air conditioning (HVAC) systems of the building. “It adds a brain to a building so that it can act preemptively rather than reactively,” says Rebecca Handfield, vice president of marketing and public relations at BrainBox AI.

When it launched in May 2019, the company had 12 staff members and managed 15 buildings. As it grew, it needed more flexibility and began using AWS in 2020. Using AWS, BrainBox AI could expand to new regions and quickly onboard new buildings to keep up with the demand for sustainable solutions. Now, in 2023, BrainBox AI has over 150 people and manages hundreds of buildings worldwide 24/7.

Solution | Reducing Carbon Emissions by Up to 40% Using ML

Using AWS, BrainBox AI can replicate and redeploy its solution to new regions rapidly. It can also keep latency under 500 ms by using AWS servers that are located closer geographically to the buildings where it is expanding. Using services such as Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity, and Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale databases in the cloud, BrainBox AI can scale flexibly. “All the tools, all of the monitoring, observability, and autoscaling capacity is already there on AWS,” says Jean-Simon Venne, chief technology officer and cofounder at BrainBox AI.

When BrainBox AI installs its solution in a new building, it must train a new ML model to control the building’s HVAC systems. The models are trained for 2–3 months using internal and external data streams, such as equipment data, utility patterns, and weather patterns. After installation, BrainBox AI models determine the optimal settings for running the building’s HVAC systems, the component that often consumes the most energy in a building, and control the system by adjusting boilers, pumps, fans, and other physical equipment. The ML models reassess the data every 5 minutes to optimize for comfort, cost, and energy efficiency.

Using BrainBox AI, building owners reduce HVAC energy costs by up to 25 percent and reduce HVAC-related greenhouse gas emissions by up to 40 percent. The solution has been implemented in 20 countries, and by the end of 2022, BrainBox AI was onboarding 20 new buildings per week. Using AWS, the company hopes to increase its capacity to onboard up to 1,000 new buildings per week.

Outcome | Spreading Solutions to Reduce Carbon Impact

BrainBox AI wants to accelerate emissions reductions to make a lasting, tangible impact on climate change for future generations. Multisite retailers and other commercial building owners are showing interest in using the solution to manage rising energy costs and comply with environmental legislation. Using AWS, BrainBox AI can scale to meet the demand. “We could never reproduce that scalability on our own,” says Venne. “AWS is part of our secret recipe.”
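The 5-minute reassessment cycle described above, in which the model maps current building telemetry to new HVAC setpoints, can be pictured as a simple control loop. Everything in this sketch is a hypothetical stand-in; BrainBox AI's actual interfaces are not public.

```python
# Purely illustrative control loop: every interval, read telemetry, ask the
# model for setpoints, and push them to the building management system.
import time

def reassess(telemetry, predict):
    """One optimization pass: the model turns current state into setpoints."""
    setpoints = predict(telemetry)
    # Keep only the actuator commands (boilers, fans, etc. in the article).
    return {"boiler_c": setpoints["boiler_c"], "fan_pct": setpoints["fan_pct"]}

def control_loop(read_telemetry, predict, apply_setpoints, interval_s=300):
    """Run the 5-minute cadence cited in the case study."""
    while True:
        apply_setpoints(reassess(read_telemetry(), predict))
        time.sleep(interval_s)
```

Injecting `read_telemetry`, `predict`, and `apply_setpoints` as callables keeps the loop testable with stub data, independent of any real building hardware.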
Scaling Text to Image to 100 Million Users Quickly Using Amazon SageMaker _ Canva Case Study _ AWS.txt
Canva Scales Text to Image to 100 Million Users Quickly Using Amazon SageMaker

2022

Learn how Canva rolled out its image-generating app using Amazon SageMaker and Amazon Rekognition.

Global visual communications platform Canva wanted to use machine learning (ML) to bring an artificial intelligence (AI) image-generation feature to its 100 million monthly active users, and to do so quickly. Canva already used ML through Amazon Web Services (AWS) and Amazon SageMaker, a service to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. The company wanted to introduce a feature that would let users enter a text prompt and get an AI-generated image, but doing so on its own would take at least 6 months of dedicated engineering work and a huge number of GPUs. By using Amazon SageMaker Real-Time Inference functionality, Canva could bring the new feature to users in less than 3 weeks.

Under 3 weeks to ship text-to-image feature to users | Accelerated innovation in ML for users | Improved productivity | Added content moderation

About Canva

Founded in 2013, Canva is a free online visual communications and collaboration platform with a mission to empower everyone in the world to design.

Opportunity | Using Amazon SageMaker to Accelerate Deployment for Canva

Since its founding in 2013, Canva’s goal has been to empower anyone to communicate visually, on any device, from anywhere in the world. Canva is an online platform for creating and editing everything from presentations to social media posts, videos, documents, and even websites. The company aims to democratize content creation so that everyone, from enterprises down to the smallest-scale bloggers, has access to advanced visual communication tools. With the development of programs that use ML and AI to create images based on text input, building a text-to-image function in Canva aligned with the organization’s goal of empowering creativity and making design as simple as possible. “There has been a huge explosion in generated content,” says Glen Pink, director of ML at Canva. “AI-generated images have only recently become more than a toy. It’s become something that can actually be used as part of the creative design process.”

When an engineer at Canva built a text-to-image demo based on Stable Diffusion (an open-source, deep learning text-to-image ML model released in 2022), the company invested in integrating it with Canva. Pink’s first step in creating this tool was to turn to AWS, because Canva has been using services from AWS for nearly its entire existence. “It would have probably taken 6 months to implement on our own,” Pink says. “I wouldn’t even know how to approach the scaling from the hardware perspective.” Indeed, it would have been impossible for Canva to set up enough GPUs to make its text-to-image function a reality in time to meet business needs.

Solution | Rapidly Bringing New Features to Users Using Amazon SageMaker

By using Amazon SageMaker, Canva could ship the new text-to-image feature to users in the space of 3 weeks. “That’s a normal turnaround time for some models,” Pink says, “but this is heavy lifting and cutting edge. Before AWS, Canva couldn’t ship big, modern, cutting-edge models quickly, and now we can.”

It wasn’t only speed to market that was a concern for Canva but, more importantly, user trust and safety. The advent of AI-generated art has brought about new ways for users to create problematic content. In some cases, these AIs might even create offensive images on their own. Manually moderating each image would have required Canva to hire hundreds of moderators working around the clock. Instead, it turned to Amazon Rekognition, which offers pretrained and customizable computer vision capabilities to extract information and insights from images and videos. “Amazon Rekognition was really useful,” says Pink. “We’re not allowing users to enter prompts that could potentially generate malicious content, and we are using Amazon Rekognition to identify not-safe-for-work images that the model generates.” If a user enters an offensive image prompt, Canva simply returns no results to the user. There is also an option for users to report generated images they deem offensive.

Canva sets its image-creation sequence up so that after a user enters a text prompt, it uses an Amazon SageMaker Real-Time Inference endpoint to generate an image. When the images are generated, the system filters them through the Amazon Rekognition model. At the end of the pipeline, Canva displays a selection of images to the end user. With this cutting-edge text-to-image technology, users can create unique, high-quality images in seconds rather than in hours or days.

Canva now uses Amazon SageMaker for over 60 ML models, affecting nearly every stage of image creation in the service. “Getting models into customers’ hands and then building momentum around that is very important. AWS has been absolutely essential for us to do any of this,” says Pink. Canva rolled out this innovative new feature to its users so quickly in large part due to the amount of employee time that the company saves using AWS. Using AWS also reduced costs by saving Canva a costly hardware investment up front. “AWS is a very good option for robust scaling in terms of return on investment because we can deploy effectively and quickly,” says Pink.

Outcome | Scaling Up for Future Growth

With over 100 million monthly active users, Canva is seeking to expand the intelligent services that it offers along with its global user base. The company plans to continue using AWS to build these tools at the scale that it needs to serve its growing Canva for Teams users. Using Amazon SageMaker makes it simple for Canva’s ML engineers to innovate rapidly and shape the future of team collaboration. “This is where AWS is actively involved in delivering the underlying environment to support the really heavy ML models,” Pink says. “Using AWS, the Canva ML environment does very well at scaling to large numbers of users. We can be confident that whatever we build on top of AWS, it’s going to scale.”

AWS Services Used

Amazon SageMaker: Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.

Amazon Rekognition: Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.
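The generate-then-moderate sequence described in this case study, a prompt sent to a SageMaker real-time endpoint followed by a Rekognition screen on every generated image, might look like the sketch below. The endpoint name, response payload shape, and confidence threshold are illustrative assumptions, not Canva's actual values.

```python
# Hedged sketch: generate images from a prompt, then filter out any image
# that Rekognition flags with a high-confidence moderation label.
import base64
import json

def is_safe(moderation_labels, min_confidence=80.0):
    """An image passes only if no moderation label meets the confidence bar."""
    return not any(l["Confidence"] >= min_confidence for l in moderation_labels)

def generate_and_filter(prompt, endpoint="text-to-image-prod"):
    import boto3  # deferred so is_safe stays dependency-free
    sm = boto3.client("sagemaker-runtime")
    rek = boto3.client("rekognition")
    resp = sm.invoke_endpoint(
        EndpointName=endpoint,
        ContentType="application/json",
        Body=json.dumps({"prompt": prompt}),
    )
    # Response format depends on the deployed model; here we assume a JSON
    # list of base64-encoded images.
    images = [base64.b64decode(b64) for b64 in json.loads(resp["Body"].read())]
    safe = []
    for img in images:
        labels = rek.detect_moderation_labels(Image={"Bytes": img})
        if is_safe(labels["ModerationLabels"]):
            safe.append(img)
    return safe
```

Isolating the threshold check in `is_safe` lets the moderation policy be tightened or relaxed without touching the AWS plumbing.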
Scaling to Ingest 250 TB from 1 TB Daily Using Amazon Kinesis Data Streams with LaunchDarkly _ LaunchDarkly Case Study _ AWS.txt
2023

LaunchDarkly provides a feature-management solution for development teams that seek to manage risk as they deploy new software features. The company had already built a scalable compute architecture on Amazon Web Services (AWS), and it needed a data streaming solution to handle proliferating volumes of event data. The solution also needed to provide high availability to critical workloads so that LaunchDarkly customers could better manage risk by minimizing disruption and by quickly identifying threats. The company turned to services from Amazon Kinesis, which makes it simple to collect, process, and analyze near-real-time streaming data so that companies can get timely insights and react quickly to new information. Using Amazon Kinesis services, LaunchDarkly has scaled to ingest 250 TB of data in near real time and evaluate around 20 trillion feature flags daily, double its data analytics use cases, and provide 99.999 percent availability for customers.

Scaled to ingest 250 TB and evaluate around 20 trillion feature flags daily | 99.999% availability | 99.999999% durability | 1–7 days of data retention | Doubled data analytics use cases

About LaunchDarkly

LaunchDarkly provides scalable feature flag management software as a service that decouples feature rollout and code deployment, helping development teams to manage risk.

LaunchDarkly streams event-data-processing records in real time into AWS Lambda, a serverless, event-driven compute service that lets companies run code for virtually any type of application or backend service without provisioning or managing servers. LaunchDarkly uses Lambda functions to process and transform data before sending it downstream to Amazon Kinesis Data Firehose, which reliably loads near-real-time streams into data lakes, warehouses, and analytics services. LaunchDarkly has doubled its data analytics use cases using Amazon Kinesis Data Analytics, which lets companies interactively query and analyze data in real time and continuously produce insights for time-sensitive use cases. For example, customers can evaluate flags not just by user but also by context, a generalized way to refer to the people, services, machines, or other resources that encounter feature flags. Analytics workloads no longer fail due to a large influx of data, helping LaunchDarkly to scale to safely accommodate an increasing number of customer experiments. Instead of conventional processing methods that update data every 30 minutes, LaunchDarkly’s solution helps customers to analyze the effect of new feature releases in just a few minutes. “Using Amazon Kinesis Data Analytics, we have much more flexibility and can optimize our customers’ experiences,” says Mike Zorn, software architect at LaunchDarkly. For example, LaunchDarkly uses Kinesis Data Analytics to filter noise from user data and streamline pertinent information for customers. “We are able to realize the full value of our data,” says Zorn. “We don’t need to compromise analyses due to data volume issues.”

Amazon Kinesis Data Analytics: Amazon Kinesis Data Analytics is the easiest way to transform and analyze streaming data in real time using Apache Flink.
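The processing step described above, Lambda functions that transform event records before sending them downstream to Kinesis Data Firehose, could be sketched like this. The delivery stream name and event fields are illustrative assumptions, not LaunchDarkly's actual schema.

```python
# Sketch: a Lambda function consumes a batch of Kinesis records, transforms
# each event, and forwards the batch to a Firehose delivery stream.
import base64
import json

def transform(event):
    """Normalize one raw flag-evaluation event before downstream delivery."""
    return {"flag": event["flag_key"], "value": event["value"], "ts": event["ts"]}

def handler(lambda_event, context=None):
    records = []
    for record in lambda_event["Records"]:
        # Kinesis record payloads arrive base64-encoded in the Lambda event.
        raw = json.loads(base64.b64decode(record["kinesis"]["data"]))
        records.append({"Data": (json.dumps(transform(raw)) + "\n").encode()})
    import boto3  # deferred so transform stays dependency-free
    boto3.client("firehose").put_record_batch(
        DeliveryStreamName="flag-events", Records=records
    )
    return {"processed": len(records)}
```

Firehose then buffers and loads these normalized records into the downstream data lake or analytics store without further server management.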
Scaling to Ingest 250 TB from 1 TB Daily Using Amazon Kinesis Data Streams with LaunchDarkly

Learn how LaunchDarkly built a scalable event-processing pipeline with 99.999 percent availability using Amazon Kinesis Data Streams.

Opportunity | Using Amazon Kinesis Data Streams to Optimize Availability for LaunchDarkly

Founded in 2014, LaunchDarkly provides software as a service that empowers customers’ development teams to safely deliver and control software releases through the use of feature flags. A feature flag is a kind of toggle that facilitates continuous delivery of software by decoupling feature rollout and deployment, concealing the code pathway. Customers’ software teams deploy new features “darkly” (meaning “off”) and control their releases rather than risk an all-or-nothing launch into production. For example, LaunchDarkly customers can release a feature to a small number of users to track performance, and then gradually increase the rollout. This reduces the risk profile for software teams that don’t need to scramble to repair errors in a widespread feature release. In short, feature flags help LaunchDarkly customers scale safe releases for real users.

To run its servers, LaunchDarkly had been using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. It managed incoming requests by optimally routing traffic using Elastic Load Balancing (ELB), which automatically distributes incoming application traffic across one or more Availability Zones. At first, the company was using its servers both to ingest data and to run all its analytics processing, but the strain had begun to cause a rise in workload failures. “That was a solution that worked well when we were a really small company,” says Mike Zorn, software architect at LaunchDarkly. “But as our data volume increased, it showed that this system needed to be more reliable.” The cumulative volumes of data slowed the analytics workloads, and the company needed to scale up its data processing so that it could keep up with demand. With the idea of isolating workloads to optimize availability as the company continued to grow, LaunchDarkly adopted Amazon Kinesis Data Streams, a serverless streaming data service that makes it simple to capture, process, and store data streams at virtually any scale.

Solution | Building Robust Data Streaming Tools to Ingest, Process, and Analyze Data at Scale

Architecture diagram (not reproduced): With Kinesis; Before Kinesis; Kinesis and Kinesis Data Analytics.

Using Kinesis Data Streams, LaunchDarkly collects volumes of granular customer data concerning which users experience specific feature flags and whether certain feature flags are still in use. LaunchDarkly has scaled from ingesting a single terabyte a day to roughly 250 TB a day, while evaluating about 20 trillion flags daily. “Using Amazon Kinesis Data Streams helped us solve how to create a layer of indirect processing that protects our workloads from one another,” Zorn says. “What’s more, it’s helped us to safely reach the level of scale that we’re at now.”

Since adopting Kinesis Data Streams, LaunchDarkly has solidified the reliability of the events API it provides to customers, with five nines of availability and eight nines of data durability. “If we still had our previous architecture, we’d probably have around 1 or 2 percent availability,” Zorn says. “The availability of our events API has been rock solid since we adopted Amazon Kinesis Data Streams.”

LaunchDarkly creates an additional layer of safety by using the configurable retention window of Kinesis Data Streams, which lets a company store data for 1–7 days. If a software misconfiguration or bug causes data to be processed incorrectly, LaunchDarkly engineers can use the added layer of safety to simply reingest historical data for customers. “That’s something I didn’t fully anticipate or appreciate when we first adopted Amazon Kinesis,” says Zorn. “It’s super simple to do, and it makes our customers very, very happy.”

Outcome | Continuing to Support Customer Experimentation While Managing Risk

LaunchDarkly is using Kinesis Data Analytics to continue to enhance the functionality that its feature flags offer to customers. To process the ever-growing data volume, LaunchDarkly continues to use Kinesis services and other AWS services to enhance the reliability of the API it provides to customers, protecting customers from data loss and optimizing their ability to test new features. “It would have made it really hard to introduce an experimentation product that people would have any faith in if we were dropping data all the time,” Zorn says. “Using Amazon Kinesis Data Streams has removed the risk from our data system’s growth to a pretty large extent.”

Amazon Kinesis Data Firehose: Amazon Kinesis Data Firehose is an extract, transform, and load (ETL) service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services.
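As an illustration of the ingestion side of the pipeline described in this case study, flag-evaluation events can be written to a Kinesis data stream in batches. The stream name, partition-key choice, and event shape are assumptions, not LaunchDarkly's actual schema.

```python
# Illustrative sketch: batch flag-evaluation events into Kinesis Data
# Streams, partitioning by flag key so evaluations of one flag share a shard.
import json

def make_record(event):
    """Shape one flag-evaluation event as a Kinesis record."""
    return {
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": event["flag_key"],
    }

def put_events(stream_name, events):
    import boto3  # deferred so make_record stays dependency-free
    kinesis = boto3.client("kinesis")
    # PutRecords accepts up to 500 records per request.
    for i in range(0, len(events), 500):
        batch = [make_record(e) for e in events[i:i + 500]]
        kinesis.put_records(StreamName=stream_name, Records=batch)
```

Because the stream retains records for the configured 1-7 day window, the same events remain available for reingestion if a downstream bug mangles a processing run.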
Scaling Up to 30% While Reducing Costs by 20% Using AWS Graviton3 Processors with Instructure | Case Study | AWS
Instructure could also manage more requests while reducing its response times from 1.5 seconds to 500 ms using the Amazon EC2 C7g Instance clusters. As a result, millions of concurrent users can complete tasks with less interruption. “We’re able to take that in-person student-teacher experience and either extend it or, where needed, replace it,” says Pendleton.

AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.

Opportunity | Adopting a Scalable Solution with Better Performance

After migrating to AWS Graviton3 processors, Instructure saw a 30 percent boost in throughput performance and improved load performance running on Amazon EC2 C7g Instances over Amazon EC2 instances not based on AWS Graviton3 processors. “Migrating to AWS Graviton3 processors has helped us save costs on scaling while empowering us to offer our users a smoother and faster experience,” says Pendleton. The company achieved up to 20 percent better performance from its application servers while running fewer instances at peak times. The organization also observed that the Amazon EC2 C7g Instances were delivering better results against their cost, which was reduced by 15–20 percent. “These cost savings mean that we can invest in more novel, interesting solutions, like new data services and machine learning. Our engineers can also spend less time doing mundane tasks and more time innovating to benefit customers,” says Pendleton.

Established in 2008, Instructure, the maker of Canvas LMS, is a US-based education technology company with global operations. The Instructure Learning Platform includes learning solutions for higher education and K–12 schools to elevate student success, amplify the power of teaching, and inspire everyone to learn together.
Zach Pendleton, Chief Architect, Instructure

“AWS has consistently been a fantastic vendor for us. It is flexible and responsive,” says Pendleton. “Working alongside AWS, we can build solutions that meet our customers’ needs.”

Instructure first migrated its compute-intensive workloads to AWS Graviton2 processor–based Amazon EC2 C6g Instances, which optimize for both higher performance and lower cost per vCPU. The migration from Amazon EC2 C5 Instances was seamless: the primary programming languages used by Instructure, Ruby and Java, support Arm-based instances, so no source code changes were required. When AWS launched AWS Graviton3 processors in 2022, Instructure performed load tests on AWS Graviton3–based Amazon EC2 C7g Instances, which offer up to 25 percent better performance over the sixth-generation Amazon EC2 C6g Instances based on AWS Graviton2 processors. The load tests assessed the new instances’ cost and performance benefits, and the results compelled the company to migrate to AWS Graviton3–based instances.

Instructure is a cloud-native company, having chosen Amazon Web Services (AWS) for its reliability, global reach, and sustainability, says Pendleton. “We saw the value of the cloud from the beginning and moved in that direction.” Instructure runs on Amazon Elastic Compute Cloud (Amazon EC2) instances, which provide secure and elastic compute capacity for virtually any workload. When online learning increased during the COVID-19 pandemic, Instructure began to explore using AWS Graviton–based Amazon EC2 instances, powered by custom-built AWS Graviton processors, to deliver high performance at a lower price for cloud workloads.
Amazon EC2 C7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for compute-intensive workloads.

“Migrating to AWS Graviton3 processors has helped us save costs on scaling while empowering us to offer our users a smoother and faster experience.”

Instructure uses AWS Graviton processors to scale its solution and uses Amazon EC2 Auto Scaling, which makes it possible for users to add or remove compute capacity dynamically to meet changing demand.

Overall, Instructure observed up to 30 percent improved performance by migrating to AWS Graviton–based instances. “We saw better 99th percentile performance during load testing of the Amazon EC2 C7g Instances, which led to lower error rates. That kind of consistency and reliability is meaningful to us and our customers,” says Pendleton.

Outcome | Spending More Time on Innovation Instead of Infrastructure Management

Solution | Reducing Costs by Up to 20 Percent and Increasing Performance by Up to 30 Percent by Migrating to AWS Graviton Processors

Instructure faced a spike in user traffic due to the quick and sudden spread of the COVID-19 pandemic and had to invest significant time and resources to scale to meet learners’ and institutions’ online learning needs. “Our business is highly dynamic in its scaling requirements,” says Zach Pendleton, chief architect at Instructure. “We scale down to almost nothing on a weekend, and then during an exam period or the beginning of a semester, we have dramatic jumps in load.” To curb costs as it scaled, Instructure investigated ways to approach its compute needs efficiently without compromising performance.

Instructure plans to migrate its remaining databases running on older instance types to AWS Graviton3 processors.
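Instructure's exact configuration is not shown in the article, but the kind of dynamic scaling described above is commonly expressed as an Amazon EC2 Auto Scaling target-tracking policy. A hypothetical sketch follows; the group name and target value are illustrative, not Instructure's settings:

```python
# Hypothetical Auto Scaling group name; Instructure's actual setup is not public.
ASG_NAME = "canvas-app-servers"

# Track average CPU across the group: Auto Scaling adds instances when
# utilization rises above the target and removes them when it falls below.
target_tracking_config = {
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization",
    },
    "TargetValue": 50.0,  # keep the fleet near 50% average CPU (illustrative)
}

def apply_policy(client) -> dict:
    """Attach the target-tracking policy to the Auto Scaling group."""
    return client.put_scaling_policy(
        AutoScalingGroupName=ASG_NAME,
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration=target_tracking_config,
    )

# Usage (requires AWS credentials):
# import boto3
# apply_policy(boto3.client("autoscaling"))
```

A target-tracking policy like this matches the "scale down to almost nothing on a weekend, dramatic jumps during exams" pattern without anyone tuning step thresholds by hand.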
The company is reinvesting its savings from Amazon EC2 into developing data services on AWS that give customers insight into at-risk students so that it can engage them proactively. To do so, Instructure expects to expand its use of Amazon Simple Storage Service (Amazon S3)—which offers industry-leading scalability—and add additional AWS services, such as Amazon Redshift, which offers cloud data warehousing, and Amazon EMR, a cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning.

Scaling Up to 30% While Reducing Costs by 20% Using AWS Graviton3 Processors with Instructure

On serverless AWS solutions, Instructure streamlines its infrastructure management to further optimize the way that it uses compute power. The company uses AWS Fargate, a serverless, pay-as-you-go compute engine for building applications. Instructure also uses AWS Lambda, a serverless, event-driven compute service, to run code for nearly any type of application or backend service.

About Instructure

Instructure provides learning management solutions for higher education and K–12 schools worldwide. The company offers various digital tools for collaborating through videoconferencing and online discussions. Students can manage their calendars, read course content, and submit assignments. Teachers can grade the work on the same platform and submit feedback.

Learn how education technology company Instructure improved throughput by up to 30 percent using AWS Graviton–based Amazon EC2 instances.

Because much of education moved to online learning in 2020, education technology company Instructure, the creator of Canvas LMS, adjusted its compute spend to scale its business efficiently, boosting performance and streamlining the online learning experience for millions of schools.
Amazon EC2 Auto Scaling helps you maintain application availability and lets you automatically add or remove EC2 instances using scaling policies that you define.

Amazon EC2 offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.
Securing Workforce Access at Scale Using AWS IAM Identity Center with Xylem | Case Study | AWS
Xylem has already migrated 15 products to the new solution and plans to have the process completed by early 2023. After that, the company plans to operationalize this approach to identities and use it for more AWS services. “The only way we’re going to keep building and growing as a company is to strengthen identity as our foundation, and that’s exactly what we did using AWS,” says Jacobs.

Xylem is a water technology company based in the United States that provides efficient, innovative, and sustainable technology solutions to businesses in more than 150 countries.

Josh Jacobs, Senior Manager for Global Security Operations, Xylem

Learn how Xylem, a leading water technology company, applies access controls for its workforce users as it accelerates AWS adoption using AWS IAM Identity Center.

Founded in 2011, Xylem provides smart water solutions—from water meters to leak detection services—to utility companies and other customers in 150 countries. When Xylem began to provide operational security controls across its cloud products, it discovered that identity credentials were not uniform across its 140 AWS accounts. When team members shifted roles, they needed to gain access to other accounts. To create a common identity and access framework enforceable across the company and its AWS accounts, Xylem decided to use AWS IAM Identity Center. “We have a consistent identity solution that we manage within any group, we’re able to audit access, and we can enforce consistent identity policies, multifactor authentication, password complexity and password rotation, and on and on,” says Josh Jacobs, senior manager for global security operations at Xylem.
“We’re able to do a lot with limited resources.”

Water technology company Xylem has adopted a multiaccount strategy to improve efficiency and security posture, using over 140 Amazon Web Services (AWS) accounts. Many of these accounts used native AWS Identity and Access Management (AWS IAM) to securely manage identities and access to AWS services and resources for individual accounts. As Xylem increased the number of AWS accounts to boost its business agility and innovation, the company was looking for a solution to consistently apply information security policies across these multiple accounts. Using AWS IAM Identity Center and AWS Organizations to centrally manage workforce access to multiple AWS accounts, Xylem could reduce employee onboarding time, improve its security posture, and achieve a comprehensive view of the security of its accounts.

Outcome | Expanding the Security Approach to More AWS Services

AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups.

With AWS Identity and Access Management (AWS IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
Opportunity | Using AWS IAM Identity Center to Improve Workforce Identity and Access Management in AWS

Securing Workforce Access at Scale Using AWS IAM Identity Center with Xylem

By using AWS IAM Identity Center, Xylem can provide workforce access at scale as it continues to accelerate cloud adoption and innovate solutions for customers. New business acquisitions can be assimilated into workforce access while consistently applying policies across multiple AWS accounts.

AWS IAM Identity Center (successor to AWS Single Sign-On) helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications.

The company began migrating workforce identities to AWS IAM Identity Center in 2021. These identities include the company’s data lake team, one of its most security-conscious development teams. The migration is going smoothly, with no downtime for Xylem products. The company also uses AWS Security Hub to automate AWS security checks and centralize security alerts. Xylem uses it to monitor data and security 24/7, improving its security posture. Xylem has sped up the onboarding of new employees to AWS; their identities are set up before they begin working, instead of days later. “Everybody at Xylem has an identity, and if they shift into a role where they will be using AWS, it’s essentially zero time to get the identity piece of that added,” says Jacobs. This improvement in identity management and access controls helps employees develop products faster, resulting in better time to market.
Solution | Benefiting from Multiaccount Identity and Access Management Using AWS

AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
SecurionPay | Amazon Redshift | Amazon QuickSight | Amazon Kinesis | AWS
The development platform is consistent and designed for 99.995 percent reliability, helping developers to test and build new services and address bespoke merchant requirements. “Using AWS, we can adapt and change our services fast,” says Jankowiak. “If any errors occur, we can fix them and roll out improvements immediately. This flexibility adds to our competitive advantage—the sky is the limit.”

Alerting Merchant Customers to Potential Fraud

Close cooperation with the AWS account team facilitated the development of an alerting system based on Amazon QuickSight. The system spots every abnormal behavior in the traffic and immediately notifies customers about the event. "This is exactly what we needed. We had an idea of how to do it, but AWS suggested we build a custom engine, which we did," says Szymon Święcki, DevOps engineer at SecurionPay. "Our workshops with AWS have been super helpful."

To implement the highest standards, SecurionPay has based its multi-layered approach to security on AWS. Using AWS Key Management Service (AWS KMS) made it easy to create and manage cryptographic keys, saving the IT team time on maintenance and backup tasks. The company has also needed less time to complete the Payment Card Industry (PCI) compliance audit process: reducing it from 3 days to half a day has freed up the team to focus on innovation.

Within 3 months of migrating to AWS, employees from across the business—from operations to sales—were using data-driven insights to make decisions. For example, the risk team can now easily drill down into the details of suspicious events without involving the data analytics team, speeding up time to resolution. They can also quickly spin up dashboards on topics such as merchants, regions, or traffic-per-card issuer. Reports that previously took hours are now almost instantaneous, delivering timely insights.
SecurionPay Manages Complex Online Payments, Scales to 300% Growth Using AWS

Released daily product updates while maintaining 99.995% uptime

SecurionPay runs an online credit card payment platform that handles 1 out of every 1,500 transactions worldwide for Mastercard and Visa. The company combines the latest technology with customer-centric user experience to create a product that is optimized to meet future needs. It facilitates complex payments for both low- and high-risk global merchants.

Since chargebacks, fraud, and failed payments can hamper revenue growth if not managed properly, the company has a security-first approach. Maintaining the highest possible level of data security for fast-increasing volumes of transactions is central to its business.

SecurionPay wanted to maintain its customer experience standards while rapidly growing its customer base. To do this, the company decided to draw on real-time insights into customer behavior. It used Amazon Redshift for fast, easy, and secure cloud data warehousing, and Amazon QuickSight, a cloud-based business intelligence (BI) tool, for creating dashboards.

Time-effectiveness goes hand in hand with cost reduction. Costs fell by up to 90 percent after SecurionPay began using AWS Lambda, a serverless, event-driven compute service, and Amazon Kinesis, which makes it easy to collect, process, and analyze real-time streaming data.

Customer Sales Rise By 22% Using AWS

SecurionPay has improved the efficiency of product development using AWS DevOps tools.
The team set up a continuous integration/continuous delivery (CI/CD) pipeline that delivers daily product updates, so customers always have access to the latest features.

Scaled to handle 25 million monthly transactions and 300% growth

Better Business Intelligence Using Amazon QuickSight

Highly effective anti-fraud features are essential to quickly spotting fraudulent charges and taking the necessary action.

Encouraging Innovation with 99.995% Uptime

SecurionPay built a payment platform relying on 60–70 AWS services. “Using AWS, we can scale to meet demand, and we were profitable within 6 months of launching the business,” says Lucas Jankowiak, CEO and co-founder at SecurionPay.

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning.

SecurionPay facilitates complex payments for global merchants regardless of whether the businesses are low- or high-risk. A reliable, secure, and scalable platform ensures the flawless processing of millions of transactions. It provides the flexibility to deploy new payment options at pace for SecurionPay's customers to increase their sales conversions. Backed by AWS, SecurionPay has scaled to meet 300 percent business growth, improved customers’ sales by 19 percent, and used data analytics to support smarter business decisions. SecurionPay supports flexible payment options such as one-click upgrades, offers, cancellations, and upsales for customers, while providing secure authentication.
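The case study does not show SecurionPay's code; as an illustrative sketch only, sending a transaction event into an Amazon Kinesis data stream with boto3 might look like the following. The stream name and event fields are invented for the example:

```python
import json

# Hypothetical stream name and event shape; not SecurionPay's actual schema.
STREAM_NAME = "transaction-events"

def build_record(transaction: dict) -> dict:
    """Serialize a transaction into the Data/PartitionKey shape that the
    Kinesis put_record API expects. Partitioning by merchant ID keeps each
    merchant's events ordered within a shard."""
    return {
        "StreamName": STREAM_NAME,
        "Data": json.dumps(transaction).encode("utf-8"),
        "PartitionKey": transaction["merchant_id"],
    }

record = build_record({
    "merchant_id": "m-1001",
    "amount": 49.99,
    "currency": "EUR",
    "status": "captured",
})

# Sending the record requires AWS credentials:
# import boto3
# boto3.client("kinesis").put_record(**record)
```

Downstream consumers (for example, a Lambda function or an analytics job feeding Redshift and QuickSight) can then process these events without the producers ever waiting on the analytics path.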
Drawing upon AWS services, SecurionPay provides a checkout process that is 2–4 minutes faster than previously, because customers avoid being redirected to third-party payment websites and fill out fewer forms. This improvement in convenience contributed to increased customer sales conversions by an average of 19 percent, and overall sales by 22 percent. “Passing the benefits of our secure and scalable service to our customers gives us a competitive advantage,” says Jankowiak.

About SecurionPay

SecurionPay turned to Amazon Web Services (AWS) to build a secure platform that can reliably process millions of concurrent transactions and support the company’s 300 percent year-on-year growth. It also developed a flexible architecture that promotes innovation so that it can offer new payment options to merchants to boost their sales conversions.

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application.

In turn, its merchant customers benefit from the comprehensive BI tool that allows them to easily group and filter transactions, giving them better information about their businesses.

Lucas Jankowiak, CEO and Co-founder, SecurionPay

“Using AWS, we can adapt and change our services fast. If any errors occur, we can fix them and roll out improvements immediately. This flexibility adds to our competitive advantage—the sky is the limit.”

Global merchants require scalable resources to ensure they have the capacity to meet variable buyer demand and process payments in a timely fashion.

SecurionPay provides a platform for online payments, serving global enterprises and mid-sized and small companies.
It supports a total of 160 currencies and 23 languages, making it an ideal service for cross-border transactions.
Security Posture Strengthened Using AWS Shield Advanced with OutSystems | Case Study | AWS
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced.

“Using AWS services, we reduced 2 hours of work to less than 5 minutes.”

Recognized by Gartner in 2021 as a leader in enterprise low-code application development platforms, OutSystems supports customers in a variety of industries, including customers managing business-to-business applications, business-to-employee applications, and business-to-consumer applications. Its customers’ applications have different usages and traffic patterns depending on the use case, making it challenging for OutSystems to manage the wide range of behaviors and security postures. Prior to using AWS services for a security solution, OutSystems supported two customers with their own custom security protection solution. However, this solution required a significant amount of manual effort from the company and didn’t offer protection at scale. Starting in 2020, OutSystems implemented a security solution using Firewall Manager, Shield Advanced, and AWS WAF—which helps protect web applications from common web exploits—to meet the varying needs of its customers because it had already built its application development platform using AWS services. “It was a natural choice for us because our product runs natively on AWS, and we have experience with it internally, so we could implement the security solution with less overhead,” says Igor Antunes, head of security architecture at OutSystems.
Opportunity | Using AWS Shield Advanced to Manage the Complexity of Security Solutions for OutSystems

Solution | Improving Response Times to Security Issues and Reducing Costs Using AWS Shield Advanced, AWS WAF, and AWS Firewall Manager

Using services like AWS Shield Advanced, a managed distributed denial-of-service protection service, OutSystems successfully scaled to manage the complexity of over 4,000 web application firewalls (WAFs) while improving the response time to security issues after finding a malicious indicator from approximately 2 hours to under 5 minutes. OutSystems paired Shield Advanced with AWS Firewall Manager, a security management service for centrally configuring and managing firewall rules across accounts and applications. Because Firewall Manager supports Shield Advanced policies, OutSystems used both services to accomplish its goal of managing the complexity of security solutions while improving response time.

Igor Antunes, Head of Security Architecture, OutSystems

Cost and Time Savings Achieved, Security Posture Strengthened Using AWS Shield Advanced with OutSystems

The security solution for OutSystems needed to support the complexity and large scale required by its customers. The company manages a large and growing number of application load balancers—over 4,000 as of 2022—and serves thousands of applications across all load balancers. To protect its customers across multiple geographic regions, OutSystems uses AWS WAF. “Using AWS services, we can manage the security posture of all customers from a central place by deploying rules that are specific to our technology and blocking malicious events,” says Antunes.
“We also have the granularity to address very specific challenges.” Using Firewall Manager, OutSystems can define rules while leaving room for local configuration options based on a country’s regulations or a company’s policies. For example, OutSystems can support configurations related to geo-blocking for individual customers in a specific environment while relying on a basic rule set for configurations that don’t vary across customers.

AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.

Using AWS services, OutSystems achieved significant time savings so that the company could reallocate resources to other projects. “Previously, an analyst and an operator would have to create the local WAF and deploy the rules with the solution when reacting to an event,” says Antunes. “Using AWS, we reduced 2 hours of work to less than 5 minutes.” This saved time is particularly impactful with the company’s ever-growing number of WAFs because it would be unsustainable to change rules manually for all the WAFs or to adapt the rules to a set of customers. If a cyber issue occurs, OutSystems can resolve it quickly because AWS Shield Advanced also provides early detection of possible distributed denial of service attacks and tight collaboration with the AWS response team.

Outcome | Continuing to Fine-Tune the Security Solution Using AWS Firewall Manager

OutSystems reduced its costs by 88 percent per month by upgrading to Shield Advanced. The company gains these significant cost savings on an ongoing basis despite its scale because it no longer needs to pay for each WAF or rule.
“Using AWS Shield Advanced and AWS Firewall Manager, we pay a fixed rate and get as much protection as we need,” says Antunes.

About OutSystems

OutSystems provides a high-performance, low-code application development platform that helps its customers develop applications quickly with minimal coding knowledge. Founded in 2001 in Portugal, OutSystems has become a global software vendor that supports 13 AWS Regions with offices around the world.

When deploying its security solution, OutSystems worked closely alongside AWS teams to address challenges and meet customer needs. OutSystems plans to continue implementing additional capabilities of AWS Firewall Manager to fine-tune its security solution and better protect its customers. “Throughout the full lifecycle, from the inception of an idea until the end, we always used AWS to get the right support at the right time,” says Antunes.

Learn how OutSystems in the software industry managed thousands of web application firewalls using AWS Firewall Manager.

AWS WAF helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.

When deploying the security solution, OutSystems also saved on implementation costs compared with the cost of a solution from another vendor because the company didn’t need to obtain additional resources or capacity above what it was already using internally. Additionally, by using the infrastructure of Firewall Manager for the deployment of its solution, OutSystems could focus on its own product instead of designing its security solution from scratch.
Throughout the process, OutSystems received support from the teams at AWS to manage the complexity of the solution. For example, when OutSystems exceeded AWS limits of internal APIs because of the scale of its security solution, the AWS WAF and Firewall Manager teams worked alongside the company to troubleshoot. “The teams at AWS were always available to work with us and provide guidance on the best practices for deploying this solution,” says Antunes.

As software vendor OutSystems grew its business, it needed a scalable security solution for its cloud service to further protect customers from cyber issues and simultaneously reduce operational overhead. In 2020, OutSystems looked to Amazon Web Services (AWS) for centralized security management so that the company could offer protection at scale while limiting manual interventions.
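OutSystems' actual rules are not published; as an illustration of the kind of per-customer geo-blocking configuration the case study describes, an AWS WAF (wafv2) rule that blocks requests from specific countries can be built as a plain dictionary. The rule name and country list here are hypothetical:

```python
# Hypothetical per-customer geo-blocking rule in AWS WAF (wafv2) rule format.
def geo_block_rule(blocked_countries: list, priority: int = 0) -> dict:
    """Build a WAF rule that blocks the given ISO 3166 country codes."""
    return {
        "Name": "customer-geo-block",
        "Priority": priority,
        "Statement": {
            "GeoMatchStatement": {"CountryCodes": blocked_countries},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "customer-geo-block",
        },
    }

# Example: block two countries for one customer's environment.
rule = geo_block_rule(["KP", "IR"])

# A centrally managed baseline plus this per-customer rule could then be
# attached to a web ACL (requires AWS credentials), e.g.:
# import boto3
# boto3.client("wafv2").update_web_acl(..., Rules=[rule])
```

Firewall Manager's role in the article's setup is to push a common baseline rule set everywhere while leaving room for local rules like this one, which is what keeps 4,000-plus WAFs manageable from a central place.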
Selecting the right foundation model for your startup | AWS Startups Blog
AWS Startups Blog

Selecting the right foundation model for your startup
by Aaron Melgar | on 22 JUN 2023 | in AWS for Startups, Generative AI, Thought Leadership

When startups build generative artificial intelligence (AI) into their products, selecting a foundation model (FM) is one of the first and most critical steps. A foundation model is a large machine learning (ML) model pre-trained on a vast quantity of data at scale, resulting in a model that can be adapted to a wide range of downstream tasks. Model selection has strategic implications for how a startup gets built: everything from user experience and go-to-market, to hiring and profitability, can be affected by selecting the right model for your use case. Models vary across a number of factors, including:

- Level of customization – The ability to change a model’s output with new data, ranging from prompt-based approaches to full model re-training
- Model size – How much information the model has learned, as defined by parameter count
- Inference options – From self-managed deployment to API calls
- Licensing agreements – Some agreements can restrict or prohibit commercial use
- Context windows – How much information can fit in a single prompt
- Latency – How long it takes for a model to generate an output

Following are some of the most impactful aspects to consider when selecting a foundation model to meet your startup’s needs.

Application-specific benchmarks

As startups evaluate the performance of different models for their use case, a critical step in the process is establishing a benchmark strategy, which helps a startup quantify how well the content that a model generates matches expectations. “There are a large number of models out there, ranging from closed source players…to open-source models like Dolly, Alpaca, and Vicuna.
Each of these models have their own tradeoffs — it’s critical that you choose the best model for the job,” explains Noa Flaherty, chief technology officer (CTO) and co-founder of Vellum . “We’ve helped businesses implement a wide variety of AI use cases and have seen first-hand that each use case has different requirements for cost, quality, latency, context window, and privacy.” Generalized benchmarks (such as Stanford’s Holistic Evaluation of Language Models ) are a great starting point for some startups because they help prioritize which foundation models to start experimenting with. However, generalized benchmarks may be insufficient for startups that are focused on building for a specific customer base. For example, if your model needs to summarize medical appointments or customer feedback, the model should be evaluated against how well it can perform these specific tasks. “To do custom benchmarking, you need a workflow for rapid experimentation – typically via trial and error across a wide variety of scenarios. It’s common to over-fit your model/prompt for a specific test case and think you have the right model, only for it to fall flat once in production,” Noa advises. Custom benchmarking may include techniques such as calculating BLEU and ROUGE scores ; these are two metrics that help startups quantify the number of corrections that are necessary to AI-generated text before giving it final approval for human-in-the-loop applications. Quality metrics and model evaluation are critical, which is why Noa founded Vellum in the first place. 
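Custom benchmarking of this kind usually starts with a simple overlap metric. The sketch below implements a stripped-down, unigram-only variant of ROUGE-1 F1 in plain Python; the reference text, model outputs, and model names are all invented for illustration, and a real evaluation would use an established BLEU/ROUGE library with stemming, tokenization, and multiple references.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a model output and a reference,
    in the spirit of ROUGE-1 (toy version, whitespace tokenization only)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Clipped unigram overlap: each reference token can be matched at most
    # as many times as it appears in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Score two hypothetical models' summaries against a human-written reference.
reference = "the patient reported mild headaches after starting the medication"
outputs = {
    "model_a": "patient reported mild headaches after starting medication",
    "model_b": "the visit went well overall",
}
scores = {name: round(rouge1_f1(out, reference), 3) for name, out in outputs.items()}
print(scores)
```

Run over a few hundred production-like examples rather than one, this kind of score makes the "trial and error across a wide variety of scenarios" workflow concrete: the model whose average score holds up across the whole evaluation set, not the one that wins a single test case, is the safer pick.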
This Y-Combinator backed startup focuses their product offerings on experimentation: Per Noa, “The more you can compare/contrast models across a variety of cases that resemble what you’ll see in production, the better off you’ll be once in production.”

Smaller, purpose-built models are on the rise

Once quality benchmarks have been established, startups can begin to experiment with using smaller models meant for specific tasks, like following instructions or summarization. These purpose-built models can significantly reduce a model’s parameter count while maintaining its ability to perform domain-specific tasks. For example, startup GoCharlie has partnered with SRI to develop a marketing-specific multi-modal model with 1B parameters. “One-size-fits-all models will never truly solve an end user’s needs, whereas models designed to serve those needs specifically will be the most effective,” explains Kostas Hatalis, the chief executive officer (CEO) and co-founder of GoCharlie. “We believe purpose-built models tailored to specific verticals, such as marketing, are crucial to understanding the genuine requirements of end users.” The open-source research community is driving a lot of innovation around smaller, purpose-built models such as Stanford’s Alpaca or Technology Innovation Institute’s Falcon 40B. Hugging Face’s Open LLM Leaderboard helps rank these open-source models across a range of general benchmarks. These smaller models deliver comparable benchmark metrics on instruction-following tasks, with a fraction of the parameter count and training resources. As startups customize their models for domain-specific tasks, open-source foundation models empower them to further customize and fine-tune their systems with their own datasets.
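One common way to make that fine-tuning cheap is low-rank adaptation, the idea behind adapter-style parameter-efficient methods: freeze the pre-trained weights and train only a small adapter added on top. The pure-Python toy below sketches just the parameter arithmetic; the layer width, adapter rank, and random initialization are arbitrary assumptions, and real fine-tuning would use an ML framework with gradient updates rather than lists of lists.

```python
import random

random.seed(0)

def matmul(A, B):
    """Plain-Python matrix multiply (lists of lists)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d, r = 64, 4  # hypothetical layer width and adapter rank

# Frozen pre-trained weight matrix W (d x d): never updated during fine-tuning.
W = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(d)]

# Trainable low-rank adapter: only A (d x r) and B (r x d) would receive
# gradients during fine-tuning.
A = [[0.0] * r for _ in range(d)]  # zero-initialized, so training starts from W
B = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(r)]

# Effective weight seen at inference time: W + A @ B.
delta = matmul(A, B)
W_eff = [[w + dlt for w, dlt in zip(wr, dr)] for wr, dr in zip(W, delta)]

frozen = d * d
trainable = d * r + r * d
print(f"trainable params: {trainable} of {frozen + trainable} "
      f"({100 * trainable / (frozen + trainable):.1f}%)")
```

Because the adapter starts at zero, the effective weights initially equal the pre-trained ones, and only about 11 percent of the parameters in this toy configuration would ever be updated; at realistic model sizes the trainable fraction is typically well under 1 percent.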
For example, Parameter-Efficient Fine-Tuning (PEFT) solutions from Hugging Face have shown how adjusting a small number of model parameters, while freezing most other parameters of the pre-trained LLMs, can greatly decrease the computational and storage costs. Such domain-adaptation-based fine-tuning techniques are generally not possible with API-based proprietary foundation models, which can limit the depth to which a startup can build a differentiated product. Focusing usage on specific tasks also makes the foundation model’s pre-trained knowledge across domains like mathematics, history, or medicine generally useless to the startup. Some startups choose to intentionally limit the scope of foundation models to a specific domain by implementing boundaries, such as Nvidia’s open-source NeMo Guardrails, within their models. These boundaries help to prevent models from hallucination: irrelevant, incorrect, or unexpected output.

Inference flexibility matters

Another key consideration in model selection is how the model can be served. Open-source models, as well as self-managed proprietary models, grant the flexibility to customize how and where the models are hosted. Directly controlling a model’s infrastructure can help startups ensure reliability of their applications with best practices like autoscaling and redundancy. Managing the hosting infrastructure also helps to ensure that all data generated and consumed by a model is contained to dedicated cloud environments which can adhere to security requirements set by the startup. The smaller, purpose-built models we mentioned earlier also require less compute-intensive hardware, helping startups to optimize unit economics and price performance. In a recent experiment, AWS measured up to 50% savings in inference cost when using ARM-based AWS Graviton3 instances for open-source models relative to similar Amazon Elastic Compute Cloud (EC2) instances.
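The price-performance tradeoff behind savings like these can be made concrete with a back-of-the-envelope calculation. Every number below is a hypothetical assumption for illustration, not an AWS price or benchmark result:

```python
def cost_per_million_requests(price_per_hour: float, requests_per_second: float) -> float:
    """Serving cost per one million inference requests on a single instance,
    assuming the instance runs fully utilized at the given throughput."""
    requests_per_hour = requests_per_second * 3600
    return price_per_hour / requests_per_hour * 1_000_000

# Hypothetical hourly prices and measured throughputs for two instance types.
x86 = cost_per_million_requests(price_per_hour=1.00, requests_per_second=50)
graviton = cost_per_million_requests(price_per_hour=0.80, requests_per_second=70)
savings = 1 - graviton / x86
print(f"x86: ${x86:.2f}/M req, Graviton: ${graviton:.2f}/M req, savings: {savings:.0%}")
```

Under these made-up figures, a cheaper instance with higher throughput cuts the cost per million requests by roughly 43 percent; the same two-line comparison, fed with real quotes and measured throughput for your own model, is worth running before committing to an inference fleet.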
These AWS Graviton3 processors also use up to 60% less energy for the same performance than comparable Amazon EC2 instances, which helps startups who are considering the environmental impacts of choosing power-hungry inference hardware. A study from the World Economic Forum detailed the energy consumption of data centers. Once considered an externality, environmental implications have risen to top of mind for many, and AWS enables startups to quantify their environmental impact through offerings such as Carbon Footprint Reporting, which helps companies compare the energy efficiency of different hardware selections.

Conclusion

Wherever your startup is in its generative AI journey—getting the infrastructure ready, selecting a model, or building and fine-tuning—AWS provides maximum flexibility for customers. Amazon Bedrock, a fully managed service, gives you access to foundation models from leading providers, including Amazon’s own Titan family of models, available via a fully managed API. Amazon SageMaker JumpStart is a self-service machine learning hub. It offers built-in algorithms, pre-trained foundation models, and easy-to-use solutions to solve common use cases for customers, like fine-tuning their models or customizing their infrastructure. Check out these generative AI resources for startups building and scaling on AWS. Need help deciding which model or solution to choose? Want to work with AWS to offer your own model or algorithm? Reach out to our team today!

Aaron Melgar
Aaron empowers the AI/ML Startups & Venture Capital ecosystem at AWS, focused on early stage company growth. He is a former founder, series-A product manager, machine learning director, and strategy consultant. He is a second-generation Latin American who loves tennis, golf, travel, and exchanging audiobook recommendations about economics, psychology, or business.
Shgardi Case Study.txt
Opportunity | Amazon EKS Auto-scaling Helps Shgardi Cut Infrastructure Costs by 40%

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that streamlines setup, operation, and management of message brokers on AWS.

Dahab says the migration has made Shgardi more efficient and placed it in a much healthier position than it was pre-pandemic. “We are going to continue using more AWS services,” he says. “This is helping us improve our market share in the MENA region and will improve our ability to expand country by country.”

The company migrated to Amazon Web Services (AWS) in 2021. Containerization on AWS made it easier for Shgardi to manage its underlying infrastructure, and instead focus on innovation and business development. It also deployed microservices, so its delivery platform could automatically scale to match demand, and it used several AWS services to improve the performance, security, and reliability of its platform.

The move to microservices and an auto scaling infrastructure has freed up Shgardi’s developers so they can focus on coming up with new ways to improve efficiency and customer experience, instead of maintaining servers. Previously, deploying updates to the platform, whether it was to fix bugs or add a new feature, would take up to a week. Dahab says that it now takes a few hours, which is a reduction of at least 70 percent.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Solution | Shgardi Increases Revenue and Conversion Rate Using Amazon Personalize

Shgardi Boosts Monthly Orders by 20%, Cuts Costs by 40%, and Prepares for Growth Using AWS

Tarek Dahab, chief technical officer (CTO) at Shgardi, explains that its previous infrastructure wasn’t designed to scale quickly. “During the COVID-19 pandemic, our traffic was increasing every day, so we kept ordering new servers,” he says. “But by the time they arrived and were set up, we needed more.” According to Dahab, in one year the infrastructure grew from a single dedicated server to 40 servers operating in clusters. “Our technical team was overwhelmed maintaining these servers, and they had no time for innovating or performing higher-value tasks,” he says.

Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.

Dahab says it took less than 2 weeks to fully deploy the service. So far, it has been a huge success. “In the 3 months since we used Amazon Personalize to build our recommendation engine, we have increased monthly orders by 20 percent and boosted the conversion of new visitors into customers by 30 percent,” he says. On the customer-facing side, Shgardi used Amazon CloudFront, a content delivery network (CDN) service built for high performance, security, and developer convenience. Dahab says Shgardi uses Amazon CloudFront to cache images and objects, so its customers have the best possible browsing experience.
Shgardi is a Saudi Arabia-based delivery service that operates in 80 cities and arranges deliveries of food, pharmaceuticals, groceries, parcels, and other goods. Shgardi was looking for a way to convert more of its website visitors to customers. It also wanted to increase the average value of its active customers’ shopping baskets. The company decided to use Amazon Personalize, which lets developers quickly build and deploy curated recommendations and intelligent user segmentation at scale using machine learning.

Migrating to AWS has not only improved the reliability and performance of Shgardi’s platform, it has also reduced costs, increased revenue, and improved the productivity of the company’s developers. This has resulted in the delivery service attracting external investment and being recognized in the Forbes Middle East and North Africa (MENA) list of the most-funded startups, for having raised more than $37 million. This has resulted in increased uptime and revenue, as well as savings of 40 percent on infrastructure costs, which were previously around $15,000 per month. It also reduced the time taken to deploy platform updates by 70 percent, which improved the productivity of Shgardi’s IT staff. Migrating to AWS has allowed Shgardi to be more flexible, agile, and reliable in serving its clients. Shgardi has about 600 employees, 70 of whom have a technical background in coding and engineering skills. The company used its in-house expertise to containerize its platform using Amazon Elastic Kubernetes Service (Amazon EKS) for auto scaling—a managed service to run Kubernetes in the AWS Cloud—and deployed hundreds of microservices. To aid connectivity in the backend and make the most efficient use of its new architecture, it used Amazon MQ, a fully managed service for open-source message brokers.
Shgardi also wanted to maximize the platform’s performance and minimize latency to ensure that visitors would have a good user experience. On the backend, the company used Amazon Relational Database Service (Amazon RDS)—a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. It also used Amazon Aurora, a database designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. “Using Amazon RDS and Amazon Aurora was the simplest way to enable our existing databases to scale on demand while also reducing management overheads,” says Dahab.

Outcome | Shgardi Attracts Millions in Investments and Eyes Expansion

Amazon MQ allows diverse applications on various platforms to communicate and exchange information. “These AWS services work together and allow us to easily manage our platform,” says Dahab. “When the traffic spikes, they auto scale. And compared to our previous infrastructure, we are spending about 40 percent less. This was the perfect solution for us.”

Before it migrated to AWS, Shgardi was unhappy that its on-premises platform experienced an uptime rate of just 90 percent. Dahab explains that, on top of problems faced during traffic spikes, the company had to plan regular maintenance downtime. “We had servers going down, the interconnect between servers going down, and we had to schedule regular maintenance windows—when customers wouldn’t be able to access the platform,” says Dahab.
“Since migrating to AWS, this is no longer an issue and we have had more than 99 percent uptime.”
Tarek Dahab, Chief Technical Officer, Shgardi

The lockdowns during the COVID-19 pandemic caused demand for Shgardi’s delivery services to increase exponentially. As a result, the company struggled to keep its platform from being overwhelmed and potentially crashing. The company would regularly add new dedicated servers to its on-premises infrastructure to try to match demand. This was a manual and time-consuming process and caused its infrastructure costs to continually increase.

Shgardi is a Saudi Arabia-based delivery service. The company started out delivering food when it launched in 2019. However, during the COVID-19 pandemic it diversified to include general deliveries, parcels, groceries, and pharmaceuticals. It now has over 3 million customers and has completed more than 5 million orders. But increased demand challenged Shgardi’s infrastructure capacity, which caused unplanned downtime and negatively impacted the customer experience.
Showpad Accelerates Data Maturity to Unlock Innovation Using Amazon QuickSight _ Case Study _ AWS.txt
In 2021, sales enablement solution company Showpad envisioned using the power of data to unlock innovations and drive business decisions across its organization. Showpad’s legacy solution was fragmented and expensive, with different tools providing conflicting insights and lengthening time to insight. The company decided to use Amazon Web Services (AWS) to unify its business intelligence (BI) and reporting strategy for both internal organization-wide use cases and in-product embedded analytics targeted at its customers.

“Amazon QuickSight has become our go-to solution for any BI requirement at Showpad—both internally and externally, especially when it comes to correlating data across departments and business units.”

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. With QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries.

Founded in 2011 and with offices around the world, Showpad provides a single destination for sales representatives to access all sales content and information, along with coaching and training tools to create informed, upskilled, and trusted buying teams. The platform also provides analytics and insights to support successful information sharing and fuel continuous improvement. In 2021, Showpad decided to take the next step in its data evolution and set forth the vision to power innovation, product decisions, and customer engagement using data-driven insights. This required Showpad to accelerate its data maturity by mindfully using data and technology holistically for its customers.

For the second work stream of in-product customer reporting, Showpad released its first version of QuickSight reporting to customers in June 2022.
“We went through user research, development, and beta tests in a span of 6 months, which was a big win for us,” says Minnaert. With the foundational architecture in place, shipping to a customer can happen in a few sprints, focusing on iterating and fine-tuning insights instead of solution engineering. The company can then follow up with tailor-made reporting for each customer using the same data so that it tells a consistent story.

The company already used AWS in other aspects of its business and found that using Amazon QuickSight would meet all its BI and reporting needs with seamless incorporation into the AWS stack. “We chose Amazon QuickSight because of its embedded analytic capabilities, serverless architecture, and consumption-based pricing,” says Minnaert.

Showpad built new customer embedded dashboards within Showpad eOS™ and migrated its legacy dashboards to Amazon QuickSight, which powers data-driven organizations with unified BI at hyperscale.

Jeroen Minnaert, Head of Data, Showpad

Opportunity | Using Amazon QuickSight to Streamline Data-Driven Decisions

Showpad users can quickly prototype reports in a well-known environment—building reports using QuickSight and then testing them with customers—and have increased dashboard development activity by three times across the organization.
“After we settle on reports or dashboards, it does not take much engineering effort to bring them to production,” says Minnaert. After a dashboard is agreed on, it can go through Showpad’s automated dashboard promotion process that can take an idea from development to production in weeks, not months. Showpad’s users and customers also benefit from performance gains with 10 times increased speed when using SPICE (Super-fast, Parallel, In-memory Calculation Engine), which is the robust in-memory engine that QuickSight uses. It takes only seconds to load dashboards. Using the serverless QuickSight, Showpad expects to see a three-times increase in projected return on investment in 2023. It can deprecate custom reporting, infrastructure, and multiple tools with the new data architecture and QuickSight. “The serverless model was also compelling because we did not have to pay for server instances nor license fees per reader. On Amazon QuickSight, we pay for usage. This makes it easy for us to provide access to everyone by default,” says Minnaert. And by providing dashboard and report building across 600 employees, including analysts and nontechnical users, Showpad reduced the time to build and deliver insights from months to weeks.

Solution | Architecting a Portable Data Layer and Migrating to Accelerate Time to Value

Founded in 2011, Showpad has offices around the world. It helps sales representatives share personalized content and deliver better buyer experiences, and it provides coaching and analytics insights to businesses.

By helping business users and experts rapidly prototype dashboards and reports to meet user and customer needs, Showpad uses the power of data to innovate and drive growth across its organization. “Amazon QuickSight has become our go-to solution for any BI requirement at Showpad—both internally and externally, especially when it comes to correlating data across departments and business units,” says Minnaert.
Outcome | Unlocking Innovation with Self-Service BI and Rapid Prototyping

After determining an approach and building the foundation, the team wanted to scale. But with 70 dashboards with over 1,000 visuals and over 1,000 tables ingesting data from more than 20 data sources, the team decided to prioritize the migration order. The company started with dashboards that had the fewest dependencies and worked up to customer success and marketing dashboards that combined product and engineering and revenue operations data. Showpad launched the first dashboard set in April 2022 and completed its internal BI migration by the end of 2022. As of January 2023, Showpad’s QuickSight instance includes over 2,433 datasets and 199 dashboards.

On the internal reporting front, the data team took a “Working Backwards” approach to make sure it had the right process before going all in with its existing dashboards. The company also reimagined its data pipeline and architecture, creating a portable data layer by decoupling the data transformation from visualization, machine learning, or one-time querying tools and centralizing its business logic. The portable data layer facilitated the creation of data products for varied use cases, made available within various tools based on the need of the consumer.

Showpad continues to expand in-product reporting while optimizing performance for an improved customer experience. Showpad hopes to further reduce the time that it takes to load a dashboard and make and ship a report to a customer. To make self-service even easier, Showpad will soon launch embedded Amazon QuickSight Q, which empowers anyone to ask questions in natural language and receive accurate answers with relevant visualizations that help them gain insights from the data.
After choosing QuickSight as its solution in November 2021, Showpad took on two streams of development: migrating internal organization-wide BI reporting and building in-product reporting using embedded analytics. Showpad worked closely alongside the QuickSight team for a smooth rollout.

But the company’s legacy BI solution and data were fragmented across multiple tools. “If each tool tells a different story because it has different data, we won’t have alignment within the business on what this data means,” says Jeroen Minnaert, head of data at Showpad. Consistency, ownership, and insufficient data access were also challenges for Showpad across its targeted user base due to a complex BI access process, licensing issues, and insufficient education. Showpad wanted to bring all the data into a unified interface, democratize that data, and drive and unlock innovation through advanced insights.
Sixth Force Solutions _ Amazon Web Services.txt
Amazon Elastic Compute Cloud (Amazon EC2). With this capability, Sixth Force customers can remotely access Prolaborate and Enterprise Architect on AWS. The cloud version of Enterprise Architect enhances security through AWS services such as the Amazon GuardDuty threat detection service and Amazon Web Application Firewall (AWS WAF).

To accommodate its customers’ requirements, Sixth Force decided to offer a cloud version of the Sparx Architecture Platform on Amazon Web Services (AWS). “We chose AWS for its global scale, ease of use, strong support ecosystem, and compliance and security capabilities,” says Nizam. “Additionally, by selecting AWS, we knew we could easily deploy and scale our solution across multiple geographies.”

Because of the agility and flexibility that comes with AWS, Sixth Force customers can deploy the new versions of Enterprise Architect and Prolaborate faster than before. “Previously, because of bureaucracy and multiple processes, it could take a large enterprise up to nine months to go live with the on-premises version of our software,” says Saleem. “Now, with AWS, that time is reduced to a few weeks at the most. The day a new version of the software is released, people can start streaming it immediately. This represents a real transformation for our customers.”

Nizam Mohamed, Founder, Prolaborate

Amazon AppStream 2.0 is a fully managed non-persistent desktop and application service for remotely accessing your work.

NICE DCV is a high-performance remote display protocol that provides customers with a secure way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions.
Customers in the Sparx Systems ecosystem who are adopting the cloud and streaming versions of Prolaborate and Enterprise Architect are experiencing increased reliability because of the underlying technology on AWS and are taking advantage of multiple Availability Zones. “Most of our customers are distributed across multiple cities, and they often struggled with latency and delays,” says Nabil Saleem, product manager for Sparx Systems Prolaborate. “Because of the high availability and reliability of AWS, those problems have become a thing of the past. Our solutions perform better and have decreased latency because of AWS, so we know Prolaborate users can collaborate easily, no matter where they are in the world.”

By using AWS, Sixth Force has quickly grown its customer base for the new cloud and streaming versions of its software. “We grew our cloud-hosted and SaaS versions of Prolaborate and Enterprise Architect from zero to more than 60 in less than a year since using AWS,” says Nizam. “We have also generated 150 percent revenue growth in the past year and a half. Much of this is due to the scalability and flexibility we have by running on AWS.”

Sixth Force also uses AWS to deliver a software as a service (SaaS) version of Prolaborate and Enterprise Architect, EA SaaS. Nearly 1 million people across the globe use Sparx Enterprise Architect (Enterprise Architect) every day to design and create software systems and business processes. Enterprise Architect is an integrated visual modeling and design tool offered by Australia-based Sparx Systems, a leader in architecture modeling tools.

Amazon AppStream 2.0 is a fully managed, non-persistent desktop and application service for remote access.
“This is a 20-year-old application with a very strong user base, and we are now bringing it to more users through this AWS-powered EA SaaS solution,” says Nizam.

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

For several years, Sixth Force has sought to respond to customer demands for a cloud version of the Sparx Architecture Platform. “Many of our customers had the on-premises versions of Enterprise Architect and Prolaborate and used dedicated resources to maintain data centers, roll out applications, and change management,” Nizam says. “These customers wanted to take advantage of the agility, cost savings, and scalability of the cloud.”

Sixth Force Solutions, based in India, provides Enterprise Architecture consulting for customers in a range of industries. A Sparx strategic partner, Sixth Force offers Prolaborate collaboration software and supports companies in deploying Sparx Enterprise Architect.

Creating Cloud and Streaming Versions of Enterprise Architect and Prolaborate

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

In the future, Sixth Force expects to roll out its Amazon AppStream–based solution to more enterprises in Europe and the US. Nizam concludes, “With the Amazon AppStream–based streaming solution, we can deliver greater scale while offering better collaboration capabilities.
We look forward to expanding this solution to give our remote workers worldwide the best possible tools.”

To learn more, visit aws.amazon.com/products/end-user-computing.

Sixth Force Solutions has complemented Enterprise Architect by offering Prolaborate, a sharing and collaboration software platform. Prolaborate integrates seamlessly with Enterprise Architect and gives software architects the ability to analyze, interact, and make key decisions based on Enterprise Architect model data. “Prolaborate and Enterprise Architect combined help architects create a digital architecture platform by leveraging model data to build dashboards and graphs,” says Nizam Mohamed, founder of Prolaborate. “As a result, users can gain business insights and share and collaborate more easily, no matter where they are located.”

Sixth Force customers are also lowering costs by implementing Prolaborate and Enterprise Architect on AWS. “A lot of enterprises spent a significant amount of money to manage data centers and infrastructure before our cloud offerings were available, but they no longer need to worry about those things,” Nizam says.

Growing Revenue by 150% on AWS

The company has seen a specific increase in cloud deployments for large banks, telecommunications firms, and manufacturing organizations in Europe and the US, most of which have security and compliance requirements that Sixth Force can help meet on AWS. “Many of our SaaS customers in particular are returning for renewals and asking for more advanced features,” says Nizam.
“This encourages us to continue working on innovative new offerings.”

Sixth Force Solutions Delivers Cloud Version of Enterprise Architect Software to Meet Customer Demand for Rapid Tool Deployment

About Sixth Force Solutions

Nearly 1 million people across the globe use Enterprise Architect. Since 2018, Sparx strategic partner Sixth Force has offered an AWS-powered cloud hosting service for the Sparx ecosystem, including Enterprise Architect, that runs on Amazon Elastic Compute Cloud (Amazon EC2). In November 2020, Sixth Force launched a newer SaaS streaming version of Enterprise Architect that extends EA SaaS. The solutions are streamed through web browsers powered by Amazon EC2 instances and NICE DCV, an AWS remote desktop and application streaming service. “We felt that offering a streaming solution on AWS would cater to our customers in a more customized way and give them more configuration capabilities,” Nizam says.

Helps employees collaborate reliably across the globe
SKODA Uses AWS to Predict and Prevent Production Line Breakdowns.txt
Envisioning the Future of MAGIC EYE and the ŠKODA Approach

When a single minute of lost production costs automotive manufacturers the revenue of one car, there’s no room for production downtime. To meet its production demands and avoid unnecessary revenue loss, ŠKODA AUTO (ŠKODA) knew it needed a way to prevent production line issues from occurring instead of just reacting to them.

With an eye toward improvement, ŠKODA assessed its existing production and maintenance processes and determined that its current reactive approach to assembly line disruptions was not meeting its needs. It needed a way to accurately predict potential problems to prevent breakdowns before they occur. Predictive maintenance leaves no room for failure and breakdown, making it a strong pillar for the ŠKODA maintenance strategy. Fortunately, using AWS, ŠKODA had the technology it needed to make—and scale—such a high-level process improvement. “ŠKODA is a big company with lots of processes and a very fragmented infrastructure, so we need to cooperate with a strong service provider,” says Milan Dědek, manager for predictive maintenance at ŠKODA. “AWS offers plenty of services not only for today but also for future projects.”

About ŠKODA

Reduces assembly line downtime

Adopting a Proactive Approach to Production Line Maintenance

AWS Services Used

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Increased staff productivity

Harnessing the Power of AI and Computer Vision

Benefits of AWS

Using a combination of AWS services and in-house technology, ŠKODA got to work developing MAGIC EYE, a new way to manage auto production, in 2020. MAGIC EYE computer vision technology collects, monitors, and analyzes equipment data to identify vulnerabilities and calculate different breakdown scenarios before they create a problem. “The aim of our department is to identify these weak places and find a solution to limit or remove the breakdown,” says Dědek. “MAGIC EYE is one of the most important parts of our approach because it’s directly on the main production line.”

Optimized production costs

For flexible and scalable compute, MAGIC EYE uses Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. The solution also uses additional AWS services, like Amazon Relational Database Service (Amazon RDS), which provides users with the ability to set up, operate, and scale a relational database in the cloud with just a few clicks. For visualization of the MAGIC EYE solution, the company uses Amazon QuickSight, which helps everyone in an organization to understand data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning. The company’s strategic combination of cost-efficient AWS services and onsite expertise has set ŠKODA up for increased cost savings—whether from reduced downtime, faster maintenance, or overall increased efficiency per circuit—across the assembly line.
In addition to optimizing costs, ŠKODA can scale with ease to meet fluctuating production needs using AWS. With this scalability, ŠKODA will be able to develop MAGIC EYE into an even more powerful standard solution that can eventually be rolled out to more of the Volkswagen Group’s factories.

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

To address this need, ŠKODA turned to Amazon Web Services (AWS) and used AWS Internet of Things services to create MAGIC EYE, an innovative manufacturing solution that works to prevent issues and reduce costly and avoidable downtime.

Facilitated a predictive approach to production line maintenance

For the Volkswagen Group as a whole, MAGIC EYE is one part of an ambitious long-term plan for improving production processes, increasing productivity, and optimizing cost savings. It’s the first stage in what will be an industry-wide shift to replace reactive production strategies with a more effective, proactive approach. “The flexibility and potential to further roll out MAGIC EYE beyond the production line is definitely important to us,” says Dědek. “There’s no place for failure or breakdown. I think this is the mantra for all production. In the long term, this approach is a good investment.”

Czech auto manufacturer ŠKODA operates under the Volkswagen Group umbrella. Its automobiles are sold in over 100 countries, and the global demand for these vehicles leaves little room for production stalls. Every vehicle not produced costs auto manufacturers like ŠKODA thousands of dollars in lost revenue, so a continuous production line is key for keeping production moving quickly and efficiently and driving revenue generation.

“ŠKODA is a big company with lots of processes and a very fragmented infrastructure, so we need to use a strong service provider.
AWS solutions offer much more potential for us to grow.”

ŠKODA is a Czech automobile manufacturer headquartered in Mladá Boleslav, Czech Republic. It is part of the Volkswagen Group.

ŠKODA’s MAGIC EYE solution uses six cameras mounted on a conveyor frame to monitor equipment and reach places human operators can’t access with ease. In the process of manufacturing electric vehicles, the increased weight of the battery puts additional pressure on the belts, which also requires more monitoring. In the amount of time it takes a car to move through the ŠKODA production line, these cameras collect nearly 450,000 photos. The cameras connect to a powerful computer on the assembly line frame, where 10 artificial neural networks collect and analyze the photos. The results are sent directly to the cloud and stored using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere.

If MAGIC EYE detects an irregularity, like dirt in the power line area, loose or cracked bolts, or aluminum track damage, it alerts the maintenance operator, who then decides the best approach to take, such as remedial action or scheduling future repair work during planned downtime. This process is a major shift from ŠKODA’s previous reactive approach, when equipment was only checked during scheduled inspections or when a malfunction became significant enough to impact the assembly line. By then, production could be stalled for minutes, hours, or even days, depending on the problem. Using MAGIC EYE, maintenance operators can see potential concerns in advance and create the best course of action.
“With enough data, I’m able to predict when failures could come and the percentage of potential problems,” says Dědek. MAGIC EYE’s neural networks can now recognize a total of 14 defect types and 178 classes, including several subcategories, positioning it to detect hundreds of different scenarios and conditions.
SmartSearch-case-study.txt
On-demand scalability

To accelerate the migration, SmartSearch adopted AWS Application Migration Service (CloudEndure Migration), which minimizes time-intensive, error-prone manual migration processes by automatically converting source servers from physical, virtual, and cloud infrastructure to run natively on AWS. In only 6 months, the company replicated its servers on AWS without disruptions to its clients. Since completing the first phase of its migration, SmartSearch has achieved vastly improved performance.

Migrating to AWS to Support Growth and Improve System Performance

SmartSearch Completes a Seamless Migration to the Cloud Using AWS Application Migration Service

About SmartSearch

SmartSearch knew that hosting its system on AWS would both reduce the burden of data center maintenance and unlock key performance improvements. It chose to migrate its entire system to AWS. “We compete with the best in our industry, and we want our customers to have access to the most stable platform,” says Morris. “We evaluated everything and realized that AWS was the best choice for us. So, we made the decision to go all in.”

The teams then began to replicate SmartSearch’s Microsoft SQL Servers on the cloud using AWS Application Migration Service. By June 2021, SmartSearch had completed the first phase of its migration with virtually no disruption to its clients or system downtime. In fact, the cutover window took only 9 hours and was scheduled over a weekend. “In the war on talent, staffing companies sometimes have minutes to submit a resume, satisfy their customer, and gain a huge commission,” says Morris. “Our primary goal was to minimize customer impact due to a migration. RedNight Consulting partnered with us to develop a plan, and we carried it out flawlessly.
The migration to AWS was beautiful.”

Nanoseconds to spin up new servers

As a provider of digital recruiting and staffing solutions, SmartSearch knows that performance and resilience are critical components for its software environment. Reliability and uptime are key for clients to perform at a high level, capturing lucrative commissions and valuable contracts. To continue to meet client expectations and improve system performance, the software company chose to migrate its self-managed data center to Amazon Web Services (AWS).

On the advice of its parent company, SmartSearch engaged RedNight Consulting, an AWS Partner, to accelerate the migration. RedNight Consulting has significant technical expertise on AWS and worked with SmartSearch to create a comprehensive migration strategy. “RedNight Consulting recommended that we completely recreate our network on AWS first and then optimize it,” says Morris. In January 2021, the teams set out to duplicate SmartSearch’s environment on AWS, with a goal to complete the project in 6 months.

9-hour cutover window

To learn more, visit aws.amazon.com/application-migration-service.

Using AWS Application Migration Service to Minimize Downtime and Customer Disruptions

When the new system went live, SmartSearch saw immediate improvements. “The performance that we got on AWS from day one was breathtaking,” says Morris. “We didn’t realize how much more headroom AWS would provide out of the gate.” Since the migration, SmartSearch customers have expressed great satisfaction with the system’s performance and reliability. SmartSearch also uses Amazon CloudWatch, a monitoring and observability service, to monitor its system. Using this tool, the SmartSearch IT team can quickly identify and resolve potential performance issues before they affect the client experience.
SmartSearch is a software company that develops solutions for the staffing and recruiting industry. Global clients rely on SmartSearch’s comprehensive talent acquisition tool to centralize sourcing, hiring, and applicant tracking activities.

Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.

First, SmartSearch and RedNight Consulting completed a proof of concept that identified components that SmartSearch needed to adjust prior to the migration. Based on these findings, the teams performed a domain update, simplified the network architecture, and decommissioned servers that were no longer in use.

Now that it has duplicated its entire environment on AWS, SmartSearch will continue to modernize its infrastructure. In particular, it is in the process of migrating from its SQL Servers to Amazon Aurora, a relational database management system built for the cloud with full MySQL and PostgreSQL compatibility. “We will see meaningful cost and performance improvements by migrating to Amazon Aurora,” says Morris. SmartSearch is also exploring serverless solutions like AWS Lambda, a serverless, event-driven compute service.

Founded in 1986, SmartSearch provides talent acquisition software that centralizes sourcing, recruiting, applicant tracking, and hiring activities. To power its service, the company had previously self-managed an on-premises data center. Improving performance or increasing memory was a costly, time-consuming experience for SmartSearch.
“We were successful hosting our own data center for years, but as we prepare for rapid growth and acceleration, we want to invest our time and resources in the product and customer needs,” says L. J. Morris, president and chief technology officer of SmartSearch. “By migrating to AWS, we can focus on building great products, which is what we do best.”

SmartSearch will continue to use AWS to deliver high-performing services to its clients. “Using AWS Application Migration Service, we duplicated an aging system and completely recreated that network in the cloud,” says Morris. “We couldn’t have accomplished this without the support of AWS.”

To comply with regulations for its global clients, SmartSearch can quickly launch its environment in new AWS Regions, which are physical locations where AWS clusters data centers. “For General Data Protection Regulation compliance, we were able to power up a new instance of our network in Germany,” says Morris. “This duplication took a matter of weeks on AWS but would have been a yearlong project on premises.” Now, SmartSearch can seamlessly grow alongside its customers and configure its system to meet their evolving technical requirements.

Access to modernization tools

SmartSearch now powers its software environment using virtual servers on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. It can spin up new Amazon EC2 instances in nanoseconds when it needs additional capacity and can scale back servers when they are no longer needed. This on-demand scalability saves time and opens opportunities for innovation among its IT team. “We promoted our IT director to director of operations, which would not have been possible previously,” says Morris.
“This is a testament to the fact that he doesn’t have to focus solely on our network since the migration.”

AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers to run natively on AWS. It also simplifies application modernization with built-in optimization options.

Continuing to Optimize and Improve Software Systems Using Amazon Aurora

No client disruption during migration
Snap optimizes cost savings with Amazon S3 Glacier Instant Retrieval _ Snap Case Study _ AWS.txt
Solution | Saving Tens of Millions on Infrastructure and Improving Visibility into Object Storage

Migrating Snap’s content to Amazon S3 has also improved operations and visibility. Using Amazon S3 Storage Lens, a feature that delivers organization-wide visibility into object storage usage, the company has better insight into what it’s storing so that it can make more informed, data-driven decisions. Snap also migrated to AWS to scale its infrastructure to support its growth: the amount of content that it stores has grown by 5–10 percent each year. Meanwhile, Snap transitioned other parts of its infrastructure from its previous monolithic architecture to one based on microservices to host many of the services that powered its app. To accomplish this, it turned to Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises. “We worked extensively with the AWS team to migrate some of our features and components to microservices on AWS,” says Manoharan. Each microservice can be deployed in multiple Regions, simplifying the management of its infrastructure. As a result, Snap saw a 20–30 percent reduction in download latency in certain Regions for refreshing feeds, downloading media, and doing near-real-time communications.

As Snap’s storage needs increased, the company needed to optimize storage without diminishing performance or compromising user experience. To achieve this, Snap migrated its data from another cloud provider to AWS and used Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance.

The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us.
It was a seamless experience for end users, and we had no production issues during the entire migration.”

Outcome | Gaining Insights on AWS to Prioritize Business Needs

Snap migrated more than 2 exabytes of data—roughly equivalent to 1.5 trillion media files—seamlessly to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA. “The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us,” says Manoharan. “It was a seamless experience for Snapchatters, and we had no production issues during the entire migration.” As a result of the migration, the company saved tens of millions of dollars on storage. Snap has configured Amazon S3 in 20 AWS Regions around the world so that customers anywhere can retrieve data in milliseconds. The AWS Global Infrastructure is the most secure, extensive, and reliable global cloud infrastructure for a business’s applications. The global reach of AWS lets Snap store media closer to the place where Snapchatters are creating it for optimal performance. Snap is also able to deliver content efficiently using Amazon CloudFront, a content delivery network service built for high performance, security, and availability. “We’ve been able to off-load all of the regionalization work and costs to AWS so that we can focus on developing new features,” says Manoharan. As a result, Snapchat continues to meet its quarterly cost-optimization goals.

In 2016, Snap migrated its data to AWS. “We chose to migrate to AWS because of its global reach, excellent performance, and competitive pricing that, in turn, gave us the ability to reinvest in our business,” says Vijay Manoharan, manager of the media delivery platform team at Snap.
Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds.

AWS Services Used

In 2017, Snap migrated one of the app’s most central features—Snapchat Stories—to Amazon DynamoDB, a fully managed, serverless, NoSQL database designed to run high-performance applications at virtually any scale. Using Amazon DynamoDB, the company experienced greater than 99.99 percent availability and can better manage the metadata associated with customers’ photos and videos. The company estimates that it has added 200 million daily active users since 2016 and has dramatically improved its ability to grow and innovate on AWS.

To optimize the cost of storing permanent content, Snap adopted Amazon S3 Glacier Instant Retrieval, which is designed to deliver low-cost storage for long-lived data that is rarely accessed. By using Amazon S3 Glacier Instant Retrieval for its long-term, rarely accessed media files, Snap is saving tens of millions of dollars while delivering the same performance and powering new business opportunities, such as innovative app features and new hardware products.

About Snap Inc.

On AWS, Snap is ready to handle more growth and roll out innovative features in a way that’s both cost efficient and delivers a great user experience. “By gaining new insights on AWS,” Manoharan says, “we can strike the right balance between further reducing costs and maintaining performance.”

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in AWS and on-premises data centers.

Snap Inc. is a camera company that aims to improve the way that people live and communicate through Snapchat, its photo- and video-sharing app, and through its hardware products designed to make capturing and sharing media easier.

Snap Optimizes Cost Savings While Storing Over 1.5 Trillion Photos and Videos on Amazon S3 Glacier Instant Retrieval

Snap Inc. (Snap) builds the popular visual messaging app Snapchat, which enhances relationships with friends, family, and the world. More than 363 million daily active users use Snapchat to share and save photos and videos. Though it started with a focus on ephemeral content, such as photos that would disappear after a few seconds, the app has become a place for Snapchatters—as Snapchat users are called—to store media and memories long term, if they choose.

Opportunity | Optimizing Storage by Migrating to AWS

200 million daily active users

Saved tens of millions of dollars

Snap had been storing saved media on Amazon S3 Standard-Infrequent Access (S3 Standard-IA), a storage class for data that is infrequently accessed (once every 1–2 months) but requires rapid access when needed. With the launch of Amazon S3 Glacier Instant Retrieval in November 2021, the company realized that it could save even more on costs with virtually no impact on performance. The Snap team even influenced the development of this archive storage class by providing feedback and collaborating with the Amazon S3 team as the storage class was being designed. To determine if Amazon S3 Glacier Instant Retrieval delivered a lower total cost than Amazon S3 Standard-IA, Snap began by analyzing the access patterns of its data.
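This kind of comparison comes down to a simple trade-off: Glacier Instant Retrieval charges less per GB stored but more per GB retrieved than S3 Standard-IA. A toy cost model makes the break-even point concrete. The per-GB prices below are illustrative round numbers, not figures from the case study or current AWS pricing, so treat this as a sketch of the analysis rather than the analysis itself.

```python
# Illustrative break-even model for comparing two S3 storage classes.
# Prices are hypothetical placeholders that only mirror the relative
# pricing shape (Glacier Instant Retrieval: cheaper storage, pricier
# retrieval); check current AWS pricing before drawing real conclusions.

STANDARD_IA = {"storage_per_gb_month": 0.0125, "retrieval_per_gb": 0.01}
GLACIER_IR = {"storage_per_gb_month": 0.004, "retrieval_per_gb": 0.03}


def monthly_cost(pricing, gb_stored, gb_retrieved_per_month):
    """Total monthly cost for one class: storage plus retrieval."""
    return (pricing["storage_per_gb_month"] * gb_stored
            + pricing["retrieval_per_gb"] * gb_retrieved_per_month)


def cheaper_class(gb_stored, gb_retrieved_per_month):
    """Name the cheaper class for a given access pattern."""
    ia = monthly_cost(STANDARD_IA, gb_stored, gb_retrieved_per_month)
    gir = monthly_cost(GLACIER_IR, gb_stored, gb_retrieved_per_month)
    return "GLACIER_IR" if gir < ia else "STANDARD_IA"


# Data read roughly once per quarter (~1/3 retrieved each month) favors
# Glacier Instant Retrieval; data read monthly favors Standard-IA.
print(cheaper_class(gb_stored=100, gb_retrieved_per_month=33))   # GLACIER_IR
print(cheaper_class(gb_stored=100, gb_retrieved_per_month=100))  # STANDARD_IA
```

With these placeholder prices the break-even sits at retrieving roughly 40 percent of the data per month, which is consistent with the case study's observation that content accessed about once per quarter is cheaper in the archive class.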
This analysis showed that using Amazon S3 Glacier Instant Retrieval would reduce costs because the storage class is ideal for data that needs immediate access but is only accessed once per quarter. So, Snap began migrating to the storage class in March 2022 using Amazon S3 Lifecycle policies. By June 2022, Snap had migrated all existing content and was storing all new content in Amazon S3 Glacier Instant Retrieval.

Snap plans to continue looking for opportunities to achieve further cost savings while focusing on innovation. “The AWS team provided us with tremendous support,” says Manoharan. “That commitment has really helped us prioritize our business needs.”

Snap’s needs accelerated in 2016 after the launch of Snapchat Memories, a feature that automatically archives media and resurfaces it over time. “Snapchat Memories is our predominant use case for storing media for long periods,” says Manoharan. Snapchatters might view this content for a few days and then not view it again for months or years, so the company wanted to optimize its storage on AWS for further cost savings.
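The mechanics of a Lifecycle-driven migration like the one described above are worth a sketch: an S3 Lifecycle rule transitions matching objects into the `GLACIER_IR` storage class. The bucket name and prefix below are placeholders, and this is a minimal illustration of the API shape, not Snap's actual configuration.

```python
# Minimal sketch of an S3 Lifecycle rule that transitions objects to
# S3 Glacier Instant Retrieval. The rule dict follows the shape expected
# by the put_bucket_lifecycle_configuration API; bucket and prefix are
# hypothetical placeholders.

def glacier_ir_transition_rule(prefix="", days=0):
    """Build a lifecycle rule moving matching objects to GLACIER_IR."""
    return {
        "ID": "transition-to-glacier-instant-retrieval",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            {"Days": days, "StorageClass": "GLACIER_IR"}
        ],
    }


lifecycle_config = {"Rules": [glacier_ir_transition_rule(prefix="memories/")]}

# Applying it would look like this (requires boto3 and AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-media-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )
```

Once such a rule is in place, S3 transitions both existing and newly written objects under the prefix automatically, which is what makes a multi-exabyte migration possible without any per-object scripting.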
Software Colombia and AWS Team Up to Create Powerful Identity Verification Solution _ Software Colombia Case Study _ AWS.txt
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.

Learn how Software Colombia builds on Amazon Web Services (AWS) to transform the identity management landscape.

“AWS and our new eLogic biometrical solution helps us reduce fraud and risk by 95%, while making our product more inclusive and accessible.”

Improved security for its customers and digital processes

92% reduction in overall identification and onboarding process

Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.

Software Colombia now mitigates identity spoofing attacks by 95 percent and has reduced the time end users spend onboarding into systems and platforms by 92 percent. This enhances the user experience in the authentication process and enables more secure electronic communication channels that organizations can use to quickly and safely distribute products and services.

Software Colombia, an organization founded and headquartered in Bogotá, Colombia, specializes in the virtualization of procedures, electronic invoices, digital signatures, chronological stamping, and applications of PKI technology for customers worldwide. Software Colombia innovates in digital signatures, authentication, and e-commerce solutions with the highest quality standards for its customers, and its mission is to become the leading digital verification and authentication company in the region by 2025.

With Amazon Cognito, you can add user sign-up and sign-in features and control access to your web and mobile applications.
Outcome | Enhancing Identity Verification with Face Liveness Detection

In the modern business environment, identity management has become a vital concern for enterprises that conduct digital transactions. With the proliferation of online platforms and the need to safeguard sensitive data from malicious actors, companies require robust solutions to manage user identity securely. Software Colombia needed an efficient, accurate, and robust biometric facial recognition solution capable of verifying user identity by using advanced algorithms to analyze facial features and match them against existing records. The solution would be used to issue X.509 digital certificates, to secure the signature of documents online, and to protect other important web transactions. Such a solution would help Software Colombia and its customers reduce the cost and risk of fraud in business-critical processes.

95% reduction in identity spoofing attacks and risk

Designed and prototyped an identity verification solution in 4 weeks

Increased speed and accuracy to prove a person’s identity in minutes regardless of location

Opportunity | Increasing Accuracy while Reducing Costs with Face Identity Verification

Solution | Expanding Software Colombia Solutions with Machine Learning

Software Colombia is a top-tier software development company based in Bogotá, Colombia, providing cutting-edge technology solutions globally. The company has a team of skilled experts in machine learning (ML), artificial intelligence (AI), software development, mobile app development, web development, cloud computing, and big data. It has completed over 300 successful projects for clients globally in industries including healthcare, finance, logistics, and education.
The company’s focus on innovation, quality, and client satisfaction has earned it recognition as a top software development company in Colombia.

Alex Chacón, CEO, Software Colombia

Software Colombia Creates a Powerful Identity Verification Solution Using AWS

AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.

About Software Colombia

Software Colombia’s new solution, eLogic Biometrics, was designed and prototyped with the AWS Envision Engineering team and mitigates identity spoofing attacks and risk by 95 percent through a biometric face recognition and authentication mechanism, regardless of whether the user provides the image through a phone or another camera. Additionally, the overall identity verification, authentication, and onboarding time for new customers was reduced by 92 percent, enhancing the user experience in the authentication process and enabling more secure electronic communication channels that organizations can use to distribute products and services. eLogic Biometrics was developed with a serverless architecture, using AWS services such as Amazon Cognito, Amazon SQS, and Amazon Textract for document processing. Software Colombia deploys the solution with AWS Amplify, which supports the new Amazon Rekognition Face Liveness API.
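The case study doesn't show code, but the server side of a Face Liveness check involves two Rekognition calls: one to create a session whose ID is handed to the front-end camera component (such as the Amplify FaceLivenessDetector), and one to fetch the verdict after the user completes the challenge. The sketch below assumes that flow; the Rekognition client is injected so the logic can be exercised without AWS credentials, and the 90-percent confidence threshold is an illustrative choice, not a value from the case study.

```python
# Sketch of the server-side Face Liveness flow. In production the client
# would be boto3.client("rekognition"); here it is injected so the flow
# can be exercised with a stub. Threshold is an assumed example value.

LIVENESS_THRESHOLD = 90.0


def start_liveness_check(rekognition):
    """Create a liveness session; the SessionId is passed to the
    front-end component that runs the camera challenge."""
    response = rekognition.create_face_liveness_session()
    return response["SessionId"]


def is_live(rekognition, session_id):
    """After the user completes the challenge, fetch the verdict and
    apply the application's confidence threshold."""
    result = rekognition.get_face_liveness_session_results(
        SessionId=session_id)
    return (result["Status"] == "SUCCEEDED"
            and result["Confidence"] >= LIVENESS_THRESHOLD)
```

Injecting the client keeps the decision logic (status check plus threshold) testable in isolation, which matters when the threshold encodes a fraud-risk policy rather than an SDK default.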
Spacelift Case Study.txt
Spacelift Reduces Time Spent on Cloud Management by 90% Using AWS
2022

Overview
Spacelift helps businesses to easily set up and manage complex cloud environments, so they can do more with fewer team members. Its platform combines continuous integration and deployment (CI/CD) processes to manage infrastructure as code (IaC). This speeds up code development and increases the efficiency of workflow management by reducing error rates and automating key manual tasks. Using AWS, Spacelift has helped customers like Checkout.com and Kin to cut down on repetitive infrastructure maintenance tasks by 90 percent. Automating security and data privacy configurations means customers reduce the time spent on these issues by a factor of 10.

Almost half of IT recruiters worldwide report difficulties in finding qualified developer candidates. Fast-growing startup Spacelift addresses this shortage of technical staff by helping businesses do more with the DevOps and engineering talent they have.

About Company
Spacelift offers a collaborative platform to manage cloud infrastructures and services. Its platform uses continuous integration and deployment (CI/CD) processes and supports infrastructure as code management tools to speed runtime configuration, version management, and state management. It has 40 employees and is based in Poland and the US.

Based in Silicon Valley and Poland, Spacelift has created a platform that simplifies the management of complex cloud environments. That means IT teams can focus on creating innovative products, rather than maintaining infrastructure. The approach has proved popular and spurred the company's growth from 1 to 40 employees over 2 years. To ensure high levels of reliability, security, and compliance for its platform, Spacelift turned to Amazon Web Services (AWS). Using AWS, the startup has helped customers such as Checkout.com and Kin cut down on the time spent on repetitive infrastructure maintenance tasks by 90 percent. For example, by automating security and data privacy configurations, the company has reduced the time needed to handle these issues by a factor of 10 compared to doing the work manually.

Results
- Reduced customers' repetitive development tasks by 90%
- Cut down on security and compliance issues by a factor of 10
- Sped up cloud environments' configurations by 300%
- Built platform on AWS in 4 months—half the expected time

"We've moved so fast thanks to help from the AWS support teams and the AWS Activate program. We were able to quickly verify product assumptions and the support team helped us to get key functionalities right."
— Marcin Wyszynski, Founder and Chief Product Officer, Spacelift

Getting Up and Running on AWS in Half the Expected Time
The company built its system on AWS from day one, and was up and running in just 4 months, twice as fast as it had estimated it would take. "We moved so quickly thanks to help from the AWS support teams and the AWS Activate program," says Marcin Wyszynski, founder and chief product officer at Spacelift. "We were able to quickly verify product assumptions and the support team helped us to get key functionalities right."

Spacelift chose AWS to ensure ease of use for its customers, as the majority of them were already using AWS. "All of our customers use AWS in a sophisticated way, so the fact that we use the same technologies and tools means it's easy for them to get set up with our platform too," says Wyszynski.

Configuring Cloud Environments 3x Faster
Spacelift's platform combines continuous integration and deployment (CI/CD) processes to manage infrastructure as code (IaC), so customers can easily and quickly set up and maintain cloud architectures. Using the Spacelift platform, customers can replicate code with common open-source IaC tools instead of configuring new cloud environments manually.

This reduces the complexity of the infrastructure so it can be managed with fewer DevOps engineers. It also means that new environments needed for startups or a large company opening a new office, for instance, can be set up quickly, easing corporate expansions. Automation also reduces error rates compared to manual configurations, so its customers' platforms are more reliable for their end customers.

Developers working for Spacelift's customers can set up cloud environments immediately, even if they have minimal cloud experience, because Spacelift provides an easy-to-use interface to the underlying AWS setup. This means customers require fewer senior-level IT staff or can increase the productivity of current developers, so they have more time to create innovative products and features. "When new developers join a company, they can spin up all the infrastructure they need in seconds with little product knowledge, and then quickly minimize error risks and correct any misconfigurations," says Wyszynski. "Through automation, our customers' DevOps teams can configure cloud environments 3 times faster than doing the same work manually."

Spacelift also cuts down on the time required from developer teams to fix code issues when replicating code. "Using AWS, we can simply roll back to a reset with just 3 clicks and minimize the engineers' involvement, if there are any code errors," says Wyszynski. "This is one of the biggest advantages of having a highly available system."

To ensure that its platform is flexible and able to scale rapidly, Spacelift uses AWS Lambda, which allows users to run code without thinking about servers or clusters. This helps the company deal with unpredictable workload demand from customers. "A single customer might launch a thousand tasks that need addressing, and then have nothing to process for the next hour," says Kuba Martin, software engineer at Spacelift. "Using AWS Lambda, we can quickly spin up compute capacity to deal with incoming requests, so they can be resolved quickly and tasks don't accumulate. This means our customers experience reliable performance—and they stay happy."

Easing Communication for Hybrid Environments Using AWS IoT Core
Spacelift supports customers working in pure cloud environments as well as those running hybrid models that need to store certain data on premises to comply with security or privacy regulations. To facilitate information flow between the cloud and the on-premises system, Spacelift uses AWS IoT Core, which easily and securely connects devices to the cloud. "With a direct cloud connection to a customer's IT environment, we can easily route communications," says Wyszynski. "This helps to keep the technical complexity of the platform low and means the client doesn't have to worry about managing additional infrastructure."

Spacelift is now part of AWS ISV Accelerate, a co-sell program for organizations that provide software solutions that run on, or integrate with, AWS. Its solution is also available for businesses to download and deploy from AWS Marketplace. "We're always looking to deepen our use of AWS," says Wyszynski. "Working together closely helps us to build on our success and supports ongoing product development, meaning we can continually improve our services for customers."

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure.
AWS Activate provides startups with a host of benefits, including AWS credits, AWS support plan credits, and architecture guidance to help grow your business.
The AWS ISV Accelerate Program is a co-sell program for organizations that provide software solutions that run on or integrate with AWS.
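The bursty pattern Martin describes (a thousand tasks at once, then nothing for an hour) maps naturally onto one Lambda invocation per batch of queued records. A minimal sketch of such a handler, assuming an SQS-style event shape; the runId/state fields are illustrative, not Spacelift's actual task model:

```python
import json

# Illustrative sketch of a Lambda handler draining a burst of queued
# tasks; the "runId"/"state" fields are hypothetical, not Spacelift's API.

def process_task(task):
    """Stand-in for reconciling one infrastructure task; returns its new state."""
    return {"runId": task["runId"], "state": "FINISHED"}

def handler(event, context=None):
    """Entry point Lambda invokes once per batch of incoming records.
    Each record carries one task; a thousand queued tasks simply fan out
    into more concurrent invocations, so work never accumulates."""
    results = [process_task(json.loads(record["body"]))
               for record in event.get("Records", [])]
    return {"processed": len(results), "results": results}
```

Because Lambda bills per invocation, an idle hour between bursts costs nothing, which matches the pay-for-what-you-use behavior the article highlights.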
Sprout Social Reduces Costs and Improves Performance Using Amazon EMR _ Case Study _ AWS.txt
Sprout Social Reduces Costs by 40% and Improves Performance by 50% Using Amazon EMR
2022

Social media management software company Sprout Social reduced costs by 40 percent and reduced batch job processing times by 30–50 percent using Amazon EMR.

About Sprout Social
Sprout Social is a B2B SaaS company that provides integrated social media management. It offers a solution that provides tools for brand monitoring and social customer care, content planning and publishing, and other capabilities.

Results
- 40% cost reduction from previous solution
- 30–50% improved batch job performance
- Decreased time-consuming data storage scaling
- Improved focus on core business objectives

As a company that provides social media management software for businesses, Sprout Social processes enormous amounts of data. But with its self-managed batch processing tech stack nearing its end of life, the company needed a new solution. Sprout Social was already using several Amazon Web Services (AWS) offerings, so after evaluating a few other service providers, the company ultimately migrated to Amazon EMR, a cloud big data solution for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open-source analytics frameworks such as Apache Spark, Hive, and Presto.

Opportunity | Migrating to Effective Data Storage
Founded in 2010, Sprout Social merges the complex landscape of social media channels into one comprehensive and navigable system. Sprout Social's customer base can then use its software to communicate with their customers, plan and publish content to various channels, and measure how customers are engaging with their brand. To accomplish this, the company ingests billions of data points in the form of messages and metrics from different social network channels. It then uses the open-source software Apache HBase as the primary data store for the social media data that it analyzes.

Sprout Social has a history of using AWS solutions. For example, the company built its employee advocacy tool, Bambu, entirely on AWS in 2014, using solutions like Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. But it had been using a self-managed Hadoop solution for its batch analytics system.

With this self-managed batch processing solution nearing end of life, however, Sprout Social took the opportunity to investigate other solutions. The company had wrestled with long-standing pain points. Commonly, it had to scale its Apache Hadoop cluster multiple times per year. Doing so required a significant amount of guesswork and time from Sprout Social engineers. "There was this kind of low-grade babysitting that would reach a peak when we needed to scale," says Matt Trumbell, director of engineering on the Listening team at Sprout Social. "We would try to always stay ahead of it but knowing when we needed to scale was kind of like reading the tea leaves."

Looking for a data solution that would scale with ease, the Sprout Social team tested Amazon EMR alongside Amazon S3 and EMRFS in June 2021. Using these services, Sprout Social engineers found that they could chart a very clear, smooth path to a successful migration. The Amazon S3 throughput of Amazon EMR was not only keeping up with Sprout Social's use of Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, and Amazon Elastic Block Store (Amazon EBS), easy-to-use, high-performance block storage at any scale, but surpassing it. "We were able to continue running our services without needing to reinvent the wheel, all while hitting the triangle of faster, cheaper, and more reliable," says Dan Johnson, principal site reliability engineer at Sprout Social.

Solution | Reducing Costs and Improving Operations
Because Amazon EMR is a managed service that works using Apache Hadoop, it was a natural fit for the needs of Sprout Social. As a result, the company had an almost-seamless migration to Amazon EMR. In fact, the Sprout Social team could quickly import a snapshot it had taken of its existing Apache Hadoop cluster, and the service was up and running in a matter of hours. After migrating its first cluster in August 2021, Sprout Social completed the migration of two additional clusters by January 2022.

The AWS team provided support for Sprout Social through the migration process, both with technical issues, like specific cluster-level settings to maximize performance, and cost-related issues, like testing without going over budget. "Because Amazon EMR is very easy to stand up, it was trivial for us to test this process a few times in advance," says Johnson. "We had full confidence going into it that we knew what the actual migration window would be and could communicate that with the rest of engineering and support."

Sprout Social saw the benefits of migrating to Amazon EMR almost immediately. The biggest benefit was reduced costs from using Amazon S3 storage over Amazon EBS volumes. "Amazon EMR is orders of magnitude cheaper for the large dataset we have," says Johnson. "What that means is that we have more predictability around our cost as our company and our dataset expands." Using Amazon EMR, scaling clusters is now significantly more straightforward than with its self-managed solution, which saves many hours of Sprout Social engineers' time. The Sprout Social team also estimates that it saved roughly 40 percent in total costs over its previous data storage solution.

Sprout Social has also seen improvements of 30–50 percent in overall batch job performance, traditionally its biggest bottleneck given how much data must be processed in any given job. "Amazon EMR has been an absolute game changer because of our ability to scale compute independently from storage," Johnson says. "And we've seen less instability due to disk input/output and overall better and more predictable job run times on Amazon EMR, as opposed to our old traditional Apache Hadoop stack."

Outcome | Optimizing Data Storage to Focus on Overall Company Performance
Sprout Social's migration to Amazon EMR meant a 40 percent reduction in costs and a 30–50 percent decrease in batch data processing time. It also meant that Sprout Social could focus less on technical issues and more on core business goals, like research and improving features for customers.

Going forward, Sprout Social is planning to further optimize its use of Amazon EMR. Specifically, the team wants to explore how it could reduce the size of its main cluster and start using ephemeral clusters to handle batch jobs on a more as-needed basis. By doing so, it hopes to reduce the costs associated with operational overhead and provide new features to its customers that wouldn't have been possible before it migrated to Amazon EMR.

"Tools like Amazon EMR are critical to our ability to invest our money wisely and in areas other than data storage," says Johnson. "AWS offerings make it possible for us to continue investing heavily in research and development and developing customer features as opposed to fighting a battle to keep costs under control."

AWS Services Used
Amazon EMR is a cloud big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning (ML) applications using open-source analytics frameworks.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).
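Scaling compute independently from storage, as Johnson describes, typically means keeping cluster input, output, and logs in S3 (via EMRFS s3:// URIs) so clusters can be resized or replaced at will. A minimal sketch of a parameter builder for boto3's EMR run_job_flow call, in the spirit of the ephemeral batch clusters the team is exploring; bucket names, instance types, and counts are illustrative assumptions, not Sprout Social's configuration:

```python
# Sketch: build parameters for an ephemeral EMR batch cluster whose
# state lives in S3 (via EMRFS), so compute can scale independently
# of storage. Names and sizes are illustrative assumptions.

def batch_cluster_params(name, log_bucket, core_nodes=4):
    """Return a run_job_flow-style parameter dict for a transient
    cluster that terminates itself once its batch steps finish."""
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.5.0",
        "LogUri": f"s3://{log_bucket}/emr-logs/",
        "Instances": {
            "InstanceGroups": [
                {"Name": "leader", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": core_nodes},
            ],
            # Ephemeral: shut the cluster down when no steps remain
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

def launch(params):
    import boto3  # lazy import: the builder above stays testable without AWS access
    return boto3.client("emr").run_job_flow(**params)
```

Because the data stays in S3, `core_nodes` can be tuned per batch job without any data migration, which is what makes the as-needed ephemeral-cluster model the team is considering practical.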
Spryker Case Study _ Amazon Elastic Compute Cloud _ AWS.txt
Spryker Brings Composable Commerce to Global Businesses in Days Using AWS
2022

Spryker provides a cloud-based platform that global businesses rely on to run their B2B, marketplace, and direct-to-consumer (D2C) commerce businesses. When Spryker noticed customers were running complex backends to support its software, it turned to AWS. Using AWS, it improved the customer experience and shortened customer onboarding from months to days. Spryker has also expanded its operations worldwide, including launching in APAC, and improved its ability to innovate.

About Spryker
Spryker provides businesses of all sizes with cloud-native solutions for B2B and marketplace commerce. Founded in 2014 in Berlin, it has over 600 global employees, including offices in Germany, the Netherlands, Ukraine, the UK, and the US.

Benefits of AWS
- Shortens customer onboarding time from months to 1-2 days
- Scales to meet rising demand when customers doubled in 1 year
- Supports customers' global expansion
- Reduces maintenance overheads so Spryker can focus on innovation

"We value our collaboration with AWS. It has helped us to find the right technology to deliver composable commerce solutions to some of the world's biggest brands."
— Volodymyr Lunov, Director of Cloud Engineering, Spryker

Spryker provides technology that global businesses rely on to run their B2B and marketplace commerce businesses. Founded in 2014 in Berlin, it's growing rapidly—sales have doubled over the past 3 years—and it now has over 600 employees. Its customers include major brands such as Aldi, Hilti, Ricoh, Siemens, and Metro.

Spryker owes its success to staying close to its customers and solving their problems. When it noticed customers were spending significant resources running complex backends to support Spryker software, it built a cloud-based solution on Amazon Web Services (AWS) to make it more efficient.

The Spryker Cloud Commerce OS solution, built on AWS, is developed and run by Spryker. Previously, customers ran the software on custom backends that were time-consuming to maintain. "Many of our customers were managing a range of different technologies," says Volodymyr Lunov, senior director of cloud engineering at Spryker. "Our vision is that customers should focus on their core business to create sophisticated solutions, and not worry about the infrastructure. Instead, Spryker takes care of that."

Growing to Meet Doubling Customer Numbers Using Amazon EC2
The COVID-19 pandemic proved a catalyst to Spryker's already fast-growing business, as many businesses shifted to online sales channels when physical shops were forced to close. Using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, Spryker can scale compute and storage resources to accommodate its expanding number of customers, which doubled over the past year.

Spryker can also adjust resources with a few clicks to accommodate the seasonal traffic spikes that retailers experience on busy shopping days such as Black Friday. This means Spryker's customers can rely on speedy performance when their own customers need them most. "We use auto-scaling configurations to deliver solid reliability and performance," says Lunov. "And because AWS provides both on-demand and reserved capacity, we pay only for what we use, so we can achieve the right balance of cost versus performance."

Customer Onboarding Shortened from Months to Days
Getting customers up and running on the cloud solution is straightforward. Onboarding takes from as few as 4 hours to 1-2 days, compared to months previously. Because many Spryker customers already use AWS, it further simplifies the process.

Quick onboarding is a competitive advantage for Spryker. It won a major customer, Aldi, when the grocer needed to rapidly ramp up online sales during the COVID-19 pandemic. Spryker is commissioned to migrate Aldi's digital commerce solutions to its cloud solution on a global scale.

Providing reliable service to customers regardless of where they are in the world is essential to many of Spryker's customers, which are global businesses operating in dozens of countries. Spryker reduces latency issues and improves the customer experience using AWS Regions and Availability Zones, which provide discrete data centers in 84 locations. AWS Availability Zones also make it easy for Spryker customers to comply with local data protection regulations that require data to remain within a geographic region, because customers can specify where they want their information to be hosted.

Increasing Innovation and Collaborating to Meet Business Goals
Spryker customers benefit from the use of AWS to increase their efficiency and enable them to rapidly expand their operations around the world. Spryker has gained a competitive advantage by shortening customer onboarding and has launched across the globe in EMEA, North America, and APAC, including China. It has also improved its ability to innovate, so it can continue to enhance its services to meet customers' evolving needs. Spryker offers its services in any region where AWS is available, including mainland China. It used AWS support to help it solve the legal and technical complexities of developing a solution for China, so its customers can reach shoppers in that growing market.

To speed development of core features, engineers can spin up development environments running on AWS as needed to test out new ideas. "Being able to validate hypotheses by combining our core product with AWS results in a better experience for our customers," says Lunov. Spryker now has a flexible infrastructure and the tools it needs to innovate. It spends less time on maintenance and uses AWS services for tasks such as managing containerized workloads, instead of developing its own. This means its IT team can focus on innovation and adapting to meet customer needs.

Throughout its journey from startup to global company, Spryker has appreciated a close relationship with AWS. "We value our collaboration with AWS," says Lunov. "It's helped us to find the right technology to deliver ecommerce solutions to some of the world's biggest brands."

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
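Pre-scaling for a known spike such as Black Friday can be expressed as a scheduled action on an EC2 Auto Scaling group. A minimal sketch using boto3's put_scheduled_update_group_action; the group name, dates, and sizes are illustrative assumptions, not Spryker's actual configuration:

```python
from datetime import datetime

# Sketch: pre-scale an EC2 Auto Scaling group ahead of a known traffic
# spike such as Black Friday. Names and sizes are illustrative
# assumptions, not Spryker's actual configuration.

def peak_scaling_action(group_name, start, end, desired, floor=2, ceiling=None):
    """Return parameters for a scheduled scale-out action that raises
    capacity for the peak window and lets it fall back afterwards."""
    return {
        "AutoScalingGroupName": group_name,
        "ScheduledActionName": f"peak-{start:%Y-%m-%d}",
        "StartTime": start,
        "EndTime": end,
        "MinSize": floor,
        # Default headroom: allow up to twice the expected peak capacity
        "MaxSize": ceiling if ceiling is not None else desired * 2,
        "DesiredCapacity": desired,
    }

def schedule(params):
    import boto3  # lazy import keeps the builder testable without AWS access
    return boto3.client("autoscaling").put_scheduled_update_group_action(**params)
```

Combining a scheduled floor like this with demand-based scaling policies is one common way to get the "speedy performance on Black Friday, pay only for what you use afterwards" balance the article describes.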
Staffordshire University Uses AWS Academy to Help Students Meet Business Demand for Cloud Skills _ Case Study _ AWS.txt
Staffordshire University Uses AWS Academy to Help Students Meet Business Demand for Cloud Skills
2022

Staffordshire University added cloud computing skills training to its curriculum using AWS Education Programs, helping 93 percent of participants find employment within 6 months of graduation.

About Staffordshire University
Staffordshire University is a public research university in Staffordshire, England. Founded in 1914, the university serves over 15,000 students across three schools and four campuses.

Results
- 93% of graduates find jobs within 6 months
- 6 AWS courses covering cloud skills
- Top 10 placement during the AWS Educate University Challenge
- Empowered students to learn remotely using any hardware and earn AWS Certifications

Opportunity | Building In-Demand Cloud Skills with AWS Academy
Founded in 1914, Staffordshire University serves over 15,000 students across three schools and four campuses. Maintaining a focus on solving wide-reaching challenges, the university reports that 78 percent of its research is world-leading or of international importance, according to the Research Excellence Framework 2014. So it was only natural that Staffordshire University's School of Digital, Technologies, and Arts became one of the first educational institutions in the United Kingdom to offer cloud computing skills training using AWS Academy.

As a public research university that helps students connect their studies to real-world needs, England's Staffordshire University was ready to expand its curriculum to include cloud computing skills. IT-related roles make up 13 percent of all job vacancies in the United Kingdom. Businesses want to hire candidates with digital skills—especially cloud computing, which employers identified as the top skill they look for in job candidates. In late 2020, Staffordshire University participated in the AWS Educate University Challenge. This was an interuniversity competition where students learned essential cloud computing skills at the same time as they competed to earn points, badges, and prizes for their universities. Many students from across the United Kingdom and Ireland participated—including three students from Staffordshire University, who placed in the top 10 by the end of the challenge. "The performance raised the profile of our students among potential employers," says Dr. Carolin Bauer, senior lecturer at the School of Digital, Technologies, and Arts at Staffordshire University. "Many companies have been in touch regarding placements for our students and graduates as well as other projects. It's been a great success."

Amazon Web Services (AWS) Education Programs collaborate with education institutions and the public sector to provide access for individuals to develop cloud computing and digital skills. To help graduates boost their employability, Staffordshire University worked with the AWS team to introduce cloud computing skills training and add cloud courses to its credit-bearing computer science modules. Staffordshire University offers courses through AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers. Since the university added AWS Academy courses to its curriculum in 2017, several hundred students have participated. Of those students, 93 percent have achieved employment within 6 months of graduation.

Solution | Learning by Doing Using AWS Learner Labs
The United Kingdom has experienced a technology boom in recent years, with technology funding tripling in the first 6 months of 2021 compared to the same period in 2020. In particular, employers need professionals with cloud computing skills ranging from cloud development to machine learning and data analytics. To meet demand, Staffordshire University offers students their choice of six AWS courses covering these key skills and more. Facilitated by two AWS educators using a ready-to-teach curriculum and resources provided by AWS Academy, the program can easily scale up as interest grows. Students enjoy a hands-on approach to their studies and get the chance to use AWS services. "With AWS Academy, our students love that they're not just taking theory lessons," says Dr. Justin Champion, senior lecturer at the School of Digital, Technologies, and Arts at Staffordshire University. "They get to work in actual environments with real AWS tools."

Along with adding AWS Academy courses to its curriculum, Staffordshire University also became one of the first adopters of AWS Academy Learner Labs. These are hands-on lab environments where educators can bring their own assignments and invite their students to gain experience with the AWS Cloud. "AWS Academy Learner Labs let students make mistakes and still learn about cloud computing along the way, and that's invaluable," says Dr. Champion.

Because learning with AWS Academy takes place in the cloud, Staffordshire University can offer remote learning regardless of students' computer hardware. Learners enjoy the flexibility of practicing cloud computing skills from anywhere. They can also prepare to earn AWS Certifications, which validate technical skills and cloud expertise. As a result, Staffordshire University students get the opportunity to prepare for the workforce and boost their employability long before they graduate.

Outcome | Developing New Cloud Coursework
Next up, Staffordshire University is expanding on the success of its cloud courses by launching additional programs of study developed in collaboration with the AWS team. Staffordshire University and the AWS team designed these programs by "Working Backwards" — an Amazon process that encourages companies to brainstorm solutions by using a customer challenge as the starting point — from the cloud skills employers are currently seeking in the United Kingdom and across the global labor market. One of these programs, which launches in September 2022, is a cloud computing course that features both cloud computing and cybersecurity modules and will offer students more opportunities to discover what's possible with the AWS Cloud. "What we want to encourage is for students to play with AWS services as well as build confidence with the tools," says Dr. Champion.

AWS Services Used
AWS Academy: Empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.
AWS Academy Learner Labs: Long-running hands-on lab environments where educators can bring their own assignments and invite their students to get experience using select AWS services.
AWS Educate: Build your cloud skills at your own pace, on your own time, and completely for free.
AWS Certification: Validate technical skills and cloud expertise to grow your career and business.
Stanford Multimodal Data Case Study _ Life Sciences _ AWS.txt
DDRCC at Stanford University Uses AWS for Research in Precision Medicine Leveraging Multimodal Data

The Deep Data Research Computing Center (DDRCC) at Stanford University, one of the many initiatives originating out of Stanford Snyder Labs, is part of the Department of Genetics at Stanford Medicine in Palo Alto, California. Its goal is to create tools that bridge the gap between biology and computer science and help researchers in precision medicine deliver tangible medical solutions.

Precision medicine research relies on an individualized understanding of multimodal data (such as genomic, microbiomic, and proteomic data) so that clinicians and researchers can personalize therapy for patients. The large amount of data derived from wearable sensors, electronic medical records, and molecular profiles adds another dimension. This increased scale and complexity raises new challenges around data availability, acquisition, storage, integration, and analysis, so it is imperative for researchers to have an agile and elastic data strategy. "Deep data is the future of medicine. We need it for monitoring health and for diagnostics, prognostics, and treatments, all at a personal level," says Dr. Michael Snyder, chair and professor of genetics at Stanford University.

To facilitate precision medicine research, DDRCC created the My Personal Health Dashboard (MyPHD), a secure, scalable, and interoperable health management system for consumers. MyPHD provides efficient data acquisition, storage, and near-real-time analysis capabilities for researchers using Amazon Web Services (AWS). The team also developed the Stanford Data Ocean (SDO), the first serverless precision medicine educational solution for researchers to educate, innovate, and collaborate over code and data. By building on AWS, DDRCC is using the elasticity, scalability, and security of the cloud to benefit both consumers and biologists and improve the field of precision medicine.

Benefits of AWS

- Achieves scalability of MyPHD for virtually any number of users
- Improves elasticity of the SDO for educational use
- Improves adaptability for collaborative research
- Improves security of precision medicine solutions
- Reduces costs with the pay-as-you-use model

Designing Solutions for Precision Medicine Research Using Multimodal Data

On AWS, the DDRCC team designed its MyPHD and SDO solutions to import, query, and analyze large medical databases securely, at high speed, and at low cost. "Each of our tools has unique needs, especially as they move outside of the research environment and are deployed for clinical use," says Dr. Philip Tsao, associate chief of staff for precision medicine for the VA Palo Alto Health Care System and professor of medicine at Stanford University. "To design scalable and secure medical applications, it is critical to form cross-functional teams of experts and facilitate effective collaboration."

Precision medicine depends on integrating disparate, multimodal datasets to draw inferences. Typically, these datasets are large and siloed across disparate sources, so researchers must determine the right compute and storage configurations needed to apply complex computational algorithms to them. The DDRCC team developed SDO to help researchers efficiently allocate resources to experiment with code. Using SDO, researchers can explore important questions around precision medicine and scale innovative solutions. By running SDO workloads on AWS, DDRCC has achieved high scalability while meeting stringent security requirements.

Building Innovative Solutions on AWS for Multimodal Data Analysis

DDRCC's MyPHD provides a secure, comprehensive environment for biomedical data analytics at a massive scale. It can store, organize, and process complex health datasets and support near-real-time data analysis and visualization at the individual and cohort levels. This is designed to refine the accuracy of diagnoses and medical prescriptions and improve precision medicine. To support the large-scale analysis of participants' data for individual health management, DDRCC can scale resources for MyPHD based on the number of workloads. It also uses AWS security services as the foundation for its medical applications, which deal with large volumes of highly sensitive personal data.

DDRCC requires high scalability to process data from MyPHD and SDO and relies on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. "Not only can we scale MyPHD and support different numbers of users, but we can also scale our algorithms based on the number of workloads," says Dr. Arash Alavi, research and development lead of the DDRCC at Stanford University. To run preprocessing pipelines for large-scale genomics and transcriptomics applications, the team also uses Amazon Genomics CLI, an open-source tool for genomics and life science customers, and AWS Batch, a service for fully managed batch processing at virtually any scale. Amazon Genomics CLI simplifies and automates cloud infrastructure deployments, while AWS Batch makes it simple to run hundreds of thousands of batch computing jobs on AWS.

To improve biologists' ability to complete vital health research, DDRCC uses Amazon SageMaker and Service Workbench on AWS. Using SageMaker, bioinformaticians can build, train, and deploy machine learning models for virtually any use case with fully managed infrastructure, tools, and workflows. The team uses Service Workbench on AWS to provide secure, repeatable, and federated control of access to the data, tooling, and compute power that researchers need. Researchers can securely access large datasets on Amazon Simple Storage Service (Amazon S3), an object storage service with industry-leading scalability, data availability, security, and performance.

DDRCC also uses Amazon Athena, an interactive query service, to facilitate the analysis of data stored in Amazon S3 using standard SQL. Because this service is highly elastic, researchers can query data collected by SDO and MyPHD on demand and move more quickly in their projects. Additionally, Athena is serverless, so there is no infrastructure for DDRCC to manage, and the team pays for only the queries it runs, reducing costs. "The ability to scale resources dynamically based on the size of the workload—this pay-as-you-go model—is astonishing," says Dr. Amir Bahmani, director of the DDRCC at Stanford University.

Security is a major requirement for applications that handle medical data. DDRCC's solutions do not use, store, or process protected health information, and all data in transit and at rest is completely encrypted and anonymized. To maintain a high level of security, DDRCC has adopted AWS services such as Amazon Cognito, which lets teams add user sign-up, sign-in, and access control to web and mobile apps. "The security features that AWS provides include out-of-the-box logging, auditing, and monitoring, which we use to protect our data," says Bahmani.

Collaborating on Precision Medicine

The support from AWS was incredibly valuable to DDRCC, and the team plans to continue using AWS services to design innovative and creative solutions for precision medicine on the cloud. "You can be anywhere in the world, and still access these large medical datasets," says Bahmani. "We've achieved this by running our infrastructure on AWS."

About Stanford Deep Data Research Computing Center

Stanford Deep Data Research Computing Center is in the Department of Genetics at Stanford Medicine in Palo Alto, California. The team works on the design and development of systematic and intelligent solutions for large-scale biomedical applications.
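The serverless, pay-per-query Athena workflow described above can be sketched with boto3. This is a minimal illustration, not DDRCC's actual code: the database, table, column, and bucket names are hypothetical placeholders.

```python
"""Sketch: submitting a serverless SQL query over study data in Amazon S3
with Amazon Athena. All database/table/bucket names are hypothetical."""


def build_heart_rate_query(participant_id: str) -> str:
    """Compose a standard-SQL query over a (hypothetical) wearables table."""
    # In production code, validate or parameterize participant_id rather
    # than interpolating it directly into SQL.
    return (
        "SELECT timestamp, heart_rate "
        "FROM wearables.heart_rate_readings "
        f"WHERE participant_id = '{participant_id}' "
        "ORDER BY timestamp"
    )


def run_query(participant_id: str, output_bucket: str) -> str:
    """Start an Athena query and return its execution ID (needs AWS credentials)."""
    import boto3  # imported lazily so the module loads without the AWS SDK

    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=build_heart_rate_query(participant_id),
        QueryExecutionContext={"Database": "wearables"},
        ResultConfiguration={"OutputLocation": f"s3://{output_bucket}/athena-results/"},
    )
    return response["QueryExecutionId"]

# example: run_query("participant-001", "my-study-results-bucket")
```

Because Athena bills per query scanned, a researcher pays nothing while this code sits idle, which matches the pay-as-you-use model the team describes.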
Sterling Auxiliaries Case Study _ Amazon Web Services.txt
Sterling Auxiliaries Resolves SAP Downtime and Boosts Productivity to Fuel Business Expansion on AWS

Sterling Auxiliaries is a manufacturer specializing in the production of industrial surfactants and chemicals. Headquartered in Mumbai, India, Sterling Auxiliaries exports to customers in 65 countries, and its domestic and international sales are increasing yearly. To meet its go-live timeline and improve system performance, the company upgraded its on-premises SAP R/3 system to SAP S/4HANA on AWS, working with AWS Partner Inteliwaves Technologies on the implementation and migration, automating backups, and saving time with improved system performance.

Opportunity | Seeking Fast Deployment with No Downtime

Sterling Auxiliaries Pvt. Ltd. launched in 1984 in India as a manufacturer of surfactants and other industrial chemicals. The business began exporting in 2000, and to date, international customers account for 60 percent of total sales. The company has been using SAP software since 2006, with SAP as the foundation for operations at its headquarters and its main factory in the state of Gujarat.

The business began migrating from SAP R/3 to SAP S/4HANA at the start of 2022 with the help of Inteliwaves but encountered critical issues with its on-premises data center vendors. Infrastructure and provisioning were delayed, threatening to disrupt the project timeline: Sterling Auxiliaries needed to go live with SAP S/4HANA by the beginning of April, the start of the new financial year.

Furthermore, with the older SAP R/3 setup on premises, the business experienced one or two days of downtime each month, and regular connectivity challenges slowed down or prevented employees from carrying out their work. For example, when there was a power outage in Mumbai, where Sterling Auxiliaries' SAP R/3 servers were located, factory workers in Gujarat couldn't access the system, leading to delays in dispatching materials. Backups were also a cumbersome, time-consuming process on premises.

Solution | Improving Productivity with Automated Processes

On the recommendation of AWS Partner Inteliwaves, Sterling Auxiliaries deployed SAP S/4HANA on Amazon Elastic Compute Cloud (Amazon EC2) instances. Within two weeks, Inteliwaves helped migrate Sterling Auxiliaries' SAP S/4HANA development, quality, and production environments from its data center servers to AWS, and the company went live by the start of the new financial year. Vishal Shah, director at Sterling Auxiliaries, says, "The infrastructure setup and onboarding to SAP S/4HANA on AWS was a smooth process, and we've had a great experience with Inteliwaves. Virtual servers in the production and development environments were available when we needed them, which allowed us to meet our deadline."

Sterling Auxiliaries is now saving the time and human resources formerly dedicated to backing up SAP data manually on premises. Inteliwaves helped the company implement Amazon Elastic Block Store (Amazon EBS) snapshots to automate daily backups, along with the SAP-certified AWS Backint Agent, which backs up SAP HANA workloads running on Amazon EC2 instances to Amazon S3 and restores them using SAP management tools such as SAP HANA Cockpit, SAP HANA Studio, or SQL commands. Two staff members, each of whom spent 4–5 hours daily on backups, have been redeployed to the infrastructure team because backups are now automated on AWS, saving roughly 10 hours a day. Anil Chavan, accounts head at Sterling Auxiliaries, says, "Our headaches due to the need to constantly monitor SAP are gone, which is a big relief."

Outcome | Saving Time while Boosting Employee Satisfaction

Since launching SAP S/4HANA on AWS, Sterling Auxiliaries reports significant time savings, faster performance, and improved employee satisfaction with a highly available SAP environment. The company has achieved 100 percent uptime since migration and has eliminated hardware upgrade and data center maintenance costs. "We're saving a lot of time now that we don't need to wait around during server lags," says Chavan. "We can accomplish the work quicker and use our time for other activities such as SAP system audits and application planning." Eliminating server downtime and connectivity delays has driven a 25–30 percent rise in productivity.

Performance improvements, plus the new implementation of the SAP DMS module on AWS, have also facilitated document transfer among teams. "Overall, our factory and back-office employees are much happier with SAP S/4HANA on AWS. They can retrieve documents, save data, and generate reports faster," Chavan adds. When asked about the company's plans, Shah says, "The success of this project has prompted us to evaluate cloud-based solutions for other legacy systems in 2023. We're also planning to implement SAP S/4HANA on AWS for other business divisions that have been running SAP on premises." Sterling Auxiliaries' export business is growing annually, so having a low-latency SAP backbone will be key as the business expands.

About Sterling Auxiliaries

Sterling Auxiliaries is an international manufacturer of surfactants and industrial chemicals based in India.
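The automated daily EBS snapshot routine described above could be implemented in many ways; the following boto3 sketch shows one. Volume IDs and tag names are hypothetical, and in practice a scheduler (for example, Amazon Data Lifecycle Manager or a cron job) would invoke the backup function.

```python
"""Sketch: automating daily EBS snapshots of SAP data volumes with boto3.
Volume IDs, system names, and tags are hypothetical placeholders."""

from datetime import date


def snapshot_request(volume_id: str, system: str, today: date) -> dict:
    """Build the parameters for a tagged, dated snapshot of one volume."""
    return {
        "VolumeId": volume_id,
        "Description": f"Daily {system} backup {today.isoformat()}",
        "TagSpecifications": [
            {
                "ResourceType": "snapshot",
                "Tags": [
                    {"Key": "System", "Value": system},
                    {"Key": "BackupDate", "Value": today.isoformat()},
                ],
            }
        ],
    }


def backup_volumes(volume_ids: list) -> list:
    """Create one snapshot per volume and return the snapshot IDs
    (requires AWS credentials)."""
    import boto3  # imported lazily so the module loads without the AWS SDK

    ec2 = boto3.client("ec2")
    return [
        ec2.create_snapshot(**snapshot_request(v, "SAP-S4HANA", date.today()))["SnapshotId"]
        for v in volume_ids
    ]

# example: backup_volumes(["vol-0abc123def4567890"])
```

Tagging each snapshot with its backup date makes it straightforward to apply a retention policy later, for example deleting snapshots older than a chosen window.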
Storengy Case Study.txt
Storengy Moves HPC to AWS, Runs Geoscientific Simulations 2.5 Times Faster

Storengy, a subsidiary of the ENGIE Group, is a leading supplier of natural gas. The company offers gas storage, geothermal solutions, carbon-free energy production, and storage technologies to enterprises worldwide. To ensure its products are properly stored, Storengy uses high-tech simulators to evaluate underground gas storage, a process that requires extensive use of high-performance computing (HPC) workloads. The company also uses HPC technology to run natural gas discovery and exploration jobs.

For many years, Storengy ran its HPC workloads in an on-premises IT environment, but it struggled to manage an increase in jobs. "Our HPC environment was not designed to scale easily. We had to do larger simulations in a very short time as our business grew, and we lacked the ability to support the gas exploration workloads," says Jean-Frederic Thebault, engineer at Storengy. Storengy also sought to accelerate the deployment of HPC clusters for its engineers. "It typically took weeks or sometimes months to provision server clusters for a new project," says Thebault. "We wanted our engineers spending their time on research, not provisioning."

Working with UCit to Deploy HPC on AWS

Storengy addressed these limitations by moving its HPC environment to Amazon Web Services (AWS). "We knew the cloud would give us the scalability and flexibility we were looking for, and AWS offers more services than any other cloud provider we evaluated," says Thebault. The company collaborated with AWS Partner UCit to implement the UCit Cloud Cluster Made Easy (CCME) solution, which enables Storengy researchers to quickly build customizable HPC clusters and create multiple cluster profiles that match workload type to the number of compute resources.

CCME runs on AWS ParallelCluster, an AWS-supported open-source cluster management tool, and Amazon Elastic Compute Cloud (Amazon EC2) instances, and it stores HPC data in Amazon Simple Storage Service (Amazon S3) buckets. "We evaluated each HPC workload and collaborated with Storengy engineers to determine which workloads were right for AWS," says Philippe Bricard, chief executive officer and founder of UCit. "We also used one of our internal cost optimization tools to help Storengy budget for the cost of running workloads on AWS." In addition, to reduce costs, Storengy plans to use Amazon FSx for Lustre as a managed service for HPC workloads, replacing its previous BeeGFS parallel file system.

Running Simulations 2.5 Times Faster

The company has increased its HPC cluster performance by 2.5 times since moving to AWS. With faster performance, Storengy can more quickly validate experiments before moving them into production. "We always use the latest AWS CPU to run our HPC clusters, which ensures we always have the best performance," says Thebault. "This is a major improvement over our on-premises environment, and it helps us perform geoscientific studies faster than before so we can more quickly determine the location of underground natural gas."

Accelerating Time-to-Market for Geoscientific Studies

By leveraging CCME, Storengy engineers can use a simple portal for submitting jobs, and they can run complex simulations using MATLAB and other scientific applications. As a result, Storengy researchers can deploy new HPC environments faster than before. "Using the CCME tool on AWS, we can deploy HPC resources in 30 minutes, compared to the weeks or months it would take to procure servers and provision compute in our on-premises environment," says Thebault. "That means we can speed time-to-market for our scientific studies."

Scaling on Demand to Meet Business Growth

Storengy can now scale its HPC clusters on demand, making it simpler and faster to explore the company's 10 trillion cubic meters of natural gas underground. "Whenever we want to initiate a new gas exploration project, we can add the capacity we need to support it without limitations," says Thebault. "Because of AWS, we have the scalability and high availability to perform hundreds of simulations at a time. Additionally, the CCME solution scales automatically up or down to support our peak workload periods, which means we don't have any surprises with our HPC environment."

Because Storengy now pays for HPC workloads as a service rather than per month, the company expects to save thousands of dollars each month. Also, once it begins taking advantage of Amazon FSx for Lustre, Storengy will spend considerably less than it pays for its BeeGFS parallel file system. "Using Amazon FSx for Lustre, we will have more flexibility in terms of cost and performance, depending on the application requirements," says Thebault.

Overall, the AWS solution gives Storengy engineers the flexibility to spend more time on research. "Using AWS, we can give our researchers a new way of working, and this is only the beginning. We are not limited by our technology tools anymore—the tools have adapted to our research, which frees us to focus entirely on innovation," Thebault concludes.

Benefits of AWS

- Runs simulations 2.5 times faster than before
- Deploys HPC environments in 30 minutes instead of weeks or months
- Scales on demand to meet business growth
- Expects to save thousands of dollars monthly

About Storengy

Storengy, a subsidiary of ENGIE, is a global leader in underground natural gas storage. The company owns 21 natural gas storage sites and offers innovative products to customers across the globe.
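To make the ParallelCluster-based architecture concrete, the sketch below renders the kind of minimal cluster definition a tool like CCME manages. This is an illustration only: the region, subnet, key pair, instance types, and queue names are hypothetical placeholders, not Storengy's actual configuration.

```python
"""Sketch: generating a minimal AWS ParallelCluster 3 configuration for a
Slurm-based simulation cluster. All values are hypothetical placeholders."""


def cluster_config(subnet_id: str, key_name: str, max_nodes: int) -> str:
    """Render a cluster configuration as YAML text."""
    return f"""\
Region: eu-west-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: {subnet_id}
  Ssh:
    KeyName: {key_name}
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: simulation
      ComputeResources:
        - Name: compute
          InstanceType: c5n.18xlarge
          MinCount: 0          # scale to zero when no jobs are queued
          MaxCount: {max_nodes}
      Networking:
        SubnetIds:
          - {subnet_id}
"""

# The rendered file could then be passed to the pcluster CLI, e.g.:
#   pcluster create-cluster --cluster-name geo-sim --cluster-configuration cluster.yaml
```

Setting `MinCount: 0` is what gives the scale-to-zero, pay-for-what-you-use behavior the article describes: compute nodes exist only while simulations are queued.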
Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters _ Thomson Reuters Case Study _ AWS.txt
Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters

Learn how Thomson Reuters streamlined ML development using its Enterprise AI Platform powered by Amazon SageMaker.

Thomson Reuters (TR) is on a mission to facilitate innovative projects through the increased use of machine learning (ML) and artificial intelligence (AI). The content-driven technology company is a leading provider of business information services. AI and ML technologies are at the core of its solutions, but development processes varied across TR's business units and data science teams. To facilitate cross-team collaboration and speed up the development of creative solutions, TR set out to build an agile environment that standardizes AI/ML workflows.

TR built its Enterprise AI Platform on Amazon Web Services (AWS) to provide its ML practitioners with a simple-to-use, secure, and compliant environment embedded with services that address the complete ML lifecycle. The solution is based on Amazon SageMaker, a service that makes it simple to build, train, and deploy ML models for various use cases. Now, TR can deliver advanced AI services to end users at a faster pace.

Benefits of AWS

- Frees time for data scientists to focus on ML model building
- Shortens access to AWS AI services from months to days
- Facilitates large-scale collaborative projects
- Embeds governance standards in the ML lifecycle
- Accelerates innovation

Opportunity | Using Amazon SageMaker to Streamline Collaboration and Accelerate Innovation

TR formed when Thomson Corporation acquired Reuters Group. In addition to its global news service, TR provides its customers with products that include highly specialized software and tools for legal, tax, accounting, and compliance professionals. With roots dating back to 1851, TR first incorporated AI in the 1990s to streamline and automate manual processes for its customers. It later established TR Labs to embed AI/ML into its products. "Over time, we have seen an increase in the use of AI both within our products and within our company for deriving better insights from our data," says Maria Apazoglou, vice president of AI/ML and business intelligence platforms at TR.

A series of significant acquisitions accompanied TR's organic AI growth. To improve collaboration, trust, and transparency in ML development, it chose to unify AI use across its business units and acquired data science teams. When TR Labs used AWS services to develop a promising experimentation solution, TR chose to extend this effort and build an enterprise-wide solution on top of it. "Using AWS services like Amazon SageMaker, we can create our own customized solutions while tapping into core ML functionalities," says Apazoglou. TR architected and built its Enterprise AI Platform with support from the Amazon Machine Learning Solutions Lab (Amazon ML Solutions Lab), which pairs teams with ML experts to help identify and build ML solutions, and the Data Lab Resident Architect (RA) program.

Solution | Scaling the Enterprise AI Platform across TR Using Amazon SageMaker

To create a customized Enterprise AI Platform, TR needed to accommodate a variety of AI use cases, solutions, and AI practitioner personas. It also needed to address scalability, flexibility, governance, and security throughout the ML lifecycle, from model training and deployment to monitoring and explainability.

For experimentation and training, TR needed secure access to data in the cloud to accelerate the development of AI solutions. Using the Enterprise AI Platform, it can quickly spin up ML workspaces based on AWS CloudFormation infrastructure, which speeds up cloud provisioning with infrastructure as code. These workspaces can handle heavy computational workloads and provide access to tools such as Amazon SageMaker notebooks, which offer fully managed notebooks for exploring data and building ML models. By incorporating purpose-built ML tools into data scientists' workflows, TR can efficiently run experiments, work on advanced ML projects, and deal with large volumes of data. For example, it analyzed over two million audio files to identify common customer complaints and helped an 11-person team securely and efficiently collaborate on a document-analysis project. "We've now streamlined the process for how we create and set up ML resources," says Dave Hendricksen, senior architect at TR. "In the past, creating an account would take 2–3 months. Now, we can provision one in 2 or 3 days."

When ML models are ready for deployment, TR uses multiple services depending on whether a model is deployed in TR's products or is destined for internal use. "To deploy models that are going into our products, our product engineering team often uses Amazon SageMaker endpoints," says Apazoglou. "For teams that are creating AI for internal consumption, we have developed a deployment service that codes Amazon SageMaker bots to run inferences for the models on a periodic schedule."

To monitor its ML models for drift or potential bias and to provide explainability of generated insights, TR uses Amazon SageMaker Model Monitor, a service that keeps ML models accurate over time, and Amazon SageMaker Clarify, which detects bias in ML data and explains model predictions. By extending these services, the company can schedule and evaluate AI models' performance according to predefined metrics and receive notifications whenever bias or drift is detected.

Finally, the Enterprise AI Platform's Model Registry provides a central repository for all TR AI/ML models. This component is partly based on Amazon SageMaker Model Registry, which companies use to catalog models for production, manage model versions, and associate metadata, such as training metrics, with a model. Using this service, TR makes ML models that are developed across multiple AWS accounts and owned by different business units available to view and potentially reuse, making it simple for teams to collaborate. TR also gains transparency and orchestration of model workflows as well as a centralized view of models' metadata and health metrics.

Outcome | Improving Trust and Transparency throughout the ML Lifecycle

With the Enterprise AI Platform, TR has improved governance and reduced the time to market of complete AI solutions built across business units in a secure environment. Using AWS services, the company has effectively solved the challenge of adhering to standards regarding ethics, monitoring, explainability, and risk assessment across a range of AI use cases while streamlining collaboration. Now, TR's data scientists and stakeholders have access to a centralized environment where they can collectively view and manage metadata and health metrics.

On AWS, TR can better meet its ML model governance standards and empower its data scientists to build innovative, secure, and powerful AI services for end users. The company is using the solution at scale across its entire enterprise and has seen widespread adoption across its data science teams; more than 150 AI professionals use the solution.

Using the Enterprise AI Platform, TR has effectively unified its multiaccount, multipersona ML landscape. In the future, it will continue to build out the solution using Amazon SageMaker and will explore ways to run its more than 100 legacy ML models on the solution. "We have definitely increased the transparency and improved the governance of our ML models on AWS," says Apazoglou. "TR operates on trust, so these capabilities are really fundamental."

About Thomson Reuters

Thomson Reuters is a leading provider of business information services. Its products include highly specialized software and tools for legal, tax, accounting, and compliance professionals as well as its global news service, Reuters.
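The model-registry workflow described above (catalog a version, attach training metrics, gate deployment on approval) can be sketched with the SageMaker API via boto3. This is a generic illustration, not TR's platform code: the group name, container image URI, and S3 model path are hypothetical.

```python
"""Sketch: cataloging a model version in Amazon SageMaker Model Registry so
that models built across accounts are viewable in one place. Group names,
image URIs, and S3 paths are hypothetical placeholders."""


def model_package_request(group: str, image_uri: str, model_data_url: str,
                          training_metrics: dict) -> dict:
    """Build a create_model_package request with training metrics as metadata."""
    return {
        "ModelPackageGroupName": group,
        "ModelApprovalStatus": "PendingManualApproval",  # gate deployment on review
        "CustomerMetadataProperties": training_metrics,
        "InferenceSpecification": {
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
    }


def register_model(request: dict) -> str:
    """Register the model version and return its ARN (needs AWS credentials)."""
    import boto3  # imported lazily so the module loads without the AWS SDK

    sm = boto3.client("sagemaker")
    return sm.create_model_package(**request)["ModelPackageArn"]

# example:
# register_model(model_package_request(
#     "document-classifier", "<ecr-image-uri>",
#     "s3://models/doc-clf/model.tar.gz", {"validation:f1": "0.93"}))
```

Registering versions as `PendingManualApproval` is one way to enforce the governance gate the article describes: a reviewer flips the status to `Approved` before a deployment pipeline may pick the model up.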
Streamline Workflows Using the AWS Support App in Slack with Okta | Okta Case Study | AWS
AWS Support App in Slack enables you and your team members to manage cases, collaborate, and chat with AWS support agents directly from your Slack channel.

Okta's products use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections. Since 2009, the company has used AWS services to develop its software solutions and protect customer infrastructure. Okta uses AWS Enterprise Support, which provides concierge-like service for companies with business-critical or mission-critical workloads on AWS, with a focus on helping them achieve business outcomes and find success in the cloud.

The AWS Support App in Slack is now a key communication and collaboration tool for Okta, and the company plans to implement the application in more of its business units in the future. "The AWS Support App in Slack accelerates the support process drastically," says Peterson. "The response times have been good, and the information has been valuable."

Opportunity | Using AWS Support App in Slack to Resolve Questions Faster for Okta

Previously, Okta managed all of its support requests through the AWS Management Console. For security purposes, only a few of Okta's team members were authorized to access the console. As a result, the process of requesting support was lengthy and cumbersome.
"The only way for an engineer to open an AWS Support ticket was to ask someone with access to our production accounts to log a ticket for them," says Calvin Austin, senior director of site reliability engineering at Okta. "It was very slow and very onerous." It could take 1 week to open a new ticket, and resolving a query could involve weeks of inefficient back-and-forth communication. If a team member filed a support case, only that person had access to that case for security purposes, unless they granted access to others. Managing these controls added work for Okta engineers, and the company knew that it needed a faster, more efficient workflow. After engaging their AWS Technical Account Managers for advice, Okta's Workforce Identity and Customer Identity business units chose to adopt the AWS Support App in Slack.

Outcome | Empowering Okta Engineers to Get the Most Out of AWS Support

The Workforce Identity unit at Okta, which develops the company's core identity and access management software, was a beta tester for the AWS Support App in Slack. It took an afternoon for the team to install Slack and implement the application. Twenty-four engineers on the Workforce Identity team use the application to ask questions about AWS documentation and receive support for development questions. Following this successful implementation, Okta's Customer Identity team, which protects customers' infrastructure, adopted the application. "It took maybe an hour for us to set up the AWS Support App in Slack," says Jarret Peterson, manager of site reliability engineering at Okta. "There wasn't too much on our side that we had to do. The AWS Support team took care of most of the implementation." Four of the managers on the Customer Identity team rely on the AWS Support App in Slack to quickly resolve critical, time-sensitive questions.
AWS Management Console provides everything you need to access and manage the AWS Cloud, all in one web interface.

To streamline its internal workflows, Okta adopted the AWS Support App in Slack, an application that makes it simple to create, update, search for, and resolve support cases in Slack channels. Now, the company can create and manage AWS Support cases at a faster pace, collaborate on tickets, and even request live support directly from Slack, empowering its engineers to get the most out of AWS Support resources.

Solution | Opening Tickets for AWS Support in Minutes Instead of Weeks

AWS Support provides a mix of tools and technology, people, and programs designed to proactively help you optimize performance, lower costs, and innovate faster.

Using the AWS Support App in Slack, Okta's engineers are empowered to ask questions and find answers to critical issues and development roadblocks. The company has accelerated its speed of innovation and improved efficiency and productivity across the Customer Identity and Workforce Identity teams.

Using the application, Okta's engineers can open AWS Support tickets in Slack in a few minutes instead of 1 week and resolve their queries at a much faster pace. "Before, it would take at least 1 or 2 weeks to get an answer because access to AWS Management Console was required to create support cases. Now, it's gone down to 1 day because any engineer can create a case," says Austin.
"That's a huge, huge time saving." Multiple engineers can collaborate on the same ticket, providing teams with full visibility and insight into AWS Support requests. Because the Workforce Identity team can quickly receive answers about AWS documentation and services, its engineers can focus their time on developing new features, resulting in a faster speed of innovation.

For Okta, its ability to innovate quickly and maintain high levels of security is the key to its success. The identity and access management company creates cloud-based identity platform software, built on Amazon Web Services (AWS), that helps companies protect access to their assets and technologies. Many of its business units rely on AWS Support, which offers expert guidance and assistance, to aid in the development of new features and resolve mission-critical issues. However, it could take a week or longer for engineers to open a support ticket and receive a response. Engineers could only engage with the AWS Support team through the AWS Management Console, a web application where businesses can access everything they need to manage their AWS resources, and Okta maintains strict controls on which employees can access the console.

About Okta
Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.

Using the AWS Support App in Slack, Okta can obtain AWS Support with fewer steps and fewer people involved. Additionally, more people on Okta's engineering team can request live support on demand without having to sign in to the AWS Management Console. By streamlining these internal workflows and expanding engineers' access to AWS Support, Okta can resolve issues at a much faster pace.
This speed is crucial for the Customer Identity team, which often submits time-sensitive requests. "During a critical event, literally every minute counts. Before, we would have had to arrange a phone call or reach out to our AWS Technical Account Manager," says Peterson. "On the AWS Support App in Slack, the ability to create the support ticket and initiate contact with a live person right away directly from our Slack channel is invaluable for us."
SUPINFO Creates 5-Year Master of Engineering Degree Implementing AWS Education Programs | Case Study | AWS
SUPINFO International University increased employability for its students and gave them hands-on cloud experience by implementing AWS Academy courses into its master of engineering curriculum.

AWS Academy empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.

Opportunity | Helping Future Engineers Develop Cloud Computing Skills

SUPINFO's master of engineering degree program officially launched in 2020, with approximately 200 students enrolled per year. During the program, students take several courses through AWS Academy. Second-year students take AWS Academy Cloud Foundations, an introductory course intended for students who seek an overall understanding of cloud computing concepts. In their fourth year, students take AWS Academy Cloud Architecting, an intermediate-level course that covers the fundamentals of building IT infrastructure on AWS.

By implementing AWS Education Programs into its master of engineering curriculum, SUPINFO is preparing future cloud talent with the skills that they need to succeed in the growing industry. The school currently has six AWS-accredited educators on its board and plans to upskill more instructors as the program expands.

As part of this program, students also gain real-world work experience because SUPINFO requires students to participate in internships with employer partners. "After their first year, students will begin their internships by working with our employer partners for 3 days a week, then another 2 days at school," says Paul-Antoine Kempf, an educator at SUPINFO. "This approach is central to the education at SUPINFO.
By giving students opportunities in development-related jobs, it trains them and helps develop confidence with the latest cloud platforms and tools."

While creating the master of engineering program, SUPINFO wanted to tap into the potential of the cloud and equip its students with skills that are in high demand among potential employers. Seeking a hands-on solution, it engaged AWS Education Programs and implemented several AWS Academy courses into its master of engineering curriculum. Additionally, SUPINFO provided opportunities for students to earn AWS Certifications, which validate technical skills and cloud expertise. To facilitate the launch of the degree program, the AWS team assigned a technical program manager to train SUPINFO educators quickly, helping them prepare to teach students by the program's start date.

To provide practical, hands-on experiences for students, SUPINFO uses tools from AWS Academy Learner Labs. These lab environments provide opportunities for educators to bring their own assignments and invite their students to get experience using select AWS services. "Being able to manipulate and experiment with tools on AWS is the most constructive approach to learning," says Kempf. "All the classes have AWS Academy Learner Labs built in, and it is the reason why the program has been so successful."

Outcome | Continuing to Build Cloud Skills for the Future

"Being able to manipulate and experiment with tools on AWS is the most constructive approach to learning.
All the classes have AWS Academy Learner Labs built in, and it is the reason why the program has been so successful."

SUPINFO students have significantly improved their employability and are better prepared for the cloud workforce by participating in the master of engineering program. By earning AWS Certifications, students can demonstrate their cloud expertise to future employers. They can also apply the skills that they learned through their internships and AWS Academy Learner Labs activities to their future roles, streamlining their transitions from school to the workforce. "SUPINFO's master of engineering curriculum offers a comprehensive approach to cloud education," says Lounes Behloul, a student at SUPINFO. "Aside from connecting students to potential employers and helping us gain work experience, I appreciate the hands-on nature of the activities. The ability to take the tools that I learn at school then immediately apply them to my job in the future is invaluable."

SUPINFO will also increase specializations within the curriculum, especially for fourth- and fifth-year students. The institution will also expand its adoption of AWS Education Programs to complement these specializations, further demonstrating its commitment to building a highly skilled, well-trained cloud workforce of the future.

About Company
Based in France, SUPINFO International University (SUPINFO) is a private higher education institution specializing in computer science. Founded in 1965, it is a member of IONIS Education Group, which serves more than 30,000 students worldwide.

Solution | Implementing AWS Education Programs for Students
In addition to teaching valuable cloud skills through these mandatory courses, SUPINFO's program provides students with the option to earn industry-recognized AWS Certifications. For example, AWS Academy Cloud Architecting teaches students the skills that they need to pursue AWS Certified Solutions Architect–Associate. This AWS Certification validates the ability to design and implement distributed systems on AWS, with a focus on designing cost- and performance-optimized solutions and demonstrating a strong understanding of the AWS Well-Architected Framework. Students have also earned AWS Certified Cloud Practitioner, which validates cloud fluency and foundational AWS knowledge.

Working with AWS Education Programs, which prepares diverse learners for in-demand, entry-level cloud roles around the world, SUPINFO implemented multiple courses from AWS Academy into its 5-year master of engineering curriculum. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. Through this initiative, SUPINFO increased employability for its students by giving them hands-on experience using AWS services and the opportunity to earn industry-recognized credentials.

Cloud computing is an integral part of SUPINFO's master of engineering degree program, and for good reason.
Driven by digital transformation across various industries in Europe, the United States' $35 billion cloud computing market is expected to grow at a compound annual growth rate of 15 percent between 2020 and 2028. Additionally, the French government announced a €1.8 billion support plan for the nation's cloud computing sector to keep the country competitive on a global scale. For SUPINFO International University (SUPINFO), cloud knowledge is a critical part of higher education curriculum. The educational institution, which specializes in computer science, understands the potential of the cloud computing market as it has grown and expanded at a steady pace. To equip its engineering students with in-demand skills for careers in the cloud, SUPINFO turned to Amazon Web Services (AWS).
SURF Drives Ground-Breaking Research and Accelerates Time to Insight Using AWS
In late 2020, SURF called for proposals to support research projects using Amazon Web Services (AWS) across the Netherlands. SURF supports these projects with 160 hours of consultancy and €5,000 to spend on AWS cloud consumption.

SURF is the National Research and Education Network (NREN) in the Netherlands. It is one of the most active and innovative NRENs in GÉANT, the pan-European data network for the research and education community. Headquartered in Utrecht, SURF facilitates collaboration on projects ranging from biological science to earth observation. SURF is a membership organization comprising more than 100 institutions, including research universities, universities of applied sciences, secondary vocational educational institutions, and university medical centers.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.

Project Crunchbase involves scraping the text data from 30,000 start-ups to identify which are developing products or services to limit CO2 emissions. The research team deployed automated compute infrastructure, which adjusts compute resources as needed, to perform the data analysis.
SURF Facilitates Collaboration on Projects Ranging from Biological Science to Earth Observation

Powering Cutting-Edge Research

The research team has combined the dataset with its own data to improve the accuracy of analysis. It uses AWS Fargate Spot, a purchase option for AWS Fargate that enables developers to launch tasks on spare capacity at a steep discount, and AWS Batch to run multiple computing tasks relating to the data.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

The University of Twente phenology project looks at the impact of climate change on plants by using geodata such as timings of the start of the spring season over many years. The challenge was to design an architecture that made it possible to scale the analysis in resolution of time or space, as well as use AWS to integrate satellite data.

SURF has a long history of IT and data expertise, but using AWS presents a new learning curve for the organization. The SURF team regularly consults with AWS to find innovative solutions for particular use cases, tailored specifically to unique research needs. "Using AWS, and cloud generally, you need to keep on top of the art of the possible," says Griffioen. "New products and services are going live every week.
We need to learn how to knit all these things together, so that researchers get the best from our services." SURF and AWS worked closely together to support these projects, which will continue through 2022, when SURF plans to publish another open call for proposals. With these initiatives, AWS is supporting SURF on its mission to bring cloud power to research communities, shortening the time from research to scientific discovery.

Getting the Most Value from Data

SURF is supporting a number of ground-breaking research projects using AWS.

Project MinE from University Medical Center (UMC) Utrecht is using the TOPMed genomics dataset in a project involving the movement of DNA sequencing data relating to amyotrophic lateral sclerosis (ALS), a form of motor neurone disease, from the US to Europe. The initial size of this dataset was 6 petabytes and could already be partially processed using AWS, reducing its size.

Project Phenology Achieves Resolution of Time and Space Using Amazon EMR

Project Crunchbase Scrapes Data from 30,000 Companies Using AWS Lambda and Amazon SQS
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Amazon EMR is a cloud big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning (ML) applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto.

The combination of SURF and AWS has helped accelerate the development of services for research projects and opens new opportunities for researchers. "As datasets continue to grow, they become more expensive to store and move," says Robert Griffioen, program coordinator, scalable data analytics team, SURF. "Using AWS, we can mix services and find the best solutions for researchers to not only manage their data, but also store, stage, and share it, as well as analyze it in different ways."

SURF is the National Research and Education Network (NREN) in the Netherlands, a collaborative organization for IT in Dutch education and research.
Institutions in this community work together in the SURF cooperative to develop the best possible digital services and encourage knowledge sharing through continuous innovation. SURF has 350 employees, 113 connected institutions, and 1 million users.

The research team deployed Amazon EMR, a managed cluster platform that simplifies the running of big data frameworks. The ability to scale analysis as required was achieved by using infrastructure as code, which makes it easy to configure new architecture and pay only for what is used.

SURF, the National Research and Education Network (NREN), has a publicly funded mission to bring the latest IT capabilities to education and research communities. In 2020, SURF called for proposals to support research projects using Amazon Web Services (AWS) across the Netherlands. It has since used AWS for projects focused on motor neurone disease, machine learning, and geodata for ecological insights. Bringing research loads to the cloud is shortening the journey from research to scientific discovery and making data more shareable and accessible.

Using AWS, SURF supports researchers by bringing the power of the cloud to their research and helping make data easier to replicate. SURF uses Terraform to deploy infrastructure as code and uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service for running and scaling Kubernetes applications in the cloud. Containers can be deployed in the cloud and on premises, so that data and research can be far more portable.
"Doing the best research today is not only about the work itself," says Griffioen, "but also about how easily and securely data can be moved, shared, and reproduced."

The previous setup consisted of an Amazon EC2 solution that ran on 60 servers. Now, the researchers are using AWS Lambda, a serverless, event-driven compute service for running code, while Amazon Simple Queue Service (SQS) sequences workflows. The results are saved in Amazon Simple Storage Service (Amazon S3), which can retrieve any amount of data from anywhere. Using this infrastructure, the research team is able to scrape data from the websites in a controlled manner, improving monitoring, cutting costs, and making the tools available for future scraping projects.

Project AutoML is helping to tune machine learning algorithms in a data-driven way. The process of benchmarking machine learning models requires a complex orchestration of hundreds of compute tasks on a large infrastructure stack. The research group was already using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for workloads, but it wanted to look further into machine learning capabilities and cost-saving opportunities. It created an Amazon Machine Image (AMI), which helps experiments run faster. And, using Amazon EC2 Spot Instances, the research team has been able to access all the compute resources it needs while containing costs.
In the AutoML project, AWS co-developed a more cost-effective deployment of the AutoML benchmark framework in the cloud, reducing benchmark runtime and cutting infrastructure costs.

Project MinE Shifts DNA Sequencing Data Using AWS Fargate Spot and AWS Batch

Project AutoML Accelerates Experiments Using Amazon Machine Image
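The Crunchbase scraping pipeline described above, in which AWS Lambda functions consume queued jobs from Amazon SQS and save results to Amazon S3, could look roughly like the following minimal sketch. The bucket name, message format, and fetch helper are illustrative assumptions, not SURF's actual implementation.

```python
# Hypothetical sketch of the Crunchbase-style pipeline: SQS delivers one
# company website per message, a Lambda function scrapes it, and the raw
# text is written to S3. Bucket name and message shape are invented.
import json
import urllib.request

RESULTS_BUCKET = "example-scrape-results"  # assumption, not SURF's bucket

def fetch_text(url):
    """Download the raw page text for one company site."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def plan_uploads(event, fetch=fetch_text):
    """Turn an SQS event batch into (bucket, key, body) upload plans.
    Separated from the S3 call so the logic is testable without AWS."""
    uploads = []
    for record in event["Records"]:
        job = json.loads(record["body"])   # e.g. {"company": "acme", "url": "..."}
        text = fetch(job["url"])
        key = f"raw/{job['company']}.txt"
        uploads.append((RESULTS_BUCKET, key, text))
    return uploads

def handler(event, context):
    """Lambda entry point: scrape each queued site and store the text in S3."""
    import boto3                            # available in the Lambda runtime
    s3 = boto3.client("s3")
    for bucket, key, body in plan_uploads(event):
        s3.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
    return {"stored": len(event["Records"])}
```

Because SQS controls the rate at which messages are delivered to Lambda, this shape gives the "controlled manner" of scraping the text mentions: concurrency and retry behavior live in the queue and event source mapping rather than in 60 always-on servers.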
Syngenta Case Study | Amazon Web Services
Migrating SAP to AWS for Scalability and High Availability

Around 70–80 percent of Syngenta's core business runs on an SAP environment, using business-critical applications such as SAP ECC, SAP PO, SAP BW, SAP SLT, and SAP S/4HANA. For years, the company hosted these business-critical SAP applications in a traditional data center, incurring high costs and technological constraints around hardware capacity, server sizes, network bandwidth, and hosting next-generation applications. Hardware was refreshed once every four years, leading to technology debt.

The migration kicked off in July 2020 and included more than 45 SAP applications, over 2,500 interfaces, and a new implementation of the SAP S/4HANA Central Finance application. After migrating to AWS, the Syngenta SAP environment now has over 450 virtual machines running on Amazon Elastic Compute Cloud (Amazon EC2) instances, with over 600 TB of data stored in Amazon Elastic Block Store (Amazon EBS). For this migration, Syngenta collaborated with AWS Partner DXC Technology for technical migration assistance and Infosys for application testing support.

By migrating to AWS, Syngenta has improved its SAP application performance by up to 20 percent. As a result of this performance improvement, end user productivity also increased.

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners.

To learn more, visit aws.amazon.com/sap/.
By gaining scalability, flexibility, and cost savings on AWS, Syngenta can allocate more resources toward innovation. The company has started focusing on platform modernization by leveraging new technologies like Auto Scaling and AWS Backint Agent. Additionally, Syngenta is exploring the adoption of AWS Launch Wizard for SAP to easily provision and configure SAP S/4HANA on AWS. By implementing AWS Launch Wizard and other AWS services, Syngenta will continue to focus on innovation and modernizing its SAP landscape.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs).

About Syngenta
Syngenta, based in Basel, Switzerland, is a global, science-based agricultural technology company with a presence in over 90 countries across the globe. Syngenta innovates with world-class science to protect crops and improve seeds. The company has more than 30,000 employees and reported 2021 global sales of $16.7 billion.

Improving SAP Performance by up to 20%

In addition, by adopting AWS best practices, Syngenta followed the principle of having one application per server.
This significantly increases application availability and minimizes business downtime. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Deutsch Syngenta adopted a Multi-Availability Zone for SAP on AWS High Availability Setup, which comprised SAP load balancers and Elastic Load Balancing to automatically distribute incoming application traffic across multiple targets to improve scalability. Additionally, Syngenta adopted Amazon CloudWatch to monitor application and platform performance and optimize compute resource usage. Tiếng Việt The Syngenta IT team decided that moving its SAP applications to the public cloud was the right solution for the company’s challenges. After an initial assessment period and discussions with different cloud providers, the organization chose Amazon Web Services (AWS) as its cloud provider because AWS offered the most flexibility and the right technical features. Italiano ไทย Amazon CloudWatch Scaling our SAP applications seamlessly on AWS not only helps us meet rapid growth but also helps us manage seasonal demand. This was not feasible in our previous on-premises environment.” Syngenta Improves Application Performance and Reduces Costs with SAP on AWS 2022 Syngenta is a global company with headquarters in Switzerland. The company has more than 30,000 employees in over 90 countries working to transform how crops are grown and protected. Syngenta innovates with world-class science to protect crops and improve seeds. Its two core businesses support farmers with technologies, knowledge, and services so they can sustainably provide the world with better food. The SAP on AWS migration was a success for the Syngenta Global IT department. Aside from improvements in system availability and performance, operations costs have gone down by 28 percent since the migration. 
Syngenta will now be able to proactively forecast cost savings in the future. Now, Syngenta can scale its SAP environment based on demand, which helps the company better support its yearly business growth. Furthermore, as a highly seasonal business, Syngenta can upscale or downscale compute capacity on demand, with no limitation. “Scaling our SAP applications seamlessly on AWS not only helps us meet rapid growth but also helps us manage seasonal demand. This was not feasible in our previous on-premises environment,” says Sohil Laad, SAP Operations & Technology Lead at Syngenta. Português
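The seasonal scaling Laad describes is commonly implemented with scheduled scaling actions on an Auto Scaling group of SAP application servers. The fragment below is a hypothetical illustration only, not Syngenta's configuration: the group name, dates, and capacities are invented. It shows the shape of input accepted by the Amazon EC2 Auto Scaling PutScheduledUpdateGroupAction API:

```json
{
  "AutoScalingGroupName": "sap-app-servers",
  "ScheduledActionName": "planting-season-peak",
  "StartTime": "2022-09-01T00:00:00Z",
  "EndTime": "2022-11-30T00:00:00Z",
  "MinSize": 4,
  "MaxSize": 12,
  "DesiredCapacity": 8
}
```

A matching action at the end of the season would scale the group back down, which is the on-demand downscaling the quote refers to.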
Taggle Systems Case Study _ Amazon Web Services.txt
Taggle IoT Platform Tracks Thousands of Smart Water Sensors to Help Utilities Cut Costs

2022

About Taggle Systems

Taggle is Australia's leading supplier of smart water solutions for local and regional councils and water utilities. The company provides a complete smart water solution that's open, interoperable, and scalable, with more than 270,000 meters and sensors deployed across Australia.

Benefits of AWS

- Scales to ingest data from 80,000 new sensors across Australia in 2022
- Integrates seamlessly with third-party devices, applications, and networks
- Helps utilities and councils cut costs
- Reduces time to market for new features by 15%

More than 50 councils and water utilities across Australia rely on Taggle smart water solutions to gather data from Internet of Things (IoT) sensors and meters. These provide insights on leak detection, demand management, network optimization, customer engagement, and billing. Taggle's sensors read over 2 billion data points annually, accumulating data on water flow for metering, water levels for floodplains, water catchment, and wastewater, water pressure for network and pipeline management, and rainfall. Taggle's network delivers more than 5 million readings to councils and water utilities daily. Although Taggle considered several IoT technologies to support its platform, Amazon Web Services (AWS) best met its business requirements for scalability. "We chose AWS because it offered the technology stack and production environments to meet our needs now and into the future," says Geoff Bowker, cloud solutions director at Taggle.

As Taggle grew, it needed an IT environment that could scale easily to support high volumes of IoT data as well as analytical and visualization applications. "We're looking to add about 80,000 more sensors in the next 12–18 months, with each one reporting data hourly at a minimum, and in some cases every 15 minutes where there are alarming conditions such as rapidly rising flood water or sewer blockage," says Bowker. "While our load is generally predictable, we do experience sudden spikes which can lead to rapid increases in IoT platform demand at critical times." Processing this data and meeting service level agreements for its customers is why Taggle required a platform capable of scaling responsively. Taggle also sought a technology solution to support its growing ecosystem of third-party devices and radio networks that help deliver data to asset management, emergency management, and supervisory control and data acquisition (SCADA) applications.

Streaming and Ingesting IoT Data on an AWS-Based Platform

The Taggle IoT platform runs on AWS, using Amazon Kinesis Data Streams to ingest and store streaming data in real time from sensors and meters in the field. The platform also uses AWS Lambda functions to process ingested sensor and meter data, converting it for consumption through the company's visualization and analytics packages or for export to external analytic or management systems. Taggle relies on Amazon Relational Database Service (Amazon RDS) to store live data and Amazon Simple Storage Service (Amazon S3) to store archived data for querying. The solution database currently holds over one billion rows of data, all encrypted using AWS security components to meet customers' stringent data privacy requirements. Additionally, the Taggle engineering team runs its development and test environments on AWS.

Scaling to Reliably Ingest Data from 80,000 New Sensors

By running its IoT platform on AWS, Taggle can scale on demand to support high volumes of IoT data as the company grows. "Using AWS, we know we can scale as necessary to accommodate the 80,000 additional sensors we're rolling out this year," says Bowker. "We're confident that we can continue our fast pace of growth with AWS."

Taggle is also taking advantage of the high availability and reliability of AWS services to meet its customers' requirements for data continuity. "Our IoT platform has a range of redundancy features built into it. So, if we lose transmission from a tag or have an extended outage, we can restore data continuity quickly," Bowker says. "This is critical in helping our customers avoid data loss. It also ensures they can identify water leaks or loss within their network, as that can only happen with continuity of data to read."

Helping Councils and Water Utilities Cut Costs

With Taggle's AWS-based IoT platform, councils and water utilities across Australia are reducing their operating costs. "Our solution helps customers defer some of their capital expenses by saving water through identifying leaks, which is money they're losing," Bowker says. According to Taggle, industry benchmarks indicate that non-revenue water (water that has been produced but is "lost" before it reaches the customer) can make up to 25 percent of water flows. "By reducing consumption on the consumer side of the network, our customers can defer capital expenditures on additional storage, water treatment, and distribution capacity."

Simplifying Integration and Accelerating Time to Market

One of Taggle's challenges has been finding ways to integrate with other systems cost-effectively and quickly. For example, there are thousands of billing system vendors Taggle would need to work with if it expands to the US. With AWS, Taggle can integrate seamlessly with third-party devices, applications, and radio networks. "We provide an end-to-end IoT solution, and AWS helps us support our own proprietary radio network to collect data from devices, as well as third-party devices and networks," says Bowker.

The company's developers also rely on AWS to reduce development time, decreasing time to market by 15 percent for new features and solution enhancements. For example, Taggle recently developed a range of new tag types that integrate with the IoT platform. "AWS has helped us optimize performance and throughput for the tags on our existing system as our sales volumes have increased," says Bowker.

Taggle is looking to enhance its relationship with AWS by joining the AWS Partner Program. "We want to take advantage of the AWS Partner Network to leverage the AWS brand and scalability," says Bowker. "We already have a dominant market share in Australia but have more room for growth in the smart water space, both locally and internationally. Partnering with AWS will certainly make a difference."

AWS Services Used

- Amazon Kinesis Data Streams: a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale.
- AWS Lambda: a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
- Amazon Relational Database Service (Amazon RDS): a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

To learn more, visit aws.amazon.com/iot/.
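The ingest pipeline described under "Streaming and Ingesting IoT Data on an AWS-Based Platform" (Amazon Kinesis Data Streams feeding AWS Lambda, with results bound for Amazon RDS and Amazon S3) can be sketched as a Lambda handler like the one below. This is an illustrative sketch only: the payload schema, the millimetre-to-metre conversion, and the alarm threshold are assumptions made for the example, not Taggle's actual implementation.

```python
import base64
import json

# Assumed threshold: a water-level rise (in metres between consecutive
# readings) at or above this flags an alarm condition, such as the
# rapidly rising flood water mentioned in the article.
ALARM_RISE_M = 0.5

def handler(event, context):
    """Decode Kinesis records and convert raw sensor readings.

    Lambda delivers Kinesis record payloads base64-encoded; each payload
    here is assumed to be JSON of the form
    {"sensor_id": ..., "level_mm": ..., "prev_level_mm": ...}.
    """
    processed, alarms = [], []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        reading = {
            "sensor_id": payload["sensor_id"],
            "level_m": payload["level_mm"] / 1000.0,  # convert mm to metres
        }
        rise_m = (payload["level_mm"] - payload["prev_level_mm"]) / 1000.0
        if rise_m >= ALARM_RISE_M:
            alarms.append(reading["sensor_id"])
        processed.append(reading)
    # In the real pipeline the converted readings would be written to
    # Amazon RDS (live data) and Amazon S3 (archive); returning them
    # keeps this sketch self-contained.
    return {"processed": processed, "alarms": alarms}
```

A real handler would persist `processed` and route alarm sensors onto the faster 15-minute reporting path rather than returning them.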
Takeda Accelerates Digital Transformation by Migrating to AWS _ Takeda Case Study _ AWS.txt
Takeda Accelerates Digital Transformation by Migrating to AWS

2023

Learn how Takeda, a 240-year-old company, uses AWS to increase operational agility, reduce technical debt, and modernize its business. When Takeda's aging technology landscape hindered its pace of innovation, the company knew that it was time to embark on a cloud transformation journey.

About Takeda

Founded in 1781, Takeda Pharmaceutical Company Limited (Takeda) is a global, values-based, research and development–driven biopharmaceutical company headquartered in Japan. It strives to discover and deliver life-transforming treatments, guided by its commitment to patients, people, and the planet. With a deep focus on patients, trust, and reputation, it aims to become the world's most trusted digital global biopharmaceutical company.

Results at a glance:

- 80% of core data center applications migrated
- 7 out of 13 data centers closed, with 2 soon to close
- 8,000 average daily virtual machines running on AWS
- 1,918.5 metric tons of carbon removed
- Enhanced sustainability and automated compliance

Opportunity | Using AWS Services to Modernize the Digital Landscape for Takeda

With a mission to improve health and create a brighter future for the world, Takeda wanted to respond to patients' needs with greater speed and agility and to be at the intersection of human health, technology, and business growth. But, having grown through acquisitions over the years, it needed to deal with the weight of the past. Despite a significant application rationalization initiative, Takeda still had thousands of business applications and significant technology debt, and its IT infrastructure needed to be modernized. "Our Data, Digital, and Technology team's energy was spent mostly on maintaining the old, not building the new," says Ryan Pehrson, head of DevOps and cloud enablement at Takeda. "We could not support or build the latest technology in our data centers. Though we had and have great technology professionals on staff, we neither had the skills nor the funding to keep up with the leading edge of innovation."

From a data perspective, Takeda did not have a centralized catalog, pipeline, or data lake. As a result, its teams were purchasing commercial datasets repeatedly without making them accessible internally, and there was no mechanism to share the data it produced with its partners. "There was no single source of truth," says Pehrson. "This was a difficult situation for the Data, Digital, and Technology team, and we needed to disrupt ourselves."

Solution | Accelerating Digital Transformation by Migrating over 600 Applications to AWS

Takeda turned to Amazon Web Services (AWS) for Project Fuji, an initiative to empower self-service, on-demand access to cloud technologies across the organization. With this project, it aimed to migrate 80 percent of the business applications in its core data centers to AWS and other software-as-a-service solutions and to rationalize its technology estate. The resulting business transformation, built on modernized solutions and accelerated data services, established an internal engine for innovation and equipped employees with new skills and ways of working.

Takeda chose AWS because of its wide range of cloud offerings and high adoption among life science companies. The team also felt that AWS had a strong compliance and security posture with its shared responsibility model, further validated by certifications and third-party audited artifacts. Additionally, Takeda appreciated the contributions from AWS to the open-science community and global alliances to progress scientific innovations. "We believed we could reset expectations of what the Data, Digital, and Technology team can do and become the trusted innovation partner that our business always wanted," says Pehrson. "We needed a catalyst and more capabilities than we had within Takeda to get it done."

In 2019, Takeda chose to migrate to AWS. It embarked on an intense 2-year journey toward cloud modernization to create an innovation engine that could drive better patient outcomes. After analyzing each of its applications, the company used an agile migration factory approach to shift what was necessary to the cloud and close 10 of its 13 data centers, and Project Fuji was born.

With the support of the AWS team and Takeda's technology partners, Takeda followed a rinse-and-repeat model to migrate its applications to AWS. In 8 months, the company migrated 80 percent of its applications to six AWS Regions, the locations around the world where AWS clusters data centers. The 615 migrated applications amount to over 10 PB of data, and the company runs over 8,000 average daily virtual machines on AWS. By migrating to AWS, Takeda could retire 7 of its 13 data centers, with 2 more to close soon, improving its operational agility. "We certainly wouldn't have access to the advanced technologies that we have today if we had stayed in our data centers," says Pehrson. "We wouldn't have any of the cloud-native innovations available, and we would still be stuck in the ongoing overhead and administration of all that technology." By getting out of its data centers, Takeda has also reduced its carbon footprint by 1,918.5 metric tons, improving its environmental impact.

Now, Takeda is better equipped to engage in powerful digital initiatives and respond to the world's challenges with agility. At the beginning of the COVID-19 pandemic, pharmaceutical companies came together to form the CoVIg-19 Plasma Alliance, which aimed to use immunoglobulin therapy to treat COVID-19 patients and help them recover faster. In 1 weekend, Takeda could spin up a secure, collaborative environment using AWS Control Tower, which is used to set up and govern a secure, multi-account AWS environment, and AWS Lake Formation, which creates secure data lakes and makes data available for wide-ranging analytics. As a result, the alliance proceeded rapidly to a phase III clinical trial for hyperimmune therapy.

Outcome | Exploring New Technological Possibilities for Healthcare on the AWS Cloud

Most importantly, Takeda has built its own digital muscle by effectively modernizing its technology landscape, which is promoting cloud-based innovation across its business units. "We are creating a digital flywheel, of which the cloud migration and modernization is just the first step. We are finding opportunities to encourage new ways of working and to empower digital initiatives across our organization," says Pehrson. "Project Fuji was a digital transformation journey from the outset. Our digital journey addressed all the dimensions of what we needed to do to achieve patient outcomes. Certainly, the foundation of this was the technology infrastructure on AWS and data as a digital solution facilitator."

On AWS, Takeda is exploring new digital ambitions. In the future, it plans to use AWS to develop telehealth, personalized treatment, and smart manufacturing initiatives. "We don't actually know how the healthcare landscape of tomorrow will be composed, but we know this: we will continue to build on AWS," says Pehrson.

Using AWS services, the team at Takeda knows that access to advanced technologies is one API call away. "Whether you're climbing a mountain or changing a 240-year-old company, the real alchemy is to bring forth the best in your people and use technology for the benefit of the patient," says Pehrson. "It's our will in using AWS resources that will drive the future of Takeda."

AWS Services Used

- AWS Control Tower: simplifies AWS experiences by orchestrating multiple AWS services on your behalf while maintaining the security and compliance needs of your organization.
- AWS Lake Formation: easily creates secure data lakes, making data available for wide-ranging analytics.
- AWS Regions: the AWS Cloud spans 99 Availability Zones within 31 geographic Regions around the world, with announced plans for 12 more Availability Zones and 4 more AWS Regions in Canada, Israel, New Zealand, and Thailand.
Tally Solutions _ Amazon Web Services.txt
Tally Solutions Securely Streams Its ERP Solution with NICE DCV, Providing Remote Access Anytime, Anywhere

2023

About Tally Solutions Private Ltd

Tally Solutions, headquartered in India, is a technology company that delivers business software for small and medium businesses. Founded in 1986, Tally Solutions caters to millions of users across a range of industries in more than 120 countries.

Benefits

- Achieves 42% cost savings
- Onboards new users in 5–10 minutes
- Gives users reliable remote access to their ERP application anytime, anywhere
- Securely streams software to thousands of customers

Giving Users Remote Application Access from Anywhere

Around 80% of small and medium businesses (SMBs) in India rely on Tally's business management software to manage their accounting, inventory, taxation compliance, and overall finances. In the last 36 years, Tally Solutions Private Ltd. has provided enterprise resource planning (ERP) software to more than 2 million businesses and over 7 million users across the globe. The company's flagship business management software, TallyPrime, provides a modern experience and features for SMBs to run their businesses seamlessly. Joyce Ray, head of India Business at Tally, says, "Over the past 36 years, we've been able to simplify the lives of millions of entrepreneurs across India by providing everything SMBs need to run their businesses smoothly."

With the onset of the pandemic in early 2020, businesses were forced to adapt quickly to new ways of working. To access TallyPrime remotely, the main system on which it was installed needed to be switched on and connected. With offices shut down during the pandemic, maintaining these systems and connections became more challenging, and demand grew for anytime, anywhere access to TallyPrime, which was previously managed through remote access. Tally therefore sought a cloud-based application streaming solution. "We considered various remote display protocol solutions for high performance and opted for a multi-modal solution supported by AWS," Joyce says. The company chose NICE DCV, an AWS high-performance remote display protocol, to securely stream the hosted Tally application. "We chose NICE DCV because of flexibility and cost optimization, alongside the experience and support of AWS," says Joyce.

Using NICE DCV to Stream TallyPrime on AWS

Using NICE DCV, TallyPrime runs remotely on Amazon Elastic Compute Cloud (Amazon EC2) instances and streams the application to on-premises client machines. The solution leverages AWS Auto Scaling, which adjusts capacity based on demand, and Amazon Elastic Container Service (Amazon ECS) to run a containerized application environment: each container is associated with a user, and a unique instance of the NICE DCV server, assigned on a per-user basis, handles the streaming end-user session setup and rendering.

Tally and AWS collaborated to architect a scalable, reliable, and cost-effective solution that can serve the unique needs of the Indian SMB market. AWS solution architects and prototyping engineers worked with Tally engineers to design, prototype, build, and test innovative features for a seamless user experience, helping Tally rapidly iterate by testing multiple solutions and selecting the best techno-commercial fit. Tally appointed Elcom Digital as its national distributor for marketing and sales of TallyPrime through Tally Partners, and Elcom implements the solution for customers. Rapyder Solutions also assisted in co-developing and testing the software, while AWS Enterprise Support ensured successful deployment and rapid on-demand support.

Ensuring Secure Streaming while Reducing Costs

Tally enhanced security through AWS Key Management Service (AWS KMS) by creating and controlling cryptographic keys. Application-level two-factor authentication based on state-of-the-art asymmetric cryptography adds an additional layer of mandatory authentication for every user accessing the system. Tally is securely streaming its ERP software to thousands of customers by running on NICE DCV, which integrates with the company's two-factor authentication process and provides custom security layers that, alongside AWS KMS encryption, help enhance the security of TallyPrime.

Tally has also achieved cost optimization and affordability by migrating from a Windows to a Linux environment, resulting in approximately 42% cost savings. Furthermore, NICE DCV is offered as a complimentary service running on Amazon EC2 with no additional charges, allowing Tally to offer TallyPrime at a competitive price to its customers.

Scaling the User Base

Working remotely with TallyPrime on AWS is simple for Tally's customers, who only need a Tally license and a TallyPrime on AWS pack from Elcom to begin working through NICE DCV. "With TallyPrime powered by AWS, customers are onboarded by Elcom in 5–10 minutes. It's a seamless process," Joyce says. "There's no need for training or excessive time spent in learning the solution."

Using NICE DCV, Tally provides its customers with anytime, anywhere access to TallyPrime regardless of location. "Elcom now has over 15,000 TallyPrime users empowered by AWS, thanks to NICE DCV enabling them to work from anywhere on any device at any time," says Joyce. "Many of these customers are growing enterprises with multiple locations, and this greatly simplifies things for them. Whether there are travel restrictions or other interruptions, users have more flexibility now."

With TallyPrime powered by AWS, Tally is set to scale its platform seamlessly as user traffic increases. AWS offers the scalability and reliability needed to ensure the best experience for Tally's global customers.

AWS Services Used

- Amazon Elastic Compute Cloud (Amazon EC2): offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model.
- NICE DCV: a high-performance remote display protocol that provides customers with a secure way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions.
- AWS Auto Scaling: monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
- AWS Key Management Service (AWS KMS): lets you create, manage, and control cryptographic keys across your applications and AWS services.

To learn more, visit aws.amazon.com/smart-business.
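The per-user container model described above (one container and one dedicated NICE DCV server instance per user) implies a small amount of session-brokering logic on top of Amazon ECS. The sketch below illustrates only that bookkeeping idea; the class, the method names, and the session-id format are hypothetical and are not part of Tally's system or any NICE DCV API.

```python
import itertools

class SessionBroker:
    """Toy broker mapping each authenticated user to a dedicated session.

    Mirrors the pattern in the article: one container (and one NICE DCV
    server instance) per user, reused if the user reconnects while the
    session is still alive.
    """

    def __init__(self):
        self._sessions = {}             # user_id -> session_id
        self._ids = itertools.count(1)  # monotonically increasing ids

    def acquire(self, user_id):
        """Return the user's existing session, or allocate a new one."""
        if user_id not in self._sessions:
            # In the real system this step would launch an Amazon ECS task
            # running a NICE DCV server dedicated to this user.
            self._sessions[user_id] = f"dcv-session-{next(self._ids)}"
        return self._sessions[user_id]

    def release(self, user_id):
        """Tear down a user's session when they log out."""
        self._sessions.pop(user_id, None)
```

A reconnecting user lands back in the same session, while a fresh login after logout gets a new container, which is what makes per-user isolation straightforward to reason about.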