White Paper
Abstract
ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
As agile models become more prominent in software development, testing teams must shift from slow manual testing to automated validation models that accelerate time to market. Currently, automation test suites remain valid only for a few releases, placing greater pressure on testing teams to revamp them so they can keep pace with frequent change requests. To address these challenges, artificial intelligence and machine learning (AI/ML) are emerging as viable alternatives to traditional automation test suites. This paper examines the existing challenges of traditional test automation. It also discusses five use cases and solutions that explain how AI/ML can resolve these challenges while providing complete, intelligent automation with little or no human intervention, enabling testing teams to become truly agile.
White Paper
Introduction
ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Industry reports reveal that many enterprise initiatives aiming to completely automate quality assurance (QA) fail for various reasons, resulting in low motivation to adopt automation. It is, in fact, the challenges involved in automating QA that have prevented its evolution into a complete automation model. Despite these challenges, automation continues to be a popular initiative in today's digital world, and testing communities agree that a majority of validation processes are repetitive. While traditional automation typically checks whether things work as they are supposed to, new technologies like artificial intelligence and machine learning (AI/ML) can support the evolution of QA into a completely automated model that requires minimal or no human intervention.
White Paper
Main Content
ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Pain points of complete automation

Let us look at the most pertinent problems that lead to low automation adoption:
- Frequent requirement changes: While most applications are fluid and revised constantly, the corresponding automation test suite is not. Keeping up with changing requirements manually impedes complete automation. Moreover, maintaining automated test suites becomes increasingly complicated over time, particularly if there are frequent changes in the application under test (AUT).
- Mere scripting is not automation: Testing teams must evolve beyond traditional test automation that involves frequent manual script-writing.
- Inability to utilize reusable assets: It is possible to identify reusable components only after a few iterations of test release cycles. However, modularizing these so they can be reused everywhere is a grueling task.
- Talent scarcity: Finding software development engineers in test (SDETs) with the right mix of technical skills and a QA mindset is a significant challenge.

QA teams today are looking for alternatives to the current slow, manual process of creating test scripts using existing methodologies. It is evident that intelligent automation (automation leveraging AI/ML) is the need of the hour. How can enterprises achieve complete automation in testing?

Use cases and solutions for intelligent test automation

The key to achieving complete automation lies in using AI/ML as an automation lever instead of relegating it to scripting. Optimizing manual test cases using AI/ML is a good start; helping the application self-learn and identify test suites with reusable assets is a more advanced utility for automated test suite creation. Leveraging AI in test suite automation falls into two main categories: 'automation test suite creation using various inputs' and 'automation test suite repository maintenance'. The following sections discuss use cases for intelligent automation solutions under these two categories, which address the challenges of end-to-end automation.

A. Automation test suite creation using various inputs

Use case 1: Optimizing a manual test suite
Testing teams typically have a large set of manual test cases for regression testing, written by many people over a period of time. Consequently, this leads to overlapping cases and increases the burden on automation experts when creating the automation test suite. Moreover, as the test case suite grows larger, it becomes difficult to find unique test cases, leading to increased execution effort and cost.

Solution 1: Use a clustering approach to reduce effort and duplication
A clustering approach can be used to group similar manual test cases. This helps teams easily recognize identical test cases, thereby reducing the size of the regression suite without the risk of missed coverage. During automation, only the most optimized test cases are considered, significantly reducing effort and eliminating duplicates.
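To make the clustering idea concrete, here is a minimal sketch of how similar manual test cases might be grouped by the wording of their steps. It assumes test cases are available as plain-text descriptions; the sample cases, the TF-IDF representation and the similarity threshold are illustrative choices, not a prescribed implementation.

```python
# Illustrative sketch: group manual test cases with similar wording so that
# overlapping cases can be reviewed and a single representative automated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Login with valid credentials and verify the dashboard is displayed",
    "Login with a valid username and verify the dashboard is displayed",
    "Add an item to the cart and verify the cart total is updated",
    "Search for a product by name and verify matching results are shown",
]

# Represent each test case as a TF-IDF vector of its wording.
vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
similarity = cosine_similarity(vectors)

# Greedily group cases whose wording is close to an earlier one.
# The threshold is an illustrative tuning knob, not a recommended value.
threshold = 0.5
groups = []  # each group is a list of test-case indices
for i in range(len(test_cases)):
    for group in groups:
        if similarity[i, group[0]] >= threshold:
            group.append(i)
            break
    else:
        groups.append([i])

for group in groups:
    print([test_cases[i] for i in group])
```

Each resulting group is a candidate set of overlapping cases from which one representative test is carried into the automation suite.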
Use case 2: Converting manual test cases into automated test scripts
Test cases are recorded or written manually in different formats based on the software test lifecycle (STLC) model, which can be either agile or waterfall. Sometimes testers record audio test cases instead of typing them out. They also use browser-based recorders to capture screen actions while testing. The tester then interprets the test cases, designs an automation framework and writes automation scripts. This entire process consumes an enormous amount of time and effort.

Solution 2: Use natural language processing (NLP)
In the above use case, the execution steps and scenarios are clearly defined. With natural language processing (NLP) and pattern identification, manual test cases can be transformed into ready-to-execute automation scripts and, furthermore, reusable business process components can be easily identified. This occurs in three simple steps:
- Read: Use NLP to convert text into the automation suite.
- Review: Review the automation suite generated.
- Reuse: Use partially supervised ML techniques and pattern discovery algorithms to identify and modularize reusable components that can be plugged in anywhere, anytime and for any relevant scenario.

All these steps can be implemented in a tool-agnostic manner until the penultimate stage. Testers can review the steps and add data and verification points that are learnt by the system. Eventually, these steps and verification points are used to generate automation scripts for the tool chosen by the test engineer (Selenium, UFT or Protractor) only at the final step. In this use case, AI/ML along with a tool-agnostic framework helps automatically identify tool-agnostic automation steps and reusable business process components (BPCs). The automation test suite thus created is well structured, with easy maintainability, reusability and traceability for all components. The solution slashes planning and scripting effort compared with traditional mechanisms.
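The 'Read' and final script-generation steps can be pictured with a simplified sketch: natural-language steps are mapped to tool-agnostic actions and rendered for a specific tool (Selenium here) only at the end. The step phrasings, element names and mapping rules below are illustrative assumptions, not the paper's actual algorithm.

```python
# Simplified sketch: map natural-language test steps to tool-agnostic actions,
# then render them for a chosen tool (Selenium in this example).
import re

PATTERNS = [
    (r"enter '(?P<value>.+)' in(?:to)? the (?P<target>.+) field", "type"),
    (r"click (?:on )?the (?P<target>.+) button", "click"),
    (r"verify (?:that )?the (?P<target>.+) is displayed", "assert_visible"),
]

def parse_step(step: str) -> dict:
    """Return a tool-agnostic action for one manual test step."""
    for pattern, action in PATTERNS:
        match = re.search(pattern, step, re.IGNORECASE)
        if match:
            return {"action": action, **match.groupdict()}
    return {"action": "manual_review", "raw": step}  # flag for tester review

def render_selenium(action: dict) -> str:
    """Render a tool-agnostic action as a Selenium (Python) line."""
    locator = f"driver.find_element(By.NAME, '{action.get('target', '')}')"
    if action["action"] == "type":
        return f"{locator}.send_keys('{action['value']}')"
    if action["action"] == "click":
        return f"{locator}.click()"
    if action["action"] == "assert_visible":
        return f"assert {locator}.is_displayed()"
    return f"# TODO review step: {action['raw']}"

steps = [
    "Enter 'jdoe' into the username field",
    "Click the login button",
    "Verify that the dashboard is displayed",
]
for step in steps:
    print(render_selenium(parse_step(step)))
```

In a full solution the intermediate, tool-agnostic actions would be reviewed by the tester and enriched with data and verification points before scripts for the chosen tool are generated.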
Use case 3: Achieving intelligent DevOps
Software projects following the DevOps model usually have a continuous integration (CI) pipeline. This is often enabled with auto-deploy options, test data and API logs with their respective request-response data, or application logs from the DevOps environment. While unit tests are available by default, building integration test cases requires additional effort.

Solution 3: Use AI/ML in DevOps automation
By leveraging AI/ML, DevOps systems gain analytics capabilities (becoming 'intelligent DevOps') in addition to transforming manual cases into automated scripts.

Fig 1: Achieving complete automation on a DevOps model with AI/ML

As shown in Fig 1, the key steps for automating a DevOps model are:
- Create virtual service tests based on request-response data logs that can auto-update/self-heal based on changes in the AUT.
- Continuously use diagnostic analytics to mine the massive data generated, proactively identify failures in infrastructure/code and suggest recovery techniques.
- Leverage the ability to analyze report files and identify failure causes/sequences or reusable business process components through pattern recognition.
- Enable automated script generation in the tool of choice using a tool-agnostic library.
- Analyze past data and patterns to dynamically decide which tests must be run for different teams and products in subsequent application builds.
- Correlate production log data with past code change data to determine the risk of failure in different application modules, thereby optimizing the DevOps process.

B. Automation test suite maintenance

Use case 4: Identifying the most critical paths in an AUT
When faced with short regression cycles or ad hoc testing requests, QA teams are expected to cover only the most important scenarios. They rely on system knowledge or instinct to identify critical scenarios, which is neither logical nor prudent from a test coverage perspective. Thus, the inability to logically and accurately identify important test scenarios from the automation test suite is a major challenge, especially when short timelines are involved.

Solution 4: Use deep learning to determine critical application flows
If the AUT is ready for validation (even when no test cases are available in any form), a tester can use a reinforcement learning-based system to identify the critical paths in the AUT (Fig 2).

Fig 2: How reinforcement algorithms in testing identify critical flows in an application

Use case 5: Reworking the test regression suite due to frequent changes in the AUT
If a testing team conducts only selective updates and does not update the automation test suite completely, the whole regression suite becomes unwieldy. An AI-based solution to maintain the automation and regression test suite is useful in the face of ambiguous requirements, frequent changes in AUTs and short testing cycles, all of which leave little scope for timely test suite updates.

Solution 5: Deploy a self-healing solution for easier test suite maintenance
Maintaining the automation test suite to keep up with changing requirements, releases and application modifications requires substantial effort and time. A self-healing/self-adjusting automation test suite maintenance solution addresses this challenge through a series of steps: identifying changes between releases in an AUT, assessing impact, automatically updating test scripts, and publishing regular reports. As shown in Fig 3, such a solution can identify changes in an AUT for the current release, pinpoint the impacted test scripts and recommend changes to be implemented in the automation test suite.

The five use cases and solutions discussed above can be readily implemented to immediately enhance an enterprise's test suite automation process, no matter its stage of test maturity.
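As an illustration of the 'identify changes and pinpoint impacted scripts' part of such a self-healing flow, the sketch below diffs element locators captured for two releases and flags the scripts that reference changed elements. The locator snapshots, script names and mapping are hypothetical data used only for the example.

```python
# Minimal sketch of one self-healing step: diff the element locators captured
# for two releases of the AUT and flag the test scripts that reference them.
previous_release = {
    "login_button": "id=btnLogin",
    "username_field": "name=user",
    "submit_order": "id=btnSubmit",
}
current_release = {
    "login_button": "id=btnSignIn",     # locator changed in this release
    "username_field": "name=user",      # unchanged
    "submit_order": "id=btnSubmit",
}
scripts_by_element = {
    "login_button": ["test_login.py", "test_checkout.py"],
    "username_field": ["test_login.py"],
    "submit_order": ["test_checkout.py"],
}

changed = {
    element: (previous_release[element], locator)
    for element, locator in current_release.items()
    if previous_release.get(element) != locator
}

for element, (old, new) in changed.items():
    for script in scripts_by_element.get(element, []):
        # A full solution would update the script automatically and publish a
        # report; here the change is only recommended.
        print(f"{script}: update locator for '{element}' from {old} to {new}")
```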
White Paper
Conclusion
ACHIEVE COMPLETE AUTOMATION WITH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Complete automation, i.e., an end-to-end test automation solution that requires minimal or no human intervention, is the goal of QA organizations. To achieve this, QA teams should stop viewing automation test suites as static entities and start treating them as dynamic ones with a constant influx of changes, designing solutions accordingly. New technologies like AI/ML can help QA organizations adopt end-to-end automation models. For instance, AI can drive core testing activities, whether that involves maintaining test scripts, creating automation test suites, optimizing test cases, or converting manual test cases to automated ones. AI can also help identify reusable components and enable self-healing when required, thereby slashing cost, time and effort. As agile and DevOps become a mandate for software development, QA teams must move beyond manual testing and traditional automation strategies towards AI/ML-based testing in order to proactively improve software quality and support self-healing test automation.
White Paper
Introduction
RPA: The future of enterprise test automation
If one word defines the 21st century, it would be speed. Never before has progress happened at such a breakneck pace. We have moved from discrete jumps of innovation to continuous improvement and versioning, and the cycle time to production is at an all-time low. In this era of constant innovation, where the supreme need is to stay ahead of the competition and drive exceptional user experiences, the quality of the product deployed in production is paramount to ensure that speed does not derail the product as a whole. A robust testing mechanism ensures quality while allowing faster releases and shorter time to market, which are essential for that competitive edge. Today, USD 550bn is spent on testing and validation annually, and testing is the second largest IT community in the world. That is a significant investment and effort already being put into this space, but is it delivering results?
White Paper
Main Content
RPA: The future of enterprise test automation
The state of testing and validation in the enterprise

In the past five or so years, there has been a push from CXOs, based on recommendations from industry experts and analysts, to go for extreme automation. Companies have been adopting multiple tools and open-source technologies and building enterprise automation frameworks. This attempt to achieve end-to-end automation has created a mammoth network of tool sets in the organization that may or may not work well with each other. This is how test automation was done conventionally; it still requires elaborate effort to build test scripts, significant recurring investment in subscriptions and licensing, and training and know-how for multiple tools. By some estimates, traditional testing can take up to 40% of total development time, which is untenable in the agile and DevOps modes companies operate in today. What if this ongoing effort could be eliminated? What if the need for multiple tools could be done away with? Enter Robotic Process Automation (RPA) in testing. While not originally built for testing, RPA tools show great potential to make testing more productive and more efficient and to help get more features to market faster, giving them an edge over conventional tools (see Fig 1).

Fig. 1: Traditional automation tools vs. RPA
- Coding knowledge: With traditional automation tools, coding knowledge is essential to develop automated scripts, and programming knowledge and effort are needed to build the framework, generic reusable utilities and libraries. RPA tools offer codeless automation; developing automated scripts requires some effort for configuration and workflow design, but coding is minimal compared to traditional tools, and generic reusable utilities are available as plug-and-play components.
- Maintenance: Traditional tools require extensive maintenance effort; RPA tools require minimal test maintenance effort.
- Cognitive automation: Traditional tools offer no support for cognitive automation; RPA tools are popular for supporting cognitive automation by leveraging AI.
- Plugin support: Traditional tools offer limited plugins for different technologies; RPA tools provide plugins for all leading technologies.
- Orchestration and load distribution: With traditional tools, load distribution during execution requires additional effort to develop utilities and set up infrastructure. Most RPA tools provide this feature out of the box; for example, a popular RPA tool handles load distribution during execution with no effort beyond configuration.
- Automation development productivity: Test development productivity is low with traditional tools, since custom coding is required most of the time; it is high with RPA tools, as most generic activities are available as plug-and-play.
- OCR for text recognition: Not available in traditional tools; available in all RPA tools.
- Advanced image recognition: Not available in traditional tools, which need additional scripting or a third-party tool; available in all RPA tools.
- In-built screen and data scraping wizards: Not available in traditional tools without integration with other tools; available in all RPA tools.

RPA: the next natural evolution of testing automation

In the last decade, automation has evolved and matured along with changing technologies. As discussed, automation in testing is not new, but its effectiveness has been a challenge, especially the associated expense and lack of skill sets.
RPA can cut through the maze of tool sets within an enterprise, replacing them with a single tool that can talk to heterogeneous technology environments. From writing stubs to record-and-playback, to modular and scriptless testing, and now to bots, we are witnessing a natural evolution of test automation. In this 6th Gen testing brought about by RPA orchestration, an army of bots will drastically change the time, effort and energy required for testing and validation. We are heading towards test automation that requires no script and no touch, works across heterogeneous platforms, creates extreme automation, and allows integration with open-source and other tools. According to Forrester, "RPA brings production environment strengths to the table." This translates into production-level governance, a wide variety of use cases, and orchestration of complex processes via layers of automation. RPA allows companies to democratize automation very rapidly within the testing organization.

RPA has an advantage over traditional tools in that it can be deployed where they fail to deliver results (see Fig 2), for instance when:
- the testing landscape is heterogeneous with complex data flows
- there is a need for attended and unattended process validation
- there is a need to validate digital systems

An RPA solution brings a tremendous amount of simplicity, making it possible to build bots quickly and deploy them with minimal technical know-how, so that even business stakeholders can understand them.

Challenge areas and the performance of RPA tools:
- Test data management: Data-driven testing is supported by many traditional tools. RPA can manage data from files such as Excel/JSON/XML/DB and use these for testing.
- Testing in different environments: End-to-end business processes navigate through various environments such as mainframe/web/DB/client-server applications. RPA tools can easily integrate this process across multiple systems, simplifying business orchestration and end-to-end testing compared to other testing tools.
- Traceability: While RPA tools do not directly provide test script traceability, there are ways to enable it. For instance, user stories/requirements stored in JIRA can be integrated with RPA automation scripts using Excel mappings to create a wrapper that triggers execution (see the sketch after this list).
- Script versioning: A batch process can be implemented in the RPA tool to address this.
- CI/CD integration: This is available in most RPA tools.
- Reporting and defect logging: RPA tools have comprehensive dashboards that showcase defects, which can be logged in Excel or JIRA through a suitable wrapper.
- Error handling: This feature is available in all RPA tools.
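As referenced in the traceability item above, a thin wrapper can map JIRA user stories to RPA test bots and trigger their execution. The sketch below is a hedged illustration only: the Excel file name, its columns and the bot launch command are assumptions, since a real RPA product would expose its own trigger mechanism (API, CLI or scheduler).

```python
# Illustrative traceability wrapper: map JIRA user stories to RPA test bots via
# an Excel sheet and trigger each bot, recording results per story.
import subprocess
import openpyxl

workbook = openpyxl.load_workbook("story_to_bot_mapping.xlsx")
sheet = workbook.active  # expected columns: story_id | bot_name

results = {}
for story_id, bot_name in sheet.iter_rows(min_row=2, values_only=True):
    # Launch the bot mapped to this user story; "run_rpa_bot" is a placeholder
    # command standing in for the RPA tool's actual trigger.
    completed = subprocess.run(
        ["run_rpa_bot", "--bot", bot_name], capture_output=True, text=True
    )
    results[story_id] = "PASS" if completed.returncode == 0 else "FAIL"

# Results keyed by JIRA story ID give the traceability view; they could be
# written back to Excel or pushed to JIRA through its REST API.
for story_id, status in results.items():
    print(story_id, status)
```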
AssistEdge for Testing in Action

One of our clients, a large investment company based in Singapore, realized the benefits of RPA-based testing when it helped them save 60% of testing effort. They were running a legacy modernization program on mainframe systems, which are notoriously difficult to automate using traditional automation tools. RPA, with its AI and OCR capabilities and its ability to traverse and handle any technology, was able to automate more than 800 test cases on the mainframe. In another instance, a large banking client was using package-based applications that relied on multiple technologies to build different screens, making it difficult to integrate multiple tools. With RPA, we were able to automate the end-to-end workflow for each application using just one tool, reducing the overall maintenance effort by over 30%. Another client was facing a quality assurance (QA) challenge where bots were being deployed without testing. We developed dedicated QA bots with added exception handlers to check whether a bot actually handles exceptions and, if it fails, how it returns to its original state. By validating the bots, we improved overall efficiency by 30%.
White Paper
Conclusion
RPA: The future of enterprise test automation
Take advantage of the Edge

The COVID-19 pandemic has accelerated organizations' need to be hyper-productive. Companies are realizing that they have to transform to build the capabilities that will prepare them for the future, and they are thinking of ways to drive efficiency and effectiveness to a level not seen before. There is a strong push for automation to play a central role in making that happen. This is also reflected in the testing domain, where any opportunity for improvement is welcome. Realizing the need to drive testing efficiencies and reduce manual effort, organizations want to adopt RPA in testing. We are at a tipping point where the benefits of RPA adoption are clear; what is needed is that first step towards replacing existing frameworks. Recognizing the potential of RPA in testing, EdgeVerve and Infosys Validation Solutions (IVS) have been helping clients simplify and scale up test automation with AssistEdge for Testing. AssistEdge brings the experience of handling tens of thousands of processes across different technologies and environments to test automation, helping navigate heterogeneous environments with ease. By building an army of bots for functional, regression and user acceptance testing, it can help achieve 100% test automation with incredible accuracy. In addition to being faster to build and deploy, AssistEdge reduces the overall time to value as well as the investment needed for deploying and managing RPA infrastructure. IVS's engineering-led QA capabilities enable enterprises to effortlessly scale up testing in real time, delivering unprecedented accuracy, flexibility, and speed to market. With its vast experience, IVS enables clients across industry verticals to successfully implement an automated testing strategy, allowing them to move away from tedious and error-prone manual testing, thereby improving performance and software quality while saving effort and cost. The journey to RPA-based test automation has to begin, and those who adopt it faster will hold a competitive advantage through faster realization of benefits. The question is, are you willing to take the leap? Want to know more about the potential of RPA in testing? Write to us at askus@infosys.com.

About AssistEdge

AssistEdge offers a cohesive automation platform that enables enterprises to scale their automation journey. It provides a comprehensive suite of products that help enterprises drive initiatives around process discovery, intelligent automation and digital workforce orchestration. AssistEdge has helped enterprises unlock value in the form of reduced service time, faster sales cycles, better resource allocation, accelerated revenue recognition and improved efficiency, among others.
White Paper
Abstract
RPA VALIDATION PITFALLS AND HOW TO AVOID THEM
While many organizations are adopting robotic process automation (RPA) to increase operational efficiency, the approach for testing and validating robots is still a software-first strategy. The lack of the right testing and validation strategy can result in under-performing robots that are unable to meet the desired business and efficiency outcomes. This paper outlines erroneous RPA testing strategies and provides an effective approach to RPA validation.
White Paper
Introduction
RPA VALIDATION PITFALLS AND HOW TO AVOID THEM
To execute any business operation or workflow in large enterprises, service personnel or business teams usually work with multiple IT applications. Robotic process automation (RPA) can automate these business workflows across multiple, disparate applications using software robots that mimic the actions of human users. The increased adoption and criticality of RPA for business process automation make it essential to design and build RPA robots that exceed the level of productivity and quality delivered by human users. However, many RPA robots tend to underperform due to poor design and improper implementation, in addition to workload scheduling decisions. Further, QA teams struggle to identify these issues during robot validation, primarily due to recurring test strategy and test execution challenges spanning what to test, how to test and where to test:
- Focus on validating IT application functionality (what to test): Since business process automation is always implemented on applications used by the business, it is fair to assume that these applications are tested and work properly in production. However, QA teams continue to invest significant effort in validating application behavior, such as querying the database to verify data updates or checking error messages in negative scenarios.
- Missing non-functional requirements driven by the operating environment (what to test): Validation teams sometimes miss capturing and validating operating environment requirements such as business SLAs, the process execution window and dependent activities.
- Validation of robots as software (how to test): One of the most striking challenges is that validation teams treat robotic process testing as conventional software validation. Their approach focuses on automating test steps, validating robot inputs against the database and re-verifying steps performed by robots. While some of these validation steps are relevant, teams tend to miss the key validations required for the seamless and predictable functioning of RPA robots.
- Automating RPA testing (how to test): In many instances, efforts are made to automate RPA testing using test automation tools without much clarity on what needs to be automated when RPA itself works in an automated manner.
- Validation in incorrect environments (where to test): Many QA teams begin validating RPA robots in SIT and other lower environments where the application baseline is usually not in sync with the production environment. This asynchrony results in robots failing to perform in the production environment during the first few runs.
White Paper
Main Content
RPA VALIDATION PITFALLS AND HOW TO AVOID THEM
A successful example of robot validation

A good example for RPA validation can be gleaned from the manufacturing industry, which has successfully deployed physical robots over many years with a very high level of quality and predictability. Comparing how physical robots are tested with how RPA robots should be tested, the following key testing attributes emerge:
1. The robot's ability to perform as per its instructions
2. The robot's ability to perform tasks autonomously
3. The robot's ability to perform at higher efficiency
4. The robot's ability to handle exceptions gracefully

A recommended approach for RPA validation

Combining the knowledge of software validation with the processes followed by the manufacturing industry for robot validation provides a fitting approach for RPA validation that overcomes the testing challenges mentioned above:
- Validation scenarios (what to test): Since robots are designed to mimic business workflows, companies can reuse or create test cases similar to UAT test cases for RPA validation. The focus should be on validating business flows and business exceptions rather than application functionality.
- Non-functional requirements validation in the operating environment (what to test): Teams should capture non-functional requirements such as the ability to operate autonomously, SLA delivery efficiency and the expected volume to be processed within preset timeframes.
- Functional validation of robots (how to test): Validation teams should set up the input case data for all business cases within the determined scope and then begin robot processing. They should also validate the outcomes and re-run in cases of errors or unexpected outcomes.
- Exception scenarios (how to test): Teams should set up input case data with exception scenarios and validate the robot's ability to identify and report these exceptions for human intervention. They should also re-run and validate these tests in case of errors or unexpected outcomes.
- UAT or higher environments (where to test): QA teams should perform RPA validation only in UAT or higher environments to prevent issues arising from incorrect application versions or environment set-up.
- Stabilization in production (where to test): Teams should train robots with simple cases and at low volume during initial runs, gradually increasing complexity and volume to ensure minimal impact in case of any processing mismatch.

Skill recommendations

RPA validation teams should have a deep understanding of business processes across regular and exception workflows to ensure that robots are tested for all possible business scenarios. Additionally, teams should be well aware of the scheduling mechanism and the controller team's operating model for the RPA robots so that they can validate the operational efficiency of the robots before deployment into production. Each RPA validation team should also leverage a shared team of experts with in-depth knowledge of object design and error handling mechanisms. This is important to maintain focus on the performance tuning of robots to meet operational efficiency requirements. As the RPA ecosystem continues to evolve, one can expect more complex robot implementations with optical character recognition (OCR) and natural language processing (NLP) inputs as well as AI integrations. Moreover, robots will increasingly compete for shared resources such as server time and application access to complete business processes.
In such a scenario, RPA validation teams should continue acquiring in-depth knowledge of evolving validation needs and improve their strategies to support these complex business requirements.

Exit criteria

Before a robot is deployed into production, it is important to check that it has been validated to exceed the business benchmarks and behaves predictably. Here are some key parameters that should be considered as exit criteria for robots before they are deployed into live production (a simple automated check is sketched after this list):
- Adherence to business SLAs should be higher than that of manual processing
- The number of business processing exceptions should be equal to or less than the number of manual processing exceptions
- The total number of cases processed without human intervention should be equal to or greater than 95% of the in-scope cases
- Robot availability should exceed 98%

Typically, robots meeting these criteria are well placed to take over business processing responsibilities from human users and deliver the desired business benefits.
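As referenced above, the exit criteria can be expressed as a simple automated gate. The sketch below is illustrative only: the metric names, the baseline figures and how the metrics are collected are assumptions, while the thresholds follow the criteria listed in this paper.

```python
# Minimal sketch of the exit-criteria check described above.
def robot_ready_for_production(metrics: dict, baseline: dict) -> bool:
    """Return True if the robot meets all exit criteria against the manual baseline."""
    checks = [
        # SLA adherence must exceed that of manual processing.
        metrics["sla_adherence_pct"] > baseline["sla_adherence_pct"],
        # Business processing exceptions must not exceed the manual count.
        metrics["processing_exceptions"] <= baseline["processing_exceptions"],
        # At least 95% of in-scope cases processed without human intervention.
        metrics["cases_without_intervention"] / metrics["in_scope_cases"] >= 0.95,
        # Robot availability must exceed 98%.
        metrics["availability_pct"] > 98.0,
    ]
    return all(checks)

robot_metrics = {
    "sla_adherence_pct": 99.1,
    "processing_exceptions": 4,
    "cases_without_intervention": 970,
    "in_scope_cases": 1000,
    "availability_pct": 98.6,
}
manual_baseline = {"sla_adherence_pct": 96.5, "processing_exceptions": 9}

print(robot_ready_for_production(robot_metrics, manual_baseline))  # True
```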
White Paper
Conclusion
RPA VALIDATION PITFALLS AND HOW TO AVOID THEM
RPA has the potential to transform how organizations execute business workflows by enabling higher efficiency, faster outcomes and almost error-free operations. One of the key success drivers is the right testing strategy for RPA validation. Validating robots using existing software testing models is ineffective because it often results in non-performing or underperforming assets. To overcome this challenge, organizations need a strategy that tests the robot's capacity to work autonomously, handle exceptions gracefully, operate at higher efficiency, and follow preset instructions. With the right validation approach, skills and exit criteria, RPA can help organizations achieve the desired business outcomes of error-free, predictable and efficient business operations.
White Paper
Abstract
ENABLING QA THROUGH ANAPLAN MODEL TESTING
Anaplan is a cloud-based platform that can create various business models to meet different organizational planning needs. However, the test strategy for Anaplan varies depending on the application platform, cross-track dependencies and the type of testing. This white paper examines the key best-practices that will help organizations benefit from seamless planning through successful Anaplan testing.
White Paper
Introduction
ENABLING QA THROUGH ANAPLAN MODEL TESTING
What is Anaplan? Anaplan is a cloud-based operational planning and business performance platform that allows organizations to analyze, model, plan, forecast, and report on business performance. Once an enterprise customer uploads data into Anaplan cloud, business users can instantly organize and analyze disparate sets of enterprise data across different business areas such as finance, human resources, sales, forecasting, etc. The Anaplan platform provides users with a familiar Excel-style functionality that they can use to make data-driven decisions, which otherwise would require a data expert. Anaplan also includes modules for workforce planning, quota planning, commission calculation, project planning, demand planning, budgeting, forecasting, financial consolidation, and profitability modelling.
White Paper
Main Content
ENABLING QA THROUGH ANAPLAN MODEL TESTING
7 best practices for efficient Anaplan testing

1. Understand the domain
As Anaplan is a platform used for enterprise-level sales planning across various business functions, its actual users will be organizational-level planners who have an in-depth understanding of their domains. Thus, to certify the quality of the Anaplan model, QA personnel must adopt the perspective of end-users, who may be heads of sales compensation or sales operations departments, sales managers, sales agents, etc.

2. Track Anaplan data entry points
One of the features that makes Anaplan a popular choice is its provision of a wide range of in-built and third-party data integration points, which can be used to easily load disparate data sources into a single model. For most business users, data resides at many granular levels and cannot be handled reliably with traditional Excel spreadsheets. Anaplan offers a scalable option that replaces Excel spreadsheets with a cloud-based platform to extract, load and transform data at any granular level from different complex systems while ensuring data integrity. It is essential for QA teams to understand the various upstream systems from which data gets extracted, transformed and loaded into the Anaplan models (Fig 2). Such data also needs to be checked for integrity.

Fig 2: Representation of Anaplan data integration

3. Ensure quality test data management
The quality of test data is a deciding factor for testing coverage and completeness. Hence, the right combination of test data for QA will optimize testing effort and cost. Since Anaplan models cater to the financial, sales, marketing, and forecasting domains of an organization, it is essential to verify the accuracy of the underlying data. Failure to ensure this could result in steep losses amounting to millions of dollars for the organization. Thus, it is recommended that QA teams dedicate a complete sprint/cycle to testing the accuracy of the data being ingested by the models. The optimal test data management strategy for an Anaplan model involves two steps:
- Reconciling data between the database and the data hub: Data from the source database or data files received from the business teams of upstream production systems should be reconciled with the data hub. This ensures that data from the source is loaded correctly into the hub. For hierarchical data, it is important to verify that data is accurately rolled up and that periodic refresh jobs are validated so that only the latest data is sent to the hub according to the schedule.
- Loading correct data from the hub into the model: Once the correct data is loaded into the hub, testing moves on to validating whether the correct data is loaded from the hub into the actual Anaplan model. This ensures that the right modules are referenced from the hub in order to select the right data set. It also helps validate the formulas used on the hub data to generate derived data.

For every model being tested, it is important to first identify the core hierarchy list that forms the crux of the model and ensure that data is validated across every level of the hierarchy, in addition to validating the roll-up of numbers through the hierarchy or the cascade of numbers down the hierarchy as needed.
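As an illustration of the first reconciliation step, the sketch below compares an upstream extract with a data-hub extract on a shared key. The file names, the key column and the amount field are assumptions made for the example; in practice the extracts would come from the actual source systems and hub exports.

```python
# Hedged sketch of source-to-hub reconciliation: both extracts are assumed to be
# CSV files sharing a "booking_id" key and an "amount" column.
import pandas as pd

source = pd.read_csv("upstream_sales_bookings.csv")   # extract from source system
hub = pd.read_csv("data_hub_sales_bookings.csv")      # extract from the data hub

key = "booking_id"
merged = source.merge(hub, on=key, how="outer", suffixes=("_src", "_hub"), indicator=True)

# Records present on only one side indicate dropped or unexpected rows.
missing_in_hub = merged[merged["_merge"] == "left_only"]
extra_in_hub = merged[merged["_merge"] == "right_only"]

# For rows present on both sides, compare the amount that will roll up the hierarchy.
both = merged[merged["_merge"] == "both"]
mismatched = both[both["amount_src"].round(2) != both["amount_hub"].round(2)]

print(f"Missing in hub: {len(missing_in_hub)}")
print(f"Unexpected in hub: {len(extra_in_hub)}")
print(f"Amount mismatches: {len(mismatched)}")
```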
4. Validate the cornerstones
It is good practice to optimize test quality to cover cornerstone business scenarios that may otherwise be missed. Some recommendations for testing Anaplan models are listed below:
- Monitor the periodic data refresh schedules for the different list hierarchies used in the model and validate that the data is refreshed on time with the latest production hierarchy
- As every model may have different user roles with selective access to dashboards, ensure that functional test cases with correctly configured user roles are included. This upholds the security of the model, since some dashboards or hierarchical data should be visible only to certain pre-defined user roles or access levels
- The Anaplan model involves many data fields, some of which are derived from others based on one or more business conditions. Thus, it is advisable to include test scenarios around such conditions, for example testing the warning messages that should be displayed, or testing whether cells are conditionally formatted based on user inputs that either pass or fail the business conditions

5. Use automation tools for faster validation
IT methodologies are evolving from waterfall to agile to DevOps models, leading to more releases per year, shorter implementation cycle times and faster time to market. Thus, the test strategy of QA teams should also evolve beyond the waterfall model. Incorporating automation in the test strategy helps keep pace with shorter cycle times without compromising test quality. Anaplan implementation programs typically follow agile and have a wide range of testing requirements for data, integration and end-to-end testing that must be completed on time. However, it can be challenging to deliver maximum test coverage within each sprint because testing Anaplan involves testing each scenario with multiple datasets. Thus, it is useful to explore options for automating Anaplan model testing to minimize delays during sprint testing and ensure timely test delivery. Some of these options include using simple Excel-based formula worksheets and Excel macros along with open-source automation tools such as Selenium integrated with Jenkins. This helps generate automated scripts that can be run periodically to validate certain functionalities in the Anaplan model for multiple datasets. These options are explained below (a data-driven sketch follows this list):
- Reusable Excel worksheets: This involves a one-time activity to recreate the dashboard forms as simple tables in Excel worksheets. The data fields in Anaplan can be classified into three types: fields that require user input; fields where data is populated from various data sources within Anaplan or other systems; and fields where data is derived based on defined calculation logic. Here, the formula used to derive the data value is embedded into the Excel cell so that the derived value is automatically calculated after data is entered in the first two kinds of fields. Using such worksheets accelerates test validation and promotes reuse of the same Excel sheet to validate calculation accuracy for multiple data sets, which is important to maintain test quality
- Excel macros: To test the relevant formula or calculation logic, replicate the Anaplan dashboard using Excel macros. A macro can be reused for multiple data sets, thereby accelerating testing and enhancing test coverage
- Open-source tools: Tools like Selenium can be used to create automation scripts either for a specific functionality within the model or for a specific dashboard. However, using Selenium for Anaplan automation comes with certain challenges: automating the end-to-end scenario may not be feasible since Anaplan requires switching between multiple user roles to complete the end-to-end flow; the Anaplan application undergoes frequent changes from the Anaplan platform, while changes in the model build require constant changes to the scripts; and some data validation in Anaplan may require referencing other data sets and applying calculation logic, which can make the automation code very complex and slow to run
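In the spirit of the reusable worksheets described above, a derived field can also be re-computed in code and compared with the value shown on the dashboard for many data sets. The commission rule, field names and figures below are hypothetical and stand in for whatever calculation logic the model under test actually defines.

```python
# Illustrative data-driven check: re-compute a derived field from its inputs and
# compare it with the value exported from the dashboard, for several data sets.
test_data = [
    # (sales_amount, quota, dashboard_commission) - illustrative values only
    (120_000, 100_000, 6_500.0),
    (80_000, 100_000, 4_000.0),
    (250_000, 200_000, 13_750.0),
]

def expected_commission(sales_amount: float, quota: float) -> float:
    """Hypothetical rule: 5% of sales, plus a 2.5% accelerator above quota."""
    base = 0.05 * sales_amount
    accelerator = 0.025 * max(sales_amount - quota, 0)
    return round(base + accelerator, 2)

for sales, quota, shown in test_data:
    calc = expected_commission(sales, quota)
    status = "PASS" if abs(calc - shown) < 0.01 else "FAIL"
    print(f"sales={sales} quota={quota} expected={calc} dashboard={shown} -> {status}")
```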
6. Ensure thorough data validation
Anaplan provides a secure platform for strategic planning across various domains, with the flexibility and consistency to handle complex information from distinct data sources across departments within the same organization. Identifying the correct underlying data is crucial for successful quality assurance of business processes using the Anaplan model. There are two key considerations when testing data in Anaplan:
- User access level: Business process requirements in some Anaplan models allow only end-users with certain access levels to view data and use it for planning. For instance, a multi-region sales planning model will include sales planners from different sales regions as end users. However, users should be allowed to view only the sales, revenue and other KPIs pertaining to their own region, as it would be a security breach to disclose the KPIs of other sales regions
- Accuracy of data integrated from various systems: When the data being tested pertains to dollar amounts, for example sales revenue, it is critical to reconcile the data thoroughly against the source, because a variation of a few dollars could lead to larger inaccuracies or discrepancies when the same data is rolled up the hierarchy or used to calculate another data field

Since most Anaplan models contain business-critical numbers for financial reporting, it is important to run thorough tests to ensure accuracy.
7. Integration testing
Integration testing should be an integral part of the testing strategy for any Anaplan application. Typically, there are multiple integration points to be tested in an Anaplan implementation program owing to:
- Different data integration points: Disparate data sets are imported into the data hub and then into the Anaplan models using different kinds of integration options such as flat files, Anaplan Connect, Informatica, in-built data import, etc.
- Different Anaplan models: There may be more than one Anaplan model implemented in a medium to large-scale organization for different kinds of planning. These need to be integrated with each other for smooth data flow. For instance, the output of a model built exclusively for sales forecasting may be an important parameter for another model that deals with sales planning across an organization's regional sales territories. Thus, besides testing the integration points between these models, it is advisable to have dedicated end-to-end cycle/sprint testing with scenarios across all these models and the integrated systems
- Different data sets: The periodic refresh of data sets used across Anaplan models happens through Informatica, manual refresh, tidal jobs, etc. QA teams should understand how each data set is refreshed, identify the relevant job names and test them to ensure that the latest active hierarchy is periodically refreshed and loaded into the model. This eliminates inaccuracies arising from data redundancies caused by inactivation or changes to the data structure in upstream systems

Anaplan implementation programs can involve either standalone or interlinked models. Irrespective of the type of implementation, an approach that follows the 7 best practices outlined in this paper will help QA teams optimize their strategy for Anaplan test projects.
White Paper
Conclusion
ENABLING QA THROUGH ANAPLAN MODEL TESTING
Anaplan's cloud-based, enterprise-wide and connected platform can help global organizations improve their planning processes across various business sectors. The Anaplan model is a simple, integrated solution that enables informed decision-making along with accelerated and effective planning. The right strategic approach to testing it leverages domain knowledge, test data management, automation, data validation, and integration testing, among other practices. The Infosys 7-step approach to effective Anaplan testing is based on our extensive experience in implementing Anaplan programs. It helps testing teams apply the best QA strategy across various business functions, thereby ensuring smooth business operations.
White Paper
Abstract
THE RIGHT TESTING STRATEGY FOR AI SYSTEMS
Over the years, organizations have invested significantly in optimizing their testing processes to ensure continuous releases of high-quality software. When it comes to artificial intelligence, however, testing is more challenging owing to the complexity of AI. Thus, organizations need a different approach to testing their AI frameworks and systems to ensure that these meet the desired goals. This paper examines some key failure points in AI frameworks. It also outlines how these failures can be avoided through four main use cases that are critical to ensuring a well-functioning AI system.
White Paper
Introduction
THE RIGHT TESTING STRATEGY FOR AI SYSTEMS
Experts in nearly every field are in a race to discover how to replicate brain functions, wholly or partially. In fact, by 2025, the value of the artificial intelligence (AI) market will surpass US $100 billion [1]. For corporate organizations, investments in AI are made with the goal of amplifying human potential, improving efficiency and optimizing processes. However, it is important to be aware that AI too is prone to error owing to its complexity. Let us first understand what makes AI systems different from traditional software systems:
1. Features: Software is deterministic, i.e., it is pre-programmed to provide a specific output for a given set of inputs. AI/ML is non-deterministic, i.e., the algorithm can behave differently on different runs.
2. Accuracy: The accuracy of software depends on the skill of the programmer, and software is deemed successful if it produces an output in accordance with its design. The accuracy of AI algorithms depends on the training set and data inputs.
3. Programming: All software functions are designed with if-then conditions and loops to convert input data into output data. In AI systems, different input and output combinations are fed to the machine, which learns and defines the function.
4. Errors: When software encounters an error, remediation depends on human intelligence or a coded exit function. AI systems have self-healing capabilities whereby they resume operations after handling exceptions or errors.
White Paper
Main Content
THE RIGHT TESTING STRATEGY FOR AI SYSTEMS
The following maps each evolution stage in an AI framework to its typical failure points and the testing techniques that can detect them:
1. Data sources (dynamic or static sources)
   Typical failure points: issues of correctness, completeness and appropriateness of source data quality and formatting; the variety and velocity of dynamic data resulting in errors; heterogeneous data sources.
   Detection in testing: automated data quality checks; the ability to handle heterogeneous data during comparison; data transformation testing; sampling and aggregation strategies.
2. Input data conditioning (big data stores and data lakes)
   Typical failure points: incorrect data load rules and data duplicates; data node partition failures; truncated data and data drops.
   Detection in testing: data ingestion testing; knowledge of the development model and code; understanding the data needed for testing; the ability to subset and create test data sets.
3. ML and analytics (cognitive learning/algorithms)
   Typical failure points: determining how data is split for training and testing; out-of-sample errors such as new behavior in previously unseen data sets; failure to understand data relationships between entities and tables.
   Detection in testing: algorithm testing; system testing; regression testing.
4. Visualization (custom apps, connected devices, web, and bots)
   Typical failure points: incorrectly coded rules in custom applications resulting in data issues; formatting and data reconciliation issues between reports and the back end; communication failures in middleware systems/APIs resulting in disconnected data communication and visualization.
   Detection in testing: API testing; end-to-end functional testing and automation; testing of analytical models; reconciliation with development models.
5. Feedback (from sensors, devices, apps, and systems)
   Typical failure points: incorrectly coded rules in custom applications resulting in data issues; propagation of false positives at the feedback stage resulting in incorrect predictions.
   Detection in testing: optical character recognition (OCR) testing; speech, image and natural language processing (NLP) testing; RPA testing; chatbot testing frameworks.

The right testing strategy for AI systems

Given that there are several failure points, the test strategy for any AI system must be carefully structured to mitigate the risk of failure. To begin with, organizations must understand the various stages in an AI framework, as shown in Fig 1. With this understanding, they can define a comprehensive test strategy with specific testing techniques across the entire framework. Here are four key AI use cases that must be tested to ensure proper AI system functioning:
- Testing standalone cognitive features such as natural language processing (NLP), speech recognition, image recognition, and optical character recognition (OCR)
- Testing AI platforms such as IBM Watson, Infosys NIA, Azure Machine Learning Studio, Microsoft Oxford, and Google DeepMind
- Testing ML-based analytical models
- Testing AI-powered solutions such as virtual assistants and robotic process automation (RPA)

Use case 1: Testing standalone cognitive features

Natural language processing (NLP)
- Test for precision, i.e., the fraction of relevant instances among the total instances retrieved by the NLP component
- Test for recall, i.e., the fraction of relevant instances that are actually retrieved out of all relevant instances available
- Test for true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs), and ensure that FPs and FNs are within the defined error/fallout range
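The precision, recall and error-range checks above can be automated with a small script once the model's predictions and the expected labels are available. The labels, predictions and the 5% fallout threshold below are illustrative values, not figures from this paper.

```python
# Illustrative check of precision, recall and the FP/FN fallout range for a
# binary NLP classifier.
expected =  [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # ground-truth relevance
predicted = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]   # model output

tp = sum(1 for e, p in zip(expected, predicted) if e == 1 and p == 1)
tn = sum(1 for e, p in zip(expected, predicted) if e == 0 and p == 0)
fp = sum(1 for e, p in zip(expected, predicted) if e == 0 and p == 1)
fn = sum(1 for e, p in zip(expected, predicted) if e == 1 and p == 0)

precision = tp / (tp + fp)           # relevant among retrieved
recall = tp / (tp + fn)              # retrieved among all relevant
fallout = (fp + fn) / len(expected)  # share of misclassified instances

threshold = 0.05                     # illustrative acceptable error range
status = "PASS" if fallout <= threshold else "FAIL"
print(f"precision={precision:.2f} recall={recall:.2f} fallout={fallout:.2f} -> {status}")
```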
Speech recognition inputs
- Conduct basic testing of the speech recognition software to see whether the system recognizes speech inputs
- Test pattern recognition to determine whether the system can identify a unique phrase repeated several times in a known accent, and whether it can identify the same phrase when it is repeated in a different accent
- Test deep learning, i.e., the ability to differentiate between 'New York' and 'Newark'
- Test how speech translates into a response. For example, a query of 'Find me a place I can drink coffee' should not generate a response listing coffee shops and driving directions; instead, it should point to a public place or park where one can enjoy a coffee

Image recognition
- Test the image recognition algorithm through basic forms and features
- Test supervised learning by distorting or blurring the image to determine the extent of recognition by the algorithm
- Test pattern recognition by replacing cartoons with real images, for example showing a real dog instead of a cartoon dog
- Test deep learning using scenarios that check whether the system can find a portion of an object in a larger image canvas and complete a specific action

Optical character recognition
- Test OCR and optical word recognition (OWR) basics by using character or word inputs for the system to recognize
- Test supervised learning to see whether the system can recognize characters or words from printed, written or cursive scripts
- Test deep learning, i.e., whether the system can recognize characters or words from skewed, speckled or binarized (color converted to grayscale) documents
- Test constrained outputs by introducing a new word into a document that already has a defined lexicon of permitted words

Use case 2: Testing AI platforms

Testing any platform that hosts an AI framework is complex. Typically, it follows many of the steps used during functional testing.

Data source and conditioning testing
- Verify the quality of data from various systems: data correctness, completeness and appropriateness, along with format checks, data lineage checks and pattern analysis (a small sketch of such checks follows this list)
- Verify the transformation rules and logic applied to raw data to get the desired output format. The testing methodology/automation framework should function irrespective of the nature of the data: tables, flat files or big data
- Verify that the output queries or programs provide the intended data output
- Test for positive and negative scenarios
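As referenced in the first bullet above, basic data quality checks can be automated before data is fed into the platform. The sketch below assumes the input arrives as a CSV file; the file name, required columns and the date format rule are illustrative assumptions.

```python
# Illustrative automated data quality checks: completeness, format and duplicates.
import pandas as pd

data = pd.read_csv("incoming_feed.csv")
required_columns = ["customer_id", "transaction_date", "amount"]

issues = []

# Completeness: required columns exist and contain no nulls.
for column in required_columns:
    if column not in data.columns:
        issues.append(f"missing column: {column}")
    elif data[column].isna().any():
        issues.append(f"null values in: {column}")

# Format: transaction_date must match YYYY-MM-DD.
if "transaction_date" in data.columns:
    bad_dates = ~data["transaction_date"].astype(str).str.match(r"^\d{4}-\d{2}-\d{2}$")
    if bad_dates.any():
        issues.append(f"{int(bad_dates.sum())} malformed dates")

# Appropriateness: no duplicate records and no negative amounts.
if data.duplicated().any():
    issues.append("duplicate rows found")
if "amount" in data.columns and (data["amount"] < 0).any():
    issues.append("negative amounts found")

print("PASS" if not issues else f"FAIL: {issues}")
```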
Algorithm testing
- Split the input data into data for training and data for testing the algorithm
- If the algorithm uses ambiguous datasets, i.e., the output for a single input is not known, test the software by feeding a set of inputs and checking whether the outputs are related. Such relationships must be soundly established to ensure that algorithms do not have defects
- Check the cumulative accuracy of hits (TPs and TNs) over misses (FPs and FNs)

API integration
- Verify the input request and response from each application programming interface (API)
- Verify request-response pairs
- Test communication between components: the input and the response returned, as well as the response format and correctness
- Conduct integration testing of APIs and algorithms, and verify the reconciliation/visualization of the output

System/regression testing
- Conduct end-to-end implementation testing for specific use cases, i.e., provide an input, verify data ingestion and quality, test the algorithms, verify communication through the API layer, and reconcile the final output on the data visualization platform with the expected output
- Check system security through static and dynamic security testing
- Conduct user interface and regression testing of the systems

Use case 3: Testing ML-based analytical models

Organizations build analytical models for three main purposes, as shown in Fig 2.

Fig 2: Types and purposes of analytical models

The validation strategy used while testing an analytical model involves three steps (Fig 3; a minimal sketch appears at the end of this section):
- Split the historical data into 'test' and 'train' datasets
- Train and test the model on the generated datasets
- Report the accuracy of the model for the various generated scenarios

Fig 3: Testing analytical models

Use case 4: Testing AI-powered solutions

Chatbot testing framework
- Test the chatbot framework using semantically equivalent sentences and create an automated library for this purpose
- Maintain configurations of basic and advanced semantically equivalent sentences with formal and informal tones and complex words
- Automate the end-to-end scenario (sending a request to the chatbot, getting a response and validating the response action against the accepted output)
- Generate automated scripts in Python for execution

RPA testing framework
- Use open-source automation or functional testing tools (Selenium, Sikuli, Robot Class, AutoIT) for multiple applications
- Use flexible test scripts with the ability to switch between machine-language programming (where required as an input to the robot) and a high-level language for functional automation
- Use a combination of pattern, text, voice, image, and optical character recognition testing techniques with functional automation for true end-to-end testing of applications
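Returning to use case 3, the three-step validation strategy (split, train/test, report accuracy) can be sketched as follows. The dataset, model choice and split ratio are illustrative; scikit-learn is used purely for convenience, not because the paper prescribes it.

```python
# Minimal sketch of the use case 3 validation strategy: split historical data,
# train and test the model, and report accuracy on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

features, labels = load_breast_cancer(return_X_y=True)

# Step 1: split historical data into 'train' and 'test' datasets.
x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=42
)

# Step 2: train the model on the training split, then exercise it on the test split.
model = LogisticRegression(max_iter=5000).fit(x_train, y_train)
predictions = model.predict(x_test)

# Step 3: report the accuracy achieved on data the model has not seen.
print(f"accuracy on held-out data: {accuracy_score(y_test, predictions):.3f}")
```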
White Paper
Conclusion
THE RIGHT TESTING STRATEGY FOR AI SYSTEMS
AI frameworks typically follow five stages: learning from various data sources, input data conditioning, machine learning and analytics, visualization, and feedback. Each stage has specific failure points that can be identified using several techniques. Thus, when testing AI systems, QA departments must clearly define the test strategy by considering the various challenges and failure points across all stages. Some of the important testing use cases to be considered are testing standalone cognitive features, AI platforms, ML-based analytical models, and AI-powered solutions. Such a comprehensive testing strategy will help organizations streamline their AI frameworks and minimize failures, thereby improving output quality and accuracy.
White Paper
Abstract
DO MORE WITH LESS IN SOFTWARE TESTING
Faced with pressure to deliver more despite ever-shrinking budgets and shorter timelines, most companies struggle to balance the cost of innovation with business demands. Testing is a critical area where neither speed nor quality of output can be compromised as this leads to negative business impact. This paper explains some cost-effective strategies that enable testing organizations to improve efficiency within their testing teams while ensuring high-quality output.
White Paper
Introduction
DO MORE WITH LESS IN SOFTWARE TESTING
Even as the demand for agile software increases exponentially, testing budgets continue to shrink. Software testing teams within most organizations struggle to deliver quality software within shorter timelines and tighter budgets. Further, most software tests tend to reside in silos, making integration, collaboration and automation challenging. Thus, organizations need innovative testing solutions and strategies to balance quality, speed and cost.
White Paper
Main Content
DO MORE WITH LESS IN SOFTWARE TESTING
Agile testing solutions and strategies

1. Test with an end-user mindset
The job of testing goes beyond checking software against preset requirements or logging defects. It involves monitoring how the system behaves when it is actually being used by an end-user. A common complaint against testers is that they do not test software from the perspective of the business and end-users. Effective risk-based testers are those who understand the system's end-users and deliver value-added testing services that ensure quality products and meet clients' expectations. To do this, testers must evaluate products by undertaking real business user journeys across the system and test commonly used workflows in short testing windows. By mimicking real user journeys, such testers identify a higher number of critical production defects.

2. Empower cross-functional teams
Agile and DevOps methodologies in software testing are forcing teams to work together across the software development lifecycle (SDLC). Engendering test independence is not about separating testing from development, as this can lead to conflicts between developers and testers. Most teams have a polarized dynamic where testers search for defects and must prove how the program is erroneous, while programmers defend their code and applications. Cross-functional teams eliminate such conflict by gathering members with different specializations who share accountability and work toward common goals. For instance, 'testers' are simply team members with testing as their primary specialization and 'programmers' are those within the team who specialize in coding. This team structure encourages people to work with a collaborative mindset and sharpen their expertise. Such teams can be small, with as few as 4-8 members who are responsible for a single requirement or part of the product backlog. Cross-functional teams provide complete ownership and freedom to ensure high-quality output, an important step toward realizing the potential of agile.

3. Automate the automation
Running an entire test suite manually is time-consuming, error-prone and, often, impossible. While some companies are yet to onboard agile and DevOps capabilities, others have already integrated the practices of continuous integration (CI) and continuous delivery (CD) into their testing services. Irrespective of the level of DevOps maturity, CI/CD will provide only limited value if not paired with the right kind and degree of test automation. Thus, organizations need a robust, scalable and maintainable test automation suite covering unit, API, functional, and performance testing. Automated testing saves effort, increases accuracy, improves test coverage, and reduces cycle time. To ensure automation success, organizations must focus on:
• Automating the right set of tests, particularly business-critical end-user journeys and frequently used workflows
• Integrating various components that may change continuously and need to be regressed frequently
• Automating data-redundant tests
• Using the right set of automation tools and frameworks
• Moving beyond just the user interface and automating unit, API and non-functional tests
• Continuous maintenance and use of the automated test suite
On its own, test automation is important. However, when automation is integrated with a CI/CD pipeline to run every time new code is pushed, the benefits in time, cost and quality are multiplied. A minimal sketch of such an automated end-user journey check appears below.
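The sketch below shows what an automated, business-critical end-user journey test might look like when wired into a CI/CD pipeline to run on every push. The base URL, endpoints, payloads and field names are hypothetical placeholders; the paper does not reference any specific application, tool or framework, so pytest and the requests library are assumptions chosen for illustration.

```python
# Minimal sketch of an automated end-user journey test for a CI/CD pipeline.
# All URLs, endpoints and payloads below are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment

def test_checkout_journey():
    # Step 1: the user searches for a product (a frequently used workflow)
    search = requests.get(f"{BASE_URL}/api/products", params={"q": "coffee mug"}, timeout=10)
    assert search.status_code == 200
    products = search.json()
    assert products, "search should return at least one product"

    # Step 2: the user adds the first result to the cart (business-critical step)
    add = requests.post(f"{BASE_URL}/api/cart",
                        json={"product_id": products[0]["id"], "qty": 1}, timeout=10)
    assert add.status_code in (200, 201)

    # Step 3: the user checks out; the journey must end in a confirmed order
    order = requests.post(f"{BASE_URL}/api/checkout", json={"payment": "test-card"}, timeout=10)
    assert order.status_code == 200
    assert order.json().get("status") == "CONFIRMED"
```

A test like this would typically be invoked by the pipeline (for example, `pytest tests/test_checkout_journey.py`) so that every build is exercised against the same critical workflow before it moves further down the delivery chain.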
White Paper
Conclusion
DO MORE WITH LESS IN SOFTWARE TESTING
In a fast-paced and evolving digital world, companies want their IT partners to do more with less. When it comes to software testing, this places heavy pressure on software testers who are required to push high-quality code faster at lower cost. To streamline software testing, organizations need an approach where their testers adopt an end-user mindset by testing real user journeys and critical business transactions. They must also create efficient cross-functional teams that collaborate to achieve common goals and deliver value-added testing services. Finally, automating different layers of testing and practicing CI/CD will facilitate continuous testing and reduce time-to-market. These cost-effective strategies will help software testing professionals improve productivity and deliver more with less.
White Paper
Abstract
A Parametric Approach for Security Testing of Internet Applications
Security is one of the prime concerns for all Internet applications. It is often noticed that, during testing of an application, security does not get due focus, and whatever security testing is done is mainly limited to testing the security functionality captured in the requirements document. Any mistake at the requirement gathering stage can leave the application vulnerable to potential attacks. This paper discusses the issues and challenges in security requirement gathering and explains how it differs from normal functionality requirement gathering. It presents an approach to handle the different factors that affect application security testing.
White Paper
Introduction
A Parametric Approach for Security Testing of Internet Applications
As compared to traditional client-server or mainframe-based applications, Internet applications have different security requirements [1]. An Internet application needs to provide unrestricted, global access to its user community, which also brings in the threat of outside attacks on the system. Hackers may try to gain unauthorized access to these systems for a variety of reasons and disturb the normal working of the application. The security of Internet applications essentially means preventing two things: destruction of information and unauthorized availability of information. The broad spectrum of application security can be expressed as 'PA3IN' (P: Privacy; A3: Authentication, Authorization and Audit control; I: Integrity; N: Non-repudiation). The first issue in application security is to provide data privacy to users. This means protecting the confidentiality of information from the prying eyes of unauthorized internal users and external hackers. Before granting access permissions to users, it is essential to ensure their legitimacy; the process of identifying users is called authentication. Establishing the user's identity is only half the battle. The other half is access control/authorization, which means attaching information to various data objects denoting who can and who cannot access the object, and in what manner (read, write, delete, change access control permissions, and so forth). The third 'A' is audit control, which means maintaining a tamper-proof record of all security-related events. When data is in transit across the network, we also need to protect it against malicious modification and maintain its integrity. Finally, non-repudiation means that users should be unable to deny ownership of the transactions they have made; it is implemented using digital signatures [3]. This paper presents an approach for testing the security of Internet applications.
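The paper only states that non-repudiation is implemented using digital signatures; the short sketch below illustrates the sign-and-verify idea behind that statement. The choice of the Python 'cryptography' package, RSA-PSS parameters and the sample transaction text are assumptions made for illustration, not part of the original approach.

```python
# Illustrative sketch of how a digital signature provides non-repudiation and
# integrity: a signed transaction verifies, a tampered one does not.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
transaction = b"Transfer 100 USD to account 12345"  # hypothetical transaction record

# The user signs the transaction with their private key
signature = private_key.sign(transaction, pss, hashes.SHA256())

# Anyone holding the public key can verify the signature; success binds the
# transaction to the key holder (non-repudiation)
public_key.verify(signature, transaction, pss, hashes.SHA256())
print("Original transaction verified")

# A tampered transaction fails verification (integrity check)
try:
    public_key.verify(signature, b"Transfer 900 USD to account 99999", pss, hashes.SHA256())
except InvalidSignature:
    print("Tampered transaction rejected")
```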
White Paper
Main Content
A Parametric Approach for Security Testing of Internet Applications
The two main objectives of application security testing are to:
1. Verify and validate that the security requirements for the application are met
2. Identify the security vulnerabilities of the application under the given environment
For security testing of an application, the first step is to capture the requirements related to each security issue. This can be a tricky and painstaking exercise because security requirements are seldom known very clearly at the time of project initiation. Traditional use case approaches have proven quite useful in general requirements engineering, both for eliciting requirements and for getting a better overview of the stated requirements [4]. However, security requirements essentially need to concentrate on what should not happen in the system, and this cannot be captured by the traditional use case approach. Also, the security model of Internet applications has undergone many changes. The traditional model for securing an application from outside elements relied mainly on access control. This model was based on creating a hard perimeter wall around the system and providing a single access gate that can be opened only for authenticated users. This security model has worked well for most simple Internet-based applications. The gateway here refers to a firewall that classifies all users as 'trusted' or 'untrusted'. In this simplistic model of security, every 'trusted' user who is allowed to cross the gate gains access to every portion of one's business, and no further security checks are done. As security requirements increase, applications need to implement fine-grained security. Instead of allowing a trusted user to access every portion of the business, modern security models divide the business domain into many regions and ensure different levels of security for each region. This means creating a security perimeter for each region. The first level of security check cannot be very rigorous, as one would want to let in prospective customers, vendors and service providers as quickly as possible. Most Internet applications today have multiple security regions with different levels of security, and these regions can be nested or overlapping with other regions within the same application. While developing an approach for testing the security of these kinds of Internet applications, proper strategy planning is essential. It is better to plan security testing at the requirement gathering stage itself because there are many issues that cannot be captured later on by the usual methods of testing [2]. One such example is testing for denial of service attacks. The issues discussed above make security testing of Internet applications a very challenging task. In the next section, a parametric approach to security testing is presented and each step is explained with the help of a sample example.
The Parametric Approach for Security Testing
As discussed in the previous section, many of the security parameters cannot be captured and tested using the traditional approach. In the proposed parametric approach, before requirement gathering starts, a template listing all security parameters is created. This task can be achieved in four steps, as described below:
1. Create an exhaustive list of all security issues in the application
2. Identify all possible sub-parameters for each of these issues
3. List all the testing activities for each sub-parameter
4. Assign weightages corresponding to the level of security and priority
Once this template is ready, it can be used to streamline each stage of the testing lifecycle. The lifecycle process of security testing is similar to the software development lifecycle process. In the proposed approach, the security testing lifecycle stages (Figure 3.1: Security testing lifecycle) are as follows:
a) Capture security test requirements
b) Analyze and design security test scenarios
c) Test bed implementation
d) Interpret test reports
Each of these stages is explained in the following sub-sections.
3.1 Parametric Approach to Capture Security Test Requirements
To define the scope of security testing, check the stated requirements against the parametric template. This helps in identifying all the missing elements or gaps in the security requirement capture. The next step is to allocate appropriate weightages to the various security sub-parameters. These weightages are typically determined by the nature of the business domain that the application caters to and the heuristic data available with the security test experts. For example, consider an e-shopping application for an online store. The key security requirement for such an application is to ensure that the user who provides payment information is indeed who he claims to be, and to make sure that the information provided is correct and is not visible to anyone else. From that perspective, authentication, confidentiality and non-repudiation are the key parameters for consideration. One possible set of weightages for these parameters could be 0.5, 0.25 and 0.25 respectively (on a scale of 0 to 1), in line with the impact a security breach could have on the business. The advantage of assigning appropriate weightages is that it helps to optimize the test scenarios for each of these parameters. Table 3.1 presents a sample parametric table. The snapshot in Table 3.1 comprises only those parameters that are relevant for testing the application requirement. The main security parameters listed in Table 3.1 are authentication, authorization, access control, non-repudiation and audit control. This is just a sample list; in an actual scenario there can be many more parameters in the template. These parameters are further classified according to testing subtasks. For example, the requirement of authentication can be implemented using userid/password, smart card, biometric or digital certificates, and each of these implementations has different testing requirements. Based on the testing complexity, appropriate weightages are assigned. The weightages can be determined based on past experience and the series of interactions conducted with the customer during the requirement gathering stage. Going back to the example, since biometric requires substantial investment from the client's side, digital certificates score higher than the other modes of implementing authentication. The last column holds the priority level assigned to each sub-parameter. The business domain and the experience of the test expert determine the priority level assigned to each sub-parameter.
For example, consider the case of an Internet banking application. Here, authentication has a slightly higher priority than authorization or access control, whereas in the case of a B2B application, authorization and access control take higher priority. All the parametric values are recorded, and this constitutes a requirement document outlining the areas and the level of testing that has to be executed.

Table 3.1: Sample parametric template (each dimension is scored on a scale of 1-5; priority is on a scale of 1-100)
1. Authentication (corroboration that an entity is who it claims to be): No authentication (0), Password based (3), Smart card and PIN (3), Biometric (4), Digital certificates (5); Priority: 10
2. Authorization: No authorization (0), Role based (3), User based (3), Using electronic signatures for non-repudiation (3), Role and user based (4); Priority: 10
3. Access control: No ACL (0), Context based (3), User based (3), Role based (3), All three (4); Priority: 7
4. Non-repudiation: Nothing (0), Digital signature (3), Encryption and digital signature (4); Priority: 5
5. Audit control: Nothing (0), Firewall/VPN logs (1), Custom-made logs (3), Managed security services (4); Priority: 5

3.2 Parametric Approach to Security Test Analysis and Design
Once the weightages for each security parameter are finalized, the next task is to create test scenarios for testing the security requirements. The weightage assigned to each sub-parameter suggests the number of scenarios that need to be tested for a particular requirement and the complexity involved. The parametric approach also helps in identifying the extent of data that would be needed for test execution, as well as the distribution of data across the various scenarios. It also helps in identifying the kind of tools that will be needed and in determining the appropriateness of a tool for a given type of test. The data recorded with the parametric template can now be tabulated in the form given below:

Table 3.2: Parameter weightages and priorities (weightage on a scale of 1-5; priority on a scale of 1-100)
1. Authentication: weightage 3.0, priority 10
2. Authorization: weightage 2.75, priority 10
3. Access control: weightage 2.6, priority 7
4. Non-repudiation: weightage 2.3, priority 5
5. Audit control: weightage 2.0, priority 5

Table 3.2 comprises the security parameters with their corresponding weightage and priority values. These weightages are the averages of the weightages assigned to the testing activities of each sub-parameter, as illustrated in Table 3.1 (a short illustrative sketch of this aggregation appears at the end of this section). With these values available for each sub-parameter, the test expert is able to plan the test scenarios to the desired level of priority and depth.

3.3 Security Test Bed Implementation
In the case of application-level testing, the approach to testing security vulnerabilities should be more logical. Here, logical security deals with the use of computer processing and/or communication capabilities to improperly access information. Access control can also be divided by type of perpetrator, such as employee, consultant, cleaning or service personnel, as well as by categories of employees. The type of test to be conducted will vary based on the condition being tested and can include:
• Determination that the resources being protected are identified, and that access is defined for each resource, whether for a program or an individual
• Evaluation of whether the designed security procedures have been properly implemented and function in accordance with the specifications
• Attempted unauthorized access in online systems, to ensure that the system can identify and prevent access by unauthorized sources
Most security test scenarios are oriented toward testing the system under abnormal conditions. This throws up the challenge of simulating the real-life conditions that the test scenarios require. There are various tools that can be used to test the vulnerabilities the system may have. The choice of such a tool is determined by the extent of vulnerability the tool is able to test, whether the tool can be customized, and whether it supports testing the system from both the external and the internal views. Local users constitute the internal view, and external users constitute the external view. There are also various programming tools that can be scripted to simulate the different access control breaches that can occur.
4. Interpreting Security Test Reports
Once tests have been executed and security gaps have been identified, the gaps need to be analyzed in order to suggest improvements. The reports need to be validated as many times as possible, since they may be misleading: the actual loopholes might be elsewhere or might not exist at all. Security test reports are like X-ray reports; a security expert is essential to decipher the vulnerabilities reported and bring out the actual problem report. There are various security reporting tools available. While selecting a security reporting tool, the following parameters have to be taken into consideration:
1. The extent of vulnerability the tool is able to report
2. Whether the tool can be customized
3. Whether the tool supports testing the system from both the external and the internal views
Once the analysis of the report is completed, a list of vulnerabilities and the set of security features that are working is generated. The vulnerabilities are then classified; the classification level is determined by the amount of risk involved in the security breach and the cost of fixing the defect. Once the classification is done, each vulnerability is fixed and retested.
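The parametric template lends itself to simple tooling. The sketch below shows one way to derive Table 3.2-style averaged weightages from Table 3.1-style sub-parameter scores and rank the parameters for scenario planning. The data structure and the ranking rule (average weightage multiplied by priority) are illustrative assumptions, not part of the published approach, and the simple arithmetic mean only approximately reproduces the sample values in Table 3.2.

```python
# Illustrative sketch: aggregate Table 3.1-style scores into Table 3.2-style
# averages and rank parameters for test-scenario planning. The aggregation and
# ranking rules are assumptions for illustration only.
template = {
    "Authentication":  {"scores": [0, 3, 3, 4, 5], "priority": 10},
    "Authorization":   {"scores": [0, 3, 3, 3, 4], "priority": 10},
    "Access control":  {"scores": [0, 3, 3, 3, 4], "priority": 7},
    "Non-repudiation": {"scores": [0, 3, 4],       "priority": 5},
    "Audit control":   {"scores": [0, 1, 3, 4],    "priority": 5},
}

def average_weightage(scores):
    # Simple arithmetic mean of the dimension scores; approximately reproduces
    # the averaged values shown in Table 3.2.
    return sum(scores) / len(scores)

ranked = sorted(
    ((name, average_weightage(v["scores"]), v["priority"]) for name, v in template.items()),
    key=lambda row: row[1] * row[2],  # assumed heuristic: weightage x priority
    reverse=True,
)
for name, weight, priority in ranked:
    print(f"{name:16s} weightage={weight:.2f} priority={priority}")
```

Kept as a spreadsheet or small script, such a template can be updated after every project execution, which is exactly how the approach accumulates experiential data.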
White Paper
Conclusion
A Parametric Approach for Security Testing of Internet Applications
A parametric approach to security testing not only works very well for security requirement capture, effort estimation and task planning, but also for testing the different levels of security required by different components within the same application. This approach to requirements gathering proves very handy in capturing those security requirements that cannot be captured by traditional means. It enables one to apply the parametric matrix at each and every stage of security testing and to improve the values in the matrix after each project execution. Because of this ability to incorporate experiential data, the parametric approach can provide the most effective mechanism for security testing of Internet applications.
White Paper
Abstract
DATA ARCHIVAL TESTING
Today, there is an exponential rise in the amount of data being generated by organizations. This explosion of data increases IT infrastructure needs and has an immense impact on some important business decisions that are dependent on proficient data analytics. These challenges have made data archival extremely important from a data management perspective. Data archival testing is becoming increasingly important for businesses as it helps address these challenges, validate the accuracy and quality of archived data and improve the performance of related applications. The paper is aimed at helping readers better understand the space of data archival testing, its implementation and the associated benefits.
White Paper
Introduction
DATA ARCHIVAL TESTING
One of the most important aspects of managing a business today is managing its data growth. For most organizations, the cost of managing data is steadily outpacing the cost of storing it. Operational analytics and business intelligence reporting usually require active operational data. Data that has no current requirement or usage, known as inactive data, can be archived to safe and secure storage. Data archiving becomes important for companies that want to manage their data growth without compromising the quality of data that resides in their production systems. Many CIOs and CTOs are reworking their data retention policies and their data archival and retrieval strategies because of the increased demand for data storage, reduced application performance and the need to comply with ever-changing legislation and regulations.
White Paper
Main Content
DATA ARCHIVAL TESTING
Data Archival Testing: Test Planning
Data archival is the process of moving data that is not required for operational, analytical or reporting purposes to offline storage. A data retrieval mechanism is developed to restore data from the offline storage. The common challenges faced during data archival are:
• Inaccurate or irrelevant data in the data archives
• Difficulty in retrieving data from the archives
Data archival testing helps address these challenges. While devising the data archival test plan, the following factors need to be taken into consideration:
Data dependencies: There are many intricate data dependencies in an enterprise's architecture. The data that is archived should include complete business objects along with metadata, which helps retain the referential integrity of data across related tables and applications. Data archival testing needs to validate that all related data is archived together for easy interpretation during storage and retrieval.
Data encoding: For certain types of data, the encoding in the archival database depends on the underlying hardware. For example, data archival testing needs to ensure that the encoding of numerical fields such as integers also archives the related hardware information, so that the data can be retrieved and displayed correctly in the future on a different set of hardware.
Data retrieval: Data needs to be retrieved from archives for regulatory, legal and business needs. Validating the data retrieval process ensures that the archived data can be easily accessed, retrieved and displayed in a format that can be clearly interpreted without any time-consuming manual intervention.
Data Archival Testing: Implementation
The data archival testing process includes validating the processes of data archival, data deletion and data retrieval. Figure 1 (The Data Archival Testing Process) describes the different stages of a data archival testing process, the business drivers, the different types of data that can be archived and the various offline storage modes.
1. Test the data archival process
• Ensure that the archived business entities include master data, transaction data, metadata and reference data
• Validate the storage mechanism and that the archived data is stored in the correct format. The data also has to be tested for hardware independence
2. Test the data deletion process
• Inactive data needs to be archived and moved to secure storage for later retrieval, and then deleted from all active applications using it. This validation verifies that the data deletion process has not caused any errors in existing applications and dashboards
• When archived data is deleted from the systems, verify that the applications and reports still conform to their performance requirements
3. Test the data retrieval process
• Data that has been archived needs to be easily identified and accessible in case of any legal or business need
• For scenarios that involve urgent data retrieval, the corresponding processes need to be validated against a defined time period
Benefits of Data Archival Testing
The benefits of data archival testing are often interrelated and have a significant impact on a business's IT infrastructure costs. Accomplishing these benefits determines the success of a data archival test strategy. A minimal sketch of an archive-versus-source reconciliation check follows.
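The sketch below shows one minimal form of the archival-process check described above: reconciling an archived table against its source before the source rows are deleted. SQLite, the table names and the fingerprinting method are assumptions made for illustration; production archives would typically live in a separate store and be compared with more granular checks.

```python
# Minimal sketch of an archival-testing check: reconcile an archived table
# against its source before deletion. Database files and table names are
# hypothetical placeholders.
import sqlite3
import hashlib

def table_fingerprint(conn, table):
    """Return (row_count, checksum) for a table, ordered by its first column."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest

def validate_archive(source_db, archive_db, table):
    with sqlite3.connect(source_db) as src, sqlite3.connect(archive_db) as arc:
        src_count, src_sum = table_fingerprint(src, table)
        arc_count, arc_sum = table_fingerprint(arc, table)
    assert src_count == arc_count, f"row count mismatch: {src_count} vs {arc_count}"
    assert src_sum == arc_sum, "content checksum mismatch: archive differs from source"
    print(f"{table}: {arc_count} rows archived and verified")

# Example usage (paths are placeholders):
# validate_archive("orders_2015.db", "orders_2015_archive.db", "orders")
```

A similar fingerprint comparison, run after deletion and again after retrieval, covers the other two legs of the archival testing process.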
White Paper
Conclusion
DATA ARCHIVAL TESTING
Due to critical business needs for data retention, regulatory and compliance requirements, and the need for a cost-effective way to access archived data, many businesses have started realizing the value of, and adopting, data archival testing. Therefore, an organization's comprehensive test strategy needs to include a data archival test strategy that facilitates smooth business operations, ensures fulfillment of all data requirements, maintains data quality and reduces infrastructure costs.
White Paper
Abstract
AN INSIGHT INTO MICROSERVICES TESTING STRATEGIES
The ever-changing business needs of the industry necessitate that technologies adapt and align themselves to meet demands and, in the process of doing so, give rise to newer techniques and fundamental methods of architecture in software design. In the context of software design, the evolution of 'microservices' is the result of such an activity, and its impact percolates down to the teams working on building and testing software within these newer architectural schemes. This white paper illustrates the challenges that the testing world has to deal with and the effective strategies that can be envisaged to overcome them while testing applications designed with a microservices architecture. The paper can serve as a guide to anyone who wants an insight into microservices and would like to know more about the testing methodologies that can be developed and successfully applied while working within such a landscape.
White Paper
Introduction
AN INSIGHT INTO MICROSERVICES TESTING STRATEGIES
Microservices attempt to streamline the software architecture of an application by breaking it down into smaller units surrounding the business needs of the application. The benefits that are expected out of doing so include creating systems that are more resilient, easily scalable, flexible, and can be quickly and independently developed by individual sets of smaller teams. Formulating an effective testing strategy for such a system is a daunting task. A combination of testing methods along with tools and frameworks that can provide support at every layer of testing is key; as is a good knowledge of how to go about testing at each stage of the test life cycle. More often than not, the traditional methods of testing have proven to be ineffective in an agile world where changes are dynamic. The inclusion of independent micro-units that have to be thoroughly tested before their integration into the larger application only increases the complexity in testing. The risk of failure and the cost of correction, post the integration of the services, is immense. Hence, there is a compelling need to have a successful test strategy in place for testing applications designed with such an architecture.
White Paper
Main Content
AN INSIGHT INTO MICROSERVICES TESTING STRATEGIES
Microservices architecture
The definition of what qualifies as a microservice is quite varied and debatable, with some SOA (service-oriented architecture) purists arguing that the principles of microservices are the same as those of SOA and hence, fundamentally, the two are one and the same. Others disagree and view microservices as a new addition to software architectural styles, although there are similarities with SOA in the design concepts. A simpler and easier way to understand what the microservices architecture is about is to understand its key features:
• Self-contained and componentized
• Decentralized data management
• Resilient to failures
• Built around a single business need
• Reasonably small (micro)
These points are not essentially must-haves for a service to be called a microservice, but rather are good-to-haves. Nor is the list a closed one, as it can also include other features that are common among implementations of a microservices architecture. However, the points provide a perspective of what can be termed a microservice.
Challenges in testing microservices
Now that we know what defines a microservice, let us look at the challenges it poses to testers. The distributed and independent nature of microservices development poses a plethora of challenges to the testing team. Since microservices are typically developed by small teams working on multiple technologies and frameworks, and are integrated over lightweight protocols (usually REST over HTTPS, though this is not mandatory), testing teams may be inclined to use the Web API testing tools that are built around SOA testing. This, however, could prove to be a costly mistake, as the timely availability of all services for testing is not guaranteed, given that they are developed by different teams. Furthermore, the individual services are expected to be independent of each other even though they are interconnected. In such an environment, a key factor in defining a good test strategy is to understand the right amount of testing required at each point in the test life cycle. Additionally, if these services integrate with another service or API that is exposed externally, or is built to be exposed to the outside world as a service to consumers, then a simple API testing tool would prove to be ineffective. With microservices, unlike SOA, there is no need for a service-level aggregator like an ESB (enterprise service bus), and data storage is expected to be managed by the individual unit. This complicates the extraction of logs during testing and data verification, which is extremely important in ensuring there are no surprises during integration. The availability of a dedicated test environment is also not guaranteed, as development would be agile and not integrated.
Approach to testing microservices and testing phases
In order to overcome the challenges outlined above, it is imperative that the test manager or lead in charge of defining the test strategy appreciates the importance of Mike Cohn's Test Pyramid and is able to draw an inference of the amount of testing required. The pyramid emphasizes a bottom-up approach to testing, from unit testing through contract and integration testing up to end-to-end UI testing, and draws attention to the number of tests, and in turn the automation effort, that needs to be factored in at each stage: the scope of testing and execution time grow toward the top of the pyramid while the number of tests shrinks. The representation of the pyramid has been slightly altered for the various phases in microservices testing. These are:
i. Unit testing
The scope of unit testing is internal to the service, and in terms of volume, unit tests are the largest in number. Unit tests should ideally be automated, depending on the development language and the framework used within the service.
ii. Contract testing
Contract testing is integral to microservices testing and can be of two types, as explained below. The right method can be decided based on the end purpose that the microservice caters to and how the interfaces with the consumers are defined.
a) Integration contract testing: Testing is carried out using a test double (mock or stub) that replicates the service to be consumed. The testing done against the test double is documented, and this set needs to be periodically verified against the real service to ensure that there are no changes to the service exposed by the provider.
b) Consumer-driven contract testing: In this case, consumers define the way in which they would consume the service via consumer contracts, written in a mutually agreed schema and language. The provider of the service is entrusted with copies of the individual contracts from all consumers. The provider can then test the service against these contracts to ensure that there is no confusion in expectations in case changes are made to the service.
iii. Integration testing
Integration testing is possible when there is an available test or staging environment where the individual microservices can be integrated before they are deployed. Another type of integration testing can be envisaged if there is an interface to an externally exposed service and the developer of that service provides a testing or sandbox version. The reliance on integration tests for verification is generally low when a consumer-driven contract approach is followed.
iv. End-to-end testing
It is usually advised that the top layer of testing be a minimal set, since a failure is not expected at this point. Locating a point of failure through end-to-end testing of a microservices architecture can be very difficult and expensive to debug.
Testing scenarios and test strategy
In order to get a clear understanding of how testing can be carried out in different scenarios, let us look at a few examples that can help elucidate the context of testing and provide a deeper insight into the test strategies used in these cases.
Scenario 1: Testing between microservices internal to an application or residing within the same application
This is the most commonly encountered scenario, where small sets of teams are redesigning an application by breaking it down from a monolithic architecture into microservices. In this example, we consider an e-commerce application that has two services, a) selecting an item and b) reserving an item, which are modelled as individual services. We also assume there is close interaction between these two services and that the parameters are defined using agreed schemas and standards.
• For unit testing, it would be ideal to use a framework like xUnit (NUnit or JUnit). The change in data internal to the application needs to be verified, apart from checking the functional logic. For example, if reserving an item provides a reservation ID on success in the response to a REST call, the same needs to be verified within the service for persistence during unit testing.
• The next phase of testing is contract testing. If there are several dissimilar consumers of the service within the application, it is recommended to use a tool that enables consumer-driven contract testing; open source tools like Pact, Pacto, or Janus can be used. This is discussed in further detail in the last example, so in the context of this example we assume that there is only a single consumer of the service. For such a condition, a test stub or a mock can be used for integration contract testing (a minimal sketch of such a test-double check appears at the end of this section). Data being passed between the services, for example the item ID passed from the service that selects an item to the one that reserves it, needs to be verified and validated using tools like SoapUI.
• E2E tests should ensure that the dependency between the microservices is tested in at least one flow, though extensive testing is not necessary. For example, an item being purchased should trigger both the 'select' and 'reserve' microservices.
Scenario 2: Testing between internal microservices and a third-party service
Here, we look at a scenario where a service within an application consumes or interacts with an external API. In this example, we consider a retail application where paying for an item is modelled as a microservice that interacts with the PayPal API exposed for authenticating the purchase. The testing strategy in each phase of the test cycle in this case is as follows:
• Unit tests should ensure that the service model caters to the requirements defined for interacting with the external service, while also ensuring that the internal logic is maintained. Since there is an external dependency, requirements need to be clearly defined, and hence documenting them remains key. A TDD approach is suggested where possible, and any of the popular frameworks discussed in the previous example can be chosen for this.
• Contract testing can be used in this case to test the expectations of the consumer microservice, that is, the application's internal service, decoupling it from the dependency on the availability of the external web service. In this context, test doubles created using tools like Mockito or Mountebank can be used to stand in for the PayPal API's implementation during testing. This is essentially integration contract testing and again needs to be verified periodically against a live instance of the external service, to ensure that there is no change to the external service that is published and consumed.
• Integration tests can be executed if the third-party application developer provides a sandbox (for example, PayPal's Sandbox API) for testing. Live testing for integration is not recommended. If no sandbox is available, integration contract testing needs to be exercised thoroughly to verify the integration.
• E2E tests should ensure that there are no failures in other workflows that integrate with the internal service. A few monitoring tests can also be set up to ensure that there are no surprises. In this example, selecting and purchasing an item (including payment) can be considered an E2E test that runs at regular, pre-defined intervals to spot any changes or breaks.
Scenario 3: Testing for a microservice that is to be exposed to the public domain
Consider an e-commerce application where retailers can check the availability of an item by invoking a Web API.
• Unit tests should cover the various functions that the service defines. Including TDD can help here to ensure that the requirements are clearly validated during unit testing. Unit tests should also ensure that data persistence within the service is taken care of and passed on to other services that it might interact with.
• Contract testing: In this example, consumers need to be set up using tools that help define contracts, and the expectations from a consumer's perspective need to be understood. The consumers should be well defined and in line with the expectations of the live situation, and contracts should be collated and agreed upon. Once the consumer contracts are validated, a consumer-driven contract approach to testing can be followed. It is assumed that in this scenario there would be multiple consumers, and hence individual consumer contracts for each of them. For example, in the above context, a local retailer and an international retailer can have different methods and parameters of invocation; both need to be tested by setting up contracts accordingly. It is also assumed that consumers subscribe to the contract method of notifying the provider, via consumer contracts, of the way they would consume the service and the expectations they have from it.
• E2E tests: A minimal set of E2E tests would be expected in this case, since interactions with external third parties are key here.
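The sketch below shows one minimal form of the integration contract testing described for Scenario 1: the consumer exercises a hand-rolled test double of the 'reserve an item' provider and asserts the response shape it depends on. The client class, endpoint, field names and values are hypothetical; real projects might use tools such as Pact or Mountebank instead of this ad-hoc stub, and the same assertions would be replayed periodically against the live provider to detect contract drift.

```python
# Minimal sketch of integration contract testing with a test double, loosely
# based on the select/reserve example. All names and URLs are hypothetical.
import unittest

class ReservationClient:
    """Consumer-side wrapper around the 'reserve an item' service."""
    def __init__(self, http_get):
        self._http_get = http_get  # injected transport, e.g. requests.get in production

    def reserve(self, item_id, date):
        response = self._http_get(
            "https://inventory.example.com/reservations",
            params={"item_id": item_id, "date": date},
        )
        return response.json()

def stub_reserve_service(url, params):
    """Test double standing in for the provider's /reservations endpoint."""
    class StubResponse:
        def json(self):
            # The contract the consumer relies on: these keys must always be present.
            return {"reservation_id": "R-1001", "item_id": params["item_id"], "status": "RESERVED"}
    return StubResponse()

class ReservationContractTest(unittest.TestCase):
    def test_reserve_response_matches_consumer_contract(self):
        client = ReservationClient(http_get=stub_reserve_service)
        body = client.reserve(item_id="ITM-42", date="2018-06-01")
        self.assertIn("reservation_id", body)
        self.assertEqual(body["item_id"], "ITM-42")
        self.assertEqual(body["status"], "RESERVED")

if __name__ == "__main__":
    unittest.main()
```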
White Paper
Conclusion
AN INSIGHT INTO MICROSERVICES TESTING STRATEGIES
Improvements in software architecture have led to fundamental changes in the way applications are designed and tested. Teams testing applications developed in the microservices architecture need to educate themselves on the behavior of such services, as well as stay informed of the latest tools and strategies that can help deal with the challenges they could potentially encounter. Furthermore, there should be a clear consensus on the test strategy and approach to testing. A consumer-driven contract approach is suggested, as it is a better way to mitigate risk when services are exposed to an assorted and disparate set of consumers, and it further helps the provider deal with changes without impacting the consumer. Ensuring that the required amount of testing is focused at the correct time, with the most suitable tools, will ensure that organizations are able to deal with testing in such an environment and meet the demands of the customer.
White Paper
Introduction
ENHANCING QUALITY ASSURANCE AND TESTING PROCEDURES
In today's world, most large and mid-size organizations have opted to centralize their software quality assurance (QA) and testing functions. If you are part of such a dedicated QA and testing team and are looking to learn more about the latest QA trends, then this paper is for you. The World Quality Report 2015-2016 indicates an average expenditure of 26% in 2014 and 35% in 2015 on QA and testing. In fact, many organizations began allocating a yearly testing budget a decade ago or even earlier. These budgets cover the actual testing as well as the testing processes, procedures, tools, and so on. But what happens if the defined processes are not implementable, or the teams find them outdated and are unable to stick to these standards? The obvious outcome in such a scenario is a decrease in QA effectiveness, an increase in time taken, and team frustration, all leading to lower production quality. That is why testing processes need continuous review and enhancement, more so with newer technologies and shorter sprints (idea to production). In this paper, I have outlined seven key areas that the QA and testing function must focus on to enhance their organizational maturity and bring innovation into their day-to-day work.
White Paper
Main Content
ENHANCING QUALITY ASSURANCE AND TESTING PROCEDURES
Defining quality
The ISO standard defines quality as 'the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.' The important part of this definition is conformance to requirements and the expectations from the quality assurance function. Quality, contextually, depends on the organizational setup, business demands, and the inherent nature of business competition in the era of mobile and increased social interaction. In my opinion, the first step to improving quality should be to understand the expected level of quality. Accordingly, a decision can then be made whether to establish a dedicated testing function or simply follow a federated model. In both cases, a basic discipline should be instituted to ensure sound software quality processes and periodic enhancement of the methodology, lifecycle, procedures, and so on. Such discipline will ensure that products and services satisfy the stated or implied needs.
Standardize, centralize and optimize
As discussed, depending on the needs and expectations from the quality function, organizations can choose whether or not to centralize it. Let us discuss the challenges, and the possible steps to address them, in case the organization intends to centralize its QA and testing function in a Testing Center of Excellence (TCoE).
Challenges: TCoE processes are time-consuming and expensive. In addition, the application development team often spends a large portion of its time explaining requirements or creating various builds for QA. These challenges are magnified when an organization has multiple lines of business (LOBs) and lacks common ground to leverage each LOB's capabilities and strengths.
Solution: Think of a scenario where each LOB follows similar processes. Would this not help integrate more easily? In my experience, it certainly would. So the sequence of centralization should be to first standardize processes and tools for each LOB, and then proceed with centralization. These foundational steps will ensure optimization of idle resources, tools and tool licenses, and a lower total cost of ownership. Dashboards can provide critical QA analytics here.
Improve QA processes
• QA and test maturity assessment: To baseline and improve the organizational QA capability, it is recommended to measure the maturity of existing processes and tools.
• Test governance and clear policies: Just as you cannot navigate to a new place without a map, QA teams need clear direction in terms of the test methodology, how the testing lifecycle aligns with the development lifecycles, and the responsibilities of a tester, test lead and test manager.
• Test management process (TMP): The TMP is an artifact that can be developed at the organizational level, while individual lines of business or application areas can develop their specific test strategies. For example, the strategy can outline its strategic direction on which areas would get automated, which customer-facing applications would be piloted for security testing, and which applications would go mobile or be hosted on the cloud; these may be piloted with mobile- or cloud-based testing. The executive and operations committee, once instituted, should liaise between the business, application development, and operations teams to align QA and testing methodologies with them.
• Shift left and get the requirements right: It is proven that a shift-left strategy in the software development lifecycle (SDLC) helps find issues earlier. The industry is moving toward using a single application lifecycle and finding ways for different teams to collaborate more closely and become agile in responding to each other's needs. Reducing requirement volatility and developing agile teams can significantly improve strained dialogues between business and IT.
• True vs. hybrid Agile and DevOps: Again, this depends on how we define quality. If the needs of the organization fit well with a hybrid agile model, then advocating true agile processes would be premature. To achieve reduced cycle time and quicker time-to-market, continuous integration, continuous development and continuous testing concepts are commonly used.
• Smoke test / build quality: Approvals, UAT support, and metric-based focus areas for regression help break silos with the development teams and the business (a top-down and bottom-up approach).
A new-age testing saga
Many IT professionals often face the question: what's next? I am sure you must have come across such situations too. To answer it, here are the latest trends in QA that organizations can leverage to reduce the risk to IT applications: predictive analytics, service virtualization, data testing and test data management, mobile- and cloud-based testing, risk-based and combinatorial testing, and infrastructure testing.
Predictive analytics
Like other industries, predictive analytics and machine learning concepts are now being leveraged in software QA and testing as well. Most QA organizations accumulate a huge amount of data on defects and on test cases prepared and executed. Just as Facebook can predict and show what you may like and Netflix knows what type of movies you may like, QA teams can now predict the type of defects that may occur in production, or the error-prone areas of an application or the entire IT landscape, based on production or past QA defect and failed test case information.
Service virtualization
In today's world, different teams working on multiple applications under the same or different programs often reach a point where one team cannot develop or test because a second application is not ready. In such situations, it is best to adopt service virtualization. This concept is mainly based on the fact that common scenarios can be simulated using a set of test data, allowing interdependent teams to proceed without having to wait.
Data testing and test data management (TDM)
A majority of organizations have immense data issues, including data quality, availability, masking, and more. The system integration testing (SIT) and user acceptance testing (UAT) teams can enhance the effectiveness of testing by leveraging the various test data tools available. Apart from the tools, test data management is becoming an integrated part of the shared services organization. Many financial organizations across the globe have dedicated TDM functions to manage their test data as well as to support various teams in creating test data.
Mobile- and cloud-based testing
Mobile devices are ubiquitous these days. Today's mobile world is not just about smartphones or tablets; it is pervasive, with handheld devices in retail stores, point of service (POS) systems, mobile payment devices, Wi-Fi hotspots, and so on. The list goes on. In the QA world, these pose unique challenges.
For example, these devices and applications need to perform at speed under various network conditions, using different browsers, operating systems, and many more such combinations. Club this mobile challenge with applications and data hosted in cloud environments such as Microsoft Azure or Amazon Web Services, to name a few, and the testing team's challenges are magnified manifold. Since most organizations are not really equipped with mobile test labs, these are areas where they can tie up with various vendors to perform mobile testing. Another trend that helps overcome these challenges is the adoption of newer methodologies such as agile Scrum, test-driven development, behavior-driven development, and DevOps. However, most of these methodologies demand progressive automation or model-based testing concepts, where testers may need to be reskilled to wear multiple hats.
Risk-based testing (RBT) / algorithm and combinatorial testing
RBT is not a new concept and we all apply it in almost every project, in one way or another. However, depending on the nature of the project or application, RBT can be tricky and risky. QA and testing teams need tools that can generate the various permutations and combinations to test optimally and reduce cost. For instance, in mobile testing you may come across many operating systems and browsers, and hence many possible permutations and combinations. Combinatorial testing is another technique that has gained fresh momentum in recent years, and organizations can now use tools to derive an optimal set of combinations when attempting to test a huge number of possible scenarios (a small sketch of generating such combinations appears after the automation discussion below).
Infrastructure testing
The recent Galaxy Note 7 debacle cost Samsung millions, and this is not a stray incident. In fact, the list is endless, making it important to thoroughly test the infrastructure. Many organizations now have dedicated infrastructure testing teams working in the shared services model. It is recommended to review infrastructure testing needs and ensure that the services are well aligned with the IT infrastructure teams who provision the internal and external hardware needs such as VDI, Windows patches, databases, and so on.
Automated testing
As per the latest QA trends, automated testing is now a necessary testing type, as opposed to being optional five to seven years back. Many leaders still question the value of automation: what is the ROI, how does automation directly benefit us, and so on. In my view, the key is to do it right. For instance:
• Allocate automation funding for applications, rather than seeking funding from projects to develop new automation scripts and maintain the automation framework
• Automate regression testing and not functional testing
• Establish an application-specific regression baseline
• Perform impact analysis using predictive analytics, as described earlier, and plan for automated testing at the release level instead of on a project basis or simply as funded
• Track automation ROI and coverage metrics and showcase the value of automation as compared to manual regression
• Adapt and enable existing automation to take up new methodologies and technologies related to agile Scrum, test- and behavior-driven development, and the DevOps model (as discussed in the new-age testing saga section)
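The combinatorial testing paragraph above is about shrinking a huge test matrix to an optimal subset. The sketch below is one illustrative way of doing that for a small mobile-testing matrix: it enumerates the exhaustive set of OS/browser/network combinations and then greedily picks a smaller suite that still covers every value pair. The parameter values and the greedy heuristic are assumptions for illustration; dedicated combinatorial testing tools use more sophisticated algorithms.

```python
# Illustrative sketch of combinatorial (pairwise) test selection.
# The parameters and values are hypothetical examples, not from the paper.
from itertools import combinations, product

parameters = {
    "os": ["Android", "iOS", "Windows"],
    "browser": ["Chrome", "Safari", "Firefox"],
    "network": ["3G", "4G", "Wi-Fi"],
}

names = list(parameters)
full_matrix = list(product(*parameters.values()))
print("Exhaustive combinations:", len(full_matrix))  # 27 for this matrix

def uncovered_pairs(combo, covered):
    # Value pairs in this combination that the suite has not covered yet.
    return {((i, a), (j, b))
            for (i, a), (j, b) in combinations(enumerate(combo), 2)
            if ((i, a), (j, b)) not in covered}

all_pairs_needed = {((i, a), (j, b))
                    for i, j in combinations(range(len(names)), 2)
                    for a in parameters[names[i]]
                    for b in parameters[names[j]]}

covered, suite = set(), []
while covered != all_pairs_needed:
    # Greedily pick the combination that covers the most new pairs.
    best = max(full_matrix, key=lambda c: len(uncovered_pairs(c, covered)))
    covered |= uncovered_pairs(best, covered)
    suite.append(best)

print("Pairwise-reduced suite:", len(suite), "combinations")
for combo in suite:
    print(dict(zip(names, combo)))
```

Even this naive heuristic cuts the 27 exhaustive combinations down to roughly a third while still exercising every OS/browser, OS/network and browser/network pair, which is the essence of the optimization the trend describes.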
Test environment, data and security
The test environment, test data, and overall IT security challenges are the most widely agreed upon and accepted challenges. However, many organizations find it very difficult to build multiple QA environments and replicate their production processes. While conducting a process maturity assessment, I was surprised to hear that a bank had spent more than USD 1 million and still failed to implement a production-like environment. Part of the reason is a lack of planning and budgeting itself, which is key to an effective testing environment. And if such initiatives are budgeted but not approved, they take a back seat and the organization continues to solve problems reactively instead of proactively. In my experience, successful organizations typically centralize these two functions and form test environment and test data management teams. These teams are responsible for ensuring the right data in the right environment at the right time. Both functions can benefit from developing an operating and engagement model that allows them to get funding, manage service requests, and obtain the necessary access to applications, jobs and data. For security testing, there are many tools available in the market these days, but it is recommended to look for tools that can integrate with application development and testing tools as well as support cloud and mobile infrastructure.
Metrics, dashboard and analytics
Metrics and dashboard concepts are not new, but how the data is collected, retrieved, processed, displayed, and finally analyzed to make informed decisions has surely changed. There are many tools in the market that can integrate with multiple technology platforms and provide drill-down capability in a very interactive manner. Some tools that are gaining popularity are Tableau, QlikView, and the like, while many organizations develop in-house tools or leverage SharePoint as their metrics tool. Whatever the choice of tool, below are some key considerations that QA managers and leaders would find beneficial:
• Capture and communicate key performance indicators (KPIs) to senior management on production and QA defects, engagement feedback, cost avoidance, and application-level defect density
• Define project-level vs. aggregated views of the metrics
• For multiple departments or lines of business, apply a consistent database schema
• Define a standard folder structure in the available QA or test management tools
• Develop integration for analytics tools
• Define the key metrics to track and ensure data accuracy and quality
• Ensure automatic generation and analytical capability to assist in decision making
• Develop QA-specific predictive analytics; for example, production and QA defect data can be used to predict potential areas for rigorous functional or regression testing, an upcoming trend (a brief illustrative sketch follows)
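The final bullet above describes predicting defect-prone areas from historical defect and test data. The sketch below shows one minimal way this could look: the feature names, sample records and the choice of a scikit-learn decision tree are illustrative assumptions only, and a real implementation would draw on the organization's own defect and test management data.

```python
# Illustrative sketch of QA predictive analytics: flagging modules likely to be
# defect-prone from historical release data. All data below is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Historical features per module: [code churn, failed test cases, past production defects]
history = [
    [120, 14, 6],
    [300, 40, 18],
    [45,  2,  0],
    [210, 25, 9],
    [15,  1,  0],
    [180, 22, 11],
]
defect_prone = [1, 1, 0, 1, 0, 1]  # label from past releases: 1 = needed heavy regression

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(history, defect_prone)

# Score the modules of an upcoming release to focus regression effort.
upcoming = {"billing": [250, 30, 12], "profile": [30, 3, 1], "search": [140, 18, 7]}
for module, features in upcoming.items():
    risk = model.predict_proba([features])[0][1]
    print(f"{module}: predicted defect-proneness {risk:.0%}")
```

Scores like these can feed the dashboards described above, so that regression depth for each release is decided from data rather than intuition.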
White Paper
Conclusion
ENHANCING QUALITY ASSURANCE AND TESTING PROCEDURES
In summary, it is more beneficial to know 'what' is happening at the macro level than 'why' it is happening at the micro level. While it is important to measure precisely, the huge amount of QA data, such as counts of test cases prepared and executed and effort consumed, makes it more beneficial to understand the trend at a high level rather than the detailed statistics. This will help define quality in the context of an individual organization, as opposed to an industry-standard QA definition. Such a definition can then guide the organization-specific QA metrics to collect, the new-age testing types and methodologies to adopt, as well as any considerations around automation, improvements in QA processes, and supporting elements such as test data and test environment build-up.
