In this foundation course, we will introduce the various aspects of test automation that you should take into account before adopting a new test automation tool, plus practical approaches to building an efficient test automation strategy. Whatever your professional role (project manager, client, developer, manual or automation tester), getting back to the basics of test automation is essential.
Over eight chapters, we will walk you through everything from objectives and influencing factors to the test automation pyramid, tool selection, architecture, scripting approaches, and reporting.
Chapter 1 - Test Automation Objectives
To start this course, chapter 1 covers the objectives of test automation. It's important not only for testers and developers but for everyone who participates in the software development process to know the typical purposes, potential benefits, and possible risks of test automation.
Now, let's get to know some basic concepts. In software testing, test automation covers one or more of the following tasks: using tools to control and set up preconditions, executing tests, and comparing the outcomes with the expected results. Remember, when applying test automation, objectives vary from team to team based on their real scenarios and needs. Before getting more in-depth, spend 5 seconds thinking about what people use test automation for.
You're right! Among those purposes, here are 3 common objectives that most teams define when applying test automation. First, test automation improves the efficiency of testing by supporting manual test activities throughout the test process. It helps your team reduce the burden of repetitive work, such as running regression tests whenever changes appear, freeing you to design new tests or explore the product in production. Second, test automation automates activities that cannot be executed manually. For example, adding one thousand entries to your database for performance testing is a pain to do manually.
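As a sketch of that kind of setup automation, the snippet below uses Python's built-in sqlite3 module to insert a thousand rows in one call. The table name, schema, and row contents are invented for illustration; this is not a specific tool's API.

```python
import sqlite3

def seed_entries(conn, count=1000):
    """Insert `count` synthetic rows so performance tests have data to work on."""
    conn.execute("CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany(
        "INSERT INTO entries (name) VALUES (?)",
        [(f"entry-{i}",) for i in range(count)],
    )
    conn.commit()

# An in-memory database stands in for a real test environment's database.
conn = sqlite3.connect(":memory:")
seed_entries(conn)
row_count = conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
print(row_count)  # 1000
```

A task that would take hours by hand completes in milliseconds, which is exactly the kind of activity that only makes sense automated.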
Furthermore, test automation increases the reliability of testing. For a better understanding, imagine yourself as a tester who needs to test a login form. If the process of testing the login form is done the same way each and every time, then the reliability of that process remains constant over time. By writing one automated test script, you can test your login form across multiple environments, scenarios, and times, hence increasing the reliability of your testing.
Now you've discovered how test automation helps you achieve your ultimate testing goals. However, simply acquiring a tool does not guarantee success. Each new tool introduced into an organization will require effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.
Let's walk through some outstanding benefits that your team will gain when applying test automation. First, test automation gives early and frequent feedback about software quality. Because the tests are run by tools, you also eliminate the risk of biased or subjective evaluation.
In addition, using automated tools reduces human effort in real-time, remote, or parallel tests. As a result, more tests can be run faster. For instance, a unit test can run within milliseconds with the assistance of test automation. Last but not least, test automation enhances consistency and repeatability, because tests are executed by the tool in the same order and at the same frequency. For instance, you may set up a nightly build to run regression or sanity tests on staging or production.
Keep in mind that a successful plan depends largely on how software development teams utilize test automation to maximize productivity and mitigate the risks that may occur. Everything has two sides, and test automation is no exception. Besides the advantages, there are many concerns when using tools to support testing. Let me take you through this part.
First, the time, cost, and effort for the initial introduction of a tool may be underestimated. For example, your team may need guidance from external automation experts for setup and training. To achieve significant and continuing benefits from test automation, you may also need to change your current test process, not to mention make continuous improvements throughout your tool implementation process.
Another problem arises when version control of test assets is neglected, letting the test automation code drift out of sync with your production code. I know you may feel overwhelmed right now, but hang on, we still have two more important risks to pay attention to.
There is a wide range of market offerings, from open-source automated testing tools to free and commercial versions. With open-source tools, your team may face problems if a project is abandoned, with no further updates or new features added. With commercial tools, the vendor may respond poorly to requests for support, upgrades, and defect fixes.
Further details about these types of tools are coming in later chapters. To summarize, chapter 1 introduced the objectives, benefits, and risks that you need to focus on when evaluating and selecting a test automation solution.
Chapter 2 - Application Under Test’s Environmental Factors Influencing Test Automation
In the previous chapter, we have learned the key objectives of test automation as well as the potential benefits and risks that come with it. In order to succeed in using test automation, it is important to consider all the factors that influence your implementation. This chapter will walk you through the critical factors that might influence your success.
There are many factors that influence test automation. In this chapter, we categorize them as organizational factors and the application under test's environmental factors. First, let's go through organizational factors. Organizational factors refer to those within the company or the team itself, namely the background skills and knowledge of your testers.
For instance, if you're a manual tester, using an automation library like Selenium to kickstart a test automation framework will be problematic. In this case, intensive technical knowledge and advanced programming skills are required. Keep in mind that a test automation solution is itself software, built to test other software products.
So, if you use a test automation tool made by a third party, like Katalon Studio, manual work will be reduced significantly. All you really need to do is manage and generate test cases. As you become more experienced, you can start building your own test cases or test scripts, as tools like this support script extension to build custom keywords.
Now let's go into detail about the application under test and its environmental factors. When evaluating the context of the application under test and its environment, the factors that influence test automation may include the interfaces, architecture, size, and complexity of the application under test, as well as third-party software.
First, for the application under test's interfaces, the user interface isn't the only one that needs to be considered. Think about APIs, CLIs, or voice commands as well. Since no test automation tool works on every interface, as a tester you should carefully pick the right one. Second, your system's architecture also impacts your test automation solution: a monolithic architecture surely needs a different approach than a microservices architecture.
Next up, how about the size and complexity of the application under test? If your system is small and simple, a framework meant for a more complicated setup would be overkill, and vice versa. For instance, when you need to test the various steps and features of a complex web application, such as online payment, a simple test automation tool might not be able to help. You might attempt to integrate many tools from multiple vendors, such as test design tools, test execution tools, and test reporting tools.
But what if these tools do not interoperate? To solve this problem, you should choose a QA orchestration (ecosystem) platform that bundles multiple solutions, like Katalon: Katalon Studio to design tests easily and quickly, Katalon Test Engine to run tests from CI tools, and TestOps for reporting.
Lastly, our products rarely stand alone; they integrate with third-party software as well. Oftentimes, complex systems consist of various components, including third-party software, both internal and external. Because these components are managed by third parties, they might change suddenly, which can affect your systems, or at least a feature in them. You should therefore consider test automation that helps prevent or detect these problems early. Thus, automated integration testing or contract testing should be supported by your test automation solution.
Choosing the wrong test automation solution might make the job even more difficult, which defeats the purpose of using test automation in the first place. In each context, you need to consider all the key characteristics of the application under test, as well as the maturity of your team, to pick the most suitable solution.
Chapter 3 - Test Automation Pyramid: A Simple Strategy for Your Tests
Welcome back to the Katalon course. In this chapter, we will dive into the basic concepts of the test automation pyramid and the strategy for applying various types of tests and testing frequencies at each level. Before launching to the market, every software system needs testing at different levels to ensure high quality for specific objectives. Typical test levels consist of, from the pyramid's base to its top: unit testing, integration testing (or service testing), system testing, and acceptance testing. All will be discussed later in this chapter.
Generally, unit testing comes first, where modules, features, and functions are tested separately. Integration testing takes place next to test the interactions between those functions or between systems; the latter is often called system integration testing. Next up, system testing is where the complete system is validated for both functional and non-functional aspects before moving to the last level. User acceptance testing is conducted by clients or end-users to validate whether the application meets their expectations. Although all of these testing activities can be done manually or automatically, acceptance testing is usually conducted manually.
The test automation pyramid was first introduced by Mike Cohn in his book "Succeeding with Agile." It emphasizes having a large number of tests at the lower levels (the bottom of the pyramid), with the number of tests decreasing as development moves to the upper levels. In Agile projects, this concept is closely related to shift-left, where defects are found and eliminated as early as possible in the life cycle.
Before getting more in-depth about unit testing, let's pin down the definition of a unit. In short, a unit is a piece of functionality you can test independently. For instance, each unit corresponds to a small element of a LEGO duck that can be separated from the rest. To be more specific, unit tests call a function with different parameters and check that it returns the expected values. In this process, external dependencies are removed by applying a test implementation or a mock object created by a testing framework. Thus, what you need here is a suitable unit testing framework supporting your programming language.
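To make the mock idea concrete, here is a minimal sketch using Python's built-in unittest.mock. The convert function and its rate service are hypothetical stand-ins for a unit with an external dependency; they are not from a real application.

```python
from unittest.mock import Mock

def convert(amount, rate_service):
    """Convert an amount using an exchange rate fetched from an external service."""
    rate = rate_service.get_rate("USD", "EUR")
    return round(amount * rate, 2)

# The real service is replaced by a mock, so the unit is tested in isolation:
# no network call, a fixed rate, and a deterministic, millisecond-fast test.
mock_service = Mock()
mock_service.get_rate.return_value = 0.9

result = convert(100, mock_service)
assert result == 90.0
mock_service.get_rate.assert_called_once_with("USD", "EUR")
```

Because the dependency is mocked, the test also verifies *how* the unit talks to its collaborator, not just the return value.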
Why do we need unit testing in Agile projects? Because the risk of introducing regressions is high in Agile due to extensive code churn. To prevent the software build from breaking, development teams usually run automated unit tests before the source code is checked into the mainline (e.g. UAT or main) of the CM system (e.g. GitHub or Bitbucket). Keep in mind that automated unit test results provide immediate feedback on code and build quality, but not on product quality.
Moving up to the next level, we'll talk about integration testing, a type of software testing that focuses on interactions between components or systems. This level is often called "service testing," which implies that testing is performed at the interfaces to verify the interactions of integrated services. There are many ways to define integration testing, so it's important to ask yourself what exactly you need to test. Perhaps you need to verify calls to another application's APIs, the communication between modules within a system, or even operations between hardware and software.
There are two levels: component integration testing and system integration testing. To put it simply, let's say your system comprises two modules, A and B. After finishing unit tests, you start verifying whether these two modules communicate properly in both functional and non-functional aspects: that's component integration testing. It's a different story when you have to test between two different systems. For example, after your tests are executed in Katalon Studio, the testing reports are automatically sent to you through TestOps, an external service. Testing the communication between these two systems is system integration testing.
As mentioned above, each test level has its own objective. Integration testing helps ensure effective interaction between internal and external services, whereas unit testing verifies an individual function or service. However, integration testing requires more effort, since you have to merge the related elements and deploy them to a test environment. This is true for both manual and automated testing.
There's one technique that eases the integration testing problem: contract testing. It is a methodology for ensuring that two separate systems (such as two microservices) are compatible with one another. It captures the interactions exchanged between the services, stores them in a contract, then verifies that both parties adhere to it.
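Real contract testing tools (such as Pact) manage contracts and provider verification for you, but the core check can be sketched by hand. The field names and the stubbed provider responses below are invented for illustration:

```python
# A consumer-side contract: the fields and types the consumer relies on.
contract = {"id": int, "email": str, "active": bool}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """Check that the provider's response contains every field the consumer
    expects, with the expected type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# Stubbed provider responses standing in for real API calls.
good = {"id": 7, "email": "a@example.com", "active": True, "extra": "ignored"}
bad = {"id": "7", "email": "a@example.com"}  # wrong type, missing field

print(satisfies_contract(good, contract))  # True
print(satisfies_contract(bad, contract))   # False
```

If the provider changes a field the consumer depends on, this check fails on the consumer's side before anything is deployed, without standing up both systems in a shared test environment.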
Coming to the next level, it's now time for user interface testing, often called UI testing. With web interfaces, your UI testing can achieve two objectives. The first is to make sure that your application's presentation layer works well across multiple browsers (such as Chrome and Microsoft Edge), various devices (such as PCs, tablets, and mobile devices with different screen sizes), and different platforms (such as iOS and Android). You may be thinking of visual testing here, and yes, there are tools that support this testing type.
The second objective is UI end-to-end testing, which mainly aims to verify the business processes and workflows of the application under test. Because you need to deploy the complete application under test to the test environment, this is definitely a costly process. Moreover, you might need plenty of end-to-end test scenarios, from simple to complex, to cover the functional and non-functional requirements of the application under test.
Besides manual testing, regression test suites are usually automated to run faster, in parallel, and consistently. Among popular frameworks, Selenium has constantly been a top choice for UI end-to-end testing. With Selenium, you can pick any browser and let it automate your website, submit various data, and pick up changes in the user interface. These automated tests can run through a visible browser driver or in headless mode. A headless browser runs without a visible UI shell, which suits automated testing in server environments and makes it easy to integrate your tests into CI/CD tools. Nowadays, almost all browsers support headless mode.
When working with UI automated testing, you may face many obstacles. Typically, the 3 common challenges are:
- Applications change frequently - for instance, your competitors have just released a new feature, or there are new technologies to adopt into your current application.
- Different ways of error handling in the same system - for example, within the same web system, some screens/pages use inline error messages while others use dialogs or JavaScript popups to show error messages.
- And cross-browser testing - with websites, you need to make sure that your system works well on all supported browsers, such as Chrome, Firefox, and IE11.
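One pragmatic way to tame the second challenge is to hide the variation behind a single helper, so individual test scripts don't care where a screen happens to show its message. This is a sketch only: the dict-based "page" and its keys are invented stand-ins for a real page object.

```python
def get_error_message(page):
    """Return the visible error message, wherever this screen shows it.
    `page` stands in for a real page object; each key mimics one UI style."""
    for style in ("inline_error", "dialog_error", "popup_error"):
        if page.get(style):
            return page[style]
    return None

# Two screens of the same system with different error-presentation styles.
login_page = {"inline_error": "Invalid password"}
checkout_page = {"dialog_error": "Card declined"}

print(get_error_message(login_page))     # Invalid password
print(get_error_message(checkout_page))  # Card declined
```

When a screen's error style changes, only the helper (or the page object behind it) needs updating, not every test that checks for errors.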
To resolve these obstacles, you need a solid test automation solution, which normally demands intensive automation testing knowledge and programming skills from your team. Luckily, there are codeless UI testing solutions available, such as Katalon Studio, which let you test your web, desktop, and mobile applications without writing a single line of code.
So, in a nutshell, we've walked you through the overall concept of the test automation pyramid, along with the benefits and considerations of applying test automation. Though detecting new bugs is not its main responsibility, automated regression testing does verify that your existing features keep functioning, especially when code changes are frequent. Remember that regression testing should be automated to gain a higher ROI.
Chapter 4 - Tool Evaluation and Selection
Welcome back to Katalon courses! In the previous lesson, we talked about the test automation pyramid, which shows the kinds and amounts of tests needed at each testing level. In this chapter, we will discuss what factors you should consider when evaluating and selecting the best-fit test automation tool for your team's projects.
Now, we'll walk you through the differences between free and paid tools, with some examples of how to choose the most suitable one for your team. There are a variety of issues that your team must consider when selecting test tools. Among the top considerations, many teams base the decision on which type of tool would best serve their needs: a freebie or a paid solution.
Let's be honest, have you ever wondered why the tool selection stage is so important? Simply because starting with the best-fit tool allows your team to reap its full potential during the whole project, all the way from the software development steps to long-term operation and maintenance. With the right tool, QA teams have an effective way to mitigate hidden risks, like regression risks that may occur due to constant updates and maintenance. To ensure the success of testing projects, your team must carefully investigate the total cost of ownership throughout the expected life of the tool by performing a cost-benefit analysis. This topic is covered later in the ROI section.
Now, we'll take a closer look at the two types of test automation tools, one of which will be the best-match solution for your team. First, you have the option of a free and open-source tool. Their most significant advantage is that they come with no license fees. With their proven flexibility and open capabilities, you can customize them and create a more tailored solution for your in-house framework.
Of course, intensive training and relevant coding knowledge are a basic need for this type of tool. Besides, the frequency of new updates, such as new features or bug fixes, is one more thing to consider when using free or open-source tools. Without timely announcements and dedicated support from a vendor or the community, your team has to keep an eye on all new updates and latest features itself, especially breaking changes.
Commercial tools, whose main disadvantage is requiring paid licenses, sometimes also offer free trials for a few weeks or free versions for personal use only. Unlike free tools, commercial tools let testers start with a lot of built-in functions, such as test design via a UI with no programming skills needed, and they call for less effort to maintain test assets.
Regarding feature enhancements and support, paid licenses protect your test automation solution with a warranty plan, with updates and defect fixes released regularly within a certain amount of time. For small testing teams with little-to-no programming skills, it may be better to start with a vendor-based tool instead of a free option, based on the pros and cons we already mentioned. Although teams have to pay for licenses, developing and maintaining test automation frameworks and test cases is much easier.
Regardless of whether you choose a free or paid tool, you should start with a proof of concept to validate that the tool suits your application's infrastructure and architecture before introducing it into your organization. Moving on to the next part, here are some examples you may consider before selecting any tool.
First, tools for unit testing differ from UI testing tools; your choice depends on your objective for using automated tools and on the test level. For instance, suppose GUI elements are visible to users but cannot be captured using XPath or CSS selectors when performing UI end-to-end testing for web or mobile applications: the automated test fails even though the application works. This scenario is known as a false positive. In contrast, when an element's locator is interactable by automated tests without the element being visible to end-users, the test passes while users still hit the problem, causing a false negative.
Remember, as mentioned in the previous chapter, these are the top 3 challenges of UI test automation that many teams face: applications change too frequently, inconsistent error handling within the same system, and cross-browser testing. To deal with these challenges efficiently, your team should decide on standard ways of using, managing, storing, and maintaining the tool and the test assets.
When the UI changes too frequently, you want a self-healing feature like the one Katalon Studio provides. Self-healing capability offers a host of benefits for testers and your team: it takes less time and effort to keep all the functional tests running smoothly and avoids interrupting executions.
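Katalon's actual self-healing is more sophisticated, but the underlying idea, falling back to alternative locators when the primary one no longer matches, can be sketched as follows. The fake DOM and the locator strings are invented for illustration:

```python
def find_element(dom, locators):
    """Try each locator in priority order; 'heal' by falling back when the
    primary locator no longer matches anything. `dom` maps locator -> element."""
    for locator in locators:
        if locator in dom:
            return dom[locator], locator
    raise LookupError("No locator matched; manual repair needed")

# After a UI change, the id-based locator broke, but the CSS fallback still works.
dom = {"css=button.submit": "<button>", "xpath=//form/button": "<button>"}
locators = ["id=submit-btn", "css=button.submit", "xpath=//form/button"]

element, used = find_element(dom, locators)
print(used)  # css=button.submit
```

A real implementation would also record which fallback succeeded, so the test asset can be updated to make the healed locator the new primary one.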
Another important consideration is choosing powerful tools with sufficient features. A tool may have a vast feature set while your team only needs part of it. The more complicated the tool, the slower the testing process becomes, and the longer the training takes. Hence, your team should try to reduce the feature set or remove unwanted parts from the toolbar, select a license model that meets your needs, or look for alternative tools more focused on your required functionality.
Finally, watch out for incompatibility between environments and platforms: no test automation tool works on every environment or platform. Implement automated tests to maximize tool independence, thereby minimizing the cost of using multiple tools.
This chapter showed you key considerations for free and paid automated testing tools. Regardless of the type of tool, you must carefully investigate the total cost of ownership throughout the testing tool's life cycle by performing a cost-benefit analysis. After choosing your tool, I believe you're ready to build your own test automation solution. Check out chapter 5 on the generic test automation architecture; it might give you some ideas to start.
Chapter 5 - Generic Test Automation Architecture
Glad to see you again! In the last chapter, we helped you differentiate between free and paid test automation tools, with each type having unique features for small to large software development teams to succeed in their projects. After selecting your tool, it's now time to understand the test automation structure to organize test cases effectively. This is very helpful if you want to build your own test automation tool.
A test automation engineer's role spans designing, developing, implementing, and maintaining test automation solutions. In this chapter, you'll explore the generic test automation architecture along with its four layers: test generation, test definition, test execution, and test adaptation. Let's examine the first layer of the architecture, test generation, where you design, create, or generate automated test cases.
You can script test cases in any programming language, such as Java or C#, in an integrated development environment such as VS Code or IntelliJ, but the relevant coding knowledge is needed. That said, what about manual testers who lack coding skills? In this case, your test generation layer should have a user interface to support them, so you can easily drag and drop or select test steps, actions, etc. to design a test case with a few clicks.
The test generation layer can also automatically generate automated tests by transferring manual steps into a script. In addition, with tools like Katalon Studio, the test generation layer can even record manual steps on the application under test to generate test scripts automatically. So, after preparing your automated test cases, how will you manage them and their test data efficiently? Suppose you want to execute a test case with a wide range of test data. Should your test cases be put into a specific test suite, such as a confirmation or regression suite?
Moving on to the test definition layer: this is where the test automation solution supports the definition and implementation of test cases and test suites. Usually, it keeps the test definitions separate from the application under test at the source-code level.
In fact, teams often ignore the importance of the test definition layer when they don't yet have as many test cases as they expect. Beyond that, the layer provides the means to handle both high-level and low-level tests, assigning different test data to the same test case, also known as data-driven testing. It also handles the test procedures and test library components needed to verify a large volume of data combinations for each test case.
Moving on, the next layer is the test execution layer, which supports executing test cases and logging results automatically or semi-automatically. In this context, what do we mean by "test logging"? The term means that the layer records the pass or fail status of each run in a history log. This layer is especially useful for running automated regression tests on CI/CD tools, getting the results directly from those tools, and then transferring them to other reporting systems.
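The responsibilities of this layer can be sketched as a tiny runner that executes tests and logs a timestamped pass/fail history. This is illustrative only, not a real execution layer; the sample tests are invented:

```python
import datetime

def run_suite(tests):
    """Execute each test callable and log its pass/fail status with a
    timestamp, the way a test execution layer feeds results to reporting."""
    log = []
    for name, test in tests:
        try:
            test()
            status = "PASS"
        except AssertionError:
            status = "FAIL"
        log.append((datetime.datetime.now().isoformat(), name, status))
    return log

def passing_test():
    assert 1 + 1 == 2

def failing_test():
    assert 1 + 1 == 3

results = run_suite([("passing_test", passing_test), ("failing_test", failing_test)])
print([(name, status) for _, name, status in results])
# [('passing_test', 'PASS'), ('failing_test', 'FAIL')]
```

A real execution layer would additionally capture screenshots, stack traces, and environment details, and push the log to a reporting system such as TestOps.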
Last but not least, relating to the connection between the application under test and the test automation system, the last layer, test adaptation, provides the necessary code to adapt the automated test scripts to the various components or interfaces of the application under test. It also provides different adaptors for connecting to the application under test via APIs, protocols, or devices. A notable example here is TestOps: it includes the interface between test management and the test adaptation layer, which handles the selection of appropriate adaptors for the chosen test configuration.
To conclude, a test automation solution is a software system, so it can be implemented like any other. You can use any software engineering approach, technology, or tool. Furthermore, an appropriate team structure and programming skills are needed.
It is important to understand that the generic test automation architecture has four layers: test generation, test definition, test execution, and test adaptation. You may not need all of them. For example, if your team is just about to introduce test automation, you might combine test execution and test adaptation into one layer.
Chapter 6 - Approaches for Automating Test Cases
Hi there and welcome back! In the last chapter, we discussed the generic test automation architecture and took a deep dive into its four layers. We all know that automated testing shortens your development cycles, avoids cumbersome repetitive tasks, and helps improve software quality, but how do you get started? You'll find your own answer in this chapter. Here, we will walk you through different test automation scripting approaches and the strategy for choosing the best-fit one for your team.
Before jumping into those approaches and techniques, let's get back to basics. Scripting an automated test, by definition, means translating a test case into a sequence of actions executed against a system under test. To be more specific, this sequence of actions can be documented in a test procedure and implemented in a test script. Besides, automated test cases also define test data for the interaction with the system under test, including verification steps to ensure the results are as expected.
There are different approaches that will be applied to different contexts. For example:
- The test automation engineer implements test cases directly into automated test scripts. This option is the least recommended, as it lacks abstraction and increases the maintenance load.
- The test automation engineer designs test procedures and transforms them into automated test scripts. This option has abstraction but lacks automation to generate the test scripts.
- The test automation engineer uses a tool to translate test procedures into automated test scripts. This option combines both abstraction and automated script generation.
- The test automation engineer uses a tool that generates automated test procedures and/or translates the test scripts directly from models. This option has the highest degree of automation.
Moving on, I will quickly walk you through the following approaches: first, the capture/playback approach; second, the structured scripting approach; third, data-driven development; fourth, keyword-driven development; and last but not least, model-based testing.
For the capture/playback approach, tools are used to capture interactions with the application under test while a test case is performed manually. A captured script is a linear representation, with specific data and actions as part of each script, so you need to duplicate the steps to create new scripts. This approach has several benefits: it can be used for AUTs at the GUI and/or API level, and it is initially easy to set up and use. However, it also comes with drawbacks: implementation of the test scripts can only start when the AUT is available, and the captured scripts are hard to maintain.
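To see why captured scripts are hard to maintain, consider two "recorded" login scripts sketched as flat step lists. The selectors and URL are invented; the point is that the whole structure is duplicated and only the data differs:

```python
# A "recorded" script: a flat list of steps with the data baked in.
recorded_login_alice = [
    ("open", "https://example.test/login", None),  # hypothetical URL
    ("type", "#username", "alice"),
    ("type", "#password", "secret1"),
    ("click", "#submit", None),
]

# Testing a second user means re-capturing almost the same script.
recorded_login_bob = [
    ("open", "https://example.test/login", None),
    ("type", "#username", "bob"),
    ("type", "#password", "secret2"),
    ("click", "#submit", None),
]

# Count steps whose action and target are identical across both scripts;
# the structure repeats entirely, only the typed values change.
shared_steps = sum(a[0] == b[0] and a[1] == b[1]
                   for a, b in zip(recorded_login_alice, recorded_login_bob))
print(shared_steps)  # 4
```

When the login page changes, every captured copy of this sequence must be re-recorded or edited by hand, which is exactly the maintenance problem the next approaches address.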
Second, in contrast to the linear scripting approach, the structured scripting technique introduces script libraries. The pros include a significant reduction in the maintenance required and in the cost of automating new tests, largely attained through the reuse of scripts. The cons include the initial effort needed to create the shared scripts and the programming skills required to create them.
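A minimal sketch of the idea: common interactions move into shared library functions that individual test scripts call. The "driver" here is just a list recording the actions that would be sent to a browser; it is a stand-in, not a real UI driver.

```python
# Shared script library: common interactions written once and reused.
def login(driver, username, password):
    """Reusable login routine; `driver` is a stand-in for a real UI driver."""
    driver.append(("type", "#username", username))
    driver.append(("type", "#password", password))
    driver.append(("click", "#submit", None))

def logout(driver):
    driver.append(("click", "#logout", None))

# Individual test scripts now call the library instead of repeating steps.
driver = []  # records the actions that would be issued to the browser
login(driver, "alice", "secret1")
logout(driver)
print(len(driver))  # 4
```

If the login page changes, only the shared login function is updated, and every script that calls it is fixed at once.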
Next is the data-driven development technique, which builds on the structured scripting technique: the inputs are extracted from the scripts and put into one or more separate files. The pros: the cost of adding new automated tests can be significantly reduced; it gives deeper testing in a specific area and may increase test coverage; and new 'automated' tests can be specified simply by populating one or more data files. The cons: the data files must be managed and kept readable, and negative tests, which are a combination of test procedures and test data, may be missed.
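A minimal data-driven sketch: the control script stays fixed while the test data lives in a separate CSV file (here an in-memory one). The login_outcome function is an invented stand-in for the system under test.

```python
import csv
import io

# Test data lives in a separate file; an in-memory CSV stands in for it here.
data_file = io.StringIO(
    "username,password,expected\n"
    "alice,secret1,welcome\n"
    "bob,wrong,error\n"
)

def login_outcome(username, password):
    """Stand-in for the system under test: only alice/secret1 succeeds."""
    return "welcome" if (username, password) == ("alice", "secret1") else "error"

# One control script runs once per data row and compares actual vs expected.
results = []
for row in csv.DictReader(data_file):
    actual = login_outcome(row["username"], row["password"])
    results.append(actual == row["expected"])

print(results)  # [True, True]
```

Adding a new test is now just adding a CSV row; no script changes are needed.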
The keyword-driven scripting technique builds on the data-driven scripting technique. There are two main differences: first, the data files are now called 'test definition' files or something similar; second, there is only one control script.
The pros: the cost of adding new automated tests can be significantly reduced; 'automated' tests can be specified simply by describing them using the keywords and associated data; and the keywords can offer abstraction from the complexities of the interfaces of the System Under Test. On the other hand, the cons: implementing the keywords remains a big task for test automation engineers, and care needs to be taken to ensure that the correct keywords are implemented.
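The two differences above can be sketched in a few lines of Python. The test definition is plain data that a non-programmer could write, the keyword implementations are maintained by automation engineers, and a single control script dispatches each keyword. All names here are illustrative assumptions, not part of the course material.

```python
# Test definition file content: rows of (keyword, arguments).
TEST_DEFINITION = [
    ("open_form", ()),
    ("enter", ("username", "alice")),
    ("enter", ("password", "s3cret")),
    ("submit", ()),
    ("verify_logged_in", ()),
]

class Keywords:
    """Keyword implementations, maintained by test automation engineers."""
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def open_form(self):
        self.fields.clear()
        self.logged_in = False

    def enter(self, field, value):
        self.fields[field] = value

    def submit(self):
        # Simulated rule for the hypothetical AUT.
        self.logged_in = self.fields.get("password") == "s3cret"

    def verify_logged_in(self):
        assert self.logged_in, "expected to be logged in"

# The single control script: reads the definition and dispatches keywords.
def run(definition):
    kw = Keywords()
    for keyword, args in definition:
        getattr(kw, keyword)(*args)

run(TEST_DEFINITION)
```

A new test is just another list of keyword rows; no new control logic is needed.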
Finally, model-based testing refers to the automated generation of test cases. Different test generation methods can be used to derive tests for any of the scripting frameworks discussed before. The pros of this technique: through abstraction, model-based testing lets you concentrate on the essence of testing, and in case of changes in the requirements, only the test model has to be adapted. The cons: modeling expertise is required to apply a model-based testing approach effectively, and model-based approaches require adjustments in the test processes.
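As a toy illustration of deriving tests from a model, here is a hypothetical two-state session model and one simple generation method (enumerating all action sequences up to a given depth); real model-based tools use far more sophisticated strategies.

```python
# A minimal model: states mapped to {action: next_state} (illustrative only).
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"view_profile": "logged_in", "logout": "logged_out"},
}

def generate_tests(model, start, depth):
    """Enumerate every action sequence of length 1..depth through the model.

    This is one naive generation method among many; changing MODEL
    regenerates the whole test suite automatically.
    """
    tests = []

    def walk(state, path):
        if path:
            tests.append(tuple(path))
        if len(path) == depth:
            return
        for action, nxt in model[state].items():
            walk(nxt, path + [action])

    walk(start, [])
    return tests

cases = generate_tests(MODEL, "logged_out", 2)
# Yields ('login',), ('login', 'view_profile'), ('login', 'logout')
```

When a requirement changes, only `MODEL` is edited and the test sequences are rederived, which is exactly the maintenance benefit claimed above.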
Remember, the choice of approach for automating test cases depends heavily on the context of the project. We walked you through five approaches to scripting your tests. Depending on your project's context, such as resources, capabilities, budget, and time, you will select the most suitable one.
For example, in the early phase of introducing test automation to your project, when you implement test cases directly as automated test scripts, the capture/playback approach is the most suitable. More abstract approaches such as the data-driven and keyword-driven approaches are better for larger projects where many automated tests need to be generated.
Chapter 7 - Test Automation Reporting and Metrics
In the last chapter, we covered the important scripting approaches that are commonly used for test automation. In this chapter, we'll explore how test automation metrics help managers and engineering teams track project status toward their goals, as well as monitor the impact of changes made to the test automation solution.
The most common test automation metrics can be divided into two groups: external and internal. External metrics comprise automation benefits, the effort to build automated tests, and the effort to maintain automated tests; their purpose is to measure the impact of the test automation solution on other activities. Internal metrics, including tool scripting metrics and speed and efficiency, measure the effectiveness and efficiency of the test automation framework in fulfilling its objectives.
To recall what the test automation solution is, look back at chapter 5, which covers the four layers of the generic test automation architecture. On top of these metrics, there are trend metrics, which give an overview of their historical performance.
Now, let's look at the first external metric, called automation benefits. Any measure of benefit will depend on the objectives of the test automation solution. Typically, these include saving time and human resources, increasing the frequency of test execution or test coverage, and other advantages such as increased repeatability and fewer manual errors.
It is particularly important to measure and report the benefits of a test automation solution. Possible measures of the team's benefit include the number of manual test hours saved and the reduction in time to perform regression testing. The costs, in terms of money, time, and the number of people involved over a given period, are easily observable. People who do not work in testing will also pay close attention to the overall cost and achieved benefits to form an impression of test automation productivity.
Now, let's look closer at the second external metric, the effort to automate tests, which is one of the key costs associated with test automation. The implementation cost correlates positively with the size of the test cases: the more test steps a test contains, the higher the cost. While the cost to implement a specific automated test depends largely on the test itself, other factors, such as the scripting approach used (covered in chapter 6), familiarity with the test tool, the environment, and the skill level of the test automation engineer, also have an impact.
Because larger or more complicated tests typically take longer to automate, the build cost for test automation may be computed from an average build time. This may be further refined by comparing the average cost of manual and automated execution for a specific set of tests, such as those targeting the same function or those at a given test level. For example, a set of manual tests might take twice the effort of their automated equivalents.
Every piece of software needs maintenance once there is a new release. The last external metric, the effort to maintain automated tests, is therefore vital to keep automated tests in sync with the application under test and to highlight when steps need to be taken to reduce the maintenance effort. Maintenance effort can be expressed as the total across all automated tests, or as the average time needed to update an automated test for a new version. The effort required to maintain test cases should correspond with the scale of the changes in the application under test.
The first internal metric is tool scripting. Companies define different scripting standards, and these metrics measure the extent to which those standards are being followed. Many metrics can be used to monitor automation script development, most of which are similar to source code metrics for the application under test. For example, lines of code and cyclomatic complexity can be used to highlight overly large or complex scripts.
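As a rough illustration of those two metrics, the sketch below counts non-blank lines of code and approximates cyclomatic complexity as one plus the number of decision keywords. Real metric tools parse the code properly; this keyword count is only an assumption-laden approximation for teaching purposes.

```python
import re

def script_metrics(source):
    """Very rough script metrics: non-blank lines of code (LOC) and an
    approximate cyclomatic complexity (1 + number of decision points).

    Counting keywords with a regex is a crude heuristic, not how real
    metric tools work.
    """
    loc = sum(1 for line in source.splitlines() if line.strip())
    decisions = len(re.findall(r"\b(?:if|elif|for|while|and|or|except)\b",
                               source))
    return {"loc": loc, "complexity": 1 + decisions}

sample = (
    "def f(x):\n"
    "    if x > 0 and x < 10:\n"
    "        return x\n"
    "    return 0\n"
)
m = script_metrics(sample)
# m == {"loc": 4, "complexity": 3}
```

Scripts whose LOC or complexity drifts well above the suite's average are candidates for splitting or refactoring.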
Let's move on to the second internal metric: the speed and efficiency of test automation components. Differences in how long it takes to perform the same test steps in the same environment can indicate problems in the application under test. In other words, investigation is needed if the application under test no longer performs the same functionality in the same elapsed time.
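A minimal sketch of that idea: compare each test's latest elapsed time against its historical average and flag large deviations. The test names, timings, and 25% tolerance below are all illustrative assumptions.

```python
def flag_timing_anomalies(baseline, current, tolerance=0.25):
    """Flag tests whose current elapsed time deviates from their historical
    average by more than `tolerance` (a fraction of the average)."""
    flagged = []
    for test, elapsed in current.items():
        avg = sum(baseline[test]) / len(baseline[test])
        if abs(elapsed - avg) > tolerance * avg:
            flagged.append(test)
    return sorted(flagged)

# Hypothetical timing data (seconds) from previous runs vs. the latest run.
history = {"login_test": [1.0, 1.1, 0.9], "search_test": [2.0, 2.1]}
latest = {"login_test": 2.2, "search_test": 2.05}
suspects = flag_timing_anomalies(history, latest)  # ["login_test"]
```

A flagged test does not prove a defect; it marks where investigation should start.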
On top of the five metrics we have already discussed, trend metrics observe how measures change over time rather than at a single point. The cost of measuring should be kept as low as possible, which can often be achieved by automating collection and reporting.
In conclusion, this chapter gave you an overview of how test automation metrics measure the performance of automated testing processes. As with any metrics, users of test automation metrics should set their goal as building better-quality software with less effort, and delivering capability faster and more affordably.
Chapter 8 - Transitioning from Manual Testing into Automated Testing
Effective testing is key to a successful project. Yet managing and running tests manually takes time and money, so automated testing came into play with the objective of maximizing efficiency and delivering a high-quality product in a cost-effective manner. However, the process of switching from manual to automated testing is obviously NOT that simple.
In this chapter, we'll show you how that process works and which elements are involved, including factors to consider when implementing automated regression testing, implementing automation of Confirmation Testing as well as implementing automation within New Feature Testing.
Just for more context: although development teams may at first think that the cost of investing in test automation is higher than that of the traditional technique, in the long run the aggregate cost turns out to be much lower. Looking at the graph, you can see the orange line, which represents the cost of manual testing, continuing to rise as time goes on.
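The cumulative-cost comparison behind such a graph can be sketched with a break-even calculation. All the figures below (4 hours per manual run, 30 hours to build the automation, 0.5 hours per automated run) are hypothetical assumptions, not data from the course.

```python
def break_even_runs(manual_cost_per_run, build_cost, auto_cost_per_run):
    """Return the first run count at which cumulative automated cost drops
    below cumulative manual cost, or None if automation never pays off."""
    if auto_cost_per_run >= manual_cost_per_run:
        return None  # the per-run saving is zero or negative
    runs = 1
    while build_cost + runs * auto_cost_per_run >= runs * manual_cost_per_run:
        runs += 1
    return runs

# Hypothetical figures, in hours of effort.
n = break_even_runs(manual_cost_per_run=4.0,
                    build_cost=30.0,
                    auto_cost_per_run=0.5)
# n == 9: from the 9th run onward, automation is cheaper in total.
```

Before the break-even point the automated line sits above the manual one (the up-front build cost); after it, the manual line keeps climbing while the automated line flattens.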
Now, let's move to the first consideration: implementing automated tests for regression testing. Regression testing provides a great opportunity for automation, as it is the type of testing that ensures developed and tested software still performs appropriately, without unexpected side effects, after modifications.
In developing steps to prepare to automate regression tests, a number of questions must be asked: how frequently should the tests be run, do tests share data, and what preconditions are required before test execution? Each of these questions is explained in more detail in the following sections.
Tests that are executed often as part of regression testing are the best candidates for automation. Automation, like manual testing, requires effort to conduct, so higher-frequency tests should be automated first, while the remaining low-frequency regression tests can still be run manually.
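That triage can be expressed as a one-line filter over run-frequency data. The test names, counts, and the threshold of three runs per release below are illustrative assumptions only.

```python
def automation_candidates(run_counts, min_runs_per_release=3):
    """Split regression tests into automation candidates (run often) and
    tests that can reasonably stay manual (run rarely)."""
    automate = sorted(t for t, n in run_counts.items()
                      if n >= min_runs_per_release)
    keep_manual = sorted(t for t, n in run_counts.items()
                         if n < min_runs_per_release)
    return automate, keep_manual

# Hypothetical run counts per release cycle.
counts = {"smoke_login": 12, "yearly_audit_report": 1, "checkout_flow": 6}
automate, manual = automation_candidates(counts)
# automate == ["checkout_flow", "smoke_login"]; manual == ["yearly_audit_report"]
```

The threshold itself is a project decision; it should roughly reflect the point where automation's build cost is recovered by saved manual runs.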
Tests often share data. This occurs when tests use the same data records to exercise different functionality of the application under test. For example, test case 'A' verifies an employee's available vacation time, while test case 'B' refers to courses taken as part of that employee's career development goals. Each test case uses the same employee information but verifies different aspects.
In a manual test environment, the employee data would typically be duplicated across many test cases. For automated tests, however, shared data should be stored in, and accessed from, a single source to avoid introducing errors.
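The employee example above can be sketched as follows; the record and test names are hypothetical. Both tests read the same single source, so a correction made there propagates everywhere instead of drifting across duplicated copies.

```python
# Single source of shared test data (hypothetical employee record).
EMPLOYEES = {
    "E042": {
        "name": "Dana",
        "vacation_days": 12,
        "courses": ["Testing 101"],
    },
}

# Test case 'A': verifies the employee's available vacation time.
def test_vacation_balance(db=EMPLOYEES):
    assert db["E042"]["vacation_days"] >= 0

# Test case 'B': verifies the same employee's career development courses.
def test_career_courses(db=EMPLOYEES):
    assert "Testing 101" in db["E042"]["courses"]

test_vacation_balance()
test_career_courses()
```

In a real suite the single source would typically be a fixture file or seeded database rather than an in-memory dict, but the principle is the same.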
Next, let's walk through test preconditions. Most of the time, a test cannot be executed properly without first setting initial conditions. These conditions may include selecting the correct database and test data set, or setting initial values or parameters. Many of the initialization steps required to establish a test's preconditions can be automated, allowing for a more reliable and independent solution. As regression tests are converted to automation, these preconditions also need to become part of the automation process.
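A minimal sketch of automated precondition setup: select a database, load a data set, and initialize parameters before the test runs. Every name here (`test_db`, the seed record, the retry/timeout values) is a hypothetical stand-in for real environment plumbing.

```python
def establish_preconditions(config):
    """Build the environment a test assumes: database selection, seeded
    test data, and initial parameter values."""
    return {
        "database": config["database"],              # select correct database
        "data": dict(config["seed_data"]),           # load the test data set
        "params": {"retries": 3, "timeout_s": 30},   # initial values
    }

def run_regression_test(env):
    # The test relies on its preconditions instead of setting them inline.
    return env["database"] == "test_db" and "E042" in env["data"]

config = {"database": "test_db",
          "seed_data": {"E042": {"name": "Dana"}}}
env = establish_preconditions(config)
assert run_regression_test(env)
```

Because setup lives in one function, every automated regression test starts from the same known state, which is exactly what makes the tests independent and repeatable.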
Moving forward to the next segment, we'll go deep into which factors to consider when implementing automation of confirmation testing. Confirmation testing in particular is performed following a code fix that addresses a reported defect.
In confirmation testing, a tester typically follows the necessary steps to ensure that the defect no longer exists and to prevent it from reappearing later. Defects have a way of reintroducing themselves into subsequent releases, so automation is needed here to reduce the repetitive execution time of confirmation testing. Keep in mind that tracking automated confirmation tests also allows reporting on how many attempts and cycles were expended in resolving defects.
Automated confirmation tests can be incorporated into a standard automated regression suite, or subsumed into existing automated tests. With either approach, the value of automating defect confirmation testing still holds. Along with confirmation testing, regression testing is necessary to ensure new defects have not been introduced as a side effect of the defect fix. Impact analysis may be required to determine the appropriate scope of regression testing.
In general, it is easier to automate test cases for new functionality than for existing functionality. Test engineers, based on their knowledge, can explain to developers and architects exactly which factors need to be considered when implementing automation within new feature testing. Don't worry, we cover this topic for you right away.
As new features are introduced into an application under test, testers are required to develop new tests against these new features and their corresponding requirements. The current test automation solution should be evaluated to confirm that it still meets the needs of the new features. This investigation includes, but is not limited to, the existing approach and test tools used, as well as third-party development tools.
Changes to the test automation solution, if any, must also be evaluated so they do not affect the performance of the existing solution. If a new feature is implemented with, for example, a different class of object, new testware components may need to be added. Compatibility with existing test tools must also be evaluated and, where necessary, alternative solutions identified. For example, with a keyword-driven approach, it may be necessary to develop new keywords or modify existing ones to accommodate the new functionality. Last but not least, one needs to determine whether the existing test automation solution will continue to meet current framework standards. Testers may have to ask themselves: are the implementation techniques still valid, or is a new architecture required, and can this be done by extending current capabilities?
In summary, this chapter introduces the process of switching from manual testing to automated testing environments. Specifically, it lists out in detail the factors to consider when implementing automated regression testing, Confirmation Testing and New Feature Testing.
Chapter 9 - Summary: Test Automation Foundations
In summary, this foundation course introduces various aspects of test automation that are fundamental to implementing automation in testing projects. After these 8 chapters, you should know how to compose automation objectives and requirements, choose automation approaches and tools based on those criteria, and draft fool-proof implementation plans and testing reports. Should you want to learn more about other topics in testing, join our other courses via the link below. We look forward to seeing you shine in our Katalon Academy Program.