This course discusses the basics and best practices to help your team ensure a smooth transition from manual to automated testing. The course’s materials are essential to know before choosing any tools or frameworks to apply test automation.

In five chapters, you will learn about:

Chapter 1 - Considerations for Essential Types of Automated Testing
Chapter 2 - Approaches and Methods to Automate Test Cases
Chapter 3 - Automation Tool Evaluation and Selection
Chapter 4 - Test Automation Reporting and Metrics
Chapter 5 - Course Summary

Chapter 1 - Considerations for Essential Types of Automated Testing
Effective testing is key to a successful project, yet managing and running tests manually takes time and money. Automated testing came into play to maximize efficiency and deliver a high-quality product in a cost-effective manner. However, the process of switching from manual to automated testing is clearly not that simple.
In this chapter, we'll show you how that process works and which elements are involved, including the factors to consider when implementing automated regression testing, automated confirmation testing, and automation within new feature testing.
For more context: although development teams may initially think that the cost of investing in test automation is higher than the traditional technique, in the long run the aggregate cost turns out to be much lower. Looking at the graph, the orange line, which represents the cost of manual testing, keeps rising in positive correlation with time.
Now, let's move to the first consideration that is implementing automated tests for regression testing. Regression testing provides a great opportunity to use automation, as it refers to the type of testing that ensures developed and tested software still appropriately performs without any unexpected side effects after modifications.
In developing steps to prepare to automate regression tests, a number of questions must be asked: how frequently should the tests be run, do tests share data, and what pre-conditions are required before test execution. Each of these questions is explained more in detail in the following sections.
Tests that are executed often as part of regression testing are the best candidates for automation. Automation, like manual testing, requires a certain amount of effort to build, so higher-frequency tests should be automated first. Meanwhile, the remaining low-frequency regression tests can still be run manually.
Tests often share data. This occurs when tests use the same record of data to exercise different parts of the application under test's functionality. An example might be test case "A", which verifies an employee's available vacation time, while test case "B" refers to courses taken as part of that employee's career development goals. Each test case uses the same employee information but verifies different aspects.
In a manual test environment, the employee data would typically be duplicated many times across test cases. For automated tests, however, shared data should be stored in and accessed from a single source to avoid introducing errors.
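As a sketch of that idea, here is a minimal Python example in which two tests read the same record from one shared source instead of duplicating it. The employee record and its field names are invented for illustration; in a real suite the record would live in a version-controlled fixture file rather than an inline string:

```python
import json

# Hypothetical single source of truth for the shared employee data.
EMPLOYEE_JSON = (
    '{"id": 1001, "name": "Alex", '
    '"vacation_days": 12, "courses": ["Leadership 101"]}'
)

def load_employee():
    """Load the shared employee record from one place, so every test
    sees the same data and any update happens in a single file."""
    return json.loads(EMPLOYEE_JSON)

def test_vacation_balance():
    # Test case "A": verify the employee's available vacation time.
    emp = load_employee()
    assert emp["vacation_days"] >= 0

def test_career_courses():
    # Test case "B": verify courses in the career development plan.
    emp = load_employee()
    assert "Leadership 101" in emp["courses"]

if __name__ == "__main__":
    test_vacation_balance()
    test_career_courses()
    print("both tests passed against the single shared record")
```

Both tests consume the same record but verify different aspects, so a change to the employee data is made once rather than in every test.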
Next, let's walk through test preconditions. Most of the time, a test cannot be executed properly without first setting its initial conditions. These conditions may include selecting the correct database or test data set, or setting up initial values or parameters. Many of the initialization steps required to establish a test's preconditions can be automated, allowing for a more reliable and independent solution. As regression tests are converted to automation, these preconditions also need to become part of the automation process.
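To illustrate, here is a small Python sketch of automated preconditions: each test calls a setup helper that selects the database, loads the test data set, and initializes parameters before the steps run. The database name, data set, and parameters are all hypothetical:

```python
def establish_preconditions(db_name, data_set):
    """Automate the setup a manual tester would otherwise do by hand:
    pick the database, load the data set, set initial parameters."""
    return {
        "database": db_name,     # select the correct database
        "data": list(data_set),  # load the required test data set
        "retries": 3,            # an initial parameter value
    }

def test_report_generation():
    # Preconditions are established by the automation itself,
    # making the test repeatable and independent.
    env = establish_preconditions("reporting_test_db", ["row1", "row2"])
    # ... the actual test steps would run against env here ...
    assert env["database"] == "reporting_test_db"
    assert len(env["data"]) == 2

if __name__ == "__main__":
    test_report_generation()
    print("preconditions established and verified")
```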
Moving on to the next segment, we'll go deeper into the factors to consider when implementing automation of confirmation testing. Confirmation testing is performed to follow up on a code fix that addresses a reported defect.
In confirmation testing, a tester typically follows the steps needed to ensure that the defect no longer exists and to prevent it from reappearing later. Defects have a way of reintroducing themselves into subsequent releases, so automation is needed here to reduce the repetitive execution time of confirmation testing. Keep in mind that tracking automated confirmation tests also lets you report how many runs and cycles were spent resolving defects.
Automated confirmation tests can be incorporated into a standard automated regression suite, or subsumed into existing automated tests. With either approach, the value of automating defect confirmation testing still holds. Along with confirmation testing, regression testing is necessary to ensure new defects have not been introduced as a side effect of the defect fix. Impact analysis may be required to determine the appropriate scope of regression testing.
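One way to fold confirmation tests into a regression suite is to tag each one with the defect it confirms, so it runs with the suite but can still be reported separately. The decorator, defect ID, and password check below are illustrative sketches, not any specific tool's API:

```python
TESTS = []  # registry shared by the whole suite

def confirms(defect_id):
    """Decorator: register a test and record which defect it confirms."""
    def wrap(fn):
        TESTS.append((fn, defect_id))
        return fn
    return wrap

def validate_password(pw):
    # Part of the (stubbed) system under test.
    return bool(pw)

@confirms("BUG-1234")  # hypothetical defect ID
def test_login_rejects_empty_password():
    assert validate_password("") is False

def run_suite():
    """Run every registered test; count confirmation runs per defect."""
    results = {}
    for fn, defect in TESTS:
        fn()  # raises AssertionError if the defect has crept back in
        results[defect] = results.get(defect, 0) + 1
    return results

if __name__ == "__main__":
    print(run_suite())
```

Because the tag survives into the results, the suite can report how many times each defect's confirmation test has been executed, which supports the tracking benefit described above.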
In general, it is easier to automate test cases for new functionality than for existing functionality. Test engineers, based on their knowledge, can explain to developers and architects exactly which factors need to be considered when implementing automation within new feature testing. Don't worry, we cover this topic right away.
As new features are introduced into an application under test, testers are required to develop new tests against these new features and their corresponding requirements. The current test automation solution should be evaluated to confirm that it still meets the needs of the new features. This investigation includes, but is not limited to, the existing approach and test tools used, as well as third-party development tools.
Changes to the test automation solution, if any, must also be evaluated so they do not affect the performance of existing solutions. If a new feature is implemented with, for example, a different class of object, testware components may need additions. In addition, compatibility with existing test tools must be evaluated or, where necessary, alternative solutions identified. For example, if using a keyword-driven approach, it is necessary to develop new keywords or modify existing ones to accommodate the new functionality. Last but not least, one needs to determine whether the existing test automation solution will continue to meet current framework standards. Testers may now have to ask themselves: are the implementation techniques still valid, or is a new architecture required, and can this be done by extending current capabilities?
In summary, this chapter introduces the process of switching from a manual to an automated testing environment. Specifically, it details the factors to consider when implementing automated regression testing, confirmation testing, and new feature testing.
Chapter 2 - Approaches and Methods to Automate Test Cases
Hi there and welcome back! We all know that automated testing shortens your development cycles, avoids cumbersome repetitive tasks, and helps improve software quality, but how do you get started? You'll find your own answer in this chapter. Here, we will walk you through different test automation scripting approaches and the strategy for choosing the best fit for your team.
Before jumping into these approaches and techniques, let's get back to basics. Scripting an automated test, by definition, is the process in which a test case is translated into a sequence of actions executed against a system under test. More specifically, this sequence of actions can be documented in a test procedure and implemented in a test script. Automated test cases also define test data for the interaction with the system under test, including verification steps to ensure the results are as expected.
There are different approaches that apply to different contexts. For example:
- The Test Automation Environment implements test cases directly into automated test scripts. This option is the least recommended as it lacks abstraction and increases the maintenance load.
- The Test Automation Environment designs test procedures, and transforms them into automated test scripts. This option has abstraction but lacks automation to generate the test scripts.
- The Test Automation Environment uses a tool to translate test procedures into automated test scripts. This option combines both abstraction and automated script generation.
- The Test Automation Environment uses a tool that generates automated test procedures and/or translates the test scripts directly from models. This option has the highest degree of automation.
Moving on, I will quickly walk you through the following approaches. First, the capture/playback approach. Second, the structured scripting approach. Third, data-driven development. Fourth, keyword-driven development. And last but not least, model-based testing.
With the capture/playback approach, tools are used to capture interactions with the application under test (AUT) while performing a manual test case. A captured script is a linear representation with specific data and actions as part of each script, so you need to duplicate the steps to create new scripts. This approach has several benefits: it can be used for AUTs at the GUI and/or API level, and it is initially easy to set up and use. However, it also comes with drawbacks: implementation of the test scripts can only start once the AUT is available, and the captured scripts are hard to maintain.
Second, the structured scripting approach: in contrast to the linear scripting approach, the structured scripting technique introduces script libraries. The pros include a significant reduction in the maintenance changes required and in the cost of automating new tests, largely attainable through the reuse of scripts. On the other side, the cons include the initial effort to create the shared scripts and the programming skills required to create them.
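A minimal Python sketch of structured scripting, with a stubbed application standing in for a real system under test: common steps live in a shared library of functions, and each test script is a short sequence of library calls. The library functions and credentials are invented for illustration:

```python
class AppStub:
    """Stand-in for the system under test."""
    def __init__(self):
        self.logged_in = False
        self.cart = []
    def login(self, user, password):
        self.logged_in = (user == "demo" and password == "secret")
    def add_to_cart(self, item):
        self.cart.append(item)

# --- shared script library (written once, reused by every test) ---
def login_as_demo(app):
    app.login("demo", "secret")
    assert app.logged_in, "login step failed"

def add_item(app, item):
    app.add_to_cart(item)

# --- an individual test script is now just library calls ---
def test_add_one_item():
    app = AppStub()
    login_as_demo(app)
    add_item(app, "book")
    assert app.cart == ["book"]

if __name__ == "__main__":
    test_add_one_item()
    print("structured script passed")
```

If the login flow changes, only `login_as_demo` is updated, which is where the maintenance savings of this approach come from.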
Next is the data-driven development technique. The data-driven scripting technique builds on the structured scripting technique: the inputs are extracted from the scripts and put into one or more separate files. The pros: the cost of adding new automated tests can be significantly reduced; it enables deeper testing in a specific area and may increase test coverage; and automated tests can be specified simply by populating one or more data files. The cons: the data files must be managed and kept readable, and negative tests, which are a combination of test procedures and test data, may be missed.
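Here is a small, self-contained Python sketch of the data-driven idea: the test logic is written once, and the inputs plus expected results live in a separate CSV data set (inlined here so the example runs on its own). The interest calculation is just an illustrative system under test:

```python
import csv
import io

# In a real suite this would be an external file maintained by testers.
DATA_TEXT = (
    "principal,rate,expected_interest\n"
    "1000,0.05,50.0\n"
    "200,0.10,20.0\n"
)

def simple_interest(principal, rate):
    """Stand-in for the function under test."""
    return principal * rate

def run_data_driven_tests():
    """One test procedure, driven by however many data rows exist."""
    passed = 0
    for row in csv.DictReader(io.StringIO(DATA_TEXT)):
        got = simple_interest(float(row["principal"]), float(row["rate"]))
        assert abs(got - float(row["expected_interest"])) < 1e-9
        passed += 1
    return passed

if __name__ == "__main__":
    print(f"{run_data_driven_tests()} data rows passed")
```

Adding a new test is now just adding a row to the data file, with no new code.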
For the keyword-driven development process: the keyword-driven scripting technique builds on the data-driven scripting technique. There are two main differences. First, the data files are now called 'test definition' files or something similar. Second, there is only one control script.
The pros: the cost of adding new automated tests can be significantly reduced; automated tests can be specified simply by describing them using the keywords and associated data; and the keywords offer abstraction from the complexities of the system under test's interfaces. On the other hand, the cons: implementing the keywords remains a big task for test automation engineers, and care needs to be taken to ensure that the correct keywords are implemented.
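The two differences above can be sketched in a few lines of Python: a single control script reads a test definition made of keywords plus data and dispatches each keyword to its implementation. The keywords and the tiny counter "system under test" are invented for illustration:

```python
STATE = {"total": 0}  # stand-in for the system under test

# --- keyword implementations (the engineers' big task) ---
def kw_reset():
    STATE["total"] = 0

def kw_add(amount):
    STATE["total"] += int(amount)

def kw_verify_total(expected):
    assert STATE["total"] == int(expected), (
        f"expected {expected}, got {STATE['total']}")

KEYWORDS = {"reset": kw_reset, "add": kw_add, "verify_total": kw_verify_total}

# --- test definition file: each line is a keyword plus its data ---
TEST_DEFINITION = [
    ("reset",),
    ("add", "2"),
    ("add", "3"),
    ("verify_total", "5"),
]

def control_script(definition):
    """The single control script: look up each keyword and run it."""
    for keyword, *args in definition:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    control_script(TEST_DEFINITION)
    print("keyword-driven test passed")
```

Testers who know no programming can now write new tests purely in the test definition format, while the keyword implementations hide the interface details.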
Finally, model-based testing refers to the automated generation of test cases. Different test generation methods can be used to derive tests for any of the scripting frameworks discussed before. The pros of this technique: through abstraction, model-based testing lets you concentrate on the essence of testing, and when requirements change, only the test model has to be adapted. The cons: modeling expertise is required to run a model-based testing approach effectively, and model-based testing requires adjustments to the test processes.
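As a rough illustration of generating tests from a model, the Python sketch below enumerates action paths through a hypothetical two-state login model; each generated path would become one test case in whichever scripting framework you use:

```python
# Model of the system: state -> {action: next_state}. The states,
# actions, and path depth are illustrative assumptions.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_test_paths(start, depth):
    """Enumerate every action sequence of the given length from `start`;
    each sequence is one generated test case."""
    results = []
    def walk(state, path):
        if len(path) == depth:
            results.append(path)
            return
        for action, next_state in MODEL[state].items():
            walk(next_state, path + [action])
    walk(start, [])
    return results

if __name__ == "__main__":
    for path in generate_test_paths("logged_out", 2):
        print(" -> ".join(path))
```

When a requirement changes, only `MODEL` is edited and the test cases are regenerated, which is the maintenance advantage the technique promises.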
Remember, approach selection for automating test cases depends heavily on the context of the project. We walked you through five approaches to scripting your tests. Depending on your project's context, such as resources, capabilities, budget, and time, you will select the most suitable approach.
For example, in the early phase of introducing test automation to your project, when you implement test cases directly as automated test scripts, the capture/playback approach is the most suitable. More abstract approaches such as the data-driven and keyword-driven approaches are better for larger projects where many automated tests need to be generated.
Chapter 3 - Automation Tool Evaluation and Selection
Welcome back to Katalon courses! In this chapter, we will discuss what factors you should consider when evaluating and selecting the best-fit test automation tool for your team projects.
Now we'll walk you through the differences between free and paid tools, with some examples of how to choose the most suitable one for your team. There is a variety of issues your team must consider when selecting test tools. Among the top considerations, many teams base the decision on which type of tool would best serve their needs: a free tool or a paid solution.
Let's be honest, have you ever wondered why the tool selection stage is so important? Simply put, starting with the best-fit tool allows your team to reap its full potential throughout the whole project, all the way from the software development steps to long-term operation and maintenance. With the right tool, QA teams have an effective way to mitigate hidden risks, such as regression risks that may occur due to constant updates and maintenance. To ensure the success of testing projects, your team must carefully investigate the total cost of ownership throughout the expected life of the tool by performing a cost-benefit analysis. This topic is covered later in the ROI section.
Now we'll take a closer look at the two types of test automation tools, one of which will be the best match for your team. First, you have the option of a free, open-source tool. The most significant advantage is that these tools come with no expenses or license fees. With their proven flexibility and open capabilities, you can customize them and create a more tailored solution for your in-house framework.
Of course, intensive training in the relevant coding knowledge is a basic need for this type of tool. The frequency of new updates, such as new features or bug fixes, is another thing to consider when using free or open-source tools: without timely announcements and dedicated support from a tool vendor, your team has to keep an eye on all new updates and the latest features itself, especially breaking changes.
Commercial tools have the disadvantage of requiring paid licenses, although they sometimes offer free trials for a few weeks, or free versions for personal use only. Unlike free tools, commercial tools let testers start with many built-in functions, such as test design via a UI with no programming skills needed, and they call for less effort to maintain test assets.
Regarding new feature enhancements and support, paid licenses protect your test automation solution with a warranty plan: updates and defect fixes are released frequently and within a defined amount of time. For small testing teams with little-to-no programming skills, it may be better to start with a vendor-based tool instead of a free option, based on the pros and cons we already mentioned. Although teams have to pay for licenses, developing and maintaining test automation frameworks and test cases is much easier.
Regardless of whether you choose a free or paid tool, you should start with a proof of concept to validate that the tool suits your application's infrastructure and architecture before introducing it into your organization. Moving on to the next part, here are some examples you may consider before selecting any tool.
First, tools for unit testing differ from UI testing tools, depending on the objective of using automated tools and the test level. For instance, suppose GUI elements are visible but cannot be captured using XPath or CSS selectors when performing UI end-to-end testing for web or mobile applications; the test then fails even though the application works, which is known as a false positive. In contrast, when an element's locator is interactable by automated tests without being visible to end users, a real problem goes unreported, causing a false negative.
So remember, these are the top three challenges of UI test automation that many teams face: applications change too frequently, errors are handled in different ways, and cross-browser testing. To deal with these challenges efficiently, your team should decide on standard ways of using, managing, storing, and maintaining the tool and the test assets.
When the UI changes too frequently, you want a self-healing feature like the one Katalon Studio provides. Self-healing capability offers a host of benefits for testers and your team: it takes less time and effort to ensure all functional tests run smoothly, and it avoids interruptions across executions.
Another important consideration is choosing powerful tools with sufficient features. A tool may have a vast feature set while your team needs only part of it, and the more complicated the tool, the slower the testing process and the training become. Hence, your team should try to reduce the feature set or remove unwanted parts from the toolbar, select a license model that meets your needs, or look for alternative tools more focused on your required functionality.
Incompatibility between different environments and platforms: test automation does not work in all environments or on all platforms. Implement automated tests to maximize tool independence, thereby minimizing the cost of using multiple tools.
This chapter shows you key considerations of free and paid automated testing tools. Regardless of the type of tool, you must be careful to investigate the total cost of ownership throughout the testing tool's lifecycle by performing a cost-benefit analysis.
Chapter 4 - Test Automation Reporting and Metrics
In this chapter, we'll explore how test automation metrics help managers and engineering teams track a project's status toward its goals and monitor the impact of changes made to the test automation solution.
The most common test automation metrics can be divided into two groups: external and internal. External metrics comprise automation benefits, the effort to build automated tests, and the effort to maintain automated tests; their purpose is to measure the impact of the test automation solution on other activities. Internal metrics, including tool scripting metrics and speed and efficiency, measure the effectiveness and efficiency of the test automation framework in fulfilling its objectives. On top of these, trend metrics give an overview of the historical performance of the other metrics.
Now we'll look at the first external metric, automation benefits. Any measure of benefit will depend on the objective of the test automation solution. Typically this means saving time and human resources, increasing the frequency of test execution or test coverage, and other advantages such as increased repeatability or fewer manual errors.
It is particularly important to measure and report the benefits of a test automation solution. Possible aspects to measure include the number of manual test hours saved and the reduction in time to perform regression testing. The costs, in terms of money, time, and the number of people involved over a given period, are easily observable. People outside the testing field will also pay close attention to the overall cost and achieved benefits to form an impression of test automation productivity.
Now we'll look closer at the second external metric, the effort to automate tests, which is one of the key costs associated with test automation. The implementation cost correlates positively with the size of the test cases: the more test steps, the higher the cost. While the cost to implement a specific automated test depends largely on the test itself, other factors such as the scripting approach used, familiarity with the test tool, the environment, and the skill level of the test automation engineer also have an impact.
Because larger or more complicated tests typically take longer to automate, computing the build cost for test automation may be based on an average build time. This can be further refined by comparing the average cost between a manual and an automated run for a specific set of tests, such as those targeting the same function or those at a given test level; for example, a manual test might take twice the effort of its automated counterpart.
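As a back-of-the-envelope illustration of that comparison, the cumulative cost of manual versus automated runs can be computed from a one-off build (setup) cost plus a per-run cost. All the figures below are invented for illustration:

```python
def cumulative_cost(setup_hours, hours_per_run, runs):
    """Total effort = one-off setup cost + per-run cost * number of runs."""
    return setup_hours + hours_per_run * runs

if __name__ == "__main__":
    runs = 20  # assumed regression cycles over the life of the release
    # Manual: no build cost, but every run takes 4 person-hours.
    manual = cumulative_cost(setup_hours=0, hours_per_run=4.0, runs=runs)
    # Automated: 30 hours to build, then 0.5 hours per run to maintain/execute.
    automated = cumulative_cost(setup_hours=30, hours_per_run=0.5, runs=runs)
    print(f"manual: {manual:.0f}h, automated: {automated:.0f}h")
```

With these assumed figures, automation overtakes manual testing once the one-off build cost is amortized over enough regression cycles, which is exactly the long-run cost pattern described in Chapter 1.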
Every piece of software needs maintenance once there is a new release. The last external metric, the effort to maintain automated tests, is vital for keeping automated tests in sync with the application under test and for highlighting when steps need to be taken to reduce the maintenance effort. Maintenance effort can be measured as a total across all automated tests, or as the average time needed to update the automated tests for a new version. The effort required to maintain test cases should correspond to the scale of changes in the application under test.
For the first internal metric, tool scripting: companies define different scripting standards, and this metric tracks the extent to which those standards are being followed. Many metrics can be used to monitor automation script development, most of which are similar to source code metrics for the application under test. For example, lines of code and cyclomatic complexity can be used to highlight overly large or complex scripts.
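For instance, a rough Python sketch using the standard-library `ast` module can compute lines of code and an approximate cyclomatic complexity (1 plus the number of branch points) for a script. The sample script and the choice of which nodes count as branches are simplifications, not a standard tool's exact definition:

```python
import ast

# An illustrative test script to be measured.
SCRIPT = """
def check_order(order):
    if not order:
        return False
    for item in order:
        if item < 0:
            return False
    return True
"""

# Node types treated as branch points in this rough approximation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def script_metrics(source):
    """Return (non-blank lines of code, approximate cyclomatic complexity)."""
    tree = ast.parse(source)
    loc = len([l for l in source.strip().splitlines() if l.strip()])
    complexity = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
    return loc, complexity

if __name__ == "__main__":
    loc, cc = script_metrics(SCRIPT)
    print(f"LOC={loc}, cyclomatic complexity={cc}")
```

Running such a measurement over the whole script library flags overly large or complex scripts that are candidates for refactoring.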
Let's move on to the second internal metric: the speed and efficiency of test automation components. Differences in how long it takes to perform the same test steps in the same environment can indicate problems in the application under test. That is, if the application under test is not performing the same functionality in the same elapsed time, an investigation is needed.
On top of the five metrics we have already discussed, trend metrics observe how the measures change over time rather than at a single point. The cost of measuring should nevertheless be kept as low as possible, which can often be achieved by automating collection and reporting.
In conclusion, this chapter gives you an overview of how test automation metrics measure the performance of automated testing processes. As with any metrics, users of test automation metrics should set their goal as building better-quality software with less effort and delivering capability faster and more affordably.
Chapter 5 - Course Summary
That's it! You've finished our course, Transition from Manual to Automated Testing: Key Considerations. To recap, you've learned:
- Key factors to consider when implementing regression testing, confirmation testing, and new feature testing
- Popular approaches and techniques to automate test cases
- How to choose between automation testing tools and frameworks
- Five key test reporting metrics to measure the quality and impact of test automation solutions
Katalon Academy is working on even more courses to help you learn about test automation and immediately put it into practice with Katalon tools. In the meantime, you can watch more of our existing courses by clicking the Courses button on the top menu bar. Thank you, and happy learning!