Software testing is a critical component of software development. It helps ensure that software applications work correctly and meet user requirements and expectations with as few defects and issues as possible. Testing is complex and challenging, and it requires a deep understanding of testing techniques, processes, and tools.
To have a successful career in software testing, you must have a solid foundation. We understand that starting in this field can be daunting, since there is a huge amount of inconsistent learning material out there. That’s why we designed this course to equip you with the essentials of software testing in a simple yet practical manner, backed by real-life examples.
In six chapters, you will learn about:
By the end of this course, you will understand your role as a QA or QC in software development and have a solid foundation to continue your self-learning journey in software testing.
Hi, welcome to Katalon Academy and the first chapter where you will learn and understand the concept of software testing correctly.
Here's a common misconception about software testing: that it's only about executing tests. This would mean that if you are a manual tester testing a web application, you would only perform some actions on the app according to the pre-written test steps and then verify the results. In the case of automated testing, you would only run the automated tests and check the results.
Software testing is more than just executing tests. In fact, it's a stage in the software development lifecycle used to assess the quality of the software and reduce the risk of software failure in operation.
It's a process that includes a set of activities, rather than just execution, to verify whether a system under test meets the specified requirements. It also validates whether the system meets the needs of users and other stakeholders in the operational environments in terms of both functional and non-functional aspects such as performance and security. This can be done by checking whether the actual results on the system match the expected results.
While there is no fixed test process for all teams, here are the main groups of activities that are necessary to meet the testing objectives. They include:
- Test planning
- Test designing (including creating test environments, test cases, and test data)
- Test executing
- Test reporting and analytics
- And finally test monitoring and control
We will discuss the objectives and details of these activities in the later chapters.
It's also worth mentioning that testing is not limited to the execution of a system under test, which is called dynamic testing. It also involves static testing, which is reviewing work products such as requirements, user stories, and source code.
For example, when you test a web application, dynamic testing is when you perform actions on the application. It requires the website to open and run to verify whether a function meets the requirements, or whether a non-functional aspect meets the user's expectations, for example, making sure that the website won't crash even with a high volume of traffic or users.
Static testing, on the other hand, is when you review the requirements of a function before it's implemented or coded. It does not require the website to operate.
Before moving to the next chapters, it would be helpful for us to learn several basic terms and truly understand what they mean.
The most common terms you will see and hear a lot are Error, Defect, and Failure. They are not the same but closely related.
Their relationship can be described like this: a person makes an error (or mistake) that can introduce a defect (often called a bug, or sometimes a fault) in the software code, which can then lead to a failure of the software during operation. One thing to remember here is that failures of a system are not solely due to defects. They can also result from other environmental factors.
For instance, when writing requirements, a business analyst misunderstands a requirement from the client, which leads to a requirement defect. Working from that flawed requirement, an engineer then makes a programming error, which introduces a defect in the system code. When the code is later executed, a system failure may be triggered during operation, depending on the specific data input.
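The error-to-defect-to-failure chain can be sketched with a hypothetical code example. Assume a made-up requirement: "customers spending 100 or more get a 10% discount". The programmer misreads it as "more than 100" (an error), which introduces a defect that only surfaces as a failure for one specific input:

```python
def apply_discount(total):
    # Defect: the (hypothetical) requirement calls for `total >= 100`,
    # but the programmer's misreading produced `total > 100`
    if total > 100:
        return total * 0.9
    return total

# The defect stays hidden for most inputs...
print(apply_discount(150))  # 135.0 -- correct
print(apply_discount(50))   # 50 -- correct

# ...and only surfaces as a failure for the boundary input:
print(apply_discount(100))  # 100 -- failure: the requirement expects 90.0
```

This is also why a system with a defect can run for a long time without failing: the failure only triggers when the right (or wrong) input comes along.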
The next terms we will discuss are Quality management, Quality assurance, and Quality control.
Quality management is the highest level. It includes all activities that direct and control an organization regarding quality, including both quality assurance and quality control.
Quality assurance is more process-oriented. It creates proper processes and makes sure that all team members follow them. When processes are carried out properly, the created work products generally have higher quality. It's reasonable to say that quality assurance focuses more on defect prevention.
Quality control, on the other hand, is more product-oriented. It includes all testing activities to identify defects and achieve appropriate levels of quality. While quality assurance supports proper testing processes, quality control covers the actual execution of the test process.
In reality, many people and companies use these two terms interchangeably, both at work and in hiring. When looking for a job, pay close attention to the job description and what the role actually requires you to do, so you know what you're applying for.
Lastly, we will look at the two terms Verification and Validation. The two have completely different purposes.
Verification answers the question "Do we build the system or product right?" It ensures a module, system, or product is designed and developed correctly according to the written specifications or requirements.
Validation, on the other hand, answers the question "Do we build the right system or product?" It checks how well the system or product addresses and meets the user's needs and expectations in a real-world context.
Now you understand the concept and the scope of testing and some common basic terms. Let's move on to see the importance of software testing.
One thing you should always remember is that there's no perfect software application. Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time, business reputation, and even injury or death.
NASA's Mars Climate Orbiter is a well-known example of one of the most costly software defects, with roughly 200 million dollars spent on spacecraft development alone. It was reported that the ground software supplied by a contractor produced results in imperial units, while NASA's navigation software expected the international system of units (SI). This mismatch caused the probe to approach Mars at a much lower altitude than planned, where it eventually burned up.
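The shape of that mismatch can be sketched in a few lines of code. The numbers here are illustrative only, not actual mission values; the point is that two components silently disagree about units:

```python
LBF_S_TO_N_S = 4.448222  # 1 pound-force second expressed in newton-seconds

def ground_software_impulse():
    # Produces a thrust impulse in imperial units (pound-force seconds)
    return 100.0

def navigation_software(impulse_n_s):
    # Assumes its input is already in SI units (newton-seconds)
    return impulse_n_s

raw = ground_software_impulse()
assumed_si = navigation_software(raw)                 # defect: no conversion
correct_si = navigation_software(raw * LBF_S_TO_N_S)  # what should happen
print(correct_si / assumed_si)  # the calculation is off by a factor of ~4.45
```

Each component works "correctly" in isolation; the defect lives in the unchecked assumption between them, which is exactly the kind of problem integration testing aims to catch.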
Software defects that create security vulnerabilities in the system can also be very costly. A system hack or a data leak can severely damage the business's reputation and profit.
A software defect in a car's safety systems, such as anti-lock brakes, stability control, or airbags, can potentially lead to accidents, injuries, or even death. These examples and many similar ones are the reason why testing exists: to assess the quality of the software and to reduce the risk of software failure in operation.
The most typical objectives of testing can be categorized into 4 groups:
1. Preventing defects: This can be done by having testers involved in reviewing the requirements or refining the user stories before the development phase begins. Identifying and removing defects in requirements reduces the risk of incorrect or untestable features being developed.
2. Finding defects: This usually happens after the development phase, when a tester comes in and performs the test steps on an application or system to verify whether a function works correctly according to the specified requirements.
3. Gaining confidence about the level of quality: Without testing, you cannot tell the quality level of the software in question. The more you test, the clearer the picture of the software's quality becomes. When bugs or defects are found early, you can quickly fix them to ensure the software's quality by the time it is released to end users.
4. Providing significant information for decision-making: The test lead or test manager normally looks into higher-level metrics, such as test coverage and defect counts, and communicates with the product manager or release manager to evaluate whether the software is ready to release. If there are too many unresolved defects, it may make sense to delay the release. However, that can have negative consequences of its own; for example, competitors could release their products first and win more users. Regardless of the decision, the information and reports from testing activities are crucial for evaluating the situation and making the right call.
That said, the objectives of testing can vary depending on the context of the business, the system under test, the development model, the testing types and levels, and so on. For example, when testing a function after it's developed, the main objective might be to identify as many defects as possible so that they can be fixed as early as possible.
But in the later stage when the software is about to be released, it would make more sense to just focus on confirming that the system can work according to the requirements and provide the information to the stakeholders to evaluate the current state of software quality for release.
One more thing we would like you to remember is that the cost of defects rises when bugs escape and are found in later stages.
Take mass-producing a new car model as an example. Let's say there's a defect in the airbag software. If we can detect this early when the function is being developed, it would be easy and inexpensive to fix.
But if we miss it, and only detect this problem after thousands of cars have been released to the market, it would cost a huge amount of money to recall and fix all the vehicles and significantly damage the business image in case of injuries. These stories have happened several times before in the car industry.
Now you understand why testing is an important process in software development. It helps us identify, prevent, and fix defects to ensure the highest possible level of quality when software is developed and delivered to end users.
Over the years, many testing principles have been developed. Here are the seven that are most widely applied; you can use them as general guidelines.
1. Early testing (sometimes called shift-left testing) saves time and money.
As we discussed in the previous lesson, defects become more costly when found in the later stages. So, it's important to carry out testing activities as early as possible in the software development lifecycle, which helps reduce or eliminate costly changes.
This is one of the primary reasons why many teams turn to Scrum or other Agile development processes instead of following the traditional Waterfall model. We will discuss this more in a later chapter about the software development lifecycle.
2. Testing cannot prove there are no defects.
Testing can only show the presence of defects and reduce the probability of undiscovered defects remaining in the software. We don't know what we don't know. Even when testing finds no defects, you cannot conclude the software is defect-free.
3. Exhaustive testing is not possible.
It is impossible to test everything in a software application. Even a simple function like uploading files has endless test scenarios: it would be impossible to test every possible file type and size within any realistic timeframe. And a web application has many other features and functions besides.
Instead of trying to test everything, use different risk analysis methods, test techniques, product knowledge, and personal experience to set priorities and focus on the areas that matter the most in a release.
4. Defects cluster together.
A small number of modules in any software system is highly likely to contain most of the defects.
Defect clustering occurs because of various reasons. It can be because certain modules or functionalities are more complex than others, or they are used more frequently, or they were not tested thoroughly during development. The underlying cause is not always clear, but it is essential to identify and focus test efforts on the areas of the software that are prone to defects.
For example, with a banking software system, we would likely prioritize testing the modules that deal with financial transactions, since these are frequently used, high-risk, and highly complex areas.
5. Avoid the pesticide paradox.
In the context of farming, when you use the same pesticide over and over again, at a certain point it will stop working, because the pests may build up immunity or new kinds of insects are introduced.
The same applies to testing. When the same tests are repeated and the software system remains the same, the same defects will be found.
In reality, new requirements and functions are frequently added, resulting in the possibility of new defects in the system. Since the existing tests were designed to look for specific defects, they may not be effective at identifying new types of bugs.
To overcome the pesticide paradox, your testing team needs to continually update and modify the test cases and test data to ensure that they will be able to identify new defects in the software system.
6. Testing depends on context.
The effectiveness of a testing approach depends on the context in which it is applied. Context includes factors such as the type of system under test, the end users, the technology used, the development methodology, and the business environment.
For example, the way you test a game application may differ from the way you test accounting software, as the requirements for the user interface, functionality, and performance are different for each. Similarly, testing a web application following Agile methodologies is certainly different from testing a desktop application using the Waterfall methodology.
To apply testing effectively in a specific context, your testing team must understand the unique features of the software system you are testing, the needs of the end-users, the business goals, and other internal conditions.
7. The absence-of-errors fallacy.
This principle states that the absence of errors, even after identifying and fixing all found defects, does not guarantee that the software functions correctly or meets user needs in reality.
Testers should not narrow their scope to just finding and fixing defects. They should always consider the broader context of the software they are testing and its intended use in real-life scenarios. Ultimately, good software is one that can meet user needs and function correctly in all relevant contexts.
There are certainly a lot more testing principles out there. But these seven principles will give you a good mindset and foundation when stepping into the world of software testing. And that's the end of this chapter. You have learned the concept of software testing, the objectives, and its importance in software development. See you in the next chapter.
Hi and welcome to chapter 2 of the Fundamentals of Software Testing course. Since software testing is a stage in the software development lifecycle, it's beneficial for us to understand the development lifecycle first before we jump into the main testing activities in a later chapter.
A software development life cycle (SDLC) is a process used to develop and deliver software applications to end users. There are many different SDLC models, each with its own principles, characteristics, and ways of operation. That said, the typical activities or stages involved in an SDLC include:
1. Planning
This stage involves:
- gathering requirements,
- defining the project goals, objectives, and approach,
- estimating the resources,
- identifying associated risks,
- and developing a project plan and documents.
Let's say a client comes to your software development team to develop a new mobile application. Your plan will include defining the application features, setting the project timeline, identifying the resources needed to complete the project, agreeing on terms and conditions, and more.
2. Analysis
After gathering the requirements, the development team moves on to the Analysis stage, where they conduct a detailed analysis of the client's needs and requirements, decide how to approach those needs and transform them into requirements, and identify any potential challenges or constraints that may affect the project's progress.
In this stage, a document will be written to describe the client's needs, high-level requirements, business opportunities, and constraints.
3. Design
At this stage, a detailed plan and documents are created for software requirements, specifications, or user stories. The software architecture, the user interface, and the database design are also defined.
Continuing our example, the development team may create a wireframe or prototype of the mobile application, which defines the layout of the screens, the navigation flow, and the user interactions.
4. Development
In this stage, developers or engineers start writing the actual code for the application, based on the design and specifications created in the previous stages.
The development team may use a programming language like Java, or whatever language is specified in the requirements, to write the code for the mobile application, and use a unit testing framework like JUnit to test the small individual modules early.
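The text mentions JUnit for Java; as an illustration of the same idea, here is a minimal unit-test sketch using Python's built-in unittest module, with a hypothetical add_to_cart function standing in for real application code:

```python
import unittest

def add_to_cart(cart, item, quantity):
    # Hypothetical stand-in for a real application function
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

class TestAddToCart(unittest.TestCase):
    def test_adds_new_item(self):
        self.assertEqual(add_to_cart({}, "book", 2), {"book": 2})

    def test_increments_existing_item(self):
        self.assertEqual(add_to_cart({"pen": 1}, "pen", 3), {"pen": 4})

    def test_rejects_zero_quantity(self):
        with self.assertRaises(ValueError):
            add_to_cart({}, "book", 0)

# Run the tests programmatically (a JUnit test runner plays this role in Java)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddToCart)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```

Each test checks one small behavior of one unit in isolation, which is exactly what makes unit-level defects cheap to locate and fix.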
5. Testing
In this stage, the team carries out testing activities, from planning and designing to executing, reporting, and analyzing. Different types of tests are executed at different test levels to ensure that the software application meets the client's requirements and expectations in terms of both functional and non-functional aspects.
We will discuss in detail the different types and levels of testing in later chapters.
6. Deployment
Once the software application is tested and approved, it is deployed to the production environment. This involves installing the software on the specified server and configuring it to work in the specified environment, and it may involve training the end users on how to use the application.
7. Maintenance
The final stage of the software development lifecycle involves maintaining and supporting the software application after deployment. This includes fixing any bugs or issues that arise, updating the software to address new requirements or changes in the environment, and providing technical support to the end users.
Remember, the stages we have just described in the SDLC are at a general and high level. The way they are applied in real life depends on the business context, common practices, and development models which we will discuss in the next lesson.
There are many software development lifecycle models. Each has a different way of performing development activities at different stages, and how the activities relate to one another logically and chronologically. And each development model requires different approaches to testing.
That said, the common software development lifecycle models can be categorized into 2 groups:
- The first one is sequential development models (including popular ones like Waterfall and V models)
- And the second group is iterative and incremental development models (the most popular one is arguably Scrum)
Let's quickly walk through the Waterfall and V-model.
Looking at the Waterfall model, you can see the different stages of software development are linear, one after the other. Only when one stage is finished can the next one start, and testing for verification and validation purposes begins at a very late stage, which is certainly not optimal for ensuring software quality under time constraints. We cannot carry out testing activities early, and there is a lot to test once all the features are developed and passed to the testing phase.
This, however, is significantly improved in the V-model. For each stage of development on the left side of the V, the testing team is involved and integrates the test process at the corresponding test level.
But the biggest drawback of sequential development models remains. Since they typically aim to deliver complete software with a full set of features, they normally require many months or even years to deliver what stakeholders and users want. These models no longer work for companies in competitive markets where quality at speed is required.
That's why teams turn to the iterative and incremental development model to overcome this problem. It allows them to break down a big development lifecycle into a series of smaller cycles. In each, they perform all development activities from planning, analyzing, creating specifications, designing, developing, to testing, only for a small group of software functions or features. Then, repeat for the next iteration.
Each iteration delivers working software that combines previously developed features with newly added features or enhancements. This means the software's features grow incrementally over time until it is delivered with a complete set of features. The software is constantly updated, and new versions are regularly released to the market and end users, generating profits for the business, whereas traditional models can require years of waiting time.
Let's take Scrum as an example. In this Agile framework, each iteration, commonly called a sprint, typically lasts two weeks, and the feature increments are correspondingly small, such as two or three new features together with a few enhancements.
During a sprint, the testing team is involved in the different development activities and carries out the corresponding test activities for a small group of features. At the end of each sprint, there are review and retrospective meetings. This allows the organization to learn, develop, and deliver fast, and also get constant feedback while still ensuring software quality.
Now you understand the software development lifecycle and different models, together with their major pros and cons. But regardless of the models, there are still common test activities, which we will discuss in detail in the next chapter.
Hi and welcome to chapter 3 where we're going to identify and explain the main activities in a software testing lifecycle or test process. This is essential to understand and remember as most organizations follow and carry out these testing activities.
You should always remember that there is no universal fixed test process for all teams. It heavily depends on the organizational context, which may include:
- Software development lifecycle model and project methodologies being used,
- Test levels and test types being considered,
- Product and project risks,
- Operational constraints, including but not limited to budgets, time, resources, complexity, contractual and regulatory requirements,
- and other organizational policies and practices.
That said, there is a set of very common activities that make up a test process. Without them, testing is less likely to achieve the objectives we have discussed. The activities include:
- Test planning
- Test analysis
- Test design
- Test implementation
- Test execution
- Test reporting
- Test monitoring and control
In the Waterfall model, these activities are likely to happen only once. But in Agile development, they will be repeated in every iteration for a specified set of developed features.
Let's go through each activity and discuss their general tasks.
1. Test planning
Test planning involves activities that define the scope, objectives, and strategy for testing activities. It is a continuous activity, which is performed, adjusted, and updated throughout the product development process.
A test plan is typically created by a test lead or test manager during the planning and analysis stage of the software development lifecycle. A detailed and comprehensive master test plan may be required for traditional development models, while in Agile development a test plan is created for every iteration and is relatively shorter and simpler. It's often subject to many changes based on the changing requirements and priorities in each iteration.
To create a well-structured test plan, the activities that need to be conducted may include:
- Determining the scope, objectives, and risks of testing
- Defining the overall approach of testing, including test levels, testing types, testing techniques, and toolset to be used
- Integrating testing activities into the development activities
- Deciding what to test, the people and other resources required, and how to test
- Defining timeline and scheduling testing activities for each iteration to meet the defined objectives
- Estimating the budget
- Defining test deliverables, including different test documents and reports to be produced
- And finally, selecting metrics for test monitoring and control
The content of a test plan varies and can extend beyond what we have discussed since it depends on many internal and external factors.
2. Test analysis
During test analysis, the testing team reviews, studies, and analyzes the requirements to identify the testable features and define any associated test conditions (a test condition is an aspect of the software that a test case is designed to verify or validate). The output of this stage is a determination of "what to test" while, at the same time, identifying any requirement defects to fix early.
The analysis activities may include:
- Going through and analyzing the requirements or any similar work products
- Identifying any types of defects in the requirements such as inaccuracies, ambiguities, contradictions, and omissions
- Identifying features and sets of features to be tested
- Defining and prioritizing test conditions for each feature based on requirement analysis
These activities not only help verify whether the requirements are consistent, properly expressed, and complete, but also validate whether they properly capture the stakeholders' and users' needs.
3. Test design
After knowing what to test, this stage helps you answer the question "how to test". Based on the test conditions defined in the analysis stage, you will design:
- A set of test cases with defined priorities
- Other supporting test artifacts (often called testware), including test data, test environment, tools, and automated test scripts.
Let's say, for an e-commerce website, you will need to design tests to test different modules and core functionalities like login, adding a product to cart, checking out, paying with a credit card, etc.
To test these functions, we may need to set up at least one non-production environment, where access is restricted to the internal development teams. Since testing in the production environment (where end users actually use the system) is too risky, non-production environments are usually created for testing purposes.
Test data also needs to be defined for the different tests. For example, we need combinations of valid and invalid usernames and passwords to test different scenarios of the login function.
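As a hedged sketch, such test data can be laid out as a table of input combinations and expected results. The validation rules below (email-formatted username, password of at least 8 characters) are assumptions made up for illustration:

```python
import re

def is_valid_login_input(username, password):
    # Assumed rules for illustration: username must look like an email,
    # password must be at least 8 characters
    email_ok = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", username) is not None
    return email_ok and len(password) >= 8

# Each row: (username, password, expected validation result)
login_test_data = [
    ("user@example.com", "s3cretPass", True),   # both valid
    ("user@example.com", "short",      False),  # invalid password
    ("not-an-email",     "s3cretPass", False),  # invalid username
    ("not-an-email",     "short",      False),  # both invalid
]

for username, password, expected in login_test_data:
    assert is_valid_login_input(username, password) == expected, (username, password)
print("all login data combinations behaved as expected")
```

Keeping test data in a table like this makes it easy to spot which combinations are covered and to add new rows as new scenarios come up.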
To know what test cases need to be designed for corresponding test conditions, you will need to know how to apply different test techniques for white-box, black-box, and experience-based testing.
4. Test implementation
Now we move on to answer the question "Do we have everything ready to run the tests?" You will need to build and create any test artifacts that are needed for test execution. Sometimes, this implementation stage is combined with the design stage.
Some typical activities include:
- Developing test procedures (a sequence of test cases in execution order)
- Creating test scripts if your team applies automated testing
- Organizing tests for efficient execution
- Building the test environments where tests will be performed
- Preparing test data and ensuring it is properly loaded in the test environments
These building activities may take a lot of time at first. But once the building is finished, you can reuse these test artifacts in later iterations in Agile development.
5. Test execution
After everything is planned and set, the next step is to actually run the tests, either manually or through an automation tool. Then, compare the actual results with the expected results (which come from the written requirements or verbally from the stakeholders).
A test case failure may occur when there's a defect in the software, but sometimes it comes from the executed test itself. For example, a tester fails to follow the test steps in the right order, or there's something wrong with the automated test scripts. In cases like these, the reason the tests fail is not a software defect. You will have to analyze, fix, and repeat the execution if necessary.
After running the tests, the next task is to log the outcome of the test executions and move on to the next stage.
6. Test reporting
This stage involves collecting and documenting data about the testing process, test results, and other metrics related to the project. Test results are analyzed, and reports are created to help stakeholders understand the testing status, including defects found, defects fixed, how many requirements are covered by testing, and the overall quality of software.
Typical test reports may include:
- Test summary report: providing an overview of the testing effort, including the testing objectives, scope, results, status, and any issues that were encountered
- Defect report: summarizing all defects found during testing, including their severity, priority, and status
- Test coverage report: showing how well the requirements are covered by testing, based on the status of the corresponding test results
- Traceability matrix report: mapping the requirements to the corresponding test cases, providing a clear picture of how many requirements have been covered by testing
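As an illustration, a traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them. The IDs below are made up for the e-commerce example:

```python
# Hypothetical requirement IDs mapped to the test cases that cover them
traceability = {
    "REQ-001 user can log in":      ["TC-01", "TC-02"],
    "REQ-002 user can add to cart": ["TC-03"],
    "REQ-003 user can pay by card": [],  # not covered yet
}

covered = sum(1 for tests in traceability.values() if tests)
total = len(traceability)
print(f"requirement coverage: {covered}/{total}")  # requirement coverage: 2/3
```

Even in this tiny form, the matrix immediately exposes the gap: REQ-003 has no tests, so any coverage claim for the release would be incomplete.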
The test reporting phase is critical since it provides a way to measure progress for decision-making. Good test reporting also lays a foundation for future testing efforts by identifying areas to improve, so the testing team can do better in the next iterations.
7. Test monitoring and control
Throughout the testing process, there are real-time monitoring and control activities. The purpose of this is to track the actual testing progress and make sure that it is meeting the objectives and goals defined in the test plan.
Test monitoring involves tracking the test activities that are being carried out in real-time. This includes monitoring the progress of all testing activities from analyzing, designing, and implementing to test execution and reporting.
Test control, on the other hand, involves taking corrective actions when there's anything going off-track compared to the test plan. This includes identifying and resolving issues related to all testing activities and making changes to the test plan as needed.
Test progress against the plan is regularly communicated to stakeholders in test progress reports, which provide an update on the progress of testing activities.
And that's the overall stages of the software testing lifecycle. At the end of each iteration, there are usually meetings where the team gathers feedback and evaluates the entire testing process to learn from the successes and identify areas of improvement.
Hi and welcome to another chapter, where we're going to introduce you to another very common concept: levels of testing.
Let's say you're building an e-commerce website. It has multiple pages or modules, allowing a user to register, log in, view products, add something to their cart, check out, and pay with the input of their payment information.
When developing a whole website like that, developers usually start with very small individual units or components like coding the rules for buttons, and the username and password fields. Then, they integrate those components together to build a module like a login page. Different modules will then be integrated to form a larger and complete system.
Different levels of testing are introduced to ensure the quality of a software application throughout these stages of development, following the practice of early testing.
Simply put, test levels are groups of test activities that are organized and managed together. Each test level may include all the testing activities we discussed in the previous chapter. They are performed in relation to a given level of development, from developing individual units or components to an integrated and complete system.
The four universal levels of testing are:
- Unit or component testing
- Integration testing
- System testing
- And acceptance testing
Let's walk through each level, with examples, to build a better understanding.
1. Unit testing (aka component testing)
You can understand a unit as the application's smallest testable piece of code that makes sense to test. That code is written to perform a specific task, such as a method or function. That unit can be tested in isolation from the rest of the code to ensure it works correctly according to the expected result.
For example, for the login function of an e-commerce website, a developer would break down the code into individual units, which could include:
- Input validation: The dev will write code to check if the user account input is valid. For example, if the email address format is correct, or if the password contains a certain number of characters.
- Or redirection: making sure the user gets redirected to their account page after successful login.
Defects can be found and fixed early in the development process, saving time and money since fewer bugs escape to later stages.
Unit or component testing is normally the responsibility of a developer, or occasionally a tester who knows how to code. The developer writes test scripts that exercise the code for each unit. This involves identifying the inputs that the unit expects, executing the code with those inputs, and verifying the expected outputs. Unit tests are typically automated using testing tools and frameworks such as JUnit.
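To make this concrete, here is a minimal sketch of what such unit tests could look like in Python. The `is_valid_email` and `is_valid_password` functions are hypothetical stand-ins for the input-validation units described above, not code from any real application:

```python
import re

# Hypothetical input-validation units from the login example.
def is_valid_email(email: str) -> bool:
    """Return True if the email roughly matches name@domain.tld."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", email) is not None

def is_valid_password(password: str) -> bool:
    """Return True if the password has at least 8 characters."""
    return len(password) >= 8

# Unit tests: known inputs, verified against expected outputs.
def test_email_validation():
    assert is_valid_email("jane@example.com")
    assert not is_valid_email("jane@example")   # missing top-level domain
    assert not is_valid_email("not-an-email")

def test_password_length():
    assert is_valid_password("s3cretpass")
    assert not is_valid_password("short")

test_email_validation()
test_password_length()
```

Each test exercises one unit in complete isolation, which is exactly what makes failures easy to localize and fix.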
2. Integration testing
When developers finish developing and testing individual units or components, they will then incorporate those together to form a larger module or functionality.
Continuing our example of the Login page, the Login function is a combination of several integrated components such as the username and password input fields, the Login button, the authentication server, and the user database.
During integration testing, we would test the interaction and integration of these components to ensure that they work together correctly to achieve the intended functionality of the login function.
For example, we can create test cases to check that when a user enters the correct username and password credentials, the authentication server correctly checks the user database and grants access to the user. We can also test for scenarios such as incorrect username or password entry, simultaneous login attempts from multiple users, and server downtime.
Testers or QAs are normally responsible for integration testing. During this phase, we should focus more on the integration itself since the functionalities of the individual components have already been covered during component testing.
For example, if there are two components A and B that need to work together to achieve a particular task, the integration tests would focus more on verifying that the data and communication between them are accurate and efficient.
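As a sketch of that idea, the Python snippet below wires a simplified authentication component to a simplified user database and tests them together. Both classes are hypothetical stand-ins; the point is that the assertions focus on the data flowing between the two components, not on each component alone:

```python
class UserDatabase:
    """In-memory stand-in for the user store."""
    def __init__(self):
        self._users = {"alice": "correct-horse"}

    def get_password(self, username):
        return self._users.get(username)

class AuthServer:
    """Authenticates a login attempt by consulting the user database."""
    def __init__(self, db):
        self.db = db

    def login(self, username, password):
        stored = self.db.get_password(username)
        return stored is not None and stored == password

# Integration tests: verify the two components cooperate correctly.
db = UserDatabase()
auth = AuthServer(db)
assert auth.login("alice", "correct-horse")        # valid credentials
assert not auth.login("alice", "wrong-password")   # wrong password
assert not auth.login("unknown", "anything")       # unknown user
```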
3. System testing
When we have different large modules ready, like user management, product selection, shopping carts, payment, and so on, we integrate them together to build a complete system, which is an e-commerce website.
System testing focuses on the behavior and capabilities of a whole system or product to perform end-to-end tasks and also the non-functional aspects while performing those tasks.
For example, when testing an e-commerce website, system testing would involve testing the website's login, followed by searching, adding items to cart, checking out, and making payment functionalities. This is usually called end-to-end testing since it tests the whole system with complete user flows.
We would also write non-functional tests. For example, test how well the website performs under heavy user load (load testing), how secure the website is against hacking attempts (security testing), how easily users can navigate the website (usability testing), how well the website performs under different network conditions (network testing), and how well the website responds to unexpected failures (recovery testing).
Overall, the goal of system testing is to ensure that the whole website meets the requirements and expectations of end-users in terms of both functional and non-functional aspects.
There's a higher level of system testing that we should be aware of. Let's say, the e-commerce website is integrated with a third-party payment processing system. It would be necessary to test the exchange of information and data between the website and the payment processing system, ensuring that payment information is processed and verified correctly. Other aspects such as security and reliability should also be tested for that integration.
When an internal system is connected with external systems, testing the interactions between these systems is referred to as system integration testing. This type of testing involves verifying that the two systems exchange data correctly, communicate effectively, and behave correctly as expected.
System testing is very important since it produces information that is crucial for stakeholders to make release decisions. Independent testers who rely on requirements or specifications are usually responsible for system testing.
At this level, make sure there are no defects in the requirements themselves, since defective requirements can lead to misunderstandings of the expected system outputs, which in turn cause disagreement and argument. To prevent this, teams should involve testers and QAs in static testing during the early phases of development.
4. Acceptance testing
Acceptance testing is the final level of the software testing process and is performed to determine whether the software system is acceptable for delivery to customers or end users.
While system testing is carried out by the testing team and the main objectives are finding and fixing defects, acceptance testing is usually performed by end-users, customers, clients, or other stakeholders who are not part of the development team.
It is normally black-box testing since the people who test do not know about the software's internal codes, structure, and design. The main focus for acceptance testing is validating whether the software can meet the user's needs in the real world.
One of the most common forms of acceptance testing is alpha and beta testing. In particular, alpha testing is performed at the development site or in the development environment by internal stakeholders or a group of selected users.
Beta testing, which normally happens after alpha testing, is performed mainly in the user environment. Companies usually release the product to only a group of potential and existing users. They will try and use the application to see whether it can meet their needs and provide feedback for improvements.
And those are the four levels of testing: unit testing performed by developers, integration and system testing performed by the testing team, and finally acceptance testing done by other stakeholders or end users. These levels are established to tie closely to the different levels of development, ensuring the quality of a software application throughout the development process.
Welcome to the fifth chapter of the course Fundamentals of software testing! In this chapter, you will learn some of the most common and important testing types that most testing teams perform throughout the development lifecycle.
A testing type refers to a group of testing activities, or a specific method or approach used to evaluate particular aspects or characteristics of the software product.
You should remember that testing types are different from testing levels. Testing levels refer to the testing activities that are conducted to ensure the quality of the product, based on the different levels of development.
On the other hand, testing types are used to evaluate specific aspects and characteristics of the software. These aspects and characteristics include but are not limited to:
- Functional quality characteristics, such as completeness, correctness, and appropriateness
- Non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
- Structure or architecture of the component or system
- Effects of changes to the system
Testing types can be conducted at different levels of testing where it's appropriate depending on the specific requirements of the software. We will give you some examples to help understand this better. But before that, let's walk through some of the most common types of testing, which include:
- Functional testing
- Non-functional testing
- Structural testing
- And change-related testing (including two smaller common types which are confirmation and regression testing)
1. Functional testing
Functional testing focuses on ''what'' a software system should do at different testing levels. As the name suggests, it involves tests that verify whether software functions work correctly based on the specified requirements.
For example, for a simple calculator app, you would verify that it can handle each math operation separately for unit testing, including addition, subtraction, multiplication, and division. Then, verify whether it can correctly handle chained operations in the right order such as 3+3*3=12 at the integration level and many other specified requirements regarding the app's functionalities.
Functional testing is usually a type of black-box testing, meaning that the testers are focused on testing the functionality of the software without knowledge of its internal workings, structure, or implementation.
The testers treat the software as a "black box," and the focus is on the inputs, outputs, and verifying that the functions give the expected results based on the specified requirements, regardless of how they are coded or implemented.
2. Non-functional testing
While functional testing looks into what a system does in terms of functionality, non-functional testing focuses on ''how well'' that system behaves.
It is very important to ensure the quality of software by ensuring it can meet the user's expectations in the real world. For example, if an e-commerce website takes about 1 minute to load all the products and information on a page, users are likely to bounce off. Or if the navigation is poorly designed, they will get annoyed and stop using the website.
That's why non-functional testing exists, to look into non-functional aspects of a software system, which include these main aspects: performance, usability, security, and compatibility.
Some of the most common types of non-functional testing are:
First, performance testing: This involves evaluating the software's performance under different conditions, such as load, stress, and endurance. In particular, load testing tests the software's performance under expected load conditions such as how it performs with a certain number of concurrent users, transactions, or requests. Stress testing tests the performance under very heavy load conditions, usually beyond the software's expected limits. Finally, endurance testing examines the performance under expected load conditions over an extended period of time.
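As a toy illustration of load testing, the sketch below simulates a number of concurrent "users" calling a hypothetical request handler and measures response times. Real teams would use dedicated performance tools for this, but the underlying idea is the same:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    """Stand-in for one request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)               # simulated server processing time
    return time.perf_counter() - start

def load_test(concurrent_users: int):
    """Fire requests from N simulated concurrent users; return latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(handle_request, range(concurrent_users)))
    return max(latencies), sum(latencies) / len(latencies)

worst, average = load_test(concurrent_users=50)
# A load test then asserts the results stay within the expected limits,
# e.g. that the worst response time is under an agreed threshold.
assert worst < 1.0
```

Stress testing would raise `concurrent_users` far beyond the expected limit, and endurance testing would keep the expected load running for an extended period.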
Next, we have usability testing: This involves evaluating the software's user interface and user experience to ensure that it is user-friendly and intuitive. Testers might require some UI/UX expertise.
Next is security testing: This involves evaluating the software's ability to protect against security threats such as hacking, phishing, and data breaches.
Finally, compatibility testing: This involves evaluating the software's ability to work with different hardware, software, database, and network configurations such as browsers, operating systems, devices, internet speeds, and different network and database settings.
These are just some very common types of non-functional testing. There are many others, such as reliability testing, scalability testing, recovery testing, and maintainability testing. You can search online for their definitions and examples to learn more about the different types of non-functional testing.
3. Structural testing
Next, we have structural testing, which is also known as white-box testing or code-based testing. Compared to black-box testing which verifies functions based on the inputs and outputs, structural testing or white-box testing looks into and tests the software's internal structure in a detailed and systematic way to ensure that it meets the design specifications and functional requirements. The internal structure may include code, architecture, workflows, and data flows.
Let's look at an example of why white-box testing is important. If there's a logical flaw in the e-commerce website's payment module, users might bypass the process and purchase products without actually paying for them. This can lead to financial losses for the business.
Another popular example is software applications that have slow loading time or response time because the code inside is not optimized for performance. This can make users frustrated and stop using the app. Code that is poorly designed, hard to read, or difficult to understand also makes it challenging for developers to make changes, fix bugs, and maintain.
Structural testing can be performed at any testing level. Most of the time, it is carried out by developers or specialized testing engineers who have deep knowledge of coding. It helps detect and fix defects in the code so that the functions and performance of a system can meet the requirements and expectations of users.
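A small sketch of the white-box perspective: the tests below are designed from the internal branches of a hypothetical `apply_discount` helper in the payment module, choosing inputs so that every branch executes at least once (full branch coverage of this one function):

```python
def apply_discount(total, coupon=None):
    """Hypothetical payment-module helper with three branches."""
    if total <= 0:
        raise ValueError("total must be positive")   # branch 1: invalid input
    if coupon == "SAVE10":
        return round(total * 0.9, 2)                 # branch 2: valid coupon
    return total                                     # branch 3: no coupon

# One test per branch -> every branch of this function is exercised.
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0) == 100.0
try:
    apply_discount(0.0)
    raise AssertionError("expected a ValueError")
except ValueError:
    pass
```

A black-box tester might never think to try a zero total, but a tester reading the code sees the guard clause and writes a test for it.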
4. Change-related testing
When developing a software system, changes are frequently made, whether to correct code defects or to add new functionality. Whenever this happens, testing should be performed to confirm that the changes have fixed the defects or added the new functions successfully, and that the remaining functions still work fine.
There are two common types of change-related testing.
The first one is confirmation testing, also called retesting. This happens when a defect is identified and fixed. It involves re-executing all the test cases that failed due to that defect to confirm whether it has been successfully fixed.
The second type is regression testing. When a new change is added, whether it's a defect fix or a newly added function, it may have a negative effect on the existing code and functionalities in the system. Such unintended effects are called regressions. Regression testing is performed whenever a new change is introduced to the system to make sure the change doesn't have any negative impact on the existing code and functions.
Regression testing is extremely important, especially in Agile or iterative and incremental development. This is because new changes are always introduced to the system in each iteration.
As the functionalities stack up iteration after iteration, more pressure is placed on the testing phase. You will have to test the new functions and, at the same time, make sure the existing ones still work fine. If your team only executes tests manually, it will be very hard to cover all the previous functionalities over time since they grow larger and larger in number after many iterations.
That's why many teams turn to automated testing, and regression testing is a very strong candidate to start automating. Whenever a function is built and added to the system, automated test scripts are created to test that function and then added to the regression test collection, or test suite. The regression tests grow in number over time and can be run every time a new feature is added.
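A toy sketch of that growing suite is below. Each time a feature ships, its automated test is registered, and the whole collection is re-run on every new change; any failure signals a regression. The function names and the `regression_test` decorator are illustrative, not part of any real framework:

```python
regression_suite = []

def regression_test(fn):
    """Register a test so it runs in every future regression cycle."""
    regression_suite.append(fn)
    return fn

@regression_test
def test_login():            # added in iteration 1
    assert "user" == "user"

@regression_test
def test_add_to_cart():      # added in iteration 2
    cart = []
    cart.append("book")
    assert cart == ["book"]

def run_regression_suite():
    """Run every registered test; return how many passed."""
    for test in regression_suite:
        test()               # any failure raises and signals a regression
    return len(regression_suite)
```

In practice, test frameworks and CI pipelines play the role of this registry, re-running the entire suite automatically on each commit or iteration.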
Test automation is a great way to significantly increase your testing efficiency with fewer resources and less effort. In other courses, we will show you how to use the Katalon Platform and low-code solutions to start applying test automation to your working projects.
Now, before ending the chapter, remember that testing types and testing levels are different. It's possible to perform any testing type (not limited to what we have discussed above) at any testing level. Let's have some examples for a banking application.
At the unit testing level: For functional testing, you can test the calculation logic of interest rate for a savings account. For non-functional testing, you can test the response time of a function that retrieves the account balance from the database. For structural testing, you can look into the statement coverage of the code that calculates the interest rate for a savings account (ensuring every statement in the code has been executed at least once).
At the integration testing level: For functional testing, you can test the functionality of transferring money between two different accounts. For non-functional testing, you can test the performance of the application by simulating multiple concurrent fund transfer transactions. For structural testing: you can test the branch coverage of the code that processes the fund transfer transaction.
At the system testing level: For functional testing, you can test different workflows to verify the core functionalities of the app, including all the modules such as login, account management, transaction processing, and reporting. For non-functional testing, you can test the performance of the system under a heavy load of transactions and test the ability to protect sensitive data, prevent unauthorized access, and other security vulnerability testing and security compliance testing. For structural testing, you can test the code coverage of all the system components, including the user interface, application logic, and database. For change-related testing, you can test the impact of a new feature, such as mobile banking, on the system test cases for existing functionalities.
You would also want to check if the banking app can exchange data and work as expected with the integration with other third-party applications from other financial businesses.
And that's the end of this chapter, you now know some of the most common and important testing types. You also understand the relationship between testing types and testing levels with our examples. In the last chapter, let's talk about the mindset of a QA, QC, or tester and how it is different from a developer's.
In this last chapter, we will look into the mindset difference between testers and developers, the psychology you should beware of, and how to enhance the communication between the two teams.
Testers and developers certainly have different mindsets because they approach software from different perspectives. Developers are typically focused on building and implementing software, while testers are focused on finding defects, verifying and validating the software, and other testing objectives we discussed earlier.
Developers tend to have a more technical and analytical mindset. They are skilled in programming languages, software development frameworks, logic, and tools used to build software. They approach software development from a solution-focused mindset and are focused on creating functionality that meets requirements efficiently.
Testers, on the other hand, have a more critical, exploratory, and professional pessimistic mindset. They approach software from a user's perspective and are focused on finding defects and improving the overall quality of the software. They have a deep understanding of the software requirements, user stories, acceptance criteria, user behaviors, and use this knowledge to identify areas of the software that require testing and improvement.
Let's consider an example. Suppose a developer is working on building a feature that allows users to upload images to a social media platform. The developer would be focused on building an efficient and scalable feature that meets the technical requirements for uploading images. They may consider factors such as file size, image format, and bandwidth usage to ensure the feature works as intended.
A tester, on the other hand, would approach this feature from a user's perspective. They would consider how users might interact with the feature and what issues they may encounter. For example, they might test to see if the feature works for users with slow internet connections or if it supports common image file types. They may also test for edge cases, such as what happens if a user uploads a large number of images at once.
These two mindsets, when they work well together, can ensure a very high level of software quality. But there are some psychological factors that affect the relationship between testers and developers.
Confirmation bias is a well-known psychological tendency that can hinder one's ability to accept information that challenges their existing beliefs. This can be particularly problematic in software development, where a developer may become so invested in their code that they have difficulty accepting that it could contain defects. Even when presented with evidence that their code is incorrect in certain scenarios, a developer may be reluctant to accept this information due to their confirmation bias.
Unfortunately, this is just one of many cognitive biases that can affect how people understand and accept information produced by testing. In some cases, identifying defects during a static test or during dynamic test execution may be perceived as a negative criticism of the developed system and of its author.
To overcome these psychological barriers, it is important to build a culture and environment where testers and developers are educated to understand and appreciate each other's roles and responsibilities. They should aim to work collaboratively, focusing on the common goal of delivering high-quality software. Testers can provide developers with valuable feedback on the quality of their code, while developers can help testers understand the technical aspects of the software.
Effective communication is also crucial in breaking down barriers. Both testers and developers should learn and have good interpersonal skills to be able to communicate effectively and build positive relationships with coworkers.
Here are some ways of constructive communication that you can follow:
- Collaborate instead of battling. Remind your team member of the shared goal of achieving better quality systems.
- Highlight the benefits of testing when needed. For example, finding defects helps authors realize their mistakes and deliver quality, profitable software for the business.
- Deliver test results and findings, backed up with evidence, in a neutral way without criticizing the person who created the defective item.
- Try to understand the other person's perspective, and acknowledge their feelings and concerns. Recognizing their point of view can help build trust and respect.
- Confirm that both parties have understood what has been said and ensure that there are no misunderstandings. This can prevent confusion and promote clear communication.
The mindset difference and confirmation bias are also the reason why independent testing is very important. A developer's mindset may have some of the elements of a tester's mindset. And there's no doubt that they are able to test their own code and maybe even at different levels.
That said, experienced developers are often more interested in designing and building solutions than in breaking things to find possible defects. And confirmation bias makes it difficult for them to think of the test scenarios that might expose defects in the code which they write and expect to work.
That's why it's a common practice to apply independent testing, which means the test activities are done by independent testers rather than the developers themselves. It brings an objective and unbiased perspective to the testing process, hence, increasing the defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems.
Here are the different levels of independent testing:
- At the lowest level, testing is done by the developer who writes the code and builds the system.
- At a higher level, testing is done by another person on the same development team, such as another developer.
- At the next level, testing is done by an independent testing team within the same organization. The quality ensured at this level is relatively high.
- At the highest level, testing is done by independent testers from other organizations, such as outsourced testing services. This is expected to have the least bias in testing.
And that concludes our course Fundamentals of software testing. We have covered a lot in this learning session. So, let's recap:
Remember, software testing is more than just running tests. It's a process to verify and validate a system according to user requirements and expectations. Defects can be extremely costly and dangerous.
Software testing involves both static and dynamic testing. Test early is one of the best practices to follow. Try to remember and apply the seven testing principles to your work.
Testing is a crucial part of the software development lifecycle. There are many development models and each may require different testing approaches. The majority of software businesses follow the incremental and iterative development model since it allows them to develop and deliver software to the market fast. This is done by breaking down the development process into short iterations where they plan, design, develop, and test only a small portion of features.
There is no universally fixed testing process for all teams. But the typical stages in a software testing lifecycle are test planning, analysis, designing, implementation, execution, report, monitoring and control. These activities are repeated in each iteration.
There are four common levels of testing: unit testing (aka component testing), integration testing, system testing, and acceptance testing. Each is closely tied to a level of development to ensure the quality of software throughout the development process.
Testing types evaluate specific aspects and characteristics of the software. The most common ones include functional testing, non-functional testing, structural testing, and change-related testing. There are many other specific types to meet specific testing objectives, and any testing type can be applied at any testing level where it's appropriate.
Testers and developers have different mindsets, which when combined can ensure the highest level of software quality. As a tester or QA, you should be aware of certain negative human psychological factors to overcome them with effective communication and an appropriate mindset when working with other stakeholders.
Congratulations! You have successfully completed the course. These fundamentals will help you build a good foundation to step into the world of software testing. See you on another course.