We all know the importance of software testing, but the benefits it brings to the project are proportional to how good the testing is. To ensure that your testing is efficient and you are getting the most out of it, here are some good practices you can implement in your daily work.
Choose the Right Tools
Testers work with many tools in their daily work: communication tools, project management tools, test management tools, and, increasingly, test automation tools. That is why it’s important to use the right tool for each task. While some decisions may be out of your control because they are made company-wide, like which communication (email, chat) tools are used, you can usually choose which testing tools you work with.
For test management tools, as well as for web automation tools, first understand what problems you are trying to solve. For example, an API project has different requirements than a mobile application, so, at least in terms of automation, different tools and frameworks apply. Other things to consider when choosing a tool are:
- what is the available budget?
- what are the team’s technical skills?
- how well does the tool integrate with others? (with your other testing tools, with communication tools, with CI/CD pipelines)
Find the Right Balance between Manual and Automated Testing
Automated testing is all the rage lately, and for good reasons, but that doesn’t mean everything needs to be automated. 100% automation is rarely feasible, and some tests are simply too complicated to automate while still delivering a good ROI. Aim to automate repeatable tests, tests that validate functionality that is not expected to change often, and tests that cannot reasonably be performed manually (like load and stress tests).
Reserve time for manual testing, so testers can use their experience and creativity to perform exploratory and ad-hoc testing. Usability testing is also better suited to manual testing, because only a human can evaluate whether a feature is “easy to use”.
Focus on Good-Quality Tests
When writing tests, whether manual or automated testing scripts, follow best practices for their creation:
- each test should validate a single functionality.
- automated tests should have consistent results as long as the implementation doesn’t change.
- in test automation, be wary of false negatives (tests that pass despite a defect), because they can cause you to miss important bugs, and of false positives (failures with no real defect), because they erode trust in the test suite.
- test steps should be clear and easy to follow.
- test data needs to be relevant and as similar to real-life data as possible.
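As a minimal illustration of these guidelines, here is a hedged Python sketch. The `apply_discount` function and its rules are hypothetical; the point is one behavior per test, clear steps, and fixed, realistic data so results are consistent:

```python
# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# One test, one behavior: a valid discount reduces the price correctly.
def test_discount_reduces_price():
    # Arrange: realistic, fixed test data so the result is deterministic.
    price, percent = 49.99, 20
    # Act
    result = apply_discount(price, percent)
    # Assert: a single, clear expectation.
    assert result == 39.99

# A separate test for a separate behavior: invalid input is rejected.
def test_invalid_percent_is_rejected():
    try:
        apply_discount(10.0, 150)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

Splitting the happy path and the error case into separate tests means that when one fails, the failure immediately tells you which behavior broke.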
Don’t Forget Non-Functional Testing
Functional testing is important because it validates that the application works as expected, but non-functional aspects shouldn’t be neglected either. Depending on the nature of the project and the application, consider performing:
- security testing – especially if the application deals with sensitive data;
- accessibility testing – in the US and the EU it has become the norm (and often a legal requirement) for applications to meet standards, such as WCAG, for use by people with various disabilities;
- usability testing – users should be able to use the application easily;
- performance testing – for applications that will require multiple concurrent users;
- localization testing – if the application will be used by users from different geographical locations.
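To make the performance item a little more concrete, here is a minimal Python sketch of a load test that simulates concurrent users. The `fake_request` stub stands in for real calls to your application, and the numbers are purely illustrative, not from any particular load-testing tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

# Stand-in for a real request to the application under test (hypothetical).
def fake_request() -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server-side work
    return time.perf_counter() - start

def run_load_test(concurrent_users: int, requests_per_user: int) -> dict:
    """Fire requests from simulated concurrent users and summarize latency."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(total)))
    return {
        "requests": total,
        "p95_seconds": quantiles(latencies, n=20)[18],  # 95th percentile
    }
```

In a real project you would replace the stub with actual HTTP calls (or use a dedicated tool), but the shape is the same: drive concurrency, collect latencies, and report percentiles rather than averages.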
Use Real Devices
Web and mobile applications are used across a variety of platforms, browsers, and operating systems, so it’s important to test that the system under test (SUT) works well on all of them. While it may be difficult and time-consuming to test ALL available platforms and combinations, you can choose the most common ones and use real devices in your tests. This way, you exercise the application the same way real users do and can identify potential problems quickly. For the remaining combinations, you can automate the tests and perform cross-browser or cross-platform testing on a grid, using emulators or simulators.
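One way to keep this split explicit is to enumerate the combinations up front. A small Python sketch, where the platform and browser lists are illustrative examples rather than a recommended set:

```python
from itertools import product

# Illustrative pools; pick your own based on your users' analytics.
REAL_DEVICE_COMBOS = [("iOS", "Safari"), ("Android", "Chrome")]  # most common
GRID_PLATFORMS = ["Windows", "macOS", "Linux"]
GRID_BROWSERS = ["Chrome", "Firefox", "Edge"]

def build_test_matrix() -> list:
    """Real devices cover the most common combos; a grid covers the rest."""
    matrix = [{"platform": p, "browser": b, "target": "real device"}
              for p, b in REAL_DEVICE_COMBOS]
    # Everything else runs automated on a grid with emulators/simulators.
    for p, b in product(GRID_PLATFORMS, GRID_BROWSERS):
        matrix.append({"platform": p, "browser": b, "target": "grid/emulator"})
    return matrix
```

A matrix like this can then feed a parametrized automated suite, so every combination is tested deliberately instead of ad hoc.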
Evaluate Test Results
For both manual and automated tests, having tests but not interpreting their results brings little value. After each test run, look into the test results and gather information from them. Some questions to ask yourself are:
- Which tests used to pass but now fail?
- Which functionality has the most failures?
- Should you improve automation coverage on the functionalities that fail so the defects are caught earlier?
- Are the tests up to date or do they need maintenance?
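The first two questions can be partly answered mechanically. A hedged Python sketch that compares two test runs, assuming a simple test-name-to-status mapping and a hypothetical `feature::test` naming convention for grouping:

```python
from collections import Counter

def analyze_runs(previous: dict, current: dict) -> dict:
    """Compare two test runs (test name -> 'pass'/'fail') and summarize.

    The input and output formats are illustrative, not from any real tool.
    """
    # Regressions: tests that passed in the previous run but fail now.
    regressions = sorted(
        name for name, status in current.items()
        if status == "fail" and previous.get(name) == "pass"
    )
    # Group current failures by feature, using a "feature::test" convention.
    failures_by_feature = Counter(
        name.split("::")[0] for name, status in current.items()
        if status == "fail"
    )
    return {"regressions": regressions,
            "failures_by_feature": dict(failures_by_feature)}
```

Running a summary like this after every build turns raw pass/fail output into the information you actually act on: what broke, and where failures cluster.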
Track the Right Metrics
One of the most valuable practices is identifying the right metrics for your testing process. Don’t track something just because somebody else is tracking it; make sure the metrics are relevant to you and can be measured accurately. Then set realistic objectives for them.
Some relevant metrics in software testing are test coverage, test case automation percentage, test case passing rate over time, or metrics related to defects. All of these should serve the goal of improving the quality of the application over time. Based on the information gathered, the testing process can be improved. For example, if the coverage is found not to be high enough, the team can focus on writing more tests. Or if many tests fail, the team should work on fixing the defects causing the failures. This will improve confidence in the quality of the software and lead to better releases for the end user.
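As an example of a metric that is easy to measure accurately, here is a small Python sketch computing the pass rate per run over time. The data format (a list of passed/total pairs) is illustrative:

```python
def pass_rate(passed: int, total: int) -> float:
    """Pass rate for a single run, as a percentage (0.0 for empty runs)."""
    return round(100 * passed / total, 1) if total else 0.0

def pass_rate_trend(runs: list) -> list:
    """Pass rate per run over time; each run is a (passed, total) pair."""
    return [pass_rate(passed, total) for passed, total in runs]
```

Plotting this trend over successive builds shows at a glance whether quality is improving, and a sudden drop is a prompt to investigate before release rather than after.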
Final Thoughts
A good QA team tries to improve all the time. To do that, it’s a good idea to follow some of the testing industry’s best practices and apply them in your daily work. The list of practices in this article is not exhaustive, and, like everything else in software, it is context-dependent. But it can serve as a starting point, and you can add your own practices to the list of things you do to improve your work.