Ten releases a day. Ten. Releases. A day. This is the reality of the software industry, and it has been for a while now. The core requirement for any development team that wants to stay competitive is the speed of rolling out new features. Yet ten years ago, releases happened once a month, or even less often.
The release cycle has accelerated hundreds of times over thanks to dozens of new tools and approaches in development and operations. About 80% of IT companies use, with varying success, DevOps with clouds, CI/CD systems, containers, and ever-present monitoring, plus Agile, which is designed for maximum focus and speed. The focus of testing inevitably shifts from manual work to automation: it is simply impossible to test ten releases a day by hand.
For most companies, the transition to automation means hiring a test automation engineer, choosing a language and a framework, and then covering the product with automated tests. However, all that is just the tip of the iceberg. To make automated testing actually work, you will have to solve problems of a different order:
- How can we run tests frequently, quickly, and efficiently? How can we make testing understandable and useful to the whole team? How can we make sure that the team believes in the tests and that run results are actually used rather than treated as a formality? We've already written about this earlier.
- How can we automate tests so that manual testers and managers trust them? Automation is usually built on top of a manual testing team that is used to working in its TMS (Test Management System). How do we bring the existing tools and automation together? How much effort and time will integration or migration require?
- What do we do when processes stall? For instance, tests are written but never run; release managers roll code out into production despite "red" tests; testers are the only ones who look at the testing metrics. Solving these problems takes experience and expertise, and ignoring them gets automation branded as "inefficient" and turns it into an isolated exercise.
That is why Qameta Software designed Allure TestOps, to make test automation simple, transparent, and integrated throughout the development life cycle.
The mission of Allure TestOps is to provide a complete set of tools to help engineers develop tests without being distracted by processes, infrastructure, and integrations. To meet this challenge, Allure TestOps focuses on three approaches, each supported by a set of tools:
- Out-of-the-box test automation.
- Compatibility with TMS.
- Community and Expertise.
Let's look at each of the points in more detail.
Out-of-the-box test automation
Most often, difficulties arise once the first batch (say, 500 automated tests) has already been written. *The first thing to do after introducing automation is to make sure the team trusts the tests.* Automated testing and manual testing are organized very differently. If you simply take a framework and write several hundred tests without architecture or documentation, neither developers nor fellow manual testers will touch them, because to people "from the outside" it is completely unclear what exactly the automated tests cover and how they work. Allure TestOps provides a set of tools for building clear and transparent automation from scratch: from writing tests to analytics across dozens of runs.
Write tests correctly
Most likely, you will start by automating the existing manual regression and validation tests. To do this, you need to translate the manual test scenarios into code almost verbatim and then run them on the CI side. For this, Allure TestOps offers dozens of ready-made native integrations with frameworks in eleven programming languages. When an automation engineer converts a manual test into code using the Allure plugin, the entire structure is preserved: importing a manual test from Allure keeps all the steps, test data, tags, and any other important meta-information entered by the manual tester. If the test is later updated, the changes to the automated test's script and configuration are shown in Allure TestOps. This approach connects the manual tester and the automation engineer. The manual tester acts as a customer: he or she writes test cases full of useful information, the automation engineer packs that information into automated tests, and the result reads almost the same as the manual test.
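As a rough sketch of what such a test can look like in code, here is a manual "sign in" scenario expressed with the allure-pytest integration. The page object, credentials, and step names below are made up for illustration and are not part of any real project:

```python
# A minimal sketch of a manual "sign in" scenario converted to code with allure-pytest.
# The LoginPage stub stands in for real UI/API calls and exists only for this example.
import allure


class LoginPage:
    """Stand-in for a real page object; assumed for the example only."""
    def sign_in(self, email: str, password: str) -> None:
        self.title = "Dashboard" if password == "correct-horse" else "Login"

    def current_title(self) -> str:
        return self.title


@allure.title("User can sign in with a valid password")  # the name the manual test had
@allure.feature("Authentication")                        # grouping carried over from the TMS
@allure.tag("regression", "smoke")                       # tags entered by the manual tester
def test_successful_login():
    page = LoginPage()

    with allure.step("Open the login page"):
        pass  # a real test would launch the browser or API client here

    with allure.step("Submit valid credentials"):
        page.sign_in("user@example.com", "correct-horse")

    with allure.step("Check that the dashboard is shown"):
        assert page.current_title() == "Dashboard"
```

The steps, title, feature, and tags mirror the fields of the original manual test case, which is what keeps the automated version readable for the manual tester who wrote it.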
Properly launch and execute
The second thing you will face is actually running the tests. Early on, when there are only a few automated tests, a standard CI system is enough. As the test base grows, however, this setup becomes harder and harder for the testing team, for several reasons:
- Most modern software is built as microservices. Full runs of the entire suite are needed far less often than targeted selections for a feature or a specific branch in the repository, and such selections are hard to build on a "bare" CI system.
- CI systems themselves were created for developers, and many testers find working in them unfamiliar.
- Just running the tests is often not enough. When tests crash or turn out to be unstable and flaky, you need to rerun them while keeping their settings and history. Typical CI systems don't do this out of the box, and developing, configuring, and maintaining the scripts for it takes a lot of time and effort.
Native integrations between Allure TestOps and any CI system make it possible not only to run the whole suite, but also to create, run, and rerun small selections. This is often needed on projects with a large number of tests: developers, managers, or analysts can run small groups of tests during the day, e.g. those covering one specific feature, which saves a lot of time in managing test automation. Automated tests periodically turn out to be unstable, and the ability to quickly rerun a specific subset is often exactly what is needed.
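To make the idea concrete outside of any TestOps UI, here is how the same kind of selection and rerun can be approximated with plain pytest. The `checkout` marker is a hypothetical name, and the sketch assumes allure-pytest is installed:

```python
# Sketch: run a small selection of tests, then rerun only the failures, with plain pytest.
# The "checkout" marker is an assumption; any marker or keyword works the same way.
import pytest

# Run only the tests marked as covering the checkout feature,
# writing Allure result files for a later report.
pytest.main(["-m", "checkout", "--alluredir", "allure-results"])

# Rerun only the tests that failed in the previous run, keeping the same selection.
pytest.main(["-m", "checkout", "--last-failed", "--alluredir", "allure-results"])
```

Allure TestOps exposes the same kind of selection and rerun through its UI and CI integrations, so the history and settings of each run are preserved without hand-written scripts.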
Correctly sort out the results
People are smart; automated tests are not. If something changes in the code or in the product, tests do not reason about it, they simply fail, and that creates noisy data. In practice, roughly 5 out of 100 automated tests fail regularly. Say it takes five minutes to review each failure: that is 25 minutes per run to analyze them all, which does not sound like much. But if those 100 tests are run 10 times a day by different people, each run costs someone those same 25 minutes, and about 4 hours of QA team time per day go into the same 5 failures.

Repetitive failures are one of the main things that erode trust in automated tests, because eventually people simply stop reviewing them carefully. In Allure, this problem is solved with defects: once a defect is created, any test that fails with the same error is automatically linked to it, so the failure does not have to be reviewed again. Developers and QA spend a minimum of time triaging automated tests, ignoring already known problems and paying attention only to failures caused by changes in the codebase or by new bugs.
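The idea behind this matching can be illustrated with a few lines of Python. This is only a sketch of grouping failures by a known error pattern, not the actual Allure TestOps implementation; the defect names, rules, and failure messages are invented:

```python
# Sketch of defect matching: group failures by a known error pattern
# so that already-explained failures don't need to be re-reviewed.
# The defect IDs, regex rules, and failure messages are made up for illustration.
import re

known_defects = {
    "PROJ-101: payment gateway timeout": re.compile(r"TimeoutError: .*payments\.internal"),
    "PROJ-207: stale selector on login page": re.compile(r"NoSuchElementException: #login-btn"),
}

failures = [
    ("test_checkout_card", "TimeoutError: call to payments.internal timed out after 30s"),
    ("test_login_happy_path", "NoSuchElementException: #login-btn not found"),
    ("test_profile_update", "AssertionError: expected status 200, got 500"),
]

for test_name, message in failures:
    matched = next((d for d, rx in known_defects.items() if rx.search(message)), None)
    if matched:
        print(f"{test_name}: known defect -> {matched}")    # skip manual review
    else:
        print(f"{test_name}: new failure -> needs triage")  # only this one gets attention
```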
Correctly analyze
Testing doesn't exist for its own sake; its purpose is to find problems and errors as early as possible. If the automation engineer is the only one looking at the results of the tens of thousands of test runs they trigger, those tests do not make much sense. To make testing useful to everyone, you need to know how to put reports together, and a single report most likely won't be enough. Let's see which reports can be useful:
- Export of automated test run reports for fellow testers. You can insist all you like that manual and automated testing are two different worlds, but in fact all testers work toward the same goal. Make reports as detailed as possible, so that your colleagues understand them and are comfortable with them.
- Export results (bugs and defects) to a tracker for fellow developers. You should aim for a state where developers see your tests, accept their results, and understand what the reports are saying.
- Sometimes colleagues or managers ask for specific data about the progress of testing or development, e.g. feature coverage, suite run time, or the number of failures over a certain period. To answer these questions, you need a tool that can quickly assemble custom reports from the available data.
For this, Allure TestOps stores and labels all tests so that you can shape the data any way you need. Analytics for automated tests covers many aspects: stability, execution speed, development speed, and which problems you run into most often with your automated tests. Any of these metrics can easily be visualized and turned into a dashboard. Such dashboards let you follow trends in the speed of developing and executing automated tests, how often they fail, the workload of team members, and so on.
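As a small illustration of the kind of aggregation such dashboards do, here is a sketch that computes a pass rate, average duration, and flakiness candidates from raw run records. The data structure is invented for the example and does not reflect the Allure TestOps data model:

```python
# Sketch: metrics a dashboard might aggregate from raw run data.
# The run records are made up and do not reflect the Allure TestOps data model.
from statistics import mean

runs = [
    {"test": "test_login", "status": "passed", "duration_s": 4.2},
    {"test": "test_login", "status": "failed", "duration_s": 4.8},
    {"test": "test_checkout", "status": "passed", "duration_s": 11.3},
    {"test": "test_checkout", "status": "passed", "duration_s": 10.9},
]

total = len(runs)
passed = sum(1 for r in runs if r["status"] == "passed")

print(f"pass rate: {passed / total:.0%}")                        # share of green results
print(f"avg duration: {mean(r['duration_s'] for r in runs):.1f}s")

# A test that has both passed and failed recently is a flakiness candidate.
tests = {r["test"] for r in runs}
flaky = [t for t in tests
         if {r["status"] for r in runs if r["test"] == t} == {"passed", "failed"}]
print(f"flaky candidates: {flaky}")
```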
TMS compatibility
Big companies implement automation on top of mature QA departments, where the team already uses tools to manage tests and work with them. Usually, these departments are built around a TMS. A TMS is a great solution for managing manual testing, but embedding any "manual" tool into DevOps pipelines runs up against speed and scaling requirements. At the same time, abruptly abandoning a system that everyone is used to and around which processes are built will cause dissatisfaction and confusion among many testers. That is why Allure TestOps works alongside a TMS: it ties automation to development pipelines while keeping the exchange of information with manual testing. Such integrations let you introduce automation without disrupting existing QA processes and rebuild them step by step.
Allure TestOps integrates with Xray Test Management and TestRail, the de facto standards in manual testing. Their rich functionality lets you build manageable and understandable processes in the QA department, but introducing automated testing with these tools requires additional development: the CI systems and every framework have to be wired to the API they provide for automated results.
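To give a sense of that "additional development", here is a sketch of reporting a single automated result to TestRail via its REST API. The instance URL, credentials, and run/case IDs are placeholders, and error handling and batching are omitted:

```python
# Sketch: push one automated test result into an existing TestRail run via its API.
# URL, credentials, and IDs are placeholders; a real integration would batch results
# and handle retries, authentication storage, and mapping of tests to case IDs.
import requests

TESTRAIL_URL = "https://example.testrail.io"   # placeholder instance
AUTH = ("qa-bot@example.com", "api-key")       # placeholder credentials


def report_result(run_id: int, case_id: int, passed: bool, comment: str) -> None:
    """Send a single test result to TestRail's add_result_for_case endpoint."""
    payload = {
        "status_id": 1 if passed else 5,       # 1 = Passed, 5 = Failed in TestRail
        "comment": comment,
    }
    resp = requests.post(
        f"{TESTRAIL_URL}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}",
        json=payload,
        auth=AUTH,
    )
    resp.raise_for_status()


# Example usage after a CI run has finished:
# report_result(run_id=42, case_id=1337, passed=False, comment="TimeoutError in checkout")
```

Multiply this by every framework and CI job on the project, and the maintenance cost of hand-rolled glue code becomes clear; this is the work the ready-made integrations are meant to remove.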
Allure TestOps is built so that the whole team works in a single pipeline. Integrations with classic TMSs were developed specifically for this purpose: manual testing not only stays in step with automation but also gets direct contact with the development and operations teams. Beyond these integrations, the Qameta Software team is always ready to help users automate the migration of all data, parameters, and settings from any TMS if needed.
Community and Expertise
Qameta Software came into being around Allure Report, an open-source project used by hundreds of thousands of engineers around the world. Over the 10 years of its existence, the project has gathered a huge community, which has allowed Qameta Software to focus on implementing ideas and solutions from many users and contributors.
This evolutionary path led the team to Allure TestOps, a universal tool that lets both innovative startups and large businesses (e.g. banks, telecom companies, IT corporations) implement effective automation and find points of growth at the intersection of QA, Ops, and development.
In addition, Qameta Software actively shares its expertise with the community:
- In the corporate blog, we share testing practices and design patterns for automators.
- We speak at international conferences and meetups around the world.
- We support online schools and educational projects.
Write tests. Allure TestOps handles the rest.
The bottom line: the main job of Allure TestOps is to let the team focus on product quality and on developing automated tests, while handling all the process and infrastructure complexity out of the box.