TestOps tooling tips

Good news, everyone! Today we are going to discuss tooling tips for testing in the DevOps age. As the joke goes, the “T” in “DevOps” stands for “Testing”, and that is how the term “TestOps” appeared. So, what is it? Wikipedia offers a definition:

TestOps is often considered a subset of DevOps, focusing on accelerating the practice of software testing within agile development methodologies. It includes the operations of test planning, managing test data, controlling changes to tests, organizing tests, managing test status, and gaining insights from testing activities to inform status and process improvements. 

So, it’s all about process and standards, both of which require suitable tooling. Let’s take a look at how testing should work in DevOps-ready pipelines.

What testers usually do, and what is testing automation?

Testers look for ways to assure the quality of the product. How? The obvious answer is “testers write and run tests”. Obvious, but not the full picture, of course.

As a starting point, run through this checklist:

  • Are your tests trusted? 
  • Are your runs stable and persistent?
  • Do your test suites run fast?
  • Do people easily understand the results of your test runs?

If you are not sure every answer is “yes”, you are in the right place. So what is testing automation, and where does TestOps fit in?

  • We write tests. No tests, no testing; it’s that simple.
  • We run tests. The way you deploy and run your tests matters: the more reliability you want, the more diverse your runs need to be.
  • We analyze results. Working through the results of numerous runs and large test suites may take a lot of time, so look for ways to optimize it.
  • We provide reports. If testing results never leave the QA team, you’re doing it wrong. If nobody gets the point of your reports, you’re doing it wrong.

In this post, you’ll find some recipes to boost your testing productivity with Allure Report, Allure TestOps, and some other useful tools.

Write tests

Choose convenient tools. The convenient tool is not always the “right” one, and here is the difference:

  • The convenient tool solves specific tasks. You don’t want a bulky Swiss Army knife when you need one specific feature.
  • The convenient tool is extendable. Your service is specific, no exceptions, which means the tool will need some adjustments over time: corner cases, exceptions, etc.
  • The convenient tool is easy to explain. It lets you scale the team easily. Be sure to introduce your colleagues to new tools and approaches to free some of your time from routine tasks. The important thing is to stay clear. Avoid introductions like “Okay, it’s simple: update helm here, then deploy to canary and monitor the logs in Elastic”; that way you will likely come across as a know-it-all who does not actually help and keeps the routine to themselves.
  • The convenient tool is easy to unplug. Over time, newer and more convenient tools will show up on the horizon. Always keep in mind the cost of moving off a tool, so you are not left with a huge pile of legacy dependencies. Allure Report is a good example: it works through a set of annotations in your code, and to abandon it you just delete one dependency, which breaks nothing; see the sketch below.
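
For instance, here is roughly what that single dependency looks like in a Gradle build (a sketch; the artifact version is just an example, check the latest release):

    dependencies {
        // The only Allure-specific line in the build: the JUnit 5 adapter.
        // Remove it (and the Allure annotations that use it) and the rest
        // of the build and the test suite keep working as before.
        testImplementation 'io.qameta.allure:allure-junit5:2.25.0'
    }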

Don’t bring complexity. If your project is new, make sure the amount of new technology stays under control. We have seen projects with 200 tests that mixed web, API, screenshot, and A/B tests, all kept in a shiny brand-new database, and while we were looking at it, some Kotlin tests popped up as well. Each of these tools is good on its own, but brought together all at once they become an opaque, unmaintainable mess for your colleagues and newcomers.

Always do code review. Senior engineers can show the right patterns and practices and sort errors out on real code samples, while junior engineers can look through pull requests to find good code or learn from others’ mistakes (in closed pull requests).

Run tests

The first piece of advice is to get used to Docker. The entry threshold is not high, and the fruits come quickly: the ability to spin up specific environments for your test suites without calling your Ops colleagues saves a lot of time and lets you compare different non-production tools on your own. Imagine you need a special Jenkins plugin, or you want to compare Selenoid with Selenium Grid: with Docker, you can do it yourself, as in the sketch below.
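
A rough sketch of that Selenoid vs. Selenium Grid comparison (image tags and the browsers.json config path are assumptions, adjust to taste):

    # Selenoid: needs a browsers.json config, assumed here in ./config
    docker run -d --name selenoid \
        -p 4444:4444 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$(pwd)/config:/etc/selenoid:ro" \
        aerokube/selenoid:latest

    # Selenium Grid hub on another host port, so both run side by side
    docker run -d --name selenium-hub -p 4445:4444 selenium/hub:latest

Point your test suite at :4444 and :4445 in turn, and you have your comparison without a single ticket to the Ops team.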

Run tests on pull requests. New code may break the project, and that is okay for new projects: the structure and architecture change as new features appear and old ones get deeply refactored. On the other hand, a production failure is always painful. So, how do we test new code and keep production alive?

  1. Let the developer create a new branch. 
  2. Create a pull request and run the tests on that branch.
  3. Check the results, then fix the tests and the errors they reveal.
  4. Merge the branch into the stable (sprint) branch. That means the code gets to master only after the stable build is all-green. Run the tests on each merge, of course. Just do it. A minimal CI sketch of this flow follows below.
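
For illustration, assuming GitHub Actions, such a workflow might look like this (job layout, environment names, and the -Penv project property are assumptions, not a prescription); the schedule trigger also keeps the infrastructure warm, which the next section touches on:

    name: tests
    on:
      pull_request:            # run the suite on every PR
      schedule:
        - cron: '0 3 * * *'    # plus a nightly run so the infrastructure never goes cold

    jobs:
      test:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            environment: [staging, integration]   # hypothetical environment names
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-java@v4
            with:
              distribution: temurin
              java-version: '17'
          - name: Run tests
            run: ./gradlew test -Penv=${{ matrix.environment }}   # -Penv is a made-up project property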

Run the tests as frequently as possible. You’ll periodically catch flaky tests that fail from time to time due to instability in the tests themselves, unstable test environments, time-and-date issues, or infrastructure problems.

To minimize the influence of these factors, run your test suites on warmed-up infrastructure. Don’t leave your testing server idle: run tests on different environments and branches, at different times, and on varied infrastructure.

So, with a large history of runs and their results at hand, you’ll be able to figure out more of the issues that turn tests red.

Analyze results

Don’t be afraid to rerun tests. Tests may fail for a million reasons, from issues in freshly deployed code to a cloud provider infrastructure outage. Tracing and fixing all these factors is not always possible, so if your tests are flaky, run them again: run, Forrest, run.

For Java code, you may choose Gradle with the official Test Retry Gradle plugin, or Maven with the Surefire rerunFailingTestsCount option; see the sketch below.
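
A sketch of the Gradle option (the plugin version is an example; check the current release):

    plugins {
        id 'org.gradle.test-retry' version '1.5.8'
    }

    test {
        retry {
            maxRetries = 3                  // rerun a failed test up to 3 times
            maxFailures = 10                // give up if the whole run looks broken
            failOnPassedAfterRetry = false  // a pass on retry counts as a pass
        }
    }

For Maven, the equivalent is a single Surefire flag:

    mvn test -Dsurefire.rerunFailingTestsCount=2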

But if you don’t want to keep and maintain a zoo of different tools for automated and persistent runs, Allure TestOps does conditional and automated reruns for multiple languages (JVM, Python, PHP, JavaScript) and frameworks (Cucumber, Mocha, PyTest, JUnit, etc.) on different CI systems (Jenkins, GitLab, Bamboo).

As soon as you start running your tests a lot, another challenge pops up: testers can end up spending most of their time sorting failed tests. Imagine you have just 100 tests with a 5% fail rate. If you run them 10 times a day, that makes 50 red tests a day to sort; at 5 minutes each, that is more than 4 hours of work.

The first step here is to start grouping tests by status. Allure Report has four of them: passed, failed, broken, and skipped, plus a categories.json file that groups failing tests by error-message patterns, as sketched below. In the end, you get a clean list of 4-7 issues that cause all those dozens of tests to fail. Allure TestOps offers a UI for working with Defects that is fully automatic and scalable; if handling red tests is a pain for you, check out our previous post on defects.
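
For example, a categories.json for Allure Report might group failures like this (the category names and regexes are illustrative):

    [
      {
        "name": "Infrastructure problems",
        "matchedStatuses": ["broken"],
        "messageRegex": ".*(timeout|connection refused).*"
      },
      {
        "name": "Outdated UI locators",
        "matchedStatuses": ["failed"],
        "messageRegex": ".*NoSuchElementException.*"
      }
    ]

Drop the file into your allure-results directory, and the report gets a Categories tab where dozens of red tests collapse into a handful of named issues.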

Provide reports

Remember, you are not the one who needs the testing reports. Do you get all the necessary information from error messages and stack traces? Well, most of your colleagues don’t. So, find a way to make the testing results actionable for colleagues and managers.

  1. Export automated test run results to the TMS the manual testers in your team use. Some say that automated and manual testing are two different universes, but both work toward one result: the quality of software in production. Sometimes it is useful to confirm an automated test with manual expertise or to double-check a run result, and good reporting gives you that freedom.
  2. Export bugs and defects to the developers’ issue tracker. You need your fellow developers to see and accept the testing results you provide. Don’t expect any enthusiasm from the receiving side; they are overwhelmed with complex tooling, so deliver the insights right into the tracker they already use.
  3. Be ready to build custom dashboards and reports fast. Managers usually don’t drill down into technical details, but they may need a specific slice of metrics. This becomes possible when your tests are all tagged and marked, so you just need to build a view for a filter; see the sketch below.
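
As a sketch of what “tagged and marked” can look like with Allure’s Java annotations (the test itself is hypothetical):

    import io.qameta.allure.Epic;
    import io.qameta.allure.Feature;
    import io.qameta.allure.Severity;
    import io.qameta.allure.SeverityLevel;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Every test carries its feature, severity, and suite tags, so a
    // manager-facing dashboard becomes just a filter over these labels.
    @Epic("Checkout")
    @Feature("Payments")
    class PaymentTest {

        @Test
        @Tag("smoke")
        @Severity(SeverityLevel.CRITICAL)
        void cardPaymentSucceeds() {
            // ...actual test steps go here...
        }
    }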

Keep your testing documentation up to date and share detailed, actionable reports. Allure TestOps has all the documentation, exporting, and reporting functionality out of the box, so each of these steps takes a couple of clicks.

In conclusion

All these tools and approaches might seem obvious, but if you stick with them consistently, you’ll see the efficiency of your testing efforts grow. Let’s run through the steps that set a TestOps-style pattern in your testing:

  • Write tests. Choose new tools wisely. Run from complexity, and remember that the overhead of maintaining your tooling should stay far below the value of your main job.
  • Run tests. In the morning and in the afternoon, before a PR and on a night release: the more frequently, the better. Build and automate the infrastructure for persistent launches in various environments.
  • Use test results. Many launches mean many failed tests, so automate the sorting of red and flaky tests.
  • Share testing results. Make all the results and insights you get from testing available and actionable for the whole team.

If you now look back at the Wikipedia definition of TestOps from the beginning of this article, it will read much clearer, because you already know what to do. And Allure TestOps is ready to help you on this long journey to software quality.

Learn more about Allure tools

Learn more about Allure Framework, our open-source testing reporting tool, or Allure TestOps, the all-in-one quality management platform.

Subscribe to our Twitter feed, Gitter chat, or Telegram community; they are wholesome places to get help and stay up to date with news.
