How to Decide if You Should Automate a Test Case

This is a guest post by Nishi Grover Garg.

Test automation is imperative for the fast-paced agile projects of today. Testers need to continuously plan, design, and execute automated tests to ensure the quality of the software. But the most important task is deciding what to automate first. Here, we have compiled a list of questions to help you prioritize what to automate next and guide your test automation strategy.

Is the test going to be repeated?

Automating a test case will be worthwhile if it is going to be repeated multiple times during test execution. Make sure to consider the frequency with which you plan to execute that test case or the test suite.

Is it a high-priority feature?

Some features or areas are more prone to failure than others, and those are the areas where automation will yield a bigger return on your investment. Automating high-priority tests will ensure consistency and reduce the chances of human error.

Do you need to run the test with multiple datasets or paths? 

A data-driven approach to test automation is especially useful when tests need to be executed with varying datasets. Automating the steps once and repeating them with different data takes away the drudgery and minimizes the chance of misses or errors.
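To make the idea concrete, here is a minimal, framework-free sketch of a data-driven test. In a real project, a feature like pytest's `@pytest.mark.parametrize` serves the same purpose; `is_valid_login` and the datasets below are hypothetical stand-ins.

```python
# A minimal data-driven sketch: one test routine, many datasets.
# is_valid_login is a hypothetical function under test.

def is_valid_login(username: str, password: str) -> bool:
    return bool(username) and len(password) >= 8

# Each row is one dataset: (username, password, expected result).
LOGIN_DATASETS = [
    ("alice", "s3cretpass", True),   # happy path
    ("", "s3cretpass", False),       # missing username
    ("bob", "short", False),         # password too short
]

def run_login_tests():
    """Run the same steps against every dataset; collect failures."""
    failures = []
    for username, password, expected in LOGIN_DATASETS:
        if is_valid_login(username, password) != expected:
            failures.append((username, password))
    return failures

# An empty list means every dataset passed.
print(run_login_tests())  # -> []
```

Adding a new scenario is then a one-line change to the dataset table, not a new test.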

Is it a regression or smoke test?

Regression tests and smoke tests are the ones you will end up executing most frequently. Generally, these suites cover the breadth of the entire product in some capacity, so automating them gives you a quicker way of assessing the quality of the whole software. Automating regression suites also lets you integrate them with the build process in your DevOps pipeline, making quality a part of every regular build!
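One common way to wire smoke tests into a build is to tag them so the pipeline can select just that subset on every commit. The sketch below shows the idea with a hypothetical decorator; frameworks like pytest provide markers (`pytest -m smoke`) for the same purpose.

```python
# Sketch of tagging tests so a CI pipeline can run only the smoke
# subset on every build. The tests and the `smoke` tag are illustrative.

def smoke(func):
    """Decorator that tags a test as part of the smoke suite."""
    func.is_smoke = True
    return func

@smoke
def test_homepage_loads():
    assert True  # placeholder for a quick, broad health check

def test_full_checkout_flow():
    assert True  # placeholder for a slower, regression-only test

def collect(tests, smoke_only=False):
    """Return the tests to run; CI passes smoke_only=True per build."""
    return [t for t in tests if not smoke_only or getattr(t, "is_smoke", False)]

all_tests = [test_homepage_loads, test_full_checkout_flow]
print([t.__name__ for t in collect(all_tests, smoke_only=True)])
# -> ['test_homepage_loads']
```

The full regression run can then be scheduled nightly while the smoke subset gates every build.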

Does this automation lie within the feasibility of your chosen test automation tool?

Doing a feasibility analysis of your test automation also requires you to check whether the specific test can even be automated with your current set of tools. For example, trying to automate inputs to an SAP interface for one specific test might not be the best use of your time if your web automation tool doesn't support it. Acquiring another tool or trying alternatives might not be a good use of time either. In such a case, you might very well perform that one test manually!

Is the area of your app that this is testing prone to change?

This is an important question to ask yourself, because if the test you are considering automating covers an area that is prone to change, it might not be worthwhile to spend your effort on it just yet. Automated tests that rely on the user interface and its elements are the most likely to break when the UI changes.

Take a deep look at the upcoming changes to the functionality and its integrations with related features. If you see big changes ahead, consider delaying the automation of those tests for now.
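When you do automate UI tests, you can soften the change risk by keeping locators behind a page object, so a changed element means fixing one class instead of every test. A framework-free sketch, where `FakeDriver`, `LoginPage`, and the locator string are all hypothetical stand-ins for a real browser driver such as Selenium WebDriver:

```python
# Sketch of the page-object pattern: tests talk to LoginPage, never to
# raw locators, so a UI change is corrected in exactly one place.

class FakeDriver:
    """Minimal stand-in that records which locators were used."""
    def __init__(self):
        self.clicked = []

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # Every locator for this page lives here; if the UI changes,
    # update this one constant and all tests keep working.
    SUBMIT_BUTTON = "css=#login-submit"

    def __init__(self, driver):
        self.driver = driver

    def submit(self):
        self.driver.click(self.SUBMIT_BUTTON)

driver = FakeDriver()
LoginPage(driver).submit()
print(driver.clicked)  # -> ['css=#login-submit']
```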

Is it a random negative test?

We know we need to perform negative tests to check the robustness of a feature. However, random or exploratory negative tests might not be the best use of automation: they are quick to try manually but expensive to script and maintain, and their ad hoc nature means they rarely need to be repeated exactly.

Can these tests be executed in parallel, or only in sequential order?

If you design your automation well, you can save an immense amount of time by running tests in parallel, and leveraging this is paramount to getting the best out of your automation efforts. If your tests can only be executed in a fixed sequential order, with each test depending on the state left by the previous one, they might not be the best candidates for automation. You could still automate them, but you would not be making the most of your efforts, so perhaps revisit your automation strategy first.
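As a sketch of the payoff, independent tests can run concurrently with nothing more than a standard-library thread pool. The test bodies here are illustrative stand-ins (real suites would typically use a parallel runner such as pytest-xdist), and the sleep simulates test work:

```python
# Independent tests can run concurrently; order-dependent ones cannot.
import time
from concurrent.futures import ThreadPoolExecutor

def make_test(name, seconds=0.1):
    """Build a stand-in test that 'works' for a fixed duration."""
    def test():
        time.sleep(seconds)  # simulated test work
        return name
    return test

# Four independent tests of ~0.1s each.
independent_tests = [make_test(f"test_{i}") for i in range(4)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: t(), independent_tests))
parallel_time = time.perf_counter() - start

# In parallel these finish in roughly 0.1s; run one after another
# they would take about 0.4s.
print(sorted(results))  # -> ['test_0', 'test_1', 'test_2', 'test_3']
```

The speedup only materializes because no test reads state written by another; that independence is the design property worth checking before you automate.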

Are you doing it only for the reports?

Test automation tools provide useful insights into the quality of the software that you can showcase with insightful reports. But are these reports the only reason you are looking at automation? Just looking at the red or green status of the test results might not be the best way to assess software quality. You will need to spend time analyzing which tests failed, why they failed, and what needs to be corrected. Tests, once created, need maintenance and continuous monitoring to keep them up to date. All of that must be kept in mind, and the effort accounted for. There is more to test automation than just fancy reports!

Using the questions above, analyze the state of your test case, the intent behind automating it, its feasibility, and the value you might get out of it. We hope this helps you decide which tests you should, or should not, pick for automation!

Nishi is a corporate trainer, an agile enthusiast, and a tester at heart! With 13+ years of industry experience, she currently works with Trifacta as a Community Enablement Manager. She is passionate about training, organizing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.
