A few weeks ago, I gave a talk on this topic at the Ministry of Testing Kuala Lumpur meetup. After the talk, I wanted to extend the reach of this topic to fellow testers who might need it. So instead of simply sharing the slides, I'm writing this article to provide better context for some of the key slides. I hope it benefits you.

One of the many challenges QA engineers face when automating their testing is selecting test cases from suites that can range from hundreds to even thousands of test cases. This is the problem I faced as well when embarking on the journey of building my first automation framework. Even after doing plenty of research online, I had trouble filtering those test cases because the process would take a tremendous amount of time for a thousand test cases. So I decided to come up with my own recipe and tested it to see if it works.

But before diving into that, let's take a look at a famous question that keeps buzzing around the test automation field right now.

Is 100% test automation possible? My take on that is never! This is because no machine can automate a tester's instinct. My favourite analogy for this comes from a well-known author:

Autopilot in an airplane cannot replace the pilot. It's there only as an aiding tool for the pilot. The same goes for automation and testers.

Pradeep Soundarajan

Now that we are left with areas that can be automated, should we automate all of it?

Of course, we could automate everything that can be automated, but there are factors which must be considered. Money, time, and deadlines are common constraints in any work environment, and they usually make that hard to achieve. So what do we do? The smartest thing to do is to automate the "right" test cases.

But how do we pick the right test cases?


This recipe was created by the well-known Angie Jones. All you need to do is give points to each test case based on criteria such as gut feeling, risk, value, history, and a few more. Once you have the score, you use a range-based decision model, as below, to decide whether to automate the test case or not.
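To make the scoring idea concrete, here is a minimal sketch of a points-based model in the spirit of this recipe. The criterion names, point values, and decision ranges below are illustrative assumptions, not Angie Jones's actual scales:

```python
# Hypothetical points-based scoring sketch: score each test case on
# several criteria, sum the points, and map the total to a decision.
# Criteria names, scales, and range thresholds are made up for illustration.

def score_test_case(points: dict) -> int:
    """Sum the points given for each criterion (e.g. risk, value, history)."""
    return sum(points.values())

def decide(total: int) -> str:
    """Map a total score to a decision using example ranges."""
    if total >= 16:
        return "automate now"
    elif total >= 10:
        return "automate later"
    return "do not automate"

case = {"risk": 5, "value": 4, "history": 4, "gut": 3}
total = score_test_case(case)
print(total, "->", decide(total))  # 16 -> automate now
```

The point of the range model is that the decision becomes mechanical once the scores are in, which makes it easy to apply consistently across a team.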


This modularisation model is similar to Angie Jones's scoring, but instead of a score you give a "YES" or "NO" for each criterion. Another difference is that this model considers some extra factors, such as the complexity of the test case and its repeatability.
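A YES/NO model reduces to a simple boolean check. The sketch below assumes the strictest interpretation, where a test case is automated only if every answer is YES; the criterion names are my own illustrative examples:

```python
# Illustrative YES/NO checklist: each criterion gets a boolean answer,
# and the test case is automated only if every answer is YES.
# Criterion names are assumptions for the sake of the example.

def should_automate(answers: dict) -> bool:
    """Automate only when all criteria (including complexity and
    repeatability) are answered YES."""
    return all(answers.values())

checklist = {
    "repeatable": True,
    "manageable_complexity": True,
    "high_value": True,
    "stable_feature": True,
}
print(should_automate(checklist))  # True; a single False flips the decision
```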

These two recipes are incredibly effective because they provide measurable metrics for clear decision making. However, giving points to a thousand test cases seemed like an exhausting task. After mustering some confidence to experiment, I came up with my very own experimental recipe to pick the right test cases to automate.

As funny as it sounds, this is the best mnemonic I managed to come up with.

As you can see above, each word represents a process. I came up with this mnemonic to remember them easily. Now let's look at what each process means.

Taking a step back is crucial because when you are wearing the "automation hat", chances are high that you will be inclined to immediately select the most exciting or most repetitive test case to automate. Instead of jumping into action, visualise the whole system, and understand the interactions between its components and what they mean.

The second step is identifying the application's goal and selecting the core functionalities and use cases needed to support that goal.

Once you have identified the core functionalities and test case areas of the application, the next step is to think through and list the possible combinations of those areas.
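Listing combinations of areas is essentially a Cartesian product, which the standard library handles directly. The area names below are invented purely to show the mechanics:

```python
# Minimal sketch of enumerating possible combinations of core functional
# areas with itertools.product. The area values are made-up examples.
from itertools import product

browsers = ["chrome", "firefox"]
user_roles = ["guest", "member"]
flows = ["login", "checkout"]

combinations = list(product(browsers, user_roles, flows))
print(len(combinations))  # 2 * 2 * 2 = 8 combinations
for combo in combinations:
    print(combo)
```

This also makes it obvious why the next step, filtering, matters: even a handful of areas multiplies into far more combinations than anyone should automate wholesale.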

To avoid selecting ALL combinations, filter them based on ingredients/factors such as complexity, test duration, criticality, risk, and so on. The weightage of each factor depends on the type of application being tested.

If we are testing healthcare or finance related software, risk and criticality will carry a very high weightage. In comparison, if we are dealing with blogs or simple listing websites, the data-driven factor carries a higher weightage and risk will be relatively lower.
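One way to express this is to give each application type its own factor weights and score candidates against them. The weights, factor names, and threshold below are illustrative assumptions, not prescribed values:

```python
# Hedged sketch: weight the filtering factors differently per application
# type, then score each candidate test case or combination.
# All weights and factor names here are illustrative, not prescriptive.

WEIGHTS = {
    "healthcare": {"risk": 0.4, "criticality": 0.4, "data_driven": 0.2},
    "blog":       {"risk": 0.1, "criticality": 0.2, "data_driven": 0.7},
}

def weighted_score(factors: dict, app_type: str) -> float:
    """Combine raw factor ratings (e.g. 0-5) using the app-type weights."""
    weights = WEIGHTS[app_type]
    return sum(weights[name] * factors.get(name, 0) for name in weights)

candidate = {"risk": 5, "criticality": 4, "data_driven": 2}
print(weighted_score(candidate, "healthcare"))  # 0.4*5 + 0.4*4 + 0.2*2 = 4.0
print(weighted_score(candidate, "blog"))        # 0.1*5 + 0.2*4 + 0.7*2 = 2.7
```

The same candidate scores very differently under the two weight profiles, which is exactly the point: the "right" test cases depend on the kind of application.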

Apart from selecting test cases, we must also eliminate test cases which should not be automated. We can eliminate them based on certain criteria, such as test cases related to A/B tests, features that will be updated within a short period of time, and features that have a low usage frequency.

Low usage frequency does not mean a test case should never be automated. If you have to prioritise, you could give it the lowest priority and handle it later when you have spare resources.
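The elimination pass can be sketched as a filter plus a re-ordering: hard-exclude A/B-test and soon-to-change features, but keep low-usage cases and simply push them to the back of the queue. The test case names and fields below are hypothetical:

```python
# Illustrative elimination pass. Hard criteria (A/B test, changing soon)
# remove a candidate outright; low usage only deprioritises it.
# All case names and fields are made up for this example.

test_cases = [
    {"name": "checkout_flow",  "ab_test": False, "changing_soon": False, "usage": "high"},
    {"name": "banner_variant", "ab_test": True,  "changing_soon": False, "usage": "high"},
    {"name": "legacy_export",  "ab_test": False, "changing_soon": False, "usage": "low"},
    {"name": "new_onboarding", "ab_test": False, "changing_soon": True,  "usage": "high"},
]

def select_and_prioritise(cases):
    # Eliminate cases matching the hard exclusion criteria.
    kept = [c for c in cases if not c["ab_test"] and not c["changing_soon"]]
    # Low-usage cases are kept but sorted to the end, not eliminated.
    return sorted(kept, key=lambda c: c["usage"] == "low")

for case in select_and_prioritise(test_cases):
    print(case["name"])  # checkout_flow first, legacy_export last
```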

In conclusion, the recipe strongly revolves around understanding the system as a whole rather than diving straight into thousands of test cases.

Based on my experience, I strongly recommend building your own recipe, using other materials (including mine) only as a guide. This is very important because there is no one-size-fits-all solution. So be brave enough to experiment with your own methods and learn from others as well.

I hope to see you share your experience of how you selected the right test cases to automate. Below is the link to my full slide deck.


We would also like to recommend that you check out Feedspot, where you can find the Top 75 automation-related blogs trending in the world right now (including us). Enjoy exploring!

