How We Optimized Automated Tests for Faster Feedback

We planned to integrate automated smoke tests into the deployment pipeline. So, we started gathering scenarios from the sprint team, all of them P0 test cases. Twenty-five cases that had prerequisite data, configuration, etc. were picked for automation.

We picked an environment, distributed the cases among the team, and started writing the automated test scripts.

In 2 weeks, we had completed the scripts, and to ensure anyone could easily find the BVT suite (smoke test), we decided to keep all cases in one Excel file (our second mistake; what our first mistake was will come later).

All was good: we made this suite a part of the deployment, developers were happy, the sprint team was happy as we were able to catch critical issues and take prompt action, and we lived happily ever after… or so we thought, after seeing a consistently high pass percentage.

“We had a keyword-driven framework and wrote test cases in Excel. One Excel file meant multiple test cases corresponding to one feature; e.g., a suite named User would have cases around user creation, updating, deletion, etc. So if one case failed, the entire suite was re-run.”
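The coupling described above can be sketched as follows. This is a minimal illustration, not our actual framework: the keyword names are invented, the stub actions stand in for real Selenium steps, and CSV text stands in for an Excel sheet. The key point is that the smallest executable unit is the whole sheet, so one failure forces the whole suite to run again.

```python
import csv
import io

# Map keywords (as they would appear in an Excel column) to actions.
# In a real framework these would drive Selenium; here they are stubs.
KEYWORDS = {
    "create_user": lambda arg: f"created {arg}",
    "update_user": lambda arg: f"updated {arg}",
    "delete_user": lambda arg: f"deleted {arg}",
}

def run_suite(sheet_text):
    """Run every test case (row) in one suite; return per-case results.
    Note: the runner always walks the entire sheet, so a single
    failure means re-running all rows, not just the failed one."""
    results = {}
    for row in csv.DictReader(io.StringIO(sheet_text)):
        try:
            KEYWORDS[row["keyword"]](row["argument"])
            results[row["case"]] = "PASS"
        except Exception:
            results[row["case"]] = "FAIL"
    return results

# One "Excel" suite: several cases for the User feature.
user_suite = """case,keyword,argument
TC1,create_user,alice
TC2,update_user,alice
TC3,delete_user,alice
"""

print(run_suite(user_suite))
```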

Problems vs Solutions

But the real problems started after a year…

  1. In the Excel file, if one case failed, all cases were re-run and execution time doubled. This delayed deployment, because without a green smoke test the deployment was not certified, and stakeholders had huge trust in the BVT suite due to its past results.
  2. Datasets that were prerequisites for the actual cases started to expire.

And we had missed automating the dataset creation part (which was our first mistake).

  3. With continuous execution on the same environment, the data kept growing (yes, you heard it right: we used to run the test cases in the same production account), and the scripts started taking more and more time to execute.

Our journey towards success

This time, rather than working in silos, we decided to create a plan, brainstorm all possible scenarios, and then proceed with the revamp.

  1. Instead of keeping all cases in one file, we split the file into independent cases.
  2. We automated the prerequisites and divided them into two categories: multi-run datasets and one-time datasets (which need to be run only once per environment, e.g., venue creation).
  3. To avoid repeating the same basic steps in different cases, we navigated directly to the target page in some cases; in others, we followed the flow a real user would take, so that both aspects were covered.
  4. We created a cleanup script that ran periodically to delete the test data accumulated by repeated runs.
  5. As we were using Selenium Grid for parallel execution, we ensured all suites had roughly the same execution time, since the total time on the grid is that of the suite that takes the longest.
  6. If any case failed, only that suite was re-run, so in this scenario the execution time was 3 to 6 minutes rather than 15 minutes (the original execution time for the full smoke test) to 30 minutes.

Lessons Learned

  1. Don’t jump into solving a problem without discussing what the future will look like. Here, we didn’t think about failures, data, and datasets.
  2. Always run your cases in an environment where automation has never been run before.
  3. Choose the best and the worst environments in terms of data to find the gaps.
  4. Don’t create something and leave it; always monitor and keep observing.

About the Author:

Jyoti Mishra | Lead Software Engineer in Test

Jyoti serves as a key member of the software development team as the lead QA tester on development projects at PayU India. She supervises a five-member software QA testing team to develop and implement test practices around exploratory and automation testing. She always ensures that her project members have the necessary resources to achieve a milestone by leveraging her strong collaboration and communication skills. She is an avid reader who loves to interact with people and believes that those interactions have helped her become a better individual, both professionally and personally.


