What is Test Automation and Why Do We Do It?

This post is part of the Web UI Automation With Python for the Lazy QA software testing course.

I know this is a crash course in software test automation and we want to jump into coding, but I’m going to use this lesson to first answer two important questions:

  1. What is software test automation?
  2. Why do we use automation?

The first question is important because it explains the difference between manual and automated testing. If you’re new to test automation, understanding this difference will keep you from feeling completely lost.

The second question is just as important, because answering it gives you a sense of purpose. Your answer will serve as a reminder to yourself and your stakeholders (product, dev, QA) of why your test automation is valuable.

Test Automation has its technical challenges, but if you can’t advocate for your project then you won’t even get to the coding part.

In this lesson I’ll share how I would answer these two questions and then give a brief walkthrough of how I plan my automation initiatives. Once you can answer both questions on your own, you’ll have the background and confidence to get hands-on with the tools of the trade.

What is software test automation?

Simply put, software test automation is just testing software with the use of automated tools. Rather than investing your time performing testing steps by hand, you could use other software to run through the steps for you.

I’ll give two examples to compare how our testing looks with and without automation. The first is a UI test, while the second is a performance test. Despite being different types of tests, both leverage automation in a similar fashion.

Web UI Test Example

Let’s say you’re testing a common scenario like logging in to a website. To do this manually, you’d run through these five steps:

  1. Navigate to the login page.
  2. Type in your username.
  3. Type in your password.
  4. Click on the submit button.
  5. Check if you are logged in to the homepage successfully.

It’s pretty straightforward, right? All you need is a browser and yourself to perform the actions. But, how would you automate those browser interactions?

Remember how we defined software test automation as “testing software with the use of automated tools?” According to this definition, we’d need a tool to do the navigation, the typing, the clicking, and even the checking for us. Lucky for us, there are plenty of open-source tools that can perform these browser actions. If you Google “Web UI automation tools” you’ll get a list that includes popular options like Selenium, Cypress, and Playwright.

In later lessons we’ll get a chance to practice automating browser actions using Playwright.
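To give you an early taste, here’s a minimal sketch of what those five steps might look like in Playwright’s Python API. The URL, the field selectors, and the expected title are made-up placeholders; the real values depend on the site you’re testing.

```python
# A minimal sketch of the five login steps using Playwright's sync API.
# The URL, selectors, and expected title are hypothetical placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    page.goto("https://example.com/login")   # 1. Navigate to the login page.
    page.fill("#username", "my_user")        # 2. Type in your username.
    page.fill("#password", "my_password")    # 3. Type in your password.
    page.click("button[type=submit]")        # 4. Click on the submit button.

    # 5. Check if you are logged in to the homepage successfully.
    page.wait_for_load_state("load")
    assert "Home" in page.title()

    browser.close()
```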

Performance Test Example

Now, what if you were to automate a performance test? The process is no different. If we take the same login scenario and turn it into a performance test, it might look something like this:

  1. Navigate to the login page.
  2. Type in your username.
  3. Type in your password.
  4. Click on the submit button.
  5. Check how long it took to fully load the homepage.

If you were to run through these steps manually, you’d need a browser, yourself to perform the browser actions, and a stopwatch to keep time. Just as we needed a tool to automate our Web UI test, so too will we need a tool that can ideally perform all five steps without human input.

Off the top of my head I can think of using Google’s Lighthouse tool, which is built into Google Chrome, to measure page load times. To fully automate this test, however, I would need a tool that could simulate the browser interactions (steps 1-4) in addition to measuring the page load time (step 5). A quick Google search will again turn up plenty of automated performance testing tools for the job, like Gatling and JMeter.
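If all you want is a rough number rather than a full load test, here’s a minimal sketch of how you might time step 5 yourself with Playwright and Python’s standard timer. The URL and selectors are placeholders, and dedicated tools like Gatling or JMeter would give you far richer metrics and simulate many users at once.

```python
# A rough sketch: time how long the homepage takes to fully load after login.
# The URL and selectors are placeholders; dedicated tools report richer metrics.
import time
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    page.goto("https://example.com/login")
    page.fill("#username", "my_user")
    page.fill("#password", "my_password")

    start = time.perf_counter()
    page.click("button[type=submit]")    # submit kicks off the homepage load
    page.wait_for_load_state("load")     # wait until the page fully loads
    elapsed = time.perf_counter() - start

    print(f"Homepage loaded in {elapsed:.2f} seconds")
    browser.close()
```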

I hope these two examples demonstrate what software test automation is. I only covered two types of tests, but with such a broad definition of test automation you can imagine that many other types of software testing can be automated too.

The next section will explain why we might choose to use automated tools in our testing versus doing everything manually.

Why do we use automation?

When I ask myself, “what’s wrong with checking the login flow myself?” the answer is nothing.

I’m a decent tester, and for such a simple scenario I could complete the tests about as quickly and accurately as an automated test. Granted, for the performance test a dedicated tool would be a bit more accurate than my stopwatch.

So why do we need automation, you may ask? Well, simply stated: I’m not a machine.

There’s one thing machines are simply better at: consistently performing tasks quickly and without a decline in accuracy. Yes, I could be just as fast and accurate as an automated test, but only for the first few tests or so. The speed and accuracy benefits of automation add up over time and over hundreds of tests. Eventually, my performance would decline as I take on more and more tests, and the more complex and annoying the tests are, the less likely I’d be focused enough to do them correctly.

Let’s dive a bit deeper into the speed and accuracy gains you can expect from leveraging automation.

Speed

When I think of automation, the invention of the assembly line comes to mind, along with all the robots that have since joined the car manufacturing process. Before these inventions, cars were assembled by hand and the process was slow. Because automating parts of the assembly process let cars be built faster, we reaped the benefits of mass production.

In the same way, test automation allows us to mass-produce test results. Because we execute tests much faster with automation, we’re able to use the time savings to run even more tests. I like to think of my automated tests as fellow team members testing with me in parallel. I give them all the boring, annoying tests to run while I play around doing exploratory testing.

I always bring up the speed factor when I’m convincing others to adopt automation. It’s intuitive that tests run much faster with automation, and tying that back to improved test coverage makes for a great selling point.

Accuracy

While speed is my go-to reason for leveraging automation, accuracy is not far behind. I say this because I know for a fact that I gloss over testing steps whenever I do a long regression test. With automation, I wouldn’t have to worry about messing up the test; however, the only reason I don’t fully commit to the accuracy angle is the assumption that our tests are bug-free and testing what we expect them to test.

That is not always the case, my friend.

One of the difficulties of writing automated tests is that you have to micromanage them. You have to spell out every little detail they should check, because they don’t make implicit checks against the UI the way your eyes and brain would. If you glance at a badly designed page, you could point out ten annoying flaws in an instant. Your automated tests can’t, unless you explicitly code in those ten checks.
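To make that concrete, here’s a small sketch of what explicitly coding in the checks might look like with Playwright’s assertion helpers. Every selector and expected value below is invented for illustration; the point is that nothing gets verified unless you write it out.

```python
# Each check has to be written out explicitly; nothing gets verified "by eye".
# The selectors and expected values here are hypothetical examples.
from playwright.sync_api import Page, expect

def check_homepage(page: Page) -> None:
    expect(page).to_have_title("My Site - Home")         # page title is correct
    expect(page.locator("h1")).to_have_text("Welcome")   # heading text matches
    expect(page.locator("#logout")).to_be_visible()      # logout link is shown
    expect(page.locator(".error")).to_have_count(0)      # no error banners appear
```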

Nevertheless, if your automated tests are bug-free and consistently produce the results you expect, then you can be confident that they are checking the right things. That’s where automation’s accuracy and predictability shine, since you can count on the tests to check every step in the exact same manner from your first run to your hundredth. You can’t say the same for a person running through the same test case a hundred times, thanks to the usual human limitations.

We get tired. We get distracted. We want to clock out.

To reuse the car manufacturing analogy, think about the robot hands that screw in the nuts and bolts on cars. Would you expect a human to be able to do that as quickly or consistently as a robot programmed for this sole function?

Humans were not born for a singular purpose.

Exercise: Drafting a Test Automation Plan

I’ll end this lesson by sharing my thoughts on test plans and walking you through how I plan my automation projects.

Test plans are just documents that describe your QA process. They usually include enough detail to give the reader an understanding of what you test, how you test it, and everything in between (who, what, when, where, how).

Since there are so many “Test Plan” templates online, I won’t add another one to the mix. What I will say is that you should Google different examples and pick from each the information you actually want to communicate through your test plan. Don’t feel like you need to find the “perfect” test plan template. I think a lot of people fall into this trap and end up adding a lot of low-value information.

Unless you’re testing mission critical software, I don’t think you need to be that detailed in your plans. Stay agile. Know what you want to communicate to your audience and include that information.

If you do need an example to look at, though, I would suggest taking a look at this article from BrowserStack. The company is a pretty trusted source in the software testing space, and the article isn’t 20 pages long.

I like good documentation. It’s a lifesaver when I’m new to a project and there’s no one around to answer questions. But, I don’t want to read double-digit pages of fluff, and neither do your stakeholders. Personally, I like to draft a one-page test plan that describes what I plan to automate.

I typically include the information below:

  • Brief summary of how automation fits into the current manual QA process. I talk about when automation gets run, how automation bugs are logged, and all the usual manual QA stuff but for automation.
  • A list of proposed test cases to automate or a link to the tests if there’s not enough room. This is probably the most important piece! You don’t want to waste time automating something no one considers valuable.
  • I sometimes go into more detail listing devices/browsers that will be covered, but this is only if cross-browser is a requirement. You can add additional automation requirements here as needed.
  • Expected timelines for POC or initiative completion, and next steps. This is where I put my project manager hat on and show my stakeholders I can get this project done.
  • Potential blockers and other support I expect to need from stakeholders. Nothing ever goes smoothly. Use this section to identify and clear early hurdles.

After I get the draft into a decent state, I schedule a half-hour kickoff call with my stakeholders to gather feedback and hash out the details. During the meeting I make sure to take notes and build a list of action items to follow up on. The final draft serves as a contract of sorts between you and your stakeholders, so that the scope of your automation work is laid out and there are no surprises.

If there’s a takeaway from my ramblings, it’s to treat your test plans like the concise backup guides you’d leave behind before going on vacation. Keep them under one page. Use them like reference documentation for new hires and external teams. Don’t make your QA process so complicated that someone needs to refer to the test plan every time they test something. If you make the test plans too detailed, you’ll have to update them often, or they’ll end up outdated.

Or worse, no one reads them.

Lesson Wrap Up

You should now know what software test automation is, why we automate, and how to plan an initiative. In the next lesson we’ll get hands-on and play around with a few automation tools.

I don’t like giving homework, because no one does it. But, if you want something to do, then come up with your own test automation plan for https://demoblaze.com/.

Fair warning though! If the page count gets close to double digits, then it’s an automatic F!
