Automated Approval Testing: What It Is and How to Use It

Testing is a crucial part of the software development life cycle whose goal is to ensure the reliability and stability of applications. Traditional testing techniques involve writing detailed test cases with expected outputs. However, when you rely only on assertions (pieces of code that check that a specific condition holds true during the test run), the amount of test code keeps growing, and every change forces you to write new sets of assertions that are hard to read for large, deeply nested objects or for your own framework. As a result, standard unit tests can become cumbersome and difficult to maintain, especially in dynamic environments. Approval testing can come to the rescue here: it offers an alternative approach that simplifies the testing process by capturing and approving system outputs.

On the other hand, approval testing can be a valuable technique for quickly capturing the existing behavior of undocumented legacy code. When dealing with legacy systems, especially those lacking proper documentation, understanding their behavior and making changes can be challenging. In such cases, approval testing can serve as an excellent tool to provide a safety net and allow for refactoring or enhancements without introducing unintended consequences.

In this article, we will explore approval testing and its benefits and provide practical examples of approval testing in Python.

What is approval testing?

Approval testing is a software testing technique used to verify that the output of a system or component under test matches an expected result. The result is represented in text form and can be either a serialization of the objects you would otherwise write assertions against or any custom representation of them. Unlike traditional testing, which involves writing test cases with expected outcomes in advance, approval testing captures the output produced during manual testing and “approves” it as the correct result. Subsequent test runs compare the output to the approved result, and any deviation triggers an investigation to determine whether the change is expected.

When an approval test runs, it generates a “.received” file whose correctness is assessed by the test author. Approving it creates an “.approved” file, which is compared against the newly generated “.received” file on later test runs. If there are discrepancies, the test fails, and the approval testing framework opens both files in a diff tool, which compares and highlights the differences between the two texts.
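
To make this mechanism concrete, here is a minimal, hand-rolled sketch of the idea in Python. It is not how ApprovalTests or similar libraries are implemented; it only illustrates the received/approved comparison and the role of the diff.

import difflib
from pathlib import Path

def check_against_approved(test_name, received_text):
    """Compare freshly produced output against the stored approved baseline."""
    received = Path(f"{test_name}.received.txt")
    approved = Path(f"{test_name}.approved.txt")

    # Always record what the system produced on this run.
    received.write_text(received_text)

    # No baseline yet: the test author must review and approve the received file.
    if not approved.exists():
        raise AssertionError(f"No approved file yet; review and approve {received}")

    approved_text = approved.read_text()
    if received_text != approved_text:
        # Show the differences, as a diff tool would.
        diff = "\n".join(difflib.unified_diff(
            approved_text.splitlines(), received_text.splitlines(),
            fromfile=str(approved), tofile=str(received), lineterm=""))
        raise AssertionError(f"Output differs from the approved baseline:\n{diff}")

    # The outputs match: the test passes and the received file is no longer needed.
    received.unlink()

Approving a result simply means copying (or renaming) the “.received” file to the corresponding “.approved” file once you have verified its content.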

Thus, approval testing has many benefits, which we summarize below.

  • Reduced time for writing tests: it eliminates the need to manually update test cases with new expected outputs. As long as the output is correct and approved, future test runs will automatically compare the results. However, it’s essential to be cautious when updating the approved files to avoid introducing incorrect results into the testing process.
  • Flexible output verification and a clear presentation of the result: approval testing can be used for systems that produce complex or non-deterministic outputs, such as graphical interfaces or elaborate data structures.
  • Ease of collaboration: since approval testing captures the actual output, it fosters collaboration between developers and testers. Developers can approve correct results while testers can investigate and fix discrepancies.

The disadvantages are a slightly longer test execution time compared to ordinary tests, the need to manually “approve” the result every time it legitimately changes, and the difficulty of seeing the expected result before the test has been run for the first time.

Approval testing process.

The process of using approval testing typically involves the following steps.

  • Record the approved output: Initially, you need to manually test the system or component and record the expected output in a file (often a text file) called the “approved file.” This file will serve as the baseline for future tests.
  • Automate the test: Write an automated test script that executes the system or component with specific inputs and captures the output produced during the test run.
  • Compare the result with the approved output: After running the automated test, the output obtained is compared to the contents of the approved file. If the output matches the approved content, the test passes. Otherwise, it fails, indicating that something has changed in the system.
  • Review and update the approved file when needed: If the test fails and the changes are expected, you can update the approved file to reflect the new correct output. If the changes are unexpected or incorrect, you need to investigate and fix the system or test.

To integrate approval testing into your workflow, do the following.

  • Select an approval testing tool: Choose an approval testing framework or library that suits your programming language and testing environment. Some popular approval testing libraries include TextTest and Testoot for Python, ApprovalTests for various languages (including Python), and ApprovalTests.Net for .NET.
  • Install the library: Install the chosen approval testing library using the package manager relevant to your programming language.
  • Implement the approval testing logic: If you are developing tests for legacy code, focus first on identifying the most critical scenarios and the use cases that exercise its various functionalities. These scenarios will serve as the foundation for your approval tests. Manually run the legacy code through the identified scenarios and capture the output, which represents the system’s current behavior. Once you have the output from this manual testing, create approval test cases using the approval testing framework.
  • Continuous Integration (CI): Automate approval testing and incorporate it into your CI pipeline so that changes made to the system are validated automatically as part of the development process; a possible setup is sketched below.
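
In a headless CI environment there is usually no GUI diff tool available, so it helps to switch the library to a console-based reporter. The snippet below is only a sketch: it assumes that your installed version of the Python ApprovalTests library exposes set_default_reporter and a console-oriented PythonNativeReporter; check the reporters module of your version for the exact names.

# conftest.py -- assumed API names; verify them against your approvaltests version
from approvaltests import set_default_reporter
from approvaltests.reporters import PythonNativeReporter

def pytest_configure(config):
    # Print differences to the console instead of launching a GUI diff tool,
    # so failing approval tests remain readable in CI logs.
    set_default_reporter(PythonNativeReporter())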

Combining qodo (formerly Codium) with Approval Testing to quickly capture existing behavior of undocumented legacy code.

Next, we will illustrate how to combine the qodo (formerly Codium) extension with the ApprovalTests library to build high-level software regression tests.

First of all, install the ApprovalTests library.

pip install approvaltests

Second, create assertion tests using qodo (formerly Codium). Assertion tests are needed to verify the behavior of the code. For this illustration, we’ll use a simple function that calculates the sum of two numbers.

def calculation(x, y):
   return x + y

Here is a test set generated by qodo (formerly Codium).

# Generated by qodo

import pytest

class TestCalculation:
   # Tests that the function returns 0 when x and y are both 0
   def test_addition_of_zeroes(self):
       assert calculation(0, 0) == 0

   # Tests that the function returns 2 when x and y are both 1
   def test_addition_of_ones(self):
       assert calculation(1, 1) == 2

   # Tests that the function returns -2 when x and y are both -1
   def test_addition_of_negatives(self):
       assert calculation(-1, -1) == -2

   # Tests that the function returns the correct sum when x and y are large numbers
   def test_addition_of_large_numbers(self):
       assert calculation(1000000, 2000000) == 3000000

   # Tests that the function returns 1 when x is 0 and y is 1
   def test_addition_of_zero_and_one(self):
       assert calculation(0, 1) == 1

   # Tests that the function raises a TypeError when x or y is None
   def test_addition_of_none(self):
       with pytest.raises(TypeError):
           calculation(None, 1)
       with pytest.raises(TypeError):
           calculation(0, None)

Once we have our assertion tests, we can transform them into approval tests using the verify function from the ApprovalTests library. Instead of comparing against hard-coded expected values, verify captures the output of the code under test and feeds it into the approval testing process.

# test_example.py

from approvaltests import verify

# The function under test; in a real project, import it from the module
# where it is defined instead of redefining it here.
def calculation(x, y):
    return x + y

class TestCalculation:
    # Tests that the function returns 0 when x and y are both 0
    def test_addition_of_zeroes(self):
        verify(calculation(0, 0))

    # Tests that the function returns 2 when x and y are both 1
    def test_addition_of_ones(self):
        verify(calculation(1, 1))

Once the tests are rewritten, run them with the pytest test runner. The first time you run them, they will fail because the captured output has not yet been approved.

python -m pytest test_example.py

After running the approval tests, you will find received output files (with a .received extension) containing the captured results. Once you have reviewed the outputs and confirmed that they match the expected behavior of the function, rename the received files to approved files (with a .approved extension); subsequent runs will then compare against these baselines and pass.

These approval tests can be effectively used to ensure that the function’s behavior remains consistent over time. As more scenarios and assertions are added, the approval tests will help capture a comprehensive understanding of the code’s behavior.
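
For example, the benefits described earlier for complex or non-deterministic outputs can be obtained by serializing the result into a stable text form and normalizing volatile fields before verifying it. The test below is a sketch under those assumptions: build_report is a hypothetical function, and the timestamp is scrubbed by hand with a regular expression rather than with any library-specific helper.

# test_report.py
import json
import re

from approvaltests import verify

# Hypothetical function under test: it returns a nested structure
# that also contains a volatile timestamp.
def build_report(user_id):
    return {
        "user_id": user_id,
        "totals": {"orders": 3, "amount": 42.5},
        "generated_at": "2024-05-01T12:34:56Z",
    }

def test_report_snapshot():
    report = build_report(7)

    # Serialize deterministically so the text comparison is stable.
    text = json.dumps(report, indent=2, sort_keys=True)

    # Scrub the non-deterministic timestamp before approval.
    text = re.sub(r'"generated_at": "[^"]+"', '"generated_at": "<timestamp>"', text)

    verify(text)

The approved file then stores the whole serialized report, so a single approval test guards many fields at once.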

Overall, approval testing can be a valuable addition to your testing toolbox, especially when traditional testing methods become cumbersome or impractical, or when the system’s output is not fully deterministic, such as with complex data structures or graphical user interfaces. It avoids the overhead of maintaining detailed expected outcomes for every test case and instead focuses on verifying changes in the system’s output.
