Using externally defined expected results

For some applications, the users can articulate processing rules that describe the software's behavior. In other cases, it's the job of an analyst or designer to transform the users' desires into a procedural description of the software.

It's often easiest to provide concrete examples of expected results. Either the ultimate users or intermediary analysts may find it helpful to create a spreadsheet that shows sample inputs and expected results. Working from user-supplied, concrete sample data can simplify the development of the software.

Whenever possible, have real users produce concrete examples of correct results. Creating procedural descriptions or software specifications is remarkably difficult. Creating concrete examples, and generalizing from those examples to a software specification, is less fraught with complexity and confusion. Further, it plays into a style of development where test cases drive the development effort. A suite of test cases gives a developer a concrete definition of done. Tracking project status then becomes a matter of asking how many test cases we have today and how many of them pass.
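As a small illustration of that kind of tracking, here's a sketch of one way to count passing tests with the unittest API; the summarize() function is our own illustrative helper, not part of the library:

import unittest

def summarize(suite: unittest.TestSuite) -> str:
    """Run a suite quietly and report a pass count."""
    result = unittest.TestResult()
    suite.run(result)
    passed = result.testsRun - len(result.failures) - len(result.errors)
    return f"{passed}/{result.testsRun} test cases pass"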

Given a spreadsheet of concrete examples, we need to turn each row into a TestCase instance. We can then build a suite from these objects.

For the previous examples in this chapter, we loaded the test cases from a TestCase-based class. We used unittest.defaultTestLoader.loadTestsFromTestCase to locate all the methods with a name that starts with test. The loader creates a test object from each method with the proper name prefix and combines them into a test suite.
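For example, a conventional test class might be loaded like this; SomeTestCase here is a hypothetical stand-in for any of the earlier test classes:

import unittest

class SomeTestCase(unittest.TestCase):
    def test_example(self) -> None:
        self.assertEqual(2 + 2, 4)

# One test object is built for each method whose name starts with "test".
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SomeTestCase)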

There's an alternative approach, however. For this example, we're going to build test case instances individually. This is done by defining a class with a single runTest() method. We can load multiple instances of this class into a suite. For this to work, the TestCase class must define only one test with the name runTest(). We won't be using the loader to create the test objects; we'll be creating them directly from rows of externally supplied data.

Let's take a look at a concrete class that we need to test. This is from Chapter 4, Attribute Access, Properties, and Descriptors:

from Chapter_4.ch04_ex3 import RateTimeDistance
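This class fills in whichever of rate, time, or distance is missing from the relationship rate × time = distance. A minimal sketch of what it might look like follows; the actual Chapter 4 implementation may differ in its details:

from typing import Optional

class RateTimeDistance:
    """Eagerly solve rate * time = distance for the missing value."""

    def __init__(
        self,
        rate: Optional[float] = None,
        time: Optional[float] = None,
        distance: Optional[float] = None,
    ) -> None:
        if rate is None and time is not None and distance is not None:
            rate = distance / time
        elif time is None and rate is not None and distance is not None:
            time = distance / rate
        elif distance is None and rate is not None and time is not None:
            distance = rate * time
        self.rate = rate
        self.time = time
        self.distance = distance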

As the sketch suggests, the class eagerly computes a number of attributes when it is initialized. The users of this class provided us with some test cases as a spreadsheet, from which we extracted a CSV file. (For more information on CSV files, see Chapter 10, Serializing and Saving - JSON, YAML, Pickle, CSV, and XML.) We need to transform each row into a TestCase. Here's the data in the CSV file:

rate_in,time_in,distance_in,rate_out,time_out,distance_out 
2,3,,2,3,6 
5,,7,5,1.4,7 
,11,13,1.18,11,13 

Each row supplies two of the three input values; the _out columns give all three values expected after the missing one is computed. In the first row, for example, a rate of 2 and a time of 3 should yield a distance of 6. We're not going to give the test class a name that starts with test, because the class won't simply be discovered by a loader. Instead, it's used to build instances that are added to a larger suite of tests. Here's the test case template that we can use to create test instances from each row of the CSV file:

import csv
import unittest
from pathlib import Path

class Test_RTD(unittest.TestCase):

    def runTest(self) -> None:
        with (Path.cwd() / "data" / "ch17_data.csv").open() as source:
            rdr = csv.DictReader(source)
            for row in rdr:
                self.example(**row)

    def example(
        self,
        rate_in: str,
        time_in: str,
        distance_in: str,
        rate_out: str,
        time_out: str,
        distance_out: str,
    ) -> None:
        args = dict(
            rate=float_or_none(rate_in),
            time=float_or_none(time_in),
            distance=float_or_none(distance_in),
        )
        expected = dict(
            rate=float(rate_out), time=float(time_out), distance=float(distance_out)
        )
        rtd = RateTimeDistance(**args)
        assert rtd.distance and rtd.rate and rtd.time
        self.assertAlmostEqual(rtd.distance, rtd.rate * rtd.time, places=2)
        self.assertAlmostEqual(rtd.rate, expected["rate"], places=2)
        self.assertAlmostEqual(rtd.time, expected["time"], places=2)
        self.assertAlmostEqual(rtd.distance, expected["distance"], places=2)

The testing is embodied in the runTest() method of this class. In previous examples, we used method names starting with test_ to provide the test case behavior. Here, a single runTest() method replaces the multiple test_ methods. This also changes the way a test suite is built, as we'll see below.

The runTest() method reads each row of the spreadsheet as a dictionary and passes it to the example() method. For this to work correctly, the sample data's column headings must match the parameter names of the example() method. The input values are placed in a dictionary named args; the expected result values are, similarly, placed into a dictionary named expected.
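Here's a small sketch of that header-to-parameter matching, using inline data equivalent to the first row of the CSV file:

import csv
import io

source = io.StringIO(
    "rate_in,time_in,distance_in,rate_out,time_out,distance_out\n"
    "2,3,,2,3,6\n"
)
row = next(csv.DictReader(source))
# The keys come from the header row, so example(**row) binds each
# column value to the matching parameter name.
print(row)  # {'rate_in': '2', 'time_in': '3', 'distance_in': '', ...}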

The float_or_none() function helps handle the CSV source data, where a None value is represented by an empty string. It converts the text of a cell to a float value or None. The function is defined as follows:

from typing import Optional

def float_or_none(text: str) -> Optional[float]:
    if len(text) == 0:
        return None
    return float(text)

For example, float_or_none("") returns None, while float_or_none("2") returns 2.0, matching the empty and filled cells in the CSV extract shown earlier. Each row of the spreadsheet is processed through the example() method. This gives us a relatively flexible approach to testing: we can let users or business analysts create all the examples required to clarify proper operation.

We can build a suite from this test object as follows:

def suite9() -> unittest.TestSuite:
    suite = unittest.TestSuite()
    suite.addTest(Test_RTD())
    return suite

Note that we do not use the loadTestsFromTestCase method to discover the methods with test_ names. Instead, we create an instance of the test case that can simply be added to the test suite.
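One limitation of a single runTest() that loops over every row is that the first failing row stops the whole method. As a variation, we might build one test instance per row; the Test_RTD_Row class and suite_from_rows() function below are illustrative names of our own, a sketch rather than the chapter's code:

class Test_RTD_Row(Test_RTD):
    """One instance per CSV row; each row reports as a separate test."""

    def __init__(self, row: dict) -> None:
        super().__init__()  # TestCase's methodName defaults to "runTest"
        self.row = row

    def runTest(self) -> None:
        self.example(**self.row)

def suite_from_rows(path: Path) -> unittest.TestSuite:
    suite = unittest.TestSuite()
    with path.open() as source:
        for row in csv.DictReader(source):
            suite.addTest(Test_RTD_Row(row))
    return suite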

The suite9() suite is executed using the kind of script we've seen earlier. Here's an example:

if __name__ == "__main__":
    t = unittest.TextTestRunner()
    t.run(suite9())

The output (produced with assertAlmostEqual()'s default precision, before the places=2 arguments were added) looks like this:

..F 
====================================================================== 
FAIL: runTest (__main__.Test_RTD) 
{'rate': None, 'distance': 13.0, 'time': 11.0} -> {'rate': 1.18, 'distance': 13.0, 'time': 11.0} 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "p3_c15.py", line 504, in runTest 
    self.assertAlmostEqual( self.rtd.rate, self.result['rate'] ) 
AssertionError: 1.1818181818181819 != 1.18 within 7 places 
 
---------------------------------------------------------------------- 
Ran 3 tests in 0.000s 
 
FAILED (failures=1) 

The user-supplied data has a small problem: the users provided a value, 1.18, that has been rounded off to only two places. With assertAlmostEqual()'s default precision of seven places, the comparison fails, as the output shows. Either the sample data needs to provide more digits, or our test assertions need to cope with the rounding; the places=2 arguments in the example() method shown above are one way to do the latter.
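For reference, here's a hypothetical Test_Rounding case showing the two ways assertAlmostEqual() can tolerate rounded data:

import unittest

class Test_Rounding(unittest.TestCase):
    def runTest(self) -> None:
        # places=2 rounds the difference to two decimal places before comparing.
        self.assertAlmostEqual(1.1818181818181819, 1.18, places=2)
        # delta states an absolute tolerance instead.
        self.assertAlmostEqual(1.1818181818181819, 1.18, delta=0.005)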

This can also be run from the command line using the unittest module's built-in test discovery:

python3 -m unittest Chapter_17/ch17_ex1.py

This produces abbreviated output covering many of the testing examples from this chapter. It looks like this:

.x............x..Run time 0.931446542
..
----------------------------------------------------------------------
Ran 19 tests in 0.939s

Each . is a test that passed. The x marks are tests that are expected to fail. As we noted previously, some of the tests reveal problems with the defined classes, and those tests will fail until the classes are fixed.

The Run time 0.931446542 output comes from a print() inside one of the tests; it's not a standard part of unittest's output. Because of the way the progress display is structured, printing debugging or performance data from inside a test case interrupts the simple line of periods showing test execution progress, as this example demonstrates.
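One way to keep such output from interleaving with the progress dots is unittest's --buffer option, which captures stdout and stderr during each test and replays them only for failing tests:

python3 -m unittest --buffer Chapter_17/ch17_ex1.py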
