Chapter 5. Testing Enterprise iOS Applications

If there’s one philosophy that has become entrenched in the DNA of software development in recent memory, it’s that testing is crucial. For some organizations, it means comprehensive unit testing, for others a full-on TDD approach with continuous integration and regression testing. But the chances are that you’re not going to be shipping your iOS application out the door without some significant test infrastructure.

Beyond simple unit testing, you also should have testing of the running application, which offers its own challenges. Not only do you need to have a testing framework that can successfully exercise your UI, but you also have to make sure that all your other integration components (such as backend servers and databases) are in a consistent state every time you run the tests.

Recently, more advanced metrics such as cyclomatic complexity numbers (CCN) have come into vogue. They recognize that just because code is fully tested doesn’t mean that it’s well written. On the other hand, developers can end up gaming the system to get lower CCN values, at the cost of code quality. We’ll take a look at how to generate CCN metrics automatically later in the chapter.

It’s worth noting here that the various testing frameworks have, in my opinion, been the most poorly maintained and casually broken parts of the SDK over the time that I’ve been developing. Apple changes how (and if!) test frameworks operate without notice, and sometimes apparently at random. Developers with automated test frameworks pray to their chosen deity before taking an Xcode update, because you never know what may and may not work afterwards. With that in mind, be aware that anything you see here is subject to change, and just because you set something up one way today doesn’t mean that it will work the same way tomorrow. The general principles outlined here should hold true, however.

Unit Testing iOS Applications

Creating unit tests is probably the most straightforward part of iOS testing, because OCUnit has been a part of Xcode for a long time (so it’s pretty well documented). Recent releases of Xcode have improved things further, most notably by running the tests on the simulator rather than at compile time, which makes them easier to debug and opens up a powerful new alternative to the creaky UIAutomation framework, something I’ll get to shortly.

Certain things had been broken, however—most notably the generation of code coverage statistics using the gcov library. When the toolchain moved to LLVM as the default compiler in the iOS 4 era, gcov broke, but it has returned from the dead with iOS 5. I’ll talk a bit about gcov and CoverStory later in this section.

Setting Up an OCUnit Target

In any event, you get asked if you want to create a unit test target when you first create a project. If you don’t, you can add one later by doing a simple “Add Target” and picking “Cocoa Touch Unit Testing Bundle” (Figure 5-1).

Figure 5-1. Adding an OCUnit target to your project

Once you have a testing target, you can immediately try running it, since the default template creates a single unit test file with a single failing test case. You’ll eventually want to have multiple files of tests; you can create more by dropping new “Objective-C test case class” files into the unit test target.

There are a few things to point out immediately. Firstly, you need to make sure that the testing target has access to all the classes, frameworks, and resources that the test cases will need to run. You can set them up in the Build Phases tab of the target info (Figure 5-2).

Figure 5-2. Adding dependencies to a test target

This used to be a lot more of a pain in the butt than it is now. Until recently, OCUnit tests ran at compile time, and therefore couldn’t make use of things in the framework libraries, or any class in your code that included them. You had to carefully manage which files went into which target, or your unit test target wouldn’t compile.

Well, all that has changed. You can now make your whole honking application a dependency of your unit test target, and have access to anything in it, as we’ll see later. But for the moment, just know that if your test target sets a target dependency on your application, you should be all set.

Once you have the test target all set up, it’s time to write some unit tests. Remember, unit tests are intended to test the functionality in a single class, and purists insist on using techniques like dependency injection and mock classes to isolate logic down to a single class. This has the advantage of letting you test your code in a clean-room, as it were, but requires you to create a lot more test framework. It’s not my place to prescribe a unit testing approach, but I’ll say that I find that making some compromises here can let you test effectively without having to recreate the entire outside world in mocks.
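For instance, a class that talks to the network might accept its transport object through its initializer, so that a test can hand it a canned stub instead of a live connection. Here is a minimal sketch of that idea; the protocol and class names are invented for illustration and aren’t part of the Buggy Whip Chat code:

#import <Foundation/Foundation.h>

// A hypothetical protocol adopted by both the real transport and a test stub.
@protocol ChatTransport <NSObject>
-(NSString *) sendMessage:(NSString *) message;
@end

// The class under test receives its transport through the initializer
// (dependency injection), so a unit test can inject a stub that returns
// canned responses and never touches the network.
@interface ChatSession : NSObject {
    id<ChatTransport> transport;
}
-(id) initWithTransport:(id<ChatTransport>) aTransport;
-(NSString *) echo:(NSString *) message;
@end

@implementation ChatSession

-(id) initWithTransport:(id<ChatTransport>) aTransport {
    if ((self = [super init])) {
        transport = aTransport;
    }
    return self;
}

-(NSString *) echo:(NSString *) message {
    return [transport sendMessage:message];
}

@end

A test can then define a tiny stub class that adopts ChatTransport and returns a fixed string, keeping the assertions focused on ChatSession alone.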

We have a few classes it would be useful to unit test from the Buggy Whip Chat application (you first saw these back in Chapter 4), namely the URL endpoint generators. So, to begin our testing effort, let’s tackle them.

We start, as described above, by adding a new unit test class to our unit test target. Since we’re testing the WebServiceEndpointGenerator class, we’ll call it TestWebServiceEndpointGenerator. For some reason (as of this writing, as with everything in the book), creating a unit test target gives you a nice clean sample test case file, but adding a new test class gives you the old template, full of conditionals for in-app vs. static testing that don’t really apply anymore, since unit tests now always run on the simulator rather than at compile time. So the first thing we can do is strip out all of that conditionalization logic in the header and code files, leaving them looking much more like the sample file you get with the target. When we’re done, they should look like the code in Example 5-1.

Example 5-1. The header and code files for a bare test case
////  TestWebServiceEndpointGenerator.h 

#import <SenTestingKit/SenTestingKit.h>
#import <UIKit/UIKit.h>

@interface TestWebServiceEndpointGenerator : SenTestCase

@end


////  TestWebServiceEndpointGenerator.m

#import "TestWebServiceEndpointGenerator.h"

@implementation TestWebServiceEndpointGenerator

@end

The first method to test is the getForecastWebServiceEndpoint method. So that we can do negative as well as positive testing, the method has been rewritten slightly to add some argument checking at the top:

+(NSURL *) getForecastWebServiceEndpoint:
       (NSString *) zipcode
       startDate:(NSDate *) startDate 
       endDate:(NSDate *) endDate {
    if ((zipcode == NULL) || (startDate == NULL) || 
        (endDate == NULL)) {
        NSException *exception =
             [NSException 
               exceptionWithName:
                 @"Missing Argument Exception"
               reason:@"An argument was missing"
               userInfo:nil];
        @throw exception;
    }
    if ([endDate compare:startDate] == NSOrderedAscending) {
        NSException *exception =
         [NSException exceptionWithName:@"Date Order Exception"
            reason:@"The start date can not be after the end date"
            userInfo:nil];
        @throw exception;
    }
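    // ... the remainder of the method builds and returns the NSURL, exactly as before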

Now we have some nice juicy argument checking we can use for negative tests. But since we’re optimistic by nature, let’s write the positive test first, shown in Example 5-2.

Example 5-2. Writing positive test cases
NSDate *testStartDate;
NSDate *testEndDate;

-(void) setUp {
    [super setUp];
    NSDateFormatter *df = [NSDateFormatter new];
    // "yyyy" (calendar year), not "YY" (week-based year)
    [df setDateFormat:@"yyyy-MM-dd"];
    testStartDate = [df dateFromString:@"2011-08-01"];
    testEndDate = [df dateFromString:@"2011-08-02"];
}

-(void) testGetForecastWebServiceEndpointSuccess {
    NSURL *url = 
         [WebServiceEndpointGenerator 
                    getForecastWebServiceEndpoint:@"03038"
                    startDate:testStartDate
                    endDate:testEndDate];
    STAssertNotNil(url, @"No URL returned");
    NSString *correctURL = @"";
    NSString *urlString = [url absoluteString];
    STAssertEqualObjects(correctURL, urlString, 
      @"The generated URL, %@, did not match the expected URL, %@",
      urlString, correctURL);
}

Breaking this down, we start by defining two variables to hold a start and end date, since we’ll be using them in a lot of the test cases and don’t want to create them fresh in each one. Every test case class has setUp and tearDown methods that you can override; they run before and after each test case, respectively. They’re a good place to make sure all your state is reset and to initialize common data that the class will need, so we put the code that creates known-good test dates in setUp.
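If you want to be scrupulous about resetting state, a matching tearDown for this class might look something like the following sketch (strictly optional here, since setUp overwrites the dates anyway):

-(void) tearDown {
    // Clear the shared fixtures so nothing leaks between test methods.
    testStartDate = nil;
    testEndDate = nil;
    [super tearDown];
}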

The actual test case just calls the method under test with good data, then makes sure that we got a value back and that the URL string matches what we expect. All of the unit test assertions are documented, but they generally follow the same pattern: a test condition, a format string that is handed to NSString’s stringWithFormat:, and the arguments for that format. In this case, we use STAssertNotNil and STAssertEqualObjects. Be aware that STAssertEqualObjects and STAssertEquals behave differently: the former compares the two objects using the isEqual: message, while the latter tests for literal equality (for objects, that means the same pointer).
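A quick, throwaway illustration of the difference (nothing here is specific to our endpoint class):

NSString *a = @"03038";
NSString *b = [NSString stringWithFormat:@"%05d", 3038];

// Passes: a and b are different objects, but isEqual: says their
// contents match.
STAssertEqualObjects(a, b, @"Contents should match");

// Would almost certainly fail: STAssertEquals compares the raw values,
// which for objects means the pointers themselves.
// STAssertEquals(a, b, @"Pointers should match");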

If you run this test as is (by selecting Product → Test, typing ⌘-U, or clicking and holding the Run icon and selecting Test), it will fail, as shown in Figure 5-3.

Figure 5-3. A failing test run

This is because we’re testing the generated URL string against the empty string. So where do we get a good string to test it against? In pure TDD, we would have figured out what string we should expect before we wrote the code and the test would have tested for it. In lazy man’s TDD, we can run the test against a known bad string, copy the good string out of the error log by right-clicking on the error in the navigator and selecting “copy”, inspect it and make sure that it looks correct, then paste it into the “expected” string part of the test.

Happy path testing is all well and good, but as you probably know, the world is full of sad, broken paths. To round out the tests, we can write a number of negative cases, shown in Example 5-3.

Example 5-3. Writing negative test cases
-(void) testGetForecastWebServiceEndpointMissingZip {
    @try {
        [WebServiceEndpointGenerator
          getForecastWebServiceEndpoint:nil
          startDate:testStartDate endDate:testEndDate];
    } @catch (NSException * e) {
        STAssertEqualObjects(@"Missing Argument Exception",
            e.name, @"Wrong exception type");
        return;
    }
    STFail(@"Call did not generate exception");
}

-(void) testGetForecastWebServiceEndpointMissingStartDate {
    @try {
        [WebServiceEndpointGenerator
           getForecastWebServiceEndpoint:@"03038"
           startDate:nil endDate:testEndDate];
    } @catch (NSException * e) {
        STAssertEqualObjects(@"Missing Argument Exception",
            e.name, @"Wrong exception type");
        return;
    }
    STFail(@"Call did not generate exception");
}

-(void) testGetForecastWebServiceEndpointMissingEndDate {
    @try {
        [WebServiceEndpointGenerator 
            getForecastWebServiceEndpoint:@"03038"
           startDate:testStartDate endDate:nil];
    } @catch (NSException * e) {
        STAssertEqualObjects(@"Missing Argument Exception",
            e.name, @"Wrong exception type");
        return;
    }
    STFail(@"Call did not generate exception");
}

-(void) testGetForecastWebServiceEndpointDatesInWrongOrder {
    @try {
        [WebServiceEndpointGenerator
          getForecastWebServiceEndpoint:@"03038"
          startDate:testEndDate endDate:testStartDate];
    } @catch (NSException * e) {
        STAssertEqualObjects(@"Date Order Exception",
            e.name, @"Wrong exception type");
        return;
    }
    STFail(@"Call did not generate exception");
}

The interesting bit in these test cases is that what we really care about is whether the tested method throws an exception. So we wrap the invocation in a @try block, and if an exception is thrown, we compare its name against the name we expected to see; if the name is wrong, the assert fails. If no exception is thrown, we never return out of the test inside the @catch clause, fall through to the bottom, and explicitly fail.
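If you’d rather not write the @try/@catch boilerplate yourself, SenTestingKit also ships exception-assertion macros that collapse the pattern into a single call; the missing-zipcode case could be written as something like the sketch below (I’ve kept the long form in the examples because it makes the control flow obvious):

-(void) testGetForecastWebServiceEndpointMissingZipTerse {
    // Fails unless the call throws an NSException with exactly this name.
    STAssertThrowsSpecificNamed(
        [WebServiceEndpointGenerator
            getForecastWebServiceEndpoint:nil
            startDate:testStartDate endDate:testEndDate],
        NSException, @"Missing Argument Exception",
        @"Call did not generate the expected exception");
}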

Generating Code Coverage Metrics

These days, it’s not enough to write good unit tests: you also need to prove that you’re covering all the code with them. As previously mentioned, you can use the gcov library to produce code coverage results from your unit tests. To do this, turn on the Generate Test Coverage Files flag in the Build Settings of your unit test target, as shown in Figure 5-4.

Figure 5-4. Turning on code coverage in Xcode
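If you drive builds from an xcconfig file or the xcodebuild command line instead of clicking checkboxes, the underlying settings are, to the best of my knowledge, the two shown below; in my experience you need both the coverage-files flag and the program-flow instrumentation flag before the .gcda files actually show up:

// In the unit test target's build settings (or an xcconfig file):
GCC_GENERATE_TEST_COVERAGE_FILES = YES
GCC_INSTRUMENT_PROGRAM_FLOW_ARCS = YES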

Once you’ve turned on code coverage support and run your unit tests, you’re going to end up with a bunch of files with gcda and gcno extensions somewhere. Perhaps in your project directory; more likely in some place like ~/Library/Developer/Xcode/DerivedData. Unfortunately, these files are next to useless to try and interpret visually. Fortunately, there are good tools such as CoverStory to do all the hard work for you.

CoverStory is available at http://code.google.com/p/coverstory/, and is a normal Mac OS X application. When you run it, you can use the File → Open command to point it at a directory (or tree of directories) containing your code coverage files. It then opens up and displays a listing of your code, with unrun code shown in red, and counts next to each line showing how many times each line ran (Figure 5-5).

Figure 5-5. An example of a CoverStory session

CoverStory also gives you a list of files on the left-hand side, along with code coverage percentages, so you can see how your code is doing, coverage-wise, and find undercovered files.

Generating Code Complexity Metrics

Over the last few years, code complexity metrics have become the hot new measure of code goodness. Simply put, cyclomatic complexity numbers (CCN) are a measure of how many ways there are of getting through a given piece of code. The more conditional code you have in a method, the deeper you nest your if statements, the more looping you’re doing, the higher your numbers are going to be.
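As a rough illustration (this method is contrived, not taken from the chat application), each decision point adds one to the count: the method below starts at 1, and the two ifs and the loop bring its CCN to 4, though the exact counting rules vary a little from tool to tool.

-(NSString *) describeTemperatures:(NSArray *) temps {   // 1: the method itself
    if (temps == nil) {                                  // 2: first branch
        return @"no data";
    }
    NSMutableString *result = [NSMutableString string];
    for (NSNumber *t in temps) {                         // 3: the loop
        if ([t intValue] > 90) {                         // 4: nested branch
            [result appendString:@"hot "];
        } else {
            [result appendString:@"mild "];
        }
    }
    return result;
}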

Different organizations have different acceptable levels of complexity. For example, a company might mandate that any new code must have a CCN lower than 20, while old code must be refactored if the number is higher than 50. Getting numbers down typically involves breaking up large methods into smaller ones, and is generally a good thing, although I’ve seen cases where code actually ended up less readable as a result of trying to knock down high CCN numbers.

In the Java world, tools like Coverity are routinely used to generate CCN metrics, and even to fail builds based on them. In my searches, I’ve only found one good tool to compute CCN metrics for Objective-C, and it’s just a Python script that a guy named Terry Yinzhe threw together under the Apache license. It’s called hfcca13.py, and it’s included in the example code for the book (see How to Contact Us).

Using it is very simple. Simply invoke it with a list of the source (not header) files that you wish to have analyzed, and it will spit out the result to the command line. For example, running it against our source tree gives results like this:

$ ./hfcca13.py `find ChatAPI -name "*.m" -print`
==============================================================
NLOC    CCN   token          function@line@file
--------------------------------------------------------------
     9     2    10 init@13@ChatAPI/ChatAPI/GoogleTalkAPI.m
     6     1     4 setUp@13@ChatAPI/ChatAPITests/ChatAPITests.m
     6     1     4 tearDown@20@ChatAPI/ChatAPITests/ChatAPITests.m
     4     1     3 testExample@27@ChatAPI/ChatAPITests/ChatAPITests.m
--------------------------------------------------------------
2 file analyzed.
==============================================================
LOC    Avg.NLOC AvgCCN Avg.ttoken  function_cnt    file
--------------------------------------------------------------
     24      9      2        10         1     ChatAPI/ChatAPI/GoogleTalkAPI.m
     33      5      1         3         3     ChatAPI/ChatAPITests/ChatAPITests.m

!!!! Warnings (CCN > 15) !!!!
==============================================================
NLOC    CCN   token          function@file
--------------------------------------------------------------
Total warning: (0/4, 0.0%)

As you can see, we’re being good little programmers, and have no methods with CCN values above 15. The script takes a number of parameters to tweak the behavior, including setting the warning threshold for the CCN value. At work, we’ve integrated the script into our Hudson build, and the build automatically breaks if it finds values above a certain level.

Creating UI Tests (The Old and Painful Way)

If you want to test your UI, the traditional way (at least for the last year or so) has been to use the UIAutomation Framework. The UIAutomation Framework works in conjunction with the Instruments tool. You create JavaScript files that can be used to access and poke at the UI elements of the application, within some fairly restrictive bounds, and with some nasty bugs to watch out for. I was never so happy a developer as the day I was able to hand off test script creation to our QA team, because it was excruciating work to create them.

To begin, we need to do some setup work in our project. The framework uses the Accessibility label property of UI elements to refer to them in the JavaScript, so the first step is to mark up the elements in Interface Builder (shown in Figure 5-6). However, there’s a bug that will cause everything to go pear-shaped very quickly if you ever assign an accessibility label to a view of any kind, so you may be better off without it.

Alas, many of the elements on the screen don’t have accessibility labels, so you have to access them through more arcane methods. As an example, let’s look at a test that finds the zip code button in the toolbar, presses it, and then checks whether it gets a good value back (Example 5-4).

Figure 5-6. Labeling elements for scripting
Example 5-4. A sample UIAutomation script
UIALogger.logStart("Find Toolbar");
var target = UIATarget.localTarget();
var window = target.frontMostApp().mainWindow();
window.logElementTree();
var toolbars = window.toolbars();
if (toolbars.length != 1) {
      UILogger.logFail("Did not find toolbar");
}

UIALogger.logStart("Find Output View");
var outputViews = window.textViews();
if (outputViews.length == 0) {
   UIALogger.logFail("Did not find output textview");
}
var outputView = outputViews[0];

UIALogger.logStart("Find Button");
var buttons = toolbars[0].buttons();
var zipButton = buttons["Zip Code"];
if (zipButton == null) {
   UILogger.logFail("Did not find zipcode button");
}

UIALogger.logStart("Test Zipcode Button");
zipButton.tap();var result = 
    outputView.withValueForKey(
       "Postal Code: 03038
City: ... Longitude: 71.30W
",
        "value");
if (result == null) {
   UILogger.logFail("Did not get expected result");
}

So what have we got going on here? The UIALogger.logStart call signals to the framework that we’re about to start a test. The next two lines are boilerplate, and gain access to the currently visible window of the application. The logElementTree call is useful to put in during test development: it dumps a tree view of all the UI elements below the specified one (in this case, the window) to the log, so that you can figure out how to navigate to specific elements in your test code.

Next, we get a list of all the toolbars on the window (there should only be one, something we test for and fail if untrue). We also look for the TextView that we use for output, and the button inside the toolbar called “Zip Code”. Here we see one of the weaknesses of the framework—you need to access the buttons by their label text, which means that these tests will fail if the test is run in a different language. You can also try to get the button positionally, but this means that if you add a button, the test may break.

Once the test has access to the button, we have the test tap it, and check to see if the string placed in the results view is correct. But we can’t just tap the button and then use the value() method on the output view to check it, because the web service will almost certainly not have returned yet, so the text in the view will be the old text. Instead, we need to use the withValueForKey method, specifying “value” as the key. The way that this method works is that it searches for an element which satisfies the criteria, waiting until a specified timeout (which you can configure) has elapsed before failing (and returning null). So, if the output view’s value becomes equal to the test string before the timeout expires, the view is returned, the null check doesn’t trip, and the test passes; if it never matches, withValueForKey returns null and we log a failure.

This kind of programmatic acrobatics is typical of what you have to go through when using the UIAutomation Framework. It is extremely poorly documented by Apple, tends to be very fragile and occasionally non-intuitive, and is (unfortunately) the only officially supported game in town.

Anyway, once you have your test script, you fire up Instruments, and select the Automation template (Figure 5-7). Once at the main screen, use the target pulldown at the top to select the simulator version of your application build (which may be hiding down inside of your ~/Library/Developer/Xcode/DerivedData hierarchy), select the script on the script pane, and hit the record button (Figure 5-8).

When you run the test, the Script Log will show you the results. One thing that you’ll quickly notice is that the test never stops; you have to hit the record button again to stop it. For this reason (among others), you can’t use these tests from the command line as part of an automated build process.

Figure 5-7. Starting Instruments
Figure 5-8. Setting up the test in Instruments

One thing you can do is to throw other instruments such as memory leak detectors on in parallel with your automation test, which will allow you to profile your application over time to make sure you don’t introduce issues down the road.

Because of the awkwardness, fragility, and slowness of the framework, it would be nice if there were alternatives, and there are. They just aren’t officially supported.

UI Testing Using OCUnit

In theory, now that OCUnit tests run in the simulator, you could write your UI tests using OCUnit and be done with the UIAutomation framework. The only problem is that until recently, if you tried to do this, you’d get a message saying that testing UI with OCUnit was only supported on physical devices. In other words, if you wanted to do this, you’d need to run it with a tethered phone or pad attached, which is pretty non-optimal.

Recently, things have changed. Now, if you create a Cocoa Touch Unit Testing Target, and then go into the build settings for the target and set the Test Host parameter to $(BUNDLE_LOADER), you can run tests that directly manipulate the UI. You can see how this is set in Figure 5-9.

Figure 5-9. Setting the test host

With the tests set to run against the test host, we can now write UI OCUnit tests, such as the one in Example 5-5.

Example 5-5. A Cocoa Touch unit test
#import "BuggyWhipChatTests.h"
#import "BuggyWhipChatAppDelegate.h"
#import "RootViewController.h"
#import "DetailViewController.h"

@implementation BuggyWhipChatTests

BuggyWhipChatAppDelegate *delegate;

- (void)setUp{
    [super setUp];
    delegate = 
       [[UIApplication sharedApplication] delegate];
}

- (void)tearDown{
    [super tearDown];
}

- (void) testUIElementsPresent {
    RootViewController *controller =
      delegate.rootViewController;
    STAssertNotNil(controller, 
          @"Root view controller not found");
    if ([[UIDevice currentDevice] userInterfaceIdiom] ==
          UIUserInterfaceIdiomPad) {
        STAssertNotNil(controller.detailViewController,
           @"Detail view controller not found on iPad");
    }
}

-(void) testSOAPWeather {
    RootViewController *controller =
             delegate.rootViewController;
    DetailViewController *detail =
             controller.detailViewController;
    detail.outputView.text = @"";
    [detail showSOAPWeather:nil];
    int i;
    for (i = 0; i < 30; i++) {
        [[NSRunLoop currentRunLoop] runUntilDate: 
              [NSDate dateWithTimeIntervalSinceNow: 1]];
        if ([detail.outputView.text length] > 0) {
            break;
        }
    }
    STAssertTrue([detail.outputView.text length] > 0,
                 @"Detail view is blank");
}

-(void) testZipCode {
    RootViewController *controller = 
        delegate.rootViewController;
    DetailViewController *detail = 
        controller.detailViewController;
    detail.outputView.text = @"";
    [detail lookupZipCode:nil];
    int i;
    for (i = 0; i < 30; i++) {
        [[NSRunLoop currentRunLoop] runUntilDate: 
                 [NSDate dateWithTimeIntervalSinceNow: 1]];
        if ([detail.outputView.text length] > 0) {
            break;
        }
    }
    STAssertTrue([detail.outputView.text length] > 0,
                 @"Detail view is blank");
}

@end

As opposed to the arcane UIAutomation framework, this is pretty straightforward. In the setup for each test, we grab a handle on the application delegate. Then in the first test, we check to make sure that we can find the root view controller—and if we’re running in iPad mode, that we can find the detail controller.

In the two meaningful tests, we set the text of the output view to empty, then trigger the event handler for the button (which simulates the action of pressing the button itself). In order to let the asynchronous network request run, we go into a loop, giving the current run loop a second to execute, then checking to see if the output view has been set. If we see it is set, we break out of the loop. After the loop ends (one way or another), we test to see if we got a result.

This style of testing is much easier to create, and can be incorporated directly into the development process, rather than requiring the developer to break out into JavaScript to create the UI tests. It will also allow code coverage of UI testing.

The one current problem with UI OCUnit testing is that no one has figured out how to run it from the command line, so it can’t be used as part of an automated build process. Because it uses the simulator, this will be a fragile thing even if it is enabled, because the simulator can get into funny states or hang. But even in its present state, if it continues to work, it represents a big step forward for the testing process.

Now that our app is all tested and happy, we’re ready to put it up for sale. But if this is the first time you’ve thought about the iTunes store, you’re in for a rude surprise. You need to be thinking about it from the first day of your project, which is just what the next chapter is about.
