Chapter 15

Taps, Touches, and Gestures

The screens of the iPhone, iPod touch, and iPad—with their crisp, bright, touch-sensitive displays—are truly things of beauty and masterpieces of engineering. The multitouch screen common to all iOS devices is one of the key factors in the platform's tremendous usability. Because the screen can detect multiple touches at the same time and track them independently, applications can recognize a wide range of gestures, giving the user a richer vocabulary of interaction than a single pointer ever could.

As an example, suppose you’re in the Mail application staring at a long list of junk e-mail that you want to delete. You have several options:

  • You can tap each e-mail message individually, tap the trash icon to delete the message, and then wait for the next message to download, deleting each one in turn. This method is best if you want to read each e-mail message before you delete it.
  • From the list of e-mail messages, you can tap the Edit button in the upper-right corner, tap each e-mail row to mark it, and then hit the Delete button to delete all marked messages. This method is best if you don't need to read each message before deleting it.
  • Swipe an e-mail message in the list from side-to-side. This gesture produces a Delete button for that e-mail. Tap the Delete button, and the message is deleted.

These examples are just a few of the countless gestures that are made possible by the multitouch display. You can pinch your fingers together to zoom out while viewing a picture, or reverse-pinch to zoom in. You can long-press on an icon to turn on “jiggly mode,” which allows you to delete applications from your iOS device.

In this chapter, we're going to look at the underlying architecture that lets you detect gestures. You'll learn how to detect the most common ones, and how to create and detect a completely new gesture.

Multitouch Terminology

Before we dive into the architecture, let's go over some basic vocabulary:

  • Event: Generated when you interact with the device's multitouch screen. A gesture is passed through the system inside a series of events. Events contain information about the touch or touches that occurred. Events are passed through the responder chain, as discussed in the next section.
  • Gesture: Any sequence of events that happens from the time you touch the screen with one or more fingers until you lift your fingers off the screen. No matter how long it takes, as long as one or more fingers are still against the screen, you are within a gesture (unless a system event, such as an incoming phone call, interrupts it). Note that Cocoa Touch doesn't expose any class or structure that represents a gesture. In a sense, a gesture is a verb, and a running app can watch the user input stream to see if one is happening.
  • Touch: Refers to a finger being placed on the screen, dragging across the screen, or being lifted from the screen. The number of touches involved in a gesture is equal to the number of fingers on the screen at the same time. You can actually put all five fingers on the screen, and as long as they aren't too close to each other, iOS can recognize and track each of them. Now, there aren't many useful five-finger gestures, but it's nice to know that iOS can handle one if necessary.

    NOTE: In fact, experimentation has shown that the iPad can handle up to 11 simultaneous touches! This may seem excessive, but could be useful if you're working on a multiplayer game, where several players are interacting with the screen at the same time.

  • Tap: Happens when the user touches the screen with a single finger and then immediately lifts the finger off the screen without moving it around. The iOS device keeps track of the number of taps and can tell you if the user double-tapped or triple-tapped, or even tapped 20 times. It handles all the timing and other work necessary to differentiate between two single-taps and a double-tap, for example.
  • Gesture recognizer: An object that knows how to watch the stream of events generated by a user and recognize when the user is touching and dragging in a way that matches a predefined gesture. Included in iOS 3.2 and up, the UIGestureRecognizer class and its various subclasses can help take a lot of work off your hands when you want to watch for common gestures. It nicely encapsulates the work of looking for a gesture, and it can be easily applied to any view in your application.
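As a quick preview of how little code a gesture recognizer involves, here's a minimal sketch that attaches a double-tap recognizer to a view. The handleDoubleTap: action method is hypothetical; we'll work with recognizers in detail later in the chapter.

```objective-c
// A minimal sketch: attach a double-tap recognizer to a view.
// handleDoubleTap: is a hypothetical action method you would define yourself.
UITapGestureRecognizer *doubleTap = [[[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleDoubleTap:)] autorelease];
doubleTap.numberOfTapsRequired = 2;
[self.view addGestureRecognizer:doubleTap];
```

When the recognizer sees a double-tap inside the view, it calls handleDoubleTap: on its target; your code never touches the raw event stream.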

The Responder Chain

Since gestures are passed through the system inside events, and events are passed through the responder chain, you need to understand how the responder chain works in order to handle gestures properly.

If you've worked with Cocoa for Mac OS X, you're probably familiar with the concept of a responder chain, as the same basic mechanism is used in both Cocoa and Cocoa Touch. If this is new material, don't worry; we'll explain how it works.

Several times in this book, we've mentioned the first responder, which is usually the object with which the user is currently interacting. The first responder is the start of the responder chain. There are other responders as well.

Any class that has UIResponder as one of its superclasses is a responder. UIView is a subclass of UIResponder, and UIControl is a subclass of UIView, so all views and all controls are responders. UIViewController is also a subclass of UIResponder, meaning that it is a responder, as are all of its subclasses like UINavigationController and UITabBarController. Responders, then, are so named because they respond to system-generated events, such as screen touches.

Up the Responder Chain

If the first responder doesn't handle a particular event, such as a gesture, it passes that event up the responder chain. If the next object in the chain responds to that particular event, it will usually consume the event, which stops the event's progression through the responder chain.

In some cases, if a responder only partially handles an event, that responder will take an action and forward the event to the next responder in the chain. That's not usually what happens, though. Normally, when an object responds to an event, that's the end of the line for the event. If the event goes through the entire responder chain and no object handles the event, the event is then discarded.

Let's take a more specific look at the responder chain. The first responder is almost always a view or control, and it gets the first shot at responding to an event. If the first responder doesn't handle the event, it passes the event to its view controller. If the view controller doesn't consume the event, the event is then passed to the first responder's parent view. If the parent view doesn't respond, the event will go to the parent view's controller, if it has one. The event will proceed up the view hierarchy, with each view and then that view's controller getting a chance to handle the event. If the event makes it all the way up through the view hierarchy, the event is passed to the application's window. If the window doesn't handle the event, it passes that event to the application's UIApplication object instance. If UIApplication doesn't respond to the event, it goes gently into that good night.

This process is important for a number of reasons. First, it controls the way gestures can be handled. Let's say a user is looking at a table and swipes a finger across a row of that table. Which object handles that gesture?

If the swipe is within a view or control that's a subview of the table view cell, that view or control will get a chance to respond. If it doesn't, the table view cell gets a chance. In an application like Mail, where a swipe can be used to delete a message, the table view cell probably needs to look at that event to see if it contains a swipe gesture. Most table view cells don't respond to gestures. In that case, the event proceeds up to the table view, and then up the rest of the responder chain, until something responds to that event or it reaches the end of the line.

Forwarding an Event: Keeping the Responder Chain Alive

Let's take a step back to that table view cell in the Mail application. We don't know the internal details of Apple's Mail application, but let's assume, for the nonce, that the table view cell handles the delete swipe, and only the delete swipe. That table view cell must implement the methods related to receiving touch events (which you'll see in the next section) so that it can check to see if that event contained a swipe gesture. If the event contains a swipe, then the table view cell takes an action, and that's that; the event goes no further.

If the event doesn't contain a swipe gesture, the table view cell is responsible for forwarding that event manually to the next object in the responder chain. If it doesn't do its forwarding job, the table and other objects up the chain will never get a chance to respond, and the application may not function as the user expects. That table view cell could prevent other views from recognizing a gesture.

Whenever you respond to a touch event, you need to keep in mind that your code doesn't work in a vacuum. If an object intercepts an event that it doesn't handle, it needs to pass it along manually, by calling the same method on the next responder. Here's a bit of fictional code:

-(void)respondToFictionalEvent:(UIEvent *)event {
    if (someCondition)
        [self handleEvent:event];
    else
        [self.nextResponder respondToFictionalEvent:event];
}

Notice how we call the same method on the next responder. That's how to be a good responder chain citizen. Fortunately, most of the time, methods that respond to an event also consume the event. But it's important to know that if that's not the case, you need to make sure the event gets pushed back into the responder chain.

The Multitouch Architecture

Now that you know a little about the responder chain, let's look at the process of handling gestures. As you've learned, gestures are passed along the responder chain, embedded in events. That means that the code to handle any kind of interaction with the multitouch screen needs to be contained in an object in the responder chain. Generally, that means you can either choose to embed that code in a subclass of UIView or embed the code in a UIViewController.

So does this code belong in the view or in the view controller?

Where to Put Touch Code

If the view needs to do something to itself based on the user's touches, the code probably belongs in the class that defines that view. For example, many control classes, such as UISwitch and UISlider, respond to touch-related events. A UISwitch might want to turn itself on or off based on a touch. The folks who created the UISwitch class embedded gesture-handling code in the class so the UISwitch can respond to a touch.

Often, however, when the gesture being processed affects more than the object being touched, the gesture code belongs in the view's controller class. For example, if the user makes a gesture touching one row that indicates that all rows should be deleted, the gesture should be handled by code in the view controller.

The way you respond to touches and gestures in both situations is exactly the same, regardless of the class to which the code belongs.

The Four Touch-Notification Methods

Four methods are used to notify a responder about touches: touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:.

When the user first touches the screen, the iOS device looks for a responder that has a method called touchesBegan:withEvent:. To find out when the user first begins a gesture or taps the screen, implement this method in your view or your view controller. Here's an example of what that method might look like:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {

    NSUInteger numTaps = [[touches anyObject] tapCount];
    NSUInteger numTouches = [touches count];

    // Do something here.
}

This method, like all of the touch-related methods, is passed an NSSet instance called touches and an instance of UIEvent. You can determine the number of fingers currently pressed against the screen by getting a count of the objects in touches. Every object in touches is a UITouch object that represents one finger touching the screen. If this touch is part of a series of taps, you can find out the tap count by asking any of the UITouch objects. If touches contains more than one object, you know the tap count must be one, because the system keeps tap counts only as long as just one finger is being used to tap the screen. In the preceding example, if numTouches is 2, you know the user tapped the screen with two fingers at once.

Not all of the objects in touches are necessarily relevant to the view or view controller where you've implemented this method. A table view cell, for example, probably doesn't care about touches that are in other rows or in the navigation bar. You can get a subset of touches that has only those touches that fall within a particular view from the event, like so:

NSSet *myTouches = [event touchesForView:self.view];

Every UITouch represents a different finger, and each finger is located at a different position on the screen. You can find out the position of a specific finger using the UITouch object. It will even translate the point into the view's local coordinate system if you ask it to, like this:

CGPoint point = [touch locationInView:self];

You can get notified while the user is moving fingers across the screen by implementing touchesMoved:withEvent:. This method is called multiple times during a long drag; each time it is called, you will get another set of touches and another event. In addition to being able to find out each finger's current position from the UITouch objects, you can also find out the previous location of that touch, which is the finger's position the last time either touchesMoved:withEvent: or touchesBegan:withEvent: was called.
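For instance, a touchesMoved:withEvent: implementation can compare a finger's current and previous positions to see how far it moved since the last notification. This is just a sketch, assuming the method lives in a view controller:

```objective-c
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self.view];
    CGPoint previous = [touch previousLocationInView:self.view];

    // The distance this finger has traveled since the last notification.
    CGFloat deltaX = current.x - previous.x;
    CGFloat deltaY = current.y - previous.y;
    // Do something with deltaX and deltaY here.
}
```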

When the user's fingers are removed from the screen, another method, touchesEnded:withEvent:, is called. When this method is called, you know that the user is finished with a gesture.

The final touch-related method that responders might implement is touchesCancelled:withEvent:. This method is called if the user is in the middle of a gesture when something happens to interrupt it, like the phone ringing. This is where you can do any cleanup you might need so you can start fresh with a new gesture. When this method is called, touchesEnded:withEvent: will not be called for the current gesture.
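A typical implementation just resets whatever state you were tracking. Here's a sketch, where gestureInProgress is a hypothetical instance variable your class might use to track a gesture:

```objective-c
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Discard any in-progress gesture state so the next touch starts fresh.
    // gestureInProgress is a hypothetical instance variable.
    gestureInProgress = NO;
}
```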

OK, enough theory—let's see some of this in action.

Detecting Touches

Our first example is a little application that will give you a better feel for when the four touch-related responder methods are called. In Xcode, create a new project using the View-based Application template. For Product, choose iPhone, and call the new project TouchExplorer.

Our TouchExplorer app will print messages to the screen containing the touch and tap count every time a touch-related method is called (see Figure 15-1).

Image

Figure 15-1. The TouchExplorer application

NOTE: Although the applications in this chapter will run on the simulator, you won't be able to see all of the available multitouch functionality unless you run them on an iPhone or iPod touch. If you've been accepted into the iOS Developer Program, you can run the programs you write on your device of choice. The Apple web site does a great job of walking you through the process of getting everything you need to prepare to connect Xcode to your device.

Building the TouchExplorer Application

We need three labels for this application: one to indicate which method was last called, another to report the current tap count, and a third to report the number of touches. Single-click TouchExplorerViewController.h, and add three outlets and a method declaration. The method will be used to update the labels from multiple places.

#import <UIKit/UIKit.h>

@interface TouchExplorerViewController : UIViewController {
    UILabel    *messageLabel;
    UILabel    *tapsLabel;
    UILabel    *touchesLabel;
}
@property (nonatomic, retain) IBOutlet UILabel *messageLabel;
@property (nonatomic, retain) IBOutlet UILabel *tapsLabel;
@property (nonatomic, retain) IBOutlet UILabel *touchesLabel;
- (void)updateLabelsFromTouches:(NSSet *)touches;
@end

Now, double-click TouchExplorerViewController.xib to edit the file. In Interface Builder, double-click the View icon in the dock to edit the view if the view editor is not already open. Drag three Labels from the library to the View window. Control-drag from the File's Owner icon to each of the three labels, connecting one to the messageLabel outlet, another to the tapsLabel outlet, and the last one to the touchesLabel outlet.

You should resize the labels so that they take up the full width of the view and also center the text, but the exact placement of the labels doesn't matter. You can also play with the fonts and colors if you're feeling a bit Picasso. When you're finished placing the labels, double-click each label and press the delete key to get rid of the text that's in them.

Finally, single-click the View icon in the main nib window and bring up the attributes inspector (see Figure 15-2). In the inspector, go to the bottom of the View section and make sure that both User Interaction Enabled and Multiple Touch are checked. If Multiple Touch is not checked, your controller class's touch methods will always receive one, and only one, touch—no matter how many fingers are actually touching the screen.

Image

Figure 15-2. In the view's attributes, make sure both User Interaction Enabled and Multiple Touch are checked.

When you're finished, save the nib. Then return to Xcode, select TouchExplorerViewController.m, and add the following code at the beginning of that file:

#import "TouchExplorerViewController.h"

@implementation TouchExplorerViewController
@synthesize messageLabel;
@synthesize tapsLabel;
@synthesize touchesLabel;

- (void)updateLabelsFromTouches:(NSSet *)touches {
    NSUInteger numTaps = [[touches anyObject] tapCount];
    NSString *tapsMessage = [[NSString alloc]
        initWithFormat:@"%d taps detected", numTaps];
    tapsLabel.text = tapsMessage;
    [tapsMessage release];

    NSUInteger numTouches = [touches count];
    NSString *touchMsg = [[NSString alloc] initWithFormat:
        @"%d touches detected", numTouches];
    touchesLabel.text = touchMsg;
    [touchMsg release];
}
...

Next, insert the following lines of code into the existing viewDidUnload and dealloc methods:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.messageLabel = nil;
    self.tapsLabel = nil;
    self.touchesLabel = nil;
    [super viewDidUnload];
}
- (void)dealloc {
    [messageLabel release];
    [tapsLabel release];
    [touchesLabel release];
    [super dealloc];
}
...

And add the following new methods at the end of the file:

...
#pragma mark -
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    messageLabel.text = @"Touches Began";
    [self updateLabelsFromTouches:touches];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    messageLabel.text = @"Touches Cancelled";
    [self updateLabelsFromTouches:touches];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    messageLabel.text = @"Touches Ended";
    [self updateLabelsFromTouches:touches];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    messageLabel.text = @"Drag Detected";
    [self updateLabelsFromTouches:touches];
}
@end

In this controller class, we implement all four of the touch-related methods discussed earlier. Each one sets messageLabel so the user can see when each method is called. Next, all four of them call updateLabelsFromTouches: to update the other two labels. The updateLabelsFromTouches: method gets the tap count from one of the touches, figures out the number of touches by looking at the count of the touches set, and updates the labels with that information.

Running TouchExplorer

Compile and run the application. If you're running in the simulator, try repeatedly clicking the screen to drive up the tap count, and try clicking and holding down the mouse button while dragging around the view to simulate a touch and drag. Note that a drag is not the same as a tap, so once you start your drag, the app will report zero taps.

You can emulate a two-finger pinch in the iPhone simulator by holding down the option key while you click with the mouse and drag. You can also simulate two-finger swipes by holding down the option key to simulate a pinch, moving the mouse so the two dots representing virtual fingers are next to each other, and then holding down the shift key (while still holding down the option key). Pressing the shift key will lock the position of the two fingers relative to each other, and you can do swipes and other two-finger gestures. You won't be able to do gestures that require three or more fingers, but you can do most two-finger gestures on the simulator using combinations of the option and shift keys.

If you're able to run this program on your iPhone or iPod touch, see how many touches you can get to register at the same time. Try dragging with one finger, then two fingers, and then three. Try double- and triple-tapping the screen, and see if you can get the tap count to go up by tapping with two fingers.

Play around with the TouchExplorer application until you feel comfortable with what's happening and with the way that the four touch methods work. When you're ready, move on to the next section, which demonstrates how to detect one of the most common gestures: the swipe.

Detecting Swipes

The application we're about to build, called Swipes, does nothing more than detect swipes, both horizontal and vertical (see Figure 15-3). If you swipe your finger across the screen from left to right, right to left, top to bottom, or bottom to top, Swipes will display a message across the top of the screen for a few seconds, informing you that a swipe was detected.

Image

Figure 15-3. The Swipes application will detect both vertical and horizontal swipes.

Detecting swipes is relatively easy. We're going to define a minimum gesture length in points, which is how far the user must swipe before the gesture counts as a swipe. We'll also define a variance, which is how far from a straight line the user can veer and still have the gesture count as a horizontal or vertical swipe. A diagonal line generally won't count as a swipe, but one that's just a little off from horizontal or vertical will.

When the user touches the screen, we'll save the location of the first touch in a variable. Then we'll check as the user's finger moves across the screen to see if it reaches a point where it has gone far enough and straight enough to count as a swipe. Let's build it.

Building the Swipes Application

Create a new project in Xcode using the View-based Application template and a Product of iPhone. Name the project Swipes.

Click SwipesViewController.h, and add the following code:

#import <UIKit/UIKit.h>

@interface SwipesViewController : UIViewController {
    UILabel     *label;
    CGPoint     gestureStartPoint;
}
@property (nonatomic, retain) IBOutlet UILabel *label;
@property CGPoint gestureStartPoint;
- (void)eraseText;
@end

We start by declaring an outlet for our one label and a variable to hold the first spot the user touches. Then we declare a method that will be used to erase the text after a few seconds.

Double-click SwipesViewController.xib to open it for editing. In the attributes inspector, make sure that the view is set so User Interaction Enabled and Multiple Touch are both checked. Then drag a Label from the library and drop it on the View window. Set up the label so it takes the entire width of the view from blue line to blue line, and its alignment is centered. Feel free to play with the text attributes to make the label easier to read. Next, double-click the label and delete its text. Control-drag from the File's Owner icon to the label, and connect it to the label outlet.

Save your nib. Now return to Xcode, and select SwipesViewController.m. Add the following code at the top:

#import "SwipesViewController.h"

#define kMinimumGestureLength    25
#define kMaximumVariance         5

@implementation SwipesViewController
@synthesize label;
@synthesize gestureStartPoint;

- (void)eraseText {
    label.text = @"";
}
...

We start by defining a minimum gesture length of 25 points and a variance of 5. If the user is doing a horizontal swipe, the gesture could end up 5 points above or below the starting vertical position and still count as a swipe, as long as the user moves 25 points horizontally. In a real application, you will probably need to play with these numbers a bit to find what works best for that application.

Insert the following lines of code into the existing viewDidUnload and dealloc methods:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.label = nil;
    [super viewDidUnload];
}

- (void)dealloc {
    [label release];
    [super dealloc];
}
...

And add the following methods at the bottom of the class:

#pragma mark -
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {

    UITouch *touch = [touches anyObject];
    gestureStartPoint = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPosition = [touch locationInView:self.view];

    CGFloat deltaX = fabsf(gestureStartPoint.x - currentPosition.x);
    CGFloat deltaY = fabsf(gestureStartPoint.y - currentPosition.y);

    if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
        label.text = @"Horizontal swipe detected";
        [self performSelector:@selector(eraseText)
            withObject:nil afterDelay:2];
    }
    else if (deltaY >= kMinimumGestureLength &&
            deltaX <= kMaximumVariance){
        label.text = @"Vertical swipe detected";
        [self performSelector:@selector(eraseText) withObject:nil
            afterDelay:2];
    }
}

@end

Let's start with the touchesBegan:withEvent: method. All we do there is grab any touch from the touches set and store its point. We're primarily interested in single-finger swipes right now, so we don't worry about how many touches there are; we just grab one of them.

   UITouch *touch = [touches anyObject];
   gestureStartPoint = [touch locationInView:self.view];

In the next method, touchesMoved:withEvent:, we do the real work. First, we get the current position of the user's finger.

   UITouch *touch = [touches anyObject];
   CGPoint currentPosition = [touch locationInView:self.view];

After that, we calculate how far the user's finger has moved both horizontally and vertically from its starting position. The function fabsf(), from the standard C math library, returns the absolute value of a float. Using absolute values lets us subtract one coordinate from the other without needing to worry about which is the larger value.

   CGFloat deltaX = fabsf(gestureStartPoint.x - currentPosition.x);
   CGFloat deltaY = fabsf(gestureStartPoint.y - currentPosition.y);

Once we have the two deltas, we check to see if the user has moved far enough in one direction without having moved too far in the other to constitute a swipe. If so, we set the label's text to indicate whether a horizontal or vertical swipe was detected. We also use performSelector:withObject:afterDelay: to erase the text after it has been on the screen for 2 seconds. That way, the user can practice multiple swipes without needing to worry whether the label is referring to an earlier attempt or the most recent one.

    if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
        label.text = @"Horizontal swipe detected";
        [self performSelector:@selector(eraseText)
            withObject:nil afterDelay:2];
    }
    else if (deltaY >= kMinimumGestureLength &&
            deltaX <= kMaximumVariance){
        label.text = @"Vertical swipe detected";
        [self performSelector:@selector(eraseText)
            withObject:nil afterDelay:2];
    }

Go ahead and compile and run Swipes. If you find yourself clicking and dragging with no visible results, be patient. Click and drag straight down or straight across until you get the hang of swiping.

Using Automatic Gesture Recognition

The procedure we used for detecting a swipe isn't too bad. All the complexity is in the touchesMoved:withEvent: method, and even that isn't all that complicated. But there's an even easier way to do this. iOS now includes a class called UIGestureRecognizer, which eliminates the need for watching all the events to see how fingers are moving. You don't use UIGestureRecognizer directly, but instead create an instance of one of its subclasses, each of which is designed to look for a particular type of gesture, such as a swipe, pinch, double-tap, triple-tap, and so on.

Let's see how to modify the Swipes app to use a gesture recognizer instead of our hand-rolled procedure. As always, you might want to make a copy of your Swipes project folder and start from there.

Start off by selecting SwipesViewController.m, and deleting both the touchesBegan:withEvent: and touchesMoved:withEvent: methods. That's right, you won't be needing them. Then add a couple of new methods in their place:

- (void)reportHorizontalSwipe:(UIGestureRecognizer *)recognizer {
  label.text = @"Horizontal swipe detected";
  [self performSelector:@selector(eraseText) withObject:nil afterDelay:2];
}

- (void)reportVerticalSwipe:(UIGestureRecognizer *)recognizer {
  label.text = @"Vertical swipe detected";
  [self performSelector:@selector(eraseText) withObject:nil afterDelay:2];
}

Those methods implement the actual “functionality” (if you can call it that) that's brought about by the swipe gestures, just as the touchesMoved:withEvent: method did previously. Now, remove the comment markers around the viewDidLoad method, and add the new code shown here:

- (void)viewDidLoad {
    [super viewDidLoad];
    UISwipeGestureRecognizer *vertical = [[[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(reportVerticalSwipe:)] autorelease];
    vertical.direction = UISwipeGestureRecognizerDirectionUp|
        UISwipeGestureRecognizerDirectionDown;
    [self.view addGestureRecognizer:vertical];

    UISwipeGestureRecognizer *horizontal = [[[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(reportHorizontalSwipe:)] autorelease];
    horizontal.direction = UISwipeGestureRecognizerDirectionLeft|
        UISwipeGestureRecognizerDirectionRight;
    [self.view addGestureRecognizer:horizontal];
}

There you have it! To tidy things up even further, you can also delete the lines referring to gestureStartPoint from SwipesViewController.h (though leaving them there won't do any harm).

Thanks to UIGestureRecognizer, all we needed to do here was create and configure some gesture recognizers, and add them to our view. When the user interacts with the screen in a way that one of the recognizers recognizes, the action method we specified is called.

In terms of total lines of code, there's not much difference between using gesture recognizers and the previous approach for a simple case like this. But the code that uses gesture recognizers is undeniably simpler to understand and easier to write. You don't need to give even a moment's thought to the issue of calculating a finger's movement over time, because that's already done for you by the UISwipeGestureRecognizer.

Implementing Multiple Swipes

In the Swipes application, we just grab any object out of the touches set to figure out where the user's finger is during the swipe. This approach is fine if we're interested in only single-finger swipes, the most common type of swipe used.

But what if we want to handle two- or three-finger swipes? In previous versions of this book, we dedicated about 50 lines of code, and a fair amount of explanation, to achieving this by tracking multiple UITouch instances across multiple touch events. Fortunately, now that we have gesture recognizers, this problem is solved.

A UISwipeGestureRecognizer can be configured to recognize any number of simultaneous touches. By default, each instance expects a single finger, but you can configure it to look for any number of fingers pressing the screen at once. Each instance responds to only the exact number of touches you specify. So, to update our app, we'll create a whole bunch of gesture recognizers in a loop.

Make yet another copy of your Swipes project folder. Edit SwipesViewController.m and modify the viewDidLoad method, replacing it with the one shown here:

- (void)viewDidLoad {
    [super viewDidLoad];
    UISwipeGestureRecognizer *vertical;

    for (NSUInteger touchCount = 1; touchCount <= 5; touchCount++) {
        vertical = [[[UISwipeGestureRecognizer alloc] initWithTarget:self
            action:@selector(reportVerticalSwipe:)] autorelease];
        vertical.direction = UISwipeGestureRecognizerDirectionUp|
            UISwipeGestureRecognizerDirectionDown;
        vertical.numberOfTouchesRequired = touchCount;
        [self.view addGestureRecognizer:vertical];

        UISwipeGestureRecognizer *horizontal;
        horizontal = [[[UISwipeGestureRecognizer alloc] initWithTarget:self
            action:@selector(reportHorizontalSwipe:)] autorelease];
        horizontal.direction = UISwipeGestureRecognizerDirectionLeft|
            UISwipeGestureRecognizerDirectionRight;
        horizontal.numberOfTouchesRequired = touchCount;
        [self.view addGestureRecognizer:horizontal];
    }
}

Note that in a real application, you might want different numbers of fingers swiping across the screen to trigger different behaviors. You can easily do that using gesture recognizers, simply by having each call a different action method.
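For instance, inside viewDidLoad you might map each finger count to its own action method. Here's a quick sketch of the idea (the reportTwoFingerSwipe: and reportThreeFingerSwipe: method names are hypothetical, not part of our project):

```objc
// Hypothetical sketch: route two- and three-finger horizontal swipes to
// different action methods. These selector names are our invention.
SEL actions[] = { @selector(reportTwoFingerSwipe:),
                  @selector(reportThreeFingerSwipe:) };
for (NSUInteger touchCount = 2; touchCount <= 3; touchCount++) {
    UISwipeGestureRecognizer *swipe = [[[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:actions[touchCount - 2]] autorelease];
    swipe.direction = UISwipeGestureRecognizerDirectionLeft |
        UISwipeGestureRecognizerDirectionRight;
    swipe.numberOfTouchesRequired = touchCount;
    [self.view addGestureRecognizer:swipe];
}
```

Each recognizer responds only to its exact finger count, so the two methods will never both fire for the same swipe.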

Now, all we need to do is change the logging, by adding a method that gives us a handy description of the number of touches and using it in the “report” methods, as shown here. Add this method just below dealloc:

- (NSString *)descriptionForTouchCount:(NSUInteger)touchCount {
    if (touchCount == 2)
        return @"Double ";
    else if (touchCount == 3)
        return @"Triple ";
    else if (touchCount == 4)
        return @"Quadruple ";
    else if (touchCount == 5)
        return @"Quintuple ";
    else
        return @"";
}

Next, modify these two methods as shown:

- (void)reportHorizontalSwipe:(UIGestureRecognizer *)recognizer {
    label.text = [NSString stringWithFormat:@"%@Horizontal swipe detected",
        [self descriptionForTouchCount:[recognizer numberOfTouches]]];
    [self performSelector:@selector(eraseText) withObject:nil afterDelay:2];
}

- (void)reportVerticalSwipe:(UIGestureRecognizer *)recognizer {
    label.text = [NSString stringWithFormat:@"%@Vertical swipe detected",
        [self descriptionForTouchCount:[recognizer numberOfTouches]]];
    [self performSelector:@selector(eraseText) withObject:nil afterDelay:2];
}

Compile and run this version. You should be able to trigger double- and triple-swipes in both directions, and should still be able to trigger single-swipes. If you have small fingers, you might even be able to trigger a quadruple- or quintuple-swipe.

CAUTION: With a multiple-finger swipe, be careful that your fingers aren't too close to each other. If two fingers are very close to each other, they may register as only a single touch. Because of this, you shouldn't rely on quadruple- or quintuple-swipes for any important gestures, because many people will have fingers that are too big to do those swipes effectively.

Detecting Multiple Taps

In the TouchExplorer application, we printed the tap count to the screen, so you've already seen how easy it is to detect multiple taps. However, in a real program, it's not quite as straightforward as it seems, because often you will want to take different actions based on the number of taps.

If the user triple-taps, you get notified three separate times: you get a single-tap, a double-tap, and finally a triple-tap. If you want to do something on a double-tap but something completely different on a triple-tap, having three separate notifications could cause a problem.

Fortunately, the engineers at Apple anticipated this situation, and provided a mechanism to let multiple gesture recognizers play nicely together, even when they're faced with ambiguous inputs that could seemingly trigger any of them. The basic idea is that you place a constraint on a gesture recognizer, telling it to not trigger its associated method unless some other gesture recognizer fails to do so itself.

That seems a bit abstract, so let's make it real. One commonly used gesture recognizer is represented by the UITapGestureRecognizer class. A tap gesture recognizer can be configured to do its thing when a particular number of taps occur. Imagine we have a view for which we want to define distinct actions that occur when the user taps once or double-taps. You might start off with something like the following:

    UITapGestureRecognizer *singleTap = [[[UITapGestureRecognizer alloc] initWithTarget:
        self action:@selector(doSingleTap)] autorelease];
    singleTap.numberOfTapsRequired = 1;
    [self.view addGestureRecognizer:singleTap];

    UITapGestureRecognizer *doubleTap = [[[UITapGestureRecognizer alloc] initWithTarget:
        self action:@selector(doDoubleTap)] autorelease];
    doubleTap.numberOfTapsRequired = 2;
    [self.view addGestureRecognizer:doubleTap];

The problem with this piece of code is that the two recognizers are unaware of each other, and they have no way of knowing that the user's actions may be better suited to another recognizer. With the preceding code, if the user double-taps the view, the doDoubleTap method will be called, but the doSingleTap method will also be called twice, once for each tap.

The way around this is to require failure. We tell singleTap that we want it to trigger its action only if doubleTap doesn't recognize and respond to the user input with this single line:

    [singleTap requireGestureRecognizerToFail:doubleTap];

This means that when the user taps once, singleTap doesn't do its work immediately. Instead, singleTap waits until it knows that doubleTap has decided to stop paying attention to the current gesture (the user didn't tap twice). We're going to build on this further with our next project.

In Xcode, create a new project with the View-based Application template and a Product of iPhone. Call this new project TapTaps. This application will have four labels, to inform us when it has detected a single-tap, double-tap, triple-tap, or quadruple-tap (see Figure 15-4).

Image

Figure 15-4. The TapTaps application detects single-, double-, triple-, and quadruple-taps.

We need outlets for the four labels, and we also need separate methods for each tap scenario to simulate what you would have in a real application. We'll also include a method for erasing the labels. Expand the Classes folder, single-click TapTapsViewController.h, and make the following changes:

#import <UIKit/UIKit.h>

@interface TapTapsViewController : UIViewController {
    UILabel *singleLabel;
    UILabel *doubleLabel;
    UILabel *tripleLabel;
    UILabel *quadrupleLabel;
}
@property (nonatomic, retain) IBOutlet UILabel *singleLabel;
@property (nonatomic, retain) IBOutlet UILabel *doubleLabel;
@property (nonatomic, retain) IBOutlet UILabel *tripleLabel;
@property (nonatomic, retain) IBOutlet UILabel *quadrupleLabel;
- (void)tap1;
- (void)tap2;
- (void)tap3;
- (void)tap4;
- (void)eraseMe:(UITextField *)textField;
@end

Save the file.

Next, expand the Resources folder. Double-click TapTapsViewController.xib to edit the GUI. Once you're there, add four Labels to the view from the library. Make all four labels stretch from blue guideline to blue guideline, and then format them however you see fit. For example, feel free to make each label a different color. When you're finished, control-drag from the File's Owner icon to each label, and connect each one to singleLabel, doubleLabel, tripleLabel, and quadrupleLabel, respectively. Now, make sure you double-click each label and press the delete key to get rid of any text.

Save your changes and return to Xcode. Select TapTapsViewController.m, and add the following code at the top of the file:

#import "TapTapsViewController.h"

@implementation TapTapsViewController
@synthesize singleLabel;
@synthesize doubleLabel;
@synthesize tripleLabel;
@synthesize quadrupleLabel;

- (void)tap1 {
    singleLabel.text = @"Single Tap Detected";
    [self performSelector:@selector(eraseMe:)
        withObject:singleLabel afterDelay:1.6f];
}

- (void)tap2 {
    doubleLabel.text = @"Double Tap Detected";
    [self performSelector:@selector(eraseMe:)
        withObject:doubleLabel afterDelay:1.6f];
}

- (void)tap3 {
    tripleLabel.text = @"Triple Tap Detected";
    [self performSelector:@selector(eraseMe:)
        withObject:tripleLabel afterDelay:1.6f];
}

- (void)tap4 {
    quadrupleLabel.text = @"Quadruple Tap Detected";
    [self performSelector:@selector(eraseMe:)
        withObject:quadrupleLabel afterDelay:1.6f];
}

- (void)eraseMe:(UITextField *)textField {
    textField.text = @"";
}
...

Insert the following lines into the existing dealloc and viewDidUnload methods:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.singleLabel = nil;
    self.doubleLabel = nil;
    self.tripleLabel = nil;
    self.quadrupleLabel = nil;
    [super viewDidUnload];
}

- (void)dealloc {
    [singleLabel release];
    [doubleLabel release];
    [tripleLabel release];
    [quadrupleLabel release];
    [super dealloc];
}

Now, uncomment viewDidLoad and add the following code:

- (void)viewDidLoad {
    [super viewDidLoad];

    UITapGestureRecognizer *singleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap1)] autorelease];
    singleTap.numberOfTapsRequired = 1;
    singleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:singleTap];

    UITapGestureRecognizer *doubleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap2)] autorelease];
    doubleTap.numberOfTapsRequired = 2;
    doubleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:doubleTap];
    [singleTap requireGestureRecognizerToFail:doubleTap];
    UITapGestureRecognizer *tripleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap3)] autorelease];
    tripleTap.numberOfTapsRequired = 3;
    tripleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:tripleTap];
    [doubleTap requireGestureRecognizerToFail:tripleTap];

    UITapGestureRecognizer *quadrupleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap4)] autorelease];
    quadrupleTap.numberOfTapsRequired = 4;
    quadrupleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:quadrupleTap];
    [tripleTap requireGestureRecognizerToFail:quadrupleTap];
}

The four tap methods do nothing more in this application than set one of the four labels and use performSelector:withObject:afterDelay: to erase that same label after 1.6 seconds. The eraseMe: method erases any label that is passed into it.

The interesting part of this is what occurs in the viewDidLoad method. We start off simply enough, by setting up a tap gesture recognizer and attaching it to our view.

    UITapGestureRecognizer *singleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap1)] autorelease];
    singleTap.numberOfTapsRequired = 1;
    singleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:singleTap];

Note that we set both the number of taps required to trigger the action (touches in the same position, one after another) and the number of touches (fingers touching the screen at the same time) to 1. After that, we set up another tap gesture recognizer to handle a double-tap.

    UITapGestureRecognizer *doubleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap2)] autorelease];
    doubleTap.numberOfTapsRequired = 2;
    doubleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:doubleTap];
    [singleTap requireGestureRecognizerToFail:doubleTap];

That's pretty similar to the previous recognizer, right up until that last line, in which we give singleTap some additional context. We are effectively telling singleTap that it should trigger its action only in case some other gesture recognizer—in this case, doubleTap—decides that the current user input isn't what it's looking for.

Let's think about what this means. With those two tap gesture recognizers in place, a single tap in the view will immediately make singleTap think, “Hey, this looks like it's for me.” At the same time, doubleTap will think, “Hey, this looks like it might be for me, but I'll need to wait for one more tap.” Because singleTap is set up to wait for doubleTap's “failure,” it doesn't send its action method right away; instead, it waits to see what happens with doubleTap.

After that first tap, if another tap occurs immediately, then doubleTap thinks, “Hey, that's mine all right,” and fires its action. At that point, singleTap will realize what happened and give up on that gesture. On the other hand, if enough time goes by (the maximum interval that the system allows between the taps of a double-tap), doubleTap will give up, and singleTap will see the failure and finally fire its own action.

The rest of the method goes on to define gesture recognizers for three and four taps, and at each point, configures one gesture to be dependent on the failure of the next.

    UITapGestureRecognizer *tripleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap3)] autorelease];
    tripleTap.numberOfTapsRequired = 3;
    tripleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:tripleTap];
    [doubleTap requireGestureRecognizerToFail:tripleTap];

    UITapGestureRecognizer *quadrupleTap =
        [[[UITapGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(tap4)] autorelease];
    quadrupleTap.numberOfTapsRequired = 4;
    quadrupleTap.numberOfTouchesRequired = 1;
    [self.view addGestureRecognizer:quadrupleTap];
    [tripleTap requireGestureRecognizerToFail:quadrupleTap];

Note that we don't need to explicitly configure every gesture to be dependent on the failure of each of the higher-tap-numbered gestures. That multiple dependency comes about naturally as a result of the chain of failure established in our code. Since singleTap requires the failure of doubleTap, doubleTap requires the failure of tripleTap, and tripleTap requires the failure of quadrupleTap, by extension, singleTap requires that all of the others fail.

Compile and run the app, and whether you single-, double-, triple-, or quadruple-tap, you should see only one label displayed.

Detecting Pinches

Another common gesture is the two-finger pinch. It's used in a number of applications—including Mobile Safari, Mail, and Photos—to let you zoom in (if you pinch apart) and zoom out (if you pinch together).

Detecting pinches is really easy, thanks to UIPinchGestureRecognizer. This one is referred to as a continuous gesture recognizer, because it calls its action method over and over again during the pinch.

While the gesture is underway, the pinch gesture recognizer goes through a number of states. The only one we want to watch for is UIGestureRecognizerStateBegan, which is the state that the recognizer is in when it first calls the action method after detecting that a pinch is happening.

At that moment, the pinch gesture recognizer's scale property is always set to 1.0; for the rest of the gesture, that number goes up and down. We're going to use the scale value to resize the text in a label. We'll build the PinchMe app (see Figure 15-5), which will detect the pinch gesture for both zooming in and zooming out.

Image

Figure 15-5. The PinchMe application detects the pinch gesture, both for zooming in and zooming out.

Create a new project in Xcode, again using the View-based Application template, and call this one PinchMe.

The PinchMe application is going to need only a single outlet for a label, but it also needs an instance variable to hold the size of the label's font at the start of the pinch. Expand the Classes folder, single-click PinchMeViewController.h, and make the following changes:

#import <UIKit/UIKit.h>

@interface PinchMeViewController : UIViewController {
    UILabel *label;
    CGFloat initialFontSize;
}
@property (nonatomic, retain) IBOutlet UILabel *label;
@property CGFloat initialFontSize;

@end

Now that we have our outlet, expand the Resources folder, and double-click PinchMeViewController.xib. In Interface Builder, make sure the view is displayed in its editing window, and drag a single label over to it. Resize the label to fill the entire view, and put a small word or just a letter or two into it. Be sure the label is resized blue guideline to blue guideline, from left to right and top to bottom. This text is what we'll be zooming in and out on. Set the label's alignment to centered. Next, control-drag from the File's Owner icon to the label, and connect it to the label outlet.

Save the nib, bounce back to Xcode, select PinchMeViewController.m, and add the following code at the top of the file:

#import "PinchMeViewController.h"


@implementation PinchMeViewController
@synthesize label;
@synthesize initialFontSize;

...

Clean up our outlet in the dealloc and viewDidUnload methods:

...
- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.label = nil;
    [super viewDidUnload];
}

- (void)dealloc {
    [label release];
    [super dealloc];
}
...

Then remove the comment marks around the viewDidLoad method, and add the following code to the method:

- (void)viewDidLoad {
    [super viewDidLoad];

    UIPinchGestureRecognizer *pinch = [[[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(doPinch:)] autorelease];
    [self.view addGestureRecognizer:pinch];
}

And add the following method at the end of the file:

...
- (void)doPinch:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state == UIGestureRecognizerStateBegan) {
        initialFontSize = label.font.pointSize;
    } else {
        label.font = [label.font fontWithSize:initialFontSize * pinch.scale];
    }
}
@end

In viewDidLoad, we set up a pinch gesture recognizer and tell it to notify us via the doPinch: method when pinching is occurring. Inside doPinch:, we look at the pinch's state to see if it's just starting; if so, we store the current font size for later use. Otherwise, if the pinch is already in progress, we use the stored initial font size and the current pinch scale to calculate a new font size.

And that's all there is to pinch detection. Compile and run the app to give it a try. As you do some pinching, you'll see the text change size in response. If you're on the simulator, remember that you can simulate a pinch by holding down the option key and clicking and dragging in the simulator window using your mouse.

Creating and Using Custom Gestures

You've now seen how to detect the most commonly used iPhone gestures. The real fun begins when you start defining your own custom gestures! You've already seen how to use a few of UIGestureRecognizer's subclasses, so now it's time to learn how to create your own gestures, which can be easily attached to any view you like.

Defining a custom gesture is tricky. You've already mastered the basic mechanism, and that wasn't too difficult. The tricky part is being flexible when defining what constitutes a gesture. Most people are not precise when they use gestures. Remember the variance we used when we implemented the swipe, so that even a swipe that wasn't perfectly horizontal or vertical still counted? That's a perfect example of the subtlety you need to add to your own gesture definitions. If you define your gesture too strictly, it will be useless. If you define it too generically, you'll get too many false positives, which will frustrate the user. In a sense, defining a custom gesture can be hard, because you need to be precise about a gesture's imprecision. If you try to capture a complex gesture like, say, a figure eight, the math behind detecting the gesture is also going to get quite complex.

When defining new gestures for your own applications, make sure you test them thoroughly, and if you can, have other people test them for you as well. You want to make sure that your gesture is easy for the user to do, but not so easy that it gets triggered unintentionally. You also need to make sure that you don't conflict with other gestures used in your application. A single gesture should not count, for example, as both a custom gesture and a pinch.

Defining the Check Mark Gesture

In our sample, we're going to define a gesture shaped like a check mark (see Figure 15-6).

Image

Figure 15-6. Our check mark gesture

What are the defining properties of this check mark gesture? Well, the principal one is that sharp change in angle between the two lines. We also want to make sure that the user's finger has traveled a little distance in a straight line before it makes that sharp angle. In Figure 15-6, the legs of the check mark meet at an acute angle, just under 90 degrees. A gesture that required exactly an 85-degree angle would be awfully hard to get right, so we'll define a range of acceptable angles.

Create a new project in Xcode using the View-based Application template, and call the project CheckPlease. In this project, we're going to need to do some fairly standard analytic geometry to calculate such things as the distance between two points and the angle between two lines. Don't worry if you don't remember much geometry; we've provided you with functions that will do the calculations for you.

Look in the 15 - CheckPlease folder for the two files named CGPointUtils.h and CGPointUtils.c. Drag both of these to the Other Sources folder of your project. Feel free to use these utility functions in your own applications.

Control-click in the Classes folder, and add a new file to the project. Use the new file assistant to create a new Objective-C class (make it a subclass of NSObject for now, since the assistant doesn't give us a way to create a subclass of UIGestureRecognizer). Call the file CheckMarkRecognizer.m, be sure to ask for the .h file, and save it in the project's Classes folder. Then select CheckMarkRecognizer.h, and make the following changes:

#import <Foundation/Foundation.h>

@interface CheckMarkRecognizer : UIGestureRecognizer {
    CGPoint     lastPreviousPoint;
    CGPoint     lastCurrentPoint;
    CGFloat     lineLengthSoFar;
}
@end

Here we declare three variables: lastPreviousPoint, lastCurrentPoint, and lineLengthSoFar. Each time we're notified of a touch, we're given the previous touch point and the current touch point. Those two points define a line segment. The next touch adds another segment. We store the previous touch's previous and current points in lastPreviousPoint and lastCurrentPoint, which gives us the previous line segment. We can then compare that line segment to the current touch's line segment. Comparing these two line segments can tell us if we're still drawing a single line or if there's a sharp enough angle between the two segments that we're actually drawing a check mark.

Remember that every UITouch object knows its current position in the view as well as its previous position in the view. In order to compare angles, however, we need to know the line that the previous two points made, so we need to store the current and previous points from the last time the user touched the screen. We'll use these two variables to store those two values each time this method is called, so that we have the ability to compare the current line to the previous line and check the angle.

We also declare a variable to keep a running count of how far the user has dragged the finger. If the finger hasn't traveled at least 10 pixels (the value in kMinimumCheckMarkLength), whether the angle falls in the correct range doesn't matter. If we didn't require this distance, we would receive a lot of false positives.

Now, select CheckMarkRecognizer.m, and make the following changes:

#import "CheckMarkRecognizer.h"
#import "CGPointUtils.h"
#import <UIKit/UIGestureRecognizerSubclass.h>

#define kMinimumCheckMarkAngle    50
#define kMaximumCheckMarkAngle    135
#define kMinimumCheckMarkLength   10

@implementation CheckMarkRecognizer

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    lastPreviousPoint = point;
    lastCurrentPoint = point;
    lineLengthSoFar = 0.0f;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    CGPoint previousPoint = [touch previousLocationInView:self.view];
    CGPoint currentPoint = [touch locationInView:self.view];
    CGFloat angle = angleBetweenLines(lastPreviousPoint,
                                      lastCurrentPoint,
                                      previousPoint,
                                      currentPoint);
    if (angle >= kMinimumCheckMarkAngle && angle <= kMaximumCheckMarkAngle
            && lineLengthSoFar > kMinimumCheckMarkLength) {
        self.state = UIGestureRecognizerStateEnded;
    }
    lineLengthSoFar += distanceBetweenPoints(previousPoint, currentPoint);
    lastPreviousPoint = previousPoint;
    lastCurrentPoint = currentPoint;
}
@end

After importing CGPointUtils.h, the file we mentioned earlier, we import a special header file called UIGestureRecognizerSubclass.h, which contains declarations that are intended for use only by subclasses. The important thing this does for us is to make the gesture recognizer's state property writable. That's the mechanism our subclass will use to affirm that the gesture we're watching for was completed successfully.

Then we define the parameters that we use to decide whether the user's finger-squiggling matches our definition of a check mark. You can see that we've defined a minimum angle of 50 degrees and a maximum angle of 135 degrees. This is a pretty broad range, and depending on your needs, you might decide to restrict the angle.

We experimented a bit with the angles and found that our practice check mark gestures fell into a fairly broad range, which is why we chose a relatively large tolerance here. We were somewhat sloppy with our check mark gestures, and so we expect that at least some of our users will be just as sloppy. As a wise man once said, “Be rigorous in what you produce and tolerant in what you accept.”

Let's take a look at the touch methods. You'll notice that each of them first calls the superclass's implementation, something we've never done before. We need to do this in a UIGestureRecognizer subclass, so that our superclass can have the same amount of knowledge about the events as we do.

In touchesBegan:withEvent:, we determine the point that the user is currently touching and store that value in lastPreviousPoint and lastCurrentPoint. Since this method is called when a gesture begins, we know there is no previous point to worry about, so we store the current point in both. We also reset the running line length count to 0.

Then, in touchesMoved:withEvent:, we calculate the angle between the line from the current touch's previous position to its current position, and the line between the two points stored in the lastPreviousPoint and lastCurrentPoint instance variables. Once we have that angle, we check whether it falls within our range of acceptable angles and make sure that the user's finger has traveled far enough before making that sharp turn. If both of those are true, we set the recognizer's state to UIGestureRecognizerStateEnded, which tells the system that our gesture has been recognized and causes the action method to fire. Next, we calculate the distance between the touch's position and its previous position, add that to lineLengthSoFar, and replace the values in lastPreviousPoint and lastCurrentPoint with the two points from the current touch so we'll have them the next time through this method.

Now that we have a gesture recognizer of our own to try out, it's time to connect it to a view, just as we've done with the others we've used.

Attaching the Check Mark Gesture to a View

Single-click CheckPleaseViewController.h, and make the following changes:

#import <UIKit/UIKit.h>

@interface CheckPleaseViewController : UIViewController {
    UILabel     *label;
}
@property (nonatomic, retain) IBOutlet UILabel *label;

@end

Here, we simply define an outlet to a label that we'll use to inform the user when we've detected a check mark gesture.

Expand the Resources folder, and double-click CheckPleaseViewController.xib to edit the GUI. Add a Label from the library and set it up the way you want it to look. Control-drag from the File's Owner icon to that label to connect it to the label outlet and double-click the label to delete its text. Save the nib file.

Now, return to Xcode, choose to edit CheckPleaseViewController.m, and add the following code to the top of the file:

#import "CheckPleaseViewController.h"
#import "CheckMarkRecognizer.h"

@implementation CheckPleaseViewController
@synthesize label;

- (void)doCheck:(CheckMarkRecognizer *)check {
    label.text = @"Checkmark";
    [self performSelector:@selector(eraseLabel)
               withObject:nil afterDelay:1.6];    
}

- (void)eraseLabel {
    label.text = @"";
}

That gives us an action method to connect our recognizer to, which in turn triggers the familiar-looking eraseLabel method. Next, remove the comment markers around the viewDidLoad method, and add the following lines, which connect an instance of our new recognizer to the view:

- (void)viewDidLoad {
    [super viewDidLoad];
    CheckMarkRecognizer *check = [[[CheckMarkRecognizer alloc] initWithTarget:self
        action:@selector(doCheck:)] autorelease];
    [self.view addGestureRecognizer:check];
}

All that's left to do now is to add the following code to the existing viewDidUnload and dealloc methods:

- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
    self.label = nil;
    [super viewDidUnload];
}

- (void)dealloc {
    [label release];
    [super dealloc];
}

Compile and run the app, and try out the gesture.

Garçon? Check, Please!

You should now understand the mechanism iOS uses to tell your application about touches, taps, and gestures. You learned how to detect the most commonly used iOS gestures, and even got a taste of how you might go about defining your own custom gestures. The iOS interface relies on gestures for much of its ease of use, so you'll want to have these techniques at the ready for most of your iOS development.

When you're ready to move on, turn the page, and we'll tell you how to figure out where in the world you are using Core Location.
