Chapter 9

Camera Recipes

A great number of mobile applications interact with the device's camera, including apps that take pictures, record videos, and provide overlays (for example, augmented-reality applications such as a constellation app). iOS developers have a great deal of control over how they interact with any given device's hardware. iOS 7 adds a number of features that improve capture quality and give developers more to work with, such as higher frame rates and smoother autofocus. One exciting new feature is real-time discovery of machine-readable metadata (barcodes). In this chapter, you will learn multiple ways to access and use these capabilities, from simple, predefined interfaces to highly flexible custom implementations, including the capture of machine-readable metadata.

Note   The iOS simulator does not support camera hardware. To test most recipes in this chapter, you must run them on a physical device.

Recipe 9-1: Taking Pictures

iOS has an incredibly handy and simple interface to your device’s camera. With this interface you can allow users to take pictures and record video from inside an app. Here, you learn the basics of starting the camera interface to capture a still image.

To begin, you will create a simple project that allows you to pull up your camera, take a picture, and then display the most recently taken image on your screen. First, create a new single view application project. For this recipe, you don’t need to import any extra frameworks into your project.

Setting Up the User Interface

Set up a simple user interface containing an image view and a button. Switch over to the view in your Main.storyboard file. Then drag an image view from the object library and make it fill the entire view. Next, drag a button (UIButton) into your view. The button will launch the camera, so set its title to “Take Picture.” Your view should now resemble the view in Figure 9-1. You may want to position the button high enough, or add a constraint, so it won’t be cut off on a 3.5" screen (see Chapter 3 for adding Auto Layout constraints).

9781430259596_Fig09-01.jpg

Figure 9-1. A simple user interface for taking pictures

Create outlets for your image view and your button and name them “imageView” and “cameraButton,” respectively. Also, create an action with the name “takePicture” for your button.

Your ViewController.h file should now resemble the code in Listing 9-1.

Listing 9-1.  The ViewController.h file with added outlets and the takePicture: method

//
//  ViewController.h
//  Recipe 9-1 Taking Pictures
//

#import <UIKit/UIKit.h>

@interface ViewController : UIViewController

@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (weak, nonatomic) IBOutlet UIButton *cameraButton;

- (IBAction)takePicture:(id)sender;

@end

Accessing the Camera

To access the camera you will use the UIImagePickerController class, which presents an interface for taking photos or choosing existing ones. Whenever your app deals with camera hardware on iOS, it is essential to check that the hardware is actually available before trying to use it. This is done through the isSourceTypeAvailable: class method of UIImagePickerController. The method takes one of the following predefined constants as an argument:

  • UIImagePickerControllerSourceTypeCamera
  • UIImagePickerControllerSourceTypePhotoLibrary
  • UIImagePickerControllerSourceTypeSavedPhotosAlbum

For this recipe you’ll use the first choice, UIImagePickerControllerSourceTypeCamera. UIImagePickerControllerSourceTypePhotoLibrary is used to access all the stored photos on the device, while UIImagePickerControllerSourceTypeSavedPhotosAlbum is used to access only the Camera Roll album.
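To make the relationship between these constants concrete, the following sketch (a hypothetical helper method, not part of this recipe’s code) probes the source types in order of preference so that an app could fall back to the photo library when no camera is present, for example in the simulator:

- (UIImagePickerControllerSourceType)bestAvailableSourceType
{
    // Prefer the camera; fall back to the full photo library,
    // then to the Camera Roll (saved-photos album)
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
        return UIImagePickerControllerSourceTypeCamera;
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypePhotoLibrary])
        return UIImagePickerControllerSourceTypePhotoLibrary;
    return UIImagePickerControllerSourceTypeSavedPhotosAlbum;
}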

Now, switch to the ViewController.m file and locate the stubbed-out takePicture: action method. There, you’ll begin by checking whether the camera source type is available and, if not, display a UIAlertView saying so. Fill out the takePicture: method, as shown in Listing 9-2.

Listing 9-2.  The takePicture: method implementation

- (IBAction)takePicture:(id)sender
{
        // Make sure camera is available
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                        message:@"Camera Unavailable"
                                                       delegate:self
                                              cancelButtonTitle:@"Cancel"
                                              otherButtonTitles:nil, nil];
        [alert show];
        return;
    }
}

The iOS simulator does not have camera functionality. Therefore, you’ll see only the error message, demonstrated in Figure 9-2, when you run your app there. To fully test this application, you will need to run it on a physical device.

9781430259596_Fig09-02.jpg

Figure 9-2. The simulator does not have camera support, so you need to test your app on a real device

Before you expand the takePicture: method to handle the case in which the camera is in fact available, you need to make a couple of changes in ViewController.h. The first is to add a property to hold the image-picker instance through which you’ll access the camera. The second is to prepare the view controller to receive events from the image picker; such a delegate needs to conform to both the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols. Update the header file with the changes in bold shown in Listing 9-3.

Listing 9-3.  The updated ViewController.h file with delegates and the UIImagePickerController property

//
//  ViewController.h
//  Recipe 9-1 Taking Pictures
//

#import <UIKit/UIKit.h>

@interface ViewController : UIViewController <UIImagePickerControllerDelegate,
                                             UINavigationControllerDelegate>

@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (weak, nonatomic) IBOutlet UIButton *cameraButton;
@property (strong, nonatomic) UIImagePickerController *imagePicker;

- (IBAction)takePicture:(id)sender;

@end

Now you can fill out the code in your takePicture: action method in ViewController.m. It creates and initializes the image-picker instance—if it hasn’t already done so—and presents it to handle the camera device (Listing 9-4).

Listing 9-4.  Adding code to the takePicture: method to initialize and present the camera

- (IBAction)takePicture:(id)sender
{
        // Make sure camera is available
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                        message:@"Camera Unavailable"
                                                       delegate:self
                                              cancelButtonTitle:@"Cancel"
                                              otherButtonTitles:nil, nil];
        [alert show];
        return;
    }
    if (self.imagePicker == nil)
    {
        self.imagePicker = [[UIImagePickerController alloc] init];
        self.imagePicker.delegate = self;
        self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;

    }
    [self presentViewController:self.imagePicker animated:YES completion:NULL];
}

If you run your app now (on a real device) and tap the button, you should be presented with a simple camera interface that allows you to take a picture and select it for use in your app (or retake it if you’re not satisfied). Figure 9-3 shows the user interface of UIImagePickerController.

9781430259596_Fig09-03.jpg

Figure 9-3. The user interface of a UIImagePickerController

Retrieving a Picture

Now that you have set up your view controller to present your UIImagePickerController, you need to handle what happens when the picker finishes, that is, when a picture has been taken and selected for use. Do this by implementing the delegate method imagePickerController:didFinishPickingMediaWithInfo:. Retrieve the picture, update the image view, and finally dismiss the image picker, as shown in Listing 9-5.

Listing 9-5.  Implementing the imagePickerController:didFinishPickingMediaWithInfo: delegate method

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    self.imageView.image = image;
    self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    [self dismissViewControllerAnimated:YES completion:NULL];
}

Note   By setting the content-mode property of the image view to UIViewContentModeScaleAspectFill, we ensure that the picture will fill the entire view while still maintaining its aspect ratio. This usually results in the picture’s being cropped instead of looking stretched. Alternatively, you could use UIViewContentModeScaleAspectFit, which displays the entire picture with a retained aspect ratio but does not necessarily fill the entire view.

Another delegate method needs to be implemented to handle the cancellation of an image selection. The only action you need to take is to dismiss the image-picker view. Add the code in Listing 9-6 to the implementation file.

Listing 9-6.  Implementing the imagePickerControllerDidCancel: delegate method

- (void) imagePickerControllerDidCancel: (UIImagePickerController *) picker
{
    [self dismissViewControllerAnimated:YES completion:NULL];
}

Your app can now access the camera, take a picture, and set it as the background of the app, as Figure 9-4 shows.

9781430259596_Fig09-04.jpg

Figure 9-4. Your app with a photo set as the background

Note   The UIImagePickerController class does not support landscape orientation for taking pictures. Although you can take pictures that way, the view does not adjust according to the landscape orientation, which results in a rather weird user experience.

Implementing Basic Editing

As an optional setting, you could allow your camera interface to be editable, enabling the user to crop and frame the picture he has taken. To do this, you simply have to set the UIImagePickerController’s allowsEditing property to YES, as shown in Listing 9-7.

Listing 9-7.  Modifying the takePicture: action method to allow editing

- (IBAction)takePicture:(id)sender
{
    // ...

    if (self.imagePicker == nil)
    {
        self.imagePicker = [[UIImagePickerController alloc] init];
        self.imagePicker.delegate = self;
        self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        self.imagePicker.allowsEditing = YES;

    }
    [self presentViewController:self.imagePicker animated:YES completion:NULL];
}

Then, to acquire the edited image, you also need to make a change in the imagePickerController:didFinishPickingMediaWithInfo: method, as shown in Listing 9-8.

Listing 9-8.  Modifying imagePickerController:didFinishPickingMediaWithInfo: to receive edited image

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerEditedImage];
    self.imageView.image = image;
    self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    [self dismissViewControllerAnimated:YES completion:NULL];
}

Saving Pictures to a Photos Album

You might want to save the pictures you take to the device’s saved-photos album. This is easily done with the UIImageWriteToSavedPhotosAlbum() function. Add a line to your imagePickerController:didFinishPickingMediaWithInfo: method, as shown in Listing 9-9.

Listing 9-9.  Modifying the imagePickerController:didFinishPickingMediaWithInfo: method to add a save option

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    self.imageView.image = image;
    self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    [self dismissViewControllerAnimated:YES completion:NULL];
}

However, as of iOS 6, privacy restrictions apply to the saved-photos album; an app that wants to access it needs explicit authorization from the user. Therefore, you should also provide an explanation of why your app requests access to the photo library. This is done by adding the key NSPhotoLibraryUsageDescription (displayed as “Privacy - Photo Library Usage Description” in the property list editor) to the application’s Info.plist file (found in the Supporting Files folder in the project navigator).

You can enter any text you want for the usage description; we chose “Testing the camera.” The important thing to know is that the text will be displayed to the user when he is prompted to authorize the app’s access to the photo album, as in Figure 9-5.
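If you prefer to edit the raw Info.plist as source code rather than through the property list editor, the entry might look something like this (the description string is only an example):

<key>NSPhotoLibraryUsageDescription</key>
<string>Testing the camera</string>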

9781430259596_Fig09-05.jpg

Figure 9-5. Saving the picture to the photos library will need an authorization from the user

Recipe 9-2: Recording Video

The UIImagePickerController is actually a lot more flexible than you’ve seen so far; until now you’ve used it exclusively for still images. Here, you’ll learn how to set up your UIImagePickerController to handle both still images and video.

For this recipe, you build off the code from Recipe 9-1 because it already includes the setup you need. You add to its functionality by implementing the option to record and save videos.

Start by setting the image picker’s allowed media types to all available types for the camera. This can be done using the availableMediaTypesForSourceType: class method of the UIImagePickerController, as shown in Listing 9-10.

Listing 9-10.  Updating the takePicture: method to allow video recording

- (IBAction)takePicture:(id)sender
{
        // Make sure camera is available
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                        message:@"Camera Unavailable"
                                                       delegate:self
                                              cancelButtonTitle:@"Cancel"
                                              otherButtonTitles:nil, nil];
        [alert show];
        return;
    }
    if (self.imagePicker == nil)
    {
        self.imagePicker = [[UIImagePickerController alloc] init];
        self.imagePicker.delegate = self;
        self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        self.imagePicker.mediaTypes = [UIImagePickerController
            availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
        self.imagePicker.allowsEditing = YES;
        
    }
    [self presentViewController:self.imagePicker animated:YES completion:NULL];
}

Next, you need to tell your application how to handle the case in which the user records and uses a video. To do this, link the Mobile Core Services framework to your project and import its API in your view controller’s header file, as shown in Listing 9-11.

Listing 9-11.  Importing the framework to the ViewController.h file

//
//  ViewController.h
//  Recipe 9-2 Recording Videos
//

#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>

@interface ViewController : UIViewController<UIImagePickerControllerDelegate, UINavigationControllerDelegate>

@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (weak, nonatomic) IBOutlet UIButton *cameraButton;
@property (strong, nonatomic) UIImagePickerController *imagePicker;

- (IBAction)takePicture:(id)sender;

@end

Now add the bold code in Listing 9-12 to your UIImagePickerController’s delegate method.

Listing 9-12.  Adding video comparison to the imagePickerController:didFinishPickingMediaWithInfo: method

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    
    if (CFStringCompare((__bridge CFStringRef) mediaType, kUTTypeMovie, 0) ==
        kCFCompareEqualTo)
    {
        // Movie Captured
        NSString *moviePath =
           (NSString *) [[info objectForKey: UIImagePickerControllerMediaURL] path];
        
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum (moviePath))
        {
            UISaveVideoAtPathToSavedPhotosAlbum (moviePath, nil, nil, nil);
        }
    }
    else
    {
        // Picture Taken
        UIImage *image =
            (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
        self.imageView.image = image;
        self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    }
    [self dismissViewControllerAnimated:YES completion:NULL];
}

Essentially, what you are doing in Listing 9-12 is checking the media type of the captured file. The main issue arises when you compare mediaType, which is an NSString, with kUTTypeMovie, which is of type CFStringRef. You handle this by casting your NSString to a CFStringRef; the two types are toll-free bridged. With the introduction of Automatic Reference Counting (ARC) in iOS 5, this became slightly more involved, because ARC manages Objective-C object types such as NSString but not Core Foundation types like CFStringRef. You create a bridged cast by placing __bridge before the cast, as shown earlier, to instruct ARC not to transfer ownership of the object.
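If you find the CFStringCompare syntax awkward, an equivalent check (shown here as a sketch) goes the other way: bridge the constant to an NSString and use the isEqualToString: method:

NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
if ([mediaType isEqualToString:(__bridge NSString *)kUTTypeMovie])
{
    // Movie captured
}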

If all has gone well, your app should now be able to record video by selecting the video mode in the image-picker view, as shown in Figure 9-6. The video is then saved (if allowed by the user) to the device’s photo library.

9781430259596_Fig09-06.jpg

Figure 9-6. The image-picker view with a switch control between photo and video modes

Recipe 9-3: Editing Videos

Although your UIImagePickerController offers a convenient way to record and save video files, it does nothing to allow you to edit them. Fortunately, iOS has another built-in controller called UIVideoEditorController, which you can use to edit your recorded videos.

You can build this fairly simple recipe off the project from Recipe 9-2, in which you added video functionality to your UIImagePickerController.

Start by adding a second button with the title “Edit Video” to your view controller’s interface file. Arrange the two buttons, as shown in Figure 9-7.

9781430259596_Fig09-07.jpg

Figure 9-7. New user interface with a button for editing the video

Next, create an action named editVideo for when the user taps the “Edit Video” button.

You’ll also need a property to store the path to the video that the user records. Define it in the view controller’s header file, as shown in Listing 9-13.

Listing 9-13.  The ViewController.h file with the new additions

//
//  ViewController.h
//  Recipe 9-3 Editing Videos
//

#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>

@interface ViewController : UIViewController<UIImagePickerControllerDelegate, UINavigationControllerDelegate>

@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (weak, nonatomic) IBOutlet UIButton *cameraButton;
@property (strong, nonatomic) UIImagePickerController *imagePicker;
@property (strong, nonatomic) NSString *pathToRecordedVideo;

- (IBAction)takePicture:(id)sender;
- (IBAction)editVideo:(id)sender;

@end

Now, in the imagePickerController:didFinishPickingMediaWithInfo: method, make sure the pathToRecordedVideo property gets updated with the path to the newly recorded video, as shown in Listing 9-14.

Listing 9-14.  Adding the video path for the newly recorded video

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    
    if (CFStringCompare((__bridge CFStringRef) mediaType, kUTTypeMovie, 0) ==
        kCFCompareEqualTo)
    {
        NSString *moviePath = (NSString *)[[info objectForKey: UIImagePickerControllerMediaURL] path];
        self.pathToRecordedVideo = moviePath;

        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum (moviePath))
        {
            UISaveVideoAtPathToSavedPhotosAlbum (moviePath, nil, nil, nil);
        }
    }
    else
    {
        //...
    }
}

With the pathToRecordedVideo property in place, you can turn your focus to your editVideo action. This action opens the last recorded video for editing in a video editor controller, or displays an error if no video was recorded. Listing 9-15 shows this method implementation.

Listing 9-15.  The editVideo method implementation

- (IBAction)editVideo:(id)sender
{
    if (self.pathToRecordedVideo)
    {
        UIVideoEditorController *editor = [[UIVideoEditorController alloc] init];
        editor.videoPath = self.pathToRecordedVideo;
        editor.delegate = self;
        [self presentViewController:editor animated:YES completion:NULL];
    }
    else
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
            message:@"No Video Recorded Yet"
            delegate:self
            cancelButtonTitle:@"Cancel"
            otherButtonTitles:nil, nil];
        [alert show];
    }
}

Because the video editor’s receiving delegate is your view controller, you need to make sure it conforms to the UIVideoEditorControllerDelegate protocol. Add the protocol to the header file, as shown in Listing 9-16.

Listing 9-16.  Declaring the UIVideoEditorControllerDelegate protocol

// ...

@interface ViewController : UIViewController<UIImagePickerControllerDelegate,
                                             UINavigationControllerDelegate ,
                                             UIVideoEditorControllerDelegate>

// ...

@end

Finally, you need to implement a few delegate methods for your UIVideoEditorController. First, you need a delegate method to handle a successful editing/trimming. Listing 9-17 shows this method.

Listing 9-17.  The videoEditorController:didSaveEditedVideoToPath: delegate method implementation

-(void)videoEditorController:(UIVideoEditorController *)editor didSaveEditedVideoToPath:(NSString *)editedVideoPath
{
    self.pathToRecordedVideo = editedVideoPath;
    if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum (editedVideoPath))
    {
        UISaveVideoAtPathToSavedPhotosAlbum (editedVideoPath, nil, nil, nil);
    }
    [self dismissViewControllerAnimated:YES completion:NULL];
}

As you can see, your application sets the newly edited video as your next video to be edited so that you can create increasingly trimmed clips. It also saves each edited version to your photo album, if possible.

You need one more delegate method to handle the cancellation of your UIVideoEditorController. Add the implementation of the method shown in Listing 9-18.

Listing 9-18.  Implementation of the videoEditorControllerDidCancel: delegate method

-(void)videoEditorControllerDidCancel:(UIVideoEditorController *)editor
{
    [self dismissViewControllerAnimated:YES completion:NULL];
}

Upon testing on a physical device, your application should now successfully allow you to edit your videos. Figure 9-8 shows a view of your application giving you the option to edit a recorded video.

9781430259596_Fig09-08.jpg

Figure 9-8. Editing (trimming) a video using UIVideoEditorController

Note   You might have noticed that the recording quality is a little lower when you are in the editor. The default video quality is set to medium. If you would like high quality, simply set the video quality property in the takePicture: method; for example, self.imagePicker.videoQuality = UIImagePickerControllerQualityTypeHigh.

Recipe 9-4: Using Custom Camera Overlays

A variety of applications use the camera interface but add a custom overlay, for example, to display constellations over the sky or to provide their own custom camera controls. In this recipe, you’ll continue building on the project from the preceding recipes and implement a very basic custom camera screen overlay. Specifically, you replace the default button controls with your own versions of them. Although simple, the example should give you an idea of how to create your own, more useful overlay functionality.

You build your custom overlay view directly in code in a method you’ll name customViewForImagePicker:. This method creates an overlay view and populates it with three buttons: one for taking the picture, one for turning the flash on and off, and one to toggle between the front and rear cameras. Listing 9-19 shows this code, which you add to the ViewController.m from Recipe 9-3.

Listing 9-19.  Implementation of the customViewForImagePicker: method

-(UIView *)customViewForImagePicker:(UIImagePickerController *)imagePicker
{
    UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 20, 280, 480)];
    view.backgroundColor = [UIColor clearColor];
    
    UIButton *flashButton =
        [[UIButton alloc] initWithFrame:CGRectMake(10, 10, 120, 44)];
    flashButton.backgroundColor = [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [flashButton setTitle:@"Flash Auto" forState:UIControlStateNormal];
    [flashButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
    flashButton.layer.cornerRadius = 10.0;
    
    UIButton *changeCameraButton =
        [[UIButton alloc] initWithFrame:CGRectMake(190, 10, 120, 44)];
    changeCameraButton.backgroundColor =
        [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [changeCameraButton setTitle:@"Rear Camera" forState:UIControlStateNormal];
    [changeCameraButton setTitleColor:[UIColor whiteColor]
        forState:UIControlStateNormal];
    changeCameraButton.layer.cornerRadius = 10.0;
    
    UIButton *takePictureButton =
        [[UIButton alloc] initWithFrame:CGRectMake(100, 432, 120, 44)];
    takePictureButton.backgroundColor =
        [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [takePictureButton setTitle:@"Click!" forState:UIControlStateNormal];
    [takePictureButton setTitleColor:[UIColor whiteColor]
        forState:UIControlStateNormal];
    takePictureButton.layer.cornerRadius = 10.0;
    
    [flashButton addTarget:self action:@selector(toggleFlash:)
        forControlEvents:UIControlEventTouchUpInside];
    [changeCameraButton addTarget:self action:@selector(toggleCamera:)
        forControlEvents:UIControlEventTouchUpInside];
    [takePictureButton addTarget:imagePicker action:@selector(takePicture)
        forControlEvents:UIControlEventTouchUpInside];
    
    [view addSubview:flashButton];
    [view addSubview:changeCameraButton];
    [view addSubview:takePictureButton];
    
    return view;
}

In Listing 9-19, you define the overlay UIView and the buttons it contains: you create each button, assign it the action it should perform, set its title to either its starting value or its purpose, give it rounded corners via the cornerRadius property, and add it to the view. One of the most important details here is that the buttons are semitransparent, because they are placed over the camera’s display. You do not want to cover up any of your picture, so the buttons have to be at least partially see-through.

As you may have noticed, the action for the takePictureButton is connected directly to the image picker’s takePicture method. The other two buttons, on the other hand, are connected to methods (toggleFlash: and toggleCamera:, respectively) on your view controller. At this point, those two methods don’t exist, so you need to implement them. Listing 9-20 shows their implementation.

Listing 9-20.  Implementation for the toggleFlash: and toggleCamera: methods

-(void)toggleFlash:(UIButton *)sender
{
    if (self.imagePicker.cameraFlashMode == UIImagePickerControllerCameraFlashModeOff)
    {
        self.imagePicker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOn;
        [sender setTitle:@"Flash On" forState:UIControlStateNormal];
    }
    else
    {
        self.imagePicker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOff;
        [sender setTitle:@"Flash Off" forState:UIControlStateNormal];
    }
}

-(void)toggleCamera:(UIButton *)sender
{
    if (self.imagePicker.cameraDevice == UIImagePickerControllerCameraDeviceRear)
    {
        self.imagePicker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
        [sender setTitle:@"Front Camera" forState:UIControlStateNormal];
    }
    else
    {
        self.imagePicker.cameraDevice = UIImagePickerControllerCameraDeviceRear;
        [sender setTitle:@"Rear Camera" forState:UIControlStateNormal];
    }
}

Next, hide the default camera buttons and provide the image picker with your custom overlay view. Add the two lines of code shown in Listing 9-21 to your takePicture: method. You can also comment out the setting of the allowsEditing property, because taking pictures programmatically through the overlay’s button doesn’t support editing.

Listing 9-21.  Modifying the takePicture method to hide the default camera buttons and to set the custom overlay

- (IBAction)takePicture:(id)sender
{
        // Make sure camera is available
    if ([UIImagePickerController
         isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                        message:@"Camera Unavailable"
                                                       delegate:self
                                              cancelButtonTitle:@"Cancel"
                                              otherButtonTitles:nil, nil];
        [alert show];
        return;
    }
    if (self.imagePicker == nil)
    {
        self.imagePicker = [[UIImagePickerController alloc] init];
        self.imagePicker.delegate = self;
        self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        self.imagePicker.mediaTypes = [UIImagePickerController
            availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
        // self.imagePicker.allowsEditing = YES;
        self.imagePicker.showsCameraControls = NO;
        self.imagePicker.cameraOverlayView =
            [self customViewForImagePicker:self.imagePicker];
    }
    [self presentViewController:self.imagePicker animated:YES completion:NULL];
}

Finally, you need to make a small change to the imagePickerController:didFinishPickingMediaWithInfo: method. As mentioned earlier, the takePicture method of the image picker doesn’t support editing. This means that you have to retrieve your picture from the info dictionary using the UIImagePickerControllerOriginalImage key instead of UIImagePickerControllerEditedImage, as shown in Listing 9-22.

Listing 9-22.  Modifying imagePickerController:didFinishPickingMediaWithInfo: to use the original image

-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey: UIImagePickerControllerMediaType];
    
    if (CFStringCompare((__bridge CFStringRef) mediaType, kUTTypeMovie, 0) ==
        kCFCompareEqualTo)
    {
        NSString *moviePath =
           (NSString *) [[info objectForKey: UIImagePickerControllerMediaURL] path];
        self.pathToRecordedVideo = moviePath;
        
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum (moviePath))
        {
            UISaveVideoAtPathToSavedPhotosAlbum (moviePath, nil, nil, nil);
        }
    }
    else
    {
        UIImage *image =
            (UIImage *)[info objectForKey: UIImagePickerControllerOriginalImage];
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
        self.imageView.image = image;
        self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    }
    [self dismissViewControllerAnimated:YES completion:NULL];
}

If you run your app now, your camera should, as shown in Figure 9-9, display your three buttons in an overlay.

9781430259596_Fig09-09.jpg

Figure 9-9. An image-picker controller with a custom overlay view replacing the standard buttons

From here you can create your own custom overlays and easily change their functions to fit nearly any situation. The following recipes leave the image-picker controller and instead look into the Audiovisual (AV) Foundation framework for capturing your pictures and videos.

Recipe 9-5: Displaying Camera Preview with AVCaptureSession

While the UIImagePickerController and UIVideoEditorController interfaces are incredibly useful, they certainly aren’t as customizable as they could be. With the AV Foundation framework, however, you can create your camera interfaces from scratch, making them just the way you want. The AV Foundation framework gives you access to more audio and video tools so you can fully customize the user experience.

In this recipe and the ones that follow, you will use the AVCaptureSession API to essentially create your own version of the camera. You’ll do this in steps, starting with the displaying of a camera preview.

Begin by creating a new single-view project. You’ll use the same project for the rest of this chapter, so name it accordingly (such as “MyCamera”). Also, make sure to add the AVFoundation framework to your project or you’ll run into linker errors later.

Now, add a property to your view controller to hold your AVCaptureSession instance and one to hold the video input instance by making the changes in Listing 9-23 to your ViewController.h file.

Listing 9-23.  The ViewController.h file with added AVCaptureSession and AVCaptureDeviceInput properties

//
//  ViewController.h
//  Recipe 9-5 Displaying Camera Preview With AVCaptureSession
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController : UIViewController

@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInput;

@end

Next, switch to the ViewController.m file and locate the viewDidLoad method. There you set up the capture session to receive input from the camera. We’ll go through the setup step-by-step and then present the complete viewDidLoad implementation.

First, create your AVCaptureSession. Optionally, you might also want to change the resolution preset, which is set to AVCaptureSessionPresetHigh by default. Add the code in Listing 9-24 to the viewDidLoad method.

Listing 9-24.  Creating and initializing an AVCaptureSession instance

self.captureSession = [[AVCaptureSession alloc] init];
//Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

Next, specify your input device, which is the device’s default camera (normally the rear camera, assuming one is available), as shown in Listing 9-25. You do this through the AVCaptureDevice class method +defaultDeviceWithMediaType:, which can take a variety of arguments depending on the type of media desired, the most prominent of which are AVMediaTypeVideo and AVMediaTypeAudio.

Listing 9-25.  Specifying the capture device

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

Next, you need to set up the instance of AVCaptureDeviceInput to specify your chosen device as an input for your capture session. Also, include a check to make sure the input has been correctly created before adding it to your session. Listing 9-26 shows this instance and check statement.

Listing 9-26.  Setting the input instance and checking its existence before assigning the video input

NSError *error = nil;
self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (self.videoInput)
{
    [self.captureSession addInput:self.videoInput];
}
else
{
    NSLog(@"Input Error: %@", error);
}

The last part of your viewDidLoad, as shown in Listing 9-27, is the creation of a preview layer, with which you can see what your camera is viewing. Add the preview layer as a sublayer of your main view’s layer, but with a slightly reduced height so it won’t block a button that you’ll set up in the next recipe.

Listing 9-27.  Creating the preview layer

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
UIView *aView = self.view;
previewLayer.frame =
    CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
[aView.layer addSublayer:previewLayer];

Once all these steps are complete, the viewDidLoad method should look like Listing 9-28.

Listing 9-28.  The complete viewDidLoad method

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    self.captureSession = [[AVCaptureSession alloc] init];
    //Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (self.videoInput)
    {
        [self.captureSession addInput:self.videoInput];
    }
    else
    {
        NSLog(@"Input Error: %@", error);
    }
    
    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    UIView *aView = self.view;
    previewLayer.frame =
        CGRectMake(0, 0, self.view.frame.size.width,
            self.view.frame.size.height-70);
    [aView.layer addSublayer:previewLayer];
}

Note   Just like any other CALayer, an AVCaptureVideoPreviewLayer can be repositioned, rotated, resized, and even animated. With it, you are no longer bound to using the entire screen to record video as you are with the UIImagePicker, meaning you could have your preview layer in one part of the screen and other information for the user in another. As with almost every part of iOS development, the possibilities of use are limited only by the developer’s imagination.
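For example, a minimal sketch (not part of this recipe’s code) of a preview layer that occupies only the top half of the screen, leaving the rest of the view free for other content, might look like this:

AVCaptureVideoPreviewLayer *smallPreview =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
smallPreview.frame = CGRectMake(0, 20, self.view.bounds.size.width,
                                self.view.bounds.size.height / 2);
// Fill the layer while preserving the video's aspect ratio
smallPreview.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:smallPreview];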

Now the only thing that needs to be added is code to start and stop your capture session. In this app, you’ll display the camera preview upon application launch, so a good place to put the start code is in the viewWillAppear: method, as shown in Listing 9-29.

Listing 9-29.  Implementation of the viewWillAppear method

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self.captureSession startRunning];
}

You will also add the corresponding stopping code for the capture session, as shown in Listing 9-30.

Listing 9-30.  Implementation of the viewWillDisappear method

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [self.captureSession stopRunning];
}

If you build and run your application now, it should display a live camera preview, as shown in Figure 9-10.

9781430259596_Fig09-10.jpg

Figure 9-10. Displaying a camera preview with AVCaptureSession

Recipe 9-6: Capturing Still Images with AVCaptureSession

In the preceding recipe, you learned how to set up an AVCaptureSession with input from the camera. You also saw how you can connect an AVCaptureVideoPreviewLayer to display a live camera preview in your app. Now you will expand the project by connecting an AVCaptureStillImageOutput object to take still images and save them to the saved photos library on the device.

Before digging into the coding, you need to make a couple of changes to the project. The first is to add the AssetsLibrary.framework to your project. You use functionality from that framework to write the photos to the photos library.

Because you access the shared photos library of the device, you need to provide a usage description in the application’s Info.plist file. Go ahead and add the NSPhotoLibraryUsageDescription key (displayed as “Privacy—Photo Library Usage Description” in the property editor) with a brief text containing the reason why your app seeks the access (such as “Testing AVCaptureSession”). Refer to Recipe 1-7 in Chapter 1 for a refresher on this procedure.

Adding a Capture Button

You need a way to trigger a still-image capture. Start by adding a button with the title “Capture” to your view controller’s storyboard view. Also make sure to create an action named “capture” for the button. To ensure the button stays correctly positioned on different screen sizes, select it and choose “Add Missing Constraints” from the Resolve Auto Layout Issues menu in the lower-right corner of the storyboard editor. When you’re done, the view should resemble Figure 9-11 with the button selected.

9781430259596_Fig09-11.jpg

Figure 9-11. A user interface with a button to capture a video frame

Switch to your ViewController.h file and import the AssetsLibrary framework. Also, add a property to hold your still image output instance, as shown in bold in Listing 9-31.

Listing 9-31.  The finished ViewController.h file

//
//  ViewController.h
//  Recipe 9-6: Taking Still Images With AVCaptureSession
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>

@interface ViewController : UIViewController

@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInput;
@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;

- (IBAction)capture:(id)sender;

@end

In ViewController.m, add the code in Listing 9-32 to the viewDidLoad method. The new code (marked in bold) allocates and initializes your still image output object and connects it to the capture session.

Listing 9-32.  Adding the still image output object and connecting it to the capture session

- (void)viewDidLoad
{
    [super viewDidLoad];
        // Do any additional setup after loading the view, typically from a nib.
    self.captureSession = [[AVCaptureSession alloc] init];
    //Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    
    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    
    NSError *error = nil;
    self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (self.videoInput)
    {
        [self.captureSession addInput:self.videoInput];
    }
    else
    {
        NSLog(@"Input Error: %@", error);
    }
    
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *stillImageOutputSettings =
        [[NSDictionary alloc] initWithObjectsAndKeys:
            AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:stillImageOutputSettings];
    [self.captureSession addOutput:self.stillImageOutput];

    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    UIView *aView = self.view;
    previewLayer.frame =
        CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
    [aView.layer addSublayer:previewLayer];
}

Note   Besides AVCaptureStillImageOutput, you can use a number of other output formats—for example, the AVCaptureMovieFileOutput, which you use in the next recipe, the AVCaptureVideoDataOutput, with which you can access the raw video output frame by frame, AVCaptureAudioFileOutput for saving audio files, and AVCaptureAudioDataOutput for processing audio data.
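As an illustration of one of these alternatives, here is a minimal sketch of attaching an AVCaptureVideoDataOutput to the same capture session. It assumes the view controller adopts AVCaptureVideoDataOutputSampleBufferDelegate; the queue name is arbitrary:

// In the session setup code:
AVCaptureVideoDataOutput *frameOutput = [[AVCaptureVideoDataOutput alloc] init];
frameOutput.videoSettings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
dispatch_queue_t frameQueue = dispatch_queue_create("frameQueue", DISPATCH_QUEUE_SERIAL);
[frameOutput setSampleBufferDelegate:self queue:frameQueue];
if ([self.captureSession canAddOutput:frameOutput])
    [self.captureSession addOutput:frameOutput];

// Delegate method, called once per captured frame:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Inspect or process the raw frame here
}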

Now it’s time to implement the action method. All it should do is trigger the capture of a still image. We chose to extract the capturing code into a helper method to make the changes coming in the next recipe easier. Add the method call to the capture: action method, as shown in Listing 9-33.

Listing 9-33.  Implementing the capture: action method

- (IBAction)capture:(id)sender
{
    [self captureStillImage];
}

The implementation of the captureStillImage method can seem daunting at first, so we’ll take it in steps and then show you the complete method.

First, acquire the capture connection and make sure it uses the portrait orientation to capture the image. Listing 9-34 shows this step.

Listing 9-34.  Starting the captureStillImage method and adding the capture connection with portrait orientation

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection =
       [self.stillImageOutput.connections objectAtIndex:0];
    if ([stillImageConnection isVideoOrientationSupported])
       [stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];

    // ...
}

Then, as Listing 9-35 shows, you call the captureStillImageAsynchronouslyFromConnection:completionHandler: method and provide a completion block that is invoked when the still-image capture has finished.

Listing 9-35.  Adding code to capture the still image

[self.stillImageOutput
    captureStillImageAsynchronouslyFromConnection:stillImageConnection
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
    {
        // ...
    }
];

When the capture has completed, check whether it was successful; if not, log the error, as shown in Listing 9-36.

Listing 9-36.  Checking for captured image success or failure

[self.stillImageOutput
    captureStillImageAsynchronouslyFromConnection:stillImageConnection
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
    {
        if (imageDataSampleBuffer != NULL)
        {
            // ...
        }
        else
        {
            NSLog(@"Error capturing still image: %@", error);
        }
    }
 ];

If the capture was successful, extract the image from the buffer, as shown in Listing 9-37.

Listing 9-37.  Completing code for image-capture success

if (imageDataSampleBuffer != NULL)
{
    NSData *imageData = [AVCaptureStillImageOutput
        jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    
    // ...
}

Next, save the image to the photo library. This is also an asynchronous task, so provide a block for when it completes. Whether the task completed successfully or with an error (for example, if the user didn’t allow access to the photo library), display an alert to notify the user, as shown in Listing 9-38.

Listing 9-38.  Adding code to save asynchronously and alert the user of success or error

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage]
    orientation:(ALAssetOrientation)[image imageOrientation]
    completionBlock:^(NSURL *assetURL, NSError *error)
    {
        UIAlertView *alert;
        if (!error)
        {
            alert = [[UIAlertView alloc] initWithTitle:@"Photo Saved"
                message:@"The photo was successfully saved to your photos library"
                delegate:nil
                cancelButtonTitle:@"OK"
                otherButtonTitles:nil, nil];
        }
        else
        {
            alert = [[UIAlertView alloc] initWithTitle:@"Error Saving Photo"
                message:@"The photo was not saved to your photos library"
                delegate:nil
                cancelButtonTitle:@"OK"
                otherButtonTitles:nil, nil];
        }
                    
        [alert show];
    }
];

Listing 9-39 shows the captureStillImage method in its entirety.

Listing 9-39.  The full captureStillImage implementation

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection =
        [self.stillImageOutput.connections objectAtIndex:0];
    if ([stillImageConnection isVideoOrientationSupported])
        [stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    
    [self.stillImageOutput
        captureStillImageAsynchronouslyFromConnection:stillImageConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
        {
            if (imageDataSampleBuffer != NULL)
            {
                NSData *imageData = [AVCaptureStillImageOutput
                    jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
                UIImage *image = [[UIImage alloc] initWithData:imageData];
                [library writeImageToSavedPhotosAlbum:[image CGImage]
                    orientation:(ALAssetOrientation)[image imageOrientation]
                    completionBlock:^(NSURL *assetURL, NSError *error)
                    {
                        UIAlertView *alert;
                        if (!error)
                        {
                            alert = [[UIAlertView alloc] initWithTitle:@"Photo Saved"
                                message:@"The photo was successfully saved to your photos library"
                                delegate:nil
                                cancelButtonTitle:@"OK"
                                otherButtonTitles:nil, nil];
                        }
                        else
                        {
                            alert = [[UIAlertView alloc] initWithTitle:@"Error Saving Photo"
                                message:@"The photo was not saved to your photos library"
                                delegate:nil
                                cancelButtonTitle:@"OK"
                                otherButtonTitles:nil, nil];
                         }
                    
                         [alert show];
                     }
                ];
            }
            else
            {
                NSLog(@"Error capturing still image: %@", error);
            }
        }
    ];
}

That completes Recipe 9-6. You can now run your app and tap the “Capture” button to take a picture that is saved to your photo library, as in Figure 9-12.

9781430259596_Fig09-12.jpg

Figure 9-12. A still image captured and saved to the photo library

While you haven’t included any fancy animations to make it look like a camera, this is quite useful as far as a basic camera goes. Recipe 9-7 takes it to the next level and shows you how to record a video using AVCaptureSession.

Recipe 9-7: Capturing Video with AVCaptureSession

Now that you have covered some of the basics of using AVFoundation, you will use it to implement a slightly more complicated project. This time, you’ll extend your app to include a video capture mode. To do this, you’ll need to allow the user to switch between taking pictures and recording videos. First you will build the functionality to switch modes; then you will implement the video capture using AVCaptureSession. You’ll build on the same project that you have been working on since Recipe 9-5.

Adding a Video Recording Mode

Add a new component to your user interface that lets the user switch between still-image and video-recording modes. A simple segmented control works for the purpose of this recipe, so go ahead and add one from the object library. Change the default text on the two segments to “Take Photo” and “Record Video” and place the control so that your view resembles Figure 9-13.

9781430259596_Fig09-13.jpg

Figure 9-13. A simple user interface that allows the user to switch modes between photo and video capturing

To access both the segmented control and the button from your code, you will need to add outlets for them. Use the names “modeControl” and “captureButton,” respectively. You also need to respond when the segment control’s value changes, so create an action for that event. Name the action “updateMode.”

Now switch over to your ViewController.h file. Add a couple of properties for the video-recording setup of your capture session: one for the audio input and one for the movie file output. Also, to prepare the view controller to act as the output delegate for the movie file recording, add the AVCaptureFileOutputRecordingDelegate protocol to the header. Listing 9-40 shows the code with all these changes, which are marked in bold.

Listing 9-40.  Setting up the ViewController.h file

//
//  ViewController.h
//  Recipe 9-7 Recording Video With AVCaptureSession
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>

@interface ViewController : UIViewController <AVCaptureFileOutputRecordingDelegate>

@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInput;
@property (strong, nonatomic) AVCaptureDeviceInput *audioInput;
@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;
@property (strong, nonatomic) AVCaptureMovieFileOutput *movieOutput;

@property (weak, nonatomic) IBOutlet UIButton *captureButton;
@property (weak, nonatomic) IBOutlet UISegmentedControl *modeControl;

- (IBAction)capture:(id)sender;
- (IBAction)updateMode:(id)sender;

@end

Now that your header file is set up, switch to your implementation file. To start, you will make several changes to the viewDidLoad method. The first is to set up an audio input object to capture sound from the device’s microphone while recording video. These changes are shown in bold in Listing 9-41.

Listing 9-41.  Setting up the audio input object

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.captureSession = [[AVCaptureSession alloc] init];
    //Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    
    AVCaptureDevice *videoDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *audioDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        
    NSError *error = nil;
    
    self.videoInput =
        [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:nil];
    self.audioInput =
        [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:nil];

    // ...
}

Now replace the existing if-else statement, which you added in Listing 9-26 back in Recipe 9-5. The replacement wraps some of the image output initialization code already written in an if-else statement and adds a new if statement to check for errors on the audio input. Listing 9-42 shows these changes.

Listing 9-42.  Adding error checks to the audio and video inputs

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.captureSession = [[AVCaptureSession alloc] init];
    //Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *videoDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *audioDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    NSError *error = nil;

    self.videoInput =
        [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    self.audioInput =
        [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&error];

    if (self.videoInput)
    {
        self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        NSDictionary *stillImageOutputSettings = [[NSDictionary alloc]
            initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
        [self.stillImageOutput setOutputSettings:stillImageOutputSettings];
        [self.captureSession addOutput:self.stillImageOutput];
    }
    else
    {
        NSLog(@"Video Input Error: %@", error);
    }
    if (!self.audioInput)
    {
        NSLog(@"Audio Input Error: %@", error);
    }
    //...
}

Next, set up an output object that records the data from the input objects and produces a movie file, as shown in Listing 9-43.

Listing 9-43.  Setting up the movie output object

- (void)viewDidLoad
{
    // ...

    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *stillImageOutputSettings = [[NSDictionary alloc]
        initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:stillImageOutputSettings];
    
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];

    [self.captureSession addOutput:self.stillImageOutput];

    // ...
}

Finally, set up the capture session in the picture-taking mode, as shown in Listing 9-44. You will also adjust the preview layer’s frame so it won’t cover the new segmented control; that change appears in the complete method in Listing 9-45.

Listing 9-44.  Adding the video input and still image output to the capture session

if (self.videoInput)
{
    
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *stillImageOutputSettings = [[NSDictionary alloc]
                                              initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:stillImageOutputSettings];
    
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];
    
    // Setup capture session for taking pictures
    [self.captureSession addInput:self.videoInput];
    [self.captureSession addOutput:self.stillImageOutput];
}

With all these changes, your viewDidLoad method should resemble Listing 9-45.

Listing 9-45.  The complete viewDidLoad method

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.captureSession = [[AVCaptureSession alloc] init];
    //Optional: self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *videoDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *audioDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    NSError *error = nil;

    self.videoInput =
        [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    self.audioInput =
        [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&error];

    if (self.videoInput)
    {
        self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        NSDictionary *stillImageOutputSettings = [[NSDictionary alloc]
            initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
        [self.stillImageOutput setOutputSettings:stillImageOutputSettings];

        self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];

        // Setup capture session for taking pictures
        [self.captureSession addInput:self.videoInput];
        [self.captureSession addOutput:self.stillImageOutput];
    }
    else
    {
        NSLog(@"Video Input Error: %@", error);
    }
    if (!self.audioInput)
    {
        NSLog(@"Audio Input Error: %@", error);
    }

    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    UIView *aView = self.view;
    previewLayer.frame =
        CGRectMake(0, 70, self.view.frame.size.width, self.view.frame.size.height-140);
    [aView.layer addSublayer:previewLayer];
}

Note that you’re not adding audioInput and movieOutput objects to the capture session. Later, you’ll add and remove input objects, depending on which mode the user selects, but for now it’s assumed to be the “Take Photo” mode. Therefore, only input and output objects associated with that particular mode are added in the viewDidLoad method. (For the same reason, it’s also important that the segment control has the correct selected index value set.)
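
If you prefer to enforce that initial state in code rather than rely on the storyboard setting, a one-line sketch at the end of viewDidLoad will do (this assumes, as in this recipe, that the first segment of modeControl is “Take Photo”):

// Make sure the UI starts out in "Take Photo" mode to match the capture session setup.
self.modeControl.selectedSegmentIndex = 0;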

Now, update the capture action method, as shown in Listing 9-46. It should now check which mode the application is in; if it is in the “Take Photo” mode, it should do what it used to do, which is to capture a still image.

Listing 9-46.  Adding a condition statement to the capture: method

- (IBAction)capture:(id)sender
{
    if (self.modeControl.selectedSegmentIndex == 0)
    {
        // Picture Mode
        [self captureStillImage];
    }
    else
    {
        // Video Mode
    }
}

If in video recording mode, however, it should toggle between starting and stopping a recording, depending on whether a movie is currently being recorded, as shown in Listing 9-47.

Listing 9-47.  Adding code to the capture: method to handle video operation

- (IBAction)capture:(id)sender
{
    if (self.modeControl.selectedSegmentIndex == 0)
    {
        // Picture Mode
        [self captureStillImage];
    }
    else
    {
        // Video Mode
        if (self.movieOutput.isRecording == YES)
        {
            [self.captureButton setTitle:@"Capture" forState:UIControlStateNormal];
            [self.movieOutput stopRecording];
        }
        else
        {
            [self.captureButton setTitle:@"Stop" forState:UIControlStateNormal];
            [self.movieOutput startRecordingToOutputFileURL:[self tempFileURL]
                recordingDelegate:self];
        }
    }
}

You probably noticed that Listing 9-47 calls a method named tempFileURL to provide the output URL for the movie recording. This method returns a path where the recorded video is temporarily saved on your device. If a file already exists at that location, the method deletes it first. (This way, you never use more than one video's worth of disk space.) In a real application you might want to warn the user about the overwrite, but for simplicity we'll skip the prompt code. Listing 9-48 shows the tempFileURL implementation.

Listing 9-48.  Implementing the tempFileURL method

- (NSURL *) tempFileURL
{
    NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@",
        NSTemporaryDirectory(), @"output.mov"];
    NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
    NSFileManager *manager = [[NSFileManager alloc] init];
    if ([manager fileExistsAtPath:outputPath])
    {
        [manager removeItemAtPath:outputPath error:nil];
    }
    return outputURL;
}
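
If you do want to tell users that a previous temporary recording is about to be replaced, a minimal sketch might look like the following. (The informUserOfOverwrite method name is ours, not part of the recipe; you would call it from tempFileURL just before removing the old file.)

- (void)informUserOfOverwrite
{
    // Purely informational; asking for confirmation instead would require a
    // UIAlertViewDelegate callback and deferring the recording until the user responds.
    UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Replacing Previous Recording"
                  message:@"The temporary file from your last recording will be overwritten."
                 delegate:nil
        cancelButtonTitle:@"OK"
        otherButtonTitles:nil];
    [alert show];
}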

The next step is to set up your AVCaptureMovieFileOutput’s delegate method to be invoked when an AVCaptureSession has finished recording a movie. The method starts by checking whether there were any errors in recording the video to a file and then saves your video file into your asset library. The process of writing a video to the photo album is nearly the same as with photos, so you’ll probably recognize a lot of that code from the preceding recipe. Listing 9-49 shows the implementation.

Listing 9-49.  Implementation of the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr)
    {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo]
            objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value)
            recordedSuccessfully = [value boolValue];
        // Logging the problem anyway:
        NSLog(@"A problem occurred while recording: %@", error);
    }
    if (recordedSuccessfully)
    {
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        
        [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
            completionBlock:^(NSURL *assetURL, NSError *error)
            {
                UIAlertView *alert;
                if (!error)
                {
                    alert = [[UIAlertView alloc] initWithTitle:@"Video Saved"
                       message:@"The movie was successfully saved to your photos library"
                       delegate:nil
                       cancelButtonTitle:@"OK"
                       otherButtonTitles:nil, nil];
                }
                else
                {
                    alert = [[UIAlertView alloc] initWithTitle:@"Error Saving Video"
                       message:@"The movie was not saved to your photos library"
                       delegate:nil
                       cancelButtonTitle:@"OK"
                       otherButtonTitles:nil, nil];
                }
              
                [alert show];
            }
        ];
    }
}

Finally, as shown in Listing 9-50, implement the action method that's invoked when the user switches between the two modes. This method updates the capture session with the input and output objects associated with the selected mode. It also sets the video orientation on the video mode's output connections. (This is already taken care of for the still image mode; see Recipe 9-6 for details.) Lastly, it resets the title of the capture button. Because iOS 7 lets the user deny access to the microphone, there is also an if statement that checks whether the audio input can be added to the session. If it can't, the method alerts the user with directions on how to fix the problem and switches back to the still image mode.

Listing 9-50.  Implementation of the updateMode: action

- (IBAction)updateMode:(id)sender
{
    [self.captureSession stopRunning];
    if (self.modeControl.selectedSegmentIndex == 0)
    {
        // Still Image Mode
        if (self.movieOutput.isRecording == YES)
        {
            [self.movieOutput stopRecording];
        }
        [self.captureSession removeInput:self.audioInput];
        [self.captureSession removeOutput:self.movieOutput];
        [self.captureSession addOutput:self.stillImageOutput];
    }
    else
    {
        if([self.captureSession canAddInput:self.audioInput])
        {
            // Video Mode
            [self.captureSession removeOutput:self.stillImageOutput];
            [self.captureSession addInput:self.audioInput];
            [self.captureSession addOutput:self.movieOutput];
        
            // Set orientation of capture connections to portrait
            NSArray *array = [[self.captureSession.outputs objectAtIndex:0] connections];
            for (AVCaptureConnection *connection in array)
            {
                connection.videoOrientation = AVCaptureVideoOrientationPortrait;
            }
        }
        else
        {
            self.modeControl.selectedSegmentIndex = 0;
            NSLog(@"User turned off access to microphone");
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Can't Access Audio"
                message:@"Verify microphone access is turned on in Settings->Privacy->Microphone"
                delegate:nil
                cancelButtonTitle:@"OK"
                otherButtonTitles:nil];
            [alert show];
            
        }
    }
    [self.captureButton setTitle:@"Capture" forState:UIControlStateNormal];
    
    [self.captureSession startRunning];
}

Now you are ready to build and run your app. You should be able to switch between taking photos and recording videos, and the results should be stored in the photos library of your device. Figure 9-14 shows the app in action. If you turn off access to the microphone under Settings ➤ Privacy ➤ Microphone, the app alerts you that it failed to add an audio input and tells you how to fix it.

9781430259596_Fig09-14.jpg

Figure 9-14. An app that can take photos and record videos
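
If you'd rather ask for microphone access up front, instead of discovering a denial only when the user switches to video mode, iOS 7's AVAudioSession provides the requestRecordPermission: method. A minimal sketch, which you might call from viewDidLoad, could look like this (the log statement is just a placeholder for whatever handling your app needs):

[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (!granted)
    {
        // The user declined microphone access; recordings will have no sound
        // unless access is re-enabled in Settings ➤ Privacy ➤ Microphone.
        NSLog(@"Microphone access denied");
    }
}];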

Recipe 9-8: Capturing Video Frames

For many applications that work with video, a thumbnail image is a useful way to represent a given clip. In this recipe, you extend the preceding recipe to generate and display a thumbnail image once a video has been recorded.

Start by adding the CoreMedia framework to your project; you'll use its CMTime functions when generating the thumbnail.

Next, add an image view to the lower-left corner of your main view’s user interface so that it resembles Figure 9-15. Once again, you will want to add some constraints to the image view by selecting “Add Missing Constraints” from the Resolve Auto Layout Issues menu in the lower-right corner of the storyboard window. This ensures it looks fine on a 3.5" screen. With the image view selected, add a width and a height constraint from the pin menu, as shown in Figure 9-16.

9781430259596_Fig09-15.jpg

Figure 9-15. The user interface with a thumbnail image view in the lower-left corner

9781430259596_Fig09-16.jpg

Figure 9-16. Adding width and height constraints

Add an outlet with the name “thumbnailImageView” for the image view so you can reference it later from your code.

Now, let’s get to the core of this recipe. The method in Listing 9-51 extracts an image from the halfway point of the movie and updates the thumbnail image view. It creates an asset from the video URL and an image generator from that asset, tells the generator to apply the track’s preferred transform so the image is rotated correctly, calculates the time of the frame to extract, and then generates the image asynchronously.

Listing 9-51.  Implementation of createThumbnailForVideoURL

-(void)createThumbnailForVideoURL:(NSURL *)videoURL
{
    AVURLAsset *myAsset = [[AVURLAsset alloc] initWithURL:videoURL
        options:@{AVURLAssetPreferPreciseDurationAndTimingKey : @YES}];
    
    AVAssetImageGenerator *imageGenerator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];
    //Make sure images are correctly rotated.
    imageGenerator.appliesPreferredTrackTransform = YES;
    Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
    CMTime half = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
    NSArray *times = [NSArray arrayWithObjects: [NSValue valueWithCMTime:half], nil];
    
    [imageGenerator generateCGImagesAsynchronouslyForTimes:times
        completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                            AVAssetImageGeneratorResult result, NSError *error)
        {
            if (result == AVAssetImageGeneratorSucceeded)
            {
                // The completion handler runs on a background queue, so create the
                // UIImage here but hop to the main queue before touching UIKit.
                UIImage *thumbnail = [UIImage imageWithCGImage:image];
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.thumbnailImageView.image = thumbnail;
                });
            }
            else if (result == AVAssetImageGeneratorFailed)
            {
                NSLog(@"Failed with error: %@", [error localizedDescription]);
            }
        }
    ];
}

Now all that’s left is to call the method when a video has been recorded, as shown in Listing 9-52. The only change is a single line that invokes createThumbnailForVideoURL: when a recording finishes successfully.

Listing 9-52.  Modifying the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr)
    {
        // A problem occurred: Find out if the recording was successful.
        id value =
            [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value)
            recordedSuccessfully = [value boolValue];
        // Logging the problem anyway:
        NSLog(@"A problem occurred while recording: %@", error);
    }
    if (recordedSuccessfully)
    {
        [self createThumbnailForVideoURL:outputFileURL];
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        
        [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
            completionBlock:^(NSURL *assetURL, NSError *error)
            {
                UIAlertView *alert;
                if (!error)
                {
                    alert = [[UIAlertView alloc] initWithTitle:@"Video Saved"
                        message:@"The movie was successfully saved to your photos library"
                        delegate:nil
                        cancelButtonTitle:@"OK"
                        otherButtonTitles:nil, nil];
                }
                else
                {
                    alert = [[UIAlertView alloc] initWithTitle:@"Error Saving Video"
                        message:@"The movie was not saved to your photos library"
                        delegate:nil
                        cancelButtonTitle:@"OK"
                        otherButtonTitles:nil, nil];
                }
              
                [alert show];
            }
         ];
    }
}

Now build and run your application and switch to the “Record Video” mode once it has launched. Record a video by tapping the “Capture” button twice (once to start and again to stop). A few seconds later, a thumbnail from halfway into your movie is displayed in the lower-left corner, as Figure 9-17 shows.

9781430259596_Fig09-17.jpg

Figure 9-17. Your app displaying a movie thumbnail in the lower-left corner

Note   As you might have noticed, the method used in Recipe 9-8 is rather slow; it usually takes a couple of seconds to extract the image. There are other approaches you might want to try. For example, you could add an AVCaptureStillImageOutput to your capture session (a session can have several output objects connected) and take a snapshot when the recording starts, using what you learned in Recipe 9-6. Another option is the AVCaptureVideoDataOutput mentioned earlier; with it you can grab a frame during capture and extract an image from it.
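
As an illustration of the first alternative, the following is a rough sketch of a snapshot-based thumbnail. It assumes you leave stillImageOutput attached to the session in video mode (instead of removing it in updateMode:) and that you call this method right after startRecordingToOutputFileURL:recordingDelegate:; the captureThumbnailSnapshot method name is ours, not part of the recipe.

- (void)captureThumbnailSnapshot
{
    AVCaptureConnection *connection =
        [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
        {
            if (!imageDataSampleBuffer)
            {
                NSLog(@"Snapshot error: %@", error);
                return;
            }
            NSData *imageData = [AVCaptureStillImageOutput
                jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *thumbnail = [UIImage imageWithData:imageData];
            // The completion handler may run on a background queue,
            // so update the image view on the main queue.
            dispatch_async(dispatch_get_main_queue(), ^{
                self.thumbnailImageView.image = thumbnail;
            });
        }];
}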

Recipe 9-9: Capturing Machine-Readable Codes

With iOS 7 we now have new functionality in the AVCaptureMetadataOutput class that makes reading machine-readable codes very easy to implement. Machine-readable codes are simply one-dimensional or two-dimensional barcodes, such as UPC-E or QR codes. Apple’s Passbook app, for example, uses this functionality to read various types of codes. Now you, too, can add this capability to your app with very little code. The class can read any of the following types of machine-readable codes:

One-Dimensional

  • UPC-E
  • EAN-8, EAN-13
  • Code 39 (with and without checksum)
  • Code 93
  • Code 128

Two-Dimensional

  • PDF417
  • QR
  • Aztec

To start, create a new single view application similar to the one in Recipe 9-5. As you did before, add the AssetsLibrary, AVFoundation, and CoreGraphics frameworks. Drag a label to the screen and arrange it as shown in Figure 9-18. Make the label three lines high and resize it to the width of the screen. Select the label and add some constraints by choosing “Add Missing Constraints” from the Resolve Auto Layout Issues menu in the lower-right corner of the storyboard editor.

9781430259596_Fig09-18.jpg

Figure 9-18. The finished interface

Next, create an outlet for the text label with the name “codeLabel.”

Now that you have a starting point set up, add a few properties as well as a protocol declaration to the view controller header file. The properties handle the capture session, the video input device, and the metadata output. The view controller also needs to adopt the AVCaptureMetadataOutputObjectsDelegate protocol so it can receive information when a barcode or QR code is scanned. Listing 9-53 shows the completed ViewController.h file.

Listing 9-53.  The completed ViewController.h file with required properties and delegate declaration

//
//  ViewController.h
//  Recipe 9-9 Capturing Machine-Readable Codes
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController : UIViewController <AVCaptureMetadataOutputObjectsDelegate>

@property (weak, nonatomic) IBOutlet UILabel *codeLabel;
@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInput;
@property (strong, nonatomic) AVCaptureMetadataOutput *metadataOutput;

@end

Next, we need to add some code to the viewDidLoad method and set up the delegate. Because there are a few different pieces to add to viewDidLoad, we’ll explain each piece first and then show the complete method when we’re done.

To begin, we create an AVCaptureSession and set up the video input device, as shown in Listing 9-54. As we did in the preceding two recipes, we check for errors and for the existence of the video input before adding it to the session.

Listing 9-54.  Adding an AVCaptureSession and video input

//
//  ViewController.m
//  Recipe 9-9 Capturing Machine-Readable Codes
//

//...
- (void)viewDidLoad
{

    [super viewDidLoad];

    self.captureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    
    NSError *error = nil;
    self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (self.videoInput)
    {
        [self.captureSession addInput:self.videoInput];
    }
    else
    {
        NSLog(@"Input Error: %@", error);
    }

//...
}

Next, allocate and initialize your metadataOutput property, and then set its delegate along with the dispatch queue on which the delegate callbacks are delivered. Because decoding barcodes and QR codes isn’t particularly intensive, the main queue is fine for handling these operations. You also need to specify which metadata types to look for. For this example, we will use the UPC-E and QR code types; you can expand this list as much as you please using any of the types listed earlier. Add the code in Listing 9-55 to the viewDidLoad method.

Listing 9-55.  Creating an AVCaptureMetadataOutput instance and setting the object types

self.metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.captureSession addOutput:self.metadataOutput];

[self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

self.metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeQRCode];
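
If you’d rather not hard-code the list, you could instead opt in to every type the output supports. This one-liner is a sketch; note that availableMetadataObjectTypes is populated only after the output has been added to the session, as it is here:

self.metadataOutput.metadataObjectTypes = self.metadataOutput.availableMetadataObjectTypes;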

As we did in Recipe 9-5, we’ll create a preview layer and add it to the view so we can see what the camera sees. Add the code in Listing 9-56 to the viewDidLoad method as well.

Listing 9-56.  Creating a preview layer to view the current camera view

AVCaptureVideoPreviewLayer *previewLayer =
[AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
UIView *aView = self.view;
previewLayer.frame =
CGRectMake(0, 20, self.view.frame.size.width,
           self.view.frame.size.height-70);
[aView.layer addSublayer:previewLayer];

When you are done, your viewDidLoad method should look like Listing 9-57.

Listing 9-57.  The completed viewDidLoad method

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    self.captureSession = [[AVCaptureSession alloc] init];
    
    AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    
    NSError *error = nil;
    self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (self.videoInput)
    {
        [self.captureSession addInput:self.videoInput];
    }
    else
    {
        NSLog(@"Input Error: %@", error);
    }
    
    self.metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    [self.captureSession addOutput:self.metadataOutput];
    
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    
    self.metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeQRCode];
    
    AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    UIView *aView = self.view;
    previewLayer.frame =
    CGRectMake(0, 20, self.view.frame.size.width,
               self.view.frame.size.height-70);
    [aView.layer addSublayer:previewLayer];
    
}

The only thing left is to implement the delegate method, which simply sets the text label to the type and value of the first code received. Add the code in Listing 9-58 to the implementation file.

Listing 9-58.  The captureOutput:didOutputMetadataObjects:fromConnection: method implementation

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    // Grab the first detected code and display its type and value.
    AVMetadataMachineReadableCodeObject *object = [metadataObjects firstObject];
    if (object)
    {
        self.codeLabel.text = [NSString stringWithFormat:@" Type - %@: Value - %@",
            object.type, object.stringValue];
    }
}

The last step is to start and stop the capture session in the viewWillAppear and viewWillDisappear methods, as we did in Recipe 9-5. Add the two methods shown in Listing 9-59 to the view controller.

Listing 9-59.  Adding the viewWillAppear and viewWillDisappear method implementations

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self.captureSession startRunning];
}
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [self.captureSession stopRunning];
}

That’s it! If you run the app and point the camera at a UPC-E barcode or a QR code, the result should look similar to Figure 9-19.

9781430259596_Fig09-19.jpg

Figure 9-19. The finished app displaying the code type and value

Summary

As a developer, you have a great deal of choice when it comes to dealing with your device’s camera. The predefined interfaces, such as UIImagePickerController and UIVideoEditorController, are incredibly useful and well designed, but Apple’s implementation of the AV Foundation framework allows for more possibilities. Everything from dealing with video and audio to handling barcodes and still images is possible. Even a quick glance at the full documentation reveals countless other functionalities not discussed here, including everything from device capabilities (such as the video camera’s LED “torch”) to the implementation of your own “Touch-To-Focus” functionality. We live in a world where images, audio, and video fly around the world in a matter of seconds, and as developers we must be able to design and create innovative solutions that fit in with our media-based community.
