Chapter 6

Camera Recipes

A great number of mobile applications interact with the device's camera: taking pictures, recording videos, and drawing overlays, as in augmented-reality applications. iOS developers have a great deal of control over how they interact with any given device's hardware. In this chapter, you will go over multiple ways to access and use these capabilities, from simple, predefined interfaces to highly flexible, custom implementations.

NOTE: The iOS simulator does not support camera hardware, so most recipes in this chapter must be run on a physical device to be tested.

Recipe 6–1: Taking Pictures

iOS has a handy, simple interface built in for using your device's camera through fairly standard, pre-defined settings. With it, you can build basic camera-focused apps that let users take pictures and video from inside an app. Here, you will go over the basics of presenting the camera interface in order to capture a still image.

You will again start by creating a new project using the Single View Application template, as shown in Figure 6–1.

Figure 6–1. Creating a single view application

You will name your project “Chapter6Recipe1”. Set the Class Prefix to Capture, as shown in Figure 6–2, and, if your version of Xcode includes it, make sure that Use Automatic Reference Counting is enabled.

Figure 6–2. Specifying your project settings

Click Next and then Create to start the project.

To access your camera in the pre-defined way you will be using, you do not need to import any extra frameworks into your project, so you can proceed straight to building your user interface.

You will be making a simple project that will allow you to pull up your camera, take a picture, and then display the most recently taken image on your screen.

First, you need to switch over to your view controller's .xib file, which, if you used all the same names as shown in Figure 6–3, will be called CaptureViewController.xib. From the object library on the right-hand side of the screen, drag a UIImageView into your view and resize it to fill the entire view. Next, drag a UIButton into the view; it will be used to access the camera, so set its title to “Change Image” by double-clicking the button's text.

Figure 6–3. CaptureViewController's user interface

Next, make sure that you are in Assistant Editor mode by selecting the middle “editing” button in the top-right area of your screen, as shown in Figure 6–4. This allows you to view your interface file and your header file at the same time and work on both together.

Figure 6–4. Selecting the Assistant Editor

Once in the Assistant Editor mode, create an outlet for your UIImageView called “imageViewRecent”. Repeat this process for your UIButton, naming it “cameraButton”. Figure 6–5 shows the resulting window of these operations.

Figure 6–5. Completed user interface with connected outlets and actions

Before you leave this setup, you will declare an action for your button to perform when it is pressed, called -cameraButtonPressed:, by adding a method declaration to your header file. Then connect your UIButton to it by holding ^ (Ctrl), clicking the button, dragging a blue line up to the method name in the header file, and releasing. When you do this, the method declaration becomes highlighted, and a small message box displays “Connect Action”. Your method declaration will look like so:

- (IBAction)cameraButtonPressed:(id)sender;

Now that your interface file is put together, you can switch over to view your CaptureViewController.m file.

The first thing you will do for simple aesthetics' sake is to change the background color of your UIImageView to gray so that it is easier to distinguish from your UIButton when there is no image set as the background. To do this, place the following line of code into the -viewDidLoad method.

self.imageViewRecent.backgroundColor = [UIColor lightGrayColor];

Next, you will be handling what your app will do when your Change Image button is pressed. Since your button is already hooked up to perform your -cameraButtonPressed: method, all you need to do is implement this method. You will be using an instance of the UIImagePickerController class to access your camera.

Whenever dealing with the camera hardware on iOS, it is essential that you, as the developer, have your app check for hardware availability. This is done through the UIImagePickerController class method +isSourceTypeAvailable:, which takes one of the following pre-defined constants as its argument:

UIImagePickerControllerSourceTypeCamera
UIImagePickerControllerSourceTypePhotoLibrary
UIImagePickerControllerSourceTypeSavedPhotosAlbum

You will be using the first choice here, UIImagePickerControllerSourceTypeCamera. UIImagePickerControllerSourceTypePhotoLibrary is used to access all the stored photos on the device, while UIImagePickerControllerSourceTypeSavedPhotosAlbum is used to access only the Camera Roll album.
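
As an aside, an app that should degrade gracefully, for example when run in the simulator, could fall back to the photo library rather than failing outright. A minimal sketch of that alternative, not used in this recipe, assuming an already-created UIImagePickerController named imagePicker:

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
{
    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
}
else
{
    // No camera available; browse existing photos instead.
    imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
}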

For your application, you will check whether the Camera source type is available and, if not, display a UIAlertView saying so, using the following code in your -cameraButtonPressed: method.

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
{
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Camera Unavailable" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:nil];
    [alert show];
    return;
}

If you are running this application in the simulator, you will notice that this condition always evaluates to true, resulting in your error message, as demonstrated in Figure 6–6. As mentioned earlier, the simulator has no camera functionality, so in order to fully test this application, you will need to test it on a physical device.

Figure 6–6. Simulation of your app, which does not support camera use

Now you can handle the case that you actually hope for, in which the Camera is available. First, you will create an instance of UIImagePickerController, naming it imagePicker.

    UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];

Next, you will set the delegate and source type of imagePicker to be your view controller and UIImagePickerControllerSourceTypeCamera, respectively.

imagePicker.delegate = self;
imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;

Finally, you will present your UIImagePickerController modally.

[self presentModalViewController:imagePicker animated:YES];

In its entirety, your method to handle your button presses should now look like so:

-(IBAction)cameraButtonPressed:(id)sender
{
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Camera Unavailable" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:nil];
        [alert show];
        return;
    }
    UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
    imagePicker.delegate = self;
    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
    [self presentModalViewController:imagePicker animated:YES];
}

By setting your view controller as the delegate of your UIImagePickerController, you are required to ensure that it conforms to two protocols, specifically UIImagePickerControllerDelegate and UINavigationControllerDelegate. You will add these to your class's header file, so that it now looks like so:

#import <UIKit/UIKit.h>
@interface CaptureViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
{
    UIImageView *imageViewRecent;
    UIButton *cameraButton;
}

@property (strong, nonatomic) IBOutlet UIImageView *imageViewRecent;
@property (strong, nonatomic) IBOutlet UIButton *cameraButton;
-(IBAction)cameraButtonPressed:(id)sender;
@end

Now that you have set up your view controller to successfully present your UIImagePickerController, you need to handle what happens when the UIImagePickerController finishes its selection, that is, when a picture has been taken and chosen for use. You do this through the delegate method -imagePickerController:didFinishPickingMediaWithInfo:, which hands the delegate an NSDictionary called info whose keys refer to the selected media.

First, you create an instance of UIImage to point to the selected image.

UIImage *originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];

Next, you will save this image to your device's photo album so that the picture is usable outside of this app. Alternatively, if you would rather not accumulate pictures while testing, you can comment out this line:

UIImageWriteToSavedPhotosAlbum(originalImage, nil, nil, nil);

Now you set your UIImageView's image to be the chosen image, and also change the content mode of the UIImageView.

self.imageViewRecent.image = originalImage;
self.imageViewRecent.contentMode = UIViewContentModeScaleAspectFill;

NOTE: The UIImagePickerController class does not support landscape orientation for taking pictures. You compensate for this by changing the contentMode of your UIImageView to UIViewContentModeScaleAspectFill so that your image fills the screen. Alternatively, UIViewContentModeScaleAspectFit could also be used to fit the entire landscape image on the screen, though it will not fill the view.

Finally, you will dismiss your UIImagePickerController.

[self dismissModalViewControllerAnimated:YES];

As a whole, your delegate method's implementation will look like so:

- (void) imagePickerController: (UIImagePickerController *) picker
 didFinishPickingMediaWithInfo: (NSDictionary *) info
{
    UIImage *originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
    UIImageWriteToSavedPhotosAlbum(originalImage, nil, nil, nil);
    self.imageViewRecent.image = originalImage;
    self.imageViewRecent.contentMode = UIViewContentModeScaleAspectFill;
    [self dismissModalViewControllerAnimated:YES];
}

You will also implement another UIImagePickerController delegate method to handle the cancellation of an image selection:

- (void) imagePickerControllerDidCancel: (UIImagePickerController *) picker
{
    [self dismissModalViewControllerAnimated:YES];
}

As an optional setting, you could also make the camera interface editable, allowing the user to crop and frame the picture he or she has taken. To do this, you simply set the UIImagePickerController's allowsEditing property to YES (a one-line sketch of where this goes follows the next code block), and, to acquire the edited image, you replace the first three lines of code in your previous -imagePickerController:didFinishPickingMediaWithInfo: method with the following lines:

UIImage *editedImage = (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
UIImageWriteToSavedPhotosAlbum(editedImage, nil, nil, nil);
self.imageViewRecent.image = editedImage;
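
The allowsEditing flag itself is set on the picker before it is presented, for example in -cameraButtonPressed:, like so:

imagePicker.allowsEditing = YES;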

Assuming that you are able to run this app on a physical device, your app should now be able to correctly access the device's camera, select a picture, and set it as a background, as shown in Figure 6–7.

Figure 6–7. Your app with a photo set as the background

Recipe 6–2: Recording Video

Your UIImagePickerController is actually significantly more flexible than the previous recipe suggests, where you used it almost exclusively for still images. Here, you'll go through how to set up your UIImagePickerController to handle both still images and video.

For this recipe, you will be building off of the code that you have already set up in the previous recipe, as it already includes the entire setup that you need. Your app will have the added functionality of being able to record and save videos.

First, you need to edit the properties of your UIImagePickerController to specify the allowable media types, using the UIImagePickerController class method +availableMediaTypesForSourceType:, so that your -cameraButtonPressed: will now look like so:

-(IBAction)cameraButtonPressed:(id)sender
{
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] == NO)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"Camera Unavailable" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:nil];
        [alert show];
        return;
    }
    UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
    imagePicker.delegate = self;
    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
    imagePicker.mediaTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
    [self presentModalViewController:imagePicker animated:YES];
}

Next, you need to instruct your application on how to handle when a user records and uses a video. You will add the following code to your UIImagePickerController's delegate method:

NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];

if (CFStringCompare((__bridge CFStringRef)mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo)
{
    NSString *moviePath = [[info objectForKey:UIImagePickerControllerMediaURL] path];

    if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(moviePath))
    {
        UISaveVideoAtPathToSavedPhotosAlbum(moviePath, nil, nil, nil);
    }
}

The first thing you will probably notice is an error on kUTTypeMovie, saying that it is undefined. To fix this, you need to link the Mobile Core Services framework into your project and then add the following import statement to your view controller's header file. The beginning of Recipe 6–5 contains a detailed demonstration of how to add a framework to a project if you are unfamiliar with this process.

#import <MobileCoreServices/MobileCoreServices.h>

Essentially, all you are doing here is comparing the saved file's media type against kUTTypeMovie. The wrinkle is that mediaType is an NSString, while kUTTypeMovie is of the C type CFStringRef, so you cast the NSString down to a CFStringRef. iOS 5 made this slightly more involved with the introduction of Automatic Reference Counting (ARC), because ARC manages Objective-C object types such as NSString but not Core Foundation types like CFStringRef. Placing “__bridge” before the CFStringRef creates a bridged cast, which converts the pointer without transferring ownership, so ARC continues to manage the original NSString.

If all has gone well, your app should now be able to record video!

Recipe 6–3: Editing Videos

While your UIImagePickerController offers an extremely convenient way to record and save video files, it does nothing to allow you to edit them. Luckily, iOS has another built-in controller called UIVideoEditorController, which you will use to allow your recorded videos to be edited.

You will build this fairly simple recipe off of your second project, in which you added video functionality to your UIImagePickerController.

First, you will make a second button in your view controller's interface file, giving it the title “Edit Video”, and the name editButton. You will also hook it up to an action, -editButtonPressed:, as shown in Figure 6–8.

Figure 6–8. New user interface with editing button

Next, you define an NSString property to store the path to your most recently selected or edited video, as shown on the right in Figure 6–8, taking care to @synthesize it and to set it to nil in -viewDidUnload:

@property (nonatomic, strong) NSString *recentMovie;
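
In the implementation file, the matching housekeeping would look something like this:

@synthesize recentMovie;

- (void)viewDidUnload
{
    [super viewDidUnload];
    self.recentMovie = nil;
}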

Next, store the newly created video's path by adding the following line to your UIImagePickerController's delegate method:

self.recentMovie = moviePath;

Your delegate method now looks like so:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];

    if (CFStringCompare((__bridge CFStringRef)mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo)
    {
        NSString *moviePath = [[info objectForKey:UIImagePickerControllerMediaURL] path];
        self.recentMovie = moviePath;

        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(moviePath))
        {
            UISaveVideoAtPathToSavedPhotosAlbum(moviePath, nil, nil, nil);
        }
    }
    else
    {
        UIImage *originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
        UIImageWriteToSavedPhotosAlbum(originalImage, nil, nil, nil);
        self.imageViewRecent.image = originalImage;
    }
    [self dismissModalViewControllerAnimated:YES];
}

Now you will implement your -editButtonPressed: action to display a video for editing if one exists, or otherwise to display an alert view telling the user that no video has been recorded yet.

-(IBAction)editButtonPressed:(id)sender
{
    if (self.recentMovie)
    {
        UIVideoEditorController *editor = [[UIVideoEditorController alloc] init];
        editor.videoPath = self.recentMovie;
        editor.delegate = self;
        [self presentModalViewController:editor animated:YES];
    }
    else
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"No Video Recorded Yet" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:nil];
        [alert show];
    }
}

Keep in mind that at this point you will need to make sure your view controller is listed as conforming to the UIVideoEditorControllerDelegate and UINavigationControllerDelegate protocols in your header file.
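
Assuming the class from the earlier recipes, the @interface line would now read:

@interface CaptureViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIVideoEditorControllerDelegate>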

Finally, you only need to implement a few delegate methods for your UIVideoEditorController. First, here is a delegate method to handle a successful editing/trimming of the video:

-(void)videoEditorController:(UIVideoEditorController *)editor
didSaveEditedVideoToPath:(NSString *)editedVideoPath
{
    self.recentMovie = editedVideoPath;
    if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum (editedVideoPath))
    {
        UISaveVideoAtPathToSavedPhotosAlbum (editedVideoPath, nil, nil, nil);
    }
    [self dismissModalViewControllerAnimated:YES];
}

As you can see, your application sets the newly edited video as the next video to be edited, so you can create increasingly trimmed clips. It also saves each edited version to your photo album, when possible.

Lastly, you need one more delegate method to handle the cancellation of your UIVideoEditorController.

-(void)videoEditorControllerDidCancel:(UIVideoEditorController *)editor
{
    [self dismissModalViewControllerAnimated:YES];
}

Upon testing on a physical device, your application should now successfully allow you to edit your videos! Figure 6–9 shows a view of your application giving you the option to edit a recorded video.

Figure 6–9. View seen while recording video

Recipe 6–4: Custom Camera Overlays

Quite a variety of applications implement the camera interface but add a custom overlay, whether to display constellations over the sky or simply to provide their own camera controls. Here, you will learn to place a basic custom overlay over your camera's screen, continuing from your previous recipe's project.

You will build your custom overlay UIView programmatically, meaning you will not be using a XIB interface. Because you will be adjusting layer properties of the buttons you add, the first thing you need to do is import QuartzCore into your project by adding an import statement to your header file.

#import <QuartzCore/QuartzCore.h>

You will create a method, -customView:, that takes your UIImagePickerController as an argument and returns your overlay UIView.

-(UIView *)customView:(UIImagePickerController *)imagePicker
{
    UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];
    view.backgroundColor = [UIColor clearColor];

    UIButton *flashButton = [[UIButton alloc] initWithFrame:CGRectMake(10, 10, 120, 44)];
    flashButton.backgroundColor = [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [flashButton setTitle:@"Flash Auto" forState:UIControlStateNormal];
    [flashButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
    flashButton.layer.cornerRadius = 10.0;

    UIButton *changeCameraButton = [[UIButton alloc] initWithFrame:CGRectMake(190, 10, 120, 44)];
    changeCameraButton.backgroundColor = [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [changeCameraButton setTitle:@"Rear Camera" forState:UIControlStateNormal];
    [changeCameraButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
    changeCameraButton.layer.cornerRadius = 10.0;

    UIButton *takePictureButton = [[UIButton alloc] initWithFrame:CGRectMake(100, 432, 120, 44)];
    takePictureButton.backgroundColor = [UIColor colorWithRed:.5 green:.5 blue:.5 alpha:.5];
    [takePictureButton setTitle:@"Click!" forState:UIControlStateNormal];
    [takePictureButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
    takePictureButton.layer.cornerRadius = 10.0;

    [flashButton addTarget:self action:@selector(toggleFlash:) forControlEvents:UIControlEventTouchUpInside];
    [changeCameraButton addTarget:self action:@selector(toggleCamera:) forControlEvents:UIControlEventTouchUpInside];
    [takePictureButton addTarget:imagePicker action:@selector(takePicture) forControlEvents:UIControlEventTouchUpInside];

    [view addSubview:flashButton];
    [view addSubview:changeCameraButton];
    [view addSubview:takePictureButton];

    return view;
}

Here, you have defined your UIView and the buttons inside it, given the buttons their actions to perform, and added them into the view. Each button's title is either its starting value or its purpose, and the cornerRadius setting gives the buttons rounded corners. One of the most important details is that the buttons are semi-transparent: they will be placed over your camera's display, and you do not want to cover up any of the picture, so they have to be at least partially see-through.

Now you simply need to implement the two toggling methods, -toggleCamera: and -toggleFlash:. You will need a few extra instance variables to handle these properly: two BOOLs to keep track of your settings, and a pointer to a UIImagePickerController so you can pass your camera interface around. Your view controller's header file should now look like so:

#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>
#import <QuartzCore/QuartzCore.h> //Need this!

@interface CaptureViewController : UIViewController <UIImagePickerControllerDelegate,
UINavigationControllerDelegate>
{
    UIImageView *imageViewRecent;
    UIButton *cameraButton;
    UIImagePickerController *currentPicker;
    BOOL flashOn;
    BOOL frontCameraUsed;
}

@property (strong, nonatomic) IBOutlet UIImageView *imageViewRecent;
@property (strong, nonatomic) IBOutlet UIButton *cameraButton;

-(IBAction)cameraButtonPressed:(id)sender;
@end

Next, add the following line to -cameraButtonPressed: after imagePicker is created.

currentPicker = imagePicker;

Also add the following line to the -imagePickerController:didFinishPickingMediaWithInfo: delegate method in order to “release” currentPicker's value. This can go at the very end of the method, after the view controller has been dismissed.

currentPicker = nil;

Now you can successfully define your -toggleFlash: and -toggleCamera: methods like so:

-(void)toggleFlash:(UIButton *)sender
{
    if (flashOn)
    {
        currentPicker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOff;
        flashOn = NO;
        [sender setTitle:@"Flash Off" forState:UIControlStateNormal];
    }
    else
    {
        currentPicker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOn;
        flashOn = YES;
        [sender setTitle:@"Flash On" forState:UIControlStateNormal];
    }
}
-(void)toggleCamera:(UIButton *)sender
{
    if (frontCameraUsed)
    {
        currentPicker.cameraDevice = UIImagePickerControllerCameraDeviceRear;
        frontCameraUsed = NO;
        [sender setTitle:@"Rear Camera" forState:UIControlStateNormal];
    }
    else
    {
        currentPicker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
        frontCameraUsed = YES;
        [sender setTitle:@"Front Camera" forState:UIControlStateNormal];
    }
}

Finally, you need to hide the default camera controls and install your overlay by adding two more lines to your -cameraButtonPressed: method, just before your controller is presented.

imagePicker.showsCameraControls = NO;
imagePicker.cameraOverlayView = [self customView:imagePicker];

Your camera should now have a wonderful little overlay with a couple of buttons that change how it works, as in Figure 6–10. From here, you can create your own custom overlays and easily adapt them to fit nearly any situation.

Figure 6–10. Custom overlay view over a camera

Recipe 6–5: AV Framework and Capture Sessions

While the UIImagePickerController and UIVideoEditorController interfaces are incredibly useful, they certainly aren't as customizable as they could be. Using the AV Foundation framework, you can create a far more customizable camera interface. Here, you will build essentially your own version of the camera, using a different approach known as an AVCaptureSession, in such a way that further customization is easy.

First, you will create a new project, called “Chapter6Recipe5”, using a class prefix of “CustomCamera”.

You will need a variety of different frameworks linked to your project. Navigate to the project's main settings, select CustomCamera under Targets, and then switch to the Build Phases tab, as shown in Figure 6–11.

Figure 6–11. Preparing to add frameworks to your project

Under Link Binary With Libraries, you will use the + button to add several other frameworks. Search for and add the following frameworks:

  • AV Foundation
  • Core Graphics
  • Core Video
  • Core Media

You will actually not need to type any import statements for any of these except for the AV Foundation one. Go ahead and add the following import lines to the header file of your main view controller.

#import <AVFoundation/AVFoundation.h>
#import <AVFoundation/AVCaptureInput.h>

Next, you'll switch over to your view controller's .xib file to do a bit of quick setup. Here, all you need to do is drag a UIButton over to your view, and then connect it to your header file, naming it startButton. You need to make an action for this button to perform, so you will declare the following method header, and connect your startButton to it.

-(IBAction)startPressed:(id)sender;

Once these changes have been made, your user interface and code should resemble Figure 6–12 in the Assistant Editor.

Figure 6–12. User interface with configured outlet and action

Next, switch from the XIB back to your view controller's header and implementation files, and define a new property for your view controller, like so:

@property (strong, nonatomic) AVCaptureSession *captureSession;

As always, remember to “@synthesize captureSession;” in your implementation file and to set “self.captureSession = nil” in -viewDidUnload.
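
A minimal sketch of that housekeeping:

@synthesize captureSession;

- (void)viewDidUnload
{
    [super viewDidUnload];
    self.captureSession = nil;
}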

Next, you will be writing your -viewDidLoad method to prepare your view, create your AVCaptureSession, and set it up as you desire. You will add the following sets of lines to your method after the call [super viewDidLoad];.

First, you must create your AVCaptureSession, and give it a resolution preset, like so:

self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

Next, you will create an instance of AVCaptureDevice to specify your input device, which in this case will be the device's rear camera (assuming one is accessible). You do this through the AVCaptureDevice class method +defaultDeviceWithMediaType:, which takes a constant naming the type of media desired, most prominently AVMediaTypeVideo or AVMediaTypeAudio.

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

Next, you need to create an instance of AVCaptureDeviceInput in order to specify your chosen device as an input for your capture session. You will also include a check to make sure the input has been correctly created before adding it to your session.

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input)
{
    NSLog(@"Input Error");
}
else
{
    [self.captureSession addInput:input];
}

Next, you will set up an output for your capture session, like so:

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[self.captureSession addOutput:output];
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

Here, you have created an AVCaptureVideoDataOutput, which is commonly used when a developer's goal is to deal with raw video input frame-by-frame, as opposed to simply saving a video as a whole. Other types of AVCaptureOutputs include the following:

  • AVCaptureMovieFileOutput: Used for saving whole video files
  • AVCaptureAudioDataOutput: Used for processing audio data
  • AVCaptureStillImageOutput: Used for extracting specific still images from a session (This type of output could also be used to perform your current goal.)

You have also created a serial dispatch queue on which the processing of your frames will take place; you will see this in action shortly.

The last part of your -viewDidLoad will be the creation of an AVCaptureVideoPreviewLayer, with which you will be able to see exactly what your camera is viewing in the app. You will add the preview layer as a sublayer of your main view's layer, with a slightly reduced height so that it does not block your button from being visible.

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
UIView *aView = self.view;
previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
[aView.layer addSublayer:previewLayer];

A Note on AVCaptureVideoPreviewLayer

The most significant thing about an AVCaptureVideoPreviewLayer is not its visual output but how freely it can be manipulated. Just like any other CALayer, it can be repositioned, rotated, and resized. You are no longer bound to using the entire screen to record video, as you are with UIImagePickerController, meaning you could have your preview layer in one part of the screen and other information for the user in another. As with almost every part of iOS development, the possibilities are limited only by the developer's imagination.
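
For instance, here is a small sketch of how the layer might be shrunk and tilted; the frame and angle are arbitrary illustration values:

previewLayer.frame = CGRectMake(20, 20, 160, 240);
// Rotate the layer 22.5 degrees around the z-axis, as with any CALayer.
previewLayer.transform = CATransform3DMakeRotation(M_PI / 8.0, 0.0, 0.0, 1.0);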

Since you set your view controller as the delegate for your AVCaptureVideoDataOutput, you will need to implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Your header file should now look like so:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AVFoundation/AVCaptureInput.h>

@interface CustomCameraViewController : UIViewController
<AVCaptureVideoDataOutputSampleBufferDelegate>{
    UIImageView *imageViewDisplay;
    UIButton *startButton;
    BOOL capture;
}

@property (strong, nonatomic) IBOutlet UIButton *startButton;
@property (strong, nonatomic) AVCaptureSession *captureSession;
-(IBAction)startPressed:(id)sender;
@end

Note also the new BOOL instance variable, capture. You will be using this to keep track of whether your session's frames should be processed.

Next, you will implement your AVCaptureVideoDataOutput's sample buffer delegate method, like so:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    if (capture)
    {
        UIImage *chosenImage = [self imageFromSampleBuffer:sampleBuffer];
        UIImageWriteToSavedPhotosAlbum (chosenImage, nil, nil , nil);
        capture = NO;
    }
}

As you can see, you simply check whether your capture BOOL evaluates to YES, and if so, you acquire the image of the given video frame and save it to your device's photo album. By setting capture to NO immediately afterward, you limit your app to capturing only one frame per button press. It should be fairly easy to see how this could be expanded to capture any number of frames.
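
For example, here is a minimal sketch of that expansion, assuming a hypothetical NSInteger instance variable named framesRemaining that -startPressed: sets to the desired frame count:

if (framesRemaining > 0)
{
    // Keep saving frames until the requested count is exhausted.
    UIImage *chosenImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    framesRemaining--;
}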

You must also define your -imageFromSampleBuffer: that you just used, as follows, making sure that it is defined above your delegate method.

- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Now, you will implement a fairly simple capturing toggle method to handle your button presses.

-(void)startPressed:(id)sender
{
    if (!capture)
    {
        capture = YES;
    }
    else
    {
        capture = NO;
    }
}

One of the most important steps to remember is to actually start and stop your AVCaptureSession. Since you want your camera's display to be visible any time the app is open, you will start your session in your -viewWillAppear: method and stop it in your -viewWillDisappear: method, which will now look like so:

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self.captureSession startRunning];
}
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [self.captureSession stopRunning];
}

If you run this app on your device now, you will probably notice that all the saved images look as though they were taken in landscape mode and then rotated to fit portrait, no longer filling the entire screen. You fix this by setting the video orientation of your session's output connections in your -viewWillAppear: method, like so:

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self.captureSession startRunning];

    NSArray *array = [[self.captureSession.outputs objectAtIndex:0] connections];
    for (AVCaptureConnection *connection in array)
    {
        connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
}

If you run the app on your device now, you will see a preview of what your camera is recording, and whenever you press the button, the current frame is saved to your photo library, as in Figure 6–13. While you haven't included any fancy animations to make it look like a camera, this is incredibly useful as a basic camera, especially given your new ability to customize its behavior frame by frame, or for a whole video if you add a second output.

Figure 6–13. Your app displaying a preview of the camera's view

Recipe 6–6: Programmatically Recording Video

Now that you have covered some of the basics of using AVFoundation, you will implement a slightly more complicated project. This time, your application will record full video rather than capturing specific frames. You will also add an audio input device so that your video has sound.

First, you must make your new project, this time titled “Chapter6Recipe6”, with class prefix “CustomVideo”.

As usual, the first thing to do is to acquire the following necessary frameworks using the Link Binary With Libraries section in your project's Build Phases tab.

  • AV Foundation: You will use this to deal with your camera and microphone.
  • Assets Library: This is for saving the video that you will record to your device.

In Interface Builder, add a UIButton to the bottom center of the view controller's view, just like in the previous recipe, giving it the default label “Record”. When you connect this to the header file, give the UIButton the name button. Be sure to also create an action for this button to perform called -recordPressed: (see earlier recipes on how to do this).

Just like in your previous recipe, you will be building your AVCaptureSession to manage your camera's input and the output of your video, so you will start off by adding a few variables, properties, and protocols to your view controller's header file.

  • First, you will need an instance variable of type BOOL called recording, which will simply keep track of whether your video is recording.
  • Second, you need to create two properties that will be used to store pointers to your AVCaptureSession session and your AVCaptureMovieFileOutput output, the latter of which will be your AVCaptureOutput device.
  • Third, you need to include an import statement for the AV Foundation framework in your header file.
  • Finally, you will need to tell the compiler that your view controller conforms to the AVCaptureFileOutputRecordingDelegate protocol.

With all these changes, your view controller's header file will now look like so:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>

@interface CustomVideoViewController : UIViewController
<AVCaptureFileOutputRecordingDelegate>{
    UIButton *button;
    BOOL recording;
}

@property (strong, nonatomic) IBOutlet UIButton *button;

@property (strong, nonatomic) AVCaptureSession *session;
@property (strong, nonatomic) AVCaptureMovieFileOutput *output;

-(IBAction)recordPressed:(id)sender;
@end

Now that your header file is all set up, you will switch over to your implementation file.

First, since you have set up two of your own properties, session and output, remember to @synthesize both of them at the top of your implementation file and to set them both to nil in your -viewDidUnload method.
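
A minimal sketch of those two steps:

@synthesize session;
@synthesize output;

- (void)viewDidUnload
{
    [super viewDidUnload];
    self.session = nil;
    self.output = nil;
}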

Next, as with the previous recipe, you will start to build your -viewDidLoad method.

First, you must allocate your AVCaptureSession, like so:

self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetMedium;

Next, you will create two instances of AVCaptureDevice, one for your rear camera, and one for your microphone, and then create AVCaptureDeviceInputs for each of them and add them to your session.

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *mic = [[AVCaptureDeviceInput alloc] initWithDevice:[devices objectAtIndex:0] error:nil];

if (!input || !mic)
{
    NSLog(@"Input Error");
}
else
{
    [self.session addInput:input];
    [self.session addInput:mic];
}

Now you create your AVCaptureOutput and add it to the session. Note that an output's connections are not created until the output has been added to a session, so you add the output first and then set each connection's video orientation.

self.output = [[AVCaptureMovieFileOutput alloc] init];

if ([self.session canAddOutput:self.output])
    [self.session addOutput:self.output];

// The output's connections exist only once it has been added to the session.
NSArray *connections = self.output.connections;
for (AVCaptureConnection *connection in connections)
{
    if ([connection isVideoOrientationSupported])
        connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}

You will need to be able to see your camera's view, so you'll set up an instance of AVCaptureVideoPreviewLayer.

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
UIView *aView = self.view;
previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
[aView.layer addSublayer:previewLayer];

Finally, you simply need to start your AVCaptureSession. You'll also set your recording instance variable to NO just to ensure it has the correct starting value.

[self.session startRunning];
recording = NO;

In its entirety, your -viewDidLoad method now looks like so:

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetMedium;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *mic = [[AVCaptureDeviceInput alloc] initWithDevice:[devices objectAtIndex:0] error:nil];

    if (!input || !mic)
    {
        NSLog(@"Input Error");
    }
    else
    {
        [self.session addInput:input];
        [self.session addInput:mic];
    }

    self.output = [[AVCaptureMovieFileOutput alloc] init];

    if ([self.session canAddOutput:self.output])
        [self.session addOutput:self.output];

    // The output's connections exist only once it has been added to the session.
    NSArray *connections = self.output.connections;
    for (AVCaptureConnection *connection in connections)
    {
        if ([connection isVideoOrientationSupported])
            connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    UIView *aView = self.view;
    previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
    [aView.layer addSublayer:previewLayer];

    [self.session startRunning];
    recording = NO;
}

TIP: Adding sound to your videos is not a complicated process with this approach! All that is required is to add the audio input device to your session, and the AVCaptureSession will do the rest for you.

You will of course need to define how your application handles when the user presses your UIButton. This will be a fairly simple toggle function to start and stop your AVCaptureOutput.

-(IBAction)recordPressed:(id)sender
{
    if (!recording)
    {
        [self.button setTitle:@"Stop" forState:UIControlStateNormal];
        recording = YES;
        NSURL *fileURL = [self tempFileURL];
        [self.output startRecordingToOutputFileURL:fileURL recordingDelegate:self];
    }
    else
    {
        [self.button setTitle:@"Record" forState:UIControlStateNormal];
        [self.output stopRecording];
        recording = NO;
    }
}

You probably noticed the call to the method -tempFileURL, used to give your AVCaptureOutput somewhere to record. This method, in short, returns a path at which your recorded video is temporarily saved on your device. If there is already a file saved at that location, it deletes the file first. (This way, you never use more than one video's worth of disk space.)

- (NSURL *)tempFileURL
{
    NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@", NSTemporaryDirectory(), @"output.mov"];
    NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
    NSFileManager *manager = [[NSFileManager alloc] init];
    if ([manager fileExistsAtPath:outputPath])
    {
        [manager removeItemAtPath:outputPath error:nil];
    }
    return outputURL;
}

The last major step is to implement your AVCaptureMovieFileOutput's delegate method. It checks whether there were any errors in recording the video to a file, and then saves your video file into your Assets Library.

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {

    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

    [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
                                completionBlock:^(NSURL *assetURL, NSError *error)
    {
        if (error)
        {
            NSLog(@"Error writing");
        }
    }];
}

Finally, to improve your app's design, you will make -viewWillAppear: and -viewWillDisappear: responsible for starting and stopping the session, so that you don't end up with a session running in the background, or stopped while its preview is visible.

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    if (![self.session isRunning])
    {
        [self.session startRunning];
    }
}
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    if ([self.session isRunning])
    {
        [self.session stopRunning];
    }
}

This app will look almost identical to the previous recipe's app, the main difference being the type of media saved. In the previous recipe, you were saving individual frames as images to your library, but here you will be saving recorded video with sound.

Recipe 6–7: Capturing Video Frames

For a large number of applications that utilize videos, a thumbnail image is often used to represent a given video. Building on your previous recipe, you will add the ability to create a thumbnail image from a specific point in your video. You will implement this in two different ways, one based on the AVCaptureSession, and the other based on your saved video.

First, you will capture a still image from your AVCaptureSession using a different type of AVCaptureOutput, known as an AVCaptureStillImageOutput.

TIP: This first method of taking images automatically plays a shutter sound when an image is captured!

First, you will add a couple of UIImageViews to your view controller's XIB file, which will be used to display the still images that you capture. You will connect them to your header file as usual, naming them imageViewThumb and imageViewThumb2. If desired, you can set their background color to something other than the default color so that they can be distinguished from your view before your images are put in them. Figure 6–14 shows your resulting XIB file.

Figure 6–14. Setting up your user interface for thumbnails

Next, you will need a property to keep track of your AVCaptureStillImageOutput, which you will declare and synthesize in your view controller as stillImageOutput, making sure to set it to nil in your -viewDidUnload.
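
Assuming the outlet names given earlier, the new declarations in your header would look something like this:

@property (strong, nonatomic) IBOutlet UIImageView *imageViewThumb;
@property (strong, nonatomic) IBOutlet UIImageView *imageViewThumb2;
@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;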

Your third step is to add this output to your AVCaptureSession in your -viewDidLoad method, which will now look like so:

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.imageViewThumb.backgroundColor = [UIColor whiteColor];

    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *mic = [[AVCaptureDeviceInput alloc] initWithDevice:[devices objectAtIndex:0] error:nil];

    if (!input || !mic)
    {
        NSLog(@"Input Error");
    }
    else
    {
        [self.session addInput:input];
        [self.session addInput:mic];
    }

    self.output = [[AVCaptureMovieFileOutput alloc] init];

    if ([self.session canAddOutput:self.output])
        [self.session addOutput:self.output];

    // The output's connections exist only once it has been added to the session.
    NSArray *connections = self.output.connections;
    for (AVCaptureConnection *connection in connections)
    {
        if ([connection isVideoOrientationSupported])
            connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }

    /////////NEW STILL IMAGE OUTPUT CODE
    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [self.stillImageOutput setOutputSettings:outputSettings];

    if ([self.session canAddOutput:self.stillImageOutput])
    {
        [self.session addOutput:self.stillImageOutput];
    }
    else
    {
        NSLog(@"Unable to add still image output");
    }
    /////////END OF NEW STILL IMAGE OUTPUT CODE

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    UIView *aView = self.view;
    previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height-70);
    [aView.layer addSublayer:previewLayer];

    [self.session startRunning];
    recording = NO;
}

Next, you will define the method that will actually capture your image and save it to your device.

- (void)captureStillImage
{
    AVCaptureConnection *stillImageConnection = [self.stillImageOutput.connections objectAtIndex:0];
    if ([stillImageConnection isVideoOrientationSupported])
        [stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
     {
         ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error)
         {};

         if (imageDataSampleBuffer != NULL)
         {
             NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
             ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

             UIImage *image = [[UIImage alloc] initWithData:imageData];
             self.imageViewThumb.image = image;
             [library writeImageToSavedPhotosAlbum:[image CGImage]
                                       orientation:(ALAssetOrientation)[image imageOrientation]
                                   completionBlock:completionBlock];
         }
         else
             completionBlock(nil, error);
     }];
}

Finally, you simply need to tell your application when to perform -captureStillImage by placing a call to it in your -recordPressed: method. I chose to have it take the picture right at the beginning of the recording, though it could also go at the end. Your method will now look like so:

-(IBAction)recordPressed:(id)sender
{
    if (!recording)
    {
        [self.button setTitle:@"Stop" forState:UIControlStateNormal];
        recording = YES;
        NSURL *fileURL = [self tempFileURL];
        [self.output startRecordingToOutputFileURL:fileURL recordingDelegate:self];

        //////CAPTURE IMAGE
        [self captureStillImage];
    }
    else
    {
        [self.button setTitle:@"Record" forState:UIControlStateNormal];
        [self.output stopRecording];
        recording = NO;
    }
}

At this point, your application, if run on a device, will successfully record a video and take a still image at the beginning (or end, if you chose) of the recording. While this is very useful, it doesn't quite offer the full customizability you may desire. Next, you will use the class AVAssetImageGenerator, which can not only generate multiple images from a single video, but also create them at varied, specified times in the video.

First, you will need to link your binary with the Core Media framework, just as you have with all the other framework links. You will not need any import statements for this framework; it simply acts as a reference for the compiler.

Next, you will add a property to your class of type AVAssetImageGenerator, again making sure to synthesize it and later set it to nil correctly. Name this property imageGenerator.
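
The declaration in your header would read:

@property (strong, nonatomic) AVAssetImageGenerator *imageGenerator;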

You have already defined your delegate method to handle the successful saving of a video to a URL, so you will extend your -captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method to access that URL and generate your images from it. Your method should now look like so, with comments identifying the newly added lines:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {

    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
                                completionBlock:^(NSURL *assetURL, NSError *error)
     {
         if (error)
         {
             NSLog(@"Error writing");
         }
     }];

    ////////////START OF NEW STILL IMAGE CODE
    AVURLAsset *myAsset = [[AVURLAsset alloc] initWithURL:outputFileURL
                                                  options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                                                      forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];

    self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];
    self.imageGenerator.appliesPreferredTrackTransform = YES; //Makes sure images are correctly rotated.

    Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
    CMTime half = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
    NSArray *times = [NSArray arrayWithObjects:[NSValue valueWithCMTime:half], nil];
    [self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
                                              completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error)
    {
        // __bridge_transfer hands the copied CFString's ownership over to ARC.
        NSString *requestedTimeString = (__bridge_transfer NSString *)CMTimeCopyDescription(NULL, requestedTime);
        NSString *actualTimeString = (__bridge_transfer NSString *)CMTimeCopyDescription(NULL, actualTime);
        NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);

        if (result == AVAssetImageGeneratorSucceeded)
        {
            self.imageViewThumb2.image = [UIImage imageWithCGImage:image];
        }
        if (result == AVAssetImageGeneratorFailed)
        {
            NSLog(@"Failed with error: %@", [error localizedDescription]);
        }
        if (result == AVAssetImageGeneratorCancelled)
        {
            NSLog(@"Canceled");
        }
    }];
    ///////END OF STILL IMAGE CODE
}

Your application can now correctly capture still images from a video in two different ways, and you should be able to see a clear view of each method on your device, as in Figure 6–15.

Figure 6–15. Recording application with two different thumbnails

NOTE: You may notice that with the second method, creating the image takes significantly longer than with the first. This method is probably better suited to situations in which the user does not see the image in question until it has been successfully created.

Summary

As a developer, you have a great deal of choice when it comes to dealing with your device's camera. The pre-defined interfaces such as UIImagePickerController and UIVideoEditorController are incredibly useful and well designed, but Apple's AV Foundation framework allows for infinitely more possibilities, covering video, audio, and still images alike. Even a quick glance at the full documentation will reveal countless other functionalities not covered here, from device capabilities (such as the video camera's LED “torch”) to implementing your own “touch-to-focus” functionality. We live in a world where images, audio, and video fly around the globe in a matter of seconds, and as developers we must be able to design and create innovative solutions that fit in with our media-based community.
