Recording and storing video

In the previous section, you used AVFoundation to build a simple audio player app. You will now use AVFoundation again, except this time, instead of playing audio, you will record video and store it in the user's photo library. When you use AVFoundation to record a video feed, you do so through an AVCaptureSession object. A capture session is responsible for taking the input from one or more AVCaptureDeviceInput objects and writing it to an AVCaptureOutput subclass.
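Expressed as a bare-bones code sketch, with placeholder names rather than the implementation you will write in a moment, the relationship between these objects looks roughly like this:

import AVFoundation

// Schematic only: a capture session routes data from device inputs to outputs.
let session = AVCaptureSession()

if let camera = AVCaptureDevice.default(for: .video),
  let cameraInput = try? AVCaptureDeviceInput(device: camera) {
  session.addInput(cameraInput)               // an AVCaptureDeviceInput wraps a capture device
}

session.addOutput(AVCaptureMovieFileOutput()) // an AVCaptureOutput subclass receives the data
session.startRunning()                        // the session starts feeding inputs to outputs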

The following diagram shows the objects that are involved with recording media through an AVCaptureSession:

To get started on implementing the video recorder, make sure to import AVFoundation in RecordVideoViewController.swift. Also, add the following properties to the RecordVideoViewController class:

let videoCaptureSession = AVCaptureSession()
let videoOutput = AVCaptureMovieFileOutput()

var previewLayer: AVCaptureVideoPreviewLayer?

Most of the preceding properties should look familiar, because they also appeared in the diagram that outlined the components involved in an AVCaptureSession. Note that AVCaptureMovieFileOutput is a subclass of AVCaptureOutput that specializes in writing captured video to a movie file. The preview layer will be used to render the video feed at runtime and present it to the user, so they can see what they are capturing with the camera.

The next step is to set up the AVCaptureDevice objects for the camera and microphone and associate them with the AVCaptureSession. Add the following code to the viewDidLoad() method:

// 1
guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
  let microphone = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
  else { return }

// 2
do {
  let cameraInput = try AVCaptureDeviceInput(device: camera)
  let microphoneInput = try AVCaptureDeviceInput(device: microphone)

  videoCaptureSession.addInput(cameraInput)
  videoCaptureSession.addInput(microphoneInput)
  videoCaptureSession.addOutput(videoOutput)
} catch {
  print(error.localizedDescription)
}

The preceding code first obtains references to the camera and microphone that will be used to record the video and audio. The second step is to create the AVCaptureDeviceInput objects for the camera and microphone and associate them with the capture session. The video output is also added to the capture session. If you compare the diagram you saw earlier with the preceding code snippet, you will find that all four components are present in this implementation.
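Note that addInput(_:) and addOutput(_:) assume that the session can actually accept what you pass in; if it can't, the call fails at runtime. If you prefer a more defensive setup, a variant of the second step could ask the session first, as sketched below (this is optional and not required for the rest of this section):

do {
  let cameraInput = try AVCaptureDeviceInput(device: camera)
  let microphoneInput = try AVCaptureDeviceInput(device: microphone)

  // Only add inputs and outputs that the session says it can accept.
  if videoCaptureSession.canAddInput(cameraInput) {
    videoCaptureSession.addInput(cameraInput)
  }

  if videoCaptureSession.canAddInput(microphoneInput) {
    videoCaptureSession.addInput(microphoneInput)
  }

  if videoCaptureSession.canAddOutput(videoOutput) {
    videoCaptureSession.addOutput(videoOutput)
  }
} catch {
  print(error.localizedDescription)
}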

The next step is to provide the user with a view that shows the current camera feed, so they can see what they are recording. Add the following code to viewDidLoad() after the capture session setup code:

previewLayer = AVCaptureVideoPreviewLayer(session: videoCaptureSession)
previewLayer?.videoGravity = .resizeAspectFill
videoView.layer.addSublayer(previewLayer!)

videoCaptureSession.startRunning()

The preceding code sets up the preview layer and associates it with the video capture session. The preview layer will use the capture session directly to render the camera feed. The capture session is then started. This does not mean that a recording starts; it only means that the capture session begins processing the data from its camera and microphone inputs.
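One thing to be aware of is that startRunning() is a blocking call, and Apple recommends starting a capture session off the main thread so the interface stays responsive while the session spins up. A minimal sketch of that approach (the queue label is just an example) could look like this:

// startRunning() blocks the calling thread while the session starts up,
// so dispatch it to a background queue instead of calling it directly.
let sessionQueue = DispatchQueue(label: "video-capture-session")

sessionQueue.async { [weak self] in
  self?.videoCaptureSession.startRunning()
}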

The preview layer is added to the view at this point, but it doesn't cover the video view yet. Add the following implementation for viewDidLayoutSubviews() to RecordVideoViewController to set the preview layer's size and position, so it matches the size and position of videoView:

override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()

  previewLayer?.bounds.size = videoView.bounds.size
  previewLayer?.position = CGPoint(x: videoView.bounds.midX, y: videoView.bounds.midY)
}

Running the app now will already show you the camera feed. However, tapping the record button doesn't do anything yet, because the startStopRecording() method hasn't been implemented. Add the following implementation for this method:

@IBAction func startStopRecording() {
  // 1
  if videoOutput.isRecording {
    videoOutput.stopRecording()
  } else {
    // 2
    guard let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
      else { return }

    let fileUrl = path.appendingPathComponent("recording.mov")

    // 3
    try? FileManager.default.removeItem(at: fileUrl)

    // 4
    videoOutput.startRecording(to: fileUrl, recordingDelegate: self)
  }
}

Let's go over the preceding snippet step by step to see what exactly is going on:

  1. First, the isRecording property of the video output is checked. If a recording is currently active, it is stopped.
  2. If no recording is active, a URL is created for a file in the documents directory, where the video will be stored temporarily (see the alternative sketch after this list).
  3. Since the video output can't overwrite an existing file, FileManager is asked to remove any existing file at the temporary video file path.
  4. The video output starts recording to the temporary file. The view controller itself is passed as the delegate, so it is notified when the recording starts and when it finishes.
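Because the recording at this location is only an intermediate file, you could also point the video output at the system's temporary directory instead of the documents directory. A small alternative sketch for step two:

// Alternative for step 2: store the intermediate recording in the
// temporary directory rather than the documents directory.
let fileUrl = FileManager.default.temporaryDirectory
  .appendingPathComponent("recording.mov")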

Since RecordVideoViewController does not conform to AVCaptureFileOutputRecordingDelegate yet, add the following extension to implement this protocol:

extension RecordVideoViewController: AVCaptureFileOutputRecordingDelegate {
  // 1
  func fileOutput(_ output: AVCaptureFileOutput, didStartRecordingTo fileURL: URL, from connections: [AVCaptureConnection]) {
    startStopButton.setTitle("Stop Recording", for: .normal)
  }

  // 2
  func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    guard error == nil
      else { return }

    UISaveVideoAtPathToSavedPhotosAlbum(outputFileURL.path, self, #selector(didSaveVideo(at:withError:contextInfo:)), nil)
  }

  // 3
  @objc func didSaveVideo(at path: String, withError error: Error?, contextInfo: UnsafeRawPointer?) {
    guard error == nil
      else { return }

    presentAlertWithTitle("Success", message: "Video was saved successfully")
    startStopButton.setTitle("Start Recording", for: .normal)
  }
}

The preceding extension contains three methods. The first is a delegate method that is called when the video output has started recording; when that happens, the title of startStopButton is updated to reflect the current state. The second method is also a delegate method and is called when the recording has finished. If no errors occurred, the video is now stored at the temporary location you set up earlier, and UISaveVideoAtPathToSavedPhotosAlbum(_:_:_:_:) is called to save the video from that temporary location to the user's photo library. This function is very similar to the UIImageWriteToSavedPhotosAlbum(_:_:_:_:) function that you used to store a picture. The third and final method in the extension is called when the video has been saved to the user's photo library. If saving succeeded, an alert is shown and the title of startStopButton is updated again.
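The presentAlertWithTitle(_:message:) helper is assumed to exist on the view controller already; if your project doesn't define it, a minimal version along the following lines would do (your own implementation may differ):

extension RecordVideoViewController {
  // A minimal alert helper; replace this with your project's own version if it has one.
  func presentAlertWithTitle(_ title: String, message: String) {
    let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
    present(alert, animated: true, completion: nil)
  }
}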

You can now run the app and record some videos! Even though you have done a lot of manual work by implementing the video recording logic directly with an AVCaptureSession, most of the hard work is done inside of the AVFoundation framework. One final media-related feature to explore is applying visual filters to images using Core Image.
