Sometimes the basic camera intents aren't enough; we may be creating an augmented reality game, for example, and want to process the images as the camera streams them.
To create a camera app, or some other app that displays a live preview from the camera hardware, we need to set up several things. First, we must request the Camera permission:
```csharp
[assembly: UsesPermission(Manifest.Permission.Camera)]
```
Similar to playing a video, we must implement the ISurfaceHolderCallback interface, which will manage the live preview surface:

```csharp
public class MainActivity : Activity, ISurfaceHolderCallback
{
    private ISurfaceHolder surfaceHolder;

    public void SurfaceChanged(
        ISurfaceHolder holder, Format format, int width, int height)
    {
    }

    public void SurfaceCreated(ISurfaceHolder holder)
    {
        surfaceHolder = holder;
    }

    public void SurfaceDestroyed(ISurfaceHolder holder)
    {
        surfaceHolder = null;
    }
}
```
Then, we add the callback to the surface using the SurfaceView instance's Holder property:

```csharp
var holder = surfaceView.Holder;
holder.SetType(SurfaceType.PushBuffers);
holder.AddCallback(this);
```
Now that we have our surface for the preview, we can get hold of the cameras and start setting one of them up:
```csharp
var cameraCount = Camera.NumberOfCameras;
var cameras = new Camera.CameraInfo[cameraCount];
var backIndex = 0;
for (int index = 0; index < cameraCount; index++)
{
    cameras[index] = new Camera.CameraInfo();
    Camera.GetCameraInfo(index, cameras[index]);
    if (cameras[index].Facing == CameraFacing.Back)
    {
        backIndex = index;
    }
}
```
Once we know which camera we want, we can open it. As another app may already be using the camera, we wrap the call in a try-catch block, and we can open it on a background thread:

```csharp
await Task.Run(() =>
{
    try
    {
        camera = Camera.Open(backIndex);
    }
    catch (Exception ex)
    {
        // handle the camera being used by another app
    }
});
```
Next, we get the preview sizes that the camera supports and select one. The list is in descending order of size, so the first element is the largest:

```csharp
var parameters = camera.GetParameters();
var previewSizes = parameters.SupportedPreviewSizes;

var best = previewSizes[0];
parameters.SetPreviewSize(best.Width, best.Height);
camera.SetParameters(parameters);
```
Now we can set up the camera to preview onto our new surface either after the camera is opened or after the surface is created:
```csharp
var scale = Math.Min(
    container.Width / (float)best.Width,
    container.Height / (float)best.Height);
var width = best.Width * scale;
var height = best.Height * scale;
surfaceHolder.SetFixedSize((int)width, (int)height);

camera.SetPreviewDisplay(surfaceHolder);
camera.StartPreview();
```
The next step is to manage the camera after the destruction of either the surface or the actual activity:
We stop the preview in the SurfaceDestroyed() method, as we will be starting it again when a new surface is created:

```csharp
camera.StopPreview();
```

When the activity is paused, we must also release the camera so that other apps can use it:

```csharp
camera.StopPreview();
camera.Release();
camera = null;
```
To implement the logic to take a photo, we will need to implement two Camera
interfaces:
The first is the Camera.IPictureCallback interface:

```csharp
public class MainActivity : Camera.IPictureCallback
{
    public void OnPictureTaken(byte[] data, Camera camera)
    {
    }
}
```
In the OnPictureTaken() method, we can save the bytes to a file or process them in some way. As taking a picture stops the preview, we restart it afterwards:

```csharp
File.WriteAllBytes(imagePath, data);
camera.StartPreview();
```
The second is the Camera.IShutterCallback interface:

```csharp
public class MainActivity : Camera.IShutterCallback
{
    public void OnShutter()
    {
        // an empty method plays the default shutter sound
    }
}
```
To take the photo, we invoke the TakePicture() method on the camera, the first parameter being the shutter callback and the last being the picture-taken callback. If we pass a null value for the shutter callback, no sound will be played:

```csharp
camera.TakePicture(this, null, this);
```
To record a video, we use a MediaRecorder
instance and set various parameters to indicate what camera and audio to record.
As we will be recording audio as well as video, we must request the RecordAudio permission:

```csharp
[assembly: UsesPermission(Manifest.Permission.RecordAudio)]
```
Before the recorder can access the camera, we must unlock it:

```csharp
camera.Unlock();
```
Then, we create the MediaRecorder instance and set the audio and video source to be the camera:

```csharp
recorder = new MediaRecorder();
recorder.SetCamera(camera);
recorder.SetAudioSource(AudioSource.Camcorder);
recorder.SetVideoSource(VideoSource.Camera);
recorder.SetProfile(
    CamcorderProfile.Get(CamcorderQuality.High));
```
Next, we set the output file and the preview surface, and then we prepare and start the recorder:

```csharp
recorder.SetOutputFile("video.mp4");
recorder.SetPreviewDisplay(surfaceHolder.Surface);

recorder.Prepare();
recorder.Start();
```
To stop the recording, we invoke the Stop() method and then clean up the recorder:

```csharp
recorder.Stop();
recorder.Reset();
recorder.Release();
recorder = null;
```

Finally, we must lock the camera again:

```csharp
camera.Reconnect();
```
If we want to take a photo, we can use the default camera app, but it may not provide the functionality that we need. We may be creating an actual camera app, an app that provides augmented reality with real-time image effects or image overlays, or an app that performs image analysis, such as a QR code reader or face detection.
Using the default camera app prevents us from modifying or overlaying the preview image in real-time, as we are only presented with the final captured image. If we need to access the real-time camera stream, we can request the data directly from the camera and display it on a surface.
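For example, to access the raw preview frames directly, we can attach a preview callback to the open camera. This is only a sketch; ProcessFrame is a hypothetical method standing in for whatever analysis or effect we want to apply:

```csharp
public class MainActivity : Activity, Camera.IPreviewCallback
{
    // receives each preview frame; by default the data
    // is in the NV21 image format
    public void OnPreviewFrame(byte[] data, Camera camera)
    {
        // ProcessFrame is a hypothetical method that analyzes
        // or transforms the frame data
        ProcessFrame(data);
    }
}
```

The callback is then attached to the camera with camera.SetPreviewCallback(this) before starting the preview.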
As we are accessing the camera hardware directly, we need to request the Camera
permission. If we are going to record videos, we need to request the additional RecordAudio
permission to record the audio as well.
Once we have the permission to use the camera, we need to set up a surface to render the camera's preview. For this, we can use a SurfaceView
instance. To set up the surface, we can request the ISurfaceHolder
instance using the Holder
property on the SurfaceView
instance. Like when setting up to play a video, we need to add a callback that we use to monitor the life of the surface. The callback implements the ISurfaceHolderCallback
interface and provides methods that relate to the life of a surface. We will stop the preview if the surface is destroyed, and start it again when it is created.
After the surface is set up, we need to set up the camera that we are going to use. Before we can use a camera, we need to get hold of the cameras on the device. To get the cameras, we use the Camera
type. We can iterate over the cameras on the device using the NumberOfCameras
property and the GetCameraInfo()
method. The NumberOfCameras
property returns the number of cameras available on the device. The GetCameraInfo()
method allows us to obtain information about a particular camera.
We use the GetCameraInfo()
method with a Camera.CameraInfo
object instance. To obtain information about a single camera, we pass the index of the camera we want the information on and a CameraInfo
object that will be populated with the information. To find a specific camera, we can use the Facing
property, which can be either Back
or Front
.
In order to begin using the camera, we have to open it. To do this, we pass the index of the camera we wish to use to the Open()
method. This gives us exclusive access to the camera hardware, but if another app has opened the camera, this method will throw a RuntimeException
exception. As the camera can only be opened once, as soon as our activity is paused, we must release the camera using the Release()
method. If we do not release the camera, no other app will be able to use it.
Now that we have the camera open, we need to start the preview on our surface. Before we start the preview, we have to resize the surface to the aspect ratio of the camera preview to avoid the preview stretching. We also have to specify the desired size of the preview to the camera; we cannot use an arbitrary size, only one of the sizes supported by the camera.
To get the supported sizes, we first get the camera parameters using the GetParameters()
method. We then query the Parameters
object for the sizes using the SupportedPreviewSizes
property. This will provide a list of supported sizes, in descending order of size, which we use to select a specific size. The Parameters
object has a SetPreviewSize()
method, which allows us to specify a width and height of a preview size from the list. To actually set the preview size, we assign the Parameters
instance back to the camera using the SetParameters()
method.
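Rather than simply taking the first, and largest, supported size, we may want to pick the size whose aspect ratio best matches our display area. This is a sketch, assuming the parameters, camera, and container objects from the steps above:

```csharp
// select the supported size whose aspect ratio is closest
// to that of the container view
var targetRatio = container.Width / (float)container.Height;
Camera.Size best = null;
var smallestDiff = float.MaxValue;
foreach (var size in parameters.SupportedPreviewSizes)
{
    var diff = Math.Abs(size.Width / (float)size.Height - targetRatio);
    if (diff < smallestDiff)
    {
        smallestDiff = diff;
        best = size;
    }
}
parameters.SetPreviewSize(best.Width, best.Height);
camera.SetParameters(parameters);
```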
When the surface becomes available, or when the camera is opened, we have to let the camera know that it should preview on the surface. First, we must ensure that the surface aspect ratio is the same as that of the camera preview to avoid the preview stretching. Then, we pass the surface to the SetPreviewDisplay()
method of the open camera. Finally, we start the preview using the StartPreview()
method.
As the preview only works when there is a surface, as soon as it is destroyed, we must stop the preview. We do this using the StopPreview()
method. When the activity is paused, we need to release the camera as it is a shared resource that we can only hold when our app is in the foreground. We can open the camera again once our app has resumed.
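One way to tie this to the activity lifecycle is to release the camera in OnPause() and open it again in OnResume(). This is a sketch, assuming the camera field from the steps above; OpenCameraAndStartPreview is a hypothetical method that repeats the open, configure, and StartPreview steps:

```csharp
protected override void OnPause()
{
    base.OnPause();
    if (camera != null)
    {
        // stop the preview and give up the shared camera resource
        camera.StopPreview();
        camera.Release();
        camera = null;
    }
}

protected override void OnResume()
{
    base.OnResume();
    // re-open the camera now that we are in the foreground again
    OpenCameraAndStartPreview();
}
```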
If we want to take a photo, all we need is an instance that implements the Camera.IPictureCallback
interface. We then pass this instance as the last parameter of the TakePicture()
method. This is an asynchronous method that will return immediately, but the callback method will only be invoked once the image has been captured. The callback method, OnPictureTaken()
, gives us access to a byte array that represents the image. We can then process the image or save it to disk.
The image data that is provided, when using the last parameter, is the compressed JPEG data. If we use the second parameter, we will obtain the raw image data from the camera sensor. We can use either one, or both, of the picture callbacks to obtain the image data. If we do not wish to provide a callback for a particular type, we can pass null
instead.
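For instance, if we have two hypothetical Camera.IPictureCallback instances, rawCallback and jpegCallback, we can request both sets of data, or skip one by passing null:

```csharp
// shutter, raw, and JPEG callbacks, in that order
camera.TakePicture(this, rawCallback, jpegCallback);

// no shutter sound, no raw data, JPEG only
camera.TakePicture(null, null, jpegCallback);
```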
By default, the capture process does not play a shutter sound. To get a sound to play, we specify a callback that will fire on the shutter operation. This callback must implement the Camera.IShutterCallback
interface. We can leave the callback method, OnShutter()
, empty if we wish to play the default camera shutter sound. This method is not guaranteed to be called exactly when the photo is captured, but as soon as possible after the capture. Typically, the shutter sound will play after the capture was triggered, but before the image data is available. We pass the shutter callback instance as the first parameter of the TakePicture()
method.
If we want to record a video, we use a MediaRecorder
instance. Before we can record, we have to ensure that we have the RecordAudio
permission as well as the Camera
permission. Then, in order for the recorder to be able to access the camera, we have to unlock it. We do this using the Unlock()
method. This is required as the camera is a shared resource, and once opened, cannot be accessed by anything else.
Once the camera is unlocked, we can set up the MediaRecorder
instance. First, we set the camera that the recorder will use, then we set the audio and video source. The audio source must be Camcorder
and the video source must be Camera
. Next, we specify a profile to use, which can be High
or Low
. This is obtained using the Get()
method on the CamcorderProfile
type. Then, we set the output file location and the preview surface. The preview surface must be the same surface that was specified when setting the preview surface on the camera. Finally, we prepare the recorder and start it.
As soon as we want to stop recording, we invoke the Stop()
method. Then, we must reset the recorder using the Reset()
method. Next, we must release the resources held by the recorder using the Release()
method. Finally, we must lock the camera again using the Reconnect()
method. This prevents access to the camera from other apps while we are using it.
With Android version 5.0, an entire new namespace was created for working with cameras. This namespace is Android.Hardware.Camera2
. Cameras are accessed using the CameraManager
and CameraDevice
types. Photos are captured using a CameraCaptureSession
instance.
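To illustrate, opening a camera with the new API looks something like the following sketch. Here, CameraStateCallback is a hypothetical subclass of CameraDevice.StateCallback that we would define ourselves; a full capture session requires more setup than is shown:

```csharp
// obtain the system camera service
var manager = (CameraManager)GetSystemService(Context.CameraService);

// pick the first camera on the device
var cameraId = manager.GetCameraIdList()[0];

// the callback receives the opened CameraDevice asynchronously
manager.OpenCamera(cameraId, new CameraStateCallback(), null);
```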
Although this new API is much more powerful, it is only available on devices running Android version 5.0 and above. The old API is deprecated, but it is still supported and fully functional, which is useful if we need to support older devices.