Before we go ahead and implement a user interface for this project, let's talk about how we want it to work.
The purpose of this project is to allow the user to select a photo from their phone's storage and view it in VR. The phone's photo collection will be presented in a scrollable grid of thumbnail images. If a photo is a normal 2D one, it'll be displayed on the virtual screen plane we just made. If it's a photosphere, we'll view it as a fully immersive 360-degree spherical projection.
A sketch of our proposed scene layout is shown in the following diagram. The user camera is centered at the origin, and the photosphere is represented by the gray circle, which surrounds the user. In front of the user (determined by the calibration at launch), there will be a 5 x 3 grid of thumbnail images from the phone's photo gallery. This will be a scrollable list. To the left of the user, there is the image projection screen.
Specifically, the UI will implement the following features:
Some of our UI considerations are unique to virtual reality. Most importantly, all of the user interface elements and controls are in world coordinate space. That is, they're integrated into the scene as geometric objects, with a position, rotation, and scale like any other component. This is in contrast with most mobile games, where the UI is implemented as a screen-space overlay.
Why? Because in VR, in order to create the stereoscopic effect, each eye has a separate viewpoint, offset by the interpupillary distance. This can be simulated in screen space by horizontally offsetting the position of screen-space objects so that they appear to have parallax (a technique we used in Chapter 4, Launcher Lobby). But when mixed with 3D geometry, camera, lighting, and rendering, that technique proves inadequate. A world space UI is required for an effective user experience and immersion.
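To make the per-eye offset concrete, here is a minimal sketch (not the SDK's actual implementation; the class and method names are hypothetical) of how each eye's position is derived from the head position by translating half the interpupillary distance along the camera's local right vector:

```java
// Hypothetical sketch: each eye's viewpoint is the head (center) position
// translated half the interpupillary distance (IPD) along the camera's
// local right vector. The scene is then rendered once per eye.
public class StereoCamera {
    static final float IPD = 0.064f; // ~64 mm, a typical adult IPD (assumed value)

    // eye = -1 for the left eye, +1 for the right eye.
    // `right` is the camera's normalized local x axis in world space.
    public static float[] eyePosition(float[] headPos, float[] right, int eye) {
        float half = eye * IPD / 2f;
        return new float[] {
            headPos[0] + right[0] * half,
            headPos[1] + right[1] * half,
            headPos[2] + right[2] * half
        };
    }
}
```

A screen-space overlay can fake this offset for flat elements, but only a true world-space UI stays consistent with the rest of the rendered geometry as the head rotates.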
Another feature that's unique to VR is gaze-based selection. In this case, where you look will highlight an image thumbnail, and then you click on the Cardboard trigger to open the image.
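The gaze test itself can be sketched as an angle check: an object counts as "looked at" when the angle between the camera's forward vector and the direction from the camera to the object falls below a small threshold. The class and method names below are hypothetical, not part of our project's code:

```java
// Hypothetical sketch of gaze-based selection via an angle threshold.
public class GazeSelector {
    // Returns true if targetPos lies within thresholdDegrees of the
    // camera's (normalized) forward direction.
    public static boolean isLookedAt(float[] camPos, float[] camForward,
                                     float[] targetPos, float thresholdDegrees) {
        // Direction from the camera to the target
        float dx = targetPos[0] - camPos[0];
        float dy = targetPos[1] - camPos[1];
        float dz = targetPos[2] - camPos[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (len == 0f) return false;
        // cos(angle) via the dot product of the normalized vectors
        float dot = (camForward[0] * dx + camForward[1] * dy + camForward[2] * dz) / len;
        dot = Math.max(-1f, Math.min(1f, dot)); // guard acos against rounding
        float angle = (float) Math.toDegrees(Math.acos(dot));
        return angle < thresholdDegrees;
    }
}
```

Each frame, the thumbnail closest to the gaze direction (within the threshold) would be highlighted, and the Cardboard trigger click then opens that image.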
Lastly, as mentioned earlier, since we're working in world space and making selections based on where we're looking, the layout of our 3D space is an important consideration. Remember that we're in VR and not constrained by rectangular edges of a phone screen. Objects in the scene can be placed all around you. On the other hand, you don't want users twisting and turning all the time (unless that's an intended part of the experience). We'll pay attention to comfort zones to place our UI controls and image screen.
Furthermore, Google and researchers elsewhere have begun to develop best practices for user interface design, including the optimal distance for menus and UI controls from the camera: approximately 5 to 15 feet (1.5 to 5 meters). This distance is close enough to enjoy a 3D parallax effect, but not so close that you have to look cross-eyed to focus on the objects.
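One simple way to honor that guideline in code is to clamp any desired UI distance into the recommended range before positioning a panel. This helper is purely illustrative (the class, method, and constant names are our own, not from any SDK):

```java
// Hypothetical helper: keep UI elements within the ~1.5-5 m comfort range
// suggested for world-space menus and controls.
public class UiPlacement {
    static final float MIN_DIST = 1.5f; // meters; near limit of comfortable focus
    static final float MAX_DIST = 5.0f; // meters; beyond this, parallax fades

    public static float clampToComfortZone(float desiredDistance) {
        return Math.max(MIN_DIST, Math.min(MAX_DIST, desiredDistance));
    }
}
```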
Okay, let's begin with the UI implementation.
First, let's move the screen from in front of the user to the side, that is, rotate it 90 degrees to the left. Our transform math applies the position after the rotation, so we now offset the screen along the x axis. Modify the setupScreen method of the MainActivity class, as follows:
void setupScreen() {
    Transform screenRoot = new Transform()
            .setLocalScale(4, 4, 1)
            .setLocalRotation(0, -90, 0)
            .setLocalPosition(-5, 0, 0);
    ...