In the last part of the project, we add a feature that detects when you're looking at an object (the cube) and highlights it with a different color.
This is accomplished with the help of the CardboardView.StereoRenderer interface method onNewFrame, which receives the current head transformation information on every frame.
Let's start with the most interesting part. We'll borrow the isLookingAtObject method from Google's Treasure Hunt demo. It checks whether the user is looking at an object by calculating where the object is in eye space, returning true if so. Add the following code to MainActivity:
```java
/**
 * Check if user is looking at object by calculating where the object is in eye-space.
 *
 * @return true if the user is looking at the object.
 */
private boolean isLookingAtObject(float[] modelView, float[] modelTransform) {
    float[] initVec = { 0, 0, 0, 1.0f };
    float[] objPositionVec = new float[4];

    // Convert object space to camera space. Use the headView from onNewFrame.
    Matrix.multiplyMM(modelView, 0, headView, 0, modelTransform, 0);
    Matrix.multiplyMV(objPositionVec, 0, modelView, 0, initVec, 0);

    float pitch = (float) Math.atan2(objPositionVec[1], -objPositionVec[2]);
    float yaw = (float) Math.atan2(objPositionVec[0], -objPositionVec[2]);

    return Math.abs(pitch) < PITCH_LIMIT && Math.abs(yaw) < YAW_LIMIT;
}
```
The method takes two arguments: the modelView and modelTransform transformation matrices of the object we want to test. It also references the headView class variable, which we'll set in onNewFrame.
A more precise way to do this would be to cast a ray from the camera into the scene in the direction the camera is looking and determine whether it intersects any geometry in the scene. That would be very accurate but also computationally expensive.
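For comparison, here is a minimal sketch of what that ray test could look like if the cube were approximated by a bounding sphere. This class, its name, and the sphere approximation are illustrative assumptions, not part of the project or the Treasure Hunt demo:

```java
// Sketch: ray-sphere intersection as a more precise (but costlier) gaze test.
// Assumes the object is approximated by a bounding sphere centered at the
// object's position in eye space. Hypothetical helper, not part of the project.
public class GazeRayTest {

    // In eye space the camera sits at the origin looking down -Z,
    // so the gaze ray has origin (0, 0, 0) and direction (0, 0, -1).
    public static boolean rayHitsSphere(float[] centerEyeSpace, float radius) {
        float cx = centerEyeSpace[0];
        float cy = centerEyeSpace[1];
        float cz = centerEyeSpace[2];

        // Project the sphere center onto the gaze direction (0, 0, -1): t = -cz.
        float t = -cz;
        if (t < 0) {
            return false; // sphere is behind the camera
        }

        // Closest point on the ray is (0, 0, -t); squared distance to the center.
        float dx = cx, dy = cy, dz = cz + t;
        float distSq = dx * dx + dy * dy + dz * dz;
        return distSq <= radius * radius;
    }

    public static void main(String[] args) {
        // Object 5 units straight ahead with radius 1: hit.
        System.out.println(rayHitsSphere(new float[]{0f, 0f, -5f}, 1f));
        // Object 2 units off to the right: miss.
        System.out.println(rayHitsSphere(new float[]{2f, 0f, -5f}, 1f));
    }
}
```

A full implementation would test the ray against every triangle (or bounding volume) in the scene, which is where the cost comes from.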
Instead, this function takes a simpler approach and doesn't even use the geometry of the object. Rather, it uses the object's view transform to determine how far the object is from the center of the screen, and tests whether the pitch and yaw angles of that vector fall within a narrow range (PITCH_LIMIT and YAW_LIMIT). Yeah, I know, people get PhDs to come up with this stuff!
Let's define the variables that we need as follows:
```java
// Viewing variables
private static final float YAW_LIMIT = 0.12f;
private static final float PITCH_LIMIT = 0.12f;

private float[] headView;
```
Allocate headView in onCreate:
```java
headView = new float[16];
```
Get the current headView value on each new frame. Add the following code to onNewFrame:
```java
headTransform.getHeadView(headView, 0);
```
Then, modify drawCube to check whether the user is looking at the cube and decide which colors to use:
```java
if (isLookingAtObject(cubeView, cubeTransform)) {
    GLES20.glVertexAttribPointer(cubeColorParam, 4, GLES20.GL_FLOAT, false, 0,
            cubeFoundColorsBuffer);
} else {
    GLES20.glVertexAttribPointer(cubeColorParam, 4, GLES20.GL_FLOAT, false, 0,
            cubeColorsBuffer);
}
```
That's it! Except for one (minor) detail: we need a second set of vertex colors for the highlight mode. We'll highlight the cube by drawing all of its faces in the same yellow color. A few changes are needed to make this happen.
In Cube, add the following RGBA values:
```java
public static final float[] CUBE_FOUND_COLORS_FACES = new float[] {
        // Same yellow for front, right, back, left, top, bottom faces
        1.0f, 0.65f, 0.0f, 1.0f,
        1.0f, 0.65f, 0.0f, 1.0f,
        1.0f, 0.65f, 0.0f, 1.0f,
        1.0f, 0.65f, 0.0f, 1.0f,
        1.0f, 0.65f, 0.0f, 1.0f,
        1.0f, 0.65f, 0.0f, 1.0f,
};
```
In MainActivity, add these variables:
```java
// Model variables
private static float cubeFoundColors[] = Cube.cubeFacesToArray(
        Cube.CUBE_FOUND_COLORS_FACES, 4);

// Rendering variables
private FloatBuffer cubeFoundColorsBuffer;
```
Add the following code to the prepareRenderingCube method:
```java
ByteBuffer bbFoundColors = ByteBuffer.allocateDirect(cubeFoundColors.length * 4);
bbFoundColors.order(ByteOrder.nativeOrder());
cubeFoundColorsBuffer = bbFoundColors.asFloatBuffer();
cubeFoundColorsBuffer.put(cubeFoundColors);
cubeFoundColorsBuffer.position(0);
```
Build and run it. When you look directly at the cube, it gets highlighted.