This chapter completes the Augmented Reality (AR) project we started in the previous chapter. We'll extend the project by adding custom logic to detect a horizontal or vertical surface and spawn the turtle on the detected surface by tapping the screen.
An indispensable tool for creating an AR game is the ability to detect features in the environment. A feature could be anything from a face to a specific image or QR code. In this chapter, we will leverage the tools built into AR Foundation to detect surfaces, which Unity calls planes. We'll take advantage of prebuilt components and GameObjects to visualize the planes, which makes debugging easier. We will be able to see in real time when Unity has detected a plane, the size of the detected plane, and its location in the world.
We'll then write custom logic that will extract the plane data generated by Unity and make it accessible to our scripts. We do this by ray casting from the device into the physical world.
Once we've detected a plane and extracted the plane data, we'll add a visual marker for the player. The marker will only appear on screen when the device is pointing at a valid plane. And when it's on screen, it will be placed at the correct position and rotation of the real-world surface. With that done, we'll move on to spawning the turtle that we created in the previous chapter at the marker's location.
We created this project using Unity's new Universal Render Pipeline (URP), and we'll build on that in this chapter by adding post-processing effects using the URP. The URP has been designed to provide control over how Unity renders a frame without the need to write any code. We've previously touched on adding post-processing effects in Chapter 11, Entering Virtual Reality. However, the process differs slightly using the URP, so once you've completed this chapter, you'll be able to add these effects whether you are using Unity's built-in render pipeline or the URP.
In this chapter, we will cover the following topics:
By the end of this chapter, you will have created an AR project in which you can spawn objects onto surfaces in the real world.
This chapter assumes that you have not only completed the projects from the previous chapters but also have a good, basic knowledge of C# scripting generally, though not necessarily in Unity. This project is a direct continuation of the project started in Chapter 13, Creating an Augmented Reality Game Using AR Foundation.
The starting project and assets can be found in the book's companion files in the Chapter14/Start folder. You can start here and follow along with this chapter if you don't have your own project already. The end project can be found in the Chapter14/End folder.
An essential part of programming an AR game is adding the ability to detect features in the environment. These features can be objects, faces, images, or in our case, planes. A plane is any flat surface with a specific dimension and boundary points. Once we've detected a plane, we can use its details to spawn a turtle at the correct position and with the proper rotation.
We'll detect planes using ray casting and a custom script. However, before we write the script, we'll first add a Plane Manager to the scene.
A Plane Manager will generate virtual objects that represent the planes in our environment. We'll use these virtual objects as a guide to where we can spawn the turtle. It will also provide useful debug information by drawing a boundary around any planes it detects. Using this feature, we can see in real time when Unity has detected a plane:
Suppose you select the AR Default Plane object and view its data in the Inspector. In that case, you'll notice it comes with several components already attached, including the AR Plane and AR Plane Mesh Visualizer scripts. The AR Plane represents the plane and includes useful data on it, including the boundary, center point, and alignment. The AR Plane Mesh Visualizer generates a mesh for each plane. It is this component that will be used to create and update the visuals for each plane. We will see this in action shortly:
Next, we'll assign the prefab to a Plane Manager, so it is generated during runtime:
The AR Plane Manager will generate GameObjects for each detected plane. The GameObject it generates is defined by this field.
You'll notice black lines are generated as you move around your environment. These lines represent the boundaries of planes detected by Unity. Each boundary is one AR Plane GameObject that has been spawned into the environment by the AR Plane Manager. As you move the device around your environment, the bounds should expand.
Important Note
The boundary points of a plane are always convex.
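To see what convexity means in practice, here is a standalone sketch (the BoundaryUtils class and IsConvex helper are our own illustrative names, not part of AR Foundation; it assumes the boundary points are given in the plane's own 2D coordinate space). A polygon is convex when every consecutive pair of edges turns in the same direction:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class BoundaryUtils
{
    public static bool IsConvex(IReadOnlyList<Vector2> boundary)
    {
        int sign = 0;
        for (int i = 0; i < boundary.Count; i++)
        {
            // Two consecutive edges of the polygon.
            Vector2 a = boundary[(i + 1) % boundary.Count] - boundary[i];
            Vector2 b = boundary[(i + 2) % boundary.Count] - boundary[(i + 1) % boundary.Count];

            // The sign of the 2D cross product tells us which way the corner turns.
            float cross = a.x * b.y - a.y * b.x;
            if (cross == 0f) continue; // collinear edges don't affect convexity

            int currentSign = cross > 0f ? 1 : -1;
            if (sign == 0) sign = currentSign;
            else if (sign != currentSign) return false; // turn direction flipped
        }
        return true;
    }
}
```

Because the boundaries Unity generates are always convex, you never need a check like this in the project; the sketch is only to make the note concrete.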
Without writing a single line of code, we can now detect planes in the environment and visualize them using Unity GameObjects. Great! Next, we need to retrieve the data associated with the detected plane, which will eventually be used to spawn the turtle.
In the last chapter, we spawned a turtle in the world based on the device's position when the game started. Ideally, we would have control over where the turtle is placed. Instead of having it spawn when the game starts based on the phone's position, we can generate the object dynamically at a location we specify. For this, we need to retrieve the plane data associated with the surface shown on the player's screen, which we'll do with a custom script:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.Events;
public class PlaneData
{
public Vector3 Position { get; set; }
public Quaternion Rotation { get; set; }
}
public class FindPlane : MonoBehaviour
{
public UnityAction<PlaneData> OnValidPlaneFound;
public UnityAction OnValidPlaneNotFound;
private ARRaycastManager RaycastManager;
private readonly Vector3 ViewportCenter = new Vector3(0.5f, 0.5f);
}
The following points summarize the preceding code snippet:
We create two classes: PlaneData and FindPlane.
PlaneData is the structure we'll use to store the Position and Rotation of the plane. We'll use this shortly.
To start the FindPlane class, we've added four member variables.
OnValidPlaneFound is a UnityAction that is invoked whenever a plane has been found. We can write classes that subscribe to this event and then whenever a plane is found, we will receive a PlaneData object. Subscribing to actions will be explained in detail when we come to spawn objects.
OnValidPlaneNotFound will be raised on every frame in which a plane hasn't been found.
The RaycastManager of type ARRaycastManager is used in a very similar way to how we've used raycasts in previous chapters; however, instead of casting rays in the virtual world, the ARRaycastManager can detect features in the real world, including planes. This is perfect for our needs. This class is part of the ARFoundation package that we imported in the previous chapter.
ViewportCenter is used to find the center of the screen. It's from this point that the ray will originate.
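As an aside, viewport coordinates are resolution independent: (0, 0) is the bottom-left of the screen and (1, 1) is the top-right, so (0.5f, 0.5f) always refers to the exact center regardless of the device. A quick sketch (the display resolution here is a hypothetical example):

```csharp
// Convert the resolution-independent viewport center into pixel coordinates.
Vector3 center = Camera.main.ViewportToScreenPoint(new Vector3(0.5f, 0.5f));
// On a 1080 x 1920 portrait display, center would be (540, 960, 0).
```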
public class FindPlane : MonoBehaviour
{
void Awake()
{
RaycastManager = GetComponent<ARRaycastManager>();
}
void Update()
{
IList<ARRaycastHit> hits = GetPlaneHits();
UpdateSubscribers(hits);
}
}
The Awake function initializes the RaycastManager and the Update function is where the action happens. It calls GetPlaneHits, which returns a collection of ARRaycastHit. This collection is then passed to UpdateSubscribers.
Tip
The Awake function is called during initialization and before other event functions such as Start and OnEnable. The Update function is called every frame before the LateUpdate event and any Coroutine updates. For more information on the order of events, see https://docs.unity3d.com/Manual/ExecutionOrder.html.
public class FindPlane : MonoBehaviour
{
…
private List<ARRaycastHit> GetPlaneHits()
{
Vector3 screenCenter = Camera.main.ViewportToScreenPoint(ViewportCenter);
List<ARRaycastHit> hits = new List<ARRaycastHit>();
RaycastManager.Raycast(screenCenter, hits, UnityEngine.XR.ARSubsystems.TrackableType.PlaneWithinPolygon);
return hits;
}
}
The following points summarize the preceding code snippet:
Tip
Camera.main will return a reference to the first enabled camera that has the MainCamera tag.
We then create a new List of ARRaycastHit. This collection will store the results of the raycast. An ARRaycastHit contains useful data about the raycast, including the hit point's position, which will be very useful.
We pass this list as a reference to RaycastManager.Raycast. This function performs the raycast and fills the hits collection with any raycast hits. If there were no hits, the collection would be empty. The third parameter of RaycastManager.Raycast is TrackableType, which lets Unity know the type of objects we are interested in. Passing the PlaneWithinPolygon mask here means the ray needs to intersect within a polygon generated by the Plane Manager we added in the Adding a plane manager section. This will become clear when we come to draw a placement marker later in this chapter, as the placement marker will only be drawn within the bounds of a plane.
We could pass a different value as the TrackableType, such as Face or FeaturePoint, to detect different objects. For all TrackableType varieties, see https://docs.unity3d.com/Packages/[email protected]/api/UnityEngine.XR.ARSubsystems.TrackableType.html.
The collection of hits is then returned from the function.
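TrackableType is a flags enum, so multiple types can be combined with the bitwise OR operator. The following is a hypothetical variation we won't use in this project, shown only to illustrate the idea:

```csharp
// Hypothetical variation: accept hits on feature points as well as planes.
// TrackableType values can be combined with | because it is a flags enum.
RaycastManager.Raycast(screenCenter, hits,
    UnityEngine.XR.ARSubsystems.TrackableType.PlaneWithinPolygon |
    UnityEngine.XR.ARSubsystems.TrackableType.FeaturePoint);
```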
public class FindPlane : MonoBehaviour
{
…
private void UpdateSubscribers(IList<ARRaycastHit> hits)
{
bool validPositionFound = hits.Count > 0;
if (validPositionFound)
{
PlaneData Plane = new PlaneData
{
Position = hits[0].pose.position,
Rotation = hits[0].pose.rotation
};
OnValidPlaneFound?.Invoke(Plane);
}
else
{
OnValidPlaneNotFound?.Invoke();
}
}
}
The following points summarize the preceding code snippet:
If the hits collection size is greater than 0, we know that the RaycastManager has found a valid plane, so we set validPositionFound to true.
If validPositionFound is true, we create a new PlaneData using the position and rotation of the pose in the first ARRaycastHit contained in the hits collection. When the collection of ARRaycastHit is populated by the RaycastManager.Raycast function, it is sorted so that the first element will contain information on the hit point closest to the raycast's origin, which in this case is the player's device. Once the PlaneData has been created, we pass it to the OnValidPlaneFound action. This will alert all subscribers that we've found a plane.
If validPositionFound is false, we invoke OnValidPlaneNotFound. This alerts all subscribers that a plane was not found in this frame.
Tip
The ?. after OnValidPlaneFound and OnValidPlaneNotFound is called a null-conditional operator. If the Unity action is null, the Invoke call is simply skipped, rather than causing a runtime NullReferenceException. For our purposes, it is similar to writing the following:
if(OnValidPlaneFound != null) OnValidPlaneFound.Invoke(Plane);
For more information on this operator, see https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-.
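The behavior is easy to demonstrate outside Unity with plain C# delegates. This standalone sketch uses System.Action, but UnityAction behaves the same way here; Publisher and OnSomethingHappened are illustrative names of our own:

```csharp
using System;

public class Publisher
{
    public Action OnSomethingHappened; // null until a subscriber is added

    public void Raise()
    {
        // With ?., a null delegate is skipped silently.
        OnSomethingHappened?.Invoke();

        // Calling Invoke directly on a null delegate would throw
        // a NullReferenceException when there are no subscribers:
        // OnSomethingHappened.Invoke();
    }
}
```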
Now that we have the code that detects planes and alerts subscribers, let's add it to the scene:
This section has covered an important topic in AR. Feature detection is used in many AR projects, and by reaching this point, you have learned how to not only detect surfaces in the real world, but also how to extract useful information about the detection – information that we will be using shortly to spawn objects. We have also added and configured a Plane Manager. This manager object will help us interact with the AR environment and provides useful debugging information by drawing the boundaries of any planes it discovers. Creating the manager involved adding a new Plane Manager object to the scene and assigning it a newly created plane prefab. This prefab included a component used to visualize the plane and an AR Plane component that stores useful data about the surface, including its size. With the Plane Manager correctly configured, we then wrote a custom script that uses an ARRaycastManager to detect surfaces in the real world.
We now have a reliable method of detecting a plane. When we run our game, FindPlane will attempt to detect a plane every frame. At the moment, even if a plane is found, nothing happens with that information. We've created two actions, OnValidPlaneFound and OnValidPlaneNotFound, but we haven't written any class that subscribes to those events. That is about to change as we write the functionality to place a visual marker whenever a plane is detected.
In this section, we'll design a visual cue that the player can use to determine when and where they can place an object. The marker will use the logic we created in the Detecting planes section to determine when a valid plane was found. There are two steps to adding the marker to our scene. First, we'll design the marker in Unity, and then we'll add the logic for placing the marker on valid surfaces in the real world.
To add a placement marker to the game, we first need to design it. In our project, the marker will be a simple circle platform. We'll use many of Unity's built-in tools to create the marker, and the only external resource we will require is a simple circle texture. Start by creating a new GameObject in our scene:
With the object created, we can modify its appearance by creating a custom Material:
Now we can update the material with the new texture. With the Marker Material selected, do the following in the Inspector:
The eagle-eyed among you may have noticed that the shader type in Figure 14.12 is Universal Render Pipeline/Lit. As we've set up our project to use the URP, all materials we create in the project will default to using the URP as their shader. This saves us time as we won't need to update them as we had to do with the turtle's material.
While we've kept the marker purposefully minimalist, feel free to experiment with different images to change the marker's appearance to suit you. You only need to import a different image and assign it to the material.
With the material complete, let's assign it to the Quad object:
You'll notice the appearance of the marker will change to a circle, as shown in Figure 14.13.
Tip
You can also assign the material by dragging it from the Project panel to the Quad object in the Hierarchy.
That's it for the marker's visuals. Feel free to experiment with different textures for the Marker Material to make it fit with your style, before moving on to the next section, where we place the marker in the world.
With that set up, we can write the class that will display the object on the plane. We already have the mechanism to identify valid planes in the environment and extract the position and rotation. We'll take advantage of this code by subscribing to the OnValidPlaneFound and OnValidPlaneNotFound events on FindPlane. To do this, we need to create a new script:
public class MoveObjectToPlane : MonoBehaviour
{
private FindPlane PlaneFinder;
void Awake()
{
PlaneFinder = FindObjectOfType<FindPlane>();
}
void Start()
{
DisableObject();
PlaneFinder.OnValidPlaneFound += UpdateTransform;
PlaneFinder.OnValidPlaneNotFound += DisableObject;
}
}
The following points summarize the preceding code snippet:
In the class, we store a reference to FindPlane.
We disable the object when the scene starts by calling DisableObject. The marker will be re-enabled when a valid plane has been found.
In the Start function, we subscribe to the OnValidPlaneFound and OnValidPlaneNotFound events by passing a reference to the UpdateTransform and DisableObject functions, respectively. Using += means that we don't overwrite any existing data, so if any other classes have subscribed, we won't remove their subscriptions.
Whenever FindPlane calls the OnValidPlaneFound event, it will call this object's UpdateTransform function, passing in a PlaneData object. We'll write this function shortly, but it will be responsible for moving the placement marker to the plane.
Whenever FindPlane calls the OnValidPlaneNotFound event, it will call this object's DisableObject function. This function will disable the object, so if no plane is found, the placement marker will not be shown.
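The difference between += and plain assignment is worth seeing in isolation. The following standalone sketch uses System.Action, but UnityAction, which is also a multicast delegate, behaves the same way:

```csharp
using System;

public static class MulticastDemo
{
    public static void Main()
    {
        Action onFound = () => Console.WriteLine("first subscriber");

        // += adds a second subscriber without removing the first...
        onFound += () => Console.WriteLine("second subscriber");
        onFound(); // invokes both, in subscription order

        // ...whereas plain assignment replaces the whole invocation list.
        onFound = () => Console.WriteLine("only subscriber");
        onFound(); // invokes just the one
    }
}
```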
public class MoveObjectToPlane : MonoBehaviour
{
…
void OnDestroy()
{
PlaneFinder.OnValidPlaneFound -= UpdateTransform;
PlaneFinder.OnValidPlaneNotFound -= DisableObject;
}
}
In the OnDestroy function, we remove the references to this object's function to ensure they are not called on a dead object. OnDestroy is called whenever this component or the object it belongs to is in the process of being removed from the scene (or the scene itself has been destroyed).
public class MoveObjectToPlane : MonoBehaviour
{
…
private void UpdateTransform(PlaneData Plane)
{
gameObject.SetActive(true);
transform.SetPositionAndRotation(Plane.Position, Plane.Rotation);
}
private void DisableObject()
{
gameObject.SetActive(false);
}
}
Both functions are relatively simple, as follows:
UpdateTransform enables the GameObject in case it was disabled previously. It also sets the position and rotation equal to that of the plane.
DisableObject disables the GameObject (somewhat unsurprisingly). We do this to prevent the marker from appearing on screen when there is no valid plane.
That's it for the code. Now, we can add the new script to the Placement Marker:
As you can see from Figure 14.16, if you run the game now, the marker will stick to any surface found in the center of the screen:
The black lines define the different planes. You can see the planes' boundaries expand as you move around the environment.
Important Note
You'll notice that the marker disappears if you move the view outside of a plane (defined by the black borders in Figure 14.16). In the FindPlane class, we pass the TrackableType of PlaneWithinPolygon to the Raycast function. Try experimenting with different flags to see what effect it has on the placement marker. The different TrackableType varieties can be found at https://docs.unity3d.com/2019.1/Documentation/ScriptReference/Experimental.XR.TrackableType.html.
As you move the device around, you'll notice that the marker also disappears when there isn't a valid surface. We now have a visual indicator of when there is a suitable surface for placing objects.
In this section, you've created your first URP material and assigned it a custom texture. The material was then assigned to a Quad in the scene to represent a placement marker. We then took advantage of the plane detection code we wrote in the Retrieving plane data section to position the marker onto a detected surface using the OnValidPlaneFound and OnValidPlaneNotFound actions.
Now that the player has this visual indication of when a suitable surface is in view, we can write the code that will spawn objects at the marker's location.
We've done most of the groundwork for placing the objects in the world. We already have a method for detecting a plane and providing a visual indicator for the player, so they know what a suitable surface is and, more importantly, what isn't. Now we need to spawn an object when a player taps on the screen and there is a valid plane. To do this, we need to create a new script, as follows:
public class PlaceObjectOnPlane : MonoBehaviour
{
public GameObject ObjectToPlace;
private FindPlane PlaneFinder;
private PlaneData Plane = null;
void Awake()
{
PlaneFinder = FindObjectOfType<FindPlane>();
}
void LateUpdate()
{
if (ShouldPlaceObject())
{
Instantiate(ObjectToPlace, Plane.Position, Plane.Rotation);
}
}
}
The following points summarize the preceding code snippet:
Similarly to the script we wrote in Placing the marker, we store a reference to the FindPlane component in the scene. We'll use this to subscribe to the OnValidPlaneFound and OnValidPlaneNotFound events.
In the LateUpdate function, we check whether we are able to place an object, and if so, we create it with the position and rotation specified in the Plane member variable. This variable is set whenever a valid plane has been found. We use LateUpdate instead of Update because LateUpdate is called after Update, so we can be certain that FindPlane.Update will have already checked for a plane this frame. If it has found a plane, we can use it in the same frame to generate an object.
public class PlaceObjectOnPlane : MonoBehaviour
{
…
private bool ShouldPlaceObject()
{
if (Plane != null && Input.touchCount > 0)
{
if (Input.GetTouch(0).phase == TouchPhase.Began)
{
return true;
}
}
return false;
}
}
To spawn an object, we need to meet the following criteria:
Plane should not be null.
The player should have just tapped on the screen (note that the touch phase of the event is TouchPhase.Began).
Tip
There are several different touch states, including Began, Moved, Stationary, Ended, and Canceled. Most of them are self-explanatory, although Ended and Canceled are worth differentiating. A touch is considered ended when the user lifts their finger from the screen, and it's considered canceled when the system cancels tracking of the touch. A touch can be canceled for several reasons; for example, if a user applies more touches than the system can handle, previous touches will be canceled. For more information on the different touch phases, see https://docs.unity3d.com/ScriptReference/TouchPhase.html.
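The phases could be distinguished with a switch, as in this sketch (a hypothetical handler, not code we add to the project):

```csharp
// Hypothetical handler: react differently to each touch phase.
if (Input.touchCount > 0)
{
    Touch touch = Input.GetTouch(0);
    switch (touch.phase)
    {
        case TouchPhase.Began:
            // The finger first touched the screen this frame.
            break;
        case TouchPhase.Ended:
            // The finger was lifted normally.
            break;
        case TouchPhase.Canceled:
            // The system stopped tracking the touch (e.g., too many touches).
            break;
    }
}
```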
At the moment, this function will never return true as Plane will always be null. We'll change this now.
public class PlaceObjectOnPlane : MonoBehaviour
{
…
void OnEnable()
{
PlaneFinder.OnValidPlaneFound += StorePlaneData;
PlaneFinder.OnValidPlaneNotFound += RemovePlaneData;
}
void OnDisable()
{
PlaneFinder.OnValidPlaneFound -= StorePlaneData;
PlaneFinder.OnValidPlaneNotFound -= RemovePlaneData;
}
}
Here, we subscribe to the events in OnEnable and unsubscribe from them in OnDisable. This code has been described in detail in the Placing the marker section, so I won't go into detail here. The StorePlaneData function is called whenever a plane is found, and RemovePlaneData is called every frame when there is no valid plane.
public class PlaceObjectOnPlane : MonoBehaviour
{
…
private void StorePlaneData(PlaneData Plane)
{
this.Plane = Plane;
}
private void RemovePlaneData()
{
Plane = null;
}
}
StorePlaneData is called whenever a plane is found. It stores the plane data to be used by the LateUpdate function to spawn an object. RemovePlaneData sets Plane to null when there is no valid plane in the device's viewport. By setting it to null here, ShouldPlaceObject will return false until a valid plane is found again, preventing the user from spawning an object in the meantime.
Now we need to add the new script to the scene, back in Unity:
Run the game now and you will be able to tap the screen to place objects in the world:
As you can see from Figure 14.19, you can place the turtle on different levels and also vertical planes, and it will appear with the correct rotation.
That's it for the main functionality for the AR project. While the interaction is rudimentary, it provides everything you need to create complex AR experiences. Before we wrap up, I would like to briefly run through how we can add post-processing effects in the URP. These post-processing effects will modify the visual appearance of the turtle we place in the world. Although we covered post-processing in previous chapters, its implementation is slightly different in the world of the URP, as you will see shortly.
To refresh your memory, the URP is a Scriptable Render Pipeline developed in-house by Unity. It has been designed to introduce workflows that provide control over how Unity renders a frame without the need to write any code. So far, we've learned how to update materials and enable background drawing for AR using the URP. In this section, we'll take it a step further and add post-processing effects using the URP. To accomplish this, we first need to modify the camera:
If you remember from Chapter 11, Entering Virtual Reality, we will need both Volume and Post Processing profiles to enable Post Processing. However, we'll create both in a slightly different way:
Now you are free to add custom post-processing effects. By default, no post-processing effects are enabled. Previously, we enabled the effects by selecting the profile in the Project panel; however, we can enable them directly from the Volume component:
Tip
If you drag the Turtle prefab to the scene, as shown in Figure 14.23, you can see the effect that different settings have on the scene.
Next, we'll configure the Chromatic Aberration post-processing effect:
Lastly, we'll configure the Color Adjustments post-processing effect:
And that's it for the modifications we'll make to the turtle's visuals in this section. You can see the contrast between the original turtle and the post-processing turtle in Figure 14.26:
Feel free to play around with the different overrides to see what effects you can produce. For more information on what each effect does, see the online documentation at https://docs.unity3d.com/Manual/PostProcessingOverview.html.
If you run the game on your device, you can spawn the techno turtle into the world yourself:
You will have noticed that your environment has changed appearance as the post-processing effects are applied to everything on screen, not just the turtle.
And that's it for the AR game: you can detect horizontal and vertical planes and spawn the techno turtle in your environment. And you learned how to do all of this in a URP project! In doing so, you've created a solid foundation for an AR game that can be extended in multiple ways. For example, how about creating a table-top fighting game? You will need to create the NPCs in Unity and then use the scripts we wrote here to place them in the real world. The possibilities are (almost) endless!
Congratulations! By reaching this point, you have completed the AR project and six other projects: first-person 3D games, 2D adventure games, space shooters, AI, machine learning, and virtual reality.
In this project alone, you've learned the foundations of AR development. You now know how to detect planes (and other features) in a real-world environment, extract information from the detected planes, and use it to spawn virtual objects. You've taken advantage of the tools offered by AR Foundation to create an AR game that can be played on Android or iOS. The game is easy to debug, as you can see in real time when Unity has detected a plane, the size of the detected plane, and its location in the world.
You then extended Unity's offerings by writing custom scripts to extract the plane data generated by Unity and make it accessible to any script that subscribes to updates. You designed a placement marker and object spawn script that uses this information to place objects in the environment.
Not only that, but you've also done it using Unity's URP. So on top of the AR knowledge, you now know how to convert materials to use the URP, along with how to implement AR and post-processing effects in the URP. Not bad!
Now is an excellent time to reflect on what you've learned in this book. Take the time to think about what you've read up to this point. Did any chapters stand out to you in particular? Maybe you were excited by working on AI? Or creating a project in virtual reality? I recommend you focus on those projects first. Play around with them, extend things, break things, and then work on fixing them. But whatever you do, have fun!
Q1. You can use the … flag to select which objects to detect in the real world.
A. PlaneFlag
B. RaycastHitType
C. TrackableType
D. FeatureFlag
Q2. You can disable the detection of vertical planes using the … component.
A. Plane Manager
B. Plane Detector
C. AR Session
D. AR Session Origin
Q3. RaycastManager is used to cast rays in AR.
A. True
B. False
Q4. A touch is defined as Canceled when which of the following happens?
A. The user removes their finger from the screen.
B. The system cancels the touch.
C. The user double taps the screen.
D. The tap response is used to spawn an object.
Q5. The URP is which of the following?
A. A Unity quality setting
B. A post-processing effect
C. An animation system
D. A Scriptable Render Pipeline
For more information on the topics covered in this chapter, see the following links: