Now we are finally ready to fly our mission. Flying Crazyflie to a target first requires that the quadrotor can be controlled to hover in place. Once hovering is reliable, the next step is to fly to a stationary target. The next sections introduce the steps to accomplish these tasks.
The first step in controlling Crazyflie's flight is demonstrating control of the quadrotor hovering in one location. To start the process, use the launch command:
$ roslaunch crazyflie_autonomous hover_kinectv2.launch
Then, turn on Crazyflie and let it run through its startup routine. When it is complete, type in a second terminal window:
$ roslaunch crazyflie_autonomous control_crazyflie.launch
The hover mission can be started by pushing the Takeoff (blue) button on the Xbox 360 controller. After Crazyflie has achieved takeoff, the quadrotor will begin to receive cmd_vel (geometry_msgs/Twist) messages to stay in the same location with respect to the Kinect image frame. Crazyflie will try to maintain this location until the Land (green) button on the controller is pressed. If Crazyflie drifts to the edge of the Kinect image, a land service request will be generated by the crazyflie_window node to (hopefully) safely land the quadrotor.
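The window watchdog behavior can be sketched as a simple bounds check. The function name and the 20-pixel margin below are assumptions for illustration, not the crazyflie_window node's actual code:

```python
# Illustrative edge test: report True when the tracked marker's pixel
# position (u, v) drifts within a margin of the image border, at which
# point the real node would issue its land service request.
# The 20-pixel margin is an assumed value, not from the book's code.
def near_image_edge(u, v, width, height, margin=20):
    return (u < margin or v < margin or
            u > width - margin or v > height - margin)
```

In the actual system, this check would trigger a land service call rather than just returning a flag.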
As described in the What makes takeoff and land work? section, the _cf_state variable changes from takeoff to flight when one of the two takeoff conditions is met. These conditions are that the Crazyflie's position in y has changed by 25 pixels or that the thrust value is over 50,000. When one of these conditions is met, the initial values for the PID controllers are reset, and the initial integral variable for the Z PID controller is set.
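The two takeoff conditions can be expressed as a small pure-Python predicate. The function name and argument layout are illustrative; the 25-pixel and 50,000-thrust thresholds come from the text above:

```python
# Returns True once either takeoff condition described in the text is met:
# the y position has changed by 25 pixels, or thrust exceeds 50,000.
def takeoff_complete(start_y_px, current_y_px, thrust):
    moved_enough = abs(current_y_px - start_y_px) >= 25
    thrust_high = thrust > 50000
    return moved_enough or thrust_high
```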
The initial check in flight mode is to determine whether the target flag has been set to True. This flag is set by the _update_target_pose function if a target pose (geometry_msgs/PoseStamped) message has been received. If this message has not been received, then the target flag is False and _cf_state is set to hover. The current x, y, and z position of Crazyflie is captured as the three-element list hover_position.
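This fallback from flight to hover can be sketched as follows; the function and its return shape are illustrative, not the controller's actual code:

```python
# If no target pose has been received, fall back to hover mode and capture
# the current (x, y, z) position as the hover set point.
def choose_mode(target_received, current_position):
    if target_received:
        return "flight", None
    return "hover", list(current_position)   # becomes hover_position
```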
As described in the Using Kinect and OpenCV section, the crazyflie_detector node publishes the Crazyflie's tf transform as its x, y, and z position in the Kinect image frame. The crazyflie_controller node calls the _getTransform function every 20 milliseconds to get this transform and uses it for processing both flight and hover control.
In hover mode, the PID controllers are used to calculate the linear x, y, and z values for the cmd_vel message. Crazyflie's (x, y, z) position in the Kinect frame is remapped so that the direction of control corresponds to Crazyflie's coordinate frame (x forward, y left, z up). The value of Crazyflie's location on its x axis needs to increase as it flies to the left in the image, the value on its z axis needs to increase as Crazyflie flies up in the image, and the value on its y axis needs to decrease as it flies closer to the Kinect camera. The following lines of code show the remapping of Crazyflie's positions (current and hover) in the camera frame to Crazyflie's coordinate axes:
# camera -x position
self.fly.linear.x = self.m_pidX.update(
    (self.camera_width - cf_trans[0]),
    (self.camera_width - self.hover_position[0]))

# camera -z position
if cf_trans[2] == 0.0:
    self.fly.linear.y = self.m_pidY.update(
        self.hover_position[2], self.last_depth)
else:
    self.fly.linear.y = self.m_pidY.update(
        self.hover_position[2], cf_trans[2])
    self.last_depth = cf_trans[2]

# camera -y position
self.fly.linear.z = self.m_pidZ.update(
    (self.camera_height - cf_trans[1]),
    (self.camera_height - self.hover_position[1]))
Note that rospy.loginfo statements have been removed to enhance clarity.
The m_pidX.update method is called to calculate the correction needed in x to maintain the hover position. The x position values for both current and hover are subtracted from the camera image width to achieve the correct difference. The value returned is used as the cmd_vel linear.x (pitch) control.
The camera's z position (depth) maps to the y axis control for Crazyflie. Sometimes, Kinect publishes bad depth values for Crazyflie's current position, so the case that the z (depth) value equals zero is checked. If a zero value is found, the last good depth value is used. The m_pidY.update method is called to handle these z (depth) values, and the resulting value is assigned to the cmd_vel linear.y (roll) control.
The Kinect y position corresponds to the z axis control for Crazyflie. The camera height value is used to flip the current and hover location values so that they increase as Crazyflie moves closer to the top of the image frame. The m_pidZ.update method is called to process the y values and provide the resulting cmd_vel linear.z (thrust) control.
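For reference, a minimal PID update of the kind the m_pidX, m_pidY, and m_pidZ controllers perform might look like the sketch below. The gains, loop time, and class layout are assumptions, not the crazyflie_autonomous implementation:

```python
class SimplePID:
    """Minimal PID controller: update(current, target) returns a correction."""
    def __init__(self, kp, ki, kd, dt=0.02):   # dt ~ the 20 ms control loop
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, current, target):
        error = target - current
        self.integral += error * self.dt                 # accumulate I term
        derivative = (error - self.prev_error) / self.dt # rate of change, D term
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

Each axis gets its own instance, so the integral and derivative history of the x, y, and z corrections stay independent.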
Now we will look at how the Crazyflie is automatically controlled as it flies to a target.
Each step in this mission builds on the previous step. To get Crazyflie to fly to a particular location, a separate node named target_detector was created to handle locating the target and publishing its location. The code for this node is contained in the detect_target.py script in the crazyflie_autonomous package. To perform this phase of the mission, begin by typing the launch command:
$ roslaunch crazyflie_autonomous hover_kinectv2.launch
Then turn on Crazyflie and let it run through its startup routine. When it is complete, start the crazyflie_controller node by typing in a second terminal window:
$ roslaunch crazyflie_autonomous control_crazyflie.launch
Then in a third window, execute the following command to start the target_detector node:
$ rosrun crazyflie_autonomous detect_target.py
The target_detector node will begin to transmit messages containing the location of the target marker with respect to the Kinect image frame.
Begin the mission by pushing the Takeoff button on the joystick controller. After Crazyflie has achieved takeoff, the quadrotor will begin to receive cmd_vel (geometry_msgs/Twist) messages to fly towards the target marker. Since the target is stationary, the location message is only published at 1 Hz. The quadrotor will hover above the target until the Land button on the joystick controller is pressed.
The following image shows our mission setup. The Crazyflie is positioned on the table with a blue circle around its ball marker, and the pink target (on top of TurtleBot in the lower-left corner of the screenshot) has a red rectangle around it:
Using the software described in this chapter, Crazyflie was able to take off from a position, hover, and then fly and land at a second position identified by a target marker.
The next section elaborates on the target detection method.
The target_detector node works similarly to the crazyflie_detector node. The node subscribes to the Image messages from the Kinect, specifying the qhd quality. Both image_color_rect and image_depth_rect images are requested. When a color image (image_color_rect) is received, the callback function image_callback uses the same object detection techniques described for the Crazyflie to find the target within the Kinect color image frame. The u and v pixel coordinates of the center of the target are saved by this function. These pixel coordinates are used by the callback function depth_callback for the depth image (image_depth_rect) to access the depth value at that location.
These values of u, v, and depth are used as x, y, and z, respectively, by the update_target_pose function. This function assigns these values to a PoseStamped message and publishes the message.
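The handoff between the two callbacks and the pose update can be sketched without ROS as follows. The class and method bodies are illustrative; the real node receives sensor_msgs/Image messages, uses cv_bridge and OpenCV, and publishes a geometry_msgs/PoseStamped (a plain dict stands in for it here):

```python
class TargetDetectorSketch:
    """ROS-free sketch of the detector's two callbacks and pose update."""
    def __init__(self):
        self.center = None        # (u, v) from the latest color frame
        self.last_pose = None     # stand-in for the published pose

    def image_callback(self, detected_center):
        # The real callback runs OpenCV detection on the color image;
        # here the detected target center is passed in directly.
        self.center = detected_center

    def depth_callback(self, depth_image):
        # depth_image: 2-D array indexed [v][u], like a rectified depth frame.
        if self.center is None:
            return
        u, v = self.center
        self.update_target_pose(u, v, depth_image[v][u])

    def update_target_pose(self, u, v, depth):
        # u -> x, v -> y, depth -> z; a dict stands in for the
        # PoseStamped message published by the real node.
        self.last_pose = {"x": float(u), "y": float(v), "z": float(depth)}
```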
When _cf_state changes from takeoff to flight, the target flag is checked to determine whether a target PoseStamped message has been received. If the target flag is true, the target message has been received and _cf_state will stay in flight mode. The x, y, and z PID controllers are used to calculate the control values for the cmd_vel message. Similar to the processing described for hover, the Crazyflie's location in the Kinect image frame must be remapped to correspond to the direction of control in the Crazyflie's coordinate frame. The previous section What makes hover work? describes this remapping of Crazyflie's position in the camera frame to its coordinate axes. The following lines of code show this remapping:
# camera -x position
self.fly.linear.x = self.m_pidX.update(
    (self.camera_width - cf_trans[0]),
    (self.camera_width - self.target_position.pose.position.x))

# camera -z position
self.fly.linear.y = self.m_pidY.update(
    cf_trans[2], self.target_position.pose.position.z)

# camera -y position
self.fly.linear.z = self.m_pidZ.update(
    (self.camera_height - cf_trans[1]),
    (self.camera_height - self.target_position.pose.position.y + 25))
The update methods of the m_pidX, m_pidY, and m_pidZ object instances are used to obtain the values for the cmd_vel message, as explained in the previous section. The difference is that the hover values have been replaced by the PoseStamped position values of the target coordinates.
This chapter would not be complete without documenting a few of the lessons we learned along the way and would like to pass on to you. These are as follows:
The use of logging messages was critical to the development of this mission. ROS provides a number of ways to gain insight into the output of nodes by publishing information and debugging messages to rosout. These messages can be viewed while processes are active with rqt_console or via stdout (a terminal window). Messages can also be examined afterwards through log files written for each ROS node under the ~/.ros directory.
There are several levels of logging messages that can be written to rosout. These levels are DEBUG, INFO, WARN, ERROR, and FATAL, written with the rospy.logdebug, rospy.loginfo, rospy.logwarn, rospy.logerr, and rospy.logfatal methods, respectively.
The use of rospy.loginfo and rospy.logdebug messages has been scattered throughout the code developed for this mission. We encourage the use of these logging methods as you adapt this software for your purposes.
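The rospy logging levels mirror those of Python's standard logging module, so the same level filtering can be demonstrated outside of ROS. This sketch uses the standard library rather than rospy so it runs without a ROS installation; the logger name and messages are illustrative:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)   # default format: message only
log = logging.getLogger("crazyflie_demo")
log.addHandler(handler)
log.propagate = False
log.setLevel(logging.INFO)                # suppress DEBUG and below

log.debug("raw PID terms ...")            # filtered out at INFO level
log.info("takeoff complete")              # recorded
print(stream.getvalue().strip())
```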
Now, we wrap up this adventure!