Raspberry Pi Controls Precision Landing for Drone

When building a drone, many electronics enthusiasts run into control challenges, especially when aiming for precision landing. For a purely recreational project, touching down a few steps away from the target may be acceptable, but when drones must collaborate with other vehicles under strict positional requirements, precision landing becomes crucial. In this article, TechSparks walks through using a Raspberry Pi to land a drone precisely at a specified location. The video below showcases the final implementation.

Step 1: Installing ROS Kinetic

In this project, I opted for the stable and widely supported ROS Kinetic release. For a detailed installation guide, refer to the official ROS documentation for step-by-step instructions. In my implementation I followed those steps with only minor changes to their order; the issues I encountered along the way, and how I resolved them, are documented below.

Add ROS repository sources and keys:

					sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654


Install ROS dependencies and tools:

					sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake

Initialize rosdep and update:

					sudo rosdep init
rosdep update

Create ROS workspace and fetch packages:

					mkdir -p ~/ros_catkin_ws
cd ~/ros_catkin_ws
rosinstall_generator ros_comm --rosdistro kinetic --deps --wet-only --tar > kinetic-ros_comm-wet.rosinstall
wstool init src kinetic-ros_comm-wet.rosinstall

Fetch and compile additional dependencies:

					mkdir -p ~/ros_catkin_ws/external_src
cd ~/ros_catkin_ws/external_src
wget http://sourceforge.net/projects/assimp/files/assimp-3.1/assimp-3.1.1_no_test_models.zip/download -O assimp-3.1.1_no_test_models.zip
unzip assimp-3.1.1_no_test_models.zip
cd assimp-3.1.1
cmake .
sudo make install

Return to ROS workspace and install dependencies:

					cd ~/ros_catkin_ws
rosdep install -y --from-paths src --ignore-src --rosdistro kinetic -r --os=debian:buster

Compile ROS workspace and set environment variables:

					sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic
source /opt/ros/kinetic/setup.bash
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc


OpenCV Error

If you are installing the desktop version and OpenCV fails to build, the error points into the file /home/pi/ros_catkin_ws/src/opencv3/modules/stitching/include/opencv2/stitching/detail/matchers.hpp. The fix is to change the path in the #include statement on line 52 of that file to the absolute path of the CUDA header cuda.hpp, like this:

					#include "/home/pi/ros_catkin_ws/src/opencv3/opencv_contrib/xfeatures2d/include/opencv2/xfeatures2d/cuda.hpp"

This modification is made to instruct the compiler about the correct location of the CUDA header file, resolving the compilation error.
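If you would rather script this fix than edit the header by hand, a small helper can rewrite the include line. This is a hedged sketch, not part of the original build: `patch_include` is a hypothetical helper name, and the paths are the ones quoted above.

```python
import re

def patch_include(text, old_include, new_path):
    """Rewrite an '#include <...>' or '#include "..."' directive to use an absolute path."""
    pattern = r'#include\s+["<]' + re.escape(old_include) + r'[">]'
    return re.sub(pattern, lambda m: '#include "%s"' % new_path, text)

# Redirect the xfeatures2d cuda.hpp include to its absolute location
src = '#include "opencv2/xfeatures2d/cuda.hpp"\n'
fixed = patch_include(
    src,
    "opencv2/xfeatures2d/cuda.hpp",
    "/home/pi/ros_catkin_ws/src/opencv3/opencv_contrib/xfeatures2d/include/opencv2/xfeatures2d/cuda.hpp",
)
print(fixed)
```

On the Pi you would read matchers.hpp, pass its contents through `patch_include`, and write the result back.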

Boost Error

If you encounter version-related issues causing errors with the Boost library, you can resolve them by following these steps:

Remove the existing Boost library:

					sudo apt remove libboost1.67-dev
sudo apt autoremove

Install the required Boost version:

					sudo apt install -y libboost1.58-dev libboost1.58-all-dev

Install additional dependencies and tools:

					sudo apt install -y g++-5 gcc-5
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
sudo update-alternatives --set cc /usr/bin/gcc
sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
sudo update-alternatives --set c++ /usr/bin/g++

Step 2: Installing MAVROS

MAVROS is a ROS (Robot Operating System) package designed for communication with MAVLink-compatible drones and autopilots.

Install dependencies:

					sudo apt-get install python-catkin-tools python-rosinstall-generator -y

Create and initialize a ROS workspace to store the source code of MAVROS:

					mkdir -p ~/mavros_ws/src
cd ~/mavros_ws
catkin init
wstool init src

Generate a ROS package manifest containing MAVLink and MAVROS, and save it to a temporary file:

					rosinstall_generator --rosdistro kinetic mavlink | tee /tmp/mavros.rosinstall
rosinstall_generator --upstream mavros | tee -a /tmp/mavros.rosinstall

Merge the generated package manifest into the ROS workspace and update the packages using wstool:

					wstool merge -t src /tmp/mavros.rosinstall
wstool update -t src -j4

Install the required dependencies using rosdep:

					rosdep install --from-paths src --ignore-src -y

Run the script to install the GeographicLib datasets:

					sudo ./src/mavros/mavros/scripts/install_geographiclib_datasets.sh


Missing Dependencies

When encountering missing dependencies, follow these steps:

Execute the following command to generate a rosinstall file, replacing "missing package" with the package name shown in brackets in the error message, such as trajectory_msgs or tf2_eigen:

					rosinstall_generator --rosdistro kinetic "missing package" | tee -a /tmp/mavros.rosinstall


Merge the generated rosinstall file into the source directory:

					wstool merge -t src /tmp/mavros.rosinstall

Update the source directory:

					wstool update -t src -j4

Install dependencies:

					rosdep install --from-paths src --ignore-src -y

Repeat these steps until no errors occur.
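The fetch-merge-update cycle above lends itself to scripting. The sketch below only builds the command strings (run them yourself, or wire them to `subprocess` on a real system); `missing_packages` assumes rosdep reports failures in its usual `Cannot locate rosdep definition for [pkg]` form, and both function names are my own.

```python
import re

def missing_packages(error_text):
    """Extract package names from rosdep 'Cannot locate rosdep definition for [pkg]' lines."""
    return sorted(set(re.findall(r'Cannot locate rosdep definition for \[([\w]+)\]', error_text)))

def fetch_commands(packages, distro="kinetic", rosinstall="/tmp/mavros.rosinstall"):
    """Build the rosinstall_generator/wstool/rosdep command sequence for the missing packages."""
    cmds = []
    for pkg in packages:
        cmds.append("rosinstall_generator --rosdistro %s %s | tee -a %s" % (distro, pkg, rosinstall))
    cmds.append("wstool merge -t src %s" % rosinstall)
    cmds.append("wstool update -t src -j4")
    cmds.append("rosdep install --from-paths src --ignore-src -y")
    return cmds

error = ("ERROR: Cannot locate rosdep definition for [trajectory_msgs]\n"
         "ERROR: Cannot locate rosdep definition for [tf2_eigen]")
for cmd in fetch_commands(missing_packages(error)):
    print(cmd)
```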

Connection Refused

If you face a "connection refused" error, first retry the installation. If the problem persists, the bottleneck is usually GitHub connectivity, which you can improve by pinning the GitHub hosts in the local hosts file:

Open the hosts file:

					sudo vim /etc/hosts


Add one line per GitHub host at the end of the file, replacing [IP] with the address each hostname currently resolves to:

					[IP] github.com
[IP] gist.github.com
[IP] assets-cdn.github.com
[IP] raw.githubusercontent.com


Restart the network service:

					sudo /etc/init.d/networking restart

Step 3: PX4 Autopilot Firmware Development Environment

PX4 Autopilot is an open-source flight control system designed for drones and other autonomous vehicles.

Create a workspace directory and navigate into it:

					mkdir -p ~/Firmware_ws/src
cd ~/Firmware_ws/src

Clone the PX4 Autopilot firmware repository from GitHub:

					git clone https://github.com/PX4/Firmware.git
cd Firmware

Initialize and update submodules in the Git repository:

					git submodule update --init --recursive

Retrieve the description information for the current Git repository, usually the latest tag name:

					git describe --always --tags

Download the GNU Arm Embedded Toolchain archive (gcc-arm-none-eabi-8-2019-q3-update) to the Downloads directory and extract it:

					cd ~/Downloads
tar -jxf gcc-arm-none-eabi-8-2019-q3-update-linux.tar.bz2

Edit the bash configuration file and add the bin directory of the extracted toolchain to the PATH environment variable:

					vim ~/.bashrc
# Add the following line at the end of the file:
export PATH=~/Downloads/gcc-arm-none-eabi-8-2019-q3-update/bin:$PATH

For convenience, create symbolic links for arm-none-eabi-gcc and arm-none-eabi-g++ in the /usr/bin/ directory:

					sudo ln -s [path]/gcc-arm-none-eabi-8-2019-q3-update/bin/arm-none-eabi-gcc /usr/bin/
sudo ln -s [path]/gcc-arm-none-eabi-8-2019-q3-update/bin/arm-none-eabi-g++ /usr/bin/

Download the .deb package for ccache and install it:

					wget -O /tmp/ccache_3.7.7-1_amd64.deb http://launchpadlibrarian.net/459437926/ccache_3.7.7-1_amd64.deb
sudo dpkg -i /tmp/ccache_3.7.7-1_amd64.deb

Return to the PX4 Autopilot firmware directory and compile using the make command:

					cd Firmware
sudo make px4_fmu-v5_default

Step 4: Configuring PX4 and Gazebo Simulation Environment

Now let’s configure the PX4 open-source flight control system and Gazebo simulation environment for simulating and controlling a multirotor UAV.

In the root directory of PX4 Firmware, execute the following command to build the PX4 simulation environment:

					sudo make px4_sitl_default

This step creates a virtual flight controller for simulation. Choose between px4_sitl_default (simulation) and px4_fmu-v5_default (hardware) based on your requirements. If you choose not to compile the SITL target, you'll need to connect an actual Pixhawk flight controller and drive the simulation through mavros.

In the Firmware/Tools/sitl_gazebo directory, execute the following commands to build the Gazebo simulation environment:

					mkdir Build
export GAZEBO_PLUGIN_PATH=${GAZEBO_PLUGIN_PATH}:[path]/Tools/sitl_gazebo/Build
export GAZEBO_MODEL_PATH=${GAZEBO_MODEL_PATH}:[path]/Tools/sitl_gazebo/models
export SITL_GAZEBO_PATH=[path]/Tools/sitl_gazebo
cd Build
cmake ..
make -j4

In the terminal, execute the following command to start the simulation:

					cd src/Firmware
make posix_sitl_default
export FIRMWARE_DIR=[path]  # Add the path to the firmware
source ~/catkin_ws/devel/setup.bash  # Only needed if mavros was compiled from source
source Tools/setup_gazebo.bash ${FIRMWARE_DIR} ${FIRMWARE_DIR}/build/px4_sitl_default
roslaunch px4 mavros_posix_sitl.launch

This step launches ROS nodes, including mavros and the Gazebo simulation environment. After executing roslaunch, you should see the simulation interface on the screen, indicating a successful environment setup.


Try Offboard control to achieve actions such as takeoff and hover for the UAV:

					# Create a package
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_create_pkg offb roscpp mavros
cd ~/catkin_ws/src/offb/src
vim offb_node.cpp
# Copy and paste the code from offb_node.cpp

Add the following to CMakeLists.txt:

					add_executable(offb_node src/offb_node.cpp)

Then go back to the catkin_ws directory, execute catkin_make to compile, and finally run the offb_node node using the rosrun command.

Add a front-facing camera to the UAV by declaring a camera argument in the mavros_posix_sitl.launch file:

					<arg name="cam" default="iris_fpv_cam"/>

Simultaneously, change the following line:

					<arg name="sdf" default="$(find mavlink_sitl_gazebo)/models/$(arg vehicle)/$(arg vehicle).sdf"/>

to:

					<arg name="sdf" default="$(find mavlink_sitl_gazebo)/models/$(arg cam)/$(arg cam).sdf"/>

After executing roslaunch px4 mavros_posix_sitl.launch, the camera feed will be published to the /iris/usb_cam/image_raw topic, and you can use rqt to view it.

Adjust the camera pose by opening Firmware/Tools/sitl_gazebo/models/iris_fpv_cam/iris_fpv_cam.sdf and modifying the pose tag to 0 0 0 0 1.57 0 to change the camera perspective.
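Why 1.57? Gazebo's pose tag is x y z roll pitch yaw in radians, and a pitch of 1.57 rad (≈ π/2) swings the camera's forward axis down toward the ground, which is exactly what the landing detector needs. A quick numeric check of that claim:

```python
import math

# Gazebo's <pose> is "x y z roll pitch yaw"; pitching the fpv camera by
# 1.57 rad (~pi/2) swings its forward axis (+x) down toward -z.
pitch = 1.57
# Rotation about the y axis maps (1, 0, 0) to (cos(pitch), 0, -sin(pitch))
fx = math.cos(pitch)
fz = -math.sin(pitch)
print(round(fx, 3), round(fz, 3))  # close to (0, -1): the camera looks almost straight down
```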

Step 5: Importing Landmark Image into Gazebo

Create the following structure in ~/.gazebo/models:

land_mark
| -- model.sdf
| -- model.config
| -- materials
    | -- scripts
        | -- land_mark.material (texture information)
    | -- textures
        | -- h.png (landmark image)

Here are the contents of each file:


model.sdf:

					<?xml version='1.0'?>
<sdf version='1.4'>
  <model name="land_mark">
    <link name='link'>
      <pose>0 0 0.115 0 0 0</pose>
      <collision name='collision'>
        <geometry><box><size>.496 .496 .01</size></box></geometry>
      </collision>
      <visual name='visual'>
        <geometry><box><size>.496 .496 .01</size></box></geometry>
        <material>
          <script>
            <uri>model://land_mark/materials/scripts</uri>
            <uri>model://land_mark/materials/textures</uri>
            <name>Mark/Diffuse</name>
          </script>
        </material>
      </visual>
    </link>
  </model>
</sdf>

model.config:

					<?xml version="1.0"?>
<model>
  <name>land_mark</name>
  <version>1.0</version>
  <sdf version="1.4">model.sdf</sdf>
  <description>Landing Mark</description>
</model>


land_mark.material:

					material Mark/Diffuse
{
  receive_shadows off
  technique
  {
    pass
    {
      texture_unit
      {
        texture ../textures/h.png
      }
    }
  }
}

Ensure that these files are organized as described above and saved in the ~/.gazebo/models/land_mark directory. This way, when you insert the model in Gazebo, it should load and display correctly.
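To avoid assembling the tree by hand, the layout can be generated with a short script. This is a sketch under the structure described above: `make_landmark_model` is a hypothetical helper, and the files it creates are empty placeholders for you to fill with the three file bodies listed earlier.

```python
import os
import tempfile

def make_landmark_model(root):
    """Create the land_mark model skeleton that Gazebo expects."""
    layout = {
        "": ["model.sdf", "model.config"],
        os.path.join("materials", "scripts"): ["land_mark.material"],
        os.path.join("materials", "textures"): ["h.png"],
    }
    created = []
    for rel, files in layout.items():
        directory = os.path.join(root, "land_mark", rel)
        os.makedirs(directory, exist_ok=True)
        for name in files:
            path = os.path.join(directory, name)
            open(path, "a").close()  # empty placeholder; paste in the real contents
            created.append(path)
    return created

# On a real system, point this at ~/.gazebo/models; a temp dir is used here
created = make_landmark_model(tempfile.mkdtemp())
for p in created:
    print(p)
```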


Step 6: Landmark Recognition and Algorithmic Control

Landmark Recognition

Create a workspace:

					mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_create_pkg landing roscpp rospy std_msgs geometry_msgs mavros
cd ~/catkin_ws/src/landing
mkdir msg  # Store custom message types
mkdir scripts  # Store Python scripts

Define a custom message type:

Create a center.msg file in the msg folder with the following content:


					uint32 width
uint32 height
float64 x
float64 y
bool iffind

Create a Python script, track.py, in the scripts folder:

					import rospy
from sensor_msgs.msg import Image
from landing.msg import center  # Custom message type
import cv2
import numpy as np

center_publish = rospy.Publisher('/center', center, queue_size=1)  # Publish rectangle centers

# Callback function to get the camera image
def callback(msg):
    img = np.fromstring(msg.data, np.uint8)
    img = img.reshape(msg.height, msg.width, 3)
    track(img, msg.width, msg.height)  # Find rectangles

# Subscribe to get the camera image
def listener():
    rospy.Subscriber('/iris/usb_cam/image_raw', Image, callback)
    rospy.spin()

# Find rectangle centers
def track(frame, width, height):
    img = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    _, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[1]  # Contour extraction (OpenCV 3 return order)
    centers = []  # Store centers
    for contour in contours:
        if cv2.contourArea(contour) < 100:  # Filter out small contours
            continue
        epsilon = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        if approx.shape[0] == 4 and cv2.isContourConvex(approx):  # Keep convex quadrilaterals only
            centers.append((approx[0] + approx[1] + approx[2] + approx[3]).squeeze() / 4)
    # Clustering: quadrilaterals belonging to one nested mark share a center
    center_iter = list(range(len(centers)))
    result = []
    threshold = 20
    while len(center_iter) != 0:
        j = 1
        resultnow = [center_iter[0]]
        while j < len(center_iter):
            if np.sum((centers[center_iter[0]] - centers[center_iter[j]]) ** 2) < threshold:
                resultnow.append(center_iter.pop(j))
                j = j - 1
            j = j + 1
        center_iter.pop(0)
        if len(result) < len(resultnow):
            result = resultnow  # Keep the largest cluster
    # If the number of nested rectangles is greater than 2, extraction is considered successful
    if len(result) > 2:
        mean_center = np.sum(np.array(centers)[result], axis=0).astype(np.double) / len(result)
        publish(mean_center, width, height)  # Publish message
    else:
        center_temp = center()
        center_temp.iffind = False
        center_publish.publish(center_temp)
    # Uncomment below to draw the extracted contours
    # cv2.polylines(frame, rects, True, (0,0,255), 2)
    # cv2.imshow('w', frame)
    # cv2.waitKey(1)

# Publish the center point message
def publish(centers, width, height):
    center_temp = center()
    center_temp.width = width
    center_temp.height = height
    center_temp.x = centers[1]
    center_temp.y = centers[0]
    center_temp.iffind = True
    center_publish.publish(center_temp)

if __name__ == '__main__':
    rospy.init_node('track', anonymous=True)
    listener()
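The clustering step in track.py is the subtle part: every ring of the nested landing mark contributes its own quadrilateral, so their centers are grouped and the largest group wins. Here is that grouping logic in isolation, with made-up pixel coordinates (`largest_center_cluster` is a hypothetical name, not from the original script):

```python
import numpy as np

def largest_center_cluster(centers, threshold=20):
    """Group 2-D points whose squared distance is under `threshold`;
    return the indices of the largest group."""
    remaining = list(range(len(centers)))
    best = []
    while remaining:
        seed = remaining.pop(0)
        group = [seed]
        for idx in remaining[:]:
            if np.sum((centers[seed] - centers[idx]) ** 2) < threshold:
                group.append(idx)
                remaining.remove(idx)
        if len(group) > len(best):
            best = group
    return best

# Three near-coincident centers (the nested mark) plus one outlier
centers = [np.array(c, dtype=float) for c in [(160, 120), (161, 121), (159, 119), (40, 40)]]
cluster = largest_center_cluster(centers)
print(cluster)  # → [0, 1, 2]
```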

The following is the recognition effect:


Algorithmic Control

The control node below publishes setpoints, switches the UAV into OFFBOARD mode, arms it, takes off to 2 m, flies forward, and then closes the loop on the detected landmark during descent:

					#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <geometry_msgs/TwistStamped.h>
#include <mavros_msgs/CommandBool.h>
#include <mavros_msgs/SetMode.h>
#include <mavros_msgs/State.h>
#include <landing/center.h> // Import custom message type

mavros_msgs::State current_state;
void state_cb(const mavros_msgs::State::ConstPtr& msg){
    current_state = *msg;
}

geometry_msgs::PoseStamped local_position;
void position_cb(const geometry_msgs::PoseStamped::ConstPtr& msg){
    local_position = *msg;
}

landing::center landmark;
void center_cb(const landing::center::ConstPtr& msg){
    landmark = *msg;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "landing_node");
    ros::NodeHandle nh;
    ros::Subscriber state_sub = nh.subscribe<mavros_msgs::State>
            ("mavros/state", 10, state_cb);
    ros::Subscriber position_sub = nh.subscribe<geometry_msgs::PoseStamped>
            ("mavros/local_position/pose", 10, position_cb);
    ros::Subscriber center_sub = nh.subscribe<landing::center>
            ("center", 10, center_cb);
    ros::Publisher local_pos_pub = nh.advertise<geometry_msgs::PoseStamped>
            ("mavros/setpoint_position/local", 10);
    ros::Publisher local_vel_pub = nh.advertise<geometry_msgs::TwistStamped>
            ("mavros/setpoint_velocity/cmd_vel", 10);
    ros::ServiceClient arming_client = nh.serviceClient<mavros_msgs::CommandBool>
            ("mavros/cmd/arming");
    ros::ServiceClient set_mode_client = nh.serviceClient<mavros_msgs::SetMode>
            ("mavros/set_mode");
    ros::Rate rate(20.0);

    // Wait for the flight controller connection
    while(ros::ok() && !current_state.connected){
        ros::spinOnce();
        rate.sleep();
    }

    geometry_msgs::PoseStamped pose; // Pose control
    pose.pose.position.x = 0;
    pose.pose.position.y = 0;
    pose.pose.position.z = 2;
    geometry_msgs::TwistStamped vel; // Velocity control

    // Publish the initial pose for 5 seconds before requesting OFFBOARD
    for(int i = 100; ros::ok() && i > 0; --i){
        local_pos_pub.publish(pose);
        ros::spinOnce();
        rate.sleep();
    }

    mavros_msgs::SetMode offb_set_mode;
    offb_set_mode.request.custom_mode = "OFFBOARD";
    mavros_msgs::CommandBool arm_cmd;
    arm_cmd.request.value = true;
    ros::Time last_request = ros::Time::now();

    // Takeoff: enable OFFBOARD, arm, then hold the setpoint for 5 seconds
    while(ros::ok()){
        if( current_state.mode != "OFFBOARD" &&
            (ros::Time::now() - last_request > ros::Duration(5.0))){
            if( set_mode_client.call(offb_set_mode) && offb_set_mode.response.mode_sent){
                ROS_INFO("Offboard enabled");
            }
            last_request = ros::Time::now();
        } else if( !current_state.armed &&
                   (ros::Time::now() - last_request > ros::Duration(5.0))){
            if( arming_client.call(arm_cmd) && arm_cmd.response.success){
                ROS_INFO("Vehicle armed");
            }
            last_request = ros::Time::now();
        } else if(ros::Time::now() - last_request > ros::Duration(5.0)){
            break;
        }
        local_pos_pub.publish(pose);
        ros::spinOnce();
        rate.sleep();
    }

    // Fly around: move forward for 5 seconds
    last_request = ros::Time::now();
    while(ros::ok() && ros::Time::now() - last_request < ros::Duration(5.0)){
        vel.twist.linear.x = 1;
        vel.twist.linear.y = 0;
        vel.twist.linear.z = 0;
        local_vel_pub.publish(vel);
        ros::spinOnce();
        rate.sleep();
    }

    // Landing control
    while(ros::ok()){
        // Switch to landing mode when altitude drops below 0.3 m
        if(local_position.pose.position.z < 0.3){
            break;
        }
        if(landmark.iffind){
            // A rectangular landmark is detected: steer toward it
            // Calculate the error in both directions
            double err_x = landmark.height / 2.0 - landmark.x;
            double err_y = landmark.width / 2.0 - landmark.y;
            ROS_INFO_STREAM("state="<<err_x<<" "<<err_y);
            // Velocity control proportional to the pixel error
            vel.twist.linear.x = err_x / 400;
            vel.twist.linear.y = err_y / 400;
            // If the position is close to the desired one, start descending
            if(err_x < 10 && err_y < 10){
                vel.twist.linear.z = -0.2;
            } else {
                vel.twist.linear.z = 0;
            }
            local_vel_pub.publish(vel);
        } else {
            // No landmark found: return to 2 m altitude above the current position
            pose.pose.position.x = local_position.pose.position.x;
            pose.pose.position.y = local_position.pose.position.y;
            pose.pose.position.z = 2;
            local_pos_pub.publish(pose);
        }
        ros::spinOnce();
        rate.sleep();
    }

    offb_set_mode.request.custom_mode = "AUTO.LAND";
    if( set_mode_client.call(offb_set_mode) && offb_set_mode.response.mode_sent){
        ROS_INFO("Landing mode activated");
    }
    return 0;
}
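The landing phase boils down to a proportional controller on the pixel error between the image center and the detected mark. A pure-Python sketch of that mapping, using the 1/400 gain and 10-pixel descent window from the node above (`landing_velocity` is a hypothetical helper; the signed comparison is kept as in the original):

```python
def landing_velocity(width, height, x, y, found):
    """Map a landmark detection to a (vx, vy, vz) velocity command.
    Returns None when no landmark is visible (the caller climbs back to 2 m)."""
    if not found:
        return None
    err_x = height / 2.0 - x  # row error: mark above/below the image center
    err_y = width / 2.0 - y   # column error: mark left/right of the image center
    vx = err_x / 400.0
    vy = err_y / 400.0
    # Descend only when centered (signed comparison, as in the original node)
    vz = -0.2 if (err_x < 10 and err_y < 10) else 0.0
    return (vx, vy, vz)

# Mark detected almost dead-center of a 320x240 image: small correction plus descent
print(landing_velocity(320, 240, 118.0, 162.0, True))  # → (0.005, -0.005, -0.2)
```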
