[Photo: Auburn Shelby Center for Engineering]

 Summer 2019 - Auburn University REU on Unmanned Aerial Vehicles 

My co-intern, Conor Green, and I were given a research opportunity under the Auburn University Research Experience for Undergraduates (REU) whose only real parameter was that the work broadly involve UAVs. To narrow the application further, and building upon recent research, we decided to take a novel approach to 3D interior mapping using Light Detection and Ranging (LiDAR).

[Animation: Final Hallway Stitch]

Program Description


Sponsored by the NSF and associated with the Department of Defense, the Auburn REU on Smart UAVs is tasked with involving undergraduate students in opportunities to expand their research interests and get a feel for what graduate school may be like. Moreover, Auburn University has long been an epicenter for research on new technologies aboard airborne vehicles, and students are given the freedom to build upon past research topics or develop a completely new path toward enhancing airborne technologies.


Topics pursued by the 18th cohort of the Auburn Smart UAVs REU included the following:

  • Training a deep learning neural network to classify, recognize, and avoid other unmanned aerial vehicles

  • 3D simulation of path-planning artificial intelligence using path optimization algorithms

  • 3D/4D LiDAR technology for simultaneous localization and mapping in GPS/GNSS denied environments




Abstract:

    Small quadrotor helicopter, or quadcopter, unmanned aerial vehicles (UAVs) have unique abilities to map environments, particularly utilizing 3D flash light detection and ranging (LiDAR) technologies, which yield point cloud data sets. However, the majority of applications to date utilize global navigation satellite systems (GNSS) to determine location in order to stitch together LiDAR frames, which excludes mapping in environments without readily available or reliable GNSS (e.g., inside concrete buildings or underground). In the context of search and rescue, providing an accurate and extensive model through a UAV could give emergency personnel critical information on the interior of a structure without risking human lives. Previous projects have confirmed the viability of autonomous flight through LiDAR and achieved simultaneous localization and mapping (SLAM) without GNSS. The project presented will equip a quadcopter UAV with a LiDAR sensor and manually navigate it through an interior environment to provide point cloud data to be processed, after the flight, into a model. This computer mock-up will be the amalgamation of the LiDAR and GNSS-denied SLAM data gathered throughout the entire flight path and will provide a comprehensive 3D representation of the interior.
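
The key idea above, placing each LiDAR frame into a shared map using the drone's own navigation pose instead of GNSS, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: it assumes each pose has already been parsed into yaw/pitch/roll angles and a translation, and the function names and angle conventions are my own rather than the project's actual code.

    import numpy as np

    def pose_to_transform(ypr_deg, translation_m):
        # Build a 4x4 homogeneous rigid transform from yaw/pitch/roll (degrees,
        # Z-Y-X convention assumed) and a translation in metres.
        yaw, pitch, roll = np.radians(ypr_deg)
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = translation_m
        return T

    def georeference_frame(points_xyz, pose_T):
        # Map an N x 3 LiDAR frame from sensor coordinates into the shared
        # world frame using the drone's navigation pose instead of GNSS.
        homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        return (pose_T @ homogeneous.T).T[:, :3]

    # Example: a synthetic frame placed 2 m forward with a 15-degree yaw.
    frame = np.random.rand(1000, 3)
    world_points = georeference_frame(frame, pose_to_transform([15, 0, 0], [2.0, 0.0, 0.0]))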

Methodology

By recording NTP-timestamped LiDAR data from a LiDAR camera and navigation data from the drone, we were able to synchronize the two streams, extract point cloud frames, and apply an accumulated transform to geo-reference every sequential frame to the first. The transformations were stored chronologically as rigid transformation matrices in a large cell array, where each could be applied and added to the total rigid transformation after being weighted with an iterative closest point (ICP) algorithm. Using ICP, we were able to correct errors from drift in the navigation data and merge the point clouds into a model that closely resembles the actual structure.
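
As a rough sketch of that pipeline, the snippet below synchronizes each LiDAR frame with the nearest navigation timestamp, chains the relative rigid transforms into one accumulated transform, and applies it so every frame is geo-referenced to the first. It is a simplified, hypothetical Python version rather than the original implementation: the pose inputs are assumed to already be 4x4 matrices, and the ICP weighting step is only indicated by a comment (a library routine such as Open3D's registration_icp could be slotted in there).

    import numpy as np

    def nearest_pose(lidar_time, nav_times, nav_poses):
        # Nearest-neighbour synchronization of an NTP-timestamped LiDAR frame
        # with the drone's navigation stream (both timestamps in seconds).
        idx = int(np.argmin(np.abs(np.asarray(nav_times) - lidar_time)))
        return nav_poses[idx]

    def stitch_frames(lidar_times, lidar_frames, nav_times, nav_poses):
        # Chain frame-to-frame rigid transforms into one accumulated transform
        # and use it to geo-reference every sequential frame to the first.
        accumulated = np.eye(4)
        prev_pose = nearest_pose(lidar_times[0], nav_times, nav_poses)
        stitched = []
        for t, frame in zip(lidar_times, lidar_frames):
            pose = nearest_pose(t, nav_times, nav_poses)
            relative = np.linalg.inv(prev_pose) @ pose  # motion since the last frame
            # An ICP refinement of `relative` against the previous frame would be
            # applied here to weight the transform and correct navigation drift.
            accumulated = accumulated @ relative
            prev_pose = pose
            homogeneous = np.hstack([frame, np.ones((len(frame), 1))])
            stitched.append((accumulated @ homogeneous.T).T[:, :3])
        return np.vstack(stitched)  # merged cloud in the first frame's coordinates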
