Danny is a graduate student in the Aeronautics & Astronautics Department. His research interests
include sensor fusion, machine learning, and applied control theory for practical robotics applications
involving safety-critical tasks. His research focuses on solving perception and motion planning problems
related to semi-autonomous and remotely piloted aerial vehicles used for search and rescue missions.
Danny has prior work experience as an aircraft mechanic and electrician responsible for repairing pilot
instrumentation, autopilot avionics, and stability augmentation systems on the B-52H. He received his
B.S. in Electrical Engineering from the University of Washington in 2012 and an M.S. in
Electrical Engineering from the Air Force Institute of Technology (AFIT) in 2018. While at AFIT, he worked
for the Autonomy and Navigation Center and developed algorithms for navigation in areas where Global
Navigation Satellite System (GNSS) services are unavailable.
Publications
WiSARD: A Labeled Visual and Thermal Image Dataset for Wilderness Search and Rescue
Broyles, D.*, Hayner, C.*, and Leung, K.
In IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, 2022
Sensor-equipped unoccupied aerial vehicles (UAVs) have the potential to help reduce search times and alleviate safety risks for first responders carrying out Wilderness Search and Rescue (WiSAR) operations, the process of finding and evacuating person(s) lost in wilderness areas. Unfortunately, visual sensors alone do not address the need for robustness across all the possible terrains, weather, and lighting conditions that WiSAR operations can be conducted in. The use of multi-modal sensors, specifically visual-thermal imagers, is critical in enabling WiSAR UAVs to perform in diverse operating conditions. However, due to the unique challenges posed by the wilderness context, existing dataset benchmarks are inadequate for developing vision-based algorithms for autonomous WiSAR UAVs. To this end, we present WiSARD, more than 56,000 labeled visual and thermal images collected from UAV flights in various terrains, seasons, weather, and lighting conditions. To the best of our knowledge, WiSARD is the first large-scale dataset collected with multi-modal sensors for autonomous WiSAR operations. We envision that our dataset will provide researchers with a diverse and challenging benchmark that can test the robustness of their algorithms when applied to real-world (life-saving) applications.
@inproceedings{BroylesHaynerEtAl2022,
  author    = {Broyles, D.* and Hayner, C.* and Leung, K.},
  title     = {{WiSARD}: A Labeled Visual and Thermal Image Dataset for Wilderness Search and Rescue},
  booktitle = {{IEEE/RSJ Int.\ Conf.\ on Intelligent Robots \& Systems}},
  year      = {2022},
  arxiv     = {2309.04453},
  url       = {https://sites.google.com/uw.edu/wisard/}
}
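For a feel of how a paired visual-thermal dataset like this might be consumed, here is a minimal Python loading sketch. The directory layout, file extensions, and pairing-by-filename convention are illustrative assumptions, not the released WiSARD format; see the project site linked above for the actual structure.

# Hypothetical loader for a paired visual/thermal dataset. Layout and
# filenames are assumed for illustration only.
from pathlib import Path
from PIL import Image

def iter_image_pairs(root):
    """Yield (visual, thermal) PIL image pairs matched by filename."""
    root = Path(root)
    for vis_path in sorted((root / "visual").glob("*.jpg")):
        thm_path = root / "thermal" / vis_path.name
        if thm_path.exists():  # skip frames missing one modality
            yield Image.open(vis_path), Image.open(thm_path)

if __name__ == "__main__":
    for visual, thermal in iter_image_pairs("wisard/"):  # hypothetical path
        print(visual.size, thermal.size)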
HALO: Hazard-Aware Landing Optimization for Autonomous Systems
Hayner, C. R., Buckner, S. C., Broyles, D., Madewell, E., Leung, K., and Açıkmeşe, B.
In Proc. IEEE Conf. on Robotics and Automation, 2023
With autonomous aerial vehicles enacting safety-critical missions, such as the Mars Science Laboratory Curiosity rover’s entry, descent, and landing on Mars, awareness and reasoning regarding potentially hazardous landing sites is paramount. This paper presents a coupled perception-planning solution which addresses the real-time hazard detection, optimal landing trajectory generation, and contingency planning challenges encountered when landing in uncertain environments. The perception and planning components are addressed by the proposed Hazard-Aware Landing Site Selection (HALSS) framework and Adaptive Deferred-Decision Trajectory Optimization (Adaptive-DDTO) algorithm, respectively. The HALSS framework processes point clouds through a segmentation network to predict a binary safety map, which is analyzed using the medial axis transform to efficiently identify circular, safe landing zones. The Adaptive-DDTO algorithm addresses the need for contingency planning during target failure scenarios through adaptively recomputed multi-target trajectory optimization. Overall, when coupled with HALSS in a simulated environment, Adaptive-DDTO achieves an 18.16% increase in landing success rate and a 0.4% decrease in cumulative control effort compared to its predecessor, DDTO, while computing solutions in near real time.
@inproceedings{HaynerBucknerEtAl2023,
  author    = {Hayner, C. R. and Buckner, S. C. and Broyles, D. and Madewell, E. and Leung, K. and A\c{c}{\i}kme\c{s}e, B.},
  title     = {{HALO}: Hazard-Aware Landing Optimization for Autonomous Systems},
  booktitle = {{Proc.\ IEEE Conf.\ on Robotics and Automation}},
  year      = {2023},
  arxiv     = {2304.01583}
}
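To make the landing-site selection step concrete, below is a minimal Python sketch of its geometric core: given a binary safety map, the Euclidean distance transform (the quantity underlying the medial axis transform) locates the largest circle of entirely safe pixels. This is an illustration of the idea, not the authors' HALSS implementation, and the demo map is fabricated.

# Sketch of the safe-landing-zone idea (not the HALSS code): the safe
# pixel farthest from any unsafe pixel is the center of the largest
# inscribed safe circle.
import numpy as np
from scipy import ndimage

def largest_safe_circle(safety_map):
    """safety_map: 2D bool array, True = safe pixel."""
    # Distance from each safe pixel to the nearest unsafe pixel.
    dist = ndimage.distance_transform_edt(safety_map)
    center = np.unravel_index(np.argmax(dist), dist.shape)
    radius = dist[center]
    return center, radius  # largest inscribed safe circle, in pixels

if __name__ == "__main__":
    demo = np.ones((64, 64), dtype=bool)
    demo[:, :10] = False          # hazardous strip on the left
    demo[30:40, 30:40] = False    # hazardous patch in the middle
    print(largest_safe_circle(demo))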
MISFIT-V: Misaligned Image Synthesis and Fusion using Information from Thermal and Visual
Chauhan, A., Remy, I., Broyles, D., and Leung, K.
2023 (submitted)
Detecting humans from airborne visual and thermal imagery is a fundamental challenge for Wilderness Search-and-Rescue (WiSAR) teams, who must perform this function accurately in the face of immense pressure. The ability to fuse these two sensor modalities can potentially reduce the cognitive load on human operators and/or improve the effectiveness of computer vision object detection models. However, the fusion task is particularly challenging in the context of WiSAR due to hardware limitations and extreme environmental factors. This work presents Misaligned Image Synthesis and Fusion using Information from Thermal and Visual (MISFIT-V), a novel two-pronged unsupervised deep learning approach that utilizes a Generative Adversarial Network (GAN) and a cross-attention mechanism to capture the most relevant features from each modality. Experimental results show MISFIT-V offers enhanced robustness against misalignment and poor lighting/thermal environmental conditions compared to existing visual-thermal image fusion methods.
@inproceedings{ChauhanRemyEtAl2023,
  author = {Chauhan, A. and Remy, I. and Broyles, D. and Leung, K.},
  title  = {{MISFIT-V}: Misaligned Image Synthesis and Fusion using Information from Thermal and Visual},
  year   = {2023},
  arxiv  = {2309.13216},
  note   = {(submitted)}
}
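The cross-attention idea can be sketched in a few lines of PyTorch: one modality's features form the queries while the other's form the keys and values, so each location can borrow information from anywhere in the other modality even when the two images are misaligned. The block below is an illustrative sketch with assumed dimensions, not the MISFIT-V architecture.

# Illustrative cross-attention fusion block (a sketch, not the MISFIT-V
# architecture): thermal features attend to visual features.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, thermal, visual):
        # thermal, visual: (batch, tokens, dim) flattened feature maps.
        # Queries come from one modality, keys/values from the other.
        fused, _ = self.attn(query=thermal, key=visual, value=visual)
        return self.norm(thermal + fused)  # residual connection

x_t = torch.randn(2, 64, 256)  # e.g. an 8x8 thermal feature map, flattened
x_v = torch.randn(2, 64, 256)
print(CrossAttentionFusion()(x_t, x_v).shape)  # torch.Size([2, 64, 256])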
Beyond Visual Line-of-Sight Uncrewed Aerial Vehicle for Search and Locate Operations
Madewell, E., Pollack, E., Kuni, H., Johri, S., Broyles, D., Vagners, J., and Leung, K.
In AIAA Scitech Forum, 2024
The deployment of Uncrewed Aerial Vehicles (UAVs) in wilderness search and locate operations has gained attention in the past few years. To help expand the effective search radius and provide more flexible UAV search capabilities, we propose the Reliable Uninterrupted Communications Kit for UAV Search (RUCKUS), a backpackable Beyond Visual Line-of-Sight (BVLOS) UAV system utilizing an intermediate "relay UAV" to provide an uninterrupted communications link between the ground station and the search UAV. The proposed system is designed to be self-contained, modular, and affordable, and can provide continuous sensor data and control flow between the search UAV and ground station, enabling users to receive real-time video feedback from the search UAV and dynamically update the UAV’s search strategy. In this paper, we describe the proposed system architecture and characterize the signal strength via a number of experimental flight tests. The end goal is to develop a flexible, cost-effective, and portable BVLOS solution to aid first responders and others in safety-critical operations where extending the operational range can significantly improve mission success.
@inproceedings{MadewellPollackEtAl2024,
  author    = {Madewell, E. and Pollack, E. and Kuni, H. and Johri, S. and Broyles, D. and Vagners, J. and Leung, K.},
  title     = {Beyond Visual Line-of-Sight Uncrewed Aerial Vehicle for Search and Locate Operations},
  booktitle = {{AIAA Scitech Forum}},
  year      = {2024}
}
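A back-of-the-envelope link budget illustrates why a relay hop extends range: free-space path loss grows with the logarithm of distance, so splitting one long link into two half-length hops cuts the per-hop loss by 20·log10(2) ≈ 6 dB. The frequency and distances below are assumptions for illustration, not parameters from the paper.

# Illustrative link budget (not from the paper): compare the path loss of
# one direct 10 km link against each hop of a relayed link at 5 km.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / 3e8))

f = 2.4e9      # assumed 2.4 GHz control/video link
d = 10_000.0   # assumed 10 km from ground station to search UAV
print(f"direct link  : {fspl_db(d, f):.1f} dB loss")
print(f"per relay hop: {fspl_db(d / 2, f):.1f} dB loss")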