Research

Current Research

🐢TURTLMap: Real-Time Localization and Dense Mapping for Unmanned Underwater Vehicles

We propose 🐢TURTLMap, the first low-cost, real-time state estimation and dense mapping solution that is robust to low-texture underwater environments.

Combining the proposed localization package with a real-time volumetric mapping package, we demonstrate accurate, real-time dense mapping on an embedded computer onboard a low-cost UUV with a downward-facing stereo camera, operating in a low-texture underwater environment.
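
As a rough illustration of the pipeline described above, the sketch below fuses stereo depth into a truncated signed distance function (TSDF) volume at externally supplied poses, using Open3D's scalable TSDF integration. All parameter values are illustrative assumptions; this is not the TURTLMap implementation.

```python
# Minimal sketch (not the TURTLMap code): fuse depth from a downward-facing
# stereo camera into a TSDF volume at poses from a localization module.
import numpy as np
import open3d as o3d

# Hypothetical camera intrinsics for a 640x480 stereo rig.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=525.0, fy=525.0, cx=320.0, cy=240.0)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.02,  # 2 cm voxels (assumed resolution)
    sdf_trunc=0.08,     # truncation distance in meters
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

def integrate_frame(color_np, depth_np, world_T_cam):
    """Integrate one frame: uint8 color, float32 depth in meters."""
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(color_np), o3d.geometry.Image(depth_np),
        depth_scale=1.0, depth_trunc=5.0, convert_rgb_to_intensity=False)
    # Open3D expects the extrinsic as camera-from-world.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(world_T_cam))

# After integrating all frames: mesh = volume.extract_triangle_mesh()
```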

Project Website: https://umfieldrobotics.github.io/TURTLMap/

Accepted to 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

This work relates to Department of Navy award N00014-21-1-2149 issued by the Office of Naval Research.

CRKD: Enhanced Camera-Radar Object Detection with Cross-modality Knowledge Distillation

We enable cross-modality knowledge distillation from a LiDAR-camera (LC) detector to a camera-radar (CR) detector in the BEV feature space.

We conduct extensive evaluation on nuScenes to demonstrate the effectiveness of CRKD, which improves the mAP and NDS of student detectors by 3.5% and 3.2%, respectively. Since our method focuses on a novel KD path with a distinctively large modality gap, we provide a thorough study and analysis to support our design choices.
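
For intuition, here is a minimal sketch of feature-level distillation in the BEV space: a frozen LC teacher supervises the CR student by matching BEV feature maps through a learned channel adapter. This illustrates the general idea only; CRKD's actual distillation losses, and where they attach, differ.

```python
# Minimal sketch of BEV feature distillation: a frozen LiDAR-camera
# teacher supervises a camera-radar student by matching BEV features.
# Illustrative only; not CRKD's exact losses or layers.
import torch
import torch.nn as nn

class BEVFeatureKD(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 conv adapts student features to the teacher's channel width.
        self.adapt = nn.Conv2d(student_channels, teacher_channels, 1)

    def forward(self, student_bev, teacher_bev):
        # student_bev: (B, Cs, H, W); teacher_bev: (B, Ct, H, W)
        return nn.functional.mse_loss(self.adapt(student_bev),
                                      teacher_bev.detach())

kd = BEVFeatureKD(student_channels=64, teacher_channels=256)
s = torch.randn(2, 64, 128, 128)   # camera-radar student BEV features
t = torch.randn(2, 256, 128, 128)  # LiDAR-camera teacher BEV features
loss = kd(s, t)  # added to the student's detection loss during training
```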

Project Website: https://song-jingyu.github.io/CRKD/

Accepted to 2024 IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR)

This work was supported by a grant from Ford Motor Company via the Ford-UM Alliance under award N028603.

Classifying Surface Terrains via Machine Learning on Ground Penetrating Radar Images

This project proposes a novel use of Ground Penetrating Radar (GPR). GPR is typically used as a non-intrusive method to sense objects under the ground, such as pipes or water deposits, and is common in construction, geology, and archaeology applications. Recent interest in GPR for autonomous vehicle localization has spurred interest in using the sensor for robotic applications. To the best of our knowledge, we are the first group to propose using this sensor for surface terrain classification.
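
As a rough sketch of the classification setup, the example below runs a small CNN over single-channel GPR image patches. The architecture and terrain labels here are illustrative assumptions, not the networks evaluated in the paper.

```python
# Minimal sketch: classify surface terrain from single-channel GPR
# image patches with a small CNN. Architecture and class names are
# illustrative assumptions, not the paper's models.
import torch
import torch.nn as nn

TERRAINS = ["asphalt", "grass", "gravel", "sand"]  # hypothetical labels

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(TERRAINS)),
)

gpr_patch = torch.randn(1, 1, 64, 64)   # one GPR B-scan patch
pred = model(gpr_patch).argmax(dim=1)   # predicted terrain class
print(TERRAINS[pred.item()])
```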

This work is supported by National Science Foundation Grant #DGE-2241144 and a University of Michigan Space Institute Pathfinder Grant.

Paper link:

"Learning Surface Terrain Classifications from Ground Penetrating Radar" (presented at CVPR 2024)

Our custom Clearpath Husky robot with a Ground Penetrating Radar mounted on the back uses classification networks to determine the surface terrain it is driving over.

LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection

We propose LiRaFusion, a LiDAR-radar fusion method for 3D object detection that fills the performance gap left by existing LiDAR-radar detectors.

To improve feature extraction from these two modalities, we design an early fusion module for joint voxel feature encoding and a middle fusion module that adaptively fuses feature maps via a gated network. We perform extensive evaluation on nuScenes to demonstrate that LiRaFusion leverages the complementary information of LiDAR and radar effectively and achieves notable improvement over existing methods.
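
The gated middle fusion can be pictured with the following minimal sketch: a learned gate decides, per BEV location, how much to trust each modality. This is a simplified illustration; the gated network in LiRaFusion differs in structure.

```python
# Minimal sketch of adaptive gated fusion of LiDAR and radar BEV
# feature maps: a learned gate weights the two modalities per location.
# Simplified illustration, not the exact LiRaFusion module.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feat, radar_feat):
        # Both inputs: (B, C, H, W) BEV feature maps.
        g = self.gate(torch.cat([lidar_feat, radar_feat], dim=1))
        return g * lidar_feat + (1.0 - g) * radar_feat

fuse = GatedFusion(channels=128)
fused = fuse(torch.randn(2, 128, 180, 180),
             torch.randn(2, 128, 180, 180))
```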

Project Website: https://github.com/Song-Jingyu/LiRaFusion

Accepted to 2024 IEEE International Conference on Robotics and Automation (ICRA)

This work was supported by a grant from Ford Motor Company via the Ford-UM Alliance under award N028603.

Learning Which Side to Scan: Multi-View Informed Active Perception with Side Scan Sonar for Autonomous Underwater Vehicles

Autonomous underwater vehicles often perform surveys that capture multiple views of targets in order to provide more information for human operators or automatic target recognition algorithms. In this work, we address the problem of choosing the most informative views that minimize survey time while maximizing classifier accuracy. We introduce a novel active perception framework leveraging Graph Neural Networks (GNNs) for multi-view adaptive surveying and reacquisition using side scan sonar imagery.
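
A minimal sketch of the idea: candidate views are nodes in a graph, a message-passing layer aggregates evidence across views, and a per-node score ranks which view to acquire next. This is purely illustrative; the graph construction and network in the paper differ in detail.

```python
# Minimal sketch of GNN-based view selection: one round of message
# passing over candidate views, then a score per view. Illustrative
# only, not the paper's architecture.
import torch
import torch.nn as nn

class ViewScorer(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.msg = nn.Linear(feat_dim, feat_dim)
        self.upd = nn.Linear(2 * feat_dim, feat_dim)
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x, adj):
        # x: (N, D) per-view features; adj: (N, N) 0/1 adjacency matrix.
        m = adj @ torch.relu(self.msg(x))              # aggregate neighbors
        h = torch.relu(self.upd(torch.cat([x, m], dim=1)))
        return self.score(h).squeeze(-1)               # one score per view

views = torch.randn(6, 32)                 # features of 6 candidate views
adj = (torch.rand(6, 6) > 0.5).float()     # hypothetical view graph
next_view = ViewScorer(32)(views, adj).argmax()  # most informative view
```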

Our results demonstrate that our approach surpasses the state of the art in classification accuracy and survey efficiency.

Paper: 

Learning Which Side to Scan: Multi-View Informed Active Perception with Side Scan Sonar for Autonomous Underwater Vehicles

Accepted to 2024 IEEE International Conference on Robotics and Automation (ICRA)

This work was supported by the Naval Research Enterprise Intern Program (NREIP) and the Office of Naval Research through the NRL Base Program.

Perceptual Uncertainty for Marine Autonomy (PUMA)

An overview of the robot operating in a wave basin with ground truth 3D scan shown in black. The map constructed by our method is shown as a point cloud in color.

This project focuses on enabling uncertainty-aware acoustic localization and mapping; our contributions are detailed on the project website.

Project Website: https://umfieldrobotics.github.io/PUMA.github.io/ 

Accepted to OCEANS 2023 - Limerick; received Outstanding Recognition in the Student Poster Competition

This work relates to Department of Navy award N00014-21-1-2149 issued by the Office of Naval Research.

STARS: Zero-shot Sim-to-Real Transfer for Segmentation of Shipwrecks in Sonar Imagery

We address the problem of sim-to-real transfer for object segmentation when there is no access to real examples of an object of interest during training, i.e., zero-shot sim-to-real transfer for segmentation. We focus on the application of shipwreck segmentation in side scan sonar imagery.

Our novel segmentation network, STARS, addresses this challenge by fusing a predicted deformation field and anomaly volume, allowing it to generalize better to real sonar images and achieve more effective zero-shot sim-to-real transfer for image segmentation.
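
A minimal sketch of that fusion step, under assumed shapes: the predicted deformation field warps image features via grid sampling, and the result is combined with the anomaly volume to produce segmentation logits. The fusion head here is a placeholder, not the exact STARS architecture.

```python
# Minimal sketch: warp features with a predicted deformation field
# (grid_sample), then fuse with an anomaly volume into segmentation
# logits. Shapes and the fusion head are illustrative assumptions.
import torch
import torch.nn.functional as F

B, C, H, W = 1, 16, 64, 64
feats = torch.randn(B, C, H, W)            # image features
offsets = torch.randn(B, 2, H, W) * 0.05   # predicted deformation field
anomaly = torch.randn(B, 1, H, W)          # anomaly volume (per-pixel)

# Sampling grid: identity grid plus predicted offsets.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).unsqueeze(0)   # (B, H, W, 2)
grid = identity + offsets.permute(0, 2, 3, 1)

warped = F.grid_sample(feats, grid, align_corners=True)

# Fuse warped features with the anomaly volume into segmentation logits.
head = torch.nn.Conv2d(C + 1, 1, kernel_size=1)
logits = head(torch.cat([warped, anomaly], dim=1))      # (B, 1, H, W)
```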

Paper: 

STARS: Zero-shot Sim-to-Real Transfer for Segmentation of Shipwrecks in Sonar Imagery

Accepted to 2023 British Machine Vision Conference (BMVC), oral presentation

This work is supported by a University of Michigan Robotics Department Fellowship and by the NOAA Ocean Exploration program under Award #NA21OAR0110196.

Machine Learning for Automated Detection of Shipwreck Sites from Large Area Robotic Surveys

This project develops new machine learning methods to process data collected by underwater robots, automatically detecting submerged objects and performing targeted surveys of detected sites. As part of this project, we conducted field work in Lake Huron, MI, and produced the AI4Shipwrecks dataset. This work is supported by NOAA Ocean Exploration under Award #NA21OAR0110196.

See more:

STARS: Zero-shot Sim-to-Real Transfer for Segmentation of Shipwrecks in Sonar Imagery

"Building Curious Machines", Michigan Engineering Research News 

Side scan image of the shipwreck Monrovia collected with the Michigan Technological University Great Lakes Research Center IVER-3 AUV. Image courtesy of the Machine Learning for Automated Detection of Shipwreck Sites from Large Area Robotic Surveys project.

WaterNeRF: Neural Radiance Fields for Underwater Scenes

In this paper, we leverage state-of-the-art neural radiance fields (NeRFs) to enable physics-informed novel view synthesis with image restoration and dense depth estimation for underwater scenes. Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation and uses these parameters for novel view synthesis. After learning the scene structure and radiance field, we can produce novel views of degraded as well as corrected underwater images.
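
For reference, below is a sketch of one common physics-based underwater image formation model: an attenuated direct signal plus range-dependent backscatter. The parameter values are illustrative, and WaterNeRF's exact parameterization may differ.

```python
# Minimal sketch of a common underwater image formation model:
# attenuated direct signal plus range-dependent backscatter.
# Illustrative values only; not WaterNeRF's exact parameterization.
import numpy as np

def underwater_image(J, z, beta_d, beta_b, B_inf):
    """J: clear scene radiance (H, W, 3); z: range map in meters (H, W);
    beta_d, beta_b, B_inf: per-channel attenuation/backscatter params."""
    z = z[..., None]                     # broadcast over color channels
    direct = J * np.exp(-beta_d * z)     # attenuated scene radiance
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))
    return direct + backscatter

J = np.random.rand(480, 640, 3)               # restored ("corrected") image
z = np.random.uniform(1.0, 5.0, (480, 640))   # camera-to-scene range
beta_d = np.array([0.40, 0.15, 0.10])         # red attenuates fastest
beta_b = np.array([0.30, 0.20, 0.15])
B_inf = np.array([0.05, 0.25, 0.35])          # bluish veiling light
I = underwater_image(J, z, beta_d, beta_b, B_inf)  # degraded image
```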

Paper: 

WaterNeRF: Neural Radiance Fields for Underwater Scenes

Accepted to OCEANS 2023 - MTS/IEEE U.S. Gulf Coast

AI4Shipwrecks Dataset

For this project, we collected side scan sonar data of various shipwrecks in the Thunder Bay National Marine Sanctuary in Lake Huron, MI. Thunder Bay is a unique site due to its abundance of known and suspected shipwrecks. Field work was conducted during the summers of 2022 and 2023 in collaboration with Michigan Technological University and Louisiana State University. This work is supported by NOAA Ocean Exploration under Award #NA21OAR0110196.

See more:

Project Website

Paper

"Expedition Overview (Year 1)", NOAA Ocean Explorer 

"Scientists travel to the Thunder Bay National Marine Sanctuary", WBKBTV  

"Research team uses robots to search for shipwrecks", The Alpena News 

Presented at AUV Symposium 2024 in Boston, MA. Accepted to the International Journal of Robotics Research. 

Photo credit: Darby Hinkley of The Alpena News. From left to right: Onur Bagoren (UM), Anja Sheppard (UM), Mason Pesson (LSU), Corina Barbalata (LSU), William Ard (LSU), Jamey Anderson (MTU), and Katie Skinner (UM).

Past Research

Unsupervised Learning for Processing Underwater Imagery

Deep learning has demonstrated great success in modeling complex nonlinear systems but requires a large amount of training data, which is difficult to compile in subsea environments. Our prior work leverages physics-based models of underwater image formation to develop unsupervised learning approaches to advance perceptual capabilities of underwater robots. In particular, we have focused on unsupervised learning for color correction and depth estimation of monocular and stereo underwater imagery.
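
A minimal sketch of this unsupervised setup, using the same assumed attenuation-plus-backscatter model sketched above for WaterNeRF: a network's predicted corrected image and depth are re-rendered through the physics model, and the training loss compares the re-rendering to the raw image, so no ground-truth labels are required. Everything below is illustrative.

```python
# Minimal sketch of an unsupervised loss: re-render predicted corrected
# image and depth through a physics model and compare to the raw image.
# Illustrative assumptions throughout; not our published networks.
import torch

def formation_model(J, z, beta_d, beta_b, B_inf):
    # Attenuated direct signal plus range-dependent backscatter.
    z = z.unsqueeze(1)                                  # (B, 1, H, W)
    return J * torch.exp(-beta_d * z) + B_inf * (1 - torch.exp(-beta_b * z))

raw = torch.rand(4, 3, 240, 320)                        # raw underwater images
J_pred = torch.rand(4, 3, 240, 320, requires_grad=True) # stand-in net output
z_pred = torch.rand(4, 240, 320, requires_grad=True)    # stand-in net output
beta_d = torch.tensor([0.4, 0.15, 0.1]).view(1, 3, 1, 1)
beta_b = torch.tensor([0.3, 0.2, 0.15]).view(1, 3, 1, 1)
B_inf = torch.tensor([0.05, 0.25, 0.35]).view(1, 3, 1, 1)

loss = torch.nn.functional.mse_loss(
    formation_model(J_pred, z_pred, beta_d, beta_b, B_inf), raw)
loss.backward()   # gradients drive the corrected image and depth
```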

Video: UWStereoNet.mp4

Perception for Autonomous Driving

We have also collaborated with the Ford Center for Autonomous Vehicles at the University of Michigan to improve perception for autonomous vehicles in urban environments. The videos below show results from our work on transferring sensor-based effects from real data to simulated data, improving object detection performance when training on simulated data.

Video: CameraEffects_ECCVW2018.mp4

Light Field Imaging in Underwater Environments

Light field cameras have a microlens array between the camera's main lens and image sensor, enabling recovery of a depth map and high resolution image from a single optical sensor. Our research has focused on leveraging light field cameras to improve underwater perception, with tasks including real-time 3D reconstruction and underwater image dehazing.
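
As a rough illustration of why a microlens array helps, the sketch below performs classical shift-and-sum refocusing on sub-aperture images; scanning the shift parameter and measuring per-pixel sharpness is one simple route to a depth map. It is illustrative only, not our underwater light field pipeline.

```python
# Minimal sketch of shift-and-sum refocusing: shift each sub-aperture
# view in proportion to its lens offset, then average. Sweeping alpha
# builds a focal stack for depth estimation. Illustrative only.
import numpy as np

def refocus(lf, alpha):
    """lf: (U, V, H, W) grayscale sub-aperture images; alpha: shift slope."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)   # 5x5 grid of sub-aperture views
focused = refocus(lf, alpha=1.0)    # one slice of the focal stack
```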

Underwater Bundle Adjustment

Our work on underwater bundle adjustment integrates color correction into the structure recovery procedure for multi-view stereo reconstruction in underwater environments.
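
A heavily simplified sketch of that coupling: a joint residual stacks the usual reprojection error with a color-consistency term from an attenuation model, so scene structure and color parameters are optimized together. The single-camera setup and all values are assumptions for illustration, not the published formulation.

```python
# Minimal sketch of a joint residual coupling geometry and color:
# reprojection error plus a color term from a range-based attenuation
# model, minimized together over all observations. Illustrative only.
import numpy as np

def residuals(point_xyz, point_color, obs_uv, obs_rgb, K, beta):
    # Reprojection residual (camera at the origin for simplicity).
    proj = K @ point_xyz
    r_geo = proj[:2] / proj[2] - obs_uv
    # Color residual: the observed color should match the point color
    # attenuated over the camera-to-point range.
    r_col = point_color * np.exp(-beta * np.linalg.norm(point_xyz)) - obs_rgb
    return np.concatenate([r_geo, r_col])   # stacked joint residual

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
r = residuals(np.array([0.5, 0.2, 3.0]),     # 3D point
              np.array([0.8, 0.5, 0.4]),     # true point color
              np.array([400.0, 270.0]),      # observed pixel
              np.array([0.3, 0.4, 0.35]),    # observed color
              K, beta=np.array([0.4, 0.15, 0.1]))
```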

Robotic Survey of Sunken Pirate City

Our team conducted a robotic survey of the submerged city of Port Royal, Jamaica to create a 3D reconstruction of the marine archaeological site.