Sandilya Sai Garimella and Peter Stratton
Adv. Artif. Intell. Mach. Learn., 3 (3):1248-1258
Sandilya Sai Garimella : University of Michigan, Ann Arbor
Peter Stratton : University of Michigan, Ann Arbor
Article History: Received on: 15-May-23, Accepted on: 20-Jul-23, Published on: 27-Jul-23
Corresponding Author: Sandilya Sai Garimella
Citation: Peter Stratton, Sandilya Sai Garimella, Ashwin Saxena, Nibarkavi Amutha, Emaad Geram (2023). Volume-DROID: A Real-Time Implementation of Volumetric Mapping with DROID-SLAM. Adv. Artif. Intell. Mach. Learn., 3(3):1248-1258
This paper presents Volume-DROID, a novel approach to Simultaneous Localization and Mapping (SLAM) that integrates volumetric mapping with Differentiable Recurrent Optimization-Inspired Design (DROID). Volume-DROID takes camera images (monocular or stereo) or video frames as input and combines DROID-SLAM, point cloud registration, an off-the-shelf semantic segmentation network, and Convolutional Bayesian Kernel Inference (ConvBKI) to generate a 3D semantic map of the environment and provide accurate localization for the robot. The key innovation of our method is the real-time fusion of DROID-SLAM and ConvBKI, achieved by generating point clouds from RGB-Depth frames and the optimized camera poses. This integration is engineered for efficient, timely processing: it minimizes lag and ensures effective performance of the system, enabling functional real-time online semantic mapping from only camera images or stereo video input. Our paper offers an open-source Python implementation of the algorithm, available at https://github.com/peterstratton/Volume-DROID.
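To make the fusion step concrete, below is a minimal sketch of generating a world-frame point cloud from a depth frame and an optimized camera pose. The function name, the pinhole intrinsics `K`, and the camera-to-world pose `T_wc` are illustrative assumptions, not the authors' actual interface; in the real pipeline the pose would come from DROID-SLAM's bundle adjustment.

```python
import numpy as np

def backproject_rgbd(depth, K, T_wc):
    """Back-project a depth image into a world-frame point cloud (sketch).

    depth: (H, W) depth map in meters (0 marks invalid pixels)
    K:     (3, 3) pinhole camera intrinsics
    T_wc:  (4, 4) camera-to-world pose, e.g. an optimized SLAM pose
    Returns an (N, 3) array of world-frame points for the valid pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    # Pixel -> camera frame via the inverse intrinsics (pinhole model).
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=-1)  # (N, 3) camera-frame points
    # Camera frame -> world frame with the optimized pose (homogeneous coords).
    pts_h = np.concatenate([pts_cam, np.ones((len(z), 1))], axis=-1)
    return (T_wc @ pts_h.T).T[:, :3]
```

In the full system, each such point cloud would then carry per-point semantic labels from the segmentation network before being fused into the ConvBKI map.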