SuperEx: Enhancing Indoor Mapping and Exploration using Non-Line-of-Sight Perception

Delhi Technological University
Stony Brook University

SuperEx is a framework that captures multiple light bounces without additional hardware, allowing robots to see beyond their line of sight and enhancing indoor mapping and exploration.

Abstract

Efficient exploration and mapping of unknown indoor environments is a fundamental challenge, with high stakes in time-critical settings. In current systems, robot perception is confined to line of sight; occluded regions remain unknown until they are physically traversed, leading to inefficient exploration when layouts deviate from prior assumptions.

In this work, we bring non-line-of-sight (NLOS) sensing to robotic exploration. We leverage single-photon LiDARs, which capture time-of-flight histograms that encode the presence of hidden objects -- allowing robots to look around blind corners. Recent single-photon LiDARs have become practical and portable, enabling deployment beyond controlled lab settings. Prior NLOS works target 3D reconstruction in static, lab-based scenarios, and initial efforts toward NLOS-aided navigation consider simplified geometries.

We introduce SuperEx, a framework that integrates NLOS sensing directly into the mapping–exploration loop. SuperEx augments global map prediction with beyond-line-of-sight cues by (i) carving empty NLOS regions from timing histograms and (ii) reconstructing occupied structure via a two-step physics-based and data-driven approach that leverages structural regularities. Evaluations on complex simulated maps and the real-world KTH Floorplan dataset show a 12% gain in mapping accuracy under 30% coverage and improved exploration efficiency compared to line-of-sight baselines, opening a path to reliable mapping beyond direct visibility.

Principle of NLOS Sensing

NLOS Principle Diagram

Single-photon LiDAR comprises a pulsed laser, a single-photon detector, and timing circuitry. (a) When the laser pulse strikes a visible wall, it diffuses, and some of the scattered rays hit the hidden object. Part of this light is scattered back and captured by the sensor as time-of-flight histograms (b), which record the number of photons in each time bin. These measurements are then converted into back-projection maps (c), which represent the likelihood of an object being present at a given distance from the wall.
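To make the back-projection step (c) concrete, the following minimal 2D sketch (not SuperEx's actual implementation) smears each histogram bin onto a circle of the corresponding radius around the sampled wall point. It assumes a confocal setup in which the laser and detector address the same relay-wall point, and every function and variable name is illustrative.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def backproject(histograms, wall_points, los_dists, bin_width,
                grid_shape=(200, 200), cell_size=0.05):
    """Accumulate a 2D likelihood map of hidden occupancy.

    histograms : (P, B) photon counts per wall point and time bin
    wall_points: (P, 2) world coordinates of sampled relay-wall points (m)
    los_dists  : (P,)  sensor-to-wall distances, used to remove the direct path (m)
    bin_width  : time-bin width of the SPAD histogram (s)
    """
    P, B = histograms.shape
    likelihood = np.zeros(grid_shape)

    # World coordinates of every grid-cell centre.
    ys, xs = np.indices(grid_shape)
    cells = np.stack([xs, ys], axis=-1) * cell_size  # (H, W, 2)

    for p in range(P):
        # Distance from this wall point to every cell.
        d = np.linalg.norm(cells - wall_points[p], axis=-1)
        for k in range(B):
            counts = histograms[p, k]
            if counts == 0:
                continue
            # Third-bounce path: sensor -> wall -> object -> wall -> sensor,
            # so the hidden-object range is half the path beyond the direct leg.
            r = (C * k * bin_width - 2.0 * los_dists[p]) / 2.0
            if r <= 0:
                continue
            # Smear the counts over a one-cell-thick ring of radius r.
            likelihood[np.abs(d - r) < cell_size] += counts

    return likelihood
```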

Pipeline

SuperEx Pipeline Flowchart

The histograms captured by the single-photon LiDAR enable 1) carving out NLOS regions that are empty and 2) back-projecting occupied NLOS regions, which are then filtered with a Pix2Pix network. Both the carved occupancy and the filtered back-projection are fed into the LaMa network for improved global map prediction, and subsequently used for enhanced frontier exploration.
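The empty-space carving in step 1) can be sketched as follows: if the earliest above-noise return at a relay-wall point arrives in bin k, no hidden surface can lie closer than the corresponding radius, so all occluded cells nearer than that radius can be marked free. The routine below is an illustrative approximation under that assumption; its names and noise handling are not taken from the paper.

```python
import numpy as np

C = 3e8  # speed of light (m/s)
FREE, UNKNOWN = 0, -1

def carve_empty_nlos(occupancy, histogram, wall_point, los_dist,
                     bin_width, cell_size=0.05, noise_floor=2):
    """Mark occluded cells as free up to the first significant histogram return.

    occupancy : 2D int grid, UNKNOWN in occluded regions (modified in place)
    histogram : (B,) photon counts for one relay-wall sample
    wall_point: (2,) wall-point position in grid coordinates (cells)
    los_dist  : sensor-to-wall distance (m), subtracted from the total path
    """
    above_noise = np.flatnonzero(histogram > noise_floor)
    if above_noise.size == 0:
        return occupancy  # no detectable return: carve nothing
    k_first = above_noise[0]

    # The earliest third-bounce arrival bounds the closest hidden surface.
    r_free = (C * k_first * bin_width - 2.0 * los_dist) / 2.0  # metres
    if r_free <= 0:
        return occupancy

    ys, xs = np.indices(occupancy.shape)
    dist = np.hypot(xs - wall_point[0], ys - wall_point[1]) * cell_size
    occupancy[(dist < r_free) & (occupancy == UNKNOWN)] = FREE
    return occupancy
```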


SuperEx provides a complete pipeline for simulating and integrating non-line-of-sight (NLOS) perception into robotic mapping and exploration. The framework is divided into three modules:

  • Simulation: We develop a physics-based simulator for SPAD-based LiDARs that models multi-bounce photon propagation. This generates transient histograms and corresponding backprojection images that capture indirect reflections in complex environments (a toy sketch of this idea follows the list).
  • Map Reconstruction: We use a sequential pipeline comprising an image-to-image translation model, Pix2Pix, and an image inpainting model, LaMa, to reconstruct NLOS occupancy maps from backprojection images. The reconstructed maps are fused with global map predictions to extend coverage into occluded regions.
  • Mapping and Exploration: We evaluate NLOS-informed mapping within state-of-the-art exploration frameworks. In particular, we adopt the indoor exploration scenarios and configurations introduced in the MapEx benchmark, enabling a direct comparison and demonstrating the benefits of incorporating NLOS perception.
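As a rough illustration of the first module, the toy Monte Carlo sketch below traces sensor → wall → hidden surface → wall → sensor paths in 2D and bins their total travel times into a photon-count histogram. The geometry, the inverse-square falloff per diffuse bounce, and all names are simplifying assumptions, not the simulator used in SuperEx.

```python
import numpy as np

C = 3e8             # speed of light (m/s)
BIN_WIDTH = 50e-12  # 50 ps time bins
NUM_BINS = 2048

def simulate_histogram(sensor, wall_point, hidden_points, photons=100_000, rng=None):
    """Return a (NUM_BINS,) third-bounce photon-count histogram.

    sensor       : (2,) sensor position (m)
    wall_point   : (2,) illuminated/observed relay-wall point (m)
    hidden_points: (N, 2) sample points on hidden surfaces (m)
    """
    rng = np.random.default_rng() if rng is None else rng
    sensor = np.asarray(sensor, dtype=float)
    wall_point = np.asarray(wall_point, dtype=float)
    hidden_points = np.asarray(hidden_points, dtype=float)

    d_los = np.linalg.norm(wall_point - sensor)                    # direct leg
    d_hidden = np.linalg.norm(hidden_points - wall_point, axis=1)  # wall -> object

    # Two diffuse legs (wall->object and object->wall), each with ~1/r^2 falloff.
    weights = 1.0 / np.maximum(d_hidden, 1e-3) ** 4
    weights /= weights.sum()

    # Sample which hidden point each detected photon scattered off, then bin
    # the full three-bounce path length by arrival time.
    idx = rng.choice(len(hidden_points), size=photons, p=weights)
    path_len = 2.0 * d_los + 2.0 * d_hidden[idx]
    bins = np.clip((path_len / C / BIN_WIDTH).astype(int), 0, NUM_BINS - 1)

    hist = np.zeros(NUM_BINS, dtype=np.int64)
    np.add.at(hist, bins, 1)
    return hist
```

A histogram produced this way can be fed directly to the back-projection and carving sketches above, which connects the simulation module to the map-reconstruction module.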

Results

This simulation provides an overview of how the map is expanded through carving and actively updated as the robot explores new areas and navigates efficiently.

Technical Video

BibTeX

@article{to be added}