MODE: Multi-view Omnidirectional Depth Estimation with 360° Cameras

Abstract

In this paper, we propose a two-stage omnidirectional depth estimation framework using multi-view 360° cameras. The framework first estimates depth maps from different camera pairs via omnidirectional stereo matching and then fuses the depth maps to achieve robustness against mud spots and water drops on camera lenses, as well as glare caused by intense light. We adopt spherical feature learning to address the distortion of panoramas. In addition, we present a synthetic 360° dataset consisting of 12K road-scene panoramas and 3K ground-truth depth maps to train and evaluate 360° depth estimation algorithms. Our dataset takes soiled camera lenses and glare into consideration, making it more consistent with real-world environments. Experimental results show that the proposed framework produces reliable results in both synthetic and real-world environments and achieves state-of-the-art performance on different datasets.
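The abstract describes a two-stage pipeline: omnidirectional stereo matching on each camera pair, followed by fusion of the per-pair depth maps. The sketch below illustrates that structure in PyTorch; the module names (`stereo_net`, `fusion_net`) and their interfaces are assumptions for illustration only, not the paper's actual implementation.

```python
# A minimal sketch of the two-stage pipeline described in the abstract.
# The stereo and fusion sub-networks are hypothetical placeholders;
# the actual MODE architecture may differ.
import itertools
import torch
import torch.nn as nn

class TwoStageDepthEstimator(nn.Module):
    def __init__(self, stereo_net: nn.Module, fusion_net: nn.Module):
        super().__init__()
        self.stereo_net = stereo_net  # Stage 1: omnidirectional stereo matching
        self.fusion_net = fusion_net  # Stage 2: multi-view depth fusion

    def forward(self, panoramas: list[torch.Tensor]) -> torch.Tensor:
        # Stage 1: estimate a depth map for every camera pair.
        pair_depths = [
            self.stereo_net(left, right)
            for left, right in itertools.combinations(panoramas, 2)
        ]
        # Stage 2: fuse the per-pair depth maps; views degraded by
        # soiled lenses or glare can be down-weighted by the fusion net.
        return self.fusion_net(torch.stack(pair_depths, dim=1))
```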

Publication
In European Conference on Computer Vision
Ming Li (李明)
Successive Master's–Ph.D. Student (2017–2024)


Xueqian Jin (靳学乾)
Master's Student (2020–2023)
