PanDA: Towards Panoramic Depth Anything with Unlabeled Panoramas and Möbius Spatial Augmentation

CVPR 2025

Zidong Cao1     Jinjing Zhu1    Weiming Zhang1    Hao Ai2    Haotian Bai1    Hengshuang Zhao3    Lin Wang4†   
1AI Thrust, HKUST(GZ)      2University of Birmingham      3HKU      4NTU  
† Corresponding author

PanDA is a version of Depth Anything v2 fine-tuned for 360° images and robust to various spherical transformations. Fine-tuning uses about 20k labeled and 100k unlabeled 360° images. The main contributions are as follows:

  • The first systematic analysis of the Depth Anything model when applied to 360° images.
  • A semi-supervised learning (SSL) pipeline with the proposed consistency loss and spatial augmentations (a minimal sketch follows this list).
  • Accurate 360° depth estimation with fine-grained details in both indoor and outdoor scenes, along with SOTA results on two real-world datasets.
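To make the SSL pipeline concrete, below is a minimal PyTorch sketch of one teacher-student step on an unlabeled panorama. This is our illustration, not the released code: `teacher`, `student`, `transform`, and the plain L1 losses are hypothetical stand-ins for the paper's actual models, augmentation, and loss.

```python
import torch
import torch.nn.functional as F

def ssl_step(teacher, student, pano, transform, optimizer):
    """One SSL step on a batch of unlabeled panoramas `pano` (B, 3, H, W)."""
    with torch.no_grad():
        pseudo_depth = teacher(pano)        # teacher pseudo-label (B, 1, H, W)

    pred = student(pano)                    # student prediction, original view
    sup_loss = F.l1_loss(pred, pseudo_depth)

    # Consistency regularization: the prediction on the transformed panorama
    # should match the identically transformed prediction on the original.
    pred_aug = student(transform(pano))
    cons_loss = F.l1_loss(pred_aug, transform(pred).detach())

    loss = sup_loss + cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point is the consistency term: the student's prediction on a spatially transformed panorama is pulled toward the identically transformed prediction on the original view.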

360° Depth Estimation in Open-World Scenarios

Comparison under Spherical Transformations

Vertical rotations. 1st row: input images. 2nd row: Depth Anything v2. 3rd row: PanDA.

Spherical zooms. 1st row: input images. 2nd row: Depth Anything v2. 3rd row: PanDA.
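The vertical rotation shown above can be implemented by rotating the viewing sphere and bilinearly resampling the equirectangular (ERP) image. The PyTorch sketch below is our own minimal illustration, not the authors' code; the function name is ours, and wraparound at the longitude seam is ignored for brevity.

```python
import math
import torch
import torch.nn.functional as F

def rotate_vertical(erp, angle_rad):
    """erp: (B, C, H, W) equirectangular image; angle_rad: tilt in radians."""
    B, C, H, W = erp.shape
    # Target pixel grid -> spherical angles (longitude, latitude).
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    lon = (u.float() / W - 0.5) * 2 * math.pi
    lat = (0.5 - v.float() / H) * math.pi
    # Spherical angles -> unit viewing directions.
    x = torch.cos(lat) * torch.sin(lon)
    y = torch.sin(lat)
    z = torch.cos(lat) * torch.cos(lon)
    # Rotate the sphere about the horizontal x-axis (camera tilt).
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    y_r = c * y - s * z
    z_r = s * y + c * z
    # Rotated directions -> source angles -> normalized sampling grid.
    lon_s = torch.atan2(x, z_r)
    lat_s = torch.asin(y_r.clamp(-1.0, 1.0))
    grid = torch.stack([lon_s / math.pi, -2.0 * lat_s / math.pi], dim=-1)
    return F.grid_sample(erp, grid.expand(B, H, W, 2),
                         mode="bilinear", align_corners=False)
```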

Abstract

Recently, Depth Anything Models (DAMs), a type of depth foundation model, have demonstrated impressive zero-shot capabilities across diverse perspective images. Despite their success, it remains an open question how DAMs perform on panoramic images, which enjoy a large field-of-view (180°×360°) but suffer from spherical distortions. To address this gap, we conduct an empirical analysis of DAMs on panoramic images and identify their limitations, undertaking comprehensive experiments that assess DAMs along three key factors: panoramic representations, 360° camera positions for capturing scenarios, and spherical spatial transformations. This reveals some key findings, e.g., that DAMs are sensitive to spatial transformations. We then propose a semi-supervised learning (SSL) framework to learn a panoramic DAM, dubbed PanDA. Under the umbrella of SSL, PanDA first learns a teacher model by fine-tuning a DAM through joint training on synthetic indoor and outdoor panoramic datasets. A student model is then trained on large-scale unlabeled data, leveraging pseudo-labels generated by the teacher model. To enhance PanDA's generalization capability, we propose a Möbius transformation-based spatial augmentation (MTSA) that imposes consistency regularization between the depth maps predicted from the original panoramas and their spatially transformed counterparts. This subtly improves the student model's robustness to various spatial transformations, even under severe distortions. Extensive experiments demonstrate that PanDA exhibits remarkable zero-shot capability across diverse scenes and outperforms data-specific panoramic depth estimation methods on two popular real-world benchmarks.
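Möbius transformations act on the sphere through stereographic projection: lift each pixel's viewing direction to the complex plane, apply f(z) = (az + b)/(cz + d), and project back; vertical rotations and spherical zooms are special cases. The sketch below is our reading of this construction as a warp usable for MTSA-style augmentation, not the paper's released implementation; the function name and coefficient handling are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def mobius_augment(erp, a, b, c, d):
    """Warp an ERP image (B, C, H, W) by the Möbius map f(z)=(az+b)/(cz+d)."""
    B, C, H, W = erp.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    lon = (u.float() / W - 0.5) * 2 * math.pi
    lat = (0.5 - v.float() / H) * math.pi
    # Pixel directions on the unit sphere (Z up).
    X = torch.cos(lat) * torch.cos(lon)
    Y = torch.cos(lat) * torch.sin(lon)
    Z = torch.sin(lat)
    # Stereographic projection from the north pole onto the complex plane.
    zeta = (X + 1j * Y) / (1.0 - Z + 1e-8)
    # Pull target pixels back through the inverse map f^{-1}(w)=(dw-b)/(-cw+a).
    w = (d * zeta - b) / (-c * zeta + a)
    # Inverse stereographic projection back onto the sphere.
    r2 = (w * w.conj()).real
    Xs = 2.0 * w.real / (1.0 + r2)
    Ys = 2.0 * w.imag / (1.0 + r2)
    Zs = (r2 - 1.0) / (1.0 + r2)
    # Sphere -> ERP sampling grid in [-1, 1] for grid_sample.
    lon_s = torch.atan2(Ys, Xs)
    lat_s = torch.asin(Zs.clamp(-1.0, 1.0))
    grid = torch.stack([lon_s / math.pi, -2.0 * lat_s / math.pi], dim=-1)
    return F.grid_sample(erp, grid.expand(B, H, W, 2),
                         mode="bilinear", align_corners=False)
```

For instance, `mobius_augment(erp, 1.5, 0.0, 0.0, 1.0)` gives a spherical zoom toward the projection pole, while a = d = 1 and b = c = 0 is the identity.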

[Figure: overview of the PanDA pipeline]

Comparison with SOTA monocular 360° depth methods

The quantitative comparison with SOTA monocular 360° depth estimation methods is shown below.

[Table: quantitative comparison with SOTA monocular 360° depth estimation methods]

PanDA with different backbones achieves SOTA performance on two real-world datasets.

[Table: PanDA results with different backbones on two real-world datasets]