Depth Anything V2

Lihe Yang1    Bingyi Kang2†    Zilong Huang2
Zhen Zhao    Xiaogang Xu    Jiashi Feng2    Hengshuang Zhao1*
1HKU    2TikTok
† project lead    * corresponding author
NeurIPS 2024

Depth Anything V2 is trained on 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:

  • more fine-grained details than Depth Anything V1
  • more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
  • more efficient (10x faster) and more lightweight than SD-based models
  • impressive fine-tuned performance with our pre-trained models
We also release six metric depth models of three scales for indoor and outdoor scenes, respectively.
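The pre-trained relative-depth models can be run with only a few lines of code. Below is a minimal inference sketch following the pattern in the official repository; the module path, constructor arguments, and checkpoint filename are assumptions to verify against the released code.

```python
import cv2
import torch

# Assumed module path from the official repository (repo must be cloned/installed).
from depth_anything_v2.dpt import DepthAnythingV2

# Assumed ViT-L configuration; smaller (vits, vitb) and larger (vitg) variants are also released.
model = DepthAnythingV2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024])
model.load_state_dict(torch.load('checkpoints/depth_anything_v2_vitl.pth', map_location='cpu'))
model = model.eval()

raw_img = cv2.imread('example.jpg')   # BGR image of arbitrary resolution
depth = model.infer_image(raw_img)    # H x W relative depth map (larger value = closer)
```

The metric depth models follow the same interface but are fine-tuned to output depth in meters for indoor or outdoor scenes.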

Quick Preview



Comparison with Depth Anything V1 on Fine-grained Details

Comparison with Depth Anything V1 on Robustness

Comparison with Marigold and Geowizard

Depth Visualization on Videos

Note: Depth Anything V2 is an image-based depth estimation method; we use videos only to better demonstrate its advantages.


Abstract

This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with sparse depth annotations to facilitate future research.

Framework

We first train the largest teacher model (based on DINOv2-Giant) purely on synthetic images. This teacher then produces high-quality pseudo labels for large-scale unlabeled real images. Finally, student models are trained solely on these pseudo-labeled real images.

[Figure: training pipeline of Depth Anything V2]
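The three-stage recipe can be summarized in pseudocode. Everything below (helper functions, data placeholders) is hypothetical scaffolding to illustrate the flow, not the released training code:

```python
# Hypothetical sketch of the V2 training recipe; all helpers are placeholders.

def train(model, images, labels):
    """Supervised training stub (the paper uses scale-and-shift-invariant losses)."""
    return model  # placeholder: real code would optimize `model` on (images, labels)

def predict(model, images):
    """Run a trained model to produce (pseudo) depth labels for a list of images."""
    return [None for _ in images]  # placeholder predictions

synthetic_images, synthetic_labels = [], []   # 595K synthetic images with exact depth
unlabeled_real_images = []                    # 62M+ unlabeled real images

# 1) Train the largest teacher (DINOv2-Giant backbone) purely on synthetic data.
teacher = train(model="dinov2_giant_dpt", images=synthetic_images, labels=synthetic_labels)

# 2) Use the teacher to pseudo-label the large-scale real images.
pseudo_labels = predict(teacher, unlabeled_real_images)

# 3) Train each student solely on the pseudo-labeled real images.
students = {
    encoder: train(model=encoder, images=unlabeled_real_images, labels=pseudo_labels)
    for encoder in ["vits", "vitb", "vitl", "vitg"]
}
```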

Data Coverage

We use 595K synthetic images to train the initial teacher model and 62M+ pseudo-labeled real images to train the final student models.

[Figure: data coverage of the synthetic and pseudo-labeled real training sets]

Citation

@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}