haotongl committed
Commit f465978 · verified · 1 Parent(s): 583c480

Upload folder using huggingface_hub
Files changed (3)
  1. README.md +141 -3
  2. config.json +44 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -1,3 +1,141 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - depth-estimation
+ - computer-vision
+ - monocular-depth
+ - multi-view-geometry
+ - pose-estimation
+ library_name: depth-anything-3
+ pipeline_tag: depth-estimation
+ ---
+
+ # Depth Anything 3: DA3MONO-LARGE
+
+ <div align="center">
+
+ [![Project Page](https://img.shields.io/badge/Project_Page-Depth_Anything_3-green)](https://depth-anything-3.github.io)
+ [![Paper](https://img.shields.io/badge/arXiv-Depth_Anything_3-red)](https://arxiv.org/abs/)
+ [![Demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue)](https://huggingface.co/spaces/depth-anything/Depth-Anything-3)
+
+ </div>
+
+ ## Model Description
+
+ DA3MONO-LARGE is the large monocular model of the Depth Anything 3 series, built for high-quality relative monocular depth estimation. Unlike disparity-based models such as Depth Anything 2, it predicts depth directly, which yields superior geometric accuracy.
+
+ | Property | Value |
+ |----------|-------|
+ | **Model Series** | Monocular Depth |
+ | **Parameters** | 0.35B |
+ | **License** | Apache 2.0 |
+
+ ## Capabilities
+
+ - ✅ Relative Depth (see the alignment sketch below)
+ - ✅ Sky Segmentation
+
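+ Because the model predicts *relative* depth, its output is defined only up to an unknown scale and shift. When comparing against metric ground truth, it is standard practice to solve for those two parameters first; below is a minimal least-squares alignment sketch (generic evaluation practice, not code from this repository):
+
+ ```python
+ import numpy as np
+
+ def align_relative_depth(pred, gt, mask):
+     """Fit scale s and shift t so that s * pred + t best matches gt (least squares)."""
+     p, g = pred[mask], gt[mask]
+     A = np.stack([p, np.ones_like(p)], axis=1)
+     (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
+     return s * pred + t
+ ```
+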
+ ## Quick Start
+
+ ### Installation
+
+ ```bash
+ git clone https://github.com/ByteDance-Seed/depth-anything-3
+ cd depth-anything-3
+ pip install -e .
+ ```
+
+ ### Basic Example
+
+ ```python
+ import torch
+ from depth_anything_3.api import DepthAnything3
+
+ # Load model from Hugging Face Hub
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = DepthAnything3.from_pretrained("depth-anything/da3mono-large")
+ model = model.to(device=device)
+
+ # Run inference on images
+ images = ["image1.jpg", "image2.jpg"]  # List of image paths, PIL Images, or numpy arrays
+ prediction = model.inference(
+     images,
+     export_dir="output",
+     export_format="glb",  # Options: glb, npz, ply, mini_npz, gs_ply, gs_video
+ )
+
+ # Access results
+ print(prediction.depth.shape)       # Depth maps: [N, H, W] float32
+ print(prediction.conf.shape)        # Confidence maps: [N, H, W] float32
+ print(prediction.extrinsics.shape)  # Camera poses (w2c): [N, 3, 4] float32
+ print(prediction.intrinsics.shape)  # Camera intrinsics: [N, 3, 3] float32
+ ```
+
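+ The depth maps and intrinsics above are enough to lift a prediction into a camera-space point cloud. The following is a minimal pinhole back-projection sketch that continues from `prediction` above (generic geometry, not an API from this repo; for a relative-depth model the result is defined only up to scale):
+
+ ```python
+ import numpy as np
+
+ def unproject(depth, K):
+     """Back-project an [H, W] depth map into camera-frame 3D points via K^-1."""
+     H, W = depth.shape
+     u, v = np.meshgrid(np.arange(W), np.arange(H))
+     pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
+     rays = pix @ np.linalg.inv(K).T   # pixel coordinates -> normalized camera rays
+     return rays * depth.reshape(-1, 1)
+
+ points = unproject(prediction.depth[0], prediction.intrinsics[0])  # [H*W, 3]
+ ```
+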
+ ### Command Line Interface
+
+ ```bash
+ # Process images with auto mode
+ da3 auto path/to/images \
+     --export-format glb \
+     --export-dir output \
+     --model-dir depth-anything/da3mono-large
+
+ # Use backend for faster repeated inference
+ da3 backend --model-dir depth-anything/da3mono-large
+ da3 auto path/to/images --export-format glb --use-backend
+ ```
+
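+ If you export with `--export-format npz`, the result is a standard NumPy archive. The stored key names are not documented in this card, so the safest first step is to inspect them (the file name below is hypothetical):
+
+ ```python
+ import numpy as np
+
+ data = np.load("output/prediction.npz")  # hypothetical output path
+ print(list(data.keys()))                 # discover which arrays were stored
+ ```
+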
+ ## Model Details
+
+ - **Developed by:** ByteDance Seed Team
+ - **Model Type:** Vision Transformer for Visual Geometry
+ - **Architecture:** Plain transformer with unified depth-ray representation
+ - **Training Data:** Public academic datasets only
+
+ ### Key Insights
+
+ 💎 A **single plain transformer** (e.g., a vanilla DINO encoder) is sufficient as a backbone; no architectural specialization is required.
+
+ ✨ A singular **depth-ray representation** obviates the need for complex multi-task learning.
+
+ ## Performance
+
+ 🏆 Depth Anything 3 significantly outperforms:
+ - **Depth Anything 2** for monocular depth estimation
+ - **VGGT** for multi-view depth estimation and pose estimation
+
+ For detailed benchmarks, please refer to our [paper](https://depth-anything-3.github.io).
+
+ ## Limitations
+
+ - The model is trained only on public academic datasets, so accuracy may degrade on domain-specific imagery outside that distribution
+ - Performance may vary with image quality, lighting conditions, and scene complexity
+
+ ## Citation
+
+ If you find Depth Anything 3 useful in your research or projects, please cite:
+
+ ```bibtex
+ @article{depthanything3,
+   title={Depth Anything 3: Recovering the visual space from any views},
+   author={Haotong Lin and Sili Chen and Jun Hao Liew and Donny Y. Chen and Zhenyu Li and Guang Shi and Jiashi Feng and Bingyi Kang},
+   journal={arXiv preprint arXiv:XXXX.XXXXX},
+   year={2025}
+ }
+ ```
+
+ ## Links
+
+ - 🏠 [Project Page](https://depth-anything-3.github.io)
+ - 📄 [Paper](https://arxiv.org/abs/)
+ - 💻 [GitHub Repository](https://github.com/ByteDance-Seed/depth-anything-3)
+ - 🤗 [Hugging Face Demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-3)
+ - 📚 [Documentation](https://github.com/ByteDance-Seed/depth-anything-3#-useful-documentation)
+
+ ## Authors
+
+ [Haotong Lin](https://haotongl.github.io/) · [Sili Chen](https://github.com/SiliChen321) · [Jun Hao Liew](https://liewjunhao.github.io/) · [Donny Y. Chen](https://donydchen.github.io) · [Zhenyu Li](https://zhyever.github.io/) · [Guang Shi](https://scholar.google.com/citations?user=MjXxWbUAAAAJ&hl=en) · [Jiashi Feng](https://scholar.google.com.sg/citations?user=Q8iay0gAAAAJ&hl=en) · [Bingyi Kang](https://bingykang.github.io/)
config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "model_name": "da3mono-large",
+   "config": {
+     "__object__": {
+       "path": "depth_anything_3.model.da3",
+       "name": "DepthAnything3Net",
+       "args": "as_params"
+     },
+     "net": {
+       "__object__": {
+         "path": "depth_anything_3.model.dinov2.dinov2",
+         "name": "DinoV2",
+         "args": "as_params"
+       },
+       "name": "vitl",
+       "out_layers": [
+         4,
+         11,
+         17,
+         23
+       ],
+       "alt_start": -1,
+       "qknorm_start": -1,
+       "rope_start": -1,
+       "cat_token": false
+     },
+     "head": {
+       "__object__": {
+         "path": "depth_anything_3.model.dpt",
+         "name": "DPT",
+         "args": "as_params"
+       },
+       "dim_in": 1024,
+       "output_dim": 1,
+       "features": 256,
+       "out_channels": [
+         256,
+         512,
+         1024,
+         1024
+       ]
+     }
+   }
+ }
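
The `__object__` blocks in this config appear to act as import specs: `path` names a module, `name` a class, and `"args": "as_params"` suggests that the sibling keys become constructor keyword arguments. A minimal sketch of a loader under that assumption (an inference about the format, not the repo's actual loader):

```python
import importlib

def build(cfg):
    """Recursively instantiate '__object__' specs; a sketch of the apparent
    convention above, not the actual depth-anything-3 loader."""
    if isinstance(cfg, list):
        return [build(v) for v in cfg]
    if not isinstance(cfg, dict):
        return cfg
    spec = cfg.get("__object__")
    if spec is None:
        return {k: build(v) for k, v in cfg.items()}
    cls = getattr(importlib.import_module(spec["path"]), spec["name"])
    # "args": "as_params" seems to mean: sibling keys become constructor kwargs
    kwargs = {k: build(v) for k, v in cfg.items() if k != "__object__"}
    return cls(**kwargs)

# e.g. build(json.load(open("config.json"))["config"]) would construct DepthAnything3Net
```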
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a799a7f95eb8d4c404c2ca8be3dc3276b350a417ddc4420db72ba850cc0e960
+ size 1336734448