From 6b731bb219e0bbcf8e1e362579035385622bd748 Mon Sep 17 00:00:00 2001
From: Merve Noyan
Date: Tue, 9 Jul 2024 20:23:17 +0300
Subject: [PATCH] Update docs/source/en/tasks/monocular_depth_estimation.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/en/tasks/monocular_depth_estimation.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/tasks/monocular_depth_estimation.md b/docs/source/en/tasks/monocular_depth_estimation.md
index 67e969a6f0e6bd..1485cdadb48f13 100644
--- a/docs/source/en/tasks/monocular_depth_estimation.md
+++ b/docs/source/en/tasks/monocular_depth_estimation.md
@@ -31,7 +31,7 @@ There are two main depth estimation categories:

 - **Relative depth estimation**: Relative depth estimation aims to predict the depth order of objects or points in a scene without providing the precise measurements. These models output a depth map that indicates which parts of the scene are closer or farther relative to each other without the actual distances to A and B.

-In this guide, we will see how to infer with [Depth Anything V2](https://huggingface.co/depth-anything/Depth-Anything-V2-Large), state-of-the-art zero-shot relative depth estimation model and [ZoeDepth](https://huggingface.co/docs/transformers/main/en/model_doc/zoedepth) an absolute depth estimation model.
+In this guide, we will see how to infer with [Depth Anything V2](https://huggingface.co/depth-anything/Depth-Anything-V2-Large), a state-of-the-art zero-shot relative depth estimation model, and [ZoeDepth](https://huggingface.co/docs/transformers/main/en/model_doc/zoedepth), an absolute depth estimation model.