This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model that deals with any image under any circumstance. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~17M), which significantly enlarges the data coverage and is thus able to reduce the generalization error. However, naively utilizing pseudo labels during training leads to a severe performance drop. We investigate two simple yet effective strategies that make this data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools; it compels the model to actively seek extra visual knowledge and acquire more robust representations. Second, an auxiliary supervision is developed to force the model to inherit rich semantic priors from pre-trained encoders. We evaluate the zero-shot capabilities of our model extensively, covering six public datasets and randomly captured photos, and it demonstrates impressive generalization ability. Further, by fine-tuning it on NYUv2 and KITTI, new state-of-the-art results are set. Our models, together with the training and evaluation code, will be released.
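To make the two strategies concrete, the following is a minimal PyTorch-style sketch of one training step on unlabeled data, written from the abstract's description rather than the released code. All names here (`student`, `teacher`, `frozen_encoder`, `strong_augment`, `affine_invariant_loss`) and the loss weight `alpha` are illustrative assumptions: the teacher produces pseudo depth on clean images, the student is trained on strongly perturbed views (the harder optimization target), and an auxiliary feature-alignment loss ties the student's features to a frozen pre-trained encoder (the semantic prior).

```python
# Hypothetical sketch of the two strategies described in the abstract.
# Not the authors' released implementation; helper names and weights are assumptions.
import torch
import torch.nn.functional as F


def strong_augment(x):
    """Stand-in for a strong color perturbation (e.g. color jitter);
    scales each channel by a random factor in [0.8, 1.2]."""
    noise = 1.0 + 0.4 * (torch.rand(x.size(0), 3, 1, 1, device=x.device) - 0.5)
    return (x * noise).clamp(0, 1)


def affine_invariant_loss(pred, target):
    """Scale-and-shift-invariant depth loss, common in monocular depth
    estimation: normalize each map by its median and mean absolute
    deviation before comparing."""
    def norm(d):
        d = d.flatten(1)
        t = d.median(dim=1, keepdim=True).values
        s = (d - t).abs().mean(dim=1, keepdim=True).clamp_min(1e-6)
        return (d - t) / s
    return (norm(pred) - norm(target)).abs().mean()


def train_step(student, teacher, frozen_encoder, unlabeled_images, alpha=0.1):
    # Pseudo labels come from a fixed teacher on the *clean* images.
    with torch.no_grad():
        pseudo_depth = teacher(unlabeled_images)

    # Strategy 1: the student instead sees strongly perturbed views,
    # creating a harder optimization target that compels it to acquire
    # more robust representations.
    perturbed = strong_augment(unlabeled_images)
    pred_depth = student(perturbed)
    depth_loss = affine_invariant_loss(pred_depth, pseudo_depth)

    # Strategy 2: auxiliary semantic supervision. Align the student's
    # intermediate features with those of a frozen pre-trained encoder
    # so the model inherits rich semantic priors. `extract_features`
    # is an assumed hook returning a (B, C, H, W) feature map.
    with torch.no_grad():
        target_feat = frozen_encoder(unlabeled_images)
    student_feat = student.extract_features(perturbed)
    feat_loss = (1 - F.cosine_similarity(student_feat, target_feat, dim=1)).mean()

    return depth_loss + alpha * feat_loss
```

In this reading, the depth term propagates the teacher's pseudo labels through a deliberately harder input distribution, while the cosine-similarity term is one plausible form of the auxiliary supervision; the paper's actual alignment objective and augmentation recipe may differ.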