diff --git a/README.md b/README.md
index 9326f670..3fc27062 100644
--- a/README.md
+++ b/README.md
@@ -93,8 +93,8 @@ In this section you will find model benchmarks which were generated with _eva_.
|--------------------------------------------------|-------|-------|-------|----------|-----------|
| ViT-S/16 _(random)_ [1] | 0.410 | 0.617 | 0.501 | 0.753 | 0.728 |
| ViT-S/16 _(ImageNet)_ [1] | 0.695 | 0.935 | 0.831 | 0.864 | 0.849 |
-| ViT-B/8 _(ImageNet)_ [1] | 0.797 | 0.943 | 0.828 | 0.903 | 0.893 |
-| DINO(p=16) [2] | 0.710 | 0.935 | 0.814 | 0.870 | 0.856 |
+| ViT-B/8 _(ImageNet)_ [1] | 0.710 | 0.939 | 0.814 | 0.870 | 0.856 |
+| DINO(p=16) [2] | 0.801 | 0.934 | 0.768 | 0.889 | 0.895 |
| Phikon [3] | 0.725 | 0.935 | 0.777 | 0.912 | 0.915 |
| ViT-S/16 _(kaiko.ai)_ [4] | 0.797 | 0.943 | 0.828 | 0.903 | 0.893 |
| ViT-S/8 _(kaiko.ai)_ [4] | 0.834 | 0.946 | 0.832 | 0.897 | 0.887 |
@@ -113,7 +113,7 @@ _References_:
1. _"Emerging properties in self-supervised vision transformers”_
2. _"Benchmarking self-supervised learning on diverse pathology datasets”_
3. _"Scaling self-supervised learning for histopathology with masked image modeling”_
-4. . _"Towards Training Large-Scale Pathology Foundation Models: from TCGA to Hospital Scale”_
+4. _"Towards Training Large-Scale Pathology Foundation Models: from TCGA to Hospital Scale”_
## Contributing
diff --git a/docs/index.md b/docs/index.md
index d42e1c91..636d141d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -67,7 +67,7 @@ We evaluated the following FMs on the 4 supported WSI-patch-level image classifi
-| FM-backbone | pretraining | BACH | CRC | MHIST | PCam/val* | PCam/test* |
+| FM-backbone | pretraining | BACH | CRC | MHIST | PCam/val | PCam/test |
|-----------------------------|-------------|------------------ |----------------- |----------------- |----------------- |-------------- |
| DINO ViT-S16 | N/A | 0.410 (±0.009) | 0.617 (±0.008) | 0.501 (±0.004) | 0.753 (±0.002) | 0.728 (±0.003) |
| DINO ViT-S16 | ImageNet | 0.695 (±0.004) | 0.935 (±0.003) | 0.831 (±0.002) | 0.864 (±0.007) | 0.849 (±0.007) |
diff --git a/docs/user-guide/getting-started/how_to_use.md b/docs/user-guide/getting-started/how_to_use.md
index 6fca671e..919aa78d 100644
--- a/docs/user-guide/getting-started/how_to_use.md
+++ b/docs/user-guide/getting-started/how_to_use.md
@@ -34,7 +34,7 @@ The setup for an *eva* run is provided in a `.yaml` config file which is defined
A config file specifies the setup for the *trainer* (including callback for the model backbone), the *model* (setup of the trainable decoder) and *data* module.
-To get a better understanding, inspect some of the provided [config files](https://github.com/kaiko-ai/eva/tree/main/configs/vision) (which you will download if you run the tutorials).
+You can find the config files for the datasets and models that _eva_ supports out of the box on [GitHub](https://github.com/kaiko-ai/eva/tree/main/configs). We recommend inspecting some of them to get a better understanding of their structure and content.
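+In skeleton form, a config file is organized roughly like this (a minimal sketch for orientation only; the keys and values below are illustrative placeholders, not a working configuration):
+
+```yaml
+trainer:   # trainer setup, including callbacks (e.g. for the model backbone)
+  max_epochs: 100
+model:     # the trainable decoder that is fitted on top of the FM backbone
+  num_classes: 4
+data:      # the data module: dataset location, splits and dataloaders
+  root: ./data
+  batch_size: 256
+```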
### Environment variables
diff --git a/docs/user-guide/getting-started/installation.md b/docs/user-guide/getting-started/installation.md
index 85b13ced..d93c5043 100644
--- a/docs/user-guide/getting-started/installation.md
+++ b/docs/user-guide/getting-started/installation.md
@@ -11,15 +11,6 @@
pip install "kaiko-eva[vision] @ git+https://github.com/kaiko-ai/eva.git"
```
-- To be able to use the existing configs, download them into directory where you installed *eva*. You can get them from our blob storage with:
-
-```
-azcopy copy https://kaiko.blob.core.windows.net/long-term-experimental/eva/configs . --recursive=true
-```
-
-(Alternatively you can also download them from the [*eva* GitHub repo](https://github.com/kaiko-ai/eva/tree/main))
-
-
## Run *eva*
-Now you are all setup and you can start running *eva* with:
+Now you are all set up and you can start running *eva* with:
diff --git a/docs/user-guide/tutorials/offline_vs_online.md b/docs/user-guide/tutorials/offline_vs_online.md
index 7dae65d7..4ee81115 100644
--- a/docs/user-guide/tutorials/offline_vs_online.md
+++ b/docs/user-guide/tutorials/offline_vs_online.md
@@ -3,10 +3,11 @@
In this tutorial we run *eva* with the three subcommands `predict`, `fit` and `predict_fit`, and take a look at the difference between *offline* and *online* workflows.
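+All three subcommands follow the same invocation pattern (shown here as a sketch; substitute the config file of the task you want to run):
+
+```
+eva predict     --config <path/to/config.yaml>   # compute and store the FM embeddings
+eva fit         --config <path/to/config.yaml>   # train and evaluate a decoder
+eva predict_fit --config <path/to/config.yaml>   # run both steps in sequence
+```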
### Before you start
+If you haven't downloaded the config files yet, please download them from [GitHub](https://github.com/kaiko-ai/eva/tree/main/configs).
-For this tutorial we use the [BACH](../../datasets/bach.md) classification task which is available on [Zenodo](https://zenodo.org/records/3632035) and is distributed under [*Attribution-NonCommercial-ShareAlike 4.0 International*](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) license.
+For this tutorial we use the [BACH](../../datasets/bach.md) classification task, which is available on [Zenodo](https://zenodo.org/records/3632035) and is distributed under the [*Attribution-NonCommercial-NoDerivatives 4.0 International*](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) license.
-If you have not yet downloaded the BACH data to your machine, open `configs/vision/dino_vit/offline/bach.yaml` and enable automatic download by setting: `download: true`.
+To let *eva* automatically handle the dataset download, you can open `configs/vision/dino_vit/offline/bach.yaml` and set `download: true`. Before doing so, please make sure that your use case complies with the dataset license.
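+The relevant excerpt of the config then looks roughly like this (a hypothetical sketch; the exact nesting in `bach.yaml` may differ, so set the flag wherever the dataset is defined):
+
+```
+data:                                         # data module section of the config
+  init_args:
+    datasets:
+      train:
+        class_path: eva.vision.datasets.BACH  # hypothetical class path for the BACH dataset
+        init_args:
+          download: true                      # enable automatic download
+```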
## *Offline* evaluations