
Commit

Updated docs regarding config files download (#329)
* updated docs regarding config files download

* fix eva text

* fix typo in `README.md`

* removed * from pcam

* fixed readme table

---------

Co-authored-by: ioangatop <[email protected]>
Co-authored-by: roman807 <[email protected]>
3 people authored Mar 20, 2024
1 parent 9c856ae commit 7375768
Showing 5 changed files with 7 additions and 15 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -93,8 +93,8 @@ In this section you will find model benchmarks which were generated with _eva_.
|--------------------------------------------------|-------|-------|-------|----------|-----------|
| ViT-S/16 _(random)_ <sup>[1]</sup> | 0.410 | 0.617 | 0.501 | 0.753 | 0.728 |
| ViT-S/16 _(ImageNet)_ <sup>[1]</sup> | 0.695 | 0.935 | 0.831 | 0.864 | 0.849 |
- | ViT-B/8 _(ImageNet)_ <sup>[1]</sup>             | 0.797 | 0.943 | 0.828 | 0.903 | 0.893 |
- | DINO<sub>(p=16)</sub> <sup>[2]</sup>            | 0.710 | 0.935 | 0.814 | 0.870 | 0.856 |
+ | ViT-B/8 _(ImageNet)_ <sup>[1]</sup>             | 0.710 | 0.939 | 0.814 | 0.870 | 0.856 |
+ | DINO<sub>(p=16)</sub> <sup>[2]</sup>            | 0.801 | 0.934 | 0.768 | 0.889 | 0.895 |
| Phikon <sup>[3]</sup> | 0.725 | 0.935 | 0.777 | 0.912 | 0.915 |
| ViT-S/16 _(kaiko.ai)_ <sup>[4]</sup> | 0.797 | 0.943 | 0.828 | 0.903 | 0.893 |
| ViT-S/8 _(kaiko.ai)_ <sup>[4]</sup> | 0.834 | 0.946 | 0.832 | 0.897 | 0.887 |
@@ -113,7 +113,7 @@ _References_:
1. _“Emerging properties in self-supervised vision transformers”_
2. _“Benchmarking self-supervised learning on diverse pathology datasets”_
3. _“Scaling self-supervised learning for histopathology with masked image modeling”_
- 4. . _“Towards Training Large-Scale Pathology Foundation Models: from TCGA to Hospital Scale”_
+ 4. _“Towards Training Large-Scale Pathology Foundation Models: from TCGA to Hospital Scale”_

## Contributing

2 changes: 1 addition & 1 deletion docs/index.md
@@ -67,7 +67,7 @@ We evaluated the following FMs on the 4 supported WSI-patch-level image classification tasks

<center>

- | FM-backbone | pretraining | BACH | CRC | MHIST | PCam/val* | PCam/test* |
+ | FM-backbone | pretraining | BACH | CRC | MHIST | PCam/val | PCam/test |
|-----------------------------|-------------|------------------ |----------------- |----------------- |----------------- |-------------- |
| DINO ViT-S16 | N/A | 0.410 (±0.009) | 0.617 (±0.008) | 0.501 (±0.004) | 0.753 (±0.002) | 0.728 (±0.003) |
| DINO ViT-S16 | ImageNet | 0.695 (±0.004) | 0.935 (±0.003) | 0.831 (±0.002) | 0.864 (±0.007) | 0.849 (±0.007) |
2 changes: 1 addition & 1 deletion docs/user-guide/getting-started/how_to_use.md
@@ -34,7 +34,7 @@ The setup for an *eva* run is provided in a `.yaml` config file which is defined

A config file specifies the setup for the *trainer* (including callback for the model backbone), the *model* (setup of the trainable decoder) and *data* module.

- To get a better understanding, inspect some of the provided [config files](https://github.com/kaiko-ai/eva/tree/main/configs/vision) (which you will download if you run the tutorials).
+ The config files for the datasets and models that _eva_ supports out of the box can be found on [GitHub](https://github.com/kaiko-ai/eva/tree/main/configs). We recommend inspecting some of them to get a better understanding of their structure and content.


### Environment variables
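Since the updated guide points readers to the configs on GitHub rather than blob storage, one way to fetch them is a shallow clone of the repository. This is only a sketch: it assumes `git` is available and that the `configs` directory layout on the main branch is unchanged; the `eva-src` directory name is arbitrary.

```shell
# Shallow-clone the eva repository and copy out the config files.
# Sketch only: assumes git is installed and network access is available.
git clone --depth 1 https://github.com/kaiko-ai/eva.git eva-src
cp -r eva-src/configs ./configs
ls configs/vision   # the vision configs referenced in the tutorials
```

A shallow clone (`--depth 1`) keeps the download small, since only the latest snapshot is needed to obtain the config files.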
9 changes: 0 additions & 9 deletions docs/user-guide/getting-started/installation.md
@@ -11,15 +11,6 @@
pip install "kaiko-eva[vision] @ git+https://github.com/kaiko-ai/eva.git"
```

- To be able to use the existing configs, download them into directory where you installed *eva*. You can get them from our blob storage with:

```
azcopy copy https://kaiko.blob.core.windows.net/long-term-experimental/eva/configs . --recursive=true
```

(Alternatively you can also download them from the [*eva* GitHub repo](https://github.com/kaiko-ai/eva/tree/main))


## Run *eva*

Now you are all set up and you can start running *eva* with:
Expand Down
3 changes: 2 additions & 1 deletion docs/user-guide/tutorials/offline_vs_online.md
@@ -3,10 +3,11 @@
In this tutorial we run *eva* with the three subcommands `predict`, `fit` and `predict_fit`, and take a look at the difference between *offline* and *online* workflows.

### Before you start
+ If you haven't downloaded the config files yet, please download them from [GitHub](https://github.com/kaiko-ai/eva/tree/main/configs).

For this tutorial we use the [BACH](../../datasets/bach.md) classification task which is available on [Zenodo](https://zenodo.org/records/3632035) and is distributed under [*Attribution-NonCommercial-ShareAlike 4.0 International*](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) license.

- If you have not yet downloaded the BACH data to your machine, open `configs/vision/dino_vit/offline/bach.yaml` and enable automatic download by setting: `download: true`.
+ To let **eva** automatically handle the dataset download, you can open `configs/vision/dino_vit/offline/bach.yaml` and set `download: true`. Before doing so, please make sure that your use case is compliant with the dataset license.

## *Offline* evaluations

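The three subcommands named in this tutorial can be sketched as shell invocations against the BACH config. This assumes *eva* is installed and that you run from the directory containing `configs/`; the exact flags accepted by your installed version may differ, so treat this as illustrative rather than authoritative.

```shell
# Sketch: the three eva subcommands from the tutorial, run on the BACH task.
# Assumes eva is installed and configs/ sits in the current directory.
CONFIG=configs/vision/dino_vit/offline/bach.yaml
eva predict --config "$CONFIG"       # offline: compute and store embeddings
eva fit --config "$CONFIG"           # offline: train the decoder on stored embeddings
eva predict_fit --config "$CONFIG"   # run both steps in a single command
```

Running `predict` and `fit` separately is what the tutorial calls the *offline* workflow; `predict_fit` chains them in one go.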
