updated run commands #320

Merged 4 commits on Mar 19, 2024
155 changes: 48 additions & 107 deletions docs/user-guide/advanced/replicate_evaluations.md
Make sure to replace `<task>` in the commands below with `bach`, `crc`, `mhist`
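Since the same command template is reused for every task, the substitution can also be scripted. A minimal dry-run sketch, assuming the task list from the sentence above (the real list may contain more tasks; `echo` stands in for actually invoking *eva*):

```shell
# Print the eva command once per task; drop the `echo` to actually run them.
for task in bach crc mhist; do
  echo eva predict_fit --config "configs/vision/dino_vit/offline/${task}.yaml"
done
```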

## DINO ViT-S16 (random weights)

Evaluating the backbone with randomly initialized weights serves as a baseline to compare the pretrained FMs to an FM that produces embeddings without any prior learning on image tasks. To evaluate, run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vits16_random" \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```
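The `VAR=value command` prefixes used throughout these commands scope each variable to that single invocation, rather than exporting it for the whole shell session. A small illustration in plain POSIX shell (no *eva* required; assumes `PRETRAINED` is not already set in your environment):

```shell
# The assignment is visible to the command it prefixes...
PRETRAINED=false sh -c 'echo "inside: $PRETRAINED"'   # prints: inside: false
# ...but does not leak into the surrounding shell:
echo "after: ${PRETRAINED:-unset}"                    # prints: after: unset
```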

## DINO ViT-S16 (ImageNet)
The next baseline model uses a pretrained ViT-S16 backbone with ImageNet weights. To evaluate, run:

```
EMBEDDINGS_ROOT="./data/embeddings/dino_vits16_imagenet" \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

## DINO ViT-B8 (ImageNet)

To evaluate performance on the larger ViT-B8 backbone pretrained on ImageNet, run:
```
EMBEDDINGS_ROOT="./data/embeddings/dino_vitb8_imagenet" \
DINO_BACKBONE=dino_vitb8 \
IN_FEATURES=768 \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -69,17 +41,11 @@ eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
on [GitHub](https://github.com/lunit-io/benchmark-ssl-pathology/releases/). To evaluate, run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vits16_lunit" \
CHECKPOINT_PATH="https://github.com/lunit-io/benchmark-ssl-pathology/releases/download/pretrained-weights/dino_vit_small_patch16_ep200.torch" \
NORMALIZE_MEAN=[0.70322989,0.53606487,0.66096631] \
NORMALIZE_STD=[0.21716536,0.26081574,0.20723464] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -89,14 +55,11 @@ eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
[HuggingFace](https://huggingface.co/owkin/phikon). To evaluate, run:

```
EMBEDDINGS_ROOT="./data/embeddings/dino_vitb16_owkin" \
eva predict_fit --config configs/vision/owkin/phikon/offline/<task>.yaml
```

Note: since *eva* provides the config files to evaluate tasks with the Phikon FM in `configs/vision/owkin/phikon/offline`, it is not necessary to set the environment variables needed for the runs above.

Expand All @@ -106,17 +69,11 @@ To evaluate [kaiko.ai's](https://www.kaiko.ai/) FM with DINO ViT-S16 backbone, p
on [GitHub](https://github.com/lunit-io/benchmark-ssl-pathology/releases/), run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vits16_kaiko" \
CHECKPOINT_PATH=[TBD*] \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -128,17 +85,12 @@ To evaluate [kaiko.ai's](https://www.kaiko.ai/) FM with DINO ViT-S8 backbone, pr
on [GitHub](https://github.com/lunit-io/benchmark-ssl-pathology/releases/), run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vits8_kaiko" \
DINO_BACKBONE=dino_vits8 \
CHECKPOINT_PATH=[TBD*] \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -150,17 +102,13 @@ To evaluate [kaiko.ai's](https://www.kaiko.ai/) FM with the larger DINO ViT-B16
run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vitb16_kaiko" \
DINO_BACKBONE=dino_vitb16 \
CHECKPOINT_PATH=[TBD*] \
IN_FEATURES=768 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -172,17 +120,13 @@ To evaluate [kaiko.ai's](https://www.kaiko.ai/) FM with the larger DINO ViT-B8 b
run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dino_vitb8_kaiko" \
DINO_BACKBONE=dino_vitb8 \
CHECKPOINT_PATH=[TBD*] \
IN_FEATURES=768 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

Expand All @@ -194,18 +138,15 @@ To evaluate [kaiko.ai's](https://www.kaiko.ai/) FM with the larger DINOv2 ViT-L1
run:

```
PRETRAINED=false \
EMBEDDINGS_ROOT="./data/embeddings/dinov2_vitl14_kaiko" \
REPO_OR_DIR=facebookresearch/dinov2:main \
DINO_BACKBONE=dinov2_vitl14_reg \
FORCE_RELOAD=true \
CHECKPOINT_PATH=[TBD*] \
IN_FEATURES=1024 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/dino_vit/offline/<task>.yaml
```

2 changes: 1 addition & 1 deletion docs/user-guide/getting-started/installation.md
- Install *eva* and the *eva-vision* package with:

```
pip install "kaiko-eva[vision] @ git+https://github.com/kaiko-ai/eva.git"
```

- To be able to use the existing configs, download them into the directory where you installed *eva*. You can get them from our blob storage with:
11 changes: 4 additions & 7 deletions docs/user-guide/tutorials/evaluate_resnet.md
Now let's adapt the new `bach.yaml`-config to the new model:
```
drop_rate: 0.0
pretrained: false
```
To reduce training time, let's overwrite some of the default parameters. Run the training and evaluation with:
```
OUTPUT_ROOT=logs/resnet/bach \
MAX_STEPS=50 \
LR_VALUE=0.01 \
eva fit --config configs/vision/resnet18/bach.yaml
```
Once the run is complete, take a look at the results in `logs/resnet/bach/<session-id>/results.json` and check out the TensorBoard logs with `tensorboard --logdir logs/resnet/bach`. How does the performance compare to the results observed in the previous tutorials?
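To skim the metrics without opening TensorBoard, `python3 -m json.tool` pretty-prints the results file. A sketch using a hypothetical stand-in file and metric name (substitute the real `logs/resnet/bach/<session-id>/results.json` path from your run):

```shell
# Stand-in results file; the metric name below is hypothetical, not eva's actual output schema.
printf '{"val/MulticlassAccuracy": 0.85}\n' > /tmp/results_example.json
# Pretty-print the JSON to the terminal.
python3 -m json.tool /tmp/results_example.json
```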
38 changes: 16 additions & 22 deletions docs/user-guide/tutorials/offline_vs_online.md
If you have not yet downloaded the BACH data to your machine, open `configs/visi`
First, let's use the `predict`-command to download the data and compute embeddings. In this example we use a randomly initialized `dino_vits16` as backbone.

Open a terminal in the folder where you installed *eva* and run:
```
PRETRAINED=false \
EMBEDDINGS_ROOT=./data/embeddings/dino_vits16_random \
eva predict --config configs/vision/dino_vit/offline/bach.yaml
```
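Before moving on, you can sanity-check that embeddings were actually written. A minimal sketch, assuming the `EMBEDDINGS_ROOT` used in the command above (the directory layout inside it is not assumed, only that files exist):

```shell
EMBEDDINGS_ROOT=./data/embeddings/dino_vits16_random
if [ -d "$EMBEDDINGS_ROOT" ]; then
  # Count every file the predict step produced under the embeddings root.
  echo "found $(find "$EMBEDDINGS_ROOT" -type f | wc -l) embedding files under $EMBEDDINGS_ROOT"
else
  echo "no embeddings at $EMBEDDINGS_ROOT yet"
fi
```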

Once the session is complete, verify that:

Now we can use the `fit`-command to evaluate the FM on the precomputed embeddings.

To ensure a quick run for the purpose of this exercise, we overwrite some of the default parameters. Run *eva* to fit the decoder classifier with:

```
N_RUNS=2 \
MAX_STEPS=20 \
LR_VALUE=0.1 \
eva fit --config configs/vision/dino_vit/offline/bach.yaml
```

With the `predict_fit`-command, the two steps above can be executed with one command.

Go back to the terminal and execute:
```
N_RUNS=1 \
MAX_STEPS=20 \
LR_VALUE=0.1 \
PRETRAINED=true \
EMBEDDINGS_ROOT=./data/embeddings/dino_vits16_pretrained \
eva predict_fit --config configs/vision/dino_vit/offline/bach.yaml
```

As in *Step 3* above, we again use a `dino_vits16` pretrained on ImageNet.

Run a complete online workflow with the following command:
```
N_RUNS=1 \
MAX_STEPS=20 \
LR_VALUE=0.1 \
PRETRAINED=true \
eva fit --config configs/vision/dino_vit/online/bach.yaml
```
