
TimeSformer

Introduction

```BibTeX
@misc{bertasius2021spacetime,
    title   = {Is Space-Time Attention All You Need for Video Understanding?},
    author  = {Gedas Bertasius and Heng Wang and Lorenzo Torresani},
    year    = {2021},
    eprint  = {2102.05095},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
```

Model Zoo

Kinetics-400

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time (video/s) | gpu_mem (M) | ckpt | log | json |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| timesformer_divST_8x32x1_15e_kinetics400_rgb | short-side 320 | 8 | TimeSformer | ImageNet-21K | 77.92 | 93.29 | x | 17874 | ckpt | log | json |
| timesformer_jointST_8x32x1_15e_kinetics400_rgb | short-side 320 | 8 | TimeSformer | ImageNet-21K | 77.01 | 93.08 | x | 25658 | ckpt | log | json |
| timesformer_spaceOnly_8x32x1_15e_kinetics400_rgb | short-side 320 | 8 | TimeSformer | ImageNet-21K | 76.93 | 92.90 | x | 12750 | ckpt | log | json |

Notes:

  1. The gpus column indicates the number of GPUs (32G V100) we used to get the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may need to set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.005 for 8 GPUs x 8 videos/gpu and lr=0.00375 for 8 GPUs x 6 videos/gpu.
  2. We keep the test setting consistent with the original repo (three crops x 1 clip).
  3. The pretrained model vit_base_patch16_224.pth used by TimeSformer was converted from vision_transformer.
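The Linear Scaling Rule in note 1 boils down to one line of arithmetic. A minimal sketch, using the reference point from the note above (lr=0.005 at 8 GPUs x 8 videos/GPU); the helper name is hypothetical and not part of the codebase:

```python
# Linear Scaling Rule: learning rate scales linearly with total batch size.
# Reference point from the note above: lr = 0.005 for 8 GPUs x 8 videos/GPU.
BASE_LR = 0.005
BASE_BATCH = 8 * 8  # 64 videos per training step

def scaled_lr(gpus, videos_per_gpu, base_lr=BASE_LR, base_batch=BASE_BATCH):
    """Hypothetical helper: rescale the learning rate for a new total batch size."""
    return base_lr * (gpus * videos_per_gpu) / base_batch

print(scaled_lr(8, 8))  # 0.005
print(scaled_lr(8, 6))  # 0.00375
```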

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the TimeSformer model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.

```shell
python tools/train.py configs/recognition/timesformer/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    --work-dir work_dirs/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    --validate --seed 0 --deterministic
```
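The `--seed` and `--deterministic` flags fix the random state so two runs produce the same results. A rough sketch of the idea using only Python and NumPy; this is an illustration, not the framework's actual seeding code, which also seeds torch and enables cudnn deterministic mode:

```python
import random
import numpy as np

def seed_everything(seed):
    """Illustrative only: fix the Python and NumPy RNG state so that
    subsequent random draws are reproducible across runs."""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(0)
a = np.random.rand(3)
seed_everything(0)
b = np.random.rand(3)
assert np.array_equal(a, b)  # identical draws after re-seeding
```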

For more details, you can refer to the Training setting section in getting_started.

Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the TimeSformer model on the Kinetics-400 dataset and dump the result to a JSON file.

```shell
python tools/test.py configs/recognition/timesformer/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json
```
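The `top_k_accuracy` metric requested above is the fraction of videos whose ground-truth label falls among the model's k highest-scoring classes. A minimal NumPy sketch of that metric (an illustration, not MMAction2's exact implementation):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best classes
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# Toy score matrix: 3 videos x 3 classes.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 1, 2])  # second video is misclassified at k=1

print(top_k_accuracy(scores, labels, k=1))  # two of three correct
print(top_k_accuracy(scores, labels, k=2))  # all three within top-2
```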

For more details, you can refer to the Test a dataset section in getting_started.