Update TIM.md
JacobChalk authored Apr 26, 2024
1 parent d4c37d0 commit 54aedb9
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion _publications/TIM.md
@@ -10,7 +10,7 @@ paperurl: 'https://arxiv.org/pdf/2302.00646.pdf'

Diverse actions give rise to rich audio-visual signals in long videos. Recent works showcase that the two modalities of audio and video exhibit different temporal extents of events and distinct labels. We address the interplay between the two modalities in long videos by explicitly modelling the temporal extents of audio and visual events. We propose the Time Interval Machine (TIM) where a modality-specific time interval poses as a query to a transformer encoder that ingests a long video input. The encoder then attends to the specified interval, as well as the surrounding context in both modalities, in order to recognise the ongoing action.

- We test TIM on three long audio-visual video datasets: EPIC-KITCHENS, Perception Test, and AVE, reporting state-of-the-art (SOTA) for recognition. On EPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly larger pre-training by 2.9% top-1 action recognition accuracy. Additionally, we show that TIM can be adapted for action detection, using dense multi-scale interval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics, and showing strong performance on the Perception Test. Our ablations show the critical role of integrating the two modalities and modelling their time intervals in achieving this performance. Code and models at: [https://github.com/JacobChalk/TIM].
+ We test TIM on three long audio-visual video datasets: EPIC-KITCHENS, Perception Test, and AVE, reporting state-of-the-art (SOTA) for recognition. On EPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly larger pre-training by 2.9% top-1 action recognition accuracy. Additionally, we show that TIM can be adapted for action detection, using dense multi-scale interval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics, and showing strong performance on the Perception Test. Our ablations show the critical role of integrating the two modalities and modelling their time intervals in achieving this performance. Code and models at: [https://github.com/JacobChalk/TIM](https://github.com/JacobChalk/TIM).

[Project Webpage](https://jacobchalk.github.io/TIM-Project) |
[Paper](https://arxiv.org/pdf/2404.05559) |
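For orientation, the abstract above describes the core mechanism: a modality-specific time interval is encoded as a query to a transformer encoder that ingests long-video audio-visual features, and the query token attends to its interval plus surrounding context. Below is a minimal sketch of that idea in PyTorch; it is not the released implementation (see the repository linked in the diff), and every name, dimension, and class count here is an illustrative assumption.

```python
# Sketch only: interval-as-query transformer, assumed shapes/sizes.
import torch
import torch.nn as nn

class IntervalQueryEncoder(nn.Module):
    def __init__(self, dim=512, n_heads=8, n_layers=4, n_classes=97):
        super().__init__()
        # Map a normalised (start, end) interval to a query vector.
        self.interval_mlp = nn.Sequential(
            nn.Linear(2, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Learned embeddings tagging tokens as visual (0) or audio (1).
        self.modality_emb = nn.Embedding(2, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, visual_feats, audio_feats, interval, modality_id):
        # visual_feats: (B, Tv, dim); audio_feats: (B, Ta, dim)
        # interval: (B, 2) with (start, end) in [0, 1]; modality_id: (B,)
        query = self.interval_mlp(interval) + self.modality_emb(modality_id)
        vis = visual_feats + self.modality_emb.weight[0]
        aud = audio_feats + self.modality_emb.weight[1]
        # The query token attends to the full audio-visual context.
        tokens = torch.cat([query.unsqueeze(1), vis, aud], dim=1)
        out = self.encoder(tokens)
        # Classify the queried interval from the query token's output.
        return self.classifier(out[:, 0])
```

Asking "what visual action spans seconds 3 to 5?" then amounts to passing `interval=[[0.3, 0.5]]` (normalised) with `modality_id=0` alongside the clip's features.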

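The abstract also mentions adapting TIM to detection via dense multi-scale interval queries. A hedged sketch of one way such a query grid could be generated is below; the scales and stride ratio are assumptions for illustration, not the paper's settings, and each candidate would then be scored with a model like the sketch above.

```python
# Sketch only: tile [0, 1] with overlapping intervals at several scales.
import torch

def dense_multiscale_intervals(scales=(0.1, 0.2, 0.4), stride_ratio=0.5):
    """Return an (N, 2) tensor of normalised (start, end) query intervals."""
    intervals = []
    for scale in scales:
        stride = scale * stride_ratio
        start = 0.0
        while start + scale <= 1.0 + 1e-6:
            intervals.append((start, start + scale))
            start += stride
    return torch.tensor(intervals)

queries = dense_multiscale_intervals()  # (N, 2) candidate intervals
# Hypothetical usage: score every candidate for the visual modality,
# then keep high-confidence detections (NMS or similar would follow).
# logits = model(vis.expand(len(queries), -1, -1),
#                aud.expand(len(queries), -1, -1),
#                queries, torch.zeros(len(queries), dtype=torch.long))
```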