Source code for the ISCA Interspeech 2024 paper On the Utility of Speech and Audio Foundation Models for Marmoset Call Analysis by E. Sarkar and M. Magimai Doss (2024)


On the Utility of Speech and Audio Foundation Models for Marmoset Call Analysis

[Paper] [Slides]


Cite

This repository contains the source code for the ISCA Interspeech 2024 paper On the Utility of Speech and Audio Foundation Models for Marmoset Call Analysis by E. Sarkar and M. Magimai Doss (2024). It was accepted in the workshop track of the 4th International Workshop on Vocal Interactivity In-and-between Humans, Animals and Robots (VIHAR 2024).

Please cite the original authors in any publication that uses this work:

@inproceedings{sarkar24_vihar,
  title     = {On the Utility of Speech and Audio Foundation Models for Marmoset Call Analysis},
  author    = {Eklavya Sarkar and Mathew Magimai.-Doss},
  year      = {2024},
  booktitle = {4th International Workshop on Vocal Interactivity In-and-between Humans, Animals and Robots (VIHAR2024)},
  doi       = {10.5281/zenodo.13935495},
  isbn      = {978-2-9562029-3-6},
}

Dataset

InfantMarmosetsVox is a dataset for multi-class call-type and caller identification. It contains audio recordings of individual marmosets and their call-types: 350 precisely labelled 10-minute recordings across all caller classes. The audio was recorded from five pairs of infant marmoset twins, each recorded individually in two separate sound-proofed recording rooms at a sampling rate of 44.1 kHz. The start time, end time, call-type, and marmoset identity of each vocalization were labelled by an experienced researcher. The dataset contains a total of 169,318 labelled audio segments, which amounts to 72,921 vocalization segments after removing the "Silence" and "Noise" classes. There are 11 different call-types (excluding "Silence" and "Noise") and 10 different caller identities.

The dataset is publicly available here and includes a usable PyTorch `Dataset` and `DataLoader`. Any publication (e.g., conference paper, journal article, technical report, book chapter) resulting from the use of InfantMarmosetsVox must cite this paper:

@inproceedings{sarkar23_interspeech,
  title     = {Can Self-Supervised Neural Representations Pre-Trained on Human Speech distinguish Animal Callers?},
  author    = {Eklavya Sarkar and Mathew Magimai.-Doss},
  year      = {2023},
  booktitle = {INTERSPEECH 2023},
  pages     = {1189--1193},
  doi       = {10.21437/Interspeech.2023-1968},
  issn      = {2958-1796},
}

More information on the usage is provided in the README.txt file of the dataset.
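The labelled segments can be consumed with the standard PyTorch `Dataset`/`DataLoader` pattern. The sketch below is illustrative only: the class name, field names, and label layout are assumptions for demonstration, not the dataset's actual API (see its README.txt for the real interface).

```python
# Illustrative sketch only: class and field names are assumptions,
# not the actual InfantMarmosetsVox API (see the dataset's README.txt).
import torch
from torch.utils.data import Dataset, DataLoader


class MarmosetSegments(Dataset):
    """Hypothetical wrapper around pre-extracted vocalization segments."""

    def __init__(self, segments, call_types, caller_ids):
        assert len(segments) == len(call_types) == len(caller_ids)
        self.segments = segments      # list of 1-D float tensors (44.1 kHz audio)
        self.call_types = call_types  # int labels in [0, 10] (11 call-types)
        self.caller_ids = caller_ids  # int labels in [0, 9] (10 callers)

    def __len__(self):
        return len(self.segments)

    def __getitem__(self, idx):
        return self.segments[idx], self.call_types[idx], self.caller_ids[idx]


# Toy usage with random "audio" standing in for 1-second segments
data = [torch.randn(44100) for _ in range(4)]
ds = MarmosetSegments(data, call_types=[0, 1, 2, 3], caller_ids=[0, 0, 1, 1])
loader = DataLoader(ds, batch_size=2)
for audio, call_type, caller in loader:
    print(audio.shape, call_type.tolist(), caller.tolist())
```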

Installation

This package has very few requirements. To create a new environment, install conda and mamba, then follow these steps:

# Clone project
git clone https://github.com/idiap/speech-utility-bioacoustics
cd speech-utility-bioacoustics

# Create and activate environment
mamba env create -f environment.yaml
mamba activate marmosets

Usage

Train a model with a chosen experiment configuration from configs/experiment/:

python src/train.py experiment=experiment_name.yaml

You can override any parameter from the command line like this:

python src/train.py trainer.max_epochs=20
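A Hydra experiment config under `configs/experiment/` typically pins the settings that differ from the defaults. The fragment below is a sketch of what such a file might look like; the keys and values are illustrative assumptions, not this repository's actual configs.

```yaml
# configs/experiment/example.yaml (illustrative; actual keys may differ)
# @package _global_
defaults:
  - override /trainer: default

trainer:
  max_epochs: 20
  accelerator: gpu

model:
  lr: 1.0e-3
```

Command-line overrides such as `trainer.max_epochs=20` take precedence over the values composed from these files.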

Experiments

The scripts for the experiments conducted in this paper can be found in the scripts folder, covering feature extraction, pairwise distance computation, and training.

Sample run:

$ ./scripts/train/wavlm.sh

These scripts use gridtk for job submission but can be reconfigured according to the user's needs.
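The pairwise distance computation can be sketched as follows: a minimal cosine-distance example over embedding vectors using NumPy. This is a generic illustration, not the repository's actual script, which may use different features and metrics.

```python
# Minimal sketch: cosine distances between all pairs of embedding vectors.
# Illustrative only; the repository's scripts may differ.
import numpy as np


def pairwise_cosine_distances(X):
    """Cosine distance between all row pairs of X with shape (n_samples, n_dims)."""
    X = np.asarray(X, dtype=float)
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalize rows
    sims = unit @ unit.T                                 # cosine similarity matrix
    return 1.0 - np.clip(sims, -1.0, 1.0)                # distance in [0, 2]


# Toy embeddings: two vectors with the same direction, one orthogonal
emb = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
D = pairwise_cosine_distances(emb)
print(np.round(D, 3))  # D[0, 1] is 0 (same direction), D[0, 2] is 1 (orthogonal)
```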

Directory Structure

The structure of this directory is organized as follows:

.
├── CITATION.cff            # Citation metadata
├── configs                 # Experiment configs
├── environment.yaml        # Environment file
├── hydra_plugins           # Plugins
├── img                     # Images
├── LICENSE                 # License
├── Makefile                # Setup
├── MANIFEST.in             # Setup
├── pyproject.toml          # Setup
├── README.md               # This file
├── requirements.txt        # Requirements
├── scripts                 # Scripts
├── setup.py                # Setup
├── src                     # Python source code
└── version.txt             # Version

Contact

For questions or to report issues with this software package, please contact the first author.
