Merge pull request espnet#5730 from satvik-dixit/tmp
ESPnet Recipe for ASR on the Makerere Radio Speech Corpus
sw005320 authored May 1, 2024
2 parents 0d0428d + ac71eb4 commit 543f488
Showing 20 changed files with 412 additions and 0 deletions.
1 change: 1 addition & 0 deletions egs2/README.md
@@ -109,6 +109,7 @@ See: https://espnet.github.io/espnet/espnet2_tutorial.html#recipes-using-espnet2
| lt_speech_commands | Lithuanian Speech Commands dataset | SLU | LIT | https://github.com/kolesov93/lt_speech_commands | |
| m4singer | Multi-Style, Multi-Singer and Musical Score Provided Mandarin Singing Corpus | SVS | CMN | https://drive.google.com/file/d/1xC37E59EWRRFFLdG3aJkVqwtLDgtFNqW/view?usp=share_link | |
| magicdata | MAGICDATA Mandarin Chinese Read Speech Corpus | ASR | CMN | https://www.openslr.org/68/ | |
| makerere | Makerere Radio Speech Corpus | ASR | LUG | https://zenodo.org/records/5855017 | |
| media | MEDIA speech database for French | SLU/Entity Classifi. | FRA | https://catalogue.elra.info/en-us/repository/browse/ELRA-S0272/ | |
| mediaspeech | MediaSpeech: Multilanguage ASR Benchmark and Dataset | ASR | FRA | https://www.openslr.org/108/ | |
| meld | MELD: Multimodal EmotionLines Dataset | SLU | ENG | https://affective-meld.github.io/ | |
48 changes: 48 additions & 0 deletions egs2/makerere/asr1/README.md
@@ -0,0 +1,48 @@
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Mar 28 00:34:47 EDT 2024`
- python version: `3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]`
- espnet version: `espnet 202402`
- pytorch version: `pytorch 2.0.1`
- Git hash: `eed7751c910977290ef9a177ea0942a0e3c2fd35`
- Commit date: `Mon Mar 25 18:26:50 2024 +0000`
- HF link: https://huggingface.co/satvik-dixit/asr_makerere

## exp/asr_train_asr_demo_branchformer_raw_en_bpe300_sp
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_beam_size10_ctc_weight0.3_asr_model_valid.acc.ave/test|2154|30850|58.1|33.9|8.0|5.2|47.1|98.5|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_beam_size10_ctc_weight0.3_asr_model_valid.acc.ave/test|2154|207551|88.9|4.0|7.1|4.3|15.4|98.5|

### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_beam_size10_ctc_weight0.3_asr_model_valid.acc.ave/test|2154|83642|74.7|15.4|9.9|4.7|30.0|98.5|

## exp/asr_train_asr_demo_branchformer_raw_en_bpe300_sp/inference_beam_size10_ctc_weight0.3_asr_model_valid.acc.ave
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/train_dev|200|2560|52.1|37.0|10.9|4.2|52.1|98.0|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/train_dev|200|16726|85.3|4.9|9.7|4.2|18.9|98.0|

### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/train_dev|200|6636|68.8|18.3|12.9|4.6|35.8|98.0|
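As a sanity check on the tables above, the overall error rate Err is the sum of the substitution, deletion, and insertion rates (all percentages of reference tokens). A minimal sketch, with a hypothetical helper name, using the figures from the test-set WER row:

```python
def overall_error(sub, dele, ins):
    """Err (%) as reported in show_asr_result.sh tables: Sub + Del + Ins."""
    return round(sub + dele + ins, 1)

# test-set WER row: Sub=33.9, Del=8.0, Ins=5.2 -> Err=47.1
print(overall_error(33.9, 8.0, 5.2))  # → 47.1
```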
1 change: 1 addition & 0 deletions egs2/makerere/asr1/asr.sh
110 changes: 110 additions & 0 deletions egs2/makerere/asr1/cmd.sh
@@ -0,0 +1,110 @@
# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
# Usage: <cmd>.pl [options] JOB=1:<nj> <log> <command...>
# e.g.
# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
#
# Options:
# --time <time>: Limit the maximum time to execute.
# --mem <mem>: Limit the maximum memory usage.
# --max-jobs-run <njob>: Limit the number of parallel jobs. This is ignored for non-array jobs.
# --num-threads <ncpu>: Specify the number of CPU cores.
# --gpu <ngpu>: Specify the number of GPU devices.
# --config: Change the configuration file from default.
#
# "JOB=1:10" is used for "array jobs" and it can control the number of parallel jobs.
# The left string of "=", i.e. "JOB", is replaced by <N> (the N-th job) in the command and the log file name,
# e.g. "echo JOB" is changed to "echo 3" for the 3rd job and "echo 8" for the 8th job, respectively.
# Note that the range must start from a positive number, so you can't use "JOB=0:10", for example.
#
# run.pl, queue.pl, slurm.pl, and ssh.pl have a unified interface that does not depend on the backend.
# These options are mapped to backend-specific options, as configured by
# "conf/queue.conf" and "conf/slurm.conf" by default.
# If jobs failed, your configuration might be wrong for your environment.
#
#
# The official documentation for run.pl, queue.pl, slurm.pl, and ssh.pl:
# "Parallelization in Kaldi": http://kaldi-asr.org/doc/queue.html
# =========================================================


# Select the backend used by run.sh from "local", "stdout", "sge", "pbs", "slurm", "ssh", or "jhu"
cmd_backend='local'

# Local machine, without any Job scheduling system
if [ "${cmd_backend}" = local ]; then

# Used for all other jobs
export train_cmd="run.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="run.pl"
# Used for "*_recog.py"
export decode_cmd="run.pl"

# Local machine logging to stdout and log file, without any Job scheduling system
elif [ "${cmd_backend}" = stdout ]; then

# Used for all other jobs
export train_cmd="stdout.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="stdout.pl"
# Used for "*_recog.py"
export decode_cmd="stdout.pl"


# "qsub" (Sun Grid Engine, or derivation of it)
elif [ "${cmd_backend}" = sge ]; then
# The default setting is written in conf/queue.conf.
# You must change "-q g.q" to the "queue" name for your environment.
# To know the "queue" names, type "qhost -q".
# Note that to use "--gpu *", you have to set up "complex_value" for the system scheduler.

export train_cmd="queue.pl"
export cuda_cmd="queue.pl"
export decode_cmd="queue.pl"


# "qsub" (Torque/PBS)
elif [ "${cmd_backend}" = pbs ]; then
# The default setting is written in conf/pbs.conf.

export train_cmd="pbs.pl"
export cuda_cmd="pbs.pl"
export decode_cmd="pbs.pl"


# "sbatch" (Slurm)
elif [ "${cmd_backend}" = slurm ]; then
# The default setting is written in conf/slurm.conf.
# You must change "-p cpu" and "-p gpu" to the "partition" names for your environment.
# To know the "partition" names, type "sinfo".
# You can use "--gpu *" by default for slurm, and it is interpreted as "--gres gpu:*".
# The devices are allocated exclusively using "${CUDA_VISIBLE_DEVICES}".

export train_cmd="slurm.pl"
export cuda_cmd="slurm.pl"
export decode_cmd="slurm.pl"

elif [ "${cmd_backend}" = ssh ]; then
# You have to create ".queue/machines" to specify the hosts on which to execute jobs.
# e.g. .queue/machines
#   host1
#   host2
#   host3
# Assuming you can log in to them without a password, i.e. you have to set up SSH keys.

export train_cmd="ssh.pl"
export cuda_cmd="ssh.pl"
export decode_cmd="ssh.pl"

# This is an example of specifying several unique options in the JHU CLSP cluster setup.
# Users can modify/add their own command options according to their cluster environments.
elif [ "${cmd_backend}" = jhu ]; then

export train_cmd="queue.pl --mem 2G"
export cuda_cmd="queue-freegpu.pl --mem 2G --gpu 1 --config conf/queue.conf"
export decode_cmd="queue.pl --mem 4G"

else
echo "$0: Error: Unknown cmd_backend=${cmd_backend}" 1>&2
return 1
fi
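The `JOB=1:<nj>` array-job expansion described in the cmd.sh header can be illustrated with a small sketch (a hypothetical helper, not part of the recipe): each job index is substituted for the placeholder in both the log file name and the command.

```python
def expand_array_job(spec, log, cmd):
    """Expand e.g. spec='JOB=1:3' into one (log, command) pair per job index."""
    name, rng = spec.split("=")
    start, end = (int(v) for v in rng.split(":"))
    jobs = []
    for i in range(start, end + 1):
        # replace the placeholder in the log name and in every command token
        jobs.append((log.replace(name, str(i)),
                     [tok.replace(name, str(i)) for tok in cmd]))
    return jobs

# mirrors: run.pl --mem 4G JOB=1:3 echo.JOB.log echo JOB  ->  3 jobs
for log, cmd in expand_array_job("JOB=1:3", "echo.JOB.log", ["echo", "JOB"]):
    print(log, cmd)
```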
2 changes: 2 additions & 0 deletions egs2/makerere/asr1/conf/fbank.conf
@@ -0,0 +1,2 @@
--sample-frequency=16000
--num-mel-bins=80
11 changes: 11 additions & 0 deletions egs2/makerere/asr1/conf/pbs.conf
@@ -0,0 +1,11 @@
# Default configuration
command qsub -V -v PATH -S /bin/bash
option name=* -N $0
option mem=* -l mem=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -l ncpus=$0
option num_threads=1 # Do not add anything to qsub_opts
option num_nodes=* -l nodes=$0:ppn=1
default gpu=0
option gpu=0
option gpu=* -l ngpus=$0
1 change: 1 addition & 0 deletions egs2/makerere/asr1/conf/pitch.conf
@@ -0,0 +1 @@
--sample-frequency=16000
12 changes: 12 additions & 0 deletions egs2/makerere/asr1/conf/queue.conf
@@ -0,0 +1,12 @@
# Default configuration
command qsub -v PATH -cwd -S /bin/bash -j y -l arch=*64*
option name=* -N $0
option mem=* -l mem_free=$0,ram_free=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -pe smp $0
option num_threads=1 # Do not add anything to qsub_opts
option max_jobs_run=* -tc $0
option num_nodes=* -pe mpi $0 # You must set this PE as allocation_rule=1
default gpu=0
option gpu=0
option gpu=* -l gpu=$0 -q g.q
14 changes: 14 additions & 0 deletions egs2/makerere/asr1/conf/slurm.conf
@@ -0,0 +1,14 @@
# Default configuration
command sbatch --export=PATH
option name=* --job-name $0
option time=* --time $0
option mem=* --mem-per-cpu $0
option mem=0
option num_threads=* --cpus-per-task $0
option num_threads=1 --cpus-per-task 1
option num_nodes=* --nodes $0
default gpu=0
option gpu=0 -p cpu
option gpu=* -p gpu --gres=gpu:$0 -c $0 # Recommended to allocate at least as many CPUs as GPUs
# note: the --max-jobs-run option is supported as a special case
# by slurm.pl and you don't have to handle it in the config file.
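The option lines in slurm.conf map the generic `<cmd>.pl` options onto sbatch flags. A hedged sketch of that mapping for the entries in this file (the helper name is illustrative, not part of slurm.pl):

```python
def sbatch_flags(mem=None, num_threads=1, gpu=0):
    """Mirror the option -> flag mapping declared in conf/slurm.conf."""
    flags = []
    if mem:  # option mem=* --mem-per-cpu $0 ; mem=0 adds nothing
        flags += ["--mem-per-cpu", str(mem)]
    flags += ["--cpus-per-task", str(num_threads)]
    if gpu == 0:   # option gpu=0 -p cpu
        flags += ["-p", "cpu"]
    else:          # option gpu=* -p gpu --gres=gpu:$0 -c $0
        flags += ["-p", "gpu", f"--gres=gpu:{gpu}", "-c", str(gpu)]
    return flags

print(sbatch_flags(mem="4G", gpu=1))
```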
63 changes: 63 additions & 0 deletions egs2/makerere/asr1/conf/train_asr_demo_branchformer.yaml
@@ -0,0 +1,63 @@
batch_type: folded
batch_size: 64
accum_grad: 2 # gradient accumulation steps
max_epoch: 100
patience: none
init: xavier_uniform
best_model_criterion: # criterion to save best models
- - valid
- acc
- max
keep_nbest_models: 10 # save nbest models and average these checkpoints
use_amp: true # whether to use automatic mixed precision
num_att_plot: 0 # do not save attention plots to save time in the demo
num_workers: 2 # number of workers in dataloader

encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true

decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 1024
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1

model_conf:
ctc_weight: 0.3 # joint CTC/attention training
lsm_weight: 0.1 # label smoothing weight
length_normalized_loss: false

optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr # linear warmup, then inverse-square-root decay
scheduler_conf:
warmup_steps: 800
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
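The `warmuplr` scheduler in this config increases the learning rate linearly for `warmup_steps` and then decays it with the inverse square root of the step (a Noam-style schedule). A sketch under that assumption, using this config's `lr: 0.001` and `warmup_steps: 800`:

```python
def warmup_lr(step, base_lr=0.001, warmup_steps=800):
    """Noam-style warmup: ramps up linearly, peaks at base_lr when
    step == warmup_steps, then decays proportionally to 1/sqrt(step)."""
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)

print(warmup_lr(800))  # peak learning rate
```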
1 change: 1 addition & 0 deletions egs2/makerere/asr1/db.sh
61 changes: 61 additions & 0 deletions egs2/makerere/asr1/local/data.sh
@@ -0,0 +1,61 @@
#!/usr/bin/env bash
# Set bash to 'debug' mode: it will exit on
# -e 'error', -u 'undefined variable', and -o pipefail 'error in pipeline'
set -e
set -u
set -o pipefail

log() {
local fname=${BASH_SOURCE[1]##*/}
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
}
SECONDS=0

stage=1
stop_stage=100

log "$0 $*"
. utils/parse_options.sh

. ./db.sh
. ./path.sh
. ./cmd.sh

if [ $# -ne 0 ]; then
log "Error: No positional arguments are required."
exit 2
fi

if [ -z "${MAKERERE}" ]; then
log "Fill the value of 'MAKERERE' in db.sh"
exit 1
fi

train_set="train_nodev"
train_dev="train_dev"
ndev_utt=200


if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
log "stage 1: Data preparation"
mkdir -p data/{train,test}


# Prepare data in the Kaldi format, including three files:
# text, wav.scp, utt2spk
python3 local/data_prep.py ${MAKERERE} sph2pipe

for x in test train; do
for f in text wav.scp utt2spk; do
sort data/${x}/${f} -o data/${x}/${f}
done
utils/utt2spk_to_spk2utt.pl data/${x}/utt2spk > "data/${x}/spk2utt"
done

# make a dev set
utils/subset_data_dir.sh --first data/train "${ndev_utt}" "data/${train_dev}"
n=$(($(wc -l < data/train/text) - ndev_utt))
utils/subset_data_dir.sh --last data/train "${n}" "data/${train_set}"
fi

log "Successfully finished. [elapsed=${SECONDS}s]"
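The dev-set split at the end of stage 1 takes the first `ndev_utt` (200) utterances as `train_dev` and the remaining `n` as `train_nodev`, mirroring the `--first`/`--last` calls to `utils/subset_data_dir.sh`. A sketch of the arithmetic (hypothetical helper):

```python
def split_train_dev(utt_ids, ndev_utt=200):
    """First ndev_utt utterances -> dev set; the rest -> training set."""
    return utt_ids[:ndev_utt], utt_ids[ndev_utt:]

# e.g. 1000 utterances -> 200 dev, 800 train
utts = [f"utt{i:03d}" for i in range(1000)]
dev, train = split_train_dev(utts)
print(len(dev), len(train))  # → 200 800
```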
51 changes: 51 additions & 0 deletions egs2/makerere/asr1/local/data_prep.py
@@ -0,0 +1,51 @@
import csv
import glob
import os
import random
import sys

if __name__ == "__main__":

    if len(sys.argv) != 3:
        print("Usage: python data_prep.py [root] [sph2pipe]")
        sys.exit(1)

    root = sys.argv[1]
    sph2pipe = sys.argv[2]

    all_audio_list = glob.glob(
        os.path.join(root, "makerere_radio_dataset/transcribed/dataset", "*.wav")
    )
    random.shuffle(all_audio_list)

    # Map filename -> transcript from the cleaned transcription CSV
    df = {}
    with open(
        "downloads/makerere_radio_dataset/transcribed/cleaned.csv", "r"
    ) as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # skip the header row
        for row in reader:
            df[row[0]] = row[2]

    for x in ["train", "test"]:
        # 80/20 train/test split over the shuffled audio list
        if x == "train":
            audio_list = all_audio_list[0 : int(len(all_audio_list) * 0.8)]
        else:
            audio_list = all_audio_list[int(len(all_audio_list) * 0.8) :]

        with open(os.path.join("data", x, "text"), "w") as text_f, open(
            os.path.join("data", x, "wav.scp"), "w"
        ) as wav_scp_f, open(os.path.join("data", x, "utt2spk"), "w") as utt2spk_f:
            i = 0
            for audio_path in audio_list:
                filename = os.path.basename(audio_path)
                # speaker ID: the 9th character of the filename stem
                speaker = filename.split(".")[0][8]
                # skip files with a missing or empty transcript
                if filename not in df or len(df[filename]) == 0:
                    continue
                transcript = df[filename]
                uttid = filename[:-4]  # strip ".wav", e.g. "sk-o73a"
                wav_scp_f.write(f"{uttid} {audio_path}\n")
                text_f.write(f"{uttid} {transcript}\n")
                utt2spk_f.write(f"{uttid} {speaker}\n")
                i = i + 1
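For reference, the three Kaldi-format files written per split each carry one space-separated line per utterance, keyed by utterance ID. A minimal sketch of the line formats (the helper name and example values are made up):

```python
def kaldi_entry(uttid, wav_path, speaker, transcript):
    """One utterance's line in each of the three files data_prep.py writes."""
    return {
        "wav.scp": f"{uttid} {wav_path}",   # utterance ID -> audio path
        "text":    f"{uttid} {transcript}", # utterance ID -> transcript
        "utt2spk": f"{uttid} {speaker}",    # utterance ID -> speaker ID
    }

entry = kaldi_entry("clip0001", "/data/clip0001.wav", "a", "mwasuze mutya")
print(entry["utt2spk"])  # → clip0001 a
```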
Empty file.
1 change: 1 addition & 0 deletions egs2/makerere/asr1/path.sh
