Commit update (#82)
Judyxujj authored Jun 9, 2024
1 parent 6cd804a commit 8872168
Showing 4 changed files with 1,274 additions and 1 deletion.
12 changes: 11 additions & 1 deletion 2024-dynamic-encoder-size/README.md
This folder contains configs and code related to the publication:

Paper: [Dynamic Encoder Size Based on Data-Driven Layer-wise Pruning for Speech Recognition]()

We use [RETURNN](https://github.com/rwth-i6/returnn) for training and our setups are based on [Sisyphus](https://github.com/rwth-i6/sisyphus).

We use model parts from [i6-models](https://github.com/rwth-i6/i6_models/tree/jing-dynamic-encoder-size).


### TED-LIUM-v2 Simple-Top-K

The `ConformerCTCModel`, `ConformerCTCConfig`, and `train_step` used in the RETURNN config are defined [here](https://github.com/rwth-i6/i6_experiments/blob/main/users/jxu/experiments/ctc/tedlium2/pytorch_networks/dynamic_encoder_size/simple_topk_refactored/jointly_train_simple_top_k_layerwise.py).
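Simple-Top-K prunes layer-wise: each encoder layer carries an importance score, and only the k highest-scoring layers are kept while the model and scores are trained jointly. The following is a minimal plain-Python sketch of just the top-k mask selection; the function name and list-based form are illustrative assumptions, not the repository's actual implementation:

```python
def topk_layer_mask(scores, k):
    """Illustrative sketch (not the repo code): return a 0/1 keep-mask over
    encoder layers, keeping the k layers with the highest importance scores
    and pruning the rest."""
    # indices of the k most important layers
    keep = set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])
    return [1.0 if i in keep else 0.0 for i in range(len(scores))]

# e.g. keep the 2 most important of 4 layers
mask = topk_layer_mask([0.2, 0.9, 0.1, 0.5], k=2)
```

In the actual setup such a mask would multiply each layer's contribution inside the Conformer stack; during joint training a relaxation (e.g. a straight-through estimator) is typically needed so that pruned layers' scores still receive gradients.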


### TED-LIUM-v2 Iterative-Zero-Out
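Iterative-Zero-Out, as the name suggests, removes layers in stages rather than in one shot: at each pruning step the currently least important layers are permanently zeroed out, and training continues with the survivors. A hedged plain-Python sketch of such a schedule (names and structure are illustrative assumptions, not the repository code):

```python
def iterative_zero_out(scores, num_to_prune, per_step=1):
    """Illustrative sketch (not the repo code): repeatedly zero out the
    per_step lowest-scoring layers that are still alive, until
    num_to_prune layers have been removed. Returns the final 0/1 mask."""
    mask = [1.0] * len(scores)
    removed = 0
    while removed < num_to_prune:
        alive = [i for i, m in enumerate(mask) if m == 1.0]
        for i in sorted(alive, key=lambda i: scores[i])[:per_step]:
            mask[i] = 0.0  # this layer is permanently pruned
            removed += 1
            if removed == num_to_prune:
                break
    return mask

# prune the 2 least important of 4 layers, one per step
mask = iterative_zero_out([0.3, 0.1, 0.9, 0.2], num_to_prune=2)
```

In training, the steps would be interleaved with further optimization so the remaining layers can adapt after each removal.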
