Attempting to adapt to English #6
I'm successfully running with Fisher using the setup described here: http://kaldi.sourceforge.net/online_decoding.html. I still haven't been able to figure out how to merge this with your setup (as above).
I'm planning to add an English transcription system and a language ID module, so that the correct transcription path would be selected automatically for each utterance (with an option to force a specific language for the whole recording). Hopefully next week.
Happy to help if I can. I'm taking a look at an open-source REVERB system which is based on Kaldi: http://reverb2014.dereverberation.com/workshop/reverb2014-papers/1569884459.pdf
New to this, but looking to work on English as well, same as @aolney. You mention the language models are pruned; adding guidance on building the LMs would definitely help expand the supported language pool faster. Happy to help if I can as well!
I have created a language model construction outline which might be interesting for you: http://cmusphinx.sourceforge.net/wiki/tutoriallmadvanced. Overall, language model training is a pretty complex process with some specifics. To use those models with Fisher you have to recompile the graph, so it will take some preparation.
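For anyone following along, here is a rough sketch of the usual Kaldi flow for swapping in a new language model, assuming SRILM is installed. `corpus.txt`, `data/local/dict/lexicon.txt` and `exp/tri4a` are placeholders for your own text corpus, lexicon and acoustic model, not files from this project.

```sh
# Train a trigram LM on your text corpus with SRILM (corpus.txt is a placeholder).
ngram-count -order 3 -interpolate -kndiscount -text corpus.txt -lm lm.arpa

# Prune it so the decoding graph stays a manageable size, then gzip it.
ngram -lm lm.arpa -prune 1e-7 -write-lm lm_pruned.arpa
gzip lm_pruned.arpa

# Build a new lang directory containing G.fst compiled from the new LM.
utils/format_lm.sh data/lang lm_pruned.arpa.gz data/local/dict/lexicon.txt data/lang_test

# Recompile the decoding graph against the acoustic model (exp/tri4a is a placeholder).
utils/mkgraph.sh data/lang_test exp/tri4a exp/tri4a/graph
```

The pruning threshold is a trade-off: a larger threshold gives a smaller graph and faster decoding at some cost in accuracy.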
Sorry for the delay. I started thinking that the English adaptation I'm planning is probably not going to be satisfactory for most people. I think that those who are waiting for the English version want to use it in practice for transcribing actual data (interviews, speeches, whatever), and expect high accuracy. I'm not going to implement an English system that will be really usable for this, because I don't have training data for English.
So it's pretty straightforward to set up LIUM and Kaldi's Fisher English pre-built models to get a rough-and-ready transcription system going (I've basically already done this). What I don't have are the improvements in your system, for example multi-pass decoding. Would you be interested in providing some docs on how that would be accomplished using your setup?
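For context, "multi-pass decoding" in Kaldi recipes usually means decoding with a small pruned LM and then rescoring the lattices with a larger LM. The sketch below uses the standard Kaldi scripts with placeholder directories; it is not this transcriber's actual pipeline, just an illustration of the idea.

```sh
# First pass: decode with the graph built from the small/pruned LM.
# exp/tri4a and data/test are placeholders for your model and test data.
steps/decode.sh --nj 8 exp/tri4a/graph data/test exp/tri4a/decode_test

# Second pass: rescore the lattices with a larger LM.
# data/lang_test_small and data/lang_test_big are lang directories built
# from the pruned and full LMs respectively.
steps/lmrescore.sh data/lang_test_small data/lang_test_big \
  data/test exp/tri4a/decode_test exp/tri4a/decode_test_rescored
```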
Hey, guys. We've been working with this system too, and have some tutorials available, as well as an example of an English-adapted system based on the Kaldi tedlium experiment.
- The language model building tutorial that applies to Kaldi experiments: http://speechkitchen.org/kaldi-language-model-building/
- The adapted Kaldi Offline Transcriber for English, also in a VM, using tedlium: http://speechkitchen.org/tedlium-vm/
- Docker and Vagrant versions, including ones based on switchboard and tedlium, are on our GitHub: https://github.com/srvk/srvk-sandbox

We'd be thrilled to have people take a look, try them out, and provide feedback on any of these VMs!
Folks, did you end up figuring out how to set this up with other models? What exactly is needed for LM / PRUNED_LM / COMPOUNDER_LM and VOCAB? Is that all we need to modify? Cheers,
We have changed it to use other models by brute force, taking out [...]. In particular, for English we do only one pass of decoding, with only [...]. I recently updated a system to use even more different decoding: [...]. You can find the resulting code on the SRVK repo here: [...]. The changes occur primarily in the Makefile; I have copied the two relevant sections below.

```makefile
GRAPH_DIR?=$(EESEN_ROOT)/asr_egs/tedlium-fbank/data/lang_phn_test_pruned.lm3

# FBANK calculation
# example target: make build/trans/HVC000037/fbank
# note: the % pattern matches e.g. HVC000037
build/trans/%/fbank: build/trans/%/spk2utt

# Decode with Eesen & 8kHz models
# example target: make build/trans/HVC000037/eesen8/decode/log
build/trans/%/eesen8/decode/log: build/trans/%/spk2utt build/trans/%/fbank
```

Eric Riebling, Interactive Systems Lab
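To make the earlier LM / PRUNED_LM / COMPOUNDER_LM / VOCAB question more concrete, here is a purely hypothetical override sketch: the variable names come from the transcriber's Makefile, but the English file names below are invented placeholders, and swapping any of them still means recompiling the decoding graph as discussed above.

```makefile
# Hypothetical English resources; these file names are placeholders,
# not files shipped with the transcriber.

# Full language model (ARPA, gzipped), presumably used for rescoring
LM?=language_model/english.arpa.gz

# Pruned LM used to build the first-pass decoding graph
PRUNED_LM?=language_model/english.pruned.arpa.gz

# LM for the compound-word reconstruction step (mainly relevant for Estonian;
# for English this step could likely be disabled)
COMPOUNDER_LM?=language_model/english.compounder.arpa.gz

# Word list / pronunciation vocabulary
VOCAB?=language_model/english.vocab
```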
Hey @alumae, do you know if I can use the files already generated from http://www.openslr.org/11/? I'm looking to use existing files to just transcribe an English audio file to text, thank you.
Let me see if I understand correctly: you want to use language models from OpenSLR instead of the ones included with the Eesen offline transcriber? If all you want to do is transcribe English audio to text, http://github.com/srvk/eesen-offline-transcriber includes models and is intended to do exactly this. On the other hand, if you wish to build your own language model from OpenSLR sources, that would take some work, but it is not impossible. Some instructions on adapting the Eesen Offline Transcriber language model are here: http://speechkitchen.org/kaldi-language-model-building/
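If you do go the build-your-own route with OpenSLR, here is a rough sketch of the first step, assuming the pruned 3-gram ARPA file currently listed on the OpenSLR page and an existing Kaldi lang directory whose `words.txt` matches your lexicon:

```sh
# Fetch a pre-built LibriSpeech ARPA LM from OpenSLR resource 11
# (file name as listed on the page at the time of writing).
wget http://www.openslr.org/resources/11/3-gram.pruned.1e-7.arpa.gz

# Convert the ARPA LM to G.fst; data/lang/words.txt must come from your
# existing lang directory, and data/lang_test is a copy of it for the new LM.
gunzip -c 3-gram.pruned.1e-7.arpa.gz | \
  arpa2fst --disambig-symbol='#0' --read-symbol-table=data/lang/words.txt \
    - data/lang_test/G.fst
```

After that, the decoding graph still has to be recompiled, as in the tutorial linked above.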
Please note that the URLs above have changed to speech-kitchen.org.
I'm a Kaldi noob, but I'm interested in using your setup for English. I looked at your other project and the Kaldi discussion boards, and this model seems like a good fit:
http://kaldi-asr.org/downloads/build/8/trunk/
However, I'm not sure how to adapt your Makefile to use the new model. It seems I would need to at least swap out these lines:
but I'm not finding comparable files in Fisher.
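(For anyone hitting the same wall: the kaldi-asr.org build tarballs keep the standard Kaldi directory layout, so a quick way to see what is actually in the Fisher build is to search for the usual file names. This is only a generic sketch; the directory names inside that particular build may differ.)

```sh
# Locate the pieces a decoding setup typically needs inside the unpacked build.
find . -name final.mdl   # acoustic model
find . -name HCLG.fst    # compiled decoding graph
find . -name words.txt   # word symbol table matching the graph
find . -name final.mat   # LDA/MLLT feature transform, if the model uses one
```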