A framework for seamlessly fine-tuning and deploying the Whisper model, developed to advance Automatic Speech Recognition (ASR), translation, and transcription capabilities for African languages.
- **Fine-Tuning**: Fine-tune the Whisper model on any audio dataset from Hugging Face, e.g., Mozilla's Common Voice, FLEURS, LibriSpeech, or your own custom public/private dataset.
- **Metrics Monitoring**: View training run metrics on Wandb.
- **Production Deployment**: Seamlessly containerize and deploy the model inference endpoint for real-world applications.
- **Model Optimization**: Utilize CTranslate2 for efficient model optimization, ensuring faster inference times.
- **Word-Level Transcriptions**: Produce detailed word-level transcriptions and translations, complete with timestamps.
- **Multi-Speaker Diarization**: Identify and separate speakers in multi-speaker audio using diarization techniques.
- **Alignment Precision**: Improve transcription and translation accuracy by aligning outputs with Wav2Vec2 models.
- **Reduced Hallucination**: Leverage Voice Activity Detection (VAD) to minimize hallucination and improve transcription clarity.
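For the metrics mentioned above, word error rate (WER) is the standard ASR metric typically logged to Wandb during fine-tuning. Here is a minimal pure-Python sketch of how it is computed; in practice the framework may rely on a library such as `evaluate` or `jiwer` rather than this hand-rolled version:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One deleted word out of a 6-word reference ≈ 0.167
print(wer("the cat sat on the mat", "the cat sat on mat"))
```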
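The CTranslate2 optimization step starts from a converted checkpoint. As an illustrative command-line fragment (the model name, output directory, and quantization choice here are assumptions, not the framework's actual defaults), the converter that ships with CTranslate2 can be invoked as:

```shell
# Convert a Whisper checkpoint (Hugging Face format) to CTranslate2,
# quantizing weights to float16 for faster inference.
ct2-transformers-converter \
  --model openai/whisper-small \
  --output_dir whisper-small-ct2 \
  --quantization float16
```

The resulting directory can then be loaded by CTranslate2-based inference engines such as faster-whisper.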
The framework implements the following papers:
- **Robust Speech Recognition via Large-Scale Weak Supervision**: speech processing systems trained to predict large amounts of internet audio transcripts, scaled to 680,000 hours of multilingual and multitask supervision.
- **WhisperX: Time-Accurate Speech Transcription of Long-Form Audio**: time-accurate speech recognition with word-level timestamps.
- **Pyannote.audio: Neural building blocks for speaker diarization**: advanced speaker diarization capabilities.
- **Efficient and High-Quality Neural Machine Translation with OpenNMT**: efficient neural machine translation and model acceleration.
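To illustrate how diarization output (as in the pyannote.audio paper) combines with word-level timestamps (as in WhisperX), here is a minimal pure-Python sketch that labels each word with the speaker turn it overlaps most. The dictionaries used here are hypothetical simplifications for illustration, not the framework's actual API:

```python
def assign_speakers(words, turns):
    """Label each timestamped word with the speaker whose turn overlaps it most."""
    labeled = []
    for word in words:  # word: {"word": str, "start": float, "end": float}
        best_speaker, best_overlap = None, 0.0
        for turn in turns:  # turn: {"speaker": str, "start": float, "end": float}
            overlap = min(word["end"], turn["end"]) - max(word["start"], turn["start"])
            if overlap > best_overlap:
                best_speaker, best_overlap = turn["speaker"], overlap
        labeled.append({**word, "speaker": best_speaker})
    return labeled

words = [{"word": "habari", "start": 0.0, "end": 0.4},
         {"word": "yako",   "start": 0.5, "end": 0.9},
         {"word": "nzuri",  "start": 1.6, "end": 2.0}]
turns = [{"speaker": "SPEAKER_00", "start": 0.0, "end": 1.0},
         {"speaker": "SPEAKER_01", "start": 1.0, "end": 2.5}]
print(assign_speakers(words, turns))
```

In a real pipeline the `words` would come from the aligned transcription and the `turns` from the diarization model.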
For more details, you can refer to the Whisper ASR model paper.
Refer to the Documentation to get started.
Contributions are welcome and encouraged.
Before contributing, please take a moment to review our Contribution Guidelines for important information on how to contribute to this project.
If you're unsure about anything or need assistance, don't hesitate to reach out to us or open an issue to discuss your ideas.
We look forward to your contributions!
This project is licensed under the MIT License - see the LICENSE file for details.
For any enquiries, please reach out to me at [email protected].