- Docker: [Installation Guide](https://docs.docker.com/engine/install/)
- Docker Compose: [Installation Guide](https://docs.docker.com/compose/install/)
- Compatible with Linux and Windows hosts
- Ensure port 8000 is not already in use
- Project can be run on either CPU or GPU
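A quick way to confirm the prerequisites on a Linux host (a sketch; adapt to your distribution) is:

```bash
# Check that Docker and Docker Compose are installed
docker --version
docker-compose --version

# Port 8000 should not appear in the listening sockets list
ss -ltn | grep ':8000' || echo "port 8000 is free"
```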
The following table outlines the recommended hardware requirements for each Whisper model based on typical usage scenarios. Please ensure that your system meets or exceeds these specifications for optimal performance.
| Model | Size (GB) | Minimum RAM (GB) | Recommended RAM (GB) | GPU VRAM (GB) | Notes |
| --- | --- | --- | --- | --- | --- |
| `tiny` | ~0.07 | 2 | 4 | 1 | Suitable for lightweight tasks and low resource usage. |
| `base` | ~0.14 | 4 | 6 | 2 | Good for basic transcription and smaller workloads. |
| `small` | ~0.46 | 6 | 8 | 4 | Ideal for moderate tasks, offering a balance between performance and accuracy. |
| `medium` | ~1.5 | 8 | 12 | 8 | Recommended for larger tasks with higher accuracy demands. |
| `large-v2` | ~2.88 | 10 | 16 | 10 | Best for high-performance tasks and large-scale transcription. |
| `large-v3` | ~2.88 | 12 | 16+ | 10+ | Highest accuracy and resource usage. Ideal for GPU-accelerated environments. |
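If you are unsure which model fits your machine, checking RAM and VRAM up front is a quick sanity test. The commands below are a sketch for a Linux host with NVIDIA drivers installed:

```bash
# Total and available system memory
free -h

# GPU name and total VRAM (requires the NVIDIA driver and nvidia-smi)
nvidia-smi --query-gpu=name,memory.total --format=csv
```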
> [!TIP]
> For models running on GPU, using CUDA-enabled GPUs with sufficient VRAM is recommended to significantly improve performance. CPU-based inference may require additional RAM and processing time.
> [!WARNING]
> By default, the `base`, `base.en`, and `large-v3` models are loaded. Models can be configured from `backend/Dockerfile`; however, the `base` model must not be removed, as it is statically configured to be the default model.
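How the models are declared depends on the actual contents of `backend/Dockerfile`; purely as a hypothetical sketch, pre-downloading them at build time with the `whisper` package could look like this:

```dockerfile
# Hypothetical sketch -- adapt to the real backend/Dockerfile.
# Each load_model() call downloads and caches the named model into the image.
RUN python -c "import whisper; whisper.load_model('base')" && \
    python -c "import whisper; whisper.load_model('base.en')" && \
    python -c "import whisper; whisper.load_model('large-v3')"
```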
- Audio: `.mp3`, `.wav`, `.flac`, `.m4a`, etc.
- Video: `.mp4`, `.mkv`, `.avi`, `.mov`, etc.
- Users can export the results in `.txt`, `.json`, `.srt`, or `.vtt` formats.
> [!NOTE]
> The project will run on GPU by default. To run on CPU, use `docker-compose.cpu.yml` instead.
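For example, each `docker-compose` command in the steps below can be pointed at the CPU file with the `-f` flag:

```bash
docker-compose -f docker-compose.cpu.yml build
docker-compose -f docker-compose.cpu.yml up -d
```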
- Clone this repository and navigate to the project folder:

  ```bash
  git clone https://github.com/NotYuSheng/Transcribe-Translate.git
  cd Transcribe-Translate
  ```

- Build the Docker images:

  ```bash
  docker-compose build
  ```

- Start the containers:

  ```bash
  docker-compose up -d
  ```

- Access the web page from the host:

  ```
  <host-ip>:8000
  ```
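Before opening a browser, a quick HTTP check from any machine on the network confirms the stack is serving (replace `<host-ip>` with the host's address):

```bash
# Expect an HTTP status line once the containers are ready
curl -I http://<host-ip>:8000
```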
The application backend is configured to handle 4 concurrent users via the `--workers` option in `backend/Dockerfile`:

```dockerfile
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--timeout-keep-alive", "600", "--workers", "4"]
```
The app supports file uploads of up to 5 GB. This is configured by setting `client_max_body_size` in `nginx/nginx.conf`:

```nginx
client_max_body_size 5G;
```
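To see the limit in action, you can try an upload just over 5 GB and watch nginx reject it before it reaches the backend. The `/upload` route below is hypothetical; substitute the actual API path used by the web UI:

```bash
# Create a sparse ~6 GB test file instantly
truncate -s 6G too_big.bin

# Hypothetical endpoint -- adjust to the real upload route
curl -i -F "file=@too_big.bin" http://<host-ip>:8000/upload
# nginx should respond with "413 Request Entity Too Large"
```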
To accommodate large uploads and longer processing times, the backend uses a 10-minute keep-alive setting in `backend/Dockerfile`, paired with the following timeout settings in `nginx/nginx.conf`:

```dockerfile
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--timeout-keep-alive", "600", "--workers", "4"]
```

```nginx
proxy_read_timeout 600s;
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
send_timeout 600s;
```
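If long jobs still time out, streaming the container logs during a large transcription shows whether the proxy or the backend gave up first:

```bash
# Follow logs from all services while a long job runs
docker-compose logs -f
```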
> [!CAUTION]
> This project is intended to be used on a local network by trusted users, so no rate limit is configured and the project is vulnerable to request floods. Consider switching to `slowapi` if this is unacceptable.
> [!TIP]
> For transcribing English inputs, the `.en` versions of the models are recommended.