Example translation app built on top of Prediction Guard, extendable with DeepL, OpenAI, etc.

To run:

- Copy `config_template.yml` to `config.yml`
- Fill in or modify `config.yml` as needed/appropriate
- Run `python main.py` (alternatively run in Docker)
This translation service can integrate with custom translation engines, provided they fulfill the expected API contract. When adding a custom engine, an entry like the following is needed in `config.yml`:
```yaml
custom:
  models:
    model_name:
      url: https://your-custom-url
      api_key: your-custom-api-key
      languages:
        - eng
        - fra
        - deu
        - cmn
```
The `config.yml` entry should have:

- `url`: The endpoint location to call the engine
- `api_key`: The API key for the engine endpoint, which will be added as `x-api-key` in the API call headers
- `languages`: List of supported ISO 639-3 language codes
The custom translation engine endpoint should expect a JSON body that looks like:

```json
{
  "text": "The sky is blue",
  "model": "nllb",
  "source_lang": "eng",
  "target_lang": "fra"
}
```
Where:

- `text` (required): The text that will be translated
- `model` (required): The name of the model to use (as some endpoints integrate multiple models)
- `target_lang` (required): The ISO 639-3 (three-letter) code specifying the target language
- `source_lang` (optional): The ISO 639-3 (three-letter) code specifying the source language
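A request matching this contract can be assembled and sent with a short helper. The following is an illustrative sketch, not code from this repository; the `build_headers`, `build_body`, and `translate` names are made up for the example:

```python
import json
import urllib.request


def build_headers(api_key):
    """Headers for a custom engine call; the key goes in the x-api-key header."""
    return {"Content-Type": "application/json", "x-api-key": api_key}


def build_body(text, model, target_lang, source_lang=None):
    """JSON body matching the expected contract; source_lang is optional."""
    payload = {"text": text, "model": model, "target_lang": target_lang}
    if source_lang is not None:
        payload["source_lang"] = source_lang
    return json.dumps(payload).encode("utf-8")


def translate(url, api_key, text, model, target_lang, source_lang=None):
    """POST a translation request and return the 'translation' field."""
    req = urllib.request.Request(
        url,
        data=build_body(text, model, target_lang, source_lang),
        headers=build_headers(api_key),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translation"]
```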
The custom translation engine endpoint should respond with a JSON body including a `translation` field containing the text translation (string).
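For testing, the engine side of the contract can be stubbed out with Python's standard library HTTP server. This is a hypothetical stand-in, not a real engine: `translate_stub` just echoes the input text tagged with the target language instead of translating it:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def translate_stub(request):
    """Placeholder 'engine': echo the text, tagged with the target language."""
    return {"translation": f"[{request['target_lang']}] {request['text']}"}


class EngineHandler(BaseHTTPRequestHandler):
    """Minimal stand-in for a custom translation engine endpoint."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real engine would run request["model"] on request["text"];
        # here translate_stub produces a placeholder response.
        response = json.dumps(translate_stub(request)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)


# To serve: HTTPServer(("0.0.0.0", 8000), EngineHandler).serve_forever()
```

Pointing a `custom` entry's `url` at this server lets the app be exercised end to end without a real translation backend.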