This repository contains a Python implementation of an answer provider backend server for the HarpIA Ajax Moodle plugin.
- `ConstantAnswerProvider`: always generates the same answer regardless of the input.
- `EchoAnswerProvider`: always echoes the input, optionally converting it to uppercase.
- `GPTAnswerProvider`: uses the OpenAI API to obtain answers from GPT models.
- `OllamaAnswerProvider`: uses the Ollama API to obtain answers from several local models.
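To make the provider roles above concrete, here is a minimal sketch of how the two simplest providers could behave. This is illustrative only: the base-class name `AnswerProvider` and the `answer()` method signature are assumptions for demonstration, not the repository's actual interface.

```python
class AnswerProvider:
    """Assumed minimal interface: one method mapping a message to an answer."""

    def answer(self, message: str) -> str:
        raise NotImplementedError


class ConstantAnswerProvider(AnswerProvider):
    """Returns the same answer regardless of the input."""

    def __init__(self, constant_answer: str):
        self.constant_answer = constant_answer

    def answer(self, message: str) -> str:
        # The input message is deliberately ignored.
        return self.constant_answer


class EchoAnswerProvider(AnswerProvider):
    """Echoes the input, optionally converting it to uppercase."""

    def __init__(self, uppercase: bool = False):
        self.uppercase = uppercase

    def answer(self, message: str) -> str:
        return message.upper() if self.uppercase else message
```

The GPT and Ollama providers would implement the same interface but delegate the `answer()` call to their respective remote APIs.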
- Python ≥ 3.11;
- Docker (recommended).
- Create a configuration file: make a copy of the `config_TEMPLATE.py` file and edit it to choose the models that will be provided, following the instructions in the file.
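As a rough orientation, a configuration file might map model names (such as the `ECHO` name used in the commands below) to provider settings. The keys and structure shown here are hypothetical assumptions; the actual schema is defined in `config_TEMPLATE.py` and its inline instructions take precedence.

```python
# Hypothetical configuration sketch -- the real key names and structure
# are those documented in config_TEMPLATE.py, not necessarily these.
MODELS = {
    "ECHO": {
        "provider": "EchoAnswerProvider",
        "options": {"uppercase": False},
    },
    "CONSTANT": {
        "provider": "ConstantAnswerProvider",
        "options": {"constant_answer": "Hello from HarpIA!"},
    },
}
```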
- Build the Docker image:

  ```shell
  docker build -t harpia-model-gateway:1.0 -f containers/prod/Dockerfile .
  ```
- Test an answer provider by interacting with it in a terminal (replace `./config/config1.py` with the path to your configuration file, and replace `ECHO` with the name of the desired model as specified in the configuration):

  ```shell
  docker run --rm -it --name harpia-gateway -v './config/config1.py':/cfg.py harpia-model-gateway:1.0 --config=/cfg.py cli --provider='ECHO'
  ```
- Start the server (replace `./config/config1.py` with the path to your configuration file; optionally replace all instances of `42774` with the desired port):

  ```shell
  docker run --rm -it --name harpia-gateway -v './config/config1.py':/cfg.py -p 42774:42774 harpia-model-gateway:1.0 --config=/cfg.py server --host=0.0.0.0 --port=42774 --debug
  ```