A backend that serves a PyTorch model for image inference.
| Tech    | Version |
|---------|---------|
| Python  | 3.9.5   |
| Flask   | 2.1.2   |
| PyTorch | 1.12.0  |
The server exposes a single POST endpoint at `{hostname}/predict`.
Using cURL:
`curl -X POST -F file=@"<path to img>" {hostname}/predict`
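Equivalently, from Python, a minimal client sketch using the `requests` library (the hostname and image path are placeholders, and the response is only assumed to be JSON here, not confirmed by the server code):

```python
import requests

# Hypothetical hostname and image path -- replace with your own.
HOSTNAME = "http://localhost:5000"
IMAGE_PATH = "cat.jpg"

with open(IMAGE_PATH, "rb") as f:
    # The endpoint expects a multipart form field named "file",
    # matching the -F file=@... flag in the curl example above.
    response = requests.post(f"{HOSTNAME}/predict", files={"file": f})

response.raise_for_status()
# The response body is assumed to be JSON containing the prediction.
print(response.json())
```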
This setup has only been tested on Python 3.9.5.
- Install Python dependencies
`pip install -r requirements.txt`
- (Optional) Edit the model and label map paths in `app.py` (see the sketch after this list)
`MODEL_PATH = 'models/resnet_large_resize_150_cpu.model'`
`LABEL_MAP_PATH = 'label_maps/full_label_map.json'`
Note: The label map must correspond to the model used (as indicated in the file name).
- Run the server
`flask run`
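For reference, here is a minimal sketch of what the `/predict` endpoint in `app.py` could look like. The preprocessing (resize to 150, inferred from the model file name), the label map format (a JSON dict mapping class indices to names), and the assumption that the `.model` file is a full pickled `nn.Module` are illustrative guesses, not the repository's actual code:

```python
import io
import json

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)

# Paths as configured at the top of app.py.
MODEL_PATH = 'models/resnet_large_resize_150_cpu.model'
LABEL_MAP_PATH = 'label_maps/full_label_map.json'

# The label map is assumed to be a JSON dict mapping class indices to names,
# e.g. {"0": "cat", "1": "dog", ...}.
with open(LABEL_MAP_PATH) as f:
    label_map = json.load(f)

# Assumes the .model file is a full pickled nn.Module saved for CPU.
model = torch.load(MODEL_PATH, map_location='cpu')
model.eval()

# Resize to 150x150 is inferred from the model file name; adjust to the real pipeline.
preprocess = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.ToTensor(),
])

@app.route('/predict', methods=['POST'])
def predict():
    file = request.files['file']
    image = Image.open(io.BytesIO(file.read())).convert('RGB')
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 150, 150)
    with torch.no_grad():
        logits = model(batch)
    class_idx = int(logits.argmax(dim=1).item())
    return jsonify({'class_id': class_idx,
                    'class_name': label_map.get(str(class_idx), 'unknown')})
```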
Use the `heroku` branch if you're following these steps.
- Set up a new Heroku app
`heroku login -i`
`heroku create <app name>`
- Set up the Heroku remote and push
`heroku git:remote -a <app name>`
`git push heroku heroku:master`
- Set environment variables for the model and label map
`heroku config:set MODEL_PATH=models/resnet_large_resize_150_cpu.model`
`heroku config:set LABEL_MAP_PATH=label_maps/full_label_map.json`
The server should now be up.
Note: Feel free to change the model and label map paths; as mentioned above, make sure they correspond to each other.
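Since the Heroku deployment supplies the paths via config vars, `app.py` presumably reads them from the environment. A minimal sketch of that pattern, with the local paths as fallback defaults (the exact mechanism in the repository is not confirmed):

```python
import os

# Heroku config vars override the local defaults when set.
MODEL_PATH = os.environ.get('MODEL_PATH', 'models/resnet_large_resize_150_cpu.model')
LABEL_MAP_PATH = os.environ.get('LABEL_MAP_PATH', 'label_maps/full_label_map.json')
```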