[2.1.0] - 2024-08-22
Fixed
- Catch network exceptions and keep the loop running
Changed
- Migrate the default config to a llama-compatible config
- Use COMPUTE_DEVICE to determine GPU offloading
- Use TaskProcessingProvider class for registration
- Better handling of app enabled state
- Download models on /init