Currently, the project uses Hugging Face's free "Inference API". It's a great service, but it comes at the cost of waiting for servers to be provisioned.
One option would be to move to https://huggingface.co/inference-endpoints, a paid service that would provide high uptime and potentially faster inference. If there is enough interest, this could be offered through some kind of low-cost subscription (hopefully less than a few euros a month).
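For anyone wondering what the switch would mean on the client side, here is a rough sketch (not the project's actual code; the model ID and endpoint URL are placeholders). Both services accept the same kind of authenticated POST request, so the change would mostly be pointing requests at a dedicated endpoint URL instead of the shared, on-demand one:

```python
import os
import requests

HF_TOKEN = os.environ["HF_TOKEN"]  # personal Hugging Face access token

# Free, shared Inference API: models are loaded on demand, so the first
# request may have to wait (HTTP 503) while a server is provisioned.
SHARED_API_URL = "https://api-inference.huggingface.co/models/<model-id>"

# Paid Inference Endpoint: a dedicated, always-on deployment with its own
# fixed URL, so there is no cold-start wait. Placeholder URL shown here.
DEDICATED_ENDPOINT_URL = "https://<endpoint-name>.endpoints.huggingface.cloud"

def query(url: str, prompt: str) -> dict:
    """POST a prompt to either URL; the request shape is the same."""
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```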
If you would be interested in this, please react to this post. Feel free to also use this issue to discuss the idea or ask questions.