We're using the IndicTrans translation models, and the Hugging Face API is throwing a runtime error. We set up a Dockerfile for the model on AWS, but it consumes around 16 GB of RAM and requires a t2.xlarge instance, which is quite expensive. We want to use the translation model in production in our mobile app. Any ideas on how to reduce costs or make the API calls easier?
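One option worth noting: post-training dynamic quantization of the Linear layers can cut the weight memory to roughly a quarter of the fp32 footprint on a CPU-only instance. Below is a minimal sketch, assuming the public IndicTrans2 checkpoint on the Hugging Face Hub; the exact model ID is an assumption (the issue doesn't name the checkpoint), so substitute whatever you actually deploy.

```python
# Minimal sketch: shrink a seq2seq translation model's RAM use with dynamic int8 quantization.
# The model ID is an assumption; trust_remote_code is only needed if the checkpoint ships custom code.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ai4bharat/indictrans2-en-indic-1B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

# Quantize only the Linear layers to int8; activations stay in float,
# so accuracy loss is usually small while weight memory drops to ~25% of fp32.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```

With the smaller footprint you may be able to drop to a cheaper instance tier, or export the quantized model (e.g. to ONNX or a mobile runtime) instead of keeping a large always-on API server.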