
Low deployment success rate using the HF text-embedding-inference container, with missing pooling and tokenizer #115

Open · weigary opened this issue on Oct 16, 2024 · 2 comments

weigary (Collaborator) commented on Oct 16, 2024

Hi HF folks,

In our deployment verification pipeline, we noticed that the deployment success rate for the recent top-trending TEI models with the HF text-embeddings-inference container is very low: only 23 out of 500 deployments succeeded.

Example deployment

IMAGE=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-4.ubuntu2204

MODEL_ID=ncbi/MedCPT-Article-Encoder
docker run --gpus all -p 7080:80 -e MODEL_ID=${MODEL_ID} --pull always $IMAGE

We looked into the root cause of the failures, and we found the following two common failures:

  1. Error: The --pooling arg is not set and we could not find a pooling configuration (1_Pooling/config.json) for this model.
    My understanding is that the container expects a 1_Pooling/config.json file in the repo, but the model owner did not provide one. In this case, what would you suggest we do?

We tried setting a default value of pooling=mean as an environment variable on the container (see the sketch below) and saw a significant improvement in the deployment success rate: 23/500 -> 243/500.

We know the pooling parameter is required for TEI models. However, we are not sure that adding a default value for all TEI models is the correct approach, and we are concerned that it could negatively impact model quality. Can you advise us on how to handle this?
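
For reference, here is roughly what our workaround looks like. The POOLING environment-variable name is our assumption about how the container exposes the --pooling argument, and mean pooling is only a guess that may not match what a given model was trained with:

IMAGE=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-4.ubuntu2204
MODEL_ID=ncbi/MedCPT-Article-Encoder

# Same deployment as above, plus a fallback pooling strategy for when
# 1_Pooling/config.json is missing from the model repo.
docker run --gpus all -p 7080:80 \
  -e MODEL_ID=${MODEL_ID} \
  -e POOLING=mean \
  --pull always $IMAGE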

  2. Caused by:
    0: request error: HTTP status client error (404 Not Found) for url (https://huggingface.co/facebook/MEXMA/resolve/main/tokenizer.json)
    1: HTTP status client error (404 Not Found) for url (https://huggingface.co/facebook/MEXMA/resolve/main/tokenizer.json)
    My understanding is that the container expects a tokenizer.json file in the repo, but the model owner did not provide one.

Looking at the model card, the model is an XLM-RoBERTa model; I guess that is why the repo does not have a tokenizer.json file. To use the model, the user also needs to load the tokenizer first:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

Many models fail with the same error. Can you advise whether there is anything we can do here? We are also concerned that providing a default tokenizer could negatively impact model quality. One workaround we considered is sketched below.
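
This is only a sketch of that workaround, not something we have validated: export a fast tokenizer from the base model the model card points to, so that a tokenizer.json file exists, and then serve the model from a local directory that also contains the facebook/MEXMA weights and config (the "mexma-local" directory name is made up here):

from transformers import AutoTokenizer

# Load the fast tokenizer of the base model referenced by the MEXMA model card.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Saving a fast tokenizer writes tokenizer.json, the file TEI fails to find above.
# The facebook/MEXMA weights and config would still need to be copied into
# "mexma-local" before pointing the container's MODEL_ID at that directory.
tokenizer.save_pretrained("mexma-local")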

Thanks!

weigary changed the title from "Low deployment success rate using the HF text-embedding-inference container" to "Low deployment success rate using the HF text-embedding-inference container, with missing pooling and tokenizer" on Oct 16, 2024
OlivierDehaene (Member) commented
Hello
This PR will help with your first point.

Regarding the second point: unfortunately, TEI relies on this file for tokenization. We will update the Hub tag to make sure these models are removed.

weigary (Collaborator, Author) commented on Oct 18, 2024

Thank you, Olivier!
