Right now, we only use the `queue.sidecar.serving.knative.dev/resourcePercentage` annotation to configure autoscaling, and its value is set globally per environment. As described here, we also cannot specify an autoscaling policy for the predictor (the model) and the transformer individually.

This issue tracks how to enable users to specify autoscaling configuration for their model.
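For reference, a minimal sketch of where these annotations end up on the underlying Knative Service revision template. Only the annotation keys are real Knative keys; the service name, image, and values are illustrative, and the per-component autoscaling annotations show the kind of configuration this issue asks to expose, not an existing feature:

```yaml
# Hypothetical Knative Service illustrating where the annotations are applied.
# Only the annotation keys come from Knative; names and values are examples.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-model-predictor                     # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Currently derived from a single, environment-wide value:
        queue.sidecar.serving.knative.dev/resourcePercentage: "10"
        # What this issue asks for: letting users set autoscaling per
        # component (predictor/transformer) instead of only globally, e.g.:
        autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
        autoscaling.knative.dev/metric: "concurrency"
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/example/my-model:latest  # illustrative image
```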