
feat: add payload limit #2726

Merged: 2 commits into main, Nov 21, 2024

Conversation

OlivierDehaene (Member)

Related to #1802
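To illustrate what a payload limit does, here is a minimal, hypothetical Python sketch of the idea: reject request bodies over a configured byte limit before doing any expensive work. The names `MAX_PAYLOAD_BYTES`, `PayloadTooLarge`, and `check_payload`, and the 2 MB default, are all made up for illustration; the actual implementation in this PR is in the server's Rust code and is not shown here.

```python
MAX_PAYLOAD_BYTES = 2_000_000  # illustrative 2 MB default, not the PR's value

class PayloadTooLarge(Exception):
    """Raised when a request body exceeds the configured limit."""

def check_payload(body: bytes, limit: int = MAX_PAYLOAD_BYTES) -> bytes:
    # Reject the request up front, before any parsing or tokenization.
    if len(body) > limit:
        raise PayloadTooLarge(
            f"payload is {len(body)} bytes, limit is {limit}"
        )
    return body
```

In a real server this check would typically be enforced by the HTTP framework's body-size limit, configured from a launcher argument such as the one this PR adds.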

danieldk (Member) left a comment:
LGTM, wondered if the cu_seqlen_prefill changes are intentionally in the same PR? Docs need to be regenerated for the additional launcher argument.

self.cu_seqlen_prefill = torch.nn.functional.pad(
    torch.cumsum(self.input_lengths_tensor, dim=0), (1, 0)
).to(torch.int32)
cu_seqlen_prefill = self.input_lengths_tensor.new_zeros(len(self) + 1)
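For readers unfamiliar with the `pad(cumsum(...), (1, 0))` idiom above: it builds cumulative sequence lengths with a leading zero, so that `cu[i]:cu[i+1]` delimits sequence `i` in a flattened batch. A torch-free sketch of the same computation, assuming those semantics:

```python
from itertools import accumulate

def cu_seqlen_prefill(input_lengths):
    # Equivalent in spirit to
    # torch.nn.functional.pad(torch.cumsum(lengths, dim=0), (1, 0)):
    # a running sum of sequence lengths, prefixed with 0.
    return [0] + list(accumulate(input_lengths))

# For a batch with sequence lengths 3, 5, 2:
# cu_seqlen_prefill([3, 5, 2]) -> [0, 3, 8, 10]
```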
Member

Are these changes in the same PR intentional?

@@ -287,6 +290,7 @@ async fn main() -> Result<(), TensorRtLlmBackendError> {
tokenizer_name,
tokenizer_config_path,
revision,
false,
Collaborator

Is this intentional?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@OlivierDehaene OlivierDehaene merged commit ab7ccf5 into main Nov 21, 2024
7 of 9 checks passed
@OlivierDehaene OlivierDehaene deleted the feat/limit branch November 21, 2024 18:20
4 participants