Tokenizer kwargs in textgeneration pipe (huggingface#28362)
* added args to the pipeline

* added test

* more sensical tests

* fixup

* docs

* typo

* docs

* made changes to support named args

* fixed test

* docs update

* styles

* docs

* docs
thedamnedrhino authored and wgifford committed Jan 21, 2024
1 parent 25cc99e commit 5151ea2
Showing 3 changed files with 49 additions and 3 deletions.
6 changes: 6 additions & 0 deletions docs/source/en/preprocessing.md
@@ -216,6 +216,12 @@ array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
 </tf>
 </frameworkcontent>
 
+<Tip>
+Different pipelines support tokenizer arguments in their `__call__()` differently. `text2text-generation` pipelines pass on
+only `truncation`, while `text-generation` pipelines support `max_length`, `truncation`, `padding`, and `add_special_tokens`.
+In `fill-mask` pipelines, tokenizer arguments are passed as a dictionary via the `tokenizer_kwargs` argument.
+</Tip>
+
 ## Audio
 
 For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
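
A usage sketch of what the new Tip describes (the checkpoint names below are illustrative assumptions, not part of this commit):

```python
from transformers import pipeline

# text-generation: tokenizer kwargs now go straight into __call__()
generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Some very long prompt ...",
    truncation=True,       # truncate the prompt in the tokenizer
    max_length=32,         # applied to the tokenizer and forwarded to generate()
    add_special_tokens=False,
)
print(out[0]["generated_text"])

# fill-mask: tokenizer kwargs travel in a dictionary instead
unmasker = pipeline("fill-mask", model="distilbert-base-uncased")
print(unmasker("Paris is the [MASK] of France.", tokenizer_kwargs={"truncation": True}))
```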
30 changes: 27 additions & 3 deletions src/transformers/pipelines/text_generation.py
@@ -104,9 +104,20 @@ def _sanitize_parameters(
         handle_long_generation=None,
         stop_sequence=None,
         add_special_tokens=False,
+        truncation=None,
+        padding=False,
+        max_length=None,
         **generate_kwargs,
     ):
-        preprocess_params = {"add_special_tokens": add_special_tokens}
+        preprocess_params = {
+            "add_special_tokens": add_special_tokens,
+            "truncation": truncation,
+            "padding": padding,
+            "max_length": max_length,
+        }
+        if max_length is not None:
+            generate_kwargs["max_length"] = max_length
+
         if prefix is not None:
             preprocess_params["prefix"] = prefix
         if prefix:
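
The routing above in a standalone sketch (a hypothetical `sanitize` helper, not the library's API): tokenizer-bound arguments are collected into the preprocess params, and `max_length` is deliberately copied into `generate_kwargs` as well, so it both truncates the prompt and caps the total generated length:

```python
# Hypothetical re-creation of the kwarg routing, for illustration only
def sanitize(truncation=None, padding=False, max_length=None, **generate_kwargs):
    preprocess_params = {
        "truncation": truncation,
        "padding": padding,
        "max_length": max_length,
    }
    if max_length is not None:
        # duplicated on purpose: truncates the prompt in preprocess()
        # and bounds the output length in generate()
        generate_kwargs["max_length"] = max_length
    return preprocess_params, generate_kwargs

params, gen_kwargs = sanitize(truncation=True, max_length=32, do_sample=False)
assert gen_kwargs == {"max_length": 32, "do_sample": False}
assert params["truncation"] is True
```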
@@ -208,10 +219,23 @@ def __call__(self, text_inputs, **kwargs):
         return super().__call__(text_inputs, **kwargs)
 
     def preprocess(
-        self, prompt_text, prefix="", handle_long_generation=None, add_special_tokens=False, **generate_kwargs
+        self,
+        prompt_text,
+        prefix="",
+        handle_long_generation=None,
+        add_special_tokens=False,
+        truncation=None,
+        padding=False,
+        max_length=None,
+        **generate_kwargs,
     ):
         inputs = self.tokenizer(
-            prefix + prompt_text, padding=False, add_special_tokens=add_special_tokens, return_tensors=self.framework
+            prefix + prompt_text,
+            return_tensors=self.framework,
+            truncation=truncation,
+            padding=padding,
+            max_length=max_length,
+            add_special_tokens=add_special_tokens,
         )
         inputs["prompt_text"] = prompt_text

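What the updated tokenizer call does with the new arguments can be checked against the tokenizer directly; a minimal sketch, assuming the `gpt2` tokenizer and PyTorch tensors:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
inputs = tok(
    "a prompt that is comfortably longer than three tokens",
    return_tensors="pt",
    truncation=True,
    max_length=3,
    add_special_tokens=False,
)
print(inputs["input_ids"].shape)  # torch.Size([1, 3]) -- prompt cut to 3 tokens
```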
16 changes: 16 additions & 0 deletions tests/pipelines/test_pipelines_text_generation.py
@@ -90,6 +90,22 @@ def test_small_model_pt(self):
{"generated_token_ids": ANY(list)},
],
)

## -- test tokenizer_kwargs
test_str = "testing tokenizer kwargs. using truncation must result in a different generation."
output_str, output_str_with_truncation = (
text_generator(test_str, do_sample=False, return_full_text=False)[0]["generated_text"],
text_generator(
test_str,
do_sample=False,
return_full_text=False,
truncation=True,
max_length=3,
)[0]["generated_text"],
)
assert output_str != output_str_with_truncation # results must be different because one hd truncation

# -- what is the point of this test? padding is hardcoded False in the pipeline anyway
text_generator.tokenizer.pad_token_id = text_generator.model.config.eos_token_id
text_generator.tokenizer.pad_token = "<pad>"
outputs = text_generator(
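The new assertion can also be replayed outside the test harness; a sketch assuming the public `sshleifer/tiny-gpt2` checkpoint (the in-repo test uses its own small fixture model instead):

```python
from transformers import pipeline

gen = pipeline("text-generation", model="sshleifer/tiny-gpt2")
prompt = "testing tokenizer kwargs. using truncation must result in a different generation."

full = gen(prompt, do_sample=False, return_full_text=False)[0]["generated_text"]
# truncating the prompt to 3 tokens (and capping generate() at 3) changes the output
short = gen(prompt, do_sample=False, return_full_text=False, truncation=True, max_length=3)[0]["generated_text"]
assert full != short
```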
