support telechat2 #35415

Open · wants to merge 28 commits into base: main
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -598,6 +598,8 @@
  title: T5v1.1
- local: model_doc/tapex
  title: TAPEX
- local: model_doc/telechat2
  title: TeleChat2
- local: model_doc/transfo-xl
  title: Transformer XL
- local: model_doc/ul2
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -325,6 +325,7 @@ Flax), PyTorch, and/or TensorFlow.
| [Table Transformer](model_doc/table-transformer) | ✅ | ❌ | ❌ |
| [TAPAS](model_doc/tapas) | ✅ | ✅ | ❌ |
| [TAPEX](model_doc/tapex) | ✅ | ✅ | ✅ |
| [TeleChat2](model_doc/telechat2) | ✅ | ❌ | ❌ |
| [Time Series Transformer](model_doc/time_series_transformer) | ✅ | ❌ | ❌ |
| [TimeSformer](model_doc/timesformer) | ✅ | ❌ | ❌ |
| [TimmWrapperModel](model_doc/timm_wrapper) | ✅ | ❌ | ❌ |
85 changes: 85 additions & 0 deletions docs/source/en/model_doc/telechat2.md
@@ -0,0 +1,85 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# TeleChat2

## Overview

The TeleChat2 model was proposed in the [TeleChat Technical Report](https://arxiv.org/pdf/2401.03804) by TeleAI.

### Summary

The abstract from the paper is the following:

TeleChat is a series of large language models, offering decoder-based language models in various sizes (3B, 7B, and 12B). For each size, we provide both the base pretrained model and the fine-tuned chat model aligned with human preferences. TeleChat leverages a Transformer architecture with features such as SwiGLU activation, advanced attention mechanisms (QKV bias, group query attention), and support for sliding window attention. The models are optimized for bilingual proficiency (English and Chinese) and include an enhanced tokenizer adaptable to diverse natural languages and coding formats.
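
Of these features, group query attention is worth a brief illustration: several query heads share a single key/value head, which shrinks the KV cache during generation. The sketch below shows the `repeat_kv` expansion commonly used by decoder-only models in the library to broadcast the shared KV heads before the attention product; it is illustrative only and may differ from TeleChat2's exact implementation.

```python
import torch


def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand (batch, num_kv_heads, seq_len, head_dim) to
    (batch, num_kv_heads * n_rep, seq_len, head_dim) so that each
    key/value head serves n_rep query heads."""
    batch, num_kv_heads, seq_len, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_kv_heads, n_rep, seq_len, head_dim)
    return hidden_states.reshape(batch, num_kv_heads * n_rep, seq_len, head_dim)


k = torch.randn(1, 4, 16, 64)  # 4 KV heads
k = repeat_kv(k, 8)            # now aligned with 32 query heads
```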

## Usage tips

The original code and checkpoints for TeleChat2 can be found [here](https://huggingface.co/Tele-AI/TeleChat2-7B).

The following demonstrates how to run inference with `TeleChat2-7B`. The dialogue uses the ChatML format, and the snippet relies on `apply_chat_template` to build the prompt in that format.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> device = "cuda" # the device to load the model onto

>>> model = AutoModelForCausalLM.from_pretrained("Tele-AI/TeleChat2-7B", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("Tele-AI/TeleChat2-7B")

>>> prompt = "Give me a short introduction to large language model."

>>> messages = [{"role": "user", "content": prompt}]

>>> text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

>>> model_inputs = tokenizer([text], return_tensors="pt").to(device)

>>> generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)

>>> generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]

>>> response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(response)
```
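
To stream tokens to stdout as they are generated, the generic `TextStreamer` utility works with any `generate`-capable model; a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from above:

```python
>>> from transformers import TextStreamer

>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> _ = model.generate(**model_inputs, streamer=streamer, max_new_tokens=512)
```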

## TeleChat2Config

[[autodoc]] TeleChat2Config


## TeleChat2Model

[[autodoc]] TeleChat2Model
- forward

## TeleChat2ForCausalLM

[[autodoc]] TeleChat2ForCausalLM
- forward

## TeleChat2ForSequenceClassification

[[autodoc]] TeleChat2ForSequenceClassification
- forward

## TeleChat2ForTokenClassification

[[autodoc]] TeleChat2ForTokenClassification
- forward

## TeleChat2ForQuestionAnswering

[[autodoc]] TeleChat2ForQuestionAnswering
- forward
2 changes: 2 additions & 0 deletions docs/source/en/perf_infer_gpu_one.md
@@ -96,6 +96,7 @@ FlashAttention-2 is currently supported for the following architectures:
* [Qwen2VL](https://huggingface.co/docs/transformers/model_doc/qwen2_vl#transformers.Qwen2VLModel)
* [RAG](https://huggingface.co/docs/transformers/model_doc/rag#transformers.RagModel)
* [SpeechEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/speech_encoder_decoder#transformers.SpeechEncoderDecoderModel)
* [TeleChat2](https://huggingface.co/docs/transformers/model_doc/telechat2)
* [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision_encoder_decoder#transformers.VisionEncoderDecoderModel)
* [VisionTextDualEncoder](https://huggingface.co/docs/transformers/model_doc/vision_text_dual_encoder#transformers.VisionTextDualEncoderModel)
* [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperModel)
@@ -303,6 +304,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [MusicGen Melody](https://huggingface.co/docs/transformers/model_doc/musicgen_melody#transformers.MusicgenMelodyModel)
* [Nemotron](https://huggingface.co/docs/transformers/model_doc/nemotron)
* [SpeechEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/speech_encoder_decoder#transformers.SpeechEncoderDecoderModel)
* [TeleChat2](https://huggingface.co/docs/transformers/model_doc/telechat2)
* [VideoLlava](https://huggingface.co/docs/transformers/model_doc/video_llava)
* [VipLlava](https://huggingface.co/docs/transformers/model_doc/vipllava)
* [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision_encoder_decoder#transformers.VisionEncoderDecoderModel)
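
Both lists gate the `attn_implementation` argument accepted by `from_pretrained`. A minimal sketch of opting in, assuming the `flash-attn` package is installed and the GPU supports it (`"sdpa"` requires no extra dependency):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Tele-AI/TeleChat2-7B",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # or "sdpa"
)
```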
20 changes: 20 additions & 0 deletions src/transformers/__init__.py
@@ -788,6 +788,7 @@
        "TapasConfig",
        "TapasTokenizer",
    ],
    "models.telechat2": ["TeleChat2Config"],
    "models.time_series_transformer": ["TimeSeriesTransformerConfig"],
    "models.timesformer": ["TimesformerConfig"],
    "models.timm_backbone": ["TimmBackboneConfig"],
@@ -3573,6 +3574,16 @@
            "load_tf_weights_in_tapas",
        ]
    )
    _import_structure["models.telechat2"].extend(
        [
            "TeleChat2ForCausalLM",
            "TeleChat2ForQuestionAnswering",
            "TeleChat2ForSequenceClassification",
            "TeleChat2ForTokenClassification",
            "TeleChat2Model",
            "TeleChat2PreTrainedModel",
        ]
    )
    _import_structure["models.time_series_transformer"].extend(
        [
            "TimeSeriesTransformerForPrediction",
@@ -5801,6 +5812,7 @@
        TapasConfig,
        TapasTokenizer,
    )
    from .models.telechat2 import TeleChat2Config
    from .models.time_series_transformer import (
        TimeSeriesTransformerConfig,
    )
@@ -8135,6 +8147,14 @@
        TapasPreTrainedModel,
        load_tf_weights_in_tapas,
    )
    from .models.telechat2 import (
        TeleChat2ForCausalLM,
        TeleChat2ForQuestionAnswering,
        TeleChat2ForSequenceClassification,
        TeleChat2ForTokenClassification,
        TeleChat2Model,
        TeleChat2PreTrainedModel,
    )
    from .models.time_series_transformer import (
        TimeSeriesTransformerForPrediction,
        TimeSeriesTransformerModel,
1 change: 1 addition & 0 deletions src/transformers/convert_slow_tokenizer.py
@@ -1603,6 +1603,7 @@ def converted(self) -> Tokenizer:
"CodeLlamaTokenizer": LlamaConverter,
"GemmaTokenizer": GemmaConvert,
"Phi3Tokenizer": LlamaConverter,
"TeleChat2Tokenizer": LlamaConverter,
}


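
Entries in `SLOW_TO_FAST_CONVERTERS` are looked up by slow-tokenizer class name when a checkpoint is converted to the fast (`tokenizers`-backed) format. A hedged sketch of that conversion path, assuming the checkpoint ships a SentencePiece model that `LlamaTokenizer` can load:

```python
from transformers import LlamaTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

# Assumption: the repo provides a tokenizer.model file loadable by LlamaTokenizer.
slow = LlamaTokenizer.from_pretrained("Tele-AI/TeleChat2-7B")
fast_backend = convert_slow_tokenizer(slow)  # a tokenizers.Tokenizer built by LlamaConverter
```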
@@ -30,7 +30,9 @@

TOKENIZER_CLASSES = {
    # Phi3 and TeleChat2 use the Llama tokenizer
    name: getattr(
        transformers, "LlamaTokenizerFast" if name in ["Phi3Tokenizer", "TeleChat2Tokenizer"] else name + "Fast"
    )
    for name in SLOW_TO_FAST_CONVERTERS
}
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -251,6 +251,7 @@
    t5,
    table_transformer,
    tapas,
    telechat2,
    time_series_transformer,
    timesformer,
    timm_backbone,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -278,6 +278,7 @@
        ("t5", "T5Config"),
        ("table-transformer", "TableTransformerConfig"),
        ("tapas", "TapasConfig"),
        ("telechat2", "TeleChat2Config"),
        ("time_series_transformer", "TimeSeriesTransformerConfig"),
        ("timesformer", "TimesformerConfig"),
        ("timm_backbone", "TimmBackboneConfig"),
@@ -608,6 +609,7 @@
        ("table-transformer", "Table Transformer"),
        ("tapas", "TAPAS"),
        ("tapex", "TAPEX"),
        ("telechat2", "TeleChat2"),
        ("time_series_transformer", "Time Series Transformer"),
        ("timesformer", "TimeSformer"),
        ("timm_backbone", "TimmBackbone"),
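
With these two registrations, `AutoConfig` resolves a checkpoint's `model_type` string to the new class; a quick check (assuming the hub repo's `config.json` declares `model_type: telechat2`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Tele-AI/TeleChat2-7B")
print(type(config).__name__)  # TeleChat2Config
```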
5 changes: 5 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -256,6 +256,7 @@
        ("t5", "T5Model"),
        ("table-transformer", "TableTransformerModel"),
        ("tapas", "TapasModel"),
        ("telechat2", "TeleChat2Model"),
        ("time_series_transformer", "TimeSeriesTransformerModel"),
        ("timesformer", "TimesformerModel"),
        ("timm_backbone", "TimmBackbone"),
@@ -556,6 +557,7 @@
        ("speech_to_text_2", "Speech2Text2ForCausalLM"),
        ("stablelm", "StableLmForCausalLM"),
        ("starcoder2", "Starcoder2ForCausalLM"),
        ("telechat2", "TeleChat2ForCausalLM"),
        ("transfo-xl", "TransfoXLLMHeadModel"),
        ("trocr", "TrOCRForCausalLM"),
        ("whisper", "WhisperForCausalLM"),
@@ -1029,6 +1031,7 @@
        ("starcoder2", "Starcoder2ForSequenceClassification"),
        ("t5", "T5ForSequenceClassification"),
        ("tapas", "TapasForSequenceClassification"),
        ("telechat2", "TeleChat2ForSequenceClassification"),
        ("transfo-xl", "TransfoXLForSequenceClassification"),
        ("umt5", "UMT5ForSequenceClassification"),
        ("xlm", "XLMForSequenceClassification"),
@@ -1105,6 +1108,7 @@
        ("splinter", "SplinterForQuestionAnswering"),
        ("squeezebert", "SqueezeBertForQuestionAnswering"),
        ("t5", "T5ForQuestionAnswering"),
        ("telechat2", "TeleChat2ForQuestionAnswering"),
        ("umt5", "UMT5ForQuestionAnswering"),
        ("xlm", "XLMForQuestionAnsweringSimple"),
        ("xlm-roberta", "XLMRobertaForQuestionAnswering"),
@@ -1207,6 +1211,7 @@
        ("stablelm", "StableLmForTokenClassification"),
        ("starcoder2", "Starcoder2ForTokenClassification"),
        ("t5", "T5ForTokenClassification"),
        ("telechat2", "TeleChat2ForTokenClassification"),
        ("umt5", "UMT5ForTokenClassification"),
        ("xlm", "XLMForTokenClassification"),
        ("xlm-roberta", "XLMRobertaForTokenClassification"),
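
These mappings let the task-specific auto classes dispatch to the TeleChat2 heads. A minimal sketch for sequence classification (the head on top of the pretrained backbone is freshly initialized, so it needs fine-tuning before it is useful):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "Tele-AI/TeleChat2-7B",
    num_labels=2,  # the new classification head weights are randomly initialized
)
```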
1 change: 1 addition & 0 deletions src/transformers/models/auto/tokenization_auto.py
@@ -493,6 +493,7 @@
        ),
        ("tapas", ("TapasTokenizer", None)),
        ("tapex", ("TapexTokenizer", None)),
        ("telechat2", ("LlamaTokenizer", "LlamaTokenizerFast" if is_tokenizers_available() else None)),
        ("transfo-xl", ("TransfoXLTokenizer", None)),
        ("tvp", ("BertTokenizer", "BertTokenizerFast" if is_tokenizers_available() else None)),
        (
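
Because this entry points `telechat2` at the Llama tokenizer classes, `AutoTokenizer` returns one of them; a quick check (assuming the repo's tokenizer files are compatible):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Tele-AI/TeleChat2-7B")
print(type(tok).__name__)  # LlamaTokenizerFast when `tokenizers` is installed
```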
65 changes: 65 additions & 0 deletions src/transformers/models/telechat2/__init__.py
@@ -0,0 +1,65 @@
# Copyright 2024 EleutherAI and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import (
    OptionalDependencyNotAvailable,
    _LazyModule,
    is_torch_available,
)


_import_structure = {
    "configuration_telechat2": ["TeleChat2Config"],
}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_telechat2"] = [
        "TeleChat2ForCausalLM",
        "TeleChat2ForQuestionAnswering",
        "TeleChat2ForSequenceClassification",
        "TeleChat2ForTokenClassification",
        "TeleChat2Model",
        "TeleChat2PreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_telechat2 import TeleChat2Config

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_telechat2 import (
            TeleChat2ForCausalLM,
            TeleChat2ForQuestionAnswering,
            TeleChat2ForSequenceClassification,
            TeleChat2ForTokenClassification,
            TeleChat2Model,
            TeleChat2PreTrainedModel,
        )

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
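
Because the module registers itself through `_LazyModule`, importing the package is cheap and the heavy modeling code is only loaded on first attribute access; a quick check (assuming `torch` is installed):

```python
import transformers.models.telechat2 as telechat2

# Attribute access triggers the real import via _LazyModule.__getattr__.
print(telechat2.TeleChat2Config)
```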