[Time series] Add patchtst #27581

Merged: 311 commits (merged Nov 29, 2023)

Commits (311 total; changes shown from 250 commits)
a0815ee
add distribution head to forecasting
namctin Oct 2, 2023
d727ef7
formatting
kashif Oct 4, 2023
de5a55b
Merge branch 'main' into add-patchtst
kashif Oct 4, 2023
e391bd3
Add generate function for forecasting
namctin Oct 4, 2023
a50f6c2
Add generate function to prediction task
namctin Oct 5, 2023
8daf165
formatting
kashif Oct 5, 2023
e2f8fd8
use argsort
kashif Oct 5, 2023
1c8ec9d
add past_observed_mask ordering
kashif Oct 5, 2023
7ebaa61
fix arguments
kashif Oct 5, 2023
f3dca25
docs
kashif Oct 5, 2023
5349bf4
add back test_model_outputs_equivalence test
kashif Oct 5, 2023
eb7f547
formatting
kashif Oct 5, 2023
a1cf42c
cleanup
kashif Oct 5, 2023
6392f99
formatting
kashif Oct 5, 2023
8a91544
use ACT2CLS
kashif Oct 5, 2023
dfbea05
formatting
kashif Oct 5, 2023
1a0c55e
fix add_start_docstrings decorator
kashif Oct 5, 2023
0a4e58b
add distribution head and generate function to regression task
namctin Oct 5, 2023
72a6e1e
add distribution head and generate function to regression task
namctin Oct 5, 2023
9908c6a
fix typos
kashif Oct 5, 2023
91a4c46
add forecast_masking
kashif Oct 6, 2023
17c60a7
fixed tests
kashif Oct 6, 2023
0eabe39
Merge branch 'main' into add-patchtst
kashif Oct 6, 2023
a61ac77
use set_seed
kashif Oct 6, 2023
de7fb9e
fix doc test
kashif Oct 6, 2023
0fd0ce7
formatting
kashif Oct 6, 2023
cb52b6f
Update docs/source/en/model_doc/patchtst.md
kashif Oct 7, 2023
3daec96
better var names
kashif Oct 8, 2023
c82022d
rename PatchTSTTranspose
kashif Oct 9, 2023
687e3c8
fix argument names and docs string
kashif Oct 9, 2023
5469748
remove compute_num_patches and unused class
kashif Oct 9, 2023
d5c8359
remove assert
kashif Oct 9, 2023
a25d433
renamed to PatchTSTMasking
kashif Oct 9, 2023
db96ed8
use num_labels for classification
kashif Oct 9, 2023
b6d3b4e
use num_labels
kashif Oct 9, 2023
ca648ef
use default num_labels from super class
kashif Oct 9, 2023
e56de11
move model_type after docstring
kashif Oct 9, 2023
fcfa103
renamed PatchTSTForMaskPretraining
kashif Oct 9, 2023
cd0133f
bs -> batch_size
kashif Oct 9, 2023
bc2bf31
more review fixes
kashif Oct 9, 2023
b8a8231
use hidden_state
kashif Oct 9, 2023
8c3ab7f
rename encoder layer and block class
kashif Oct 9, 2023
2553965
remove commented seed_number
namctin Oct 9, 2023
85538b1
edit docstring
namctin Oct 9, 2023
c36370d
Add docstring
namctin Oct 9, 2023
fe3f4d4
formatting
kashif Oct 10, 2023
11feb7c
use past_observed_mask
kashif Oct 10, 2023
3af8567
doc suggestion
kashif Oct 10, 2023
ed5b26b
make fix-copies
kashif Oct 10, 2023
ddffd71
use Args:
kashif Oct 10, 2023
ccdd013
add docstring
namctin Oct 10, 2023
c993a50
add docstring
namctin Oct 11, 2023
ddc7521
change some variable names and add PatchTST before some class names
namctin Oct 11, 2023
0d7d92d
formatting
kashif Oct 11, 2023
2381994
fix argument types
kashif Oct 11, 2023
e79f0fd
fix tests
kashif Oct 11, 2023
b61bec0
change x variable to patch_input
namctin Oct 11, 2023
e908862
format
namctin Oct 11, 2023
25e669b
formatting
kashif Oct 12, 2023
2ef1564
Merge branch 'main' into add-patchtst
kashif Oct 18, 2023
b9c01ff
fix-copies
kashif Oct 18, 2023
9955e4f
Update tests/models/patchtst/test_modeling_patchtst.py
kashif Oct 19, 2023
5dbe619
move loss to forward
kashif Oct 19, 2023
099b76c
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Oct 19, 2023
b9c935f
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Oct 19, 2023
b8d59f8
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Oct 19, 2023
b7c04c7
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Oct 19, 2023
c920eee
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Oct 19, 2023
6642ab9
formatting
kashif Oct 19, 2023
7869767
fix a bug when pre_norm is set to True
namctin Oct 19, 2023
2fb7417
output_hidden_states is set to False as default
namctin Oct 19, 2023
9168ca2
set pre_norm=True as default
namctin Oct 19, 2023
7829a57
format docstring
namctin Oct 20, 2023
5cc98cb
format
kashif Oct 20, 2023
3a09a1e
output_hidden_states is None by default
kashif Oct 20, 2023
1777fc3
add missing docs
kashif Oct 20, 2023
21803e0
better var names
kashif Oct 20, 2023
87068ca
docstring: remove default to False in output_hidden_states
namctin Oct 20, 2023
4c8c7d0
change labels name to target_values in regression task
namctin Oct 23, 2023
dccbc31
format
kashif Oct 23, 2023
3d12866
fix tests
kashif Oct 23, 2023
c489972
change to forecast_mask_ratios and random_mask_ratio
namctin Oct 23, 2023
6f0170d
Merge branch 'add-patchtst' of https://github.com/namctin/transformer…
namctin Oct 23, 2023
6318cd3
change mask names
namctin Oct 23, 2023
6734b65
change future_values to target_values param in the prediction class
namctin Oct 23, 2023
8a7f2a0
remove nn.Sequential and make PatchTSTBatchNorm class
namctin Oct 23, 2023
f23ff20
black
kashif Oct 24, 2023
d6eebdb
fix argument name for prediction
kashif Oct 24, 2023
61b9da5
add output_attentions option
namctin Oct 24, 2023
2b84706
add output_attentions to PatchTSTEncoder
namctin Oct 25, 2023
0be6440
formatting
kashif Oct 25, 2023
8972a92
Add attention output option to all classes
namctin Oct 25, 2023
a2ff8ef
Remove PatchTSTEncoderBlock
namctin Oct 25, 2023
d11ea0e
create PatchTSTEmbedding class
namctin Oct 25, 2023
93b88cf
use config in PatchTSTPatchify
namctin Oct 26, 2023
8175505
Use config in PatchTSTMasking class
namctin Oct 26, 2023
63684aa
add channel_attn_weights
namctin Oct 26, 2023
e8faa8b
Add PatchTSTScaler class
namctin Oct 26, 2023
6389fbf
add output_attentions arg to test function
namctin Oct 26, 2023
dd5e25d
format
kashif Oct 26, 2023
705beb2
Merge branch 'huggingface:main' into add-patchtst
namctin Oct 26, 2023
b07c55f
Update doc with image patchtst.md
kashif Oct 27, 2023
395b044
Merge branch 'main' into add-patchtst
kashif Oct 30, 2023
b63cb28
Merge branch 'main' into add-patchtst
kashif Nov 6, 2023
546f3e2
fix-copies
kashif Nov 6, 2023
e7c687e
rename Forecast <-> Prediction
kashif Nov 6, 2023
609a9d3
change name of a few parameters to match with PatchTSMixer.
namctin Nov 6, 2023
7f05610
Remove *ForForecasting class to match with other time series models.
namctin Nov 6, 2023
9807142
make style
kashif Nov 6, 2023
3b8a306
Remove PatchTSTForForecasting in the test
namctin Nov 6, 2023
d438e29
Merge branch 'add-patchtst' of https://github.com/namctin/transformer…
namctin Nov 6, 2023
ff45a20
remove PatchTSTForForecastingOutput class
namctin Nov 6, 2023
abc64c0
change test_forecast_head to test_prediction_head
namctin Nov 6, 2023
6b3fb30
style
kashif Nov 7, 2023
69897e3
fix docs
kashif Nov 7, 2023
42ec43b
fix tests
kashif Nov 7, 2023
f451c05
change num_labels to num_targets
namctin Nov 7, 2023
131454d
Remove PatchTSTTranspose
namctin Nov 8, 2023
58dd1ec
remove arguments in PatchTSTMeanScaler
namctin Nov 8, 2023
9a69973
remove arguments in PatchTSTStdScaler
namctin Nov 8, 2023
7ab8d59
add config as an argument to all the scaler classes
namctin Nov 9, 2023
4caa376
reformat
namctin Nov 9, 2023
03d27f7
Add norm_eps for batchnorm and layernorm
namctin Nov 9, 2023
03e3220
reformat.
namctin Nov 9, 2023
a8dc48a
reformat
namctin Nov 9, 2023
d002bac
edit docstring
namctin Nov 9, 2023
97d75e6
update docstring
namctin Nov 9, 2023
49232db
change variable name pooling to pooling_type
namctin Nov 9, 2023
3684320
fix output_hidden_states as tuple
namctin Nov 9, 2023
b818036
fix bug when calling PatchTSTBatchNorm
namctin Nov 10, 2023
fb5f490
change stride to patch_stride
namctin Nov 10, 2023
f45baef
create PatchTSTPositionalEncoding class and restructure the PatchTSTE…
namctin Nov 10, 2023
32f11dc
formatting
kashif Nov 10, 2023
dcfd201
initialize scalers with configs
kashif Nov 10, 2023
ede1060
Merge branch 'main' into add-patchtst
kashif Nov 10, 2023
5ed7a9f
edit output_hidden_states
namctin Nov 13, 2023
5f695df
Merge branch 'add-patchtst' of https://github.com/namctin/transformer…
namctin Nov 13, 2023
01294fd
style
kashif Nov 13, 2023
9bf4074
fix forecast_mask_patches doc string
kashif Nov 13, 2023
621341d
Merge branch 'main' into add-patchtst
kashif Nov 13, 2023
cb18303
doc improvements
kashif Nov 13, 2023
1447efb
move summary to the start
kashif Nov 13, 2023
6f38f3d
typo
kashif Nov 13, 2023
1838c31
fix docstring
kashif Nov 13, 2023
b476113
turn off masking when using prediction, regression, classification
wgifford Nov 14, 2023
a319609
return scaled output
wgifford Nov 14, 2023
8883bd4
Merge pull request #1 from namctin/patchtst_fixes
namctin Nov 14, 2023
eded5eb
adjust output when using distribution head
wgifford Nov 14, 2023
14fc3d1
Merge remote-tracking branch 'origin/patchtst-doc' into add-patchtst
kashif Nov 14, 2023
16afe22
Merge branch 'add-patchtst' of https://github.com/namctin/transformer…
namctin Nov 14, 2023
0605eb4
Merge branch 'add-patchtst' into patchtst_fixes
wgifford Nov 14, 2023
85baa5e
Merge pull request #2 from namctin/patchtst_fixes
namctin Nov 14, 2023
70a9e56
remove _num_patches function in the config
namctin Nov 14, 2023
5e94ebd
get config.num_patches from patchifier init
namctin Nov 14, 2023
c146899
add output_attentions docstring, remove tuple in output_hidden_states
namctin Nov 14, 2023
61463ed
change SamplePatchTSTPredictionOutput and SamplePatchTSTRegressionOut…
namctin Nov 14, 2023
c9afadf
remove print("model_class: ", model_class)
namctin Nov 14, 2023
c0d86e0
change encoder_attention_heads to num_attention_heads
namctin Nov 14, 2023
e4ed54e
change norm to norm_layer
namctin Nov 14, 2023
776a3dd
change encoder_layers to num_hidden_layers
namctin Nov 14, 2023
111a72b
change shared_embedding to share_embedding, shared_projection to shar…
namctin Nov 14, 2023
78398a7
add output_attentions
namctin Nov 14, 2023
14dc846
more robust check of norm_type
namctin Nov 14, 2023
153af9f
change dropout_path to path_dropout
namctin Nov 14, 2023
194ee8e
edit docstring
namctin Nov 14, 2023
6b98fb3
remove positional_encoding function and add _init_pe in PatchTSTPosit…
namctin Nov 14, 2023
df345bc
edit shape of cls_token and initialize it
namctin Nov 15, 2023
d4f6fd2
add a check on the num_input_channels.
namctin Nov 15, 2023
0b41f55
edit head_dim in the Prediction class to allow the use of cls_token
namctin Nov 15, 2023
e6c2194
remove some positional_encoding_type options, remove learn_pe arg, in…
namctin Nov 15, 2023
b0dc803
change Exception to ValueError
namctin Nov 15, 2023
9c904ff
format
namctin Nov 16, 2023
7299225
norm_type is "batchnorm"
kashif Nov 16, 2023
6509195
Merge branch 'main' into add-patchtst
kashif Nov 16, 2023
77697c2
make style
kashif Nov 16, 2023
39e70cc
change cls_token shape
namctin Nov 16, 2023
24bb02a
Change forecast_mask_patches to num_mask_patches. Remove forecast_mas…
namctin Nov 16, 2023
cbbe73d
Bring PatchTSTClassificationHead on top of PatchTSTForClassification
namctin Nov 16, 2023
48e72a4
change encoder_ffn_dim to ffn_dim and edit the docstring.
namctin Nov 16, 2023
315b908
update variable names to match with the config
namctin Nov 16, 2023
f5d6e5f
add generation tests
namctin Nov 17, 2023
8af6d73
change num_mask_patches to num_forecast_mask_patches
namctin Nov 17, 2023
a8c2fd9
Add examples explaining the use of these models
namctin Nov 17, 2023
635a652
Merge branch 'huggingface:main' into add-patchtst
kashif Nov 17, 2023
cea5a36
make style
kashif Nov 17, 2023
80e84ea
Revert "Revert "[time series] Add PatchTST (#25927)" (#27486)"
wgifford Nov 17, 2023
6b5ade9
make style
kashif Nov 18, 2023
a44c577
fix default std scaler's minimum_scale
kashif Nov 18, 2023
c1ad011
fix docstring
kashif Nov 18, 2023
399e63d
close code blocks
kashif Nov 20, 2023
e1dd797
Update docs/source/en/model_doc/patchtst.md
kashif Nov 21, 2023
f4dc064
Update tests/models/patchtst/test_modeling_patchtst.py
kashif Nov 21, 2023
ed3385e
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
ce73222
Update src/transformers/models/patchtst/configuration_patchtst.py
kashif Nov 21, 2023
8758852
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
998e3e4
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
c5c9e7f
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
878c480
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
7937054
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
d217142
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
efef115
Update src/transformers/models/patchtst/modeling_patchtst.py
kashif Nov 21, 2023
0654e30
fix tests
kashif Nov 21, 2023
7199318
add add_start_docstrings
kashif Nov 21, 2023
027d092
move examples to the forward's docstrings
kashif Nov 21, 2023
43510a6
update prepare_batch
kashif Nov 21, 2023
e3ce5f0
update test
kashif Nov 21, 2023
db7f1f4
fix test_prediction_head
kashif Nov 21, 2023
0e78bc3
fix generation test
kashif Nov 22, 2023
c86370a
Merge branch 'main' into add-patchtst-2
kashif Nov 23, 2023
7581f08
use seed to create generator
kashif Nov 23, 2023
b704fdb
add output_hidden_states and config.num_patches
namctin Nov 24, 2023
24bb558
add loc and scale args in PatchTSTForPredictionOutput
namctin Nov 24, 2023
f9ef75c
edit outputs if if not return_dict
namctin Nov 24, 2023
fb796f4
use self.share_embedding to check instead checking type.
namctin Nov 24, 2023
3b30203
remove seed
namctin Nov 24, 2023
fb992b4
make style
kashif Nov 24, 2023
dad21cc
seed is an optional int
kashif Nov 24, 2023
d67debb
fix test
kashif Nov 24, 2023
16b4926
generator device
kashif Nov 24, 2023
605da91
Merge branch 'add-patchtst-2' of https://github.com/namctin/transform…
namctin Nov 24, 2023
a9ed083
Fix assertTrue test
namctin Nov 25, 2023
00983ea
swap order of items in outputs when return_dict=False.
namctin Nov 25, 2023
10d6bf3
add mask_type and random_mask_ratio to unittest
namctin Nov 26, 2023
d817de6
Update modeling_patchtst.py
namctin Nov 26, 2023
7624079
add add_start_docstrings for regression model
namctin Nov 26, 2023
fe2bff2
make style
kashif Nov 27, 2023
64c9dac
update model path
kashif Nov 27, 2023
b5a5cd8
Edit the ValueError comment in forecast_masking
namctin Nov 27, 2023
b51408f
update examples
namctin Nov 27, 2023
20d719e
make style
kashif Nov 27, 2023
cb511b6
fix commented code
kashif Nov 27, 2023
6a7f359
update examples: remove config from from_pretrained call
namctin Nov 27, 2023
3e0ff39
Merge branch 'add-patchtst-2' of https://github.com/namctin/transform…
namctin Nov 27, 2023
91e8277
Edit example outputs
namctin Nov 27, 2023
d364a1a
Set default target_values to None
namctin Nov 27, 2023
342a532
remove config setting in regression example
namctin Nov 27, 2023
cc97d4b
Update configuration_patchtst.py
namctin Nov 27, 2023
cc05c36
Update configuration_patchtst.py
namctin Nov 27, 2023
ffab00b
remove config from examples
namctin Nov 27, 2023
1b3235d
change default d_model and ffn_dim
namctin Nov 27, 2023
d091bb2
norm_eps default
kashif Nov 27, 2023
db8ae74
set has_attentions to Trye and define self.seq_length = self.num_patche
namctin Nov 28, 2023
1a2ee5c
update docstring
namctin Nov 28, 2023
0630696
change variable mask_input to do_mask_input
namctin Nov 28, 2023
cc150ca
fix blank space.
namctin Nov 28, 2023
231c246
change logger.debug to logger.warning.
namctin Nov 28, 2023
69b2d3e
remove unused PATCHTST_INPUTS_DOCSTRING
namctin Nov 28, 2023
ebc2406
remove all_generative_model_classes
kashif Nov 28, 2023
c20c711
set test_missing_keys=True
namctin Nov 28, 2023
5c1c717
remove undefined params in the docstring.
namctin Nov 28, 2023
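Taken together, the commits above settle the public surface that ships with this PR: `PatchTSTConfig` plus task heads such as `PatchTSTForPretraining`, `PatchTSTForPrediction`, `PatchTSTForClassification`, and `PatchTSTForRegression`, with arguments renamed along the way to `past_values`, `patch_stride`, `num_attention_heads`, `num_hidden_layers`, and `ffn_dim`. The sketch below is only an orientation for reviewers; it assumes those final names and the output field `prediction_outputs`, and the authoritative examples are the ones moved into the forward docstrings (commit "move examples to the forward's docstrings").

```python
import torch
from transformers import PatchTSTConfig, PatchTSTForPrediction

# Small randomly initialized model; parameter names follow the renames in the
# commits above (patch_stride, num_attention_heads, num_hidden_layers, ffn_dim).
config = PatchTSTConfig(
    num_input_channels=7,    # multivariate series with 7 channels
    context_length=512,      # lookback window fed to the patchifier
    prediction_length=96,    # forecast horizon
    patch_length=16,
    patch_stride=8,
    d_model=128,
    num_attention_heads=4,
    num_hidden_layers=3,
    ffn_dim=256,
)
model = PatchTSTForPrediction(config)

# Dummy batch shaped (batch_size, context_length, num_input_channels).
past_values = torch.randn(2, config.context_length, config.num_input_channels)

with torch.no_grad():
    outputs = model(past_values=past_values)

# Point forecasts; expected shape (2, prediction_length, num_input_channels).
print(outputs.prediction_outputs.shape)

# With loss="nll" the prediction head becomes a distribution head, and
# model.generate(past_values=past_values) should return sampled forecast paths
# (see the "Add generate function for forecasting" commit); untested here.
```

Passing `future_values` of shape `(batch_size, prediction_length, num_input_channels)` should additionally populate `outputs.loss` for training, per the "move loss to forward" commit.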
README.md: 1 change (1 addition, 0 deletions)
@@ -439,6 +439,7 @@ Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://h
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
README_es.md: 1 change (1 addition, 0 deletions)
@@ -414,6 +414,7 @@ Número actual de puntos de control: ![](https://img.shields.io/endpoint?url=htt
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
README_hd.md: 1 change (1 addition, 0 deletions)
@@ -388,6 +388,7 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI से) साथ में कागज [विज़न ट्रांसफॉर्मर्स के साथ सिंपल ओपन-वोकैबुलरी ऑब्जेक्ट डिटेक्शन](https:/ /arxiv.org/abs/2205.06230) मैथियास मिंडरर, एलेक्सी ग्रिट्सेंको, ऑस्टिन स्टोन, मैक्सिम न्यूमैन, डिर्क वीसेनबोर्न, एलेक्सी डोसोवित्स्की, अरविंद महेंद्रन, अनुराग अर्नब, मुस्तफा देहघानी, ज़ुओरन शेन, जिओ वांग, ज़ियाओहुआ झाई, थॉमस किफ़, और नील हॉल्सबी द्वारा पोस्ट किया गया।
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (Google AI से) Matthias Minderer, Alexey Gritsenko, Neil Houlsby. द्वाराअनुसंधान पत्र [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) के साथ जारी किया गया
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (IBM से) Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam. द्वाराअनुसंधान पत्र [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) के साथ जारी किया गया
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google की ओर से) साथ में दिया गया पेपर [लंबे इनपुट सारांश के लिए ट्रांसफ़ॉर्मरों को बेहतर तरीके से एक्सटेंड करना](https://arxiv .org/abs/2208.04347) जेसन फांग, याओ झाओ, पीटर जे लियू द्वारा।
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (दीपमाइंड से) साथ में पेपर [पर्सीवर आईओ: संरचित इनपुट और आउटपुट के लिए एक सामान्य वास्तुकला] (https://arxiv.org/abs/2107.14795) एंड्रयू जेगल, सेबेस्टियन बोरग्यूड, जीन-बैप्टिस्ट अलायराक, कार्ल डोर्श, कैटलिन इओनेस्कु, डेविड द्वारा डिंग, स्कंद कोप्पुला, डैनियल ज़ोरान, एंड्रयू ब्रॉक, इवान शेलहैमर, ओलिवियर हेनाफ, मैथ्यू एम। बोट्विनिक, एंड्रयू ज़िसरमैन, ओरिओल विनियल्स, जोआओ कैरेरा द्वारा पोस्ट किया गया।
README_ja.md: 1 change (1 addition, 0 deletions)
@@ -448,6 +448,7 @@ Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それ
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (Meta AI から) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al から公開された研究論文: [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068)
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI から) Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby から公開された研究論文: [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230)
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (Google AI から) Matthias Minderer, Alexey Gritsenko, Neil Houlsby. から公開された研究論文 [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683)
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (IBM から) Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam. から公開された研究論文 [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf)
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (Google から) Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu から公開された研究論文: [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777)
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google から) Jason Phang, Yao Zhao, and Peter J. Liu から公開された研究論文: [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347)
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (Deepmind から) Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira から公開された研究論文: [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795)
README_ko.md: 1 change (1 addition, 0 deletions)
@@ -363,6 +363,7 @@ Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (Meta AI 에서) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 의 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 논문과 함께 발표했습니다.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI 에서) Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 의 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 논문과 함께 발표했습니다.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (Google AI 에서 제공)은 Matthias Minderer, Alexey Gritsenko, Neil Houlsby.의 [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683)논문과 함께 발표했습니다.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (IBM 에서 제공)은 Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.의 [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf)논문과 함께 발표했습니다.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (Google 에서) Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 의 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 논문과 함께 발표했습니다.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google 에서) Jason Phang, Yao Zhao, Peter J. Liu 의 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 논문과 함께 발표했습니다.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (Deepmind 에서) Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 의 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 논문과 함께 발표했습니다.
README_zh-hans.md: 1 change (1 addition, 0 deletions)
@@ -387,6 +387,7 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (来自 Google AI) 伴随论文 [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) 由 Matthias Minderer, Alexey Gritsenko, Neil Houlsby 发布。
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (来自 IBM) 伴随论文 [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) 由 Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam 发布。
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (来自 Google) 伴随论文 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 由 Jason Phang, Yao Zhao, Peter J. Liu 发布。
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
README_zh-hant.md: 1 change (1 addition, 0 deletions)
@@ -399,6 +399,7 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
docs/source/en/_toctree.yml: 2 changes (2 additions, 0 deletions)
@@ -751,6 +751,8 @@
title: Autoformer
- local: model_doc/informer
title: Informer
- local: model_doc/patchtst
title: PatchTST
- local: model_doc/time_series_transformer
title: Time Series Transformer
title: Time series models
docs/source/en/index.md: 1 change (1 addition, 0 deletions)
@@ -213,6 +213,7 @@ Flax), PyTorch, and/or TensorFlow.
| [OPT](model_doc/opt) | ✅ | ✅ | ✅ |
| [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ |
| [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ |
| [PatchTST](model_doc/patchtst) | ✅ | ❌ | ❌ |
| [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ |
| [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ |
| [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ |
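The documentation changes above register PatchTST under the time series models section and mark it as PyTorch-only in the model index. For the self-supervised task that the masking commits touch (forecast_masking, mask_type, random_mask_ratio, do_mask_input, and the PatchTSTForMaskPretraining rename), a minimal pretraining sketch might look like the following; the class name `PatchTSTForPretraining` and the computation of the reconstruction loss inside `forward` are assumptions read off the commit titles rather than the merged docstrings.

```python
import torch
from transformers import PatchTSTConfig, PatchTSTForPretraining

# Masked-patch reconstruction: a fraction of patches is masked at random and the
# model learns to reconstruct them. mask_type / random_mask_ratio / do_mask_input
# are the option names introduced by the commits in this PR.
config = PatchTSTConfig(
    num_input_channels=7,
    context_length=512,
    patch_length=16,
    patch_stride=8,
    mask_type="random",
    random_mask_ratio=0.4,   # assumed: fraction of patches to mask
    do_mask_input=True,
)
model = PatchTSTForPretraining(config)

past_values = torch.randn(4, config.context_length, config.num_input_channels)
outputs = model(past_values=past_values)

# The MSE reconstruction loss over masked patches is expected on outputs.loss,
# since the "move loss to forward" commit computes losses inside forward.
print(outputs.loss)
```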