Thank you for summarizing these articles; your list is very helpful for my research work.
I recently read two papers on using large language models for time series classification. They inspired me a lot, so I would like to recommend them here.
The first is "InstructTime: Advancing Time Series Classification with Multimodal Language Modeling", which proposes a method called InstructTime that uses GPT-2 to encode time series and achieves better classification results. The paper is available at https://arxiv.org/abs/2403.12371, and the code at https://github.com/Mingyue-Cheng/InstructTime. A rough sketch of the general idea follows.
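To make the idea concrete, here is a minimal, hypothetical sketch, not the paper's actual pipeline (see the linked repo for that): a time series is discretized into integer tokens and fed to a small GPT-2 classifier via the Hugging Face `transformers` API. The bin count, model size, class count, and toy signal are all illustrative assumptions.

```python
# Hypothetical sketch: quantize a time series into tokens, classify with GPT-2.
# Not InstructTime's actual method; bin count / model size are assumptions.
import numpy as np
import torch
from transformers import GPT2Config, GPT2ForSequenceClassification

NUM_BINS = 256     # assumed quantization vocabulary size
NUM_CLASSES = 5    # assumed number of target classes

def quantize(series: np.ndarray, num_bins: int = NUM_BINS) -> torch.Tensor:
    """Map real values to integer token ids via uniform binning."""
    edges = np.linspace(series.min(), series.max(), num_bins - 1)
    return torch.tensor(np.digitize(series, edges), dtype=torch.long)

config = GPT2Config(vocab_size=NUM_BINS, n_layer=2, n_head=2, n_embd=64,
                    num_labels=NUM_CLASSES, pad_token_id=0)
model = GPT2ForSequenceClassification(config)

series = np.sin(np.linspace(0, 8 * np.pi, 128))   # toy input signal
tokens = quantize(series).unsqueeze(0)            # shape: (1, seq_len)
logits = model(input_ids=tokens).logits           # shape: (1, NUM_CLASSES)
print(logits.argmax(dim=-1))
```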
The second paper is "CrossTimeNet: Learning Transferable Time Series Classifier with Cross-Domain Pre-training from Language Model".
This article uses BERT and GPT-2 to encode time series and applies self-supervised pre-training to improve classification performance. The paper is available at https://arxiv.org/pdf/2403.12372, and the code at https://github.com/Mingyue-Cheng/CrossTimeNet. A similar sketch of the self-supervised pre-training idea is shown below.
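In the same spirit, here is a minimal, hypothetical sketch of masked-token pre-training on discretized time series, again not CrossTimeNet's actual method: random token positions are masked and a small BERT is trained to reconstruct them. The vocabulary size, masking rate, and model size are illustrative assumptions.

```python
# Hypothetical sketch: self-supervised masked modeling over time-series tokens.
# Not CrossTimeNet's actual method; all hyperparameters are assumptions.
import torch
from transformers import BertConfig, BertForMaskedLM

NUM_BINS = 256        # assumed discrete token vocabulary
MASK_ID = NUM_BINS    # reserve one extra id as the mask token
MASK_RATE = 0.15      # assumed masking ratio

config = BertConfig(vocab_size=NUM_BINS + 1, hidden_size=64,
                    num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=128)
model = BertForMaskedLM(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

tokens = torch.randint(0, NUM_BINS, (8, 128))       # toy batch of token ids
mask = torch.rand(tokens.shape) < MASK_RATE
inputs = tokens.masked_fill(mask, MASK_ID)          # corrupt masked positions
labels = tokens.masked_fill(~mask, -100)            # loss only on masked ids

loss = model(input_ids=inputs, labels=labels).loss  # reconstruction loss
loss.backward()
optimizer.step()
print(float(loss))
```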
These two articles fit under the "PFMs for Time Series" section.
Could you please add the two papers to your summary list? Thanks a lot.