Issue with lags_seq with Weekly data input #83
Comments
Also, I have a question: when freq_str is "Q", the offset is <QuarterEnd: startingMonth=12> with offset.n = 1. Does that mean it will use the 1st, 8th, 9th, 11th, 12th, and 13th past data points of my first target value?
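For reference, the quarterly lag indices can be printed directly from GluonTS (a quick sketch; the exact list depends on the installed GluonTS version, so check rather than rely on memory):

```python
from gluonts.time_feature import get_lags_for_frequency

# Lag indices GluonTS derives for a quarterly frequency.
print(get_lags_for_frequency("Q"))
```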
Hi, thank you.
I am guessing that lag-llama will omit the D, H, T, and S lags automatically if your data frequency is weekly.
I was thinking the same, too, but I checked the code: when it builds the prediction_splitter, it takes self.context_length (32) + max(self.lags_seq) (1092) data points. So I'm confused here.
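To make that arithmetic concrete, here is a small sketch using the numbers from the module config printed later in this issue:

```python
# Values taken from the module config shown below in this issue.
context_length = 32
max_lag = 1092                      # max(lags_seq) for the full lag set

required_history = context_length + max_lag
print(required_history)             # 1124 past values requested by the splitter
```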
Hi! tl;dr: Irrespective of your frequency, lag-llama uses the lags of all frequencies, so you should never change lags_seq.

Lag-llama was trained with an initial linear layer mapping all lags (from all frequencies). These lag indices, computed from all frequencies, finally are these:

The other alternative is to train a model from scratch on your own data, with your specific frequencies. This is only possible if you have a large amount of data in your case. Or, you could re-train on the datasets we trained on, with just the frequencies you care about for your downstream use cases.
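As a hedged sketch (not the actual list or code from lag-llama) of how lag indices from several frequencies could be merged with GluonTS, where the frequency list below is an assumption:

```python
from gluonts.time_feature import get_lags_for_frequency

# Collect lag indices for several frequencies and merge them into one
# sorted, de-duplicated list, similar in spirit to an all-frequency lag set.
freqs = ["Q", "M", "W", "D", "H", "T", "S"]   # assumed frequency list
all_lags = sorted({lag for f in freqs for lag in get_lags_for_frequency(f)})
print(len(all_lags), all_lags[:10])
```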
Thank you so much. |
Exactly. And note that you don't need 1124 points in your dataset. It uses them if they're available; otherwise it just uses nothing in their place, and can still forecast.
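As an illustration of that point (the left zero-padding shown here is an assumption used for the sketch, not lifted from the lag-llama code):

```python
import numpy as np

# A series shorter than context_length + max(lags_seq) still works; the
# missing history is effectively treated as absent (illustrated with zeros).
required_history = 32 + 1092            # 1124, from the config in this issue
series = np.random.randn(200)           # e.g. ~4 years of weekly observations
padded = np.pad(series, (required_history - len(series), 0))
print(padded.shape)                     # (1124,)
```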
Hi,
My data is weekly, as you can see here, so I set freq = "7D".
I think it makes sense to set lags_seq = ["Q", "M", "W", "D"] in LagLlamaEstimator because I don't have second, minute (T), or hourly data.
Now my module is:
create_lightning_module {'input_size': 1, 'context_length': 32, 'max_context_length': 2048, 'lags_seq': [0, 7, 8, 10, 11, 12, 13, 14, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 34, 35, 36, 50, 51, 52, 55, 83, 102, 103, 104, 154, 155, 156, 362, 363, 364, 726, 727, 728, 1090, 1091, 1092], 'n_layer': 8, 'n_embd_per_head': 16, 'n_head': 9, 'scaling': 'robust', 'distr_output': gluonts.torch.distributions.studentT.StudentTOutput(), 'num_parallel_samples': 100, 'rope_scaling': None, 'time_feat': True, 'dropout': 0.0}
In total, lags_seq has 42 entries.
But I got this error:
RuntimeError: Error(s) in loading state_dict for LagLlamaLightningModule:
size mismatch for model.transformer.wte.weight: copying a param with shape torch.Size([144, 92]) from checkpoint, the shape in current model is torch.Size([144, 50]).
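For what it's worth, the numbers in the error line up with the lag count: 144 = n_head (9) × n_embd_per_head (16) is the embedding size, and the second dimension of model.transformer.wte.weight is the per-step input feature count (92 in the checkpoint vs. 50 with the custom 42-entry lags_seq). A back-of-the-envelope check, assuming the non-lag per-step features are identical in both configurations (an inference from the error message, not read from the lag-llama source):

```python
# Feature-count arithmetic inferred from the error message above.
custom_in_features = 50          # current model: 42 custom lags + other per-step features
checkpoint_in_features = 92      # pretrained checkpoint
custom_lags = 42

other_features = custom_in_features - custom_lags           # 8 non-lag features
pretrained_lags = checkpoint_in_features - other_features   # 84 lags in the checkpoint
print(other_features, pretrained_lags)
```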