Dear developers of Neur2SP,
First, I would like to thank you for the great contribution you've made with this work and, above all, for sharing it so that the rest of us can learn from it.
I am a first-year PhD student trying to understand how the scenario embedding process works. Specifically, I would like to know how the training data for the embedding networks is generated (i.e., how y_tr and y_val are produced).
My coding skills are far from superb, but as far as I understand from the code, the function "load_data_expected" (in train_model.py) reads this piece of information directly from the corresponding data file of each case study.
The paper clearly explains that scenarios are first projected into a latent space and that the K resulting embeddings are then mean-aggregated. I am afraid I am a bit lost on how to perform the former step. Any reference or clarification on how the training data for the NN-E embedding networks is generated would therefore be highly appreciated.
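To make sure I am reading the paper correctly, here is a minimal sketch of my current understanding of the projection-and-aggregation step. The class name, layer sizes, and scenario dimension below are my own assumptions for illustration, not taken from your repository:

```python
import torch
import torch.nn as nn


class ScenarioEmbedder(nn.Module):
    """Hypothetical sketch: embed each scenario with a shared MLP, then mean-pool over K scenarios."""

    def __init__(self, scenario_dim: int, embed_dim: int):
        super().__init__()
        # Shared network applied to every scenario independently (the "projection into latent space" step).
        self.mlp = nn.Sequential(
            nn.Linear(scenario_dim, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, scenarios: torch.Tensor) -> torch.Tensor:
        # scenarios: (batch, K, scenario_dim)
        per_scenario = self.mlp(scenarios)   # (batch, K, embed_dim)
        return per_scenario.mean(dim=1)      # mean aggregation over the K embeddings -> (batch, embed_dim)


# Example: a batch of 8 instances, each with K = 50 scenarios of dimension 10.
embedder = ScenarioEmbedder(scenario_dim=10, embed_dim=16)
pooled = embedder(torch.randn(8, 50, 10))
print(pooled.shape)  # torch.Size([8, 16])
```

If this roughly matches the intended NN-E architecture, then what I am missing is only how the training targets for such a network are obtained.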
Thanks in advance for your attention.
Kind regards