
Generation of training data for the embedding networks #1

Open
if-phd opened this issue Feb 9, 2024 · 0 comments
if-phd commented Feb 9, 2024

Dear developers of Neur2SP,

First, I would like to thank you for the great contribution you've made with this work and, above all, for sharing it so that the rest of us can learn from it.

I am a first-year PhD student trying to understand how the scenario embedding process works. Specifically, I would like to know how the training data for the embedding networks is generated (i.e., how `y_tr` and `y_val` are produced).

Although my coding skills are far from superb, as far as I understand the code, there is a function called `load_data_expected` (in `train_model.py`) that reads this piece of information directly from the corresponding data file of each case study.

The paper clearly explains that scenarios are first projected into a latent space and that mean aggregation is then performed over the K embeddings. I am afraid I am a bit lost on how to perform the former step. Any reference or clarification on how training data for the NN-E embedding networks is generated would be highly appreciated.
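For reference, my current (possibly wrong) understanding of the aggregation step is sketched below. This is not the Neur2SP code; the network weights, dimensions, and the single ReLU layer are placeholders I made up for illustration. Each of the K scenarios is passed through the same small embedding network, and the resulting latent vectors are averaged into one vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative dimensions (assumed, not from the paper):
# K scenarios, each described by d features, embedded into
# an m-dimensional latent space.
K, d, m = 5, 3, 4

# Shared weights of a one-layer embedding network (placeholder values;
# in practice these would be learned).
W = rng.normal(size=(d, m))
b = np.zeros(m)

scenarios = rng.normal(size=(K, d))    # one row per scenario
embeddings = relu(scenarios @ W + b)   # project each scenario into latent space
aggregated = embeddings.mean(axis=0)   # mean aggregation over the K embeddings

print(aggregated.shape)  # (4,)
```

What I still do not see is where the supervision targets for training these shared weights come from, which is the `y_tr` / `y_val` question above.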

Thanks in advance for the attention.

Kind regards
