I am also exploring the log_sample() method in the code. Yes, it is used to reconstruct samples at the end of each training epoch, and the reconstructed samples are written to checkpoint/name/sample. These samples are also logged to TensorBoard alongside the corresponding batch of real images.
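For reference, here is a minimal sketch of what I understand an epoch-end logging routine like this to do. The names model.encoder, sampler.sample, and the directory layout are my own assumptions for illustration, not the repo's actual API:

```python
import os
import torch
from torchvision.utils import save_image, make_grid
from torch.utils.tensorboard import SummaryWriter

@torch.no_grad()
def log_sample(model, sampler, real_batch, writer: SummaryWriter, epoch: int,
               out_dir="checkpoint/name/sample"):
    # Hypothetical sketch: reconstruct a fixed real batch at the end of an epoch,
    # save the grid next to the checkpoints, and log both grids to TensorBoard.
    os.makedirs(out_dir, exist_ok=True)
    cond = model.encoder(real_batch)           # semantic code from the real images (assumed name)
    recon = sampler.sample(model, cond=cond)   # conditional reconstruction (assumed signature)
    recon_grid = make_grid(recon, normalize=True)
    real_grid = make_grid(real_batch, normalize=True)
    save_image(recon_grid, os.path.join(out_dir, f"epoch_{epoch}.png"))
    writer.add_image("sample/recon", recon_grid, epoch)
    writer.add_image("sample/real", real_grid, epoch)
```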
I understand that autoencoding is being performed on a sampled batch of images. I am, however, unsure where those original images are used to condition the generation. In other words, I was expecting cond = encoder(x_start) to be passed as a condition to the DDIM sampler, but I don't see where that happens, and yet the generated images are reconstructions of the sampled real image batch.
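To make my expectation concrete, here is a rough sketch of the conditioning path I had in mind, with assumed names such as eps_model and encoder rather than the actual classes in this repo. The encoder output would be fed into every denoising step of the DDIM loop:

```python
import torch

@torch.no_grad()
def ddim_reconstruct(eps_model, encoder, x_start, alphas_cumprod, timesteps):
    # Hypothetical conditional DDIM reconstruction (eta = 0, deterministic steps).
    cond = encoder(x_start)             # semantic code from the real image
    x_t = torch.randn_like(x_start)     # (or the DDIM-inverted x_T for an exact reconstruction)
    for i in reversed(range(len(timesteps))):
        t = timesteps[i]
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[timesteps[i - 1]] if i > 0 else torch.tensor(1.0)
        eps = eps_model(x_t, t, cond=cond)                         # <-- conditioning I expected
        x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean image
        x_t = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps  # deterministic DDIM update
    return x_t
```

My question is essentially where the equivalent of the cond=cond argument is injected in this codebase.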
I am not sure if I have explained the problem clearly, but could anyone please help clarify this?
Thanks for sharing the code for this project!
I would like to understand what log_sample() does after batch training.
Is it used to reconstruct samples at the end of each training epoch and then save the reconstructed samples to the checkpoint/name/sample folder?