Hi,
Thanks for adding the vae_text example. I have a question about MLPTransformConnector: it expects an input of size self.encoder.cell.hidden_size*2, and I don't understand why the size is multiplied by 2. Shouldn't we just use h of UnidirectionalRNNEncoder as the input to MLPTransformConnector instead of (h, c)?
Hi, thanks for the issue! Since the example is ported from Texar-TF, I checked the original code, and indeed only h is used there. Sorry for this careless mistake; I guess it slipped in because UnidirectionalRNNEncoder in Texar-TF doesn't return the cell state by default, while in Texar-PyTorch it does.
However, I feel that using either h alone or both (h, c) as input shouldn't really affect the model's results, though I'm not certain about this. Does your previous experience with VAEs suggest that one of the two is better?
I don't think it will have a big impact on the performance but I think it's worth fixing because someone might use this implementation in a paper. And don't feel bad about it, you guys are doing an amazing job.
Thanks for the issue. Using either h alone or (h, c) combined is legitimate (different papers implement it in different ways). But it'd be good to make this clear in the docs, and probably to add the option of using h only.
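To illustrate why the two choices differ only in the connector's input width, here is a minimal sketch using a single dense layer as a stand-in for MLPTransformConnector (the layer names and dimensions are hypothetical, not Texar-PyTorch's actual API). Concatenating the LSTM's (h, c) pair doubles the input width to 2 * hidden_size, while feeding h alone keeps it at hidden_size; either way, the connector can map to the same latent-parameter size:

```python
import numpy as np

hidden_size = 256   # assumed encoder cell size
latent_dim = 32     # assumed VAE latent dimension

rng = np.random.default_rng(0)

# Stand-in weight matrices for the connector MLP, mapping the encoder
# state to the concatenated [mu; log-var] vector of the latent Gaussian.
W_hc = rng.standard_normal((hidden_size * 2, latent_dim * 2))  # (h, c) variant
W_h = rng.standard_normal((hidden_size, latent_dim * 2))       # h-only variant

h = rng.standard_normal((1, hidden_size))  # final LSTM hidden state
c = rng.standard_normal((1, hidden_size))  # final LSTM cell state

# (h, c) variant: input width is hidden_size * 2, hence the "* 2" in the example.
out_hc = np.concatenate([h, c], axis=-1) @ W_hc

# h-only variant: input width is just hidden_size.
out_h = h @ W_h

# Both variants produce latent parameters of the same shape.
print(out_hc.shape, out_h.shape)
```

So switching to h-only just means changing the connector's declared input size from self.encoder.cell.hidden_size*2 to self.encoder.cell.hidden_size and dropping the concatenation.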