I think the input dimension of the scale and translate neural networks $s$ and $t$ may be incorrect.
The original RealNVP paper states that the networks $s$ and $t$ map $\mathbb{R}^d \rightarrow \mathbb{R}^{D-d}$ for some $d < D$. In this code, the data (moons and normal) has $D = 2$, yet the code defines the input layers of $s$ and $t$ with dimension $d = 2$ instead of $d = 1 < D$. I believe this is a mistake that has gone undetected despite the popularity of this repo.
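For concreteness, here is a minimal sketch of what I mean (assuming PyTorch, which this repo uses; the hidden width, layer names, and partition are illustrative, not the repo's actual code). With the split formulation, $s$ and $t$ take $d = 1$ inputs and produce $D - d = 1$ outputs for $D = 2$ data:

```python
import torch
import torch.nn as nn

# Affine coupling layer for D = 2 data with the split formulation:
# x is partitioned into x[:, :d] and x[:, d:], so s and t map
# R^d -> R^(D-d) with d = 1 (hypothetical sizes, for illustration).
D, d = 2, 1

s = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, D - d))
t = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, D - d))

def coupling_forward(x):
    x1, x2 = x[:, :d], x[:, d:]          # x1 in R^d, x2 in R^(D-d)
    y1 = x1                              # identity on the first partition
    y2 = x2 * torch.exp(s(x1)) + t(x1)   # affine transform of the second
    return torch.cat([y1, y2], dim=1)

x = torch.randn(8, D)
y = coupling_forward(x)                  # shape (8, 2)
```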
[Screenshot from this code example]

[Screenshot from the paper]
This mistake is easy to make because a later equation in the paper suggests that the input dimension of $s$ and $t$ should be $D$ rather than some $d < D$: the terms $s(b \cdot x)$ and $t(b \cdot x)$ are misleading because, although $b \cdot x \in \mathbb{R}^D$, we actually want to pass in only the non-masked elements of $x$ (which lie in $\mathbb{R}^d$).
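To make the two readings concrete, here is a hedged sketch (again assuming PyTorch; the mask, layer shapes, and names are illustrative) contrasting a network fed the full masked vector $b \cdot x \in \mathbb{R}^D$ with one fed only the $d$ unmasked elements:

```python
import torch
import torch.nn as nn

D = 2
b = torch.tensor([1.0, 0.0])     # binary mask: keep x[0], transform x[1]

# Masked reading of the equation: b * x stays D-dimensional (zeros in the
# masked slots), so a network written as s(b * x) needs input dimension D.
s_masked = nn.Linear(D, D)

# Split reading: pass only the unmasked elements, so input dimension is d = 1.
s_split = nn.Linear(1, 1)

x = torch.randn(8, D)
out_masked = s_masked(b * x)     # b * x in R^D, zeroed where masked
out_split = s_split(x[:, :1])    # x[:, :1] in R^d
```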
If I'm making a mistake, please let me know!