Thank you for sharing your code for the Keras implementation. I had a question about the dropout layers. In the original paper, dropout is applied only once, after the convolution layer, with a dropout rate of 0.5; the same is true of Denny Britz's implementation. In your implementation, a dropout of 0.5 is added after the embedding layer and a dropout of 0.8 is added after the convolution layer.
I just want to confirm whether this is a deliberate deviation from the above two sources, and if so, what the reasoning behind it was.
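For context, in Keras the argument to `Dropout(rate)` is the fraction of units *dropped*, not kept, so `Dropout(0.8)` zeroes 80% of activations and is a much heavier regularizer than the paper's 0.5. A minimal NumPy sketch of what Keras-style (inverted) dropout does at training time — the function name and shapes here are illustrative, not from the repo:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout, as in Keras' Dropout layer: zero out a
    `rate` fraction of units and scale the survivors by
    1 / (1 - rate) so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob  # True with prob. keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 1000))
y = dropout(x, rate=0.5, rng=rng)
# roughly half the units are zeroed; survivors are scaled up to 2.0,
# so the mean stays close to 1.0
```

At inference time (`training=False`) the layer is an identity, which matches Keras' behavior of only applying dropout during training.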
Thanks!