Recurrent neural network #2445

Answered by laggui
wangjiawen2013 asked this question in Q&A
Oct 29, 2024 · 4 comments · 2 replies

We have transformer blocks. Autoencoders, as with a lot of the other networks in the list, are just combinations of layers that we already support (e.g., convolutions) with a specific structure and loss.
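For illustration, here is a minimal sketch of how an autoencoder could be composed purely from modules Burn already ships, assuming Burn's `nn::Linear`/`LinearConfig` and `Relu` API (exact `init` signatures can differ between releases); the 784 → 32 → 784 dimensions are arbitrary placeholders, and training would simply pair the reconstruction with something like an MSE loss.

```rust
use burn::{
    module::Module,
    nn::{Linear, LinearConfig, Relu},
    tensor::{backend::Backend, Tensor},
};

/// A toy fully connected autoencoder built only from existing Burn modules.
#[derive(Module, Debug)]
pub struct AutoEncoder<B: Backend> {
    encoder: Linear<B>,
    decoder: Linear<B>,
    activation: Relu,
}

impl<B: Backend> AutoEncoder<B> {
    /// Initialize encoder/decoder with placeholder dimensions (784 -> 32 -> 784).
    pub fn new(device: &B::Device) -> Self {
        Self {
            encoder: LinearConfig::new(784, 32).init(device),
            decoder: LinearConfig::new(32, 784).init(device),
            activation: Relu::new(),
        }
    }

    /// Encode then decode; during training the reconstruction is compared
    /// to the input with a reconstruction loss (e.g. MSE).
    pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let latent = self.activation.forward(self.encoder.forward(input));
        self.decoder.forward(latent)
    }
}
```

The same idea applies to the other architectures in the list: the building blocks (linear, convolution, attention, activations, losses) already exist, and the "architecture" is just how you wire them together in your `Module`.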

I'm not very familiar with spiking neural networks, but I think that's an entirely different type of application which might not be well suited to what we offer.

This discussion was converted from issue #2437 on October 31, 2024 12:31.