Question

I am using `TransformerWordEmbedding` to obtain contextualized embeddings from RoBERTa pre-trained weights, and I would like more clarity about what the `fine_tune` parameter does. What is meant by fine-tunable embeddings in this case? Does it allow backpropagation through all of the layers of the RoBERTa model? If not, is there a way to achieve this?

I am using these contextualized embeddings as pre-processed inputs to my custom GNN model, and ideally I would like to backpropagate all the way back into the RoBERTa layers.
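For reference, here is a minimal sketch of what I am hoping works. It assumes that `fine_tune=True` keeps the transformer's forward pass on the autograd graph (and, as far as I can tell, only while the embedding module is in training mode), and that the underlying Hugging Face model is reachable via the `.model` attribute; the linear layer is just a stand-in for my GNN:

```python
import torch
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

# fine_tune=True: the RoBERTa weights should stay trainable (assumption under test)
embeddings = TransformerWordEmbeddings("roberta-base", fine_tune=True)
embeddings.train()  # gradients appear to be kept only in training mode

sentence = Sentence("Graphs and transformers trained jointly.")
embeddings.embed(sentence)

# Stack per-token embeddings; with fine_tune=True these should still be
# attached to the RoBERTa forward graph rather than detached copies.
token_matrix = torch.stack([token.embedding for token in sentence])

# Stand-in for my custom GNN: any differentiable head will do here.
head = torch.nn.Linear(token_matrix.size(-1), 1)
loss = head(token_matrix).sum()
loss.backward()

# If backprop really reaches RoBERTa, its parameters now carry gradients.
# (Assumption: the wrapped Hugging Face model is exposed as `.model`.)
reached = any(p.grad is not None for p in embeddings.model.parameters())
print("gradients reached RoBERTa:", reached)
```

If this is not the intended use of `fine_tune`, or if the embeddings get detached somewhere along the way, I would appreciate a pointer to the right approach.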