The difference between Tutorial A "use word embeddings in the pretrained BERT model" and Tutorial C "BERT + LSTM / BLSTM-CRF / enc-dec focus models"
#11
Open
LXM-Emily opened this issue on Oct 17, 2022 · 2 comments
Hello, your work is excellent. However, I have some questions about the code and would like to ask for your help. BERT is used in Tutorial A to obtain word embeddings, and the BERT + BLSTM model in Tutorial C also obtains word embeddings. What is the difference between the two? And if I want to use BERT to get word embeddings and a BLSTM as the base model, should I follow Tutorial A or Tutorial C? I would be very happy if you could reply.
Thank you very much for your reply and advice. Can I understand it this way? Tutorial A takes the mean of the word vectors over all occurrences of the same word in the dataset, so each word corresponds to a single word vector, while Tutorial C takes a sentence as the unit and feeds it into BERT to obtain the word vectors for that sentence. Thank you again!
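For readers landing on this issue, here is a minimal sketch of the distinction as described above, not the tutorials' actual code. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint; the variable names and the single example sentence are purely illustrative.

```python
# Sketch: "averaged per-word vectors" (Tutorial A style, as understood above)
# versus "per-sentence contextual vectors fed into a BiLSTM" (Tutorial C style).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

sentence = "The bank raised interest rates"

with torch.no_grad():
    enc = tokenizer(sentence, return_tensors="pt")
    # Contextual token embeddings for this one sentence: (1, seq_len, 768).
    hidden = bert(**enc).last_hidden_state

# Tutorial-A style: accumulate the vector of each word every time it occurs
# in the corpus and average them, so each word type ends up with one fixed
# vector that can later be used like a lookup-table embedding.
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
word_vectors = {}  # token -> list of contextual vectors seen so far
for tok, vec in zip(tokens, hidden[0]):
    word_vectors.setdefault(tok, []).append(vec)
static_embedding = {w: torch.stack(vs).mean(dim=0) for w, vs in word_vectors.items()}

# Tutorial-C style: keep the sentence's contextual vectors as they are and
# pass the whole sequence into a BiLSTM that sits on top of BERT.
blstm = torch.nn.LSTM(input_size=768, hidden_size=256,
                      batch_first=True, bidirectional=True)
blstm_out, _ = blstm(hidden)  # (1, seq_len, 512): one contextual vector per token
```

In the first case the embedding of a word no longer depends on the sentence it appears in; in the second case every token vector stays context-dependent and the BLSTM (or BLSTM-CRF) consumes the whole sequence, which is the usual setup when the BLSTM is the base model.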