Question

How does the StackedEmbeddings class actually work? Is it concatenating the two word embeddings, i.e. using torch.cat([emb1, emb2])?

I want to concatenate BytePairEmbeddings with TransformerWordEmbeddings, so I'm doing this:
from flair.embeddings import (
    BytePairEmbeddings,
    StackedEmbeddings,
    TransformerWordEmbeddings,
)

bert_emb = TransformerWordEmbeddings(
    model='xlm-roberta-base',
    layers="-1",
    subtoken_pooling="mean",
    fine_tune=True,
    use_context=True,
)
bpe_emb = BytePairEmbeddings('en')
stacked_embeddings = StackedEmbeddings([bert_emb, bpe_emb])
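For context, a minimal usage sketch of the stack above (the sentence text is just a placeholder):

from flair.data import Sentence

sentence = Sentence("stacked embeddings example sentence")
stacked_embeddings.embed(sentence)

# each token now carries a single vector produced by the stack
for token in sentence:
    print(token.text, token.embedding.shape)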
So will the resultant word embeddings (stacked_embeddings) be a concatenation of the two embeddings, an element-wise mean, or something else?

Thank you

Answer

StackedEmbeddings use the stacking/concatenation operator, as their name implies.

Note that element-wise pooling would not be possible, as the embeddings usually don't have the same embedding length.

Sorry, I cannot follow how you came to that conclusion, but I can assure you that StackedEmbeddings work for both TokenEmbeddings and DocumentEmbeddings.
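To make the concatenation concrete, here is a small sketch reusing the names from the snippet above; the dimension numbers are illustrative assumptions, not guaranteed values. The stack's embedding_length is the sum of its parts, i.e. a torch.cat along the feature dimension rather than any pooling:

import torch

# hypothetical per-token vectors with different lengths
transformer_vec = torch.randn(768)   # e.g. xlm-roberta-base hidden size
bpe_vec = torch.randn(100)           # assumed length for BytePairEmbeddings('en')

stacked_vec = torch.cat([transformer_vec, bpe_vec])
print(stacked_vec.shape)             # torch.Size([868]) = 768 + 100

# the same relationship can be checked on the stack itself
print(stacked_embeddings.embedding_length
      == bert_emb.embedding_length + bpe_emb.embedding_length)  # expected: True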