I am using BertEmbeddings from transformers.models.bert.modeling_bert for a custom model.
I wasn't sure whether the output already includes the sin/cos positional encoding (from "Attention Is All You Need"), or whether I need to add the position encoding to the output myself.
import torch
from transformers import BertConfig
from transformers.models.bert.modeling_bert import BertEmbeddings

# Note: BertConfig("bert-base-uncased") would pass the model name as vocab_size;
# from_pretrained loads the actual config for that checkpoint
bert_config = BertConfig.from_pretrained("bert-base-uncased")
bert_embeds = BertEmbeddings(bert_config)
text_ids = torch.tensor([[101, 7592, 2088, 102]])  # example token IDs
text_embeddings = bert_embeds(input_ids=text_ids)
text_embeddings  # Does this already include positional embeddings?
# Or do I need something like:
# final_embeddings = text_embeddings + positional_embeddings
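For what it's worth, here is how I tried to probe this myself. This is a minimal sketch: it assumes BertEmbeddings exposes word_embeddings, position_embeddings, token_type_embeddings, and LayerNorm submodules (which is what the current transformers source shows), and the token IDs are just an example. The idea is to rebuild the embedding sum by hand and compare it to the module's output:

import torch
from transformers import BertConfig
from transformers.models.bert.modeling_bert import BertEmbeddings

config = BertConfig.from_pretrained("bert-base-uncased")
embeds = BertEmbeddings(config)
embeds.eval()  # turn off dropout so the two computations are comparable

text_ids = torch.tensor([[101, 7592, 2088, 102]])  # example token IDs

with torch.no_grad():
    out = embeds(input_ids=text_ids)
    # Rebuild the sum by hand from the submodules
    positions = torch.arange(text_ids.size(1)).unsqueeze(0)
    token_types = torch.zeros_like(text_ids)
    manual = (embeds.word_embeddings(text_ids)
              + embeds.position_embeddings(positions)
              + embeds.token_type_embeddings(token_types))
    manual = embeds.LayerNorm(manual)

# If this prints True, the forward pass is already adding position embeddings
print(torch.allclose(out, manual, atol=1e-6))

I'm not certain this check holds across all transformers versions, hence the question.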