BERT embedding layer

I have precomputed specific word embeddings and want to feed them into a BERT model:

self.bert = BertModel.from_pretrained('bert-base-uncased')
self.bert(inputs_embeds=x, attention_mask=attention_mask, *args, **kwargs)

Does this mean I'm replacing all of BERT's input embeddings (token + position + segment embeddings)?

How can I combine all embeddings, i.e., token + position + segment + custom embeddings?


As you can see here, if you provide inputs_embeds yourself, it only replaces the token (word) embeddings. The token type (segment) and position embeddings are still added by the model internally.
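So to get token + position + segment + custom embeddings, you can look up the word embeddings yourself, add your custom embeddings on top, and pass the sum as inputs_embeds; BERT then adds position and token type embeddings as usual. A minimal sketch (the wrapper class and its names are hypothetical, not part of the transformers API):

```python
import torch
from transformers import BertModel


class BertWithCustomEmbeddings(torch.nn.Module):
    """Hypothetical wrapper: adds custom embeddings on top of BERT's
    word embeddings before the position/segment embeddings are applied."""

    def __init__(self, bert: BertModel):
        super().__init__()
        self.bert = bert

    def forward(self, input_ids, custom_embeds, attention_mask=None):
        # 1. Look up the token (word) embeddings manually.
        word_embeds = self.bert.embeddings.word_embeddings(input_ids)
        # 2. Add the custom embeddings (same shape: batch x seq x hidden).
        inputs_embeds = word_embeds + custom_embeds
        # 3. Pass the sum via inputs_embeds; BERT internally adds the
        #    position and token type (segment) embeddings, so the final
        #    input is token + position + segment + custom.
        return self.bert(inputs_embeds=inputs_embeds,
                         attention_mask=attention_mask)


# Usage with the pretrained checkpoint from the question:
# model = BertWithCustomEmbeddings(BertModel.from_pretrained('bert-base-uncased'))
```

This assumes your custom embeddings already have BERT's hidden size (768 for bert-base-uncased); otherwise project them with a linear layer first.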