How are embeddings for new tokens learned by a transformer?

When we call tokenizer.add_tokens([list_of_tokens]), what will the embeddings of those new tokens be, and how do they get learned?
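For context, here is my current understanding as a minimal pure-Python sketch (this is not the actual `transformers` implementation): after `tokenizer.add_tokens(...)` you call `model.resize_token_embeddings(len(tokenizer))`, which appends new rows to the embedding matrix. Those rows start out as newly initialized values (randomly, or by some heuristic such as the mean of the existing embeddings) and are then updated by gradient descent during fine-tuning like any other parameter. The function name and the mean-initialization strategy below are illustrative choices, not the library's defaults.

```python
import random

random.seed(0)

def append_new_token_rows(embedding_matrix, num_new_tokens):
    """Append rows for new tokens to a (vocab_size x dim) matrix.

    Each new row is initialized near the mean of the existing
    embeddings -- one common strategy; frameworks often default to
    plain random initialization instead. Training then updates these
    rows exactly like the pre-existing ones.
    """
    dim = len(embedding_matrix[0])
    mean_row = [
        sum(row[d] for row in embedding_matrix) / len(embedding_matrix)
        for d in range(dim)
    ]
    for _ in range(num_new_tokens):
        # small random jitter so the new rows are not identical
        embedding_matrix.append(
            [v + random.uniform(-0.01, 0.01) for v in mean_row]
        )
    return embedding_matrix

# toy vocabulary of 4 tokens, embedding dimension 3
emb = [[0.1, 0.2, 0.3],
       [0.4, 0.5, 0.6],
       [0.7, 0.8, 0.9],
       [1.0, 1.1, 1.2]]

emb = append_new_token_rows(emb, 2)  # add 2 new tokens
print(len(emb))      # 6 rows: vocabulary grew from 4 to 6
print(len(emb[-1]))  # 3: new rows have the same dimension
```

In actual Hugging Face usage the two relevant calls are `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`; the question is what the freshly added rows contain before any fine-tuning.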