How will a new token be learned by a transformer?
When we call tokenizer.add_tokens(list_of_tokens),
what will the embeddings of those new tokens be?