Is there any difference in the tokenized output if I load the tokenizer from a different pretrained model?

So let’s say I do
GPT2TokenizerFast.from_pretrained('gpt2-medium') vs GPT2TokenizerFast.from_pretrained('distilgpt2')

Are there actually any differences in their tokenized output?

In that particular case, I don’t think so, but there are definitely cases where tokenizers of the same model type but from different pretrained checkpoints produce different output. bert-base-uncased vs bert-base-cased is one clear example: the uncased tokenizer lowercases (and strips accents from) its input before tokenizing, while the cased one leaves it untouched, so the resulting tokens differ.
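You can check this yourself with a quick comparison. A minimal sketch (the sample text is arbitrary; it assumes the transformers library is installed and the tokenizer files can be downloaded):

```python
from transformers import GPT2TokenizerFast, BertTokenizerFast

text = "Hello world, this is a Tokenizer test!"

# gpt2-medium and distilgpt2 ship the same BPE vocab and merges,
# so the token IDs should come out identical.
tok_a = GPT2TokenizerFast.from_pretrained("gpt2-medium")
tok_b = GPT2TokenizerFast.from_pretrained("distilgpt2")
print(tok_a(text)["input_ids"] == tok_b(text)["input_ids"])  # expect True

# bert-base-uncased lowercases before tokenizing; bert-base-cased does not,
# so the two produce different tokens for the same input.
tok_u = BertTokenizerFast.from_pretrained("bert-base-uncased")
tok_c = BertTokenizerFast.from_pretrained("bert-base-cased")
print(tok_u.tokenize(text))  # e.g. ['hello', 'world', ...]
print(tok_c.tokenize(text))  # e.g. ['Hello', 'world', ...]
```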


Thank you for your clarification 🙂
