Getting "using the `__call__` method is faster" warning with DataCollatorWithPadding

When I use the out-of-the-box DataCollatorWithPadding, my output gets filled with the warning:

You’re using a DebertaV2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.

If I change to use a different, custom collator, then the warning goes away.
Would anyone know what I could be doing wrong that’s causing this warning?

Or alternatively, if I can’t fix the problem that’s causing this warning, is there a way to hide it?
I’ve tried a few different ways of turning off warnings, but so far I’ve had no luck and because it gets written out multiple times it starts to swamp the actual output from my training.

I’m getting the same thing using a BertTokenizerFast with DataCollatorWithPadding - the warning appears once for each worker every time I loop over a DataLoader. I would prefer not to silence warnings in my training code, but here’s how I’m getting around it (based on this line in the PreTrainedTokenizerBase class, referencing this section in the custom logger):

import os
# Suppresses transformers' "advisory" warnings, including this one
os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true'
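For anyone wondering why that environment variable works: here's a simplified pure-Python sketch of the gate transformers applies to advisory warnings (the `warning_advice` function below is a stand-in to illustrate the idea, not the library's actual code, which lives in its custom logger):

```python
import os
import warnings

def warning_advice(message: str) -> None:
    # Sketch of the gate: if TRANSFORMERS_NO_ADVISORY_WARNINGS is set
    # to a truthy string, the advisory message is dropped entirely
    # instead of being emitted.
    if os.environ.get("TRANSFORMERS_NO_ADVISORY_WARNINGS"):
        return
    warnings.warn(message)

os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "true"
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warning_advice("You're using a fast tokenizer...")

print(len(caught))  # 0 -> the advisory warning was suppressed
```

Because the check reads the environment variable each time, setting it anywhere before the warning would fire is enough to silence it.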

Same situation here - it happened with a DistilBertTokenizerFast tokenizer when using DataCollatorWithPadding. I don’t know what this warning actually means. And if I wanted to follow the suggestion hinted at in the warning message, what should I do?

I am getting the following suggestion while using the NllbTokenizerFast tokenizer:
You’re using a NllbTokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.

Can anyone explain the given suggestion?
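As I understand it, the warning is about how the batch gets built: encoding each text individually and then padding the batch afterwards does the work in two passes, whereas a single `tokenizer(texts, padding=True)` call tokenizes and pads in one batched step, which the Rust-backed fast tokenizers can do much more efficiently. Here is a pure-Python sketch of the two-step path the warning flags, where `encode`, `pad`, and `PAD_ID` are hypothetical stand-ins for `tokenizer.encode`, `tokenizer.pad`, and the pad token id:

```python
PAD_ID = 0  # placeholder pad token id

def encode(text):
    # Stand-in for tokenizer.encode: map each word to a fake token id.
    return [len(word) for word in text.split()]

def pad(batch_ids, pad_id=PAD_ID):
    # Stand-in for tokenizer.pad: pad every sequence to the batch max
    # length, which is what DataCollatorWithPadding does per batch.
    longest = max(len(ids) for ids in batch_ids)
    return [ids + [pad_id] * (longest - len(ids)) for ids in batch_ids]

texts = ["a short one", "a somewhat longer example sentence"]

# The two-step path the warning flags: per-text encode, then a separate pad.
batch = pad([encode(text) for text in texts])

# With a fast tokenizer, the equivalent one-step call would be, roughly:
#   batch = tokenizer(texts, padding=True, return_tensors="pt")

print(batch)  # [[1, 5, 3, 0, 0], [1, 8, 6, 7, 8]]
```

Note that if your texts are tokenized ahead of time (e.g. in a dataset `map`) and only padded per batch by the collator, the two-step split is often intentional, and the warning can be treated as advisory.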