Why does DataCollatorForLanguageModeling need a tokenizer when it only accepts BatchEncodings?

I don't understand the intuition behind this: DataCollatorForLanguageModeling expects a tokenizer, yet it only operates on already-tokenized data (BatchEncoding objects).
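For example, this is roughly how I understand it is meant to be called (the model name and example sentences are just placeholders): the tokenizer is handed over at construction time, but the inputs to the call itself must already be tokenized:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

# The collator only accepts pre-tokenized examples (mappings containing input_ids);
# it pads them and applies masking, it does not tokenize raw strings itself.
features = [tokenizer("hello world"), tokenizer("a longer example sentence")]
batch = collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)
```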

When I passed a raw text dataset to the Trainer together with this data collator, I got the following error:
```
transformers/tokenization_utils_base.py", line 3245, in pad
    raise ValueError(
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided [ ]
```
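Roughly what my setup looked like (model name, file path, and output directory are placeholders, not my exact code):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# A raw text dataset -- the "text" column is never run through the tokenizer
raw_dataset = load_dataset("text", data_files={"train": "train.txt"})

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=raw_dataset["train"],  # raw text, no input_ids
    data_collator=collator,
)
trainer.train()  # fails inside tokenizer.pad() with the ValueError above
```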