Help understanding how to build a dataset for language as with the old TextDataset


I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file with a whole document on each line, meaning that each line exceeds the usual 512-token limit of most models.

I would like to understand the process of building a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class did. With TextDataset you only had to do the following, and a tokenized dataset without text loss was available to pass to a DataCollator:

model_checkpoint = 'distilbert-base-uncased'

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

from transformers import TextDataset

dataset = TextDataset(
    tokenizer=tokenizer,
    file_path='path/to/text_file.txt',
    block_size=512,
)

For now, what I have is the following, which of course throws an error/warning because each line is longer than the model's maximum sequence length:

import datasets
dataset = datasets.load_dataset('text', data_files='path/to/text_file.txt')

model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

def tokenize_function(examples):
    return tokenizer(examples["text"])

tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])


So what would be the "standard" way of creating a dataset in the way it was done before?

Thank you very much for the help :))

Hi !

If you want to tokenize line by line, you can use this:

max_seq_length = 512
num_proc = 4

def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=max_seq_length,
        return_special_tokens_mask=True,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    num_proc=num_proc,
    remove_columns=["text"],
)

Though the TextDataset did a different processing: it concatenated all the texts and built blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:

# Main data processing function that will concatenate all texts from
# our dataset and generate chunks of max_seq_length.
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, we could add padding if the model supported it instead of this drop,
    # you can customize this part to your needs.
    total_length = (total_length // max_seq_length) * max_seq_length
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated_examples.items()
    }
    return result

# Note that with `batched=True`, this map processes 1,000 texts together,
# so group_texts throws away a remainder for each of those groups of 1,000 texts.
# You can adjust that batch_size here but a higher value might be slower to preprocess.

tokenized_dataset = tokenized_dataset.map(
    group_texts,
    batched=True,
    num_proc=num_proc,
)

This code comes from the data processing of the example scripts of transformers.
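To see what group_texts actually does, here is a minimal, self-contained sketch (not from the original answer) that runs the same chunking logic on a toy batch, using max_seq_length = 4 instead of 512 for readability:

```python
max_seq_length = 4

def group_texts(examples):
    # Concatenate all token lists per column, then split into fixed-size chunks,
    # dropping the remainder, exactly as in the function above.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // max_seq_length) * max_seq_length
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]}
print(group_texts(batch))
# {'input_ids': [[1, 2, 3, 4], [5, 6, 7, 8]]}
```

Note how the tokens 9 and 10 are dropped: they are the "small remainder" mentioned in the comment above.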

Thanks, this was what I was looking for!! :hugs:

I managed to find this code in this example, but I was not sure how to adapt it.

Thank you for this @lhoestq. Could you please explain what is the benefit of doing this -

# We drop the small remainder, we could add padding if the model supported it instead of this drop,
# you can customize this part to your needs.
total_length = (total_length // max_seq_length) * max_seq_length

Why not just skip that statement ? Thank you.

Hi ! This statement makes all your input samples have the same length, equal to max_seq_length.
It crops the end of each batch; otherwise you would end up with a final sample smaller than max_seq_length.

So you can remove this statement, but in that case you may need to apply padding to the last sample to make it have a length of max_seq_length.
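As a hedged sketch of that padding option (this variant is not in the thread, it is one possible adaptation): keep the remainder chunk and right-pad it with the tokenizer's pad token id. Here pad_id = 0 and max_seq_length = 4 are stand-in values; in practice you would use tokenizer.pad_token_id and your real sequence length.

```python
max_seq_length = 4
pad_id = 0  # assumption: stand-in for tokenizer.pad_token_id

def group_texts_with_padding(examples):
    # Same concatenation as group_texts, but the last chunk is kept
    # and padded to max_seq_length instead of being dropped.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    result = {}
    for k, t in concatenated.items():
        chunks = [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        if chunks and len(chunks[-1]) < max_seq_length:
            chunks[-1] = chunks[-1] + [pad_id] * (max_seq_length - len(chunks[-1]))
        result[k] = chunks
    return result

print(group_texts_with_padding({"input_ids": [[1, 2, 3], [4, 5, 6]]}))
# {'input_ids': [[1, 2, 3, 4], [5, 6, 0, 0]]}
```

Note that if you pad input_ids this way you would also want a matching attention_mask column (padded with 0) so the model ignores the pad positions.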

1 Like