Tokenizing the dataset is very slow

This is my tokenization function. I found that no matter what batch_size is set to, the speed is the same, and tokenization even takes longer than training. What can I do? Thanks very much.

def tokenize_function(example):
    return tokenizer(example["sentence1"], truncation=True, max_length=512)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True, batch_size=8)
tokenized_datasets = tokenized_datasets.remove_columns(["sentence1"])

Hi! What tokenizer are you using, and what does tokenizer.is_fast return? If it returns False, you can set num_proc > 1 to leverage multiprocessing in map, as in the sketch below. Fast tokenizers already use multithreading to process a batch in parallel within a single process, so num_proc doesn't help there.
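
For reference, a minimal sketch of that advice, reusing the tokenize_function and raw_datasets names from your snippet (the worker count is an assumption to tune for your machine):

# Fast (Rust-backed) tokenizers already multithread within a batch,
# so extra worker processes only help slow (pure-Python) tokenizers.
if tokenizer.is_fast:
    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
else:
    tokenized_datasets = raw_datasets.map(
        tokenize_function,
        batched=True,
        num_proc=4,  # assumption: set to roughly your CPU core count
    )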


I’ve checked mine, and I have a fast tokenizer. However, it’s still taking about 20 seconds per example for tokenization, which is too slow.

Here’s the code:

from transformers import AutoTokenizer

base_model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    padding_side="left",
    add_eos_token=True,
    add_bos_token=True,
)
tokenizer.pad_token = tokenizer.eos_token
max_length = 1026

def generate_and_tokenize_prompt(prompt):
    # formatting_func builds the prompt string (defined elsewhere in my script)
    result = tokenizer(
        formatting_func(prompt),
        truncation=True,
        max_length=max_length,
        padding="max_length",
    )
    # Causal LM training: labels are a copy of the input ids
    result["labels"] = result["input_ids"].copy()
    return result

train_dataset = dataset_split['train']
eval_dataset = dataset_split['test']
tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt)
tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt)

Can someone please help me figure out what I’m missing? Thanks.

Feel free to report this issue in the tokenizers repo.
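
If you do file an issue, a self-contained timing snippet makes it easier to reproduce. Here is a minimal sketch using the model ID and settings from your post (the sample text is a made-up placeholder, and the gemma-7b checkpoint is gated, so you need Hub access to download its tokenizer):

import time
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
sample = "lorem ipsum " * 200  # placeholder input of a few hundred tokens

start = time.perf_counter()
tokenizer(sample, truncation=True, max_length=1026, padding="max_length")
print(f"one example took {time.perf_counter() - start:.3f}s")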
