[HELP] RuntimeError: CUDA error - when training my model?

Hello everyone,

I am encountering an error when training my language model from scratch, having trained a tokenizer beforehand.

I have just trained a WordPiece tokenizer from scratch (like BERT's), following this notebook: notebooks/tokenizer_training.ipynb at master · huggingface/notebooks · GitHub

I then saved the tokenizer using this code:

new_tokenizer.save_pretrained("/content/drive/MyDrive/my-new-tokenizer")

Thus, the folder structure of my-new-tokenizer looks something like this:

vocab.txt
tokenizer.json
tokenizer_config.json
special_tokens_map.json

After training my tokenizer from scratch, I followed the notebook to train a language model from scratch - this notebook: notebooks/language_modeling_from_scratch.ipynb at master · huggingface/notebooks · GitHub

I then executed the following code from that notebook:

from datasets import load_dataset

You can replace the dataset with any dataset hosted on [the hub](https://huggingface.co/datasets) or use your own files; here I replaced the paths so that they point to my own CSV file:

datasets = load_dataset('csv', data_files={'train': ['/content/drive/MyDrive/data.csv'],
                                              'validation': '/content/drive/MyDrive/data.csv'})

You can also load datasets from a CSV or a JSON file; see the [full documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) for more information.

To access an actual element, you need to select a split first, then give an index:

datasets["train"][10]

To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.

from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    
    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))

show_random_elements(datasets["train"])

As we can see, some of the texts are a full paragraph of a Wikipedia article while others are just titles or empty lines.

## Causal Language modeling

For causal language modeling (CLM), we are going to take all the texts in our dataset and concatenate them after they are tokenized. Then we will split them into chunks of a certain sequence length. This way the model will receive chunks of contiguous text that may look like:

part of text 1

or 

end of text 1 [BOS_TOKEN] beginning of text 2

depending on whether they span over several of the original texts in the dataset or not. The labels will be the same as the inputs, shifted to the left.
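To make the chunking concrete, here is a tiny toy illustration (my own sketch with made-up token ids, not part of the notebook) of how concatenated tokens get cut into fixed-size blocks:

```python
# Toy illustration (not from the notebook): concatenate two tokenized texts,
# then cut the result into fixed-size blocks, dropping the remainder.
tokens_text_1 = [5, 6, 7]
tokens_text_2 = [8, 9, 10, 11, 12, 13]
concatenated = tokens_text_1 + tokens_text_2

demo_block_size = 4
usable_length = (len(concatenated) // demo_block_size) * demo_block_size
chunks = [concatenated[i:i + demo_block_size] for i in range(0, usable_length, demo_block_size)]
print(chunks)  # [[5, 6, 7, 8], [9, 10, 11, 12]] - token 13 is dropped as the remainder
```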

We will use the [`gpt2`](https://huggingface.co/gpt2) architecture for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=causal-lm) instead. For the tokenizer, you can replace the checkpoint by the one you trained yourself.

model_checkpoint = "gpt2"
tokenizer_checkpoint = "/content/drive/MyDrive/Train Tokenizer and LM /Tokenizer/my-new-tokenizer"

To tokenize all our texts with the same vocabulary the model will use, we have to load a pretrained tokenizer — in my case, the one I trained above. This is all done by the `AutoTokenizer` class:

from transformers import AutoTokenizer
    
tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)
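A quick sanity check that can be run at this point (my own addition, not part of the notebook) is to print the vocabulary size and tokenize a short string:

```python
# Quick sanity check of the freshly loaded tokenizer (hypothetical extra step).
print(len(tokenizer))                           # vocabulary size of the new tokenizer
print(tokenizer("just a short test sentence"))  # should return input_ids, attention_mask, ...
```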

We can now call the tokenizer on all our texts. This is very simple, using the [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) method from the Datasets library. First we define a function that calls the tokenizer on our texts:

def tokenize_function(examples):
    return tokenizer(examples["Tweets"])

Then we apply it to all the splits in our `datasets` object, using `batched=True` and 4 processes to speed up the preprocessing. We won't need the `Tweets` column afterward, so we discard it.

tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["Tweets"])

If we now look at an element of our datasets, we will see that the texts have been replaced by the `input_ids` the model will need:

tokenized_datasets["train"][1]

Now for the harder part: we need to concatenate all our texts together, then split the result into small chunks of a certain `block_size`. To do this, we will use the `map` method again, with the option `batched=True`. This option lets us change the number of examples in the datasets by returning a different number of examples than we got. This way, we can create our new samples from a batch of examples.

First, we grab the maximum length our model was pretrained with. This might be too big to fit in your GPU RAM, so here we take a bit less, at just 128.

# block_size = tokenizer.model_max_length
block_size = 128

Then we write the preprocessing function that will group our texts:

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding if the model supported it
    # instead of this drop. You can customize this part to your needs.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result

First, note that we duplicate the inputs for our labels. This is because the models of the 🤗 Transformers library apply the shift to the right themselves, so we don't need to do it manually.
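Roughly, the shift inside the model looks like the following sketch (a simplification of what causal LM heads in 🤗 Transformers do, not the exact library code):

```python
from torch.nn import CrossEntropyLoss

def causal_lm_loss(lm_logits, labels):
    # The prediction at position t is scored against the token at position t+1,
    # so the logits drop the last position and the labels drop the first one.
    shift_logits = lm_logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()
    loss_fct = CrossEntropyLoss()
    return loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```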

Also note that, by default, the `map` method will send a batch of 1,000 examples to be treated by the preprocessing function. So here, within each batch of 1,000 examples, we drop the remainder so that the concatenated tokenized texts are a multiple of `block_size`. You can adjust this behavior by passing a higher batch size (which will also be slower to process). You can also speed up the preprocessing by using multiprocessing:

lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    batch_size=1000,
    num_proc=4,
)

And we can check our datasets have changed: now the samples contain chunks of `block_size` contiguous tokens, potentially spanning over several of our original texts.

tokenizer.decode(lm_datasets["train"][1]["input_ids"])

Now that the data has been cleaned, we're ready to instantiate our `Trainer`. First we create the model using the same config as our checkpoint, but initialized with random weights:

from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_config(config)

And we will need some `TrainingArguments`:

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    "test-clm",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
)

The notebook also passes arguments to push the model to the [Hub](https://huggingface.co/models) at the end of training (`push_to_hub` and `push_to_hub_model_id`); I left those out here since I am not pushing to the Hub.

We pass along all of those to the `Trainer` class:

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["validation"],
)

And we can train our model:

trainer.train()

I then get this error:

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

I think the error has something to do with the following code:

model_checkpoint = "gpt2"
tokenizer_checkpoint = "/content/drive/MyDrive/my-new-tokenizer"

I trained my tokenizer on a WordPiece model like BERT, so should the model checkpoint be different?

Thanks!

Hi @anon58275033

I think that could indeed be the issue. Since the tokenizer has a different vocabulary size, this is likely incompatible with the config you are loading, which contains the vocab size of the original model. You can fix it with:

config = AutoConfig.from_pretrained(model_checkpoint, vocab_size=len(tokenizer))
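Concretely, the model-creation cell would then look something like this (a sketch reusing your variable names):

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)

# Override the vocab size so the embedding matrix matches the new tokenizer;
# otherwise token ids from your tokenizer can index past the end of gpt2's
# 50257-entry embedding table and trigger the device-side assert.
config = AutoConfig.from_pretrained(model_checkpoint, vocab_size=len(tokenizer))
model = AutoModelForCausalLM.from_config(config)

print(model.config.vocab_size == len(tokenizer))  # should print True
```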

I hope this helps!

PS: sometimes debugging these CUDA errors can be unreadable, and it can help to execute the code on the CPU for debugging purposes instead (`training_args.device = 'cpu'` should do the trick).
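For example, something along these lines (a sketch; `no_cuda=True` is a `TrainingArguments` flag, and `CUDA_LAUNCH_BLOCKING` is the environment variable mentioned in the error message):

```python
import os

# Make CUDA errors synchronous so the stack trace points at the real failing op
# (this needs to be set before any CUDA work happens).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

from transformers import TrainingArguments

# Or run a short debug pass entirely on CPU, where indexing errors raise
# readable Python exceptions instead of device-side asserts.
debug_args = TrainingArguments(
    "test-clm-debug",
    no_cuda=True,
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
)
```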


@lvwerra Hi, sorry for the late reply. I am still getting the error.