IndexError: Target 742 is out of bounds

I'm replicating the text classification on the IMDB dataset tutorial with my own dataset.
When I check my labels, they look correct, but when I run my training code it says one of my labels is out of bounds.
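This is roughly how I check them (the column name "label" is an assumption carried over from the tutorial); it prints the set shown at the bottom of this post.

# Assumed check, mirroring the tutorial's "label" column name
print("Unique labels:", set(tokenized_incom["train"]["label"]))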

I have the notebook linked, in case anyone has a solution.

Here is the code.
training_args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_incom["train"],
    eval_dataset=tokenized_incom["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
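
The model itself isn't defined in this snippet; it's loaded the same way as in the tutorial, roughly like this (the checkpoint name and num_labels=3 are assumptions based on my three classes):

from transformers import AutoModelForSequenceClassification

# Assumed loading step, following the tutorial's pattern; the checkpoint
# name is a guess, and num_labels=3 matches my three classes
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)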


IndexError                                Traceback (most recent call last)
in <cell line: 24>()
     22 )
     23 
---> 24 trainer.train()

10 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   3057     if size_average is not None or reduce is not None:
   3058         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3059     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
   3060 
   3061 

IndexError: Target 742 is out of bounds.

Unique labels: {928, 384, 742}