Not enough values to unpack (expected 2, got 1) in training IMDB dataset

Hi everyone,
I am fine-tuning this model on the IMDB dataset: mrm8488/t5-base-finetuned-imdb-sentiment · Hugging Face
following this notebook: Fine-tuning a pretrained model — transformers 4.10.1 documentation

However, when I try to train it, I get this error:

***** Running training *****
Num examples = 10
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 6


ValueError                                Traceback (most recent call last)
in ()
      1
      2 train = MyDataset(small_train_dataset)
----> 3 trainer.train()

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1282                 tr_loss += self.training_step(model, inputs)
   1283             else:
-> 1284                 tr_loss += self.training_step(model, inputs)
   1285             self.current_flos += float(self.floating_point_ops(inputs))
   1286

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
   1787                 loss = self.compute_loss(model, inputs)
   1788         else:
-> 1789             loss = self.compute_loss(model, inputs)
   1790
   1791         if self.args.n_gpu > 1:

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
   1819         else:
   1820             labels = None
-> 1821         outputs = model(**inputs)
   1822         # Save past state if it exists
   1823         # TODO: this needs to be fixed and made cleaner later.

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
   1617             output_attentions=output_attentions,
   1618             output_hidden_states=output_hidden_states,
-> 1619             return_dict=return_dict,
   1620         )
   1621

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
    898             inputs_embeds = self.embed_tokens(input_ids)
    899
--> 900         batch_size, seq_length = input_shape
    901
    902         # required mask seq length can be calculated via length of past

ValueError: not enough values to unpack (expected 2, got 1)
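To make sense of the message, I reproduced the failing unpack from modeling_t5.py (line 900) in plain Python, no transformers needed. It suggests the model is receiving 1-D input_ids (a single unbatched sequence) where it expects a (batch_size, seq_length) shape:

```python
# Reproducing the failing line in plain Python: input_shape stands in for
# input_ids.size(). The shapes below are made up for illustration.
shape_of_one_example = (3,)  # one unbatched sequence of length 3
shape_of_a_batch = (1, 3)    # the same sequence with a batch dimension

try:
    batch_size, seq_length = shape_of_one_example
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)

batch_size, seq_length = shape_of_a_batch  # works: 1 example, 3 tokens
```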

I don’t understand why. It’s the IMDB dataset loaded with:

from datasets import load_dataset
raw_datasets = load_dataset("imdb")
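For reference, this is roughly what my MyDataset wrapper does (a stripped-down sketch with made-up token ids, not my real code, which follows the notebook): each __getitem__ returns one example as a dict, and the Trainer's collator is then supposed to stack those into (batch_size, seq_length) tensors.

```python
# Stripped-down sketch of the wrapper (made-up token ids for illustration):
class MyDataset:
    def __init__(self, encodings):
        # encodings: dict of equal-length lists, one entry per example
        self.encodings = encodings

    def __getitem__(self, idx):
        # One example per call; the Trainer's default collator stacks
        # these per-example fields into (batch_size, seq_length) tensors.
        return {key: values[idx] for key, values in self.encodings.items()}

    def __len__(self):
        return len(self.encodings["input_ids"])


encodings = {
    "input_ids": [[101, 2009, 2003], [101, 4937, 102]],
    "attention_mask": [[1, 1, 1], [1, 1, 1]],
    "labels": [0, 1],
}
ds = MyDataset(encodings)
print(len(ds))             # 2
print(ds[0]["input_ids"])  # [101, 2009, 2003]
```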

Can anyone advise on this issue, please?

Thank you in advance!

:innocent: :face_with_head_bandage: