Input type RuntimeError in fine-tuning HF blog tutorial

Hi folks, I’m working through the Hugging Face Wav2Vec2 fine-tuning blog tutorial here: Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers

On the training step I’m getting a RuntimeError that appears to indicate not all of my data is on the GPU. Unfortunately I’m not sure how to modify the tutorial-provided code to resolve the error and make sure everything ends up on the GPU where it should be. It looks like the model is on the GPU but the training input isn’t. I suspect there have been framework updates since the tutorial was written. Would appreciate some help!
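
For what it’s worth, here’s roughly how I confirmed that split (a quick diagnostic sketch of my own, assuming the model and trainer variables from the tutorial):

import torch

# The model's weights report cuda:0, so the model itself is on the GPU
print(next(model.parameters()).device)

# But the tensors in a collated training batch come back on the CPU
batch = next(iter(trainer.get_train_dataloader()))
for name, value in batch.items():
    if torch.is_tensor(value):
        print(name, value.device, value.dtype)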

The Colab notebook is located here, and I’ve pasted the error stack below; it occurs when I run trainer.train().

Thanks!

Error Stack:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-52-3435b262f1ae> in <module>()
----> 1 trainer.train()

13 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1314                         tr_loss_step = self.training_step(model, inputs)
   1315                 else:
-> 1316                     tr_loss_step = self.training_step(model, inputs)
   1317 
   1318                 if (

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
   1845         if self.use_amp:
   1846             with autocast():
-> 1847                 loss = self.compute_loss(model, inputs)
   1848         else:
   1849             loss = self.compute_loss(model, inputs)

/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
   1879         else:
   1880             labels = None
-> 1881         outputs = model(**inputs)
   1882         # Save past state if it exists
   1883         # TODO: this needs to be fixed and made cleaner later.

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, attention_mask, output_attentions, output_hidden_states, return_dict, labels)
   1497             output_attentions=output_attentions,
   1498             output_hidden_states=output_hidden_states,
-> 1499             return_dict=return_dict,
   1500         )
   1501 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, attention_mask, mask_time_indices, output_attentions, output_hidden_states, return_dict)
   1062         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
   1063 
-> 1064         extract_features = self.feature_extractor(input_values)
   1065         extract_features = extract_features.transpose(1, 2)
   1066 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values)
    335         hidden_states = input_values[:, None]
    336         for conv_layer in self.conv_layers:
--> 337             hidden_states = conv_layer(hidden_states)
    338 
    339         return hidden_states

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, hidden_states)
    256 
    257     def forward(self, hidden_states):
--> 258         hidden_states = self.conv(hidden_states)
    259         hidden_states = self.layer_norm(hidden_states)
    260         hidden_states = self.activation(hidden_states)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    300 
    301     def forward(self, input: Tensor) -> Tensor:
--> 302         return self._conv_forward(input, self.weight, self.bias)
    303 
    304 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    297                             _single(0), self.dilation, self.groups)
    298         return F.conv1d(input, weight, bias, self.stride,
--> 299                         self.padding, self.dilation, self.groups)
    300 
    301     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
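
If it helps, here’s what I believe is a minimal reproduction of the same mismatch (my own sketch, not code from the tutorial): a half-precision convolution on the GPU fed a float tensor that is still on the CPU.

import torch
import torch.nn as nn

conv = nn.Conv1d(1, 4, kernel_size=3).to("cuda").half()  # weights: torch.cuda.HalfTensor
x = torch.randn(1, 1, 16)                                # input: torch.FloatTensor (on CPU)
conv(x)  # raises the same "Input type ... and weight type ..." RuntimeError on this PyTorch version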

I ran into the same issue. The solution described in HuggingFace dataset error: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor is what I’m trying next. I’ll report back if I run into more issues after upgrading.
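
For anyone following along, my understanding is that the fix from that thread amounts to upgrading the libraries in the Colab runtime and then restarting it, along these lines (the exact packages to bump are my assumption; the thread just says to upgrade):

!pip install --upgrade transformers datasets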

What this error means is that the model has been moved to the GPU with model.to('cuda'), but the inputs to the model are still on the CPU. You need to move the inputs to CUDA as well. Note that, unlike modules, .to() on a tensor is not in-place, so you have to reassign: inputs = inputs.to('cuda').
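
For example (a minimal sketch of my own; the Trainer actually passes a dict of tensors around, so each tensor in the batch has to be moved):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # modules are moved in place

# Tensors are NOT moved in place: .to() returns a new tensor, so reassign
inputs = inputs.to(device)

# For a dict-style batch like the one the Trainer builds, move every tensor
batch = {k: (v.to(device) if torch.is_tensor(v) else v) for k, v in batch.items()}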