AttributeError: module 'torch' has no attribute 'chalf'

Hi,

I’m trying to run the following:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)

but I get this error:

AttributeError                            Traceback (most recent call last)
Cell In[71], line 3
      1 from transformers import Seq2SeqTrainer
----> 3 trainer = Seq2SeqTrainer(
      4     args=training_args,
      5     model=model,
      6     train_dataset=common_voice["train"],
      7     eval_dataset=common_voice["test"],
      8     data_collator=data_collator,
      9     compute_metrics=compute_metrics,
     10     tokenizer=processor.feature_extractor,
     11 )

File ~/anaconda3/envs/whisper-training/lib/python3.9/site-packages/transformers/trainer_seq2seq.py:57, in Seq2SeqTrainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
     43 def __init__(
     44     self,
     45     model: Union["PreTrainedModel", nn.Module] = None,
   (...)
     55     preprocess_logits_for_metrics: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None,
     56 ):
---> 57     super().__init__(
     58         model=model,
     59         args=args,
     60         data_collator=data_collator,
    ...
     79     }
     82 def _calc_scale_factor(tensor):
     83     converted = tensor.numpy() if not isinstance(tensor, np.ndarray) else tensor

AttributeError: module 'torch' has no attribute 'chalf'

This is the torch version I’m using:

Name: torch
Version: 1.10.0+cu113
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration

with Python 3.9.19.
Note: I’m trying to fine-tune whisper-large-v3.

Could you provide a Colab notebook to reproduce?

I’m following this notebook, the Hugging Face article on fine-tuning the Whisper model on the Common Voice dataset, but I changed the model to whisper-large-v3.

It looks like your PyTorch version (1.10) is outdated. Please consider upgrading PyTorch to the latest version with pip install --upgrade torch, as explained here: Start Locally | PyTorch.
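For context, torch.chalf (the complex-half dtype alias) was only added in later PyTorch releases (around 1.12, if I remember correctly), which is why 1.10 raises this AttributeError. A quick sanity check along these lines (my own sketch, not part of the notebook) prints the installed versions and confirms whether the attribute exists:

import torch
import transformers

# Print the installed versions; torch.chalf only exists in newer torch builds,
# so hasattr() tells us whether this installation is new enough.
print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)
print("transformers:", transformers.__version__)
print("has torch.chalf:", hasattr(torch, "chalf"))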

I tried with torch 2.3.9, but I get an error when loading the model at:

from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

Error: RuntimeError: Failed to import transformers.models.whisper.modeling_whisper because of the following error (look up to see its traceback):
Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'

It looks like there’s something wrong with your local environment. I’d recommend creating a fresh virtual environment with a new PyTorch and Transformers installation.

Do you think it could be a CUDA problem? The current version I’m using is CUDA 12.4 with torch 2.3.9.
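For reference, here’s the quick check I can run to see whether this torch build actually talks to the GPU (just my own sanity check, not from the fine-tuning notebook):

import torch

# Check which CUDA toolkit this torch build was compiled against and
# whether it can actually see the GPU through the installed driver.
print("torch:", torch.__version__)
print("compiled for CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))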