Bug in the train-with-pytorch-trainer?

There may be a bug in the “Fine-tune a pretrained model” tutorial.

From that link I selected “Open in Colab”, then chose “Runtime -> Run All” from the menu. Everything appears to work fine until I get to this section:

from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="test_trainer")

I get the following output:
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py:3553 in run_code
  3550                 elif async_:
  3551                     await eval(code_obj, self.user_global_ns, self.user_ns)
  3552                 else:
❱ 3553                     exec(code_obj, self.user_global_ns, self.user_ns)
  3554             finally:
  3555                 # Reset our crash handler in place
  3556                 sys.excepthook = old_excepthook

in <cell line: 3>:3
in __init__:114

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1400 in __post_init__
  1397         if (
  1398             self.framework == "pt"
  1399             and is_torch_available()
❱ 1400             and (self.device.type != "cuda")
  1401             and (self.device.type != "npu")
  1402             and (get_xla_device_type(self.device) != "GPU")
  1403             and (self.fp16 or self.fp16_full_eval)

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1857 in device
  1854         The device used by this process.
  1855         """
  1856         requires_backends(self, ["torch"])
❱ 1857         return self._setup_devices
  1858
  1859     @property
  1860     def n_gpu(self):

/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py:54 in __get__
    51         attr = "_cached_" + self.fget.__name__
    52         cached = getattr(obj, attr, None)
    53         if cached is None:
❱   54             cached = self.fget(obj)
    55             setattr(obj, attr, cached)
    56         return cached
    57

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py:1772 in _setup_devices
  1769         logger.info("PyTorch: setting up devices")
  1770         if not is_sagemaker_mp_enabled():
  1771             if not is_accelerate_available(min_version="0.20.1"):
❱ 1772                 raise ImportError(
  1773                     "Using the Trainer with PyTorch requires accelerate>=0.20.1: P
  1774                 )
  1775             AcceleratorState._reset_state(reset_partial_state=True)
ImportError: Using the Trainer with PyTorch requires accelerate>=0.20.1: Please run pip install transformers[torch] or pip install accelerate -U

Nothing runs after that. Is it user error or is there a problem?

Thanks

I’d recommend doing a quick search on the forums; this is a very popular question. Run pip install accelerate -U and restart your runtime.
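In a Colab notebook that would look something like the cell below (the install targets come straight from the error message above; the restart is needed because transformers has already been imported and won’t see the new accelerate until the runtime reloads):

!pip install -U accelerate
# or, as the error message also suggests, install the full extra instead:
!pip install "transformers[torch]"
# Then: Runtime -> Restart runtime, and re-run the notebook from the top.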

Thank you @muellerzr for the quick and accurate response. Sorry for the duplicate question; I had searched, but was clearly using the wrong search terms. I also noted that ‘evaluate’ had to be pip installed as well. As a suggestion to the Hugging Face team: maybe the tutorial code could be updated, or at least the surrounding text, to cover the accelerate and evaluate installation issues.
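For anyone else landing here: an install cell like the one below at the top of the notebook, before any transformers imports, covered both of the missing packages mentioned in this thread. Treat it as a sketch rather than the notebook’s official setup.

!pip install -U accelerate evaluate
# Restart the runtime (Runtime -> Restart runtime) after installing so the new packages are loaded.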