Hi there,
I’m trying to fine-tune the Donut model using this notebook found here.
I’m on an M1 Mac and running into this error while training my model:
File ~/miniconda3/envs/donut/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py:88, in TrainingBatchLoop.advance(self, kwargs)
84 if self.trainer.lightning_module.automatic_optimization:
85 optimizers = _get_active_optimizers(
86 self.trainer.optimizers, self.trainer.optimizer_frequencies, kwargs.get("batch_idx", 0)
87 )
---> 88 outputs = self.optimizer_loop.run(optimizers, kwargs)
...
--> 222 pixel_values = nn.functional.pad(pixel_values, pad_values)
223 if height % self.patch_size[0] != 0:
224 pad_values = (0, 0, 0, self.patch_size[0] - height % self.patch_size[0])
TypeError: Cannot convert a float64 Tensor to MPS as the MPS framework doesn't support float64. Please use float32 instead.
From this error, I gather that it’s occurring as a result of this method, which is defined within the DonutModelPLModel:
def configure_optimizers(self):
    # TODO add scheduler
    optimizer = torch.optim.Adam(self.parameters(), lr=self.config.get("lr"))
    return optimizer
I’m new to PyTorch and I’m not sure how to specify the precision for Adam. Any guidance would be most helpful!
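For context, since the error message says MPS only supports float32, one workaround I’m considering is casting tensors to float32 before they reach the model. A minimal sketch of what I mean (the tensor here is a stand-in I made up, not the actual pixel values from the notebook):

```python
import torch

# Stand-in for a batch of pixel values that arrives as float64
# (e.g. from a numpy array); not the notebook's actual data.
pixel_values = torch.rand(1, 3, 224, 224, dtype=torch.float64)

# MPS doesn't support float64 (per the error), so cast to float32
# before moving to the device / running the forward pass.
pixel_values = pixel_values.to(torch.float32)

print(pixel_values.dtype)  # torch.float32
```

But I’m not sure whether this is the right place to intervene or whether the precision should be set on the optimizer itself.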
Thanks