Finetuning TrOCR on the IAM dataset

I’m fine-tuning the TrOCR model using the Seq2SeqTrainer API, taking this notebook as a reference.

My training has completed, but no model was saved. I’ve attached an image of the contents of the output directory.

Also, I’ve attached my training notebook for reference.

Hi,

The error you’re getting is:

OSError: Can't load feature extractor for './model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './model' is the correct path to a directory containing a preprocessor_config.json file

That’s because the Seq2SeqTrainer only saves the model files (namely the weights as a pytorch_model.bin file and the configuration as a config.json file).
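For example, if the model was saved via the trainer (a minimal sketch; "./model" is taken from your error message and trainer is assumed to be your Seq2SeqTrainer instance):

# Writes config.json and pytorch_model.bin to the output directory,
# but no preprocessor_config.json or tokenizer files.
trainer.save_model("./model")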

However, you’re also loading the processor (TrOCRProcessor) from that directory. A processor combines a feature extractor (for the vision modality) and a tokenizer (for the text modality), hence it requires a preprocessor_config.json file as well as the tokenizer files (vocab.json and merges.txt for this checkpoint). It seems that you’re just using this one:

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
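You can check the two parts it wraps directly (a quick sketch; the feature_extractor attribute name matches older transformers versions, newer ones expose it as image_processor):

print(processor.feature_extractor)  # the feature extractor for the vision modality
print(processor.tokenizer)          # the tokenizer for the text modality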

Hence, you can simply load that same processor again when performing inference. You can also save its files locally using save_pretrained. You can see the required files here: https://huggingface.co/microsoft/trocr-base-handwritten/tree/main
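Putting it together, inference could look roughly like this (a sketch: the fine-tuned model is assumed to be in "./model", and "line.png" is a placeholder path to a handwriting-line image):

from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Load the fine-tuned weights from the local output directory ...
model = VisionEncoderDecoderModel.from_pretrained("./model")
# ... and the processor from the original checkpoint on the Hub.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")

# Optionally save the processor files (preprocessor_config.json + tokenizer files)
# next to the model, so "./model" can be loaded on its own later.
processor.save_pretrained("./model")

image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])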