I am also experiencing this error. I can load the tokenizer from the same checkpoint as the model without any problem, but loading the model raises `OSError: Unable to load weights from pytorch checkpoint file`.
The checkpoint was created under PyTorch 1.9; my current PyTorch is 1.10. The checkpoint sits on a network drive. If I run the same code with the checkpoint on a local drive there is no problem; it's only when operating from the network drive that loading fails.
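Since the local-drive case works, a workaround I am considering is copying the checkpoint to a local temp directory before loading, so PyTorch reads from local disk rather than the network share. This is only a sketch; the helper name and the dummy file below are my own, and with a real checkpoint you would pass the returned local path to `from_pretrained` or `torch.load`:

```python
import shutil
import tempfile
from pathlib import Path

def localize_checkpoint(network_path: str) -> Path:
    """Copy a checkpoint file from a network drive to a local temp
    directory, so subsequent loads read from local disk."""
    src = Path(network_path)
    local_dir = Path(tempfile.mkdtemp(prefix="ckpt_"))
    dst = local_dir / src.name
    shutil.copy2(src, dst)  # preserves metadata; raises if src is unreadable
    return dst

# Demonstration with a stand-in file (replace with the real network path):
demo = Path(tempfile.mkdtemp()) / "pytorch_model.bin"
demo.write_bytes(b"\x80\x02")  # dummy bytes, not a real checkpoint
local_copy = localize_checkpoint(str(demo))
print(local_copy.read_bytes() == demo.read_bytes())
```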
Because the tokenizer is constructed without problem from this same checkpoint, I wondered whether the tokenizer and the model handle OS file types differently. One of the directories on the path had a space in its name, so I relocated the checkpoint to a path with no spaces. That seemed to fix the problem, but only initially: when I tried again a day later the problem reappeared.
Still, I suspect there is a difference in how the OS, path, or file is handled between the tokenizer's and the model's use of the checkpoint.