Safetensors format issue

I have a training script and an inference script that I have been using for a long time without any issues. This time, however, the model got saved in the .safetensors format. I tried upgrading the transformers package but still received the following error while loading the model.

Code I am using:

model_ckpt = 'path/to/finetuned/model'
extractor = AutoFeatureExtractor.from_pretrained(model_ckpt)
model = AutoModel.from_pretrained(model_ckpt)
hidden_dim = model.config.hidden_size

Here’s the stack trace I receive when I try to load my fine-tuned model using AutoModel.from_pretrained:

NameError                                 Traceback (most recent call last)
Cell In[54], line 7
      5 model_ckpt = '/Users/aayush/PycharmProjects/cortex-swin/model'
      6 extractor = AutoFeatureExtractor.from_pretrained(model_ckpt)
----> 7 model = AutoModel.from_pretrained(model_ckpt)
      8 hidden_dim = model.config.hidden_size

File ~/miniconda3/envs/cortex-swin/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    461 elif type(config) in cls._model_mapping.keys():
    462     model_class = _get_model_class(config, cls._model_mapping)
--> 463     return model_class.from_pretrained(
    464         pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    465     )
    466 raise ValueError(
    467     f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
    468     f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
    469 )

File ~/miniconda3/envs/cortex-swin/lib/python3.10/site-packages/transformers/modeling_utils.py:2184, in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2182 # Save the model
   2183 for shard_file, shard in shards.items():
-> 2184     if safe_serialization:
   2185         # At some point we will need to deal better with save_function (used for TPU and other distributed
   2186         # joyfulness), but for now this enough.
   2187         safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
   2188     else:

File ~/miniconda3/envs/cortex-swin/lib/python3.10/site-packages/transformers/modeling_utils.py:386, in load_state_dict(checkpoint_file)
    377 def load_sharded_checkpoint(model, folder, strict=True, prefer_safe=True):
    378     """
    379     This is the same as
    380     [`torch.nn.Module.load_state_dict`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)
    381     but for a sharded checkpoint.
    382 
    383     This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being
    384     loaded in the model.
    385 
--> 386     Args:
    387         model (`torch.nn.Module`): The model in which to load the checkpoint.
    388         folder (`str` or `os.PathLike`): A path to a folder containing the sharded checkpoint.
    389         strict (`bool`, *optional`, defaults to `True`):
    390             Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint.
    391         prefer_safe (`bool`, *optional*, defaults to `False`)
    392             If both safetensors and PyTorch save files are present in checkpoint and `prefer_safe` is True, the
    393             safetensors files will be loaded. Otherwise, PyTorch files are always loaded when possible.
    394 
    395     Returns:
    396         `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields
    397             - `missing_keys` is a list of str containing the missing keys
    398             - `unexpected_keys` is a list of str containing the unexpected keys
    399     """
    400     # Load the index
    401     index_file = os.path.join(folder, WEIGHTS_INDEX_NAME)

NameError: name 'safe_open' is not defined

This was a version issue; upgrading to the latest transformers package version solved it.

Cool, glad it solved your issue!