Error saving quantized model

I get the following error trying to save a quantized model. Can anyone help? Thanks.

# quantize model
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
quantized_model.save_pretrained(quantized_dir)


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      3     model, {torch.nn.Linear}, dtype=torch.qint8
      4 )
----> 5 quantized_model.save_pretrained(quantized_dir)

1 frames
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in shard_checkpoint(state_dict, max_shard_size)
    291
    292     for key, weight in state_dict.items():
--> 293         weight_size = weight.numel() * dtype_byte_size(weight.dtype)
    294
    295         # If this weight is going to tip up over the maximal size, we split.

AttributeError: 'torch.dtype' object has no attribute 'numel'

I found the solution here.