I am using the function below to upload my custom CLIP model. The problem is that my vision and text encoders have different configs, so the config written by the second save_pretrained call overwrites the one from the first. The variant argument only tags the weight files (e.g. model.vision_encoder.safetensors), not config.json.
Any tips on the “correct” way to save and upload this model (fingers crossed, without using the git functionality)?
Side question: as far as I can tell, it also uploads a README.md file. Can I avoid that?
def _upload_model_to_hub(
    vision_encoder: models.TinyCLIPVisionEncoder,
    text_encoder: models.TinyCLIPTextEncoder,
):
    # Both calls write config.json into config.MODEL_PATH, so the second
    # overwrites the first; only the weight files get the variant tag.
    vision_encoder.save_pretrained(
        str(config.MODEL_PATH),
        variant="vision_encoder",
        safe_serialization=True,
        push_to_hub=True,
        repo_id="debug-clip-model",
    )
    text_encoder.save_pretrained(
        str(config.MODEL_PATH),
        variant="text_encoder",
        safe_serialization=True,
        push_to_hub=True,
        repo_id="debug-clip-model",
    )
If it helps, this is the part of save_pretrained (in transformers/modeling_utils.py) that I think is causing the issue. As you can see, there is no option to attach the variant argument to the config:
    # Save the config
    if is_main_process:
        if not _hf_peft_config_loaded:
            model_to_save.config.save_pretrained(save_directory)
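In case it helps frame the question, this is the workaround I've been considering: save each encoder into its own local subfolder (so the two config.json files never collide) and push the whole directory in one commit with huggingface_hub's upload_folder, skipping push_to_hub entirely. This is only a sketch, not something I'm sure is idiomatic; HfApi, create_repo and upload_folder come from huggingface_hub, while models and config.MODEL_PATH are my own code from above.

import os

from huggingface_hub import HfApi

import config
import models


def _upload_model_to_hub(
    vision_encoder: models.TinyCLIPVisionEncoder,
    text_encoder: models.TinyCLIPTextEncoder,
):
    # Each encoder gets its own subfolder, so each keeps its own config.json.
    vision_encoder.save_pretrained(
        os.path.join(str(config.MODEL_PATH), "vision_encoder"),
        safe_serialization=True,
    )
    text_encoder.save_pretrained(
        os.path.join(str(config.MODEL_PATH), "text_encoder"),
        safe_serialization=True,
    )

    # Push the whole local tree in a single commit over the HTTP API:
    # no git checkout, and nothing outside the folder gets uploaded,
    # so no auto-generated README.md from my side.
    api = HfApi()
    api.create_repo("debug-clip-model", repo_type="model", exist_ok=True)
    api.upload_folder(
        folder_path=str(config.MODEL_PATH),
        repo_id="debug-clip-model",
        repo_type="model",
    )

Each encoder should then be loadable with from_pretrained(..., subfolder="vision_encoder") / from_pretrained(..., subfolder="text_encoder"), if I understand the subfolder argument correctly. Is this a sensible direction, or is there a cleaner way that keeps the variant mechanism?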