I wanted to know how to push adapter fusion layers to the Hub after training. Adapters trained on a task can simply be pushed with push_adapter_to_hub(), as described in the Hugging Face docs, but what about adapter fusion layers? Can anyone guide me on the correct way to upload adapter fusion layers to my Hugging Face Hub account?
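For context, the part that already works for me looks roughly like the sketch below; the repo and adapter names are placeholders, and the exact arguments of push_adapter_to_hub() may differ between adapter-transformers / Adapters versions.

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("my_task")        # placeholder adapter name
model.train_adapter("my_task")
# ... training loop ...
model.push_adapter_to_hub(
    "my-username/bert-adapter-my-task",  # target Hub repo (placeholder)
    "my_task",                           # name of the trained adapter
)
```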
If the Adapters and PEFT libraries' push methods are not applicable to your model, you probably have to merge the adapters into the base model once and then convert the result to the HF native format.
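For example, with a PEFT (LoRA-style) adapter that route could look like the sketch below. This is illustrative only: adapter fusion from the Adapters library has no direct merge_and_unload() equivalent, and the repo names are placeholders.

```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
peft_model = PeftModel.from_pretrained(base, "my-username/my-lora-adapter")  # placeholder repo

merged = peft_model.merge_and_unload()              # fold adapter weights into the base model
merged.push_to_hub("my-username/my-merged-model")   # now a plain HF-native checkpoint
```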
Since it is okay to upload a model that is not in HF native format, why not just upload it as is?
It won't be usable from the Inference API, but it should not be a problem for use from Spaces or Colab.
So, on a Hugging Face pre-trained model, I was able to use adapter fusion, and I even trained a model with it. The point is that I want to save or upload this model (the adapter fusion layers) somewhere, so that it can be used by me or anyone else in the NLP community later on for research purposes. Is there any way to upload fusion layers to the Hub? This would let me release the models to the entire community.
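What I can do so far is save the fusion weights locally, roughly like the sketch below, assuming the Adapters library's save_adapter_fusion() / save_all_adapters() methods (names may differ slightly between versions); the paths and adapter names are placeholders.

```python
# save the fusion layer over three (placeholder) adapters, plus the adapters themselves
model.save_adapter_fusion("./fusion_ckpt", "adapter_a,adapter_b,adapter_c")
model.save_all_adapters("./fusion_ckpt")

# later, or on another machine:
# model.load_adapter_fusion("./fusion_ckpt")
```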
I believe that files under 50 GB can be uploaded using the following method, although in a Windows environment you need to take care if a file is over 5 GB.
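Roughly something like this with huggingface_hub; the repo id and local path are placeholders.

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in (huggingface-cli login) or pass token=...
api.create_repo("my-username/bert-adapter-fusion", exist_ok=True)  # placeholder repo id
api.upload_folder(
    folder_path="./fusion_ckpt",   # local directory with the saved fusion weights
    repo_id="my-username/bert-adapter-fusion",
    repo_type="model",
)
```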
As long as you have written the README.md (model card) properly, anyone who is looking for the model will be able to find it.