I have a custom BERT-like model (with modified attention) that I pretrained with PyTorch. Data preparation was done with a Hugging Face tokenizer. Now I want to integrate this PyTorch model into the Hugging Face ecosystem so it can be used in pipelines and for finetuning as a PreTrainedModel. How do I generate the necessary config files?
There might be a better way, but I would:
- subclass PreTrainedModel with your own class, along with a matching PretrainedConfig subclass that holds your hyperparameters (this is where config.json comes from)
- load your trained model weights into this new class
- run yourmodel.save_pretrained() to save the model weights plus the config.json for this class
- now you can do YourCustomModel.from_pretrained(), as the class can now use those HF methods
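The steps above can be sketched roughly as follows. This is a minimal skeleton, not your actual model: `MyBertConfig`, `MyBertModel`, and the placeholder `nn.Linear` encoder are invented names you would replace with your real architecture and hyperparameters.

```python
import tempfile

import torch
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel


class MyBertConfig(PretrainedConfig):
    # The config subclass is what save_pretrained() serializes to config.json.
    model_type = "my_bert"  # arbitrary identifier for your architecture

    def __init__(self, hidden_size=768, num_attention_heads=12, **kwargs):
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        super().__init__(**kwargs)


class MyBertModel(PreTrainedModel):
    config_class = MyBertConfig  # ties the model to its config for from_pretrained()

    def __init__(self, config):
        super().__init__(config)
        # Placeholder: swap in your real modified-attention encoder here.
        self.encoder = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, inputs):
        return self.encoder(inputs)


config = MyBertConfig()
model = MyBertModel(config)

# At this point you would copy your pretrained weights in, e.g.:
# model.load_state_dict(torch.load("my_pretrained_weights.pt"))

with tempfile.TemporaryDirectory() as save_dir:
    model.save_pretrained(save_dir)   # writes config.json + weights
    reloaded = MyBertModel.from_pretrained(save_dir)
```

After `save_pretrained()`, the directory contains the generated config.json alongside the weights, and `from_pretrained()` reconstructs the model from both, so the usual HF loading machinery works with your custom class.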