Hi all,
I'm trying to learn how to use the HuggingFace Hub by fine-tuning GPT2 with additional text data that I have, with the intent of putting it on the Hub and then using the Inference API to query it remotely.
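For context, this is roughly how I plan to query it once it's working (a minimal sketch; the token is a placeholder and the prompt is just an example):

```python
import requests

# Query the hosted model through the Inference API (token is a placeholder)
API_URL = "https://api-inference.huggingface.co/models/soudainana/m3logmodel"
headers = {"Authorization": "Bearer hf_xxxxxxxx"}

response = requests.post(API_URL, headers=headers, json={"inputs": "Some starter text"})
print(response.json())
```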
I used Transformers and Torch to fine-tune the GPT2 model from the HuggingFace Hub and created my .pt file. I made a new model repo on the HF Hub - ok, cool - uploaded the .pt file, and filled out the parts of the card that needed to be filled out. Then I realized I was supposed to have a config.json, which I didn't, so I made one with config.to_json_file() and uploaded that too. Everything still seemed to be OK, and after a few moments my widget started working... and then I ran into this error:
Could not load model soudainana/m3logmodel with any of the following classes: (<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>, <class 'transformers.models.gpt2.modeling_tf_gpt2.TFGPT2LMHeadModel'>).
Now I am vexed, because the model was created using GPT2LMHeadModel.from_pretrained('gpt2'), so I'm not sure why I'm getting this error or how to fix it.
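For reference, here's roughly the create-and-save flow I used (a sketch from memory; the .pt file name is just what I used locally):

```python
import torch
from transformers import GPT2LMHeadModel

# Start from the pretrained GPT2 weights
model = GPT2LMHeadModel.from_pretrained("gpt2")

# ... fine-tuning loop on my text data goes here ...

# Save the fine-tuned weights as the .pt file I uploaded to the repo
torch.save(model.state_dict(), "m3logmodel.pt")

# Generate the config.json the Hub was asking for
model.config.to_json_file("config.json")
```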
I can verify that the model works when run locally (roughly the check shown below). I'm not a paying customer yet; I'm trying to test whether this is worth it before subscribing. I based the code I used to fine-tune the model on this post: How to Fine-Tune GPT-2 for Text Generation | by François St-Amant | Towards Data Science
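Here's a sketch of that local check (file name matches the .pt above; prompt is just an example):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Load the fine-tuned weights back into the architecture
model.load_state_dict(torch.load("m3logmodel.pt"))
model.eval()

# Generate a quick sample to confirm the weights behave as expected
inputs = tokenizer("Some starter text", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```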
Thanks for any insights you can offer.