Fine-tuned DistilBERT won't load in Spaces (AssertionError)

I fine-tuned a DistilBERT model and uploaded it to the Hub and my Space in March 2022, and I haven’t updated the code at all since then. Everything was working fine until recently, when my Space started throwing this runtime error:

Runtime error
Space not ready. Reason: Error, exitCode: 1, message: None

Container logs:

Fetching model from: https://huggingface.co/m-newhauser/distilbert-political-tweets
Traceback (most recent call last):
  File "app.py", line 8, in <module>
    interface = gr.Interface.load("huggingface/m-newhauser/distilbert-political-tweets",
  File "/home/user/.local/lib/python3.8/site-packages/gradio/interface.py", line 73, in load
    interface_info = load_interface(name, src, api_key, alias)
  File "/home/user/.local/lib/python3.8/site-packages/gradio/external.py", line 270, in load_interface
    interface_info = repos[src](name, api_key, alias)
  File "/home/user/.local/lib/python3.8/site-packages/gradio/external.py", line 22, in get_huggingface_interface
    assert response.status_code == 200, "Invalid model name or src"
AssertionError: Invalid model name or src
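
For what it’s worth, the failing check can be reproduced outside of gradio. As far as I can tell from the traceback, the loader simply sends a GET request to the hosted Inference API and asserts on the status code (a minimal sketch of that check; the exact endpoint is my best guess from gradio’s external.py):

import requests

# The endpoint gradio's loader appears to query before building the interface
url = "https://api-inference.huggingface.co/models/m-newhauser/distilbert-political-tweets"
response = requests.get(url)
print(response.status_code)  # anything other than 200 triggers the AssertionError
print(response.text)         # the response body usually says why the model was rejected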

System info
transformers version: v4.16.2
Platform: Linux-3.10.0-1160.62.1.el7.x86_64-x86_64-with-glibc2.17
Python version: 3.7.13
PyTorch version (GPU?): 1.11.0+cu113

Interestingly, when I try to load my model from the Hub in a Colab notebook, this code executes perfectly:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The base checkpoint loads without issue...
model_base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
# ...and so does my fine-tuned checkpoint from the Hub
model_finetuned = AutoModelForSequenceClassification.from_pretrained(
    "m-newhauser/distilbert-political-tweets", num_labels=2
)

I haven’t changed any of the code since I uploaded the model, everything had been working fine until now, and the Inference API on the model page was also returning predictions.

Please let me know if I’ve left out any information!

Hello @m-newhauser, sorry for the delay.
There is indeed an issue with your model: you added library_name: pytorch to the README, which is not supported. Valid values are either nothing (transformers by default), transformers, or one of the libraries listed under api-inference-community/docker_images in the huggingface/api-inference-community repository on GitHub.
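
For example, the YAML metadata block at the top of your model card README would look something like this (a minimal sketch; any other metadata fields you already have can stay as they are):

---
library_name: transformers
---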
As for Colab, it might be working because the model was previously cached.

I changed library_name in my README file to transformers and everything works again. I would never have figured that one out. Thanks, @chris-rannou!
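
In case it helps anyone else: I made the change by hand in the Hub editor, but it should also be possible to do it programmatically with huggingface_hub (a sketch, assuming a version of the library recent enough to provide metadata_update):

from huggingface_hub import metadata_update

# Overwrite the unsupported library_name value in the model card metadata
metadata_update(
    "m-newhauser/distilbert-political-tweets",
    {"library_name": "transformers"},
    overwrite=True,  # needed because library_name is already set
)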