Model loading gets stuck when calling "from_pretrained"

Hi, when I use Grounding DINO from Hugging Face, model loading always gets stuck when calling “from_pretrained(model_id)”. The issue started today; I had not run into it before, and the loading process used to be normal.

I tried switching between different versions of the “transformers” library and deleted the locally downloaded files several times, but none of that worked. I also tried the same code on another machine, and this weird issue does not happen there. Could you please give me some suggestions to solve this?

The code I used is from the official repo:

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-base"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Download and load the processor and the model from the Hub; this is where it gets stuck.
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

# Example image from the COCO val2017 set.
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

It is a really strange problem. I don’t see anything particularly odd in your code or in the repo.
The last commit to this model repo was in May, and I don’t think it has anything to do with the problem.
Do you get the same symptoms with other models?
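For example (a minimal sketch; “bert-base-uncased” is just an arbitrary small public checkpoint, unrelated to Grounding DINO), you could check whether loading a different model also hangs:

from transformers import AutoModel, AutoTokenizer

# Any small public checkpoint works here; this one is only an example.
test_id = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(test_id)
model = AutoModel.from_pretrained(test_id)
print("loaded", test_id)  # if this also hangs, the problem is not specific to Grounding DINO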

I’m getting the exact same issue with Grounding DINO, did you find a solution?


This will fix the mild symptoms…

pip install -U huggingface_hub
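If the upgrade alone doesn’t help, turning on verbose logging before calling “from_pretrained” can show whether the hang happens during the Hub download or while the model is being built (a minimal sketch, assuming huggingface_hub and transformers versions recent enough to expose these logging helpers):

from huggingface_hub import logging as hub_logging
from transformers.utils import logging as hf_logging

# Debug-level logs from both the Hub client and transformers,
# so the last message printed before the hang points at the stuck step.
hub_logging.set_verbosity_debug()
hf_logging.set_verbosity_debug()

from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-base"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)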

Hm, that didn’t seem to work; I’m still getting stuck.

Sample code:

import torch
from transformers import AutoModelForMaskGeneration, AutoModelForZeroShotObjectDetection, AutoProcessor

print('starting')
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-base")
dino_model = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-base").to('cuda')
print('done')

I am using Poetry for package management, and huggingface_hub was upgraded to version 0.28.1.


I tried it now and got this error…

What are your torch and transformers versions? Mine are:

torch: 2.4.0+cu121
transformers: 4.48.1
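For reference, a quick way to print the relevant versions in one go (a small sketch; huggingface_hub is included since it was mentioned above):

import torch
import transformers
import huggingface_hub

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("huggingface_hub:", huggingface_hub.__version__)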

EDIT: HF doesn’t let me reply because I’m a new user. Anyhow, the PR fixing the issue was merged into the main branch :thinking:


This.

torch                     2.4.0+cu124
transformers              4.48.3

And I found a similar issue.

Could there be a problem with this class…? => AutoModelForZeroShotObjectDetection
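One way to rule that out would be to skip the Auto mapping and load the concrete Grounding DINO classes directly (a sketch, assuming a transformers version that already ships GroundingDinoProcessor and GroundingDinoForObjectDetection, i.e. 4.40 or newer):

from transformers import GroundingDinoProcessor, GroundingDinoForObjectDetection

model_id = "IDEA-Research/grounding-dino-base"

# If these load fine while the Auto* classes hang, the Auto mapping would be the suspect.
processor = GroundingDinoProcessor.from_pretrained(model_id)
model = GroundingDinoForObjectDetection.from_pretrained(model_id)
print("loaded without the Auto classes")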

Hi, I solved this issue by re-installing my conda environment from scratch. I guess it might be due to some library conflicts.


I’m getting the exact same problem :frowning: May I know which versions of transformers and pytorch you are using?
