Hi, everyone. I am new to Hugging Face and my code is as follows:
from transformers import pipeline

# force_download=True re-fetches the weights on every run; resume_download is
# deprecated in recent huggingface_hub versions, so I dropped it
pipe = pipeline("text-generation", model="microsoft/phi-2", force_download=True)
print(pipe("what color is apple?"))
It only outputs:
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
and then exits. Based on previous runs (not shown here), the model seems to have downloaded successfully, but the script still fails to run properly. What does this output mean? I'd appreciate any attention and help. Thanks.
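One guess I'm looking into: a silent exit during "Loading checkpoint shards" can happen when the process is killed by the OS for running out of memory (phi-2 needs very roughly 11 GB of RAM in float32, about half that in float16). Here is a minimal sketch I could run to check available RAM before loading; it assumes Linux, and `available_ram_gb` is just my own helper name:

```python
def available_ram_gb(meminfo_path="/proc/meminfo"):
    """Return available system RAM in GB by parsing /proc/meminfo (Linux only)."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kb = int(line.split()[1])  # /proc/meminfo values are in kB
                return kb / 1024 ** 2
    return None  # field missing (e.g. non-Linux or very old kernel)

if __name__ == "__main__":
    gb = available_ram_gb()
    if gb is None:
        print("Could not read MemAvailable")
    else:
        print(f"Available RAM: {gb:.1f} GB")
```

If memory turns out to be the problem, I understand one common workaround is to pass `torch_dtype=torch.float16` to the pipeline, but I haven't confirmed that fixes my case.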