BERT Model - OSError

When running this BERT model, it raises an OSError. The model is "nlptown/bert-base-multilingual-uncased-sentiment".

Looking at the 2 recommended solutions, I'm not 100% sure whether they both apply. For the second recommended solution, the file I see missing from the uploaded model on Hugging Face is "model.ckpt", but I'm not sure whether that matters.

Below are the code and the error message:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import requests
from bs4 import BeautifulSoup
import re

## Instantiate Model
# The pre-trained BERT sentiment model used here is from Hugging Face:
# https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment

tokenizer = AutoTokenizer.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')  
model = AutoModelForSequenceClassification.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')


## Encode and Calculate Sentiment (entering a test string to check the sentiment score)

tokens = tokenizer.encode('I hated this, absolutely the worst', return_tensors='pt')
result = model(tokens)

print(result)

Error message below:

"OSError: Can't load the model for 'nlptown/bert-base-multilingual-uncased-sentiment'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'nlptown/bert-base-multilingual-uncased-sentiment' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack."
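For reference, once the model does load, `result.logits` holds raw scores over the five star ratings (1-5) that this model predicts. A minimal sketch of turning logits into a star score, using made-up logits since the real values only exist after `model(tokens)` succeeds:

```python
import torch

# Hypothetical logits standing in for result.logits; the real tensor
# comes from model(tokens) once loading succeeds.
logits = torch.tensor([[2.4, 1.1, -0.3, -1.6, -2.0]])

probs = torch.softmax(logits, dim=1)                   # probabilities over 1-5 stars
stars = int(torch.argmax(probs, dim=1).item()) + 1     # class index 0 -> 1 star

print(stars)  # 1 (the most negative rating for these logits)
```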


Has any solution been found for this issue? I am facing the same problem.

Hey folks :wave:

I can’t reproduce the issue you described (see this colab).

My suggestions include:

  1. Make sure you don't have a local folder with the same name as the model ID ('nlptown/bert-base-multilingual-uncased-sentiment')
  2. Create a fresh Python environment and install a recent version of transformers (e.g. `pip install -U transformers`)
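Suggestion 1 can be checked programmatically. A small sketch (the model ID is the one from this thread; the helper name is mine):

```python
import os

MODEL_ID = "nlptown/bert-base-multilingual-uncased-sentiment"

def shadows_hub_repo(model_id: str) -> bool:
    """Return True if a local directory with the same name as the Hub
    repo ID exists. from_pretrained() treats an existing path as a local
    model folder, so a same-named directory that lacks pytorch_model.bin
    (or tf_model.h5 / flax_model.msgpack) triggers exactly this OSError."""
    return os.path.isdir(model_id)

if shadows_hub_repo(MODEL_ID):
    print(f"Rename or remove the local '{MODEL_ID}' folder, "
          "or load the model from the Hub with force_download=True.")
```

If the local folder isn't the culprit, passing `force_download=True` to `from_pretrained()` re-fetches the files from the Hub, which can also recover from a partially downloaded or corrupted cache.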