I am interested in using pre-trained models from Hugging Face for named entity recognition (NER) without any further training or testing of the model. The only usage information on the model's Hugging Face page is the following:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
I tried the following code, but I get a tensor output instead of class labels for each named entity:
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

text = "my text for named entity recognition here."
input_ids = torch.tensor(tokenizer.encode(text, padding=True, truncation=True, max_length=50, add_special_tokens=True)).unsqueeze(0)
with torch.no_grad():
    output = model(input_ids, output_attentions=True)
Can anyone suggest what I am doing wrong here? It would be good to have a short tutorial on how to use a pre-trained model for NER (without any fine-tuning).
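For context, my understanding is that `AutoModel` loads only the base encoder, so its output is hidden states, not per-token class scores; getting labels would need a token-classification head producing per-token logits, plus an argmax mapped through a label dictionary. Below is a minimal sketch of that final post-processing step with made-up logits and a hypothetical `id2label` map (a real fine-tuned checkpoint would ship its own label map in its config):

```python
# Hypothetical per-token logits for 3 tokens over 3 NER labels.
# In practice these would come from a token-classification model,
# not from the base AutoModel used above.
logits = [
    [2.1, 0.3, -1.0],   # token 1 -> highest score at index 0
    [0.2, 3.4, 0.1],    # token 2 -> highest score at index 1
    [0.0, 0.5, 2.8],    # token 3 -> highest score at index 2
]

# Hypothetical label map (assumption for illustration only).
id2label = {0: "O", 1: "B-ENT", 2: "I-ENT"}

def logits_to_labels(logits, id2label):
    """Pick the highest-scoring label id for each token and map it to its name."""
    return [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]

print(logits_to_labels(logits, id2label))  # ['O', 'B-ENT', 'I-ENT']
```

Is this roughly the step that a model loaded with `AutoModel` is missing, or am I misunderstanding the outputs?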