Error using pretrained tokenizer for Spanish biomedical NER

Hi everyone, I’m working on NLP with Spanish electronic health records. I’d like to know what I’m doing wrong when I use the tokenizer: the variable it returns contains very strange-looking tokens. Can you give me a hand? Thanks a lot!
I’m using this amazing model: PlanTL-GOB-ES/bsc-bio-ehr-es · Hugging Face

This is the code:

from transformers import AutoModel, AutoTokenizer

# Local directory holding the downloaded PlanTL-GOB-ES/bsc-bio-ehr-es checkpoint
BERT_PATH = "bsc-bio-es"

# AutoModel picks the architecture from the checkpoint's config
bert = AutoModel.from_pretrained(BERT_PATH)
parameters = bert.num_parameters()

tokenizer = AutoTokenizer.from_pretrained(BERT_PATH)

text = [
    "paciente de 84 años de edad presenta hépatisis con cáncer persistente",
    "Hola estoy es una prueba para determinar como este modelo está haciendo la tokenización",
]

# This is the call whose output looks strange to me
tokens = tokenizer.tokenize(text)
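
In case it helps, this is a minimal sketch of how I understand the tokenizer should be called on a list of sentences (assuming the same local bsc-bio-es directory as above); please correct me if this usage is wrong:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bsc-bio-es")

sentences = [
    "paciente de 84 años de edad presenta hépatisis con cáncer persistente",
    "Hola estoy es una prueba para determinar como este modelo está haciendo la tokenización",
]

# tokenize() expects a single string, so call it once per sentence
for sentence in sentences:
    print(tokenizer.tokenize(sentence))

# For a batch, call the tokenizer itself on the list of strings
encoded = tokenizer(sentences, padding=True, return_tensors="pt")
print(encoded["input_ids"].shape)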