I want to extract the pretrained embeddings from the BERT base model using the code below. The problem is that I am expecting a 12x768 embedding for every token, where each of the 12 rows corresponds to one of the model's layers, but I am getting only a single 1x768 vector per token.
from transformers import pipeline, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
extractor = pipeline('feature-extraction', model='bert-base-uncased', tokenizer=tokenizer)
data = extractor("This is a loooong word")
print(data)
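For reference, the `feature-extraction` pipeline returns only the final hidden layer. To get per-layer embeddings, one option is to call the model directly with `output_hidden_states=True`, which makes it return a tuple of hidden states: the embedding-layer output plus one tensor per transformer layer (13 total for `bert-base-uncased`), each of shape `(batch, seq_len, 768)`. A minimal sketch, assuming the Hugging Face `transformers` library with PyTorch installed:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("This is a loooong word", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of 13 tensors:
# the embedding layer output plus one per transformer layer.
hidden_states = torch.stack(outputs.hidden_states)   # (13, 1, seq_len, 768)
per_token = hidden_states.squeeze(1).permute(1, 0, 2)  # (seq_len, 13, 768)
print(per_token.shape)
```

Note that `per_token` has 13 layer rows per token, not 12, because the first entry is the embedding-layer output before any transformer layer; slice it off (`per_token[:, 1:, :]`) if you want only the 12 transformer layers.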