Get output embeddings out of a transformer model

I found this in the docs:

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

So with this option enabled, I expected to get a tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer), each of shape (batch_size, sequence_length, hidden_size). But after passing my input to the model, the tensor for the embedding output has shape (1, hidden_size) instead of (1, seq_length, hidden_size).

Note: the batch size is 1.
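For context, here is roughly what I'm doing (using bert-base-uncased as an example; my actual model may differ). With this code I would expect every element of `hidden_states`, including the first one (the embedding output), to have shape `(1, seq_length, hidden_size)`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Single sentence -> batch_size = 1
inputs = tokenizer("Hello world", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: embedding output first, then one tensor per layer
print(len(outputs.hidden_states))          # num_hidden_layers + 1
print(outputs.hidden_states[0].shape)      # embedding output
print(outputs.hidden_states[-1].shape)     # last layer output
```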