Should we add position embeddings to values?

In ViT-MAE (src/transformers/models/vit_mae), when the hidden states pass through ViTMAESdpaAttention or ViTMAESelfAttention, the position embeddings (PEs) have already been added before the Q-, K-, and V-projections, so they end up in the values as well. But many works argue that PEs should only be added to Q and K. Is this a bug?
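To make the question concrete, here is a minimal toy sketch (not the actual ViTMAE code; `ToyAttention` and its arguments are hypothetical) contrasting the two variants: adding the PEs to the hidden states so they flow into Q, K and V, versus adding them only before the Q and K projections.

```python
import torch
import torch.nn as nn

class ToyAttention(nn.Module):
    """Minimal single-head self-attention illustrating the two PE variants.
    Hypothetical toy module for discussion, not the transformers implementation."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x, pos_emb, pe_on_values=True):
        if pe_on_values:
            # Variant A: PEs are added to the hidden states before the
            # projections, so they reach Q, K *and* V.
            h = x + pos_emb
            q, k, v = self.q(h), self.k(h), self.v(h)
        else:
            # Variant B (what some papers suggest): PEs only go into the
            # Q and K projections; V sees the content embeddings alone.
            q = self.q(x + pos_emb)
            k = self.k(x + pos_emb)
            v = self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v

x = torch.randn(1, 4, 8)    # (batch, tokens, dim)
pos = torch.randn(1, 4, 8)  # toy position embeddings
layer = ToyAttention(8)
out_a = layer(x, pos, pe_on_values=True)
out_b = layer(x, pos, pe_on_values=False)
print(out_a.shape, out_b.shape)
```

My question is essentially whether Variant A, which is what the current forward pass amounts to, is intentional or a bug.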
Thank you!
