Isn't the KV cache invalidated by position encoding during inference?

The KV cache can speed up inference because, after the first iteration, the keys and values of the tokens already processed do not change, so we can store and reuse them.
However, every iteration appends a new token to the input sequence, and in my opinion that means the position encoding should change, and thus the K and V values of the old tokens change too, which would make the KV cache useless.
So which part of my description is wrong, and how exactly does the KV cache work?
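To make the question concrete, here is a minimal sketch of what I am asking about (pure NumPy, a single made-up projection matrix `W_k`, and standard sinusoidal absolute position encoding assumed; real models have multiple heads and layers). It compares the keys computed from scratch over the full sequence against keys built incrementally from a cache:

```python
import numpy as np

np.random.seed(0)
d = 8                           # toy model dimension
W_k = np.random.randn(d, d)     # hypothetical key projection

def pos_encoding(positions, d):
    # Standard sinusoidal absolute position encoding:
    # each row depends only on that token's own position index.
    pe = np.zeros((len(positions), d))
    for i, p in enumerate(positions):
        for j in range(0, d, 2):
            angle = p / (10000 ** (j / d))
            pe[i, j] = np.sin(angle)
            pe[i, j + 1] = np.cos(angle)
    return pe

tokens = np.random.randn(5, d)  # embeddings of 5 tokens

# Full recompute: all 5 tokens at positions 0..4.
x_full = tokens + pos_encoding(range(5), d)
K_full = x_full @ W_k

# Incremental: cache K for the first 4 tokens, then compute
# only the new token's key at position 4 and append it.
x_old = tokens[:4] + pos_encoding(range(4), d)
K_cache = x_old @ W_k
x_new = tokens[4:] + pos_encoding([4], d)
K_incr = np.vstack([K_cache, x_new @ W_k])

# My question is whether these can really match, given that
# a new token was appended to the sequence.
print(np.allclose(K_full, K_incr))  # True
```

In this sketch the old tokens keep their original position indices (token 0 stays at position 0, and so on), so the cached rows come out identical to the recomputed ones, but I am not sure whether this is actually how it works in real models.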