Reduce the size of the latent space in a VQModel

I am trying to use VQModel to build a vector-quantized autoencoder, but the spatial size of the latent space stays the same as that of the input image. I can reduce the number of channels with the `latent_channels` parameter, but how do I shrink the spatial dimensions of the encoded space? For example, with a 512 x 512 input I would like the encoded space to be 64 x 64, as described in the paper.
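
For reference, here is a minimal sketch of what I suspect might work, assuming each additional entry in `down_block_types` halves the spatial resolution (with the final block not downsampling), so four blocks would give a 2^3 = 8x reduction. The specific `block_out_channels` values are just my guesses:

```python
import torch
from diffusers import VQModel

# Assumption: every down block except the last one downsamples by 2x,
# so 4 blocks -> 2**3 = 8x reduction: 512 x 512 -> 64 x 64.
model = VQModel(
    in_channels=3,
    out_channels=3,
    down_block_types=("DownEncoderBlock2D",) * 4,
    up_block_types=("UpDecoderBlock2D",) * 4,
    block_out_channels=(64, 128, 256, 256),  # arbitrary choice on my part
    latent_channels=3,
    sample_size=512,
)

x = torch.randn(1, 3, 512, 512)
latents = model.encode(x).latents
print(latents.shape)  # hoping for: torch.Size([1, 3, 64, 64])
```

Is this the intended way to control the downsampling factor, or is there a dedicated parameter I am missing?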