Setting a different embedding dimension than the original model when training

Hello, because of the large amount of memory embeddings take up, is it possible, when training a new model derived from a previous one, to set a different (smaller) dimensionality parameter in Pooling, like this:

from sentence_transformers import models

# Use Huggingface/transformers model (like BERT, RoBERTa, XLNet, XLM-R) for mapping tokens to embeddings
word_embedding_model = models.Transformer(model_name)  # the original model works with 1024-dim embeddings

# Apply mean pooling to get one fixed-sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)

Then, instead of using get_word_embedding_dimension(), is it possible to simply set the desired (smaller) dimension for the model being trained?
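
Or, if the Pooling dimension has to match the token embedding dimension, would the way to do this be to add a Dense projection layer on top of pooling? Just a rough sketch of what I have in mind (target_dim is a hypothetical value I picked, not something from the original code), in case it helps clarify the question:

from sentence_transformers import SentenceTransformer, models
from torch import nn

target_dim = 256  # hypothetical smaller output dimension

# Token embeddings still come out at the original size (e.g. 1024)
word_embedding_model = models.Transformer(model_name)

# Mean pooling over the token embeddings, same as before
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True)

# Dense layer projecting the pooled sentence vector down to target_dim
dense_model = models.Dense(in_features=pooling_model.get_sentence_embedding_dimension(),
                           out_features=target_dim,
                           activation_function=nn.Tanh())

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])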

Thanks so much for your attention. I am asking this so I can lay out some options for my team…