I would like to obtain sentence embeddings for the texts in my dataset. To do this, I am using the Sentence Transformer model "bert-base-nli-mean-tokens". This pre-trained model was trained with mean pooling as its pooling method, but I would like to use it with max pooling instead, without any additional training. I have written the following code, which runs without errors, but I am unsure whether this is a valid approach. Could you help me with this?
from sentence_transformers import SentenceTransformer, models

model_name = 'sentence-transformers/bert-base-nli-mean-tokens'
word_embedding_model = models.Transformer(model_name)
# Use max pooling instead of the mean pooling the model was trained with.
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode='max')
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
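For context on what the swap changes: mean and max pooling differ only in how the per-token embeddings are aggregated into one sentence vector. Here is a toy sketch of the two operations on made-up token embeddings (the values are purely illustrative, not from any real model):

```python
import numpy as np

# Hypothetical "token embeddings": 4 tokens, embedding dimension 3.
token_embeddings = np.array([
    [0.1, 0.9, 0.2],
    [0.4, 0.3, 0.8],
    [0.7, 0.1, 0.5],
    [0.2, 0.6, 0.4],
])

# Mean pooling: average each embedding dimension over all tokens.
mean_pooled = token_embeddings.mean(axis=0)  # [0.35, 0.475, 0.475]

# Max pooling: element-wise maximum over the token axis.
max_pooled = token_embeddings.max(axis=0)    # [0.7, 0.9, 0.8]
```

Both produce a single vector of the embedding dimension, which is why swapping the pooling module does not require retraining the transformer underneath.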