Can Sentence Similarity Return the Similar Sentence Content?

Hi There,

I'm a new fan of Hugging Face (and of NLP / LLMs in general). All of this is brand new to me. I've been trying out the SentenceTransformers library these days.

I copied the script below from the Semantic Textual Similarity page of the Sentence-Transformers documentation.
The code generates embeddings for the sentences, but from it I only get the cosine_scores, not the text of the similar sentences. Can anybody give me some clues on how to do that? Thank you in advance.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('/Users/xxx/xxx/all-MiniLM-L6-v2')

sentences = ['The cat sits outside',
             'A man is playing guitar',
             'I love pasta',
             'The new movie is awesome',
             'The cat plays in the garden',
             'A woman watches TV',
             'The new movie is so great',
             'Do you like pizza?']

# Compute embeddings
embeddings = model.encode(sentences, convert_to_tensor=True)

# Compute cosine-similarities for each sentence with each other sentence
cosine_scores = util.cos_sim(embeddings, embeddings)

# Collect every pair (i, j) with i < j together with its similarity score
pairs = []
for i in range(len(cosine_scores) - 1):
    for j in range(i + 1, len(cosine_scores)):
        pairs.append({'index': [i, j], 'score': cosine_scores[i][j]})

# Sort pairs by score in decreasing order
pairs = sorted(pairs, key=lambda x: x['score'], reverse=True)

for pair in pairs[0:10]:
    i, j = pair['index']
    print("{} \t\t {} \t\t Score: {:.4f}".format(sentences[i], sentences[j], pair['score']))

Sample output (excerpt):

The new movie is awesome 		 The new movie is so great 		 Score: 0.9286
{'index': [3, 6], 'score': tensor(0.9286)}
… …
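For anyone landing here with the same question: `cosine_scores` is an N×N matrix, where entry `[i][j]` is the similarity between `sentences[i]` and `sentences[j]`, so the sentence text is recovered simply by indexing the `sentences` list with those positions. Below is a minimal, self-contained sketch of that lookup. The tiny hand-made `embeddings` tensor is a stand-in for the output of `model.encode(...)` (used here only so the snippet runs without downloading a model); the score matrix is built the same way `util.cos_sim` does it, as a normalized dot product.

```python
import torch

# Stand-in data: in the real script, `sentences` is your list and
# `embeddings` comes from model.encode(sentences, convert_to_tensor=True).
sentences = ['The new movie is awesome', 'I love pasta', 'The new movie is so great']
embeddings = torch.tensor([[1.0, 0.1],
                           [0.0, 1.0],
                           [0.9, 0.2]])

# Same shape as util.cos_sim(embeddings, embeddings): an NxN score matrix
norm = torch.nn.functional.normalize(embeddings, dim=1)
cosine_scores = norm @ norm.T

# Pick a query sentence, mask its self-score (a sentence is always most
# similar to itself), then map the best index back to the sentence text.
query_idx = 0
scores = cosine_scores[query_idx].clone()
scores[query_idx] = -1.0
best_idx = int(torch.argmax(scores))

print(sentences[query_idx], '<->', sentences[best_idx])
```

The same indexing is exactly what the last loop of the original script does with `sentences[i]` and `sentences[j]`; if that loop runs, the similar-sentence content is already being printed alongside the score.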