SentenceTransformer labels for SoftmaxLoss

I’m trying to fine-tune a SentenceTransformer using this tutorial: Train and Fine-Tune Sentence Transformers Models.

My data is similar to case #1:
sentence1 | sentence2 | label
sentence1 | sentence2 | label
sentence1 | sentence2 | label

I have 4 integer labels ranging from 0 to 3, and I’m using SoftmaxLoss, since ContrastiveLoss seems to only support binary labels and I have 4.
My question is: should the labels indicate similarity or distance? That is, should 0 mean the sentences are close, or that they are far apart?
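For context, here’s roughly how I prepare the training rows. This is a minimal sketch with placeholder sentences (the actual wrapping into sentence-transformers `InputExample` objects is shown as a comment):

```python
# Sketch of how my (sentence1, sentence2, label) rows look.
# The sentences below are placeholders for my real data.
rows = [
    ("A man is eating food.", "A man is eating a meal.", 0),
    ("A man is eating food.", "A man is playing guitar.", 3),
]

# Sanity check: labels must be integer class ids in [0, num_labels)
num_labels = 4
labels = [label for _, _, label in rows]
assert all(isinstance(l, int) and 0 <= l < num_labels for l in labels)

# With sentence-transformers these would then be wrapped as:
# from sentence_transformers import InputExample
# train_examples = [InputExample(texts=[s1, s2], label=l) for s1, s2, l in rows]
```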

And another question: when I fit the model and provide an evaluator, no evaluation metrics are printed during training:

from sentence_transformers import losses, evaluation

train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=4,
)
evaluator = evaluation.EmbeddingSimilarityEvaluator.from_input_examples(val_examples)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          evaluator=evaluator,
          epochs=3)

Output:

[training log omitted — it shows training progress but no evaluation metrics]
As you can see, no metrics are printed even though I provided an evaluator. Any idea why?

Any help is appreciated.

Thanks