Hi All,
I am running a simple SBERT model for fine-tuning, and I am getting the error below:

TypeError: SentenceTransformerTrainer.compute_loss() got an unexpected keyword argument 'num_items_in_batch'

I don't use 'num_items_in_batch' anywhere in my code, but I am still getting the error, and I could not find any solution.

The code is below:
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
import pandas as pd

# Load pre-trained SBERT model
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Prepare dataset: replace with your actual dataset containing 'sentence1', 'sentence2', 'label'
data = [
    {"sentence1": "This is a cat.", "sentence2": "This is a dog.", "label": 0},
    {"sentence1": "The sky is blue.", "sentence2": "The sky is clear.", "label": 1},
]
df = pd.DataFrame(data)

# Convert to InputExample instances
train_examples = [
    InputExample(
        texts=[row['sentence1'], row['sentence2']],
        label=row['label']
    )
    for _, row in df.iterrows()
]

# DataLoader setup
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Define the loss function for binary classification
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2  # 2 classes: 0 (no match) and 1 (match)
)

# Train model using fit()
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=10
)

print("Training complete.")
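In case it is relevant, here is a small snippet I can run to report the installed library versions, since the error seems to come from the underlying trainer rather than from my own code (I am assuming transformers and torch are installed alongside sentence-transformers, and that a version mismatch between them could be the cause):

# Report installed versions, in case this is a version mismatch
# between sentence-transformers and transformers (my assumption)
import torch
import transformers
import sentence_transformers

print("sentence-transformers:", sentence_transformers.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)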
I would appreciate your help!
Thank you,
Seyhan