I have been trying to fine-tune the SAM model.
I have images of shape [B, 3, 1024, 1024], masks of shape [B, 256, 256], and bounding boxes of shape [B, num_boxes, 4].
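As a sanity check on those shapes (a sketch with random tensors, assuming SAM's low-resolution mask output of 256x256 and `multimask_output=False`, which gives `pred_masks` of shape [B, num_boxes, 1, 256, 256]), the prediction and the ground-truth mask can be lined up for the loss like this:

```python
import torch

# Hypothetical tensors matching the shapes described above (random data,
# only to verify that predictions and targets align for the loss).
B, num_boxes = 2, 1
pred_masks = torch.randn(B, num_boxes, 1, 256, 256)        # SAM low-res logits
ground_truth_mask = torch.randint(0, 2, (B, 256, 256)).float()

predicted_masks = pred_masks.squeeze(1)                    # -> [B, 1, 256, 256]
targets = ground_truth_mask.unsqueeze(1)                   # -> [B, 1, 256, 256]
assert predicted_masks.shape == targets.shape == (B, 1, 256, 256)
```

Note that `squeeze(1)` only removes the `num_boxes` dimension when it equals 1; with several boxes per image the shapes need different handling.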
I don't know where I was going wrong, but there is a problem in the training. Here is my training loop:
    from statistics import mean
    import torch
    from tqdm import tqdm

    num_epochs = 100
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    model.train()

    for epoch in range(num_epochs):
        epoch_losses = []
        for batch in tqdm(train_loader):
            # forward pass
            outputs = model(pixel_values=batch["pixel_values"].to(device),
                            input_boxes=batch["input_boxes"].to(device),
                            multimask_output=False)

            # compute loss
            predicted_masks = outputs.pred_masks.squeeze(1)
            ground_truth_masks = batch["ground_truth_mask"].float().to(device)
            loss = seg_loss(predicted_masks, ground_truth_masks.unsqueeze(1))

            # backward pass (compute gradients of parameters w.r.t. loss)
            optimizer.zero_grad()
            loss.backward()

            # optimize
            optimizer.step()
            epoch_losses.append(loss.item())

        # release cached GPU memory between epochs
        torch.cuda.empty_cache()
        print(f'EPOCH: {epoch}')
        print(f'Mean loss: {mean(epoch_losses)}')
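For completeness, `seg_loss` and `optimizer` are not defined in the snippet above; the SAM fine-tuning tutorial uses monai's `DiceCELoss` for segmentation. As a dependency-free stand-in, here is a minimal soft-Dice sketch in plain PyTorch (`soft_dice_loss` is a hypothetical helper, not part of the tutorial):

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    # logits: [B, 1, H, W] raw model outputs; targets: [B, 1, H, W] in {0, 1}
    probs = torch.sigmoid(logits)
    # flatten the spatial dimensions per sample
    probs = probs.flatten(1)
    targets = targets.flatten(1)
    intersection = (probs * targets).sum(dim=1)
    union = probs.sum(dim=1) + targets.sum(dim=1)
    dice = (2 * intersection + eps) / (union + eps)
    # perfect overlap -> loss near 0; no overlap -> loss near 1
    return 1 - dice.mean()
```

It would be called exactly like `seg_loss` in the loop, e.g. `loss = soft_dice_loss(predicted_masks, ground_truth_masks.unsqueeze(1))`.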
I was encountering the following error while training the model.
@nielsr has posted a tutorial about SAM in his repo.
@nielsr, here is the error. How can I get rid of it?


