After fine-tuning the model with model.train() it gives different predictions for the same text

This is how I fine-tuned the model:

    input_ids = tokenizer(str(parseddata), padding=True, truncation=True, max_length=500,
                          return_tensors="pt")
    labels = torch.tensor([0])

    lr_scheduler = get_scheduler(
        name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=2
    )

    model.train()  # switch the model to training mode
    for i in range(2):
        outputs = model(**input_ids, labels=labels)  # unpack the tokenizer output (input_ids, attention_mask, ...)
        loss = outputs[0]
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

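For reference, the tokenizer, model, optimizer and get_scheduler used above come from a setup roughly like this (just a sketch; the checkpoint name and learning rate below are placeholders, not necessarily what I actually use):

    # Rough setup sketch -- "bert-base-uncased" and lr=5e-5 are placeholders
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification, get_scheduler

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
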
This is how I predicted it:

    def predict_label(text):
        input_ids = tokenizer(text, padding=True, truncation=True, max_length=500,
                              return_tensors="pt")
        logits = model(**input_ids)[0]
        probs = torch.nn.functional.softmax(logits, dim=1)
        return probs

Only after training does the model give different answers for the same text input. However, when I close the entire process and start it again, it gives the same prediction every time. Any help would be extremely appreciated, thanks.
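
To be concrete, calling the function twice in a row on the same string, right after the training loop and in the same process, prints two different probability tensors (a quick sketch; the sample text is arbitrary):

    text = "the same input text"   # arbitrary example string
    print(predict_label(text))     # one probability tensor
    print(predict_label(text))     # a different tensor for the exact same input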

What do you mean by closing the entire process?