Hugging Face classification: pipeline and model(**inputs) give different predictions

I am fine-tuning Longformer and then making predictions in two ways: with TextClassificationPipeline and with a direct model(**inputs) forward pass. I am not sure why the two give different results.

import os

import datasets
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from tqdm import tqdm
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from datasets import Dataset
from transformers import (
    LongformerTokenizerFast, LongformerForSequenceClassification, LongformerConfig,
    Trainer, TrainingArguments,
    TextClassificationPipeline, AutoTokenizer, AutoModelForSequenceClassification,
)

tokenizer = LongformerTokenizerFast.from_pretrained('folder_path/', max_length=maximum_len)  # maximum_len is set earlier in my script

Loading the fine-tuned model from a saved location and reusing the original tokenizer:

saved_location = 'c:/xyz'
model_saved = AutoModelForSequenceClassification.from_pretrained(saved_location)
pipe = TextClassificationPipeline(model=model_saved, tokenizer=tokenizer, device=0)
prediction = pipe(["The text to predict"], return_all_scores=True)
prediction
[[{'label': 'LABEL_0', 'score': 0.7107483148574829},
  {'label': 'LABEL_1', 'score': 0.2892516553401947}]]
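
While debugging I also wanted to compare raw scores directly. As far as I know, the text classification pipeline accepts a function_to_apply argument ("none", "sigmoid", "softmax") at call time; I haven't verified it on my transformers version, but something like this should expose the unprocessed logits:

# Sketch: ask the pipeline to skip its activation so the returned "scores"
# are the raw logits (function_to_apply may not exist in older versions)
raw_prediction = pipe(["The text to predict"], return_all_scores=True, function_to_apply="none")
print(raw_prediction)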

Second method:

device = torch.device('cuda:0')  # same GPU the pipeline runs on
inputs = tokenizer("The text to predict", return_tensors="pt").to(device)
outputs = model_saved(**inputs)
print(outputs['logits'])
# tensor([[ 0.4552, -0.4438]], device='cuda:0', grad_fn=<AddmmBackward0>)
torch.sigmoid(outputs['logits'])
# tensor([[0.6119, 0.3908]], device='cuda:0', grad_fn=<SigmoidBackward0>)

The pipeline (first method) returns probabilities 0.71 and 0.29. The second method returns logits 0.4552 and -0.4438, which sigmoid converts to 0.6119 and 0.3908. Why do the two methods disagree?
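
One thing I noticed while comparing the numbers: applying softmax instead of sigmoid to the same logits reproduces the pipeline scores almost exactly, so my guess is that the pipeline normalises across the two labels with softmax while I applied sigmoid to each logit independently. A minimal check, using the logits printed above:

logits = torch.tensor([[0.4552, -0.4438]])

# sigmoid treats each logit independently (multi-label style); the scores do not sum to 1
print(torch.sigmoid(logits))           # tensor([[0.6119, 0.3908]])

# softmax normalises across the two classes (single-label style) and matches the pipeline output
print(torch.softmax(logits, dim=-1))   # tensor([[0.7107, 0.2893]])

Is that actually what TextClassificationPipeline does under the hood, and if so, what controls which activation it applies?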