Avoid installing PyTorch alongside transformers for ONNX inference

This is the code I am following. Since inference runs on onnxruntime, PyTorch should not be needed, but when I install transformers, PyTorch gets installed automatically. Is there a way to avoid this?
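For context, the environment was presumably set up with something along these lines (the exact commands are an assumption on my part, not stated in the question):

```shell
# Assumed setup commands (hypothetical; the question does not list them).
# Installing transformers, and optimum with the onnxruntime extra,
# is what ends up pulling torch into the environment.
pip install transformers
pip install "optimum[onnxruntime]"
```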

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# The Hub checkpoint is a PyTorch model, so export=True is needed
# to convert it to ONNX on the fly
ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)