Hugging Face using only half of the cores for inference


I’m performing inference with a fine-tuned BERT model for text classification. I have a massive dataset with over 7 million samples to classify, and I’m currently running performance tests on a smaller subset. The issue is that Hugging Face only utilizes half of the available CPU cores. I understand that a GPU would be much more efficient, but unfortunately that isn’t an option for me at the moment.
The transformers version is 4.30.2 and the code is below. Do you know how to use all the available cores, or how to set a specific number of cores?

from transformers import BertTokenizer, BertForSequenceClassification, pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import Dataset
from tqdm import tqdm
import pandas as pd

tokenizer_path = './model'
model_path = './model'
tokenizer = BertTokenizer.from_pretrained(tokenizer_path)
model = BertForSequenceClassification.from_pretrained(model_path)

# Load the inputs and drop rows with a missing 'input' column.
data = pd.read_csv('input.csv')
data = data.dropna(subset=['input'])
dataset = Dataset.from_pandas(data)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, truncation=True)
# The pipeline returns a generator; collect the predictions into a list.
results = pipe(KeyDataset(dataset, 'input'), batch_size=64)
l_result = [out for out in tqdm(results)]
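In case it's relevant, here is a sketch of what I'm considering trying (assuming the pipeline runs on the PyTorch backend, whose CPU kernels typically use OpenMP for intra-op parallelism): setting `OMP_NUM_THREADS` early, before torch is imported, since OpenMP reads it at library load time. I don't know whether this is the right knob:

```python
import os

# Number of logical CPUs visible to the process.
n_cpus = os.cpu_count()

# OpenMP reads this environment variable when the library loads,
# so it must be set before importing torch/transformers.
os.environ["OMP_NUM_THREADS"] = str(n_cpus)
```

An alternative I've seen mentioned is calling `torch.set_num_threads(n)` after importing torch, which sets the intra-op thread count directly, but I haven't verified which of the two actually changes the behavior here.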