Optimization strategies

Hello community, I'm making predictions with a Hugging Face transformer model in the following way, but it's obviously not optimal because I'm processing one sample at a time.
Does anyone have an idea how to do it in a batched way? Thanks!

```python
import torch
from tqdm import tqdm

preds = []
for inp in tqdm(data.input.values):
    inputs = tokenizer.encode_plus(inp,
                                   return_tensors='pt',
                                   padding=True,
                                   truncation=True)
    outputs = model(**inputs)
    # sequence-level prediction
    seq_class_index = torch.argmax(outputs.sequence_logits, dim=-1)
    seq_class = model.sequence_tags[seq_class_index[0]]
    # token-level predictions (dropping the special tokens at both ends)
    token_class_index = torch.argmax(outputs.token_logits, dim=-1)
    tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1])
    tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]]
    preds.append(seq_class)
```
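One possible direction (a sketch, not tested against this particular model): the tokenizer can encode a list of strings at once, padding them to the same length, and the argmax can then be taken per row of the batch. The `chunks` helper and the `batch_size` value below are my own additions; the commented loop assumes the same custom interface as the snippet above (`sequence_logits`, `sequence_tags`).

```python
from typing import Iterator, List

def chunks(items: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size batches from a list of inputs."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Hypothetical batched loop, assuming the model/tokenizer from the question:
#
# preds = []
# for batch in tqdm(chunks(list(data.input.values), batch_size=32)):
#     inputs = tokenizer(batch, return_tensors='pt',
#                        padding=True, truncation=True)
#     with torch.no_grad():          # no gradients needed for inference
#         outputs = model(**inputs)
#     seq_class_indices = torch.argmax(outputs.sequence_logits, dim=-1)
#     preds.extend(model.sequence_tags[i] for i in seq_class_indices.tolist())
```

Note that with padding, the token-level predictions would also need the attention mask to know where each sequence really ends, rather than the fixed `[1:-1]` slice.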