Tapas-base-finetuned-wtq model is very slow

I tried running the TAPAS table question answering model on a 30-row sample CSV, and the model responds quickly.

The same model on 4k rows of more complex data is very, very slow.
How can I make this faster? For example, can I convert the data into Parquet, JSON, or some other format for a faster response?
Or do I simply need more computational power?

from transformers import pipeline
import pandas as pd

table = pd.read_csv("data.csv").astype(str)  # placeholder path; TAPAS expects all cells as strings
tqa = pipeline(task="table-question-answering", model="google/tapas-base-finetuned-wtq")
query = "Who has lowest innings?"
print(tqa(table=table, query=query)["answer"])

Also, is there a way to fine-tune this model to make it faster?