Speeding up ELECTRA inference, multilabel classification

I have a problem with the inference speed of my fine-tuned ELECTRA model. It took about 15 minutes on a Google Colab Pro instance (if that's the right term) to classify 10,000 sentences of at most 280 characters each. The model scores each sentence on 10 dimensions and outputs a probability for each one. The output looks something like this:

```
      tweet_id             user_username  text                                                created_at                user_name  user_verified  sourcetweet_text  morality_binary  emotion_binary  ...  negative_binary  care_binary  fairness_binary  authority_binary  sanctity_binary  harm_binary  injustice_binary  betrayal_binary  subversion_binary  degradation_binary
0  1  443011743288393728  jahimes        People are now using @metronorth like a subway...  2014-03-10T13:13:25.000Z  Jim Himes  True           NaN               0.068876         0.088321        ...  0.055407         0.042869     0.048118         0.051975          0.038184         0.041714     0.043601          0.032611         0.038528           0.038586
1  2  443011451142537216  jahimes        Spent morning on @metronorth issues with Rep. ...  2014-03-10T13:12:15.000Z  Jim Himes  True           NaN               0.064062         0.073806        ...  0.059262         0.043094     0.045094         0.053616          0.039912         0.043484     0.049103          0.038017         0.043561           0.040557
```

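For context, the inference loop looks roughly like the sketch below. This is a simplified version rather than my exact code: the checkpoint directory, input CSV, batch size, max length, and label column names are placeholders, and I'm assuming a standard `AutoModelForSequenceClassification` setup where the per-label probabilities come from a sigmoid over the logits.

```python
import torch
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_DIR = "electra-finetuned"  # placeholder for my actual checkpoint directory
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR).to(device)
model.eval()

def predict_probs(texts, batch_size=32, max_length=128):
    """Return an (n_texts, n_labels) array of per-label sigmoid probabilities."""
    all_probs = []
    with torch.no_grad():
        for start in range(0, len(texts), batch_size):
            batch = texts[start:start + batch_size]
            enc = tokenizer(
                batch,
                padding=True,
                truncation=True,
                max_length=max_length,  # tweets are short, so this is generous
                return_tensors="pt",
            ).to(device)
            logits = model(**enc).logits
            # multilabel: independent sigmoid per label rather than a softmax
            all_probs.append(torch.sigmoid(logits).cpu())
    return torch.cat(all_probs).numpy()

df = pd.read_csv("tweets.csv")  # placeholder input file with a "text" column
probs = predict_probs(df["text"].tolist())
prob_cols = pd.DataFrame(probs, columns=[f"label_{i}" for i in range(probs.shape[1])])
df = pd.concat([df.reset_index(drop=True), prob_cols], axis=1)
```
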
Any suggestions on how to speed up this process?