Using TAPAS model for large datasets

It seems that TAPAS models can only handle tables of roughly 100 rows from a given CSV file, presumably because of the 512-token input limit :frowning:

This is not very helpful in an enterprise environment.
How do I fine-tune the google/tapas-large-finetuned-wtq model on a CSV file with around 80,000 rows?
Any example code would be helpful.
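For context, the only workaround I can think of so far is splitting the table into ~100-row chunks and querying each chunk separately, which obviously breaks aggregation-style questions. A rough sketch below (the column names and data are synthetic stand-ins for my real CSV, and the pipeline call is shown as a comment since it needs the model downloaded):

```python
import pandas as pd
import numpy as np

# Synthetic stand-in for the real 80,000-row CSV (hypothetical columns).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["NA", "EU", "APAC"], size=80_000),
    "sales": rng.integers(0, 1000, size=80_000),
})

# TAPAS truncates input at 512 tokens, which in practice caps tables
# at roughly 100 rows, so split the table into chunks of that size.
CHUNK_ROWS = 100
chunks = [df.iloc[i:i + CHUNK_ROWS] for i in range(0, len(df), CHUNK_ROWS)]
print(len(chunks))  # 800 chunks for 80,000 rows

# Each chunk would then be queried separately, e.g.:
# from transformers import pipeline
# tqa = pipeline("table-question-answering",
#                model="google/tapas-large-finetuned-wtq")
# answer = tqa(table=chunks[0].astype(str), query="total sales in EU?")
```

This works for lookup-style questions where the answer lives in one chunk, but there is no clean way to combine per-chunk answers for queries like sums or counts over the whole table, which is why I'm asking about fine-tuning or another approach instead.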