I wanted to query a dataframe via the "table-question-answering" pipeline. It works well with small dataframes; however, as soon as I import larger dataframes (e.g. with ~400 rows), I get the following issue:
He’s on vacation, so you might have to wait up to two weeks. Looking at the code, you passed more rows than allowed by tokenizer.max_row_id, so you should send a shorter table.
There also seems to be a drop_rows_to_fit=True option that you can pass to avoid this error.
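In the meantime, a simple workaround is to split the dataframe into chunks that fit under the row limit and query each chunk separately. A minimal sketch with pandas (the MAX_ROWS value here is illustrative; check tokenizer.max_row_id for the actual limit of your tokenizer):

```python
import pandas as pd

# Illustrative limit -- inspect tokenizer.max_row_id for the real value.
MAX_ROWS = 256

def split_table(df: pd.DataFrame, max_rows: int = MAX_ROWS):
    """Split a dataframe into row-wise chunks the tokenizer can accept."""
    return [df.iloc[i:i + max_rows] for i in range(0, len(df), max_rows)]

# Example: a ~400-row table like the one described above.
df = pd.DataFrame({
    "city": [f"city_{i}" for i in range(400)],
    "population": list(range(400)),
})

chunks = split_table(df)
# Each chunk can then be converted to strings (as the pipeline expects)
# and queried individually, e.g.:
#   qa = pipeline("table-question-answering")
#   answer = qa(table=chunk.astype(str), query="...")
```

You would then merge or compare the per-chunk answers yourself, which is cruder than a built-in truncation option but works with any table size.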