"table-question-answering" is not an available task under pipeline

I am trying to load the “table-question-answering” task using the pipeline, but I keep getting this message:

"Unknown task table-question-answering, available tasks are ['feature-extraction', 
'sentiment-analysis',............

Below are the lines I run.

from transformers import pipeline
import pandas as pd

tqa = pipeline("table-question-answering")

You should check your version of transformers; it looks like it’s not up to date.
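For reference, a minimal sketch of the version comparison involved (the 4.x cut-off is an assumption based on this thread, where 3.3.1 fails and 4.2.1 works):

```python
# Sketch: the table-question-answering pipeline only exists in newer
# transformers releases, so a plain version comparison tells you whether
# an upgrade is needed. The 4.x cut-off here is an assumption.
def version_tuple(v):
    """Turn a version string like '4.2.1' into (4, 2, 1)."""
    return tuple(int(part) for part in v.split("."))

print(version_tuple("3.3.1") < version_tuple("4.2.1"))  # True
```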

Oh! I had transformers 3.3.1 installed. I upgraded to 4.2.1; now it throws a different error, though:

404 Client Error: Not Found for url: https://huggingface.co/nielsr/tapas-base-finetuned-wtq/resolve/main/config.json

I tried a different model with the following script:

from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("nielsr/tapas-base-finetuned-sqa")

model = AutoModelForTableQuestionAnswering.from_pretrained("nielsr/tapas-base-finetuned-sqa")

I get the following error:

ImportError: 
TapasModel requires the torch-scatter library but it was not found in your 
environment. You can install it with pip as
explained here: https://github.com/rusty1s/pytorch_scatter.
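Since TapasModel raises this ImportError at load time, one option is to check for the dependency up front before constructing anything. A minimal sketch, assuming only the standard library (the helper name is mine):

```python
import importlib.util

def has_torch_scatter():
    """Return True if the torch_scatter package is importable."""
    return importlib.util.find_spec("torch_scatter") is not None

if not has_torch_scatter():
    print("torch-scatter is missing; see the pytorch_scatter README for "
          "install instructions (prebuilt wheels matched to your torch/CUDA "
          "version avoid compiling locally).")
```

On Windows, the prebuilt-wheels route described in the pytorch_scatter README sidesteps the C++ build tools that a source install requires.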

I tried pip installing from that location, but I get a different error:

error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ 
Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

I already have Visual C++ 14.x on my machine, so I guess this is now an issue with my setup.

Thank you, I will keep trying.

cc @lysandre and @nielsr who will know better than me.

Hi there,

I ran this myself and it throws an error indeed, indicating that ‘nielsr/tapas-base-finetuned-wtq’ does not exist. This is logical since I deleted that model from the hub, because everything is now under the Google namespace. @lysandre we should update the default model of the TableQuestionAnsweringPipeline to ‘google/tapas-base-finetuned-wtq’ instead of ‘nielsr/tapas-base-finetuned-wtq’.

For now, it works when you pass in an appropriate model name that exists in the model hub (note that you should install the torch-scatter dependency for your environment):

from transformers import pipeline
import pandas as pd

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

Then you can make predictions. I’ve made a Colab notebook that you can run here: https://colab.research.google.com/drive/15i5hPANX9uDB03DyGNEqD_fCYwVv0xeQ?usp=sharing
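For completeness, a small sketch of preparing a pandas table for TAPAS: the model expects every cell to be a string, so numeric columns need casting first. The example data here is illustrative:

```python
import pandas as pd

# TAPAS operates on string cells, so cast the whole frame to str first.
data = {"Repository": ["transformers", "datasets", "tokenizers"],
        "Stars": [36542, 4512, 3934]}
table = pd.DataFrame.from_dict(data).astype(str)

print(table.loc[0, "Stars"])  # 36542, now a string rather than an int
```

The prepared frame can then be passed as `tqa(table=table, query="...")`, which returns a dict whose `answer` key holds the predicted cell(s).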


Started a PR here:

https://github.com/huggingface/transformers/pull/9729

Thank you so much for the Colab notebook. I am still working on getting torch-scatter installed. Thanks.