TypeError: DebertaV2ForQuestionAnswering object argument after ** must be a mapping, not Tensor

Hi,

I am getting the above TypeError in my inference code, where I pass the contents of a file as context together with a query to predict answers, using a deberta-v2-xlarge model trained and evaluated on a custom dataset.
I have divided the contents of each file into chunks of size 512 (the maximum length handled by the model).
answer_start_scores = self.model(**chunk)[0]
answer_end_scores = self.model(**chunk)[1]
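For reference, the chunking is done roughly along these lines (an illustrative sketch, not my exact code; the question/context variable names and the stride value are assumptions, and return_overflowing_tokens on a sentence pair needs a fast tokenizer). The key point is that every chunk passed to self.model(**chunk) must be a mapping of tensors:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

    # Tokenize the (question, context) pair into overlapping 512-token chunks.
    encodings = tokenizer(
        question,                        # assumed: the query string
        context,                         # assumed: the long file contents
        max_length=512,
        truncation="only_second",        # truncate only the context
        stride=128,                      # overlap between consecutive chunks
        return_overflowing_tokens=True,  # one encoding row per chunk
        return_tensors="pt",
    )

    # Build one dict per chunk so that self.model(**chunk) receives a mapping.
    inputs = {
        i: {"input_ids": encodings["input_ids"][i:i + 1],
            "attention_mask": encodings["attention_mask"][i:i + 1]}
        for i in range(encodings["input_ids"].shape[0])
    }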

When I run my inference code on N txt files, it processes only 4 of them and then fails with the error below. Please let me know how to resolve this error.

ERROR:

Traceback (most recent call last):
  File "contract_final_updated.py", line 291, in <module>
    final = reader.get_answer(ques)  # model inference on input (question, context)
  File "contract_final_updated.py", line 91, in get_answer
    answer_start_scores = self.model(**chunk)[0]
TypeError: DebertaV2ForQuestionAnswering object argument after ** must be a mapping, not Tensor
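
The message itself says that what ends up after ** is a torch.Tensor rather than a dict. My guess (an assumption, since the code that builds self.inputs is not shown here) is that for the failing file self.inputs is a single BatchEncoding instead of a dict of per-chunk dicts, so .items() yields (key, tensor) pairs. A quick diagnostic sketch:

    from collections.abc import Mapping

    # Hypothetical check: confirm every chunk really is a mapping of tensors
    for k, chunk in self.inputs.items():
        if not isinstance(chunk, Mapping):
            print(f"chunk {k} has type {type(chunk)}, expected a mapping")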

The function get_answer():

def get_answer(self, ques):
    if self.chunked:
        answer = ''
        for k, chunk in self.inputs.items():

            # find the N best predictions
            nbest = 3
            answer_start_scores = self.model(**chunk)[0]
            answer_end_scores = self.model(**chunk)[1]
            # print('answer_start_scores', answer_start_scores)  # similar to start_logits calculated in get_answer_update()
            # print(torch.topk(answer_start_scores.flatten(), nbest).indices)

            best_indices_start = torch.topk(answer_start_scores, nbest).indices  # best nbest start indices, similar to start_indexes
            best_indices_end = torch.topk(answer_end_scores, nbest).indices  # best nbest end indices, similar to end_indexes

            answer_start_scores = answer_start_scores[0].detach().numpy()
            answer_end_scores = answer_end_scores[0].detach().numpy()
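
As an aside, with transformers 4.x the model output can be read by attribute instead of by index, which is more robust and also avoids running the forward pass twice (a small sketch, assuming the default return_dict=True):

    # Single forward pass; read the QA logits by name
    outputs = self.model(**chunk)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits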

Did you find any way to handle large texts and divide them into chunks of size 512 that works correctly?