Text Length FinBert - Serverless Inference Endpoint

Hi guys, I’m trying to send a long text (longer than 512 tokens) to a FinBert model deployed on a serverless inference endpoint on AWS.
I’m receiving the following error: “The size of tensor a (639) must match the size of tensor b (512) at non-singleton dimension 1”.

I have a list of texts that I would like to classify without splitting them. How can I fix this?

Thank you in advance

The model has a max_sequence_length of 512, so it cannot process longer inputs in one pass. You can pass truncation=True as a parameter, which truncates the input to the first 512 tokens (everything beyond that is discarded), e.g.

{
  "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
  "parameters": {
   "truncation": true
  }
}
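As a sketch, here is how the same request could be sent from Python. The payload is built with the standard json module (note that Python's True serializes to JSON's lowercase true, which is what the endpoint expects), and the boto3 invocation is shown commented out since it needs AWS credentials and a real endpoint name; "finbert-serverless" below is a placeholder, not your actual endpoint.

```python
import json

# Placeholder endpoint name; replace with your own serverless endpoint.
ENDPOINT_NAME = "finbert-serverless"

# Build the request payload. Python's True serializes to JSON's
# lowercase true, so json.dumps produces a valid request body.
payload = {
    "inputs": "Long financial news text that may exceed 512 tokens ...",
    "parameters": {"truncation": True},
}
body = json.dumps(payload)

# Invoke the endpoint with boto3 (sketch; requires AWS credentials):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName=ENDPOINT_NAME,
#     ContentType="application/json",
#     Body=body,
# )
# print(response["Body"].read().decode("utf-8"))
```

With truncation enabled the endpoint will no longer raise the tensor-size error, but be aware that any content past the first 512 tokens does not influence the classification.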
