How to increase max_seq_length for LayoutLMv3

Hi there,

The model I am using is LayoutLMv3 (LayoutLMv3ForTokenClassification).

I want the model to accept more than 512 tokens, because when the text is long, everything beyond the truncation point is left unclassified.

I want to increase max_seq_length, so I changed max_position_embeddings to 1024 in LayoutLMv3Config; the bbox dataset I am using accordingly has size 1024 + 196 + 1 = 1221 (1024 text tokens + 196 image patch tokens + 1 CLS token), but this hasn't worked.
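Roughly, the change looks like this (a minimal sketch of my setup; the checkpoint name "microsoft/layoutlmv3-base", num_labels=7, and the ignore_mismatched_sizes flag stand in for my actual script):

```python
from transformers import LayoutLMv3Config, LayoutLMv3ForTokenClassification

# placeholders: the base checkpoint and label count stand in for my actual setup
config = LayoutLMv3Config.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)
config.max_position_embeddings = 1024  # default is 514

model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base",
    config=config,
    ignore_mismatched_sizes=True,  # without this, loading fails on the resized position table
)
```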

I got a CUDA error:

```
.../python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2197         # remove once script supports set_grad_enabled
   2198         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: CUDA error: device-side assert triggered
```

I assume this is an index/size mismatch in one of the embedding tables.
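Since CUDA device-side asserts are reported asynchronously, the traceback above probably points at an unrelated line. To unmask the real message, I plan to rerun the same batch on CPU (or set CUDA_LAUNCH_BLOCKING=1 before CUDA is initialized); on CPU this kind of failure usually surfaces as a readable "index out of range" IndexError from an embedding lookup. A sketch, where `model` and `batch` are placeholders for my model and one collated batch:

```python
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # only takes effect if set before the first CUDA call

# `model` and `batch` are placeholders for my model and one collated batch
model_cpu = model.cpu()
cpu_batch = {k: v.cpu() for k, v in batch.items()}
outputs = model_cpu(**cpu_batch)  # on CPU this raises a readable IndexError instead of the assert
```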

Can someone explain how I can increase max_seq_length, please?
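In case it is relevant: my next idea was to grow the position-embedding table by hand and copy over the pretrained rows, instead of relying on the config change alone. A minimal, untested sketch, assuming the Hugging Face transformers implementation of LayoutLMv3 (attribute paths like model.layoutlmv3.embeddings are from reading the modeling code; the checkpoint name is a placeholder). If I read the RoBERTa-style embedding code correctly, position ids start at pad_token_id + 1 = 2, which is why the default table has 514 rows for 512 tokens, and why 1024 tokens would need 1026 rows rather than 1024:

```python
import torch
import torch.nn as nn
from transformers import LayoutLMv3ForTokenClassification

NEW_MAX = 1026  # 1024 text tokens + RoBERTa-style offset of 2

model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base")

emb = model.layoutlmv3.embeddings
old = emb.position_embeddings  # nn.Embedding(514, hidden_size) in the base checkpoint

new = nn.Embedding(NEW_MAX, old.embedding_dim, padding_idx=old.padding_idx)
with torch.no_grad():
    new.weight[: old.num_embeddings] = old.weight  # keep the 514 pretrained rows
emb.position_embeddings = new

# keep the cached position ids and the config in sync with the larger table
if hasattr(emb, "position_ids"):  # some versions cache the ids in a buffer
    emb.register_buffer("position_ids", torch.arange(NEW_MAX).unsqueeze(0), persistent=False)
model.config.max_position_embeddings = NEW_MAX
```

Rows 514 to 1025 would be randomly initialized, so I assume the model needs fine-tuning before the longer positions are useful, and the processor/tokenizer would have to be called with max_length=1024 as well. Is this the right direction?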