Fine-tune with a different max_length

This is probably a stupid question, but I can't find the answer anywhere.

Can I fine-tune with a longer max_length than what the initial model was trained with? My initial LM was trained with max_length=150, but I want to fine-tune for sequence classification with max_length=300. Is that possible, or should I retrain with the longer length?
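To make the question concrete, this is roughly what I'm checking: whether the pretrained checkpoint's position embeddings even cover 300 positions, or whether they're capped at the length the LM was trained with. (The checkpoint path and sequence below are placeholders; I'm assuming a BERT-style model.)

```python
from transformers import AutoConfig, AutoTokenizer

checkpoint = "./my-protein-lm"  # placeholder path to my pretrained LM

config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# For models with learned absolute position embeddings, anything longer than
# max_position_embeddings cannot be encoded, whatever max_length I pass later.
print(config.max_position_embeddings)
print(tokenizer.model_max_length)

# What I'd like to do at fine-tuning time:
enc = tokenizer("EVQLVESGGGLVQPGGS", truncation=True,
                padding="max_length", max_length=300)
print(len(enc["input_ids"]))  # 300 after padding
```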

I'll also ask (I am training on protein sequences): my model was trained on a mixture of individual sequences of A and B, but I want to fine-tune on concatenated sequences AB (hence the longer length). Is this folly or acceptable? Thanks for any insight!

Drew

Hi, drewaight,

Could you please give more details? I am not sure what you mean by “trained on a mixture of individual sequences of A and B”. What is the difference between A and B?

I think it would be better to retrain the model if the max length needs to change.
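The main reason: if the model uses learned absolute position embeddings, the embedding table only has as many rows as the pretraining max length, so longer inputs cannot be encoded at all. You can extend the table by hand before fine-tuning, but the extra rows start out untrained, which is why retraining (or further pretraining) at the longer length is usually safer. A rough sketch of the manual extension, assuming a BERT-style checkpoint (the path and num_labels are placeholders):

```python
import torch
from transformers import BertForSequenceClassification

# Placeholder checkpoint; assumes a BERT-style LM pretrained with learned
# absolute position embeddings covering 150 positions, and 2 labels.
model = BertForSequenceClassification.from_pretrained("./my-protein-lm", num_labels=2)

old_pos = model.bert.embeddings.position_embeddings   # nn.Embedding(150, hidden_size)
new_pos = torch.nn.Embedding(300, old_pos.embedding_dim)

# Keep the 150 trained position vectors; rows 150-299 stay randomly
# initialised and would have to be learned during fine-tuning.
with torch.no_grad():
    new_pos.weight[: old_pos.num_embeddings] = old_pos.weight

model.bert.embeddings.position_embeddings = new_pos
model.config.max_position_embeddings = 300

# Depending on the transformers version, the cached position_ids buffer
# also needs to cover the new length.
model.bert.embeddings.position_ids = torch.arange(300).expand((1, -1))
```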

Absolutely. The model is trained on ~60 million unpaired heavy- and light-chain sequences (~150 characters each) from the OAS. The model learns representations of these protein sequences (heavy and light sequences).

…only the functional unit (antibody… that we have data on) is a heterodimer of one light chain sequence and one heavy chain sequence. Essentially any heavy chain and any light chain can pair.

I was thinking: if I retrain the model with max_length = 300, would it make any sense to fine-tune (on paired data) with light-heavy concatenated sequences? Would the LM even recognize that it's just two concatenated shorter “sentences” of the kind it was trained on, or would it be nonsense because it always expects the padded region from 150-300 to be nothing?
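For what it's worth, the way I had planned to feed the pairs in is as a tokenizer text pair, so the two chains are separated by the special separator token rather than literally glued together. I'm assuming a BERT-style tokenizer here; the sequences are just dummies:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./my-protein-lm")  # placeholder path

heavy = "EVQLVESGGGLVQPGGSLRLSCAAS"  # dummy heavy-chain fragment
light = "DIQMTQSPSSLSASVGDRVTITC"    # dummy light-chain fragment

# Passing the two chains as a text pair yields
# [CLS] heavy [SEP] light [SEP] plus segment ids marking the two chains,
# instead of one undifferentiated concatenated string.
enc = tokenizer(
    heavy,
    light,
    truncation=True,
    padding="max_length",
    max_length=300,
)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"])[:10])
print(enc["token_type_ids"][:10])  # 0 = heavy segment, 1 = light segment
```

My understanding is that the padded tail past the two chains is zeroed out by the attention_mask anyway, so my worry is really about the untrained position slots 150-299 rather than the padding itself.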

Good thing this is the beginners forum. Thanks for your help!

Drew

paper for inspiration