Context window decreased after finetuning?

Hello,
I fine-tuned a model that has a 4k context window, but tokenized the training data with max_length=512 to keep memory usage down.
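Roughly, the tokenization step looked like this (a paraphrased sketch, not my exact script; the checkpoint name is a placeholder):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint name; the real model is a 4k-context base model.
tokenizer = AutoTokenizer.from_pretrained("my-4k-context-model")

def tokenize(batch):
    # Training examples are truncated/padded to 512 tokens to fit in memory,
    # even though the base model supports a 4k context window.
    return tokenizer(
        batch["text"],
        truncation=True,
        padding="max_length",
        max_length=512,
    )
```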
Now the fine-tuned model doesn't generate anything past 512 tokens. Is that expected?