I’ve been exploring LLM training on datasets with variable sequence lengths. When a sequence exceeds the max length and gets split into chunks, how does this affect the model’s ability to learn dependencies across the split boundary, i.e. p(x_{m+1} | x_{1:m}) where position m is the last token of one chunk and m+1 is the first token of the next? What strategies or techniques are effective for ensuring these boundary transitions are properly trained? (A minimal sketch of the kind of splitting I mean is below.)
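
To make the setup concrete, here is a minimal sketch of naive non-overlapping chunking, which is what I'm assuming happens by default (the `chunk` helper and names are mine, purely illustrative, not from any particular library):

```python
def chunk(tokens, max_len):
    """Split a token sequence into non-overlapping chunks of at most max_len."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

tokens = list(range(10))          # stand-in for a tokenized document
for c in chunk(tokens, max_len=4):
    print(c)
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8, 9]
```

In this scheme, the boundary transition 3 -> 4 never appears as a (context, target) pair: token 4 starts a fresh chunk with no preceding context, so (if I understand correctly) p(x_5 | x_{1:4}) at the boundary is never directly trained.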