How do I filter sentence transformer models by context window size? I’m experimenting with sentence transformers for multi-class classification, and my input texts are long. Most of the models I’ve looked at truncate their input at 256 or 512 “word pieces” or fewer, and that limit tends to be buried in the model card, if it’s mentioned at all.
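For context, the only way I’ve found to check so far is to inspect each model one by one: sentence-transformers repos ship a `sentence_bert_config.json` that declares `max_seq_length` (the same value `SentenceTransformer.max_seq_length` reports after loading). A minimal sketch of filtering candidates by that field — the model names and config contents below are made up for illustration; in practice you’d fetch each repo’s `sentence_bert_config.json` (e.g. with `huggingface_hub.hf_hub_download`) instead of hard-coding it:

```python
import json

def max_seq_length(config_json: str):
    """Extract max_seq_length (in word pieces) from a sentence_bert_config.json payload."""
    return json.loads(config_json).get("max_seq_length")

# Illustrative stand-ins for each candidate repo's sentence_bert_config.json.
configs = {
    "model-a": '{"max_seq_length": 256, "do_lower_case": false}',
    "model-b": '{"max_seq_length": 4096, "do_lower_case": false}',
}

# Keep only models whose declared input length is at least ~4096 word pieces.
long_context = [
    name for name, cfg in configs.items()
    if (max_seq_length(cfg) or 0) >= 4096
]
print(long_context)  # ['model-b']
```

This works, but it means downloading or fetching a config per model, which is why a search filter would be so much nicer.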
If I can’t search by context length, can anyone suggest a sentence transformer model with a context window of roughly 4096 tokens (word pieces)?