Model for Long Text Classification

I have economic reports on different countries around the world. Each report is labelled as either associated or not associated with a period of financial distress, so the whole dataset is labelled. There are around 1,300 reports in the train set and 350 in the test set. However, each report is a long text (from 1,500 up to 10,000 words). Moreover, both sets are rather imbalanced with respect to the class I want to predict.

I want to use LLMs to identify whether a given report describes a period of financial crisis. My computing power is quite limited: I only have my laptop. What should my strategy be? Should I use pre-trained embeddings from an LLM and then feed them into a standard classifier?
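
Concretely, I was imagining something along the lines of the sketch below: embed each report with a small pre-trained encoder (chunking the long texts and mean-pooling, since such encoders truncate long inputs) and train a simple classifier on top. This is just a rough sketch assuming sentence-transformers and scikit-learn; the model name, chunk size, and data variables are placeholders, not a fixed choice.

```python
# Sketch of the "pre-trained embeddings + simple classifier" idea.
# Assumes sentence-transformers and scikit-learn; all data below is placeholder.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

def embed_report(report: str, encoder, chunk_words: int = 200) -> np.ndarray:
    """Split a long report into word chunks, encode each chunk, and mean-pool,
    since small encoders truncate inputs after a few hundred tokens."""
    words = report.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    chunk_embeddings = encoder.encode(chunks)   # shape: (n_chunks, dim)
    return chunk_embeddings.mean(axis=0)        # one vector per report

# Small encoder that should run on a CPU-only laptop.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder data -- replace with the real labelled reports.
train_texts = ["The banking sector collapsed and output contracted sharply.",
               "Growth was steady and inflation remained close to target."]
train_labels = [1, 0]
test_texts = ["Several lenders defaulted amid a severe credit crunch."]
test_labels = [1]

X_train = np.vstack([embed_report(t, encoder) for t in train_texts])
X_test = np.vstack([embed_report(t, encoder) for t in test_texts])

# class_weight="balanced" is one simple way to handle the class imbalance.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, train_labels)

print(classification_report(test_labels, clf.predict(X_test)))
```

Is something like this a reasonable strategy given the long documents, the imbalance, and the lack of a GPU, or is there a better approach?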