Need advice for fine-tuning BERT on opinion mining

I am working on opinion mining over tweets. For a specific topic (in my case, "covid vaccination"), I need to predict whether each tweet is in favor, against, or neutral. I planned to fine-tune a BERT model with the Transformers library, but I did not get good results, even after trying several pre-trained models (BERT, DistilBERT, ...) and different classification heads. I also tried re-activating gradients for some of the top transformer blocks, but I reached at best about 60% accuracy on balanced data. My training dataset contains 30,000 tweets.
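For reference, my partial-unfreezing setup looks roughly like the sketch below (`bert-base-uncased` and the number of unfrozen blocks are placeholders; I tried several variants):

```python
from transformers import AutoModelForSequenceClassification

# Placeholder checkpoint; I also tried DistilBERT and others.
MODEL_NAME = "bert-base-uncased"

# 3 labels: in favor / against / neutral
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Freeze the whole model first ...
for param in model.parameters():
    param.requires_grad = False

# ... then re-activate gradients for the last two transformer blocks
for block in model.bert.encoder.layer[-2:]:
    for param in block.parameters():
        param.requires_grad = True

# The classification head is always trained
for param in model.classifier.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```

The model was then trained with the usual cross-entropy fine-tuning loop on the 30,000 tweets.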
Are there any suggestions for getting better performance?