Proofreader using local LLM?

I’m working on a proofreading project using a local, open-source LLM like Llama2. This task should be much simpler than summarization or question-answering, yet I’m struggling to get the accuracy I need. The proofreading performance isn’t up to par: no matter how I prompt, the model adds or changes too much and doesn’t follow my output-formatting instructions. Surprisingly, I also can’t find much useful info online. I even tried the 70B Llama2 model and was disappointed. The OpenAI API works very well, but it’s neither local nor open-source.
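For context, here’s a minimal sketch of roughly what I’m doing, assuming the Llama-2 chat checkpoint loaded via `transformers`; the exact model ID, system prompt wording, and generation settings below are just illustrative, not my full setup:

```python
# Minimal sketch: proofreading with a local Llama-2 chat model via transformers.
# The model ID, prompt wording, and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # also tried the 70B variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

SYSTEM = (
    "You are a proofreader. Fix spelling, grammar, and punctuation only. "
    "Do not rephrase, add, or remove content. "
    "Return only the corrected text, with no commentary."
)

def proofread(text: str) -> str:
    # Llama-2 chat prompt format: system instruction plus the text to correct.
    prompt = f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n\n{text} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=False,  # greedy decoding to keep edits deterministic
    )
    # Decode only the newly generated tokens (skip the prompt).
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(proofread("Their going to the store tomorow, and she dont want to come."))
```

Even with instructions like these, the model often rewrites sentences or wraps the result in extra commentary.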

Am I missing something obvious? Any tips for building a simple proofreader with a local LLM?