Is this the best I can get?

Hi, I’m very new to this space and to working with LLMs. I’m a fairly experienced ChatGPT user, but I’ve only just started running my own models. I just got privateGPT working with Mistral 7B Instruct v0.3 in local mode.

I expected much more from this. I fed it a simple form PDF, and I’m baffled that I can’t even get it to accurately capture all the form fields in JSON format. It either does a great job but stops after one page, or it randomly starts making things up or dropping entire sections of the form. Is this the most I can expect at this point? I fail to see the use case for anything that performs this poorly.


I’m no LLM expert, so I’m just rambling, but I think the original ChatGPT’s parameter count was around 175B?
I think the GPT-4o-class models are rumored to be over 1,000B. Even a 72B local model, let alone a 7B one, is too small to expect the same results as ChatGPT.
Small models assume some kind of application strategy: fine-tuning them for a limited purpose, using them as a component of another program, or having them call other tools. If a small model and a big one fight without a strategy, the big one will win. Of course, efficiency at a given size is improving day by day…
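For example, the “component of another program” idea applied to your form problem might look like the rough Python sketch below: instead of feeding the whole PDF at once (which can overflow a small model’s context and cause the truncation or hallucination you’re seeing), you query the model one page at a time and merge the results. The `extract_fields` function here is a hypothetical stand-in for a real call to your local model (e.g. via privateGPT’s API); everything in it is illustrative.

```python
import json

def extract_fields(page_text: str) -> dict:
    # Hypothetical stand-in for one model call: in a real setup this
    # would send a single page of text plus a JSON-extraction prompt
    # to the local model and parse its reply. Here we just fake a
    # model that returns one field per "key: value" line.
    fields = {}
    for line in page_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def extract_form(pages: list[str]) -> dict:
    """Query the model one page at a time and merge the per-page
    results, so no single prompt exceeds the context window."""
    merged = {}
    for page in pages:
        merged.update(extract_fields(page))
    return merged

# Toy input standing in for text extracted from a two-page form PDF.
pages = ["Name: Alice\nDate: 2024-01-01", "Address: 1 Main St"]
print(json.dumps(extract_form(pages)))
```

The point is the scaffolding, not the stub: a 7B model that drops half a form when given ten pages at once can often handle one page reliably, and the surrounding program does the stitching.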