More or less open source, these LLMs disappoint in production. They are huge models; a typical one has 13 billion parameters, that's 13,000,000,000!
Yet they try to specialize in everything and end up good at nothing.
On the one hand, there are people with fake enthusiasm on YouTube talking about the new open-source LLM (insert random name), which now has even more parameters.
Yet when it comes to putting them into practice, with my benchmark being chatting with a PDF (roughly the setup sketched below), they fall flat: they hallucinate, give answers not grounded in the document, and so on.
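For context, by "chatting with a PDF" I mean the usual retrieval setup: pull the text out of the document, find the passages most relevant to the question, and ask the model to answer from those passages only. The sketch below is a minimal version of that idea; pypdf, sentence-transformers, the MiniLM embedder, the file name, and the `ask_llm` stub are illustrative placeholders, not my exact stack or a recommendation.

```python
# Minimal "chat with a PDF" sketch: retrieve the most relevant chunks
# from the document and ask a local LLM to answer ONLY from them.
# ask_llm is a placeholder for whatever open-source model is being tested.
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer


def load_chunks(pdf_path: str, chunk_chars: int = 1000) -> list[str]:
    """Extract the PDF's text and split it into fixed-size chunks."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]


def top_chunks(question: str, chunks: list[str], embedder, k: int = 3) -> list[str]:
    """Rank chunks by cosine similarity to the question and keep the best k."""
    doc_vecs = embedder.encode(chunks, normalize_embeddings=True)
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


def ask_llm(prompt: str) -> str:
    """Placeholder: swap in the open-source model under test."""
    raise NotImplementedError


if __name__ == "__main__":
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    chunks = load_chunks("report.pdf")  # hypothetical document
    question = "What does the document say about Q3 revenue?"
    context = "\n---\n".join(top_chunks(question, chunks, embedder))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    print(ask_llm(prompt))
```

The retrieval part is not the problem; the failure is in the last step, where the model is handed the relevant passages on a plate and still answers from somewhere else.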
They know a little bit about everything, just never enough to be useful.
And nobody seems to bat an eye, anywhere.
This is dead wrong.
Every approach to optimizing the results falls flat; instead, there is always a new model that is supposed to cure all ailments and ends up doing nothing well.
Who in their right mind is excited about something like this?
Open source, more or less, but next to ChatGPT these models don't shine - at all.