SmolLM or other SLMs: example uses and feedback for getting the most out of them

and there seems to be little online discourse regarding the specifics of what has been tried, tested, and found to work with these base models before one spends time and effort on fine-tuning.

LLMs multiply not only through new releases and training but also through merging, so feedback is not keeping up with the pace at which new models appear.
A few models are produced collaboratively with detailed feedback in closed communities and external forums…

Anyway, I have also noticed that even simple feedback on an LLM, such as how it feels to use, is surprisingly scarce compared to the number of downloads. The Discussions section sits right next to the Like button, so perhaps it presents a high visual hurdle; in practice, it can be used much like a BBS.
A few of us have raised a similar point about HF's weakness in this area in the following request post.
If these areas were improved, effort would be spent more efficiently and we could make faster progress in improving models.
It might also be a way to strengthen ties with outside communities.

Also, it seems that many people run the 0.1B to 8B models offline for their own personal use.
They don't often post feedback on the Internet about the results of the private adjustments they make for themselves.
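For what it's worth, here is a minimal sketch of the kind of offline use I mean, assuming the transformers library and a SmolLM2 checkpoint that has already been downloaded once; the model name and generation settings are just illustrative examples, not a recommendation:

```python
# Minimal sketch: running a small model from the local cache, fully offline.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # any 0.1B-8B checkpoint works similarly

# local_files_only=True uses the cached copy, so no network access is needed
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(model_id, local_files_only=True)

prompt = "Summarize why user feedback on small models is useful:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

People doing this kind of local experimentation are exactly the ones whose impressions rarely make it back to the model pages.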

I’m new to HF and have very little knowledge of LLMs, but several active experts frequent this forum and the HF Discord, so if you want to hear more about the technology, you can contact them directly by sending a mention. (@ + hf_username)

Discord: