Hi everyone.
When giving an LLM a specific system prompt to teach it some expected behavior, are there general rules about which approach is more effective: stating the logic as general rules in plain natural language, or providing as many examples as possible and letting the model form its own generalizations and abstractions from them?
It would be really helpful to have some general guiding concepts here.
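For concreteness, here is a minimal sketch of the two styles I mean (a hypothetical date-formatting task, written as an OpenAI-style chat message list; the task and prompts are just illustrations, not a real setup):

```python
# Style 1: the behavior stated as general rules in plain natural language.
rules_prompt = [
    {"role": "system", "content": (
        "Rewrite any date the user sends into ISO 8601 (YYYY-MM-DD). "
        "If the order of day and month is ambiguous, assume day-first. "
        "If the year is missing, assume the current year."
    )},
    {"role": "user", "content": "3/4/2021"},
]

# Style 2: the same behavior conveyed purely through worked examples,
# leaving the model to infer the rules on its own.
examples_prompt = [
    {"role": "system", "content": "Rewrite dates as in the examples."},
    {"role": "user", "content": "March 4, 2021"},
    {"role": "assistant", "content": "2021-03-04"},
    {"role": "user", "content": "3/4/2021"},
    {"role": "assistant", "content": "2021-04-03"},
    {"role": "user", "content": "12 Jan"},
]
```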
Or does it really depend on:
- specific model
- specific task logic
If it does depend, are there any crucial conditions that could still divide the overall domain into reasonably general regions, e.g.:
- for these kinds of cases it works like this
- for the rest it works like that
- etc.
Thanks