Training in a long prompt

Hi! I’m just getting started, so hopefully this isn’t too daft a question! I’ve constructed a prompt that instructs the LLM on how it should coach users to form goals and find ways of achieving them. It also explains how it should format its outputs as JSON to interact with a front-end JS chatbot.

At the moment it’s included as a prompt template, but it’s pretty big … is there a way of ‘training in’ the prompt so that it doesn’t have to be sent with every interaction?

Hi @markpeace, is the prompt always the same?

Yup - and then the user input is appended to the end.
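
So every call currently looks roughly like the sketch below (names and the prompt text are just illustrative, not my actual template):

```python
# Illustrative sketch of the current per-request assembly: the whole coaching
# prompt is resent on every turn, with the user's latest message appended.
COACHING_PROMPT = """You are a goal-setting coach... (several hundred lines of instructions)
Always reply as JSON, e.g. {"reply": "...", "goal_stage": "...", "suggested_actions": [...]}"""

def build_prompt(user_message: str) -> str:
    # The full instruction block plus the appended user input is what gets
    # sent to the model on each interaction.
    return COACHING_PROMPT + "\n\nUser: " + user_message
```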

@markpeace Then yeah, if you fine-tune your model on the user inputs and the desired outputs, you can probably omit the prompt and the model will learn it implicitly. Or you could train with a simplified/condensed version of the prompt instead of the long one.
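
Very roughly, a fine-tuning set for that could look like the sketch below: chat-style JSONL records pairing a raw user message with the JSON reply your front-end expects, with the long coaching prompt left out (or replaced by a short condensed version). The field names and the coaching schema here are placeholders, so adjust them to whatever format your training tooling actually expects:

```python
import json

# Hypothetical training records: user input in, desired JSON reply out,
# no long system prompt attached.
examples = [
    {
        "messages": [
            {"role": "user",
             "content": "I want to get fitter but I never stick to anything."},
            {"role": "assistant",
             "content": json.dumps({
                 "reply": "What would 'fitter' look like for you in three months?",
                 "goal_stage": "exploring",
                 "suggested_actions": [],
             })},
        ]
    },
    # ...many more pairs covering all the behaviours the long prompt describes
]

# Write one JSON record per line, the usual layout for fine-tuning data.
with open("coaching_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The more varied the examples, the more of the prompt’s behaviour the model can pick up implicitly; with too few, you may still need the condensed prompt at inference time.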