What exactly happens during LLM fine-tuning with AutoTrain?

Can you point me to documentation or describe precisely what happens on the server side when using AutoTrain for LLM fine-tuning? I'm specifically interested in the "generic" option: does the server try to split the "text" field into its own best guess of "prompt"/"response", or does it treat it as a stream of tokens and simply train the model to predict the next token? A pointer to the relevant docs would be helpful, but the current AutoTrain documentation is very sparse on this.
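For context, here is a minimal sketch of what the second interpretation (plain causal-LM / next-token training over the whole "text" field, with no prompt/response split) typically looks like in the `transformers` ecosystem. This is only an illustration of the question, not confirmed AutoTrain behavior; the model name, file path, and hyperparameters are placeholders:

```python
# Hypothetical sketch: plain causal-LM fine-tuning on a raw "text" column,
# i.e. next-token prediction over the whole field, no prompt/response split.
# NOT confirmed AutoTrain server-side behavior -- just what the "stream of
# tokens" interpretation would look like.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in; AutoTrain would use whichever base model you pick
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical training file with one example per line in a "text" column.
dataset = load_dataset("text", data_files="train.txt")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False makes the collator copy input_ids into labels, so the loss is
# standard next-token prediction over every token in "text".
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

If AutoTrain's "generic" option instead split the field into prompt/response, one would expect the prompt tokens to be masked out of the loss (labels set to -100), which is exactly the distinction I'm asking about.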

Many thanks


I would really appreciate it if anybody could help.
Thanks in advance