For context, I am currently working in IT maintenance on a large accounting application for a bank.
After using different LLMs and discovering fine-tuning, I wondered: is it possible to fine-tune a conversational model on data from the application's technical documentation?
If I am not mistaken, this could significantly speed up the maintenance team's work: we could simply ask a model questions about the application instead of digging through the documentation.
Will fine-tuning allow the model to retain the “knowledge” of the application?
The application contains many obscure variable names. Will the Llama tokenizer be suitable?
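As a quick sanity check, one could compare how a Llama-style tokenizer splits an ordinary English phrase versus one of our identifiers. A sketch using Hugging Face transformers (the variable name below is made up, and the tokenizer repo is an ungated test copy of the Llama tokenizer, not the real model's):

```python
from transformers import AutoTokenizer

# Ungated copy of the Llama tokenizer used in the transformers test suite;
# swap in your actual model's tokenizer if you have access to it.
tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

common = tok.tokenize("account balance")   # everyday English words
obscure = tok.tokenize("CPTBLX_MVT_DTE")   # hypothetical obscure variable name

print(len(common), common)
print(len(obscure), obscure)
# Obscure identifiers typically get split into many sub-word tokens.
# The tokenizer will not fail on them, but each name costs more tokens
# and carries no meaning from pretraining.
```

So the tokenizer should be "suitable" in the sense that nothing breaks, but the model still has to learn what each identifier means from the documentation itself.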
Or would we get better results by simply passing the information in the LLM's context window, or via file uploads as GPT-4 now allows?
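For comparison, the in-context approach requires no training at all: retrieve the relevant documentation passages and prepend them to the question. A minimal keyword-overlap sketch (the doc snippets, identifier names, and scoring are made up; a real setup would chunk the actual documentation and likely use embedding-based retrieval):

```python
# Toy documentation store; in practice these would be chunks of the real docs.
docs = [
    "CPTBLX_MVT_DTE holds the accounting movement date in YYYYMMDD format.",
    "The batch job BATCH_EOD closes the accounting day and posts journals.",
]

def retrieve(question: str, snippets: list[str], k: int = 1) -> list[str]:
    """Rank snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        snippets,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend the best-matching snippet(s) to the user's question."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What does CPTBLX_MVT_DTE contain?")
print(prompt)  # this prompt, not a fine-tuned model, carries the app knowledge
```

The appeal of this route is that updating the documentation immediately updates the answers, whereas a fine-tuned model would need retraining.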