Giving a personality to a bot using an LLM

I am working on a bot whose personality I can adjust.

I would like to teach my bot to respond with its own name and age, both of which are supplied by me. Many LLMs are actually “instruct LLMs”, where we ask for some information and the LLM provides it, so they are not exactly suited to self-aware conversations.

What is the correct strategy so that the bot always responds correctly? I am aware that it depends on the model (I have experimented with GODEL, for example) and on the prompts that are used. My question is more whether someone can share their experience and an overall strategy.

In general there are two possibilities for the implementation:

  • fine-tuning
  • in-context learning / few-shot learning (FSL)

I tried providing ‘knowledge’, which is the FSL case, but the results are not very stable (a sketch of my setup follows the list below):

  • the bot answers with a different name from the one I supplied
  • the bot answers with both the name I supplied and another one: “I am Peter. My name is Mary”
  • the same happens with the age: sometimes it is correct, sometimes it is not
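For reference, here is a minimal sketch of the kind of in-context setup I mean, using the [CONTEXT]/[KNOWLEDGE] prompt format from the GODEL model card. The checkpoint name, the persona facts (Peter, 30) and the instruction wording are only illustrative placeholders, not my exact code:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "microsoft/GODEL-v1_1-base-seq2seq"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Persona facts the bot should always stick to (illustrative values).
PERSONA = "The bot's name is Peter. Peter is 30 years old."

def respond(dialog_history):
    # Instruction + dialog context + persona as grounding knowledge,
    # following the format shown in the GODEL model card.
    instruction = (
        "Instruction: given a dialog context, respond as the persona "
        "described in the knowledge."
    )
    dialog = " EOS ".join(dialog_history)
    query = f"{instruction} [CONTEXT] {dialog} [KNOWLEDGE] {PERSONA}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(respond(["Hi there!", "Hello, how can I help?",
               "What is your name and how old are you?"]))
```

Even with the persona passed as knowledge like this, the failure modes listed above still occur.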