QA model with human-like answers

Hello, I’m looking for a model that not only answers my questions but also phrases the answer the way a human would, rather than just extracting a span of text from the context.

Example:

Context:
I have a red jumpsuit and it is available in sizes S, M and L
Question:
What color is the jumpsuit?
Expected answer:
The jumpsuit is red.

Most QA models would simply answer “red”.

You could maybe use a GPT-2 model and prime it with that info. I’d give it a few examples like…

prime_text = "Context: I have a red jumpsuit and it is available in sizes S, M and L. Question: What color is the jumpsuit? Answer: The jumpsuit is red. Context: I have a black jumpsuit and it is available in sizes S, M and L. Question: What color is the jumpsuit? Answer: The jumpsuit is black. Context: I have a blue jumpsuit and it is available in sizes S, M and L. Question: What color is the jumpsuit? Answer: The jumpsuit is blue."
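As a minimal sketch, the priming string can also be assembled from a few (context, question, answer) triples instead of being typed by hand. The helper name `build_prime_text` and the example data are just illustrative:

```python
# Sketch: build the few-shot priming string from example triples.
# The function name and example data are illustrative, not from any library.

def build_prime_text(examples):
    """Join (context, question, answer) triples into one priming string."""
    parts = []
    for context, question, answer in examples:
        parts.append(f"Context: {context} Question: {question} Answer: {answer}")
    return " ".join(parts)

examples = [
    ("I have a red jumpsuit and it is available in sizes S, M and L.",
     "What color is the jumpsuit?", "The jumpsuit is red."),
    ("I have a black jumpsuit and it is available in sizes S, M and L.",
     "What color is the jumpsuit?", "The jumpsuit is black."),
    ("I have a blue jumpsuit and it is available in sizes S, M and L.",
     "What color is the jumpsuit?", "The jumpsuit is blue."),
]
prime_text = build_prime_text(examples)
```

This keeps the examples easy to edit or extend without touching one very long string literal.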

After that you can form your prompt however you want. A good example would be as follows.

prompt = "Context: I have a yellow jumpsuit and it is available in sizes S, M and L. Question: What sizes are available for your jumpsuit?"

Next add the prime_text and prompt together like this:

final_prompt = prime_text + prompt + " Answer:"
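One way this generation step could look, assuming the Hugging Face transformers library is installed (the model name "gpt2" and the generation settings are just examples, and the wrapper function name is my own):

```python
# Sketch: run the primed prompt through a GPT-2 text-generation pipeline.
# Assumes `pip install transformers torch`; "gpt2" is only an example model.

def generate_answer_text(final_prompt, max_new_tokens=30):
    """Return the raw generated text (prompt + continuation) for final_prompt."""
    from transformers import pipeline  # lazy import: heavy optional dependency
    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        final_prompt,
        max_new_tokens=max_new_tokens,  # only generate a short answer
        do_sample=False,                # greedy decoding for repeatable output
        pad_token_id=50256,             # GPT-2's EOS id, silences a warning
    )
    # The pipeline returns the prompt plus the continuation in one string,
    # which is why the post strips final_prompt back out afterwards.
    return result[0]["generated_text"]
```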

Then pass your final prompt to your model. Before you print or send the generated text, you can process it like this:

string_with_unwanted_characters = result[0]["generated_text"]
# step 1: remove the prompt itself; what remains is the model's continuation
step1 = string_with_unwanted_characters.replace(final_prompt, "")
# step 2: GPT-2 tends to keep generating more Context/Question/Answer
# examples, so strip those markers out
step2 = step1.replace("Context:", "")
step2 = step2.replace("Question:", "")
step2 = step2.replace("Answer:", "")
# step 3: keep only the first sentence
step3 = step2.split(".")[0] + "."
After that you can remove other stuff a similar way if needed such as…
step3 = step3.replace("\n", "")
step3 = step3.replace("\t", "")
step3 = step3.replace("_", "")

Finally output step3.
print(step3)
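The clean-up steps above can also be wrapped into a single function, which makes them easy to test against a fake generated string without running a model. The function name is just a suggestion; the logic mirrors the steps in the post:

```python
# Sketch: consolidate the post-processing steps into one helper.
# The function name is illustrative; the logic follows the steps above.

def clean_generated_answer(generated_text, final_prompt):
    """Strip the prompt and prompt markers, then keep the first sentence."""
    # Remove the prompt itself; what's left is the model's continuation.
    answer = generated_text.replace(final_prompt, "")
    # GPT-2 often keeps inventing more Context/Question/Answer examples,
    # so delete those markers before trimming.
    for marker in ("Context:", "Question:", "Answer:"):
        answer = answer.replace(marker, "")
    # Keep only the first sentence.
    answer = answer.split(".")[0] + "."
    # Drop stray whitespace characters and underscores.
    for junk in ("\n", "\t", "_"):
        answer = answer.replace(junk, "")
    return answer.strip()

# Fake model output for demonstration (no model call needed):
final_prompt = ("Context: I have a yellow jumpsuit and it is available in "
                "sizes S, M and L. Question: What sizes are available for "
                "your jumpsuit? Answer:")
fake_output = (final_prompt + " The jumpsuit is available in sizes S, M and L."
               " Context: I have")
print(clean_generated_answer(fake_output, final_prompt))
```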

I don’t know if this is a good method or not as I’m very new to all this, but it seems to work in some situations. You could also fine-tune a model to do even better for your use case.