When an LLM gives a wrong answer, is it more likely to give a wrong answer on subsequent unrelated questions?

Imagine you ask an LLM the same question twice. Because replies are sampled with some randomness, it gives you a correct answer one time and a wrong answer the other.

If you then ask an unrelated question in the same thread, isn’t it more likely to give a wrong answer in the thread where it already answered the first question incorrectly? After all, LLMs largely predict text that is consistent with the preceding context, and someone who gives one wrong answer is more likely to give another. How does this compare to the thread where the first answer was correct?

If that is the case, would telling the LLM that it made a mistake make it even more likely to give another wrong answer to an unrelated question, compared to not acknowledging the error at all? Pointing out the mistake might “validate” the idea that the role the LLM is “playing” is that of someone who gives bad answers.
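One way to make this concrete is to run it as a small experiment: have the model answer a first question, bucket each conversation by whether that first answer happened to be right or wrong, then ask the same unrelated second question in every thread and compare accuracy across the two buckets. Below is a minimal sketch, assuming the `openai` Python package, an `OPENAI_API_KEY` in the environment, and a placeholder model name, questions, and graders (none of these come from the post; swap in whatever you actually want to test):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # assumption: use whichever model you want to test

def ask(messages):
    """Send the running thread to the chat completions API and
    return the assistant's reply as plain text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

def trial(first_q, grade_first, second_q, grade_second):
    """One conversation: ask first_q, record whether the reply was correct,
    then ask the unrelated second_q in the same thread and grade it.
    Returns (first_was_correct, second_was_correct)."""
    messages = [{"role": "user", "content": first_q}]
    first_reply = ask(messages)
    messages.append({"role": "assistant", "content": first_reply})
    messages.append({"role": "user", "content": second_q})
    second_reply = ask(messages)
    return grade_first(first_reply), grade_second(second_reply)

def compare(first_q, grade_first, second_q, grade_second, n=100):
    """Bucket trials by whether the first answer happened to be right or
    wrong, then compare accuracy on the second question across buckets."""
    buckets = {True: [], False: []}
    for _ in range(n):
        first_ok, second_ok = trial(first_q, grade_first, second_q, grade_second)
        buckets[first_ok].append(second_ok)
    for first_ok, outcomes in buckets.items():
        if outcomes:
            acc = sum(outcomes) / len(outcomes)
            print(f"first answer {'correct' if first_ok else 'wrong'}: "
                  f"{acc:.0%} on second question ({len(outcomes)} trials)")

# Example with placeholder questions and naive substring graders:
compare(
    first_q="What is 17 * 24?",
    grade_first=lambda a: "408" in a,
    second_q="Which planet is closest to the Sun?",
    grade_second=lambda a: "mercury" in a.lower(),
)
```

In practice you would pick a first question the model gets wrong a reasonable fraction of the time, so both buckets actually fill up. To test the second part of the question, you could add a third condition that appends a user message such as “that answer was wrong” after the first reply, before asking the unrelated second question, and compare its accuracy against the other two buckets.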


This is a specific and interesting question, but in the end I think it depends on the design and performance of the LLM.
It depends on the model’s ability to track context and on how much information it is given at once. In general, though, I think most current consumer LLMs would behave properly if a human adjusted the question content each time and posed it as a fresh question.
