Different responses from different UIs

I am new to LLMs and am just getting started. I have spent a lot of time deciding which type of LLM will work for my use case, but I never considered that the user interface might have a more dramatic effect on my results than the model itself. Case in point: I ran two UIs side by side, Msty and OpenWebUI, and asked both the same question ("I want you to act as an AI assisted doctor. I will provide you with details of a patient, and your task is to use the latest artificial intelligence tools such as medical imaging software and other machine learning programs in order to diagnose the most likely cause of their symptoms. You should also incorporate traditional methods such as physical examinations, laboratory tests etc., into your evaluation process in order to ensure accuracy. Patient is complaining of a sharp pain to the right of his belly button"). The results were not even close. Msty gave me an umbilicus on a 50-year-old Caucasian male, whereas OpenWebUI gave me a detailed six-step diagnostic process, which is what I expected.

Realizing that I am new to all this and that I may have done something non-obvious to cause this, I have to ask: is it normal for UIs to influence the response in this manner?


Such things do happen. Strictly speaking, the differences come from the layer between the UI and the LLM: pre-processing, default settings, the client libraries used, and so on. Even with the exact same model, the output can differ.
In some cases, a bug simply keeps the model from performing as it should.
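To make the point above concrete, here is a minimal sketch of how two frontends talking to the same model can send it very different requests. The payload shape loosely follows an Ollama-style chat API, and the model name, prompts, and per-frontend defaults are all hypothetical, chosen only to illustrate the idea:

```python
import json

MODEL = "llama3"  # the same model configured in both frontends

def build_request(prompt, system=None, options=None):
    """Assemble a chat payload (shape is illustrative, Ollama-style)."""
    messages = []
    if system:
        # Many UIs silently prepend their own system prompt here.
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    payload = {"model": MODEL, "messages": messages}
    if options:
        # Sampling settings the UI chose for you, e.g. temperature.
        payload["options"] = options
    return payload

question = "Patient has sharp pain to the right of the belly button."

# Hypothetical defaults for two different frontends:
ui_a = build_request(question,
                     options={"temperature": 0.2, "num_ctx": 2048})
ui_b = build_request(question,
                     system="You are a careful clinical assistant. "
                            "Answer step by step.",
                     options={"temperature": 0.8, "num_ctx": 8192})

# Same model name, but the model sees two different requests:
print(json.dumps(ui_a, indent=2))
print(json.dumps(ui_b, indent=2))
```

Even though `MODEL` is identical, the second frontend injects a system prompt and uses different sampling settings, which is enough to produce very different answers.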


Thanks for the reply. In addition to paying attention to the model I select, I now know I also need to pay attention to the UI.


Sounds to me like there may also be system prompts in play in one UI or the other, or both. These are instructions that get included in every conversation.
