I am new to LLMs and am just getting started. I have been spending a lot of time deciding what type of LLM will work for my use case, but I never considered that the user interface would have a more dramatic effect on my results than the model. Case in point: I ran two UIs side by side, Msty and OpenWebUI, and asked each the same question:

"I want you to act as an AI-assisted doctor. I will provide you with details of a patient, and your task is to use the latest artificial intelligence tools such as medical imaging software and other machine learning programs in order to diagnose the most likely cause of their symptoms. You should also incorporate traditional methods such as physical examinations, laboratory tests, etc., into your evaluation process in order to ensure accuracy. The patient is complaining of a sharp pain to the right of his belly button."

The results were not even close. Msty provided me with an umbilicus on a 50-year-old Caucasian male, whereas OpenWebUI gave me a detailed six-step diagnostic process, which is what I expected.
Realizing that I am new to all this, and that there may be something non-obvious I have done to cause this, I have to ask: is it normal for UIs to influence the response in this manner?