mistralai/Mistral-7B-v0.1 temperature

I use the HF Inference endpoint for the mistralai/Mistral-7B-v0.1 model. I set the temperature to 1, run the same question, and get the same answer every time. Shouldn't it be the opposite? (i.e., the same output for temp=0 and the most randomness for temp=1)


No, that behavior is expected; it is the same for llama.cpp as well as Transformers. Temperature only rescales the logits when sampling is enabled, and generation typically defaults to greedy decoding, so identical outputs even at temperature=1 are normal. If you want variability, set do_sample=True; if you want it to be even more strictly deterministic, set do_sample=False.
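For illustration, here is a minimal sketch with the Transformers library, assuming you run the model locally (the prompt and generation lengths are just placeholders), showing that temperature only changes anything once do_sample=True:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)

# Greedy decoding: fully deterministic; temperature has no effect here.
greedy = model.generate(**inputs, max_new_tokens=30, do_sample=False)

# Sampling enabled: temperature=1.0 now actually introduces randomness,
# so repeated calls can produce different continuations.
sampled = model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=1.0)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```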

The temperature parameter influences the randomness of a language model's outputs, but only while sampling is active. Setting it to 1 should generally allow for variability, so if you're still getting the same answer, the likely causes are sampling being disabled by default, endpoint configuration overriding your parameters, or the endpoint returning cached responses for identical requests. Double-checking the API settings and testing with do_sample=True and different temperature values could be helpful.
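If you are calling the hosted endpoint rather than running the model locally, a sketch along these lines (using huggingface_hub's InferenceClient; the prompt and parameter values are illustrative) passes the sampling flags explicitly. The hosted Inference API can also cache identical requests, which the x-use-cache header disables:

```python
from huggingface_hub import InferenceClient

# "x-use-cache: false" asks the hosted API not to serve a cached
# response when the same request is repeated.
client = InferenceClient(
    model="mistralai/Mistral-7B-v0.1",
    headers={"x-use-cache": "false"},
)

out = client.text_generation(
    "Explain sampling temperature in one sentence.",  # illustrative prompt
    max_new_tokens=50,
    do_sample=True,    # without this, temperature is effectively ignored
    temperature=1.0,
)
print(out)
```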
