How to Overcome the Influence of the Seed and Enhance the Role of Text Prompts

I fine-tuned a text2img model with LoRA, based on Stable Diffusion v1.5. The generated results look very good, but they can't be controlled: they seem to depend mostly on the seed. Changing the seed changes the image, but if I keep the seed fixed and change only the text prompt, the result doesn't change, or changes only very slightly. How can I solve this problem?
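To clarify why the seed can dominate: with a fixed seed, the initial latent noise is identical for every prompt, and the prompt only steers denoising through classifier-free guidance, whose strength is set by the guidance scale. Below is a minimal conceptual sketch of this (function names are illustrative, not a real diffusers API; the CFG formula is the standard one):

```python
import numpy as np

def initial_latents(seed, shape=(4, 64, 64)):
    # With a fixed seed, the starting noise is identical no matter what
    # prompt is used -- this is why the seed dominates the composition.
    return np.random.default_rng(seed).standard_normal(shape)

def cfg_combine(noise_uncond, noise_text, guidance_scale):
    # Classifier-free guidance: the prompt's contribution is the difference
    # between conditional and unconditional predictions, scaled by
    # guidance_scale. A larger scale amplifies the prompt's influence.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# Same seed -> identical starting latents, regardless of the prompt.
a = initial_latents(42)
b = initial_latents(42)
print(np.array_equal(a, b))  # True

# With guidance_scale = 0, the prompt has no effect at all.
u = np.zeros(4)
t = np.ones(4)
print(cfg_combine(u, t, 0.0))  # [0. 0. 0. 0.]
print(cfg_combine(u, t, 7.5))  # [7.5 7.5 7.5 7.5]
```

This suggests two knobs worth checking: raising the guidance scale so the prompt difference is weighted more heavily, and lowering the LoRA weight (e.g. via `cross_attention_kwargs={"scale": ...}` in diffusers) if an overfitted LoRA is washing out prompt conditioning.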