How to merge multiple LoRAs back into the base model?

I finetuned two LoRAs. The first was tuned on raw text so the model learns the domain knowledge, and the second was tuned on many Question/Answer pairs. Each one gives somewhat better results than the base model after it is merged back individually. But if I merge both of them back into the base model together, the results don't improve. What should I do? What's the best practice for merging multiple LoRAs?
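For reference, here is my understanding of the merge math as a minimal NumPy sketch (the names `W`, `A1`, `B1`, etc. are just placeholders, not from any specific library): merging both LoRAs simply sums their low-rank deltas onto the same base weights, so the two updates can interfere with each other even though each one helps on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # hidden size, LoRA rank
W = rng.normal(size=(d, d))      # base weight matrix

# Two independently trained LoRA adapters (down- and up-projections)
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
alpha = 4                        # LoRA alpha; effective scaling = alpha / r
s = alpha / r

# Merging a single adapter: W' = W + s * B @ A
W1 = W + s * (B1 @ A1)
W2 = W + s * (B2 @ A2)

# Merging both adapters stacks both deltas onto the same weight:
W12 = W + s * (B1 @ A1) + s * (B2 @ A2)

# The deltas add linearly, so merging them in sequence gives the
# same result regardless of order:
assert np.allclose(W12, W1 + s * (B2 @ A2))
assert np.allclose(W12, W2 + s * (B1 @ A1))
```

So the combined merge is a plain sum of the two weight updates; nothing in the merge itself resolves conflicts between them, which may be why the combination underperforms each adapter alone.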