How do I remove bias and censorship when fine-tuning LLMs?

I am fine-tuning Llama 2 but have found that it is heavily censored and refuses far more than it should. I am building a sexual health bot, which is admittedly a sensitive topic, and it also needs to work in a local language (Nepali).
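For context, my fine-tuning setup is roughly along these lines (a minimal LoRA sketch using Hugging Face transformers/peft; the model name, dataset file, and hyperparameters are placeholders, not my exact configuration):

```python
# Minimal LoRA fine-tuning sketch for Llama 2 (placeholder data and paths).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # gated model; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

# Train low-rank adapters on the attention projections instead of full weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Hypothetical JSONL file with a "text" field of Nepali Q&A examples.
dataset = load_dataset("json", data_files="sexual_health_nepali.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-sft", per_device_train_batch_size=2,
        gradient_accumulation_steps=8, num_train_epochs=3,
        learning_rate=2e-4, fp16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```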

What are the ways of:

  • removing bias
  • removing censorship (reducing over-refusal)
  • improving response quality

I am using knowledge embeddings to build better prompts, but that has not been very effective.
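By "knowledge embeddings" I mean roughly this kind of retrieval-augmented prompting (a minimal sketch with sentence-transformers; the model name and documents below are placeholders, not my actual corpus):

```python
# Minimal embedding-based retrieval sketch for prompt building (placeholders).
from sentence_transformers import SentenceTransformer, util

# Multilingual model chosen as a placeholder since the corpus is Nepali.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical knowledge snippets; in practice these come from my own corpus.
docs = [
    "Condoms reduce the risk of sexually transmitted infections.",
    "Emergency contraception is most effective within 72 hours.",
]
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most similar snippets and prepend them to the question."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_embeddings, top_k=top_k)[0]
    context = "\n".join(docs[h["corpus_id"]] for h in hits)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How effective are condoms against STIs?"))
```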