Hi Everyone!
I’m starting a small research project on how LLMs can generate biased answers based on things like race, sex, or age of the person asking the questions. It’s based on this paper that talks about how even the name of the user can affect the likelihood of getting certain answers over another, even if asking the same question.
Some questions I’m interested in are things like:
- Average Salaries for jobs that are male-dominated, or race-dominated
- Estimation on how much a person’s bail/fine would be
- Rating how well an essay is written
- Credit Scores
- Numbers of households affected by redlining or gentrification
I would be interested to hear if anyone else has done this, or if anyone would like to contribute to the discussion!