Why does Classifier-Free Guidance (CFG) add guidance toward a negative-prompt-conditional distribution instead of an unconditional distribution?

Note that the negative prompt is used only when guidance_scale > 1. For example, with a guidance scale of 6, the combined prediction is:
noise_pred = 6 * noise_pred_text - 5 * noise_pred_neg,
where noise_pred_neg is the model's noise prediction conditioned on the negative prompt (not the prompt embedding itself).

So, essentially, the update steers the sample away from the negative prompt and toward the positive one.
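The combination above can be sketched in a few lines. This is a minimal illustration (the function name `cfg_combine` is hypothetical, not a library API); it shows that the standard CFG update, written as the negative-prompt prediction plus a scaled difference, is algebraically the same as the 6/-5 weighting for a guidance scale of 6:

```python
import numpy as np

def cfg_combine(noise_pred_text, noise_pred_neg, guidance_scale):
    # Classifier-free guidance: start from the negative-prompt (or
    # unconditional) prediction and move along the direction pointing
    # toward the text-conditional prediction.
    return noise_pred_neg + guidance_scale * (noise_pred_text - noise_pred_neg)

# Toy noise predictions, standing in for the U-Net outputs.
noise_pred_text = np.array([1.0, 2.0])
noise_pred_neg = np.array([0.5, 0.5])

out = cfg_combine(noise_pred_text, noise_pred_neg, guidance_scale=6.0)

# For scale 6 this equals 6 * noise_pred_text - 5 * noise_pred_neg.
assert np.allclose(out, 6.0 * noise_pred_text - 5.0 * noise_pred_neg)
```

With guidance_scale = 1 the negative prediction cancels out entirely, which is why the negative prompt has no effect unless guidance_scale > 1.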