Using Quantization with fp16/bf16 Trainer flags

In HF’s Colab notebook for QLoRA, the training arguments set fp16=True even though the quantization config uses bf16 as the compute dtype (bnb_4bit_compute_dtype=torch.bfloat16).

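If I understand the setup correctly, it is roughly the following (the model name and output_dir here are placeholders I picked for illustration, not the notebook’s exact values):

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)

# 4-bit quantization config: weights are stored as NF4,
# but matmuls are computed in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder; not necessarily the notebook's model
    quantization_config=bnb_config,
)

# Trainer arguments as in the notebook: fp16=True,
# despite the bf16 compute dtype above
training_args = TrainingArguments(
    output_dir="outputs",  # placeholder
    fp16=True,
)
```
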
I have two questions here:

  1. What is the purpose of the fp16 flag in the training arguments? I believe this flag enables mixed-precision training, but shouldn’t that be irrelevant when training with QLoRA?
  2. Shouldn’t the fp16 flag be False and the bf16 flag be True, to match the compute dtype? (See the sketch after this list for what I mean.)
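
For concreteness, this is the variant I have in mind for question 2 (same placeholder output_dir as above):

```python
# What I would have expected instead, matching the bf16 compute dtype:
training_args = TrainingArguments(
    output_dir="outputs",  # same placeholder as above
    fp16=False,  # explicit here for clarity; False is already the default
    bf16=True,   # matches bnb_4bit_compute_dtype=torch.bfloat16
)
```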