Do I Need to Use zero_to_fp32.py After Training Llama with run_clm.py?

Hey community,

I’ve been training a Llama model with run_clm.py and ran into something I’m unsure about. After training finished, I found a zero_to_fp32.py script sitting in the checkpoint folder. My question: do I actually need to run this script on my completed model, or can I just go ahead and use the model.safetensors file, which seems to hold the bf16 weights?
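For context, here’s how I checked the dtype — a minimal sketch using the safetensors library, assuming the file is the default model.safetensors in the output directory:

```python
from safetensors import safe_open

# Inspect the dtype of the first few tensors to see whether they're bf16.
with safe_open("model.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        print(f"{name}: {f.get_tensor(name).dtype}")  # e.g. torch.bfloat16
```

For what it’s worth, my understanding is that zero_to_fp32.py is DeepSpeed’s utility for consolidating ZeRO-partitioned checkpoint shards into a single fp32 state dict (the docs show it being run as `python zero_to_fp32.py . pytorch_model.bin` from inside the checkpoint folder), so I’m wondering whether it’s redundant when a consolidated model.safetensors has already been written out.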

Appreciate any insights you guys might have!