Hi guys, I want to further reduce the memory consumption of my model. Is there any way to run int8 training while using DeepSpeed? Currently, it's running at fp16.
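For context, here is a minimal sketch of the fp16 setup I'm describing, so it's clear where I'm starting from (the batch size, ZeRO stage, and model are placeholders, not my actual configuration):

```python
# Sketch of the current fp16 DeepSpeed setup (values are placeholders).
import torch
import deepspeed

ds_config = {
    "train_batch_size": 8,              # placeholder
    "fp16": {"enabled": True},          # current mixed-precision setting
    "zero_optimization": {"stage": 2},  # assumed ZeRO stage
}

model = torch.nn.Linear(1024, 1024)     # stand-in for the actual model

# deepspeed.initialize wraps the model/optimizer according to ds_config
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

I'd like to know whether this can be pushed to int8 training, or whether there's another DeepSpeed-compatible way to cut memory further.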