Is int8 quantized training possible while using DeepSpeed?

Hi guys, I want to further reduce the memory consumption of my model. Is there any way to run int8 training while using DeepSpeed? Currently, it's running at fp16.
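
For context, my current setup looks roughly like the sketch below (the model, batch size, and optimizer settings are just placeholders, not my real values); the `fp16` block is the standard DeepSpeed config option I'm using for mixed precision:

```python
import torch
import deepspeed

# Placeholder model; the real one is much larger, this is just for illustration.
model = torch.nn.Linear(1024, 1024)

# Standard DeepSpeed config with fp16 mixed precision enabled,
# which is how training currently runs.
ds_config = {
    "train_batch_size": 8,          # placeholder value
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler).
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Is there an equivalent config option or recommended approach to drop this down to int8?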