How to fit Versatile Diffusion into Colab RAM?

Loading the pipeline still runs out of memory even when I pass `torch_dtype=torch.float16`, so half precision alone doesn't help.
Any suggestions?