Distributed training on different GPUs

Is there any way to do distributed training (e.g. DDP) across different GPU models (e.g. a 1080 together with a 4090) efficiently, i.e. without every step being bottlenecked by the weakest GPU? I have several older GPUs of different models that I'd like to put to use.
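
To make the question concrete, here's a rough sketch of the kind of thing I'm imagining: a plain single-node DDP loop where each rank gets a batch size roughly proportional to its GPU's speed, so the faster card does more work per step. The per-rank batch sizes, the 2-GPU setup, and the toy model/dataset are all placeholders, and I'm not sure this is even the right approach:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # Launched with: torchrun --nproc_per_node=<num_gpus> this_script.py
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder throughput ratios: give the fast card (rank 0) a bigger
    # per-step batch than the slow card (rank 1). These numbers are made up.
    per_rank_batch = {0: 256, 1: 64}.get(dist.get_rank(), 64)

    # Toy dataset; DistributedSampler still splits samples evenly per epoch,
    # so ranks with smaller batches end up running more iterations.
    dataset = TensorDataset(torch.randn(10_000, 32), torch.randn(10_000, 1))
    loader = DataLoader(dataset, batch_size=per_rank_batch,
                        sampler=DistributedSampler(dataset))

    model = DDP(torch.nn.Linear(32, 1).cuda(), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    # join() lets DDP tolerate ranks finishing different numbers of steps.
    with model.join():
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # gradients are averaged across ranks each step
            opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Is something like this (uneven per-rank batch sizes) the usual way to handle heterogeneous GPUs with DDP, or is there a better-supported mechanism for it?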