What hardware do you use to train your models? Cloud or local?

This question surely has a wide range of answers depending on the user. Nevertheless, I would love to hear from you.

What do you use to run your ML experiments? Do you use a cloud environment (Colab, Kaggle, AWS, Azure, something else?) or have you bought a nice machine with GPUs?
I know that buying a good machine with GPUs can get quite expensive and would only pay off if one really uses the GPUs enough.
Is there a configuration you would recommend that is more affordable and worthwhile for occasional usage? Did you buy all the parts and assemble it yourself?

Or, if you use the cloud, what kind of resources do you use? I feel that Colab+ and Kaggle are still the best bet in terms of GPU quality and price. AWS was good back when you could get GPUs on spot instances; now it seems impossible to get a GPU spot instance.
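
For context on what I mean by a GPU spot instance: a minimal sketch of how one might request one with boto3 is below. The instance type, AMI ID, and max price are placeholders I picked for illustration (not a recommendation), and you would need to adjust them for your region and budget.

```python
import boto3

# Sketch: ask EC2 for one GPU instance at the Spot price.
# ImageId, InstanceType, and MaxPrice are placeholder values.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: pick a Deep Learning AMI for your region
    InstanceType="g4dn.xlarge",       # a relatively cheap NVIDIA GPU instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.20",            # USD/hour cap; omit to default to the on-demand price
            "SpotInstanceType": "one-time",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
# When capacity is tight, this call fails with an InsufficientInstanceCapacity
# error, which matches the "impossible to get a GPU spot instance" experience.
```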

Thank you!