Please advise on an on-premise spec for LLM fine-tuning

I would like to build an on-premise setup so I can fine-tune an LLM locally.
I just want to know the current trends or any guidelines for putting the environment together.
Which configuration is preferable: a single A100 80GB, or four RTX 3090 24GB cards?
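
For reference, this is the back-of-envelope VRAM arithmetic I have been using to compare the two options. The 7B model size and the bytes-per-parameter figures are my assumptions (common rules of thumb), not measured numbers:

```python
# Rough VRAM estimate for comparing 1x A100 80GB vs 4x RTX 3090 24GB.
# ASSUMPTIONS: a 7B-parameter model; ~16 bytes/param for full fine-tuning
# with Adam in mixed precision (fp16 weights + fp16 grads + fp32 master
# weights and optimizer moments); activation memory is ignored here.
params = 7e9

full_ft_gb = params * (2 + 2 + 12) / 1e9   # weights + grads + optimizer states
print(f"Full fine-tune: ~{full_ft_gb:.0f} GB before activations")
# ~112 GB: over a single A100 80GB, and over 4x 24GB unless the optimizer
# states are sharded across GPUs (e.g. ZeRO / FSDP).

qlora_gb = params * 0.5 / 1e9              # 4-bit base weights; LoRA adapters are tiny
print(f"QLoRA-style base weights: ~{qlora_gb:.0f} GB before activations")
# ~4 GB: fits comfortably on either option.
```

If this arithmetic is roughly right, the choice seems to depend mainly on whether we do full fine-tuning (where neither option fits without sharding or offloading) or adapter-based tuning (where either works).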

Also, could Colab Pro+ be an alternative to going on-premise?
Of course it depends on the service, but we aim to train the model locally and privately.
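
For concreteness, the kind of run I am planning (whether on-premise or on Colab) looks roughly like this minimal QLoRA-style sketch using transformers + peft + bitsandbytes. The model name and the LoRA hyperparameters are placeholders, not final choices:

```python
# Minimal QLoRA-style setup sketch (placeholders, not a final recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the base model in 4-bit so it fits in consumer-GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spreads layers across all visible GPUs (e.g. 4x 3090)
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach small trainable LoRA adapters instead of updating all weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # placeholder: depends on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only the adapters train
```

Given that the data has to stay private, my concern with Colab is less capacity than data handling, so any guidance on that trade-off would also help.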