New: Distributed GPU Platform

Hey, HF community. My team and I are running a survey on a distributed GPU platform. I would love your input here, or you can contact me on Twitter.

Some questions we have for the community:

  1. Where do you rent GPUs?
  2. What is the first consideration when looking for a place to rent? Is it cost?
  3. What specific niche or field do you work in within the machine learning community?
  4. How does your niche differ from other ML communities regarding computing needs?
  5. What do you use to train your models? (e.g. AWS)
  6. Where do you get your data, and where do you store it?

I appreciate any input. You can provide one-word answers. :upside_down_face:


I'm usually looking for inference, with energy efficiency as a key consideration, along with accuracy, F1, etc.

  1. Usually RunPod, Lambda, or whoever else.

  2. Cost.

  3. I train large-scale open-source (and closed-source) models for general performance in generative tasks, usually LLMs.

  4. I am the drain by which the compute falls, the final destroyer of water (doing my best to fix that, though!).

  5. If you mean trainers here, we have our own that we've built (Axolotl, OpenChat).

  6. Make it, usually, and store it on Hugging Face!
