I’ve been building a tool called EzEpoch that lets you:
- Pick a model (text, vision, audio, multimodal)
- Point it at your dataset
- Start training in about 5 minutes
No environment setup, no dependency issues, no training scripts, and no DevOps. EzEpoch handles:
- Model auto-configuration
- Package generation + installation
- GPU deployment (Vast.ai + RunPod)
- True MSL (max sequence length) detection
- Data structuring + curriculum building
- Crash recovery + checkpoint protection
- Model export (full, quantized, or push to HF)
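By way of illustration, "checkpoint protection" usually comes down to atomic writes: save to a temporary file, sync it to disk, then swap it into place, so a crash mid-save can never corrupt the last good checkpoint. A minimal sketch of that general pattern in plain Python (not EzEpoch's actual code; the function name and pickle format are hypothetical):

```python
import os
import pickle
import tempfile

def save_checkpoint_atomically(state, path):
    """Write a checkpoint so a crash mid-write never clobbers the last good copy."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Write to a temp file in the SAME directory so the final rename is atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk before the swap
        # Atomic on POSIX and Windows: readers see the old file or the new
        # file, never a partially written one.
        os.replace(tmp_path, path)
    except BaseException:
        os.remove(tmp_path)  # clean up the partial temp file on any failure
        raise

# Usage: the previous checkpoint survives even if a crash hits mid-save.
save_checkpoint_atomically({"step": 100, "loss": 0.42}, "ckpt.pkl")
```

A real trainer would hand `torch.save` (or similar) the file handle instead of `pickle.dump`, but the swap-into-place logic is the part that provides the crash safety.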
Try the Hugging Face Space: EzEpoch (by wiljasonhurley)
More about EzEpoch: https://ezepoch.com
If you’ve ever struggled with configs, environments, or GPU cloud setup, I’d love your feedback.