Hi team,
Going through the last part got me thinking about some questions regarding quotas & limits:
- Is there any limit on the number of private and public repos a user can have?
- Is there any limit on the size of an individual dataset/model repo? Or a per-account limit? (e.g. on Kaggle, each user gets a fixed number of GBs to host, and the sum total must stay within that limit.)
- If I make 1,000 commits of a 1 GB model, will that 1 TB be 'always accessible', or are there limitations in the stack w.r.t. git history? (See the sketch after this list for the kind of pipeline I mean.)
- Is there any limit on the number of downloads per model (specifically for a privately uploaded model)?
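To make the third question concrete, here's a rough sketch of the scenario I have in mind, assuming huggingface_hub's `HfApi.upload_file` (where each call creates a commit); the repo id and loop are purely hypothetical:

```python
from huggingface_hub import HfApi

api = HfApi()

# Hypothetical pipeline: re-commit a ~1 GB weights file 1,000 times.
for i in range(1000):
    # ... retrain / regenerate the ~1 GB "model.bin" here ...
    api.upload_file(
        path_or_fileobj="model.bin",        # ~1 GB artifact, rewritten each run
        path_in_repo="model.bin",
        repo_id="your-username/big-model",  # placeholder private model repo
        repo_type="model",
        commit_message=f"Update weights (run {i})",
    )

# Each upload is a separate commit, so the repo's git history would
# reference ~1 TB of blobs in total -- the question is whether all of
# that history remains accessible indefinitely.
```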
These questions aren't only about what's supported right now, but also about the near future. E.g.: if I upload a public/private model (hypothetically for either commercial or non-commercial use) and don't use the Inference API (just storage), will there be any threat to the 1) stability or 2) scalability of such a pipeline?