1 GB storage limit in Spaces?

Quick question: is there a “new” storage limit for Spaces? I was under the impression it was 50 GB, but today I got

batch response: Repository storage limit reached (Max: 1 GB)
error: failed to push some refs to …

Is the limit 1 GB now?

The large model weights should be put into a new HF dataset repo.

Yeah, true. I think that restriction was introduced sometime last year. Placing large files in model or dataset repositories and downloading them during execution is fine; downloading from HF repositories to the HF runtime completes almost instantly. Writing the code is a bit of a hassle, but that’s all.


Space repositories are capped at ~1 GB of Git/LFS storage. The “50 GB” you remember is the default runtime disk for a free CPU Space. Store large weights and datasets in model or dataset repos, not in the Space repo, and pull them at runtime. (Hugging Face Forums)

What you hit

  • The error batch response: Repository storage limit reached (Max: 1 GB) indicates you exceeded the Space repo cap, not the runtime disk. This cap currently applies to Free and Pro accounts and isn’t user-upgradeable. (Hugging Face Forums)

Two different “storages” to keep straight

  • Repo storage (Git/LFS): ~1 GB limit for the Space repository. Designed for app code, small assets, and config only. Not for large weights. (Hugging Face Forums)
  • Runtime disk: the VM filesystem your Space runs on. Default 50 GB ephemeral on free CPU hardware. You can add persistent storage mounted at /data on paid tiers. These are separate from the Git/LFS cap. (Hugging Face)
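
If you want to attach the /data disk programmatically rather than through the Space settings UI, huggingface_hub has a helper for it. A minimal sketch, assuming a recent huggingface_hub and using a placeholder Space id (this is a paid feature and needs a write token):

# Sketch: request persistent storage for a Space (paid tiers only).
from huggingface_hub import HfApi, SpaceStorage

api = HfApi()  # token picked up from HF_TOKEN or your local login
api.request_space_storage(repo_id="your-org/your-space", storage=SpaceStorage.SMALL)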

Where to put big files

  • Put model shards and datasets in model/dataset repos. Per-file limit 50 GB; account-level quotas are much higher and upgradeable. Your Space then downloads or streams them at startup or on first use. (Hugging Face)
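
On the publishing side, a minimal sketch of pushing local weights to a (hypothetical) model repo from your machine or CI; upload_folder is fine for a handful of files, while upload_large_folder is the resumable variant intended for many multi-GB shards:

# push_weights.py — sketch; run locally/CI, not inside the Space.
from huggingface_hub import HfApi

api = HfApi()  # needs a write token, e.g. via HF_TOKEN
api.create_repo(repo_id="your-org/large-model", repo_type="model", exist_ok=True)
# Resumable upload of a whole folder of large shards.
api.upload_large_folder(
    repo_id="your-org/large-model",
    repo_type="model",
    folder_path="./weights",
)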

Recommended pattern

Keep the Space repo tiny. Fetch artifacts at runtime and cache them to persistent storage if you enabled it.

# app.py
# References:
# - Spaces overview (50 GB runtime disk): https://huggingface.co/docs/hub/en/spaces-overview
# - Spaces persistent storage (/data): https://huggingface.co/docs/hub/en/spaces-storage
# - Hub storage limits (50 GB per-file, super-squash): https://huggingface.co/docs/hub/en/storage-limits

import os

# If you added persistent storage, keep caches under /data to survive restarts.
# Set this before importing huggingface_hub: it reads HF_HOME when the module loads.
os.environ.setdefault("HF_HOME", "/data/.huggingface")

from huggingface_hub import snapshot_download, hf_hub_download

# Pull the full model repo (weights/config) into the local cache.
model_dir = snapshot_download(repo_id="org/large-model", revision="main")

# Or pull a single file from a model repo.
cfg = hf_hub_download(repo_id="org/large-model", filename="config.json")
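
snapshot_download returns the local directory it downloaded into, so you can hand that path (model_dir above) straight to whatever loads your weights. On a restart with persistent storage enabled, the cache under /data is already populated and the call returns almost immediately.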

If you already hit the 1 GB Space-repo limit

  1. Delete large LFS files in the Space: Repo → Settings → Storage → “List LFS files,” then remove unneeded items. Deleting pointers alone doesn’t free space. (Hugging Face)
  2. Reduce history bloat: use the Hub’s super_squash_history to rewrite history and reclaim quota. This is destructive by design (see the sketch after this list). (Hugging Face)
  3. Move big assets out: push them to a model/dataset repo and fetch at runtime as shown above. Per-file limit is 50 GB. (Hugging Face)
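
Steps 1 and 2 can also be scripted. A sketch, assuming a recent huggingface_hub that includes the LFS-management helpers (list_lfs_files / permanently_delete_lfs_files) and super_squash_history; the Space id is a placeholder, and both the deletions and the squash are irreversible:

# cleanup_space_repo.py — sketch; destructive, double-check before running.
from huggingface_hub import HfApi

api = HfApi()  # needs a write token for the Space
repo_id = "your-org/your-space"

# 1. List LFS objects and permanently delete the ones you no longer need
#    (filter however suits you; here, by file extension).
lfs_files = api.list_lfs_files(repo_id=repo_id, repo_type="space")
to_delete = [f for f in lfs_files if f.filename.endswith(".safetensors")]
api.permanently_delete_lfs_files(repo_id=repo_id, lfs_files=to_delete, repo_type="space")

# 2. Rewrite history into a single commit to reclaim quota held by old revisions.
api.super_squash_history(repo_id=repo_id, repo_type="space")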

Common failure modes to avoid

  • Confusing repo cap with runtime disk: Hitting 1 GB on the repo blocks git push. Hitting 50 GB on the VM causes eviction messages like “Workload evicted, storage limit exceeded (50G).” Different fixes. (Hugging Face Forums)
  • LFS pointer deletions: Removing only LFS pointers does not reclaim storage. Use the Storage page or API to delete actual LFS objects, or super-squash. (Hugging Face)

Quick answers to your questions

  • “Is there any limit to 1 GB now?” Yes, for Space repos. Not upgradeable by plan today. (Hugging Face Forums)
  • “50 GB” refers to the default ephemeral runtime disk of a Space VM, not the repo. (Hugging Face)
  • “Should large model weights go in a dataset/model repo, not the space repo?” Correct. Use model/dataset repos and download at runtime. (Hugging Face)

Short, curated references

  • Docs
    • Spaces overview and default hardware (50 GB disk). (Hugging Face)
    • Spaces persistent storage and /data usage. (Hugging Face)
    • Hub storage limits, 50 GB per-file, LFS cleanup, super-squash. (Hugging Face)
  • Forum confirmations
    • “There isn’t a way to increase the 1 GB storage limit for a Space repo.” (Hugging Face Forums)
    • Multiple reports of Repository storage limit reached (Max: 1 GB) on Spaces. (Hugging Face Forums)
  • Related
    • Eviction when VM disk crosses 50 GB. (Hugging Face Forums)
    • GitHub issue confirming 50 GB per-file behavior. (GitHub)

Bottom line: keep the Space repo lean. Put big files in model/dataset repos. Cache to /data if you enable persistent storage. This separates deployability (git) from capacity (VM disk and Hub storage). (Hugging Face)