Spaces Persistent Storage Upgrade Not Accessible

Hi,

I am working on a small Space for my Master’s project, which will log and store session data during usage. I was storing this in /tmp and recently moved to /data. On the free tier this is still ephemeral, which I expected; however, I have no option to upgrade to a paid tier to enable persistent storage.

I have linked a payment method, and I have searched all settings panels (Space, account, etc.) with no luck. I was hoping to activate the small storage tier and begin running my experiments with users this week.

If anyone could help me understand why I don’t have the option, and what (if anything) I can do to get this working, I’d be very grateful!

Thank you!

1 Like

There seems to be a valid case where the Persistent Storage option doesn’t appear for Enterprise orgs…?

Otherwise, it’s likely a Hugging Face bug or browser malfunction, so contact Hugging Face via email.


You are not missing a hidden toggle.
According to Hugging Face’s own docs, the option to buy persistent storage should appear in your Space’s Settings. If it doesn’t, that is almost certainly a Hugging Face–side issue (account / org / region / feature flag), not something you can fix in code or by clicking around more. (Hugging Face)

Below is the concrete picture, from first principles.


1. Background: how storage in Spaces actually works

There are three different “storages” involved. Mixing them up is what usually causes confusion.

1.1. Runtime disk (what /tmp and /data sit on)

  • Every Space gets a runtime filesystem when it runs.
  • On free CPU hardware that is 50 GB and ephemeral: when the Space restarts or is stopped, the contents are wiped. (Hugging Face)
  • By default, everything under / (including /tmp) lives on that ephemeral disk.

Hugging Face’s own “Disk usage on Spaces” page states that this default disk is ephemeral and that if you need longer-term data you must either subscribe to persistent storage or use a dataset as a datastore. (Hugging Face)

So your observation:

  • /tmp → data gone on restart
  • /data on free tier → also gone on restart

is exactly what the docs say should happen when you have no paid storage attached.
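You can confirm this behavior empirically by dropping a marker file on startup and checking whether it survived the previous run. A minimal sketch (the marker path and messages are arbitrary choices, not anything HF-specific):

```python
import datetime as dt
from pathlib import Path

def check_persistence(data_dir: str = "/data") -> bool:
    """Return True if a marker from a previous run survived, i.e. the disk persisted."""
    marker = Path(data_dir) / ".persistence_marker"
    survived = marker.exists()
    if survived:
        print(f"Marker from previous run found: {marker.read_text()}")
    else:
        print("No marker found: disk was wiped, or this is the first run.")
    # (Re)write the marker for the next run to find.
    marker.parent.mkdir(parents=True, exist_ok=True)
    marker.write_text(dt.datetime.now(dt.timezone.utc).isoformat())
    return survived
```

Call check_persistence() at app startup; on a free Space without paid storage it will report a wiped disk after every restart.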

1.2. Persistent storage

Persistent storage is a separate paid volume that you attach to a Space.

Key properties from the docs:

  • You “upgrade your Space to have access to persistent disk space from the Settings tab.” (Hugging Face)

  • It is mounted into the container at /data. You read/write to it like a normal disk. (Hugging Face)

  • Tiers and pricing (per Space):

    • Free tier: 50 GB, not persistent (this is the default runtime disk)
    • Small: 20 GB, persistent, ≈ $5/month
    • Medium: 150 GB, persistent
    • Large: 1 TB, persistent (Hugging Face)

So under normal circumstances, once you buy “Small”, /data is on that paid volume and survives restarts. If you never buy it, /data is just another path on the ephemeral disk.

1.3. Hub repos and datasets (a separate persistence channel)

The same docs explicitly say that if you need data that outlives the Space itself you can also use a dataset repository as a data store. (Hugging Face)

That option is important for you, because it does not depend on the persistent-storage feature working.


2. What you should see in the UI

Multiple official docs and third-party guides assume the same user interface:

  1. Open your Space in the browser.

  2. Go to the Settings tab of that Space (not your global account settings).

  3. Somewhere on that page, you see:

    • Hardware configuration (CPU/GPU) and

    • A Storage / Persistent storage section with a selector for:

      • Ephemeral (default, free), and
      • Small / Medium / Large persistent tiers. (Hugging Face)

The JupyterLab-on-Spaces guide states this very plainly:

To set up persistent storage on the Space, you go to the Settings page of your Space and choose one of the options: small, medium and large. (Hugging Face)

The general “Disk usage on Spaces” doc also says:

You can upgrade your Space to have access to persistent disk space from the Settings tab. (Hugging Face)

And the Spaces Overview page shows storage tiers (Ephemeral, Small, Medium, Large) and tells you to click Settings and select your preferred hardware environment, including persistent storage. (Hugging Face)

So under normal conditions, after you have a card on file, the Storage card exists, and you pick a tier there.

In your case, that entire upgrade control is missing even though:

  • You have a working Space.
  • You understand that /data is ephemeral on the free tier.
  • You have added a payment method. (Hugging Face Forums)

That is not described in any public docs as expected behavior.


3. Why the upgrade option can be missing

There is no public page that says “persistent storage only exists for specific user plans” or “only for certain Spaces.” Instead, the docs treat it as a normal paid add-on you can always choose in Settings or via API. (Hugging Face)

Given that, your situation almost certainly comes from how your Space or account is configured on Hugging Face’s side, not from anything you did.

3.1. Region / advanced-compute constraints (especially orgs)

For Enterprise orgs, Hugging Face supports storage regions (US, EU, etc.) and explicitly says that:

Available hardware configurations vary by region, and some features may not be available in all regions, like persistent storage associated to a Space. (Hugging Face)

If your Space is:

  • Under an organization that uses a non-default region, or
  • Under some internal “advanced compute” configuration,

then persistent storage may be disabled for that region, and the UI will simply not show it.

3.2. Owner vs organization billing

Billing for Spaces upgrades (hardware and storage) is tied to the owner account / organization, not to each individual contributor.

  • The Spaces overview and “Manage your Space” docs both say upgraded hardware and storage are requestable when you have a payment method and use space_storage="small" or request_space_storage. (Hugging Face)
  • Those API calls are sent on behalf of whichever account or org owns the Space.

If:

  • The Space is under an org,
  • You added a card only to your personal billing settings, and
  • The org has no payment method registered,

it is plausible that the UI hides storage upgrades for that Space because the owner (the org) has no valid billing setup, even though you personally do.

There is no explicit doc line confirming “we hide the storage card when billing is missing,” but this pattern is consistent with how hardware upgrades require card or grant on the owner. (Hugging Face)

3.3. Feature flag or UI bug

HF clearly runs a lot of things via backend flags:

  • For example, the Hub docs mention features “not currently exposed to end users” that HF can toggle if you email website@huggingface.co. (Hugging Face)
  • Persistent storage itself is configurable via API (request_space_storage) with a SpaceStorage enum (small/medium/large). (Hugging Face)

If your account or Space did not get the correct flag, the Settings UI will not render the storage selector even though the docs assume it exists. That matches exactly what you see.

Community answers on other persistent-storage problems often end with “contact HF support (website@huggingface.co, billing@huggingface.co) for paid storage issues,” which is another hint that some parts of this system are not self-service. (Hugging Face Forums)


4. Concrete checks you can do yourself

These checks will not fix a backend bug, but they help narrow down the cause before you email support.

4.1. Confirm you’re in the Space Settings, not only account settings

From the docs’ perspective, the flow is:

  1. Open the Space page (e.g., https://huggingface.co/spaces/you/your-space).

  2. Click the Settings tab on that Space, next to “App / Files / Community”.

  3. Scroll; there should be:

    • Hardware section.
    • Storage section with “Ephemeral / Small / Medium / Large” or a “Persistent storage” card. (Hugging Face)

You already looked, but it is important: if the entire storage card is missing, that is the problem, not a “wrong place” issue.

4.2. Check whether the Space is owned by you or by an org

On the Space page, look at the identifier:

  • your-username/space-name → personal Space.
  • org-name/space-name → organization-owned Space.

If it is under an organization:

  • Ask an org admin to open that Space’s Settings and see whether they see a storage section.
  • Confirm whether the org has a payment method in its own billing settings. (Hugging Face)

If the org has no billing configured, that increases the odds that upgrades are hidden.
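A quick way to script this check: the namespace before the slash is the billed owner. This hypothetical helper just parses the repo id; the authoritative check is still the Space page (or HfApi.whoami for your own identity):

```python
def space_owner(repo_id: str) -> str:
    """Return the namespace (user or org) that owns, and is billed for, a Space."""
    owner, _, name = repo_id.partition("/")
    if not name:
        raise ValueError(f"Expected 'owner/space-name', got {repo_id!r}")
    return owner

# The owner is whoever appears before the slash:
print(space_owner("your-username/my-msc-space"))  # → your-username
print(space_owner("some-org/shared-space"))       # → some-org
```

If the owner is an org name rather than your username, billing and storage upgrades hinge on that org’s settings, not yours.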

4.3. Create a tiny test Space under your personal account

Create a bare-bones Gradio Space under your personal user (not the org), then:

  • During creation, some templates show a storage selector (Ephemeral vs persistent) as part of the “Create Space” dialog. (Label Studio)
  • After creation, open its Settings and check if a Storage section appears.

Outcomes:

  • If the test Space does show storage options but your MSc Space does not, the problem is specific to that existing repo (a broken flag or older configuration).
  • If neither Space shows storage options, this is more likely account-level (or region-level) gating.

4.4. Optional: try the Python API from outside Spaces

If you are comfortable with Python, you can directly call the public API to request storage and see what error you get:

from huggingface_hub import HfApi, SpaceStorage

api = HfApi(token="hf_...")  # use a user token with write access

repo_id = "OWNER/SPACE_NAME"  # e.g. "littlewoko/my-msc-space"

api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.SMALL)

This is the documented way to add persistent storage programmatically. (Hugging Face)

  • If this returns something like “feature not available” or 403, that suggests a permission / plan / region gate.
  • If it 200s but the UI still shows no storage card, that suggests a pure UI bug.

Either way, the response is something you can attach to your support message.


5. How to get it fixed: contacting Hugging Face

For anything involving paid features, the official path is via Hugging Face support:

  • Billing and paid features: billing@huggingface.co (address given in the official billing docs and Hub overview). (Hugging Face)
  • General site / feature toggles: website@huggingface.co is often mentioned in Hub docs as the contact point for features that are not user-exposed yet. (Hugging Face)

When you email them, include:

  • A link to the affected Space.

  • A screenshot of that Space’s Settings tab, showing that there is no Storage / persistent storage section.

  • A short description:

    • You understand /data is ephemeral on free tier.
    • You have a card configured in billing settings.
    • You cannot see any way to request Small / Medium / Large persistent storage, contrary to the docs. (Hugging Face)
  • Optionally: the status code and message from request_space_storage if you tried the API.

From public documentation, there is no setting you can change to “un-hide” this; HF staff need to check the Space’s region, owner, and internal flags.


6. What to do for your MSc experiments right now

You want to start experiments this week. You do not need to wait for HF to fix persistent storage to get reliable logging.

The safe design is:

Treat the Space as stateless compute and UI.
Treat persistence as remote: a dataset repo or external DB.

6.1. Using a Hub dataset as your “database”

The storage docs explicitly recommend using a dataset repository as a durable store when you need data that outlives the Space. (Hugging Face)

Pattern:

  1. Create a dataset repo, for example your-username/msc-session-logs.
  2. Create an HF token with write access.
  3. In the Space, set that token as a secret (Settings → Secrets). (Hugging Face)
  4. When a user finishes a session, append a JSON or CSV record to the dataset via huggingface_hub.

Very simple example:

import json, os, uuid, datetime as dt
from huggingface_hub import HfApi

DATASET_ID = "your-username/msc-session-logs"  # dataset repo
api = HfApi(token=os.environ["HF_TOKEN"])      # set HF_TOKEN as a Space secret

def log_session(data: dict):
    session_id = str(uuid.uuid4())
    ts = dt.datetime.now(dt.timezone.utc).strftime("%Y-%m-%dT%H-%M-%S")
    path_in_repo = f"logs/{ts}_{session_id}.json"

    api.upload_file(
        repo_id=DATASET_ID,
        repo_type="dataset",
        path_in_repo=path_in_repo,
        path_or_fileobj=json.dumps(data).encode("utf-8"),
    )

This gives you:

  • One JSON file per session under logs/, persisted on the Hub.
  • Data independent of the Space’s runtime disk.
  • Easy offline analysis: you can clone the dataset and run your MSc analysis locally.

This is exactly the type of pattern the docs suggest in their “Dataset storage” section. (Hugging Face)
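For the offline-analysis step, once you have a local copy of the dataset repo (via git clone or snapshot_download), the per-session JSON files can be collected into one list. A minimal sketch, assuming the logs/ layout above:

```python
import json
from pathlib import Path

def load_sessions(dataset_dir: str) -> list[dict]:
    """Load every per-session JSON record under logs/ into a list of dicts."""
    records = []
    for path in sorted(Path(dataset_dir).glob("logs/*.json")):
        records.append(json.loads(path.read_text()))
    return records
```

From there, pandas.DataFrame(records) or plain Python is enough for the thesis analysis.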

6.2. External database (if you prefer “classic” infra)

You can also use a managed DB (Postgres, Supabase, Firebase, etc.) and send logs from the Space over HTTPS. This is the same pattern many teams follow when using Spaces as a front end and keeping state in their own infra. (Hugging Face)

Tradeoffs:

  • More moving parts and credentials.
  • More control and potentially easier integration with university data systems.

6.3. Where persistent storage fits if/when it works

If HF later fixes your persistent storage UI:

  • Use /data mainly for caches and local working files (e.g. HF_HOME=/data/.huggingface so models and datasets do not re-download every time). (Hugging Face)
  • Keep the authoritative research data (logs, labels, outputs used in the thesis) in a dataset or DB, which is easier to clone, back up, and reference in your write-up. (Hugging Face)
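The cache redirection can also be done in code, before any Hugging Face library is imported. A sketch (the /data path assumes persistent storage is actually attached):

```python
import os

# Point the Hugging Face cache (models, datasets, hub files) at the persistent
# volume so restarts do not trigger re-downloads. This must run before importing
# transformers / datasets / huggingface_hub, which read HF_HOME at import time.
os.environ.setdefault("HF_HOME", "/data/.huggingface")

print(os.environ["HF_HOME"])
```

Setting the same variable in the Space’s Dockerfile or variables panel achieves the same thing without touching app code.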

That way, your experiment does not depend on the fate of a single Space’s attached disk.


7. Summarised next steps

  • You are not doing anything wrong. Docs say the persistent storage selector should appear in your Space’s Settings; if it is missing, that is inconsistent with the documented behavior. (Hugging Face)
  • Check basics: Confirm the Space owner (you vs org), verify billing for that owner, and test with a fresh personal Space. This distinguishes account-level, org-level, and repo-level issues. (Hugging Face)
  • Optionally probe via API: Call request_space_storage with SpaceStorage.SMALL and see whether you get “not allowed” vs success; include that in your support message. (Hugging Face)
  • Escalate to Hugging Face: Email billing@huggingface.co (and if you like, website@huggingface.co) with Space link, screenshots, and a short description that the persistent storage UI is missing despite the docs. (Hugging Face)
  • For your MSc this week: Implement logging to a Hub dataset or external DB now, and treat persistent storage as an optional future optimization for speed/caching rather than the primary data store. (Hugging Face)

This keeps your experiment robust and unblocked, regardless of how quickly HF resolves the missing upgrade option.

Amazing, thank you! Couldn’t have asked for a more robust response. I’ll try getting the upgrade working programmatically, but regardless you’ve given me plenty of direction, so thanks again! I’ll send Hugging Face an email with details so that they’re aware of the bug/issue.

1 Like

Update: the programmatic update worked and I can see that the small persistent storage tier is now visible and selected. Within the UI I still have no option for any of the other storage tiers, but at least I’m on the one I need for now! Definitely a strange little UI bug.

Thanks so much again for getting on this so quickly @John6666 , you’ve been incredibly helpful.

1 Like

Hi, I’m also having trouble with persistent storage. I can’t even enable the small tier because it’s not available. I’ve checked on two accounts, updated payment methods, and nothing: I still don’t have persistent storage.

What can I do?

1 Like

Running this script did the trick for me (fill in with your specific details).

from huggingface_hub import HfApi, SpaceStorage

api = HfApi(token="hf_...")  # use a user token with write access

repo_id = "OWNER/SPACE_NAME"  # e.g. "littlewoko/my-msc-space"

api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.SMALL)

Otherwise, @John6666’s response above has comprehensive steps for persisting what you need. Hope you can get it working! Maybe it’s a more widespread issue that started recently.

3 Likes

Thanks! That worked for me :slight_smile:

2 Likes

this worked for me too,
thanks for the advice

1 Like

Hello,

I just wanted to check if this is a related issue. I am completely missing the Storage / Persistent Storage option within my Space settings; I only see Storage Usage (screenshot below). I am running a Docker version of Argilla but can only see the Space Hardware options on the Space settings page.

I did not try enabling it programmatically, but wanted to check if there is some setup I’m doing wrong with my Spaces that prevents me from seeing it.

What I’m seeing under my Space settings

1 Like

@meganariley Paid option issue?

Hi @Minimartzz There’s a similar discussion here.

Persistent storage isn’t available for new Spaces, but you can use Dataset Storage instead.

1 Like

Hi @meganariley @John6666 Thanks for the info, I’ll checkout those links!

1 Like

Worked thanks

1 Like

Posting this as a follow-up for others facing the same issue.

We ran into the same problem with an Argilla Space — after the persistent storage upgrade, the Space became inaccessible and we lost the dataset.
The core issue seemed to be a disconnect: the small storage tier we had originally configured in Argilla is no longer available, leaving the Space in a broken state with no clear migration path.

We tried three approaches, none of which fully resolved the problem:

1. Requesting the storage upgrade programmatically via the API

Following the steps from this forum, we called request_space_storage() with a write-access token:

import os
from pathlib import Path

from dotenv import load_dotenv
from huggingface_hub import HfApi, SpaceStorage

load_dotenv(Path.cwd() / ".env")
HF_STORAGE = os.environ.get("HF_STORAGE")

api = HfApi(token=HF_STORAGE)
repo_id = "your-username/your-space-name"

api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.SMALL)
print(f"Storage requested for {repo_id}")

Result:

HfHubHTTPError: Client error '404 Not Found' for url 'https://huggingface.co/api/spaces/your-username/your-space-name/storage'

The endpoint returns 404 — the SMALL storage tier appears to no longer be available for this Space. The upgrade option is also absent from the Space Settings UI.

2. Uploading the dataset to a Hub dataset repository

We followed the standard pattern — created a dataset repo, set an HF token with write access as a Space secret, and pushed the data via huggingface_hub. The data transferred fine, but this approach does not reproduce the Argilla-specific configuration: dataset settings, annotation guidelines, label schemas, and field definitions set up in the Argilla UI are not exported with the data. You get the raw records but lose the annotation workspace entirely.

3.Self-hosting Argilla on an external VPS (Vultr)

We deployed Argilla on a Vultr instance (4 GB RAM / 128 GB storage, Ubuntu) using the official Argilla Docker Compose setup:

  1. Provision an Ubuntu instance on Vultr
  2. Install Docker via the official Docker apt repository, then follow the Argilla deployment guide: https://docs.argilla.io/latest/getting_started/how-to-deploy-argilla-with-docker/
  3. Download the Argilla docker-compose.yaml from argilla-io/argilla on GitHub
  4. Edit credentials in the compose file
  5. Start with docker compose up -d and open the Argilla port via iptables

This worked initially — the instance runs at a custom server IP with no dependency on HF persistent storage. However, data uploaded to Argilla on the VPS is also lost when the server is reset, so this does not fully solve the durability problem either.

We are still looking for a reliable solution. Has anyone found a way to either recover access to the original persistent storage tier, or set up a durable Argilla deployment that survives resets?

1 Like

I’m not very familiar with Argilla, but it seems that exporting and importing data at the Argilla data unit level—rather than on a file-by-file basis—is important.

I don’t know the background behind the sudden discontinuation of Persistent Storage, but it certainly seems to have been discontinued…


For your situation, the best practice is to treat the Space as disposable and move durability somewhere else. The current Hugging Face docs say persistent storage is no longer available, while other Hugging Face docs and API references still describe the older storage flow. That means chasing the old small tier is not a reliable operating plan anymore. (Hugging Face)

The core rule

Do not let a Hugging Face Space be the only copy of Argilla state.
Use the Space for UI and compute. Keep durable data outside it. That matches Hugging Face’s newer storage direction, where repos are Git-based and buckets are mutable object storage. (Hugging Face)

Best practices for you

1. Stop trying to recover the old storage tier as your main plan

You can still ask support, but I would treat that as a low-probability rescue path, not as the design. The docs currently say the persistent-storage setting is ignored, even though other docs still expose storage-management methods. That is a documentation and product-state mismatch, not a stable workflow. (Hugging Face)

2. Back up at the Argilla dataset layer, not just the file-upload layer

Your backup unit should be the Argilla dataset, not just “files pushed to the Hub.” Argilla’s docs say a complete dataset includes the configuration in rg.Settings plus the records, and their to_hub / from_hub and to_disk / from_disk flows are specifically for exporting and restoring that full dataset. The to_disk reference says the export contains the dataset model, settings, and records as JSON files. (Argilla Docs)

3. Keep two independent backups at all times

For every important milestone, keep:

  • one Hub dataset repo copy for versioned external backup,
  • one local disk copy outside the Space or VPS.

This is the safest pattern because it protects you from both Space failure and single-host failure, and it uses the Argilla-supported export flows rather than ad hoc copying. (Argilla Docs)
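The dual-backup habit is easy to wrap in a helper. This sketch is deliberately generic: export_fn stands in for whatever produces a full export into a directory (for Argilla, something like dataset.to_disk, which is an assumption about your stack), and the two roots map to the “external backup” and “off-host copy” destinations:

```python
import datetime as dt
import shutil
from pathlib import Path
from typing import Callable

def backup_twice(export_fn: Callable[[str], None],
                 primary_root: str, secondary_root: str,
                 name: str = "dataset") -> tuple[Path, Path]:
    """Run one export, then keep two independent timestamped copies of it.

    export_fn writes a complete export (settings + records) into the directory
    it is given, e.g. lambda p: my_argilla_dataset.to_disk(p)  # assumed API
    """
    stamp = dt.datetime.now(dt.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    staging = Path(primary_root) / f"{name}-{stamp}"
    staging.mkdir(parents=True, exist_ok=True)
    export_fn(str(staging))
    mirror = Path(secondary_root) / staging.name
    shutil.copytree(staging, mirror)  # second, independent copy
    return staging, mirror
```

In practice the second copy should live on different hardware (a laptop, another bucket); copytree here is just the in-process stand-in for that transfer.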

4. Use the right Hugging Face storage for the right job

Use a dataset repo when you want versioned, inspectable snapshots of Argilla datasets. Use a Storage Bucket for mutable artifacts such as logs, uploaded files, exports in progress, checkpoints, or other large files that change often. Hugging Face’s docs explicitly distinguish repos from buckets this way. (Hugging Face)

5. For any serious Argilla deployment, use a real stateful stack

Argilla is not just one container with a folder. Its docs describe a relational database layer, a search layer, and related server configuration, and the official Compose example includes Argilla, PostgreSQL, Elasticsearch, Redis, and named volumes. That is the shape you should think in when the data matters. (Argilla Docs)

6. Separate container persistence from host persistence

Docker volumes persist beyond the life of an individual container. That protects you from docker compose down / up cycles and container recreation. It does not protect you from losing the host itself, rebuilding the VM, or destroying the underlying disk. Docker’s own docs are explicit that volumes persist beyond the container lifecycle. (Docker Documentation)
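For illustration, the named-volume shape in a Compose file looks like this. Service names, image tags, and mount paths below are placeholders, not Argilla’s actual Compose file; use the official one from argilla-io/argilla:

```yaml
services:
  argilla:
    image: argilla/argilla-server:latest   # placeholder tag
    volumes:
      - argilla_data:/var/lib/argilla      # survives container recreation
  postgres:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  argilla_data:    # named volumes live on the host until explicitly removed
  postgres_data:
```

If the top-level volumes section is missing, docker compose down -v or container recreation can silently discard state.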

7. On Vultr, add host-level backups intentionally

This is where your VPS attempt likely fell short. Vultr’s docs say automatic backups cover the compute instance’s active file system, but do not include attached Block Storage volumes. So a durable Vultr design needs both persistent storage for the live service and a separate backup plan for that storage. (Vultr Docs)

8. Test restore, not just backup

A backup is only real if restore works. Argilla’s docs support restoring datasets from the Hub and from disk. So your standard operating procedure should include a restore drill into a fresh workspace or staging instance after major annotation milestones. (Argilla Docs)

What I would do in your place

Good enough for a short project

Use the Space only as a temporary frontend. After each session or milestone:

  1. export the dataset with Argilla’s export methods,
  2. push to a Hub dataset repo,
  3. also write a local disk export,
  4. store large mutable artifacts outside the repo, ideally in a bucket. (Argilla Docs)

Better for an ongoing team workflow

Keep Hugging Face for sharing and snapshots, but move the authoritative runtime off the Space. Run Argilla on a VPS or managed environment with Postgres, Elasticsearch or OpenSearch, Redis, Docker volumes, and host-level backups. That matches Argilla’s documented architecture much better than relying on Space-local state. (Argilla Docs)

Best for anything you would be upset to lose

Use three layers:

  • live runtime: self-hosted Argilla with persistent volumes or managed data services,
  • versioned dataset backup: Argilla to_hub to a dataset repo,
  • offline or second-site backup: Argilla to_disk copied off the VM. (Argilla Docs)

What not to do

Do not rely on:

  • a Space’s local disk as the source of truth,
  • plain file uploads as a substitute for Argilla-native dataset export,
  • Docker volumes alone as your whole disaster-recovery plan,
  • Vultr backups without checking whether the actual data disk is included. (Hugging Face)

A practical operating checklist

Use this as your default routine:

  • After creating or changing a dataset schema: export with Argilla, not raw file copy. (Argilla Docs)
  • After each annotation milestone: export to Hub and to disk. (Argilla Docs)
  • Before any deployment change or restart-risking change: snapshot the dataset first. (Argilla Docs)
  • For VPS deployments: confirm named volumes are present for Argilla, Postgres, and Elasticsearch. (GitHub)
  • For VPS durability: enable snapshots/backups and verify which disks they cover. (Vultr Docs)
  • Once per month: restore one backup into a clean environment and verify the dataset loads correctly. (Argilla Docs)

My recommendation, plainly

For you, the best practice is:

Space for convenience. External storage for truth. Self-hosted stack for durability.

That means:

  • do not spend more energy on reviving the old Space persistent-storage path,
  • use Argilla-native export/import as your backup primitive,
  • keep two backups,
  • and move long-lived Argilla state to infrastructure you control. (Hugging Face)

1 Like

Hi and thanks for reaching out! We’ve sunset persistent storage for Spaces and now recommend utilising HF Mount, which lets you mount Hub repositories or Storage Buckets directly as volumes. For Spaces, this is currently beta available via the API and UI support is coming soon.

Until this is ready & we publish documentation on using HF Mount for Spaces, you can check out more information on using the already available HF Mount for Jobs: https://huggingface.co/docs/hub/jobs-configuration#volumes

We’ll circle back when this is fully available for Spaces! :rocket:

1 Like