Manual model download, then move to HF cache

I have downloaded all the files and folders in the FLUX.1-dev repo,

and now I want it to be available to the Hugging Face libraries from within Python code (without having to hard-code the model folder path),

but I also don’t want to re-download the whole model using the Hugging Face CLI (huggingface-cli download black-forest-labs/FLUX.1-dev).

I have read the entire post about how to use the Hugging Face CLI and did not find the answer there.

Is there a way to take a manually downloaded repo and insert it into the Hugging Face cache folder (on Windows it is usually located at %USERPROFILE%\.cache\huggingface\hub)?

There is a fixed rule for naming the cache folders, so recreating the layout by hand is not impossible.
But I think it is definitely faster to just download again with from_pretrained and let it store everything in the cache automatically…
The prohibition on hard-coding the path sounds very strict, though.

The cache folder name is built as:
[Object Type] [double dash as separator] [Company] [double dash as separator] [Model Name], for instance:

models--meta-llama--Meta-Llama-3.1-8B-Instruct
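The naming rule above is simple enough to sketch in a few lines of Python (a minimal illustration, not a Hugging Face API):

```python
def cache_folder_name(repo_id: str, object_type: str = "models") -> str:
    """Build the hub cache folder name: [type]--[org]--[name].
    The '/' in the repo id is replaced by a double dash."""
    return f"{object_type}--{repo_id.replace('/', '--')}"

print(cache_folder_name("meta-llama/Meta-Llama-3.1-8B-Instruct"))
# models--meta-llama--Meta-Llama-3.1-8B-Instruct
print(cache_folder_name("black-forest-labs/FLUX.1-dev"))
# models--black-forest-labs--FLUX.1-dev
```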

Inside that folder, the structure looks like this:

/
=| - blobs (the actual files, named by their hash, with no extension)
=| - refs (a main file containing the commit hash, which is the folder name under snapshots)
=| - snapshots
===| - 5206a32…253f (symlinks with the actual file names, pointing to the files in blobs)
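The blobs/refs/snapshots layout above can be faked from a manually downloaded folder. This is only a sketch under stated assumptions: a real cache names blobs by the file's ETag and the snapshot folder by the repo's actual commit hash, while here I hash each file with sha256 and use a placeholder commit id — so tools that verify revisions against the Hub may still complain:

```python
import hashlib
import os
from pathlib import Path

def import_into_cache(local_repo: Path, cache_dir: Path, repo_id: str) -> Path:
    """Recreate the hub cache layout (blobs/refs/snapshots) from a
    manually downloaded repo folder. Sketch only, not an official API."""
    repo_dir = cache_dir / ("models--" + repo_id.replace("/", "--"))
    blobs = repo_dir / "blobs"
    refs = repo_dir / "refs"
    # Placeholder commit id -- a real cache stores the repo's commit hash here.
    commit = "0" * 40
    snapshot = repo_dir / "snapshots" / commit
    for d in (blobs, refs, snapshot):
        d.mkdir(parents=True, exist_ok=True)
    # refs/main holds the commit hash that names the snapshot folder.
    (refs / "main").write_text(commit)
    for f in local_repo.rglob("*"):
        if not f.is_file():
            continue
        data = f.read_bytes()
        blob = blobs / hashlib.sha256(data).hexdigest()
        if not blob.exists():
            blob.write_bytes(data)
        link = snapshot / f.relative_to(local_repo)
        link.parent.mkdir(parents=True, exist_ok=True)
        os.symlink(blob, link)  # snapshot entries are symlinks into blobs
    return snapshot
```

Note that on Windows, creating symlinks may require Developer Mode or admin rights; the real cache falls back to copies there.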

More details on the cache structure and how it is used here:

I thought there would be an easier way to “import” models rather than faking the structure.

I’ve run the program after modifying the cache myself (not building it from scratch either), so I’m sure that approach would work.
Well, FLUX is big and you don’t want to download it over and over…