How to add or download files and folders in/from the space


Hi, I have some Python files and folders that I want to add to my Hugging Face Space project… Does anyone have any idea how to add or import them into the Space? I can't find any option to do so.

And please also guide me on where and how to download or export the Python files created here.

Involving GitHub is not an option.

Thank you

Hi @nightfury !

You want to put files in your space that you don’t want to check into the GitHub repo of your space?

Maybe you can put them in a private model repo and then clone the private repo while the space is being built?


You can use the hub client library to programmatically check out a private repository on the hub:
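For instance, a minimal sketch of this (assuming `huggingface_hub` is available, which it is in Spaces by default, and that a read token is stored as a Space secret named `HF_TOKEN`; the repo id below is a placeholder):

```python
import os

def fetch_private_repo(repo_id, local_dir="private_assets"):
    """Download a snapshot of a private Hub repo at Space startup.

    Sketch only: assumes `huggingface_hub` is installed and that HF_TOKEN
    is a read token saved as a Space secret. `repo_id` is a placeholder.
    """
    from huggingface_hub import snapshot_download  # deferred import
    return snapshot_download(
        repo_id=repo_id,
        token=os.environ.get("HF_TOKEN"),
        local_dir=local_dir,
    )

# e.g. fetch_private_repo("your-username/your-private-model")
```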


You mean git repo, not GitHub repo, no? 🙂


Actually I want to import/upload files and folders to my particular Space, and then reference and use them in the app.
But I can't find any useful interface here to import, export, select, or delete files.

Secondly… yes, I want to clone or download a particular folder from an available GitHub repository into Hugging Face… that, I think, is not available.

I can't even find a command-line tool interface here, so please help me with that too.

Thanks

For example, with the web upload option I uploaded a zip file, and now I want to extract the files and folders in it, but that's not possible since no such interface exists.
Also, there is no terminal option available to unzip it.

Pardon me, I'm new to this, so these little queries keep arising.

Hi @nightfury !

You are correct that there is not currently a command line interface for the machine running your space.

If you need to do some “setup” actions in your space, you can do that in the app.py file before you define and launch your Gradio app.

So something like:

import gradio as gr

# one-time setup: runs once at startup, before the UI is defined
download_and_extract_zip_file()
clone_github_repository()

with gr.Blocks() as demo:
    ...  # application code goes here

demo.launch()

The download_and_extract_zip_file() and clone_github_repository() code will only run once.

So you're basically limited to what you can do with Python.

Here are some resources I found that could help you get started:


Thanks… but all of this produces a runtime error:

Space not ready. Reason: Completed, exitCode: 0, message: None

I tried…

1)
import git
git.Git("./master").clone("https://github.com/ThereforeGames/txt2mask")

2)
from zipfile import ZipFile
ZipFile("master.zip").extractall("master")

But all of this creates files and folders in the runtime build environment, i.e. in '/home/user/app/', not in the actual Hugging Face Space project's repository.
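That is expected: the running container's filesystem is separate from the Space's git repository, so files written at runtime vanish on restart unless they are committed through the Hub API. A hedged sketch of one way to do that (the helper name and the HF_TOKEN secret are assumptions, not an official recipe):

```python
import os

def persist_to_space_repo(local_path, repo_id, path_in_repo=None):
    """Commit a file generated at runtime into the Space's repo.

    Sketch only: assumes `huggingface_hub` is available and HF_TOKEN is a
    *write* token stored as a Space secret.
    """
    from huggingface_hub import HfApi  # deferred import
    api = HfApi(token=os.environ.get("HF_TOKEN"))
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=path_in_repo or os.path.basename(local_path),
        repo_id=repo_id,
        repo_type="space",
    )
```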

I found this code:

import os

path = "./master"
clone = "git clone https://github.com/<git_repository_path>.git"
os.system("sshpass -p your_password ssh user_name@your_localhost")
os.chdir(path)  # the path where the cloned project should be copied
os.system(clone)  # cloning

But I'm not sure about the 'your_localhost' value, since that is managed on your end.

I can see many projects here with multiple folders and files created and committed at once. May I know how they are doing that? What import mechanisms are they applying?
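(For reference, one common way projects push many files and folders in a single commit is `huggingface_hub`'s `upload_folder`; a sketch, assuming the library is installed and an HF_TOKEN write token is configured as a Space secret:)

```python
import os

def upload_many_files(folder, repo_id):
    """Push everything under `folder` to the Hub in one commit.

    Sketch only: assumes `huggingface_hub` is installed and HF_TOKEN is a
    write token; repo_id looks like "username/space-name".
    """
    from huggingface_hub import HfApi  # deferred import
    HfApi(token=os.environ.get("HF_TOKEN")).upload_folder(
        folder_path=folder,
        repo_id=repo_id,
        repo_type="space",
        commit_message="Add project files and folders",
    )
```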

Does Hugging Face have its repositories on GitHub? If so, how do I access them? Do the same credentials created here apply there too?

I used local git: went through its workflow all over again and cloned, pulled, pushed, and committed the desired folder and file structure.

I think Hugging Face needs to provide an out-of-the-box GUI with these features, developer-to-developer kind.

It needs to get away from its dependency on the git-type structure and API, if that is what is being used.

Hey, I'm facing an issue with the 'cpu' device in app.py · nightfury/SD-InPainting at main,
as no GPU ('cuda') is available.

If I set torch_dtype=torch.float16, then it throws
RuntimeError: expected scalar type Float but found BFloat16

If I set torch_dtype=torch.bfloat16, then it throws
RuntimeError: expected scalar type BFloat16 but found Float

If I set torch_dtype=torch.half, then it throws
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

If I set torch_dtype=torch.double, then it throws
RuntimeError: expected scalar type BFloat16 but found Double

If I set torch_dtype=torch.long, then it throws
TypeError: nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int64

So I am really confused about which torch_dtype to use for a successful run.
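(A note on what these errors mean: each one reports a dtype mismatch between the model weights and the inputs, and some half-precision kernels such as LayerNorm are not implemented on CPU at all. On a CPU-only Space the usual fix is to keep everything in the default torch.float32, i.e. omit torch_dtype or set it to torch.float32. A small sketch of the mismatch and the fix:)

```python
import torch

lin = torch.nn.Linear(4, 4)  # weights are float32 by default

# Mixing dtypes reproduces the "expected scalar type ..." family of errors:
try:
    lin(torch.randn(1, 4, dtype=torch.bfloat16))
except RuntimeError as err:
    print("dtype mismatch:", err)

# On CPU, keep the model and inputs in float32 (the default):
out = lin(torch.randn(1, 4))
assert out.dtype == torch.float32
```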