Help with Duplicating a SadTalker Space on Hugging Face

Hey there,

I’m working on a SadTalker project and using the basic CPU plan (2 vCPU, 16 GB RAM) on Hugging Face Spaces. I’ve run into a few issues when trying to clone or duplicate a space. The cloning process just doesn’t finish, and I can’t continue setting things up.

I also tried cloning the space to my local machine, but that didn’t work out either. I’m hoping to create or duplicate a space that runs smoothly within the basic plan’s limits without causing any issues.

Any advice or tips on how I can get this working would be awesome!

Thanks a lot!

I looked at it, and it’s due to Gradio having many incompatibilities between 3.x and 4.x.
I removed the incompatible parameter and fixed it to the point where it starts up for now.
It’s kind of a rite of passage on HF to suffer through Gradio version upgrades.

But I wonder if this will work in CPU space in terms of performance…

I cloned the original SadTalker repo from GitHub to my local machine, then uploaded its files to make a Space out of it, and I faced this issue:

runtime error

Exit code: 1. Reason: Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from src.gradio_demo import SadTalker
  File "/home/user/app/src/gradio_demo.py", line 5, in <module>
    from src.facerender.animate import AnimateFromCoeff
  File "/home/user/app/src/facerender/animate.py", line 23, in <module>
    from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
  File "/home/user/app/src/utils/face_enhancer.py", line 4, in <module>
    from gfpgan import GFPGANer
  File "/usr/local/lib/python3.10/site-packages/gfpgan/__init__.py", line 2, in <module>
    from .archs import *
  File "/usr/local/lib/python3.10/site-packages/gfpgan/archs/__init__.py", line 2, in <module>
    from basicsr.utils import scandir
  File "/usr/local/lib/python3.10/site-packages/basicsr/__init__.py", line 4, in <module>
    from .data import *
  File "/usr/local/lib/python3.10/site-packages/basicsr/data/__init__.py", line 22, in <module>
    _dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]
  File "/usr/local/lib/python3.10/site-packages/basicsr/data/__init__.py", line 22, in <listcomp>
    _dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/usr/local/lib/python3.10/site-packages/basicsr/data/realesrgan_dataset.py", line 11, in <module>
    from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
  File "/usr/local/lib/python3.10/site-packages/basicsr/data/degradations.py", line 8, in <module>
    from torchvision.transforms.functional_tensor import rgb_to_grayscale
ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor'
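For reference, newer torchvision releases removed the functional_tensor module, while rgb_to_grayscale still lives in torchvision.transforms.functional. One workaround (a sketch, not the project's official fix) is to register a small alias module in sys.modules before gfpgan/basicsr are imported; the helper name ensure_functional_tensor_shim is my own:

```python
import sys
import types

def ensure_functional_tensor_shim():
    """Re-register torchvision.transforms.functional_tensor as an alias,
    so old basicsr/gfpgan imports keep working on newer torchvision.
    Assumption: rgb_to_grayscale still exists in torchvision.transforms.functional."""
    name = "torchvision.transforms.functional_tensor"
    if name in sys.modules:
        return  # old torchvision, or shim already installed
    try:
        from torchvision.transforms import functional as F
    except ImportError:
        return  # torchvision not installed; nothing to shim
    shim = types.ModuleType(name)
    shim.rgb_to_grayscale = F.rgb_to_grayscale
    sys.modules[name] = shim

# Call this at the very top of app.py, before importing gfpgan or basicsr:
ensure_functional_tensor_shim()
```

The alternative is simply pinning an older torchvision in requirements.txt, but the shim avoids fighting the rest of the dependency tree.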

here is my space:

Overall, the dependencies (requirements.txt) are broken. The author also seems to assume Gradio 3.x.
This is partly unavoidable, since Gradio 3.x and 4.x are practically different languages…
I wonder if this code was written not for HF Spaces but for local PCs, like Stable Diffusion’s WebUI, for example?
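If you would rather not port the app to 4.x, one way to sidestep the 3.x/4.x break is to pin Gradio in requirements.txt (the exact version below is illustrative; pick the 3.x release the app was actually written against):

```txt
gradio==3.50.2
```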

At least I was able to get it to boot up…

Thank you so much

If it works, it’s OK. :grinning:


John6666/SadTalkerGitHub doesn’t work; it gives me “Error” in the generated-video section.

I want to ask another question: how can I make a notebook on Kaggle or Colab for SadTalker and use Gradio’s web UI or link it to ngrok? I can’t use SadTalker on Colab; I tried hard but couldn’t reach a solution.

FileNotFoundError: [Errno 2] No such file or directory: ‘checkpoints/auido2pose_00140-model.pth’

It looks like I need to put this (and maybe other files too) in the Space’s folder, or download and relocate it myself in the program. (I can do that, but it’s a pain. P.S. I uploaded the previous Space’s files for now.)
This is still basically made for local PCs.
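A download-and-relocate step can be done at startup with a small helper; this is a sketch with my own helper name, and the URL is a placeholder — point it at the release asset for your SadTalker version:

```python
import os
import urllib.request

def fetch_checkpoint(url: str, dest: str) -> str:
    """Download a checkpoint file once; skip the download if it is already on disk."""
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    if not os.path.exists(dest):
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            out.write(resp.read())
    return dest

# Placeholder URL -- substitute the real release asset:
# fetch_checkpoint("https://example.com/auido2pose_00140-model.pth",
#                  "checkpoints/auido2pose_00140-model.pth")
```

Calling this for each required file before constructing SadTalker avoids the FileNotFoundError without committing large weights to the repo.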

Sorry, I’ve never used Kaggle or Colab. I mean, I’m not familiar with cloud services in general, I’m just a guy who recently got back into programming and is in rehab.

_pickle.UnpicklingError: unpickling stack underflow

The model seems to be an old version and I get a loading error; where is the new one?

P.S.

I found the model. Now, if we’re going to get this to work, someone should first maintain it so that it’s up to date.
It seems to me that it would work with a new PyTorch, especially if you fix the use of functions that are no longer in the current version of torchvision.
Maintenance stopped in 2023…

Can you run these 2 spaces on ZeroGPU?

Sorry, but my slots are full, including Spaces for private development and conversion. (The limit is 10 per person… an organization gets 30. Multiple accounts should be possible, but it’s a pain in the ass.)

If you’re talking about modifying it to run in a ZeroGPU Space, I can do that.
Just put import spaces at the top of app.py (and at the top of each relevant function’s .py), then pinpoint the functions you want to run on the GPU with @spaces.GPU decorators.
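The step above can be sketched like this. On a ZeroGPU Space the spaces package is preinstalled; the fallback stub below is only my assumption for making the same file run locally, not the real API, and the function name generate_video is hypothetical:

```python
# app.py -- sketch. `spaces` resolves on HF Spaces; stub it for local runs.
try:
    import spaces
except ImportError:
    class spaces:  # no-op stand-in for local testing (not the real API)
        @staticmethod
        def GPU(func=None, duration=60):
            if callable(func):      # used bare: @spaces.GPU
                return func
            return lambda f: f      # used with args: @spaces.GPU(duration=...)

@spaces.GPU(duration=120)  # this function gets the GPU slice when called
def generate_video(source_image, driven_audio):
    # heavy SadTalker inference would go here
    return f"video from {source_image} + {driven_audio}"
```

The decorator only needs to wrap the functions that actually run inference; everything else stays on CPU.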

P.S.
I did.

Do you mean @spaces.GPU(duration=120)?

I didn’t find an [add duration] section.

Do you mean @spaces.GPU(duration=120)?

Actually, duration= and the parentheses are optional.
If you don’t specify it, it’s treated as 60 seconds.
You can set it as needed: the longer the duration, the longer the processing it allows, but the more seconds you request, the more likely you are to get caught by quotas.
I often set it to 30 seconds for light processing and 60 to 70 seconds for text-to-image inference.

How to apply these changes?

I committed the changes to your Space and it caused this issue; I think I did it wrong:

runtime error

Exit code: 1. Reason: File "/home/user/app/app.py", line 112 @spaces.GPU SyntaxError: invalid syntax

Container logs:

===== Application Startup at 2024-08-30 06:11:38 =====

  File "/home/user/app/app.py", line 112
    @spaces.GPU
SyntaxError: invalid syntax

I fixed it. I also added “spaces” to the non-GitHub version.


Where can I find this, and how do I commit @spaces.GPU to the Space?

Just put import spaces at the top of app.py and top of each function’s .py and pinpoint the @spaces.GPU decorators to the functions you want to use GPU.

Where can I find this?

The only way is to find and specify the function responsible for the most important part of the process, the part that runs faster on a GPU.
If you read app.py from the top and follow all the imports and from statements, you will find it eventually.

An easier way is to find the function called in Gradio’s events.

In SadTalker’s app.py, the clue is here.

            submit.click(
                        fn=sad_talker.test, 
                        inputs=[source_image,

The function that is called when a button is pressed is usually the main body of the inference function, because that’s how these apps are usually built.
If you follow this, you will reach gradio_demo.py in the src folder, find the test method there, add the decorator, and you’re done.
(I did it this time.)

This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.