I was testing with a duplicate of a space: DucHaiten Webui on Cpu - a Hugging Face Space by ritpop
It worked a few times, but then it stopped, getting stuck in this loop.
I’m not quite sure if what I was doing was permitted, maybe that’s why, but I was using it to merge LoRA models, mainly XL ones.
Don’t worry, there’s no permission requirement or anything like that; none exists. It’s a bug anyway. A frequent one.
If you have a problem with your computer or smartphone, reboot it anyway.
@Yntec Do you know anything about this error? Your name was listed as the maintainer.
I got it to work, thanks for the help. The problem was an A1111 WebUI extension; in this case, civitaiPlus.
The problem was an A1111 WebUI extension; in this case, civitaiPlus.
I heard that the summer update of WebUI made a lot of extensions unusable; I wonder if it’s related to that.
Well, I’m glad it’s resolved.
Oh, hello! Just noticed my name mentioned here.
Yeah, some new versions of extensions are incompatible with that version of A1111, so when they’re updated and fetched again, the whole thing breaks, gets into a loop, and the space never builds.
The solution (other than just disabling the extension) is to find a previous version of the extension and make it install that one instead of the default, but I have no idea how to do that. It’s also possible to update to a more recent A1111 version that can be built and is compatible with the new version of the extension, but when I tried that I ran into a lot of problems because something else breaks. That’s the reason the spaces based on this remain incompatible with SDXL and newer base models: the one time I successfully managed to upgrade everything, generation times jumped from 15 minutes to 45 minutes, making it unusable.
So it was a version issue, as expected.
The solution (other than just disabling the extension) is to find a previous version of the extension and make it install that one instead of the default, but I have no idea how to do that.
In Python, this kind of version pinning is possible, but in WebUI’s case it’s probably impossible, because extensions are semi-automatically updated to the latest version.
https://github.com/hnmr293/stable-diffusion-webui-dumpunet/tree/53f15242c96debcb61290e76268351b0427accb4
to
pip install git+https://github.com/hnmr293/stable-diffusion-webui-dumpunet@53f15242c96debcb61290e76268351b0427accb4
generation times jumped from 15 minutes to 45 minutes,
It was made faster and less memory-hungry for GPU, but for CPU it was the other way around…
Maybe this one can be fixed by tweaking the startup options. Don’t ask me what exactly to do to fix it, though…
Thanks John6666, I found another way. I had to! The SuperMerger extension vanished from the WebUI’s tabs after they updated it to support FLUX. So it’s no longer compatible with this version of A1111, but at least it lets the space build without the loop (otherwise I’d have needed to test each extension to see which one was the culprit!); it just doesn’t work.
The solution was to go to the extension’s forks here:
And find this one, which was the most recent backup:
And that one works just fine, phew! It’s missing some fixes; maybe that’ll become relevant if I ever want to use LoCons.
So one would find a fork of a compatible version of the extension and use that one, because it sits behind the problematic commits.
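For reference, here’s a minimal sketch of how one might pin an extension to a known-good fork or commit by hand before WebUI starts, e.g. from a Space’s setup script. The repo URL and commit hash below are placeholders, not the actual fix we used:

```python
# Hypothetical sketch: clone a fork (or the upstream repo) of an extension and
# detach it at a pinned commit so updates can't pull in breaking changes.
import subprocess
from pathlib import Path

EXT_REPO = "https://github.com/some-user/sd-webui-supermerger"  # placeholder fork URL
EXT_DIR = Path("stable-diffusion-webui/extensions/sd-webui-supermerger")
PINNED_COMMIT = "53f1524"  # placeholder: a commit known to predate the breakage

if not EXT_DIR.exists():
    subprocess.run(["git", "clone", EXT_REPO, str(EXT_DIR)], check=True)
subprocess.run(["git", "-C", str(EXT_DIR), "checkout", PINNED_COMMIT], check=True)
```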
It was curious to find myself in the fork list with one whose purpose I don’t even remember, but obviously I never really fixed any LoRA, because I never switched to my forked version.
I heard that WebUI was on Gradio 3 until the Flux transition this summer. The rest is the usual pattern; no need to explain.
The other big changes are GGUF and bitsandbytes support. This may have radically changed the dependencies.
Here’s how I solved the situation a few days ago: I removed the problematic extension. I also started using a different version of SuperMerger, but for an unrelated reason.
Since I only use it for merging LoRAs, I think it’s good enough for me.
What we need is a replacement for the A1111 WebUI CPU space that can run Flux. For people without the hardware, Hugging Face’s CPU spaces have been our only option to do everything we want, at the cost of speed.
This became important with the release of nyanko7/flux-dev-de-distill · Hugging Face. What Black Forest Labs did was make three versions of Flux: Pro, Dev, and Schnell. Schnell has both steps and CFG distilled, which makes CFG stay at a fixed value and makes steps beyond 4 make no difference (and 4 are not enough for great text, making Dev superior).
Dev had CFG distilled to a fixed value, which means you can’t control the creativity or how closely it sticks to the prompt. With Stable Diffusion that didn’t matter, and a CFG of 7.5 could be used for everything, but with Flux it matters: Flux knows plenty of styles and artists, yet the fixed CFG makes everything you generate look the same.
They did this so that people wanting to unleash Flux’s true potential had to pay for the Pro version, but flux-dev-de-distill solves all that (bridging the gap between Dev and Pro), except it’s incompatible with diffusers, so people without the hardware to run it are out of luck!
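To make the CFG point concrete, here’s a sketch of the standard classifier-free guidance update that de-distillation restores. The names are illustrative; distilled Dev instead takes guidance as a model input, so this per-step combination isn’t available there:

```python
import torch

def cfg_step(noise_uncond: torch.Tensor, noise_cond: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    # Classic classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one. cfg_scale is the knob that
    # distilled Flux Dev bakes in and that de-distillation gives back.
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)

# With a de-distilled model you run two forward passes per step (conditional
# and unconditional) and combine them; distilled Dev does a single pass.
```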
So it would be time to get a Hugging Face space that can run Forge on CPU, or ComfyUI on CPU, or something like that… unless Flux requires more RAM than free CPU spaces provide (which would be the end of the dream…)
That would make Automatic1111 obsolete and let us move on, able to run true CFG with Flux like people with the hardware. Otherwise, the only option is to wait until a model superior to Flux appears that is compatible with diffusers, a technology we now depend on because there’s no alternative.
Sorry if this is off-topic; I’m not familiar with this forum format, and I don’t know if I’m supposed to open a new thread about it, or something.
It’s not off-topic, because it’s very relevant content.
I think ComfyUI supports the Nyanko7 version, or someone else made it work there, but I don’t know if WebUI is ready yet. Maybe Forge’s lllyasviel will make it compatible if no one else does.
Then we can make it work with HF’s Spaces.
Or I could simply create a Nyanko7 version of the CPU space. (I’m afraid it won’t be able to finish generating within 30 minutes and will error out…)
Diffusers support is not official, but it works; or rather, the Nyanko7 version is compatible with Diffusers because it is custom code for Diffusers, plus the recently released custom pipeline code. I’d like to see it officially adopted, but from a programmatic standpoint that one is effectively a different algorithmic model from Flux. At least multimodalart was aware of the de-distill, so I’m sure they’re thinking about how to implement it. That model could be a game changer.
Diffusers code for de-distill version
Edit:
I’ve got the code for both a CPU space and a GPU space, but it crashed because I didn’t have enough RAM. Well, that’s what happens.
Oh yeah, this one is it: FLUX.1 [dev]-De-Distill - a Hugging Face Space by ameerazam08. But it doesn’t allow modifying true CFG along with guidance, which would be the point of the de-distill…
But then this dropped!:
Seems to be one step away from being compatible with the Inference API:
"The repository for jimmycarter/LibreFLUX contains custom code in transformer/transformer which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/jimmycarter/LibreFLUX/transformer/transformer.py. Please pass the argument trust_remote_code=True to allow custom code to be run."
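Following the error message’s own suggestion, loading it locally would presumably look something like this (a sketch; everything beyond trust_remote_code is an assumption, and a FLUX-sized model needs a lot of RAM):

```python
import torch
from diffusers import DiffusionPipeline

# trust_remote_code=True allows the repo's custom transformer/transformer.py
# to be executed, which is exactly what the error message asks for.
pipe = DiffusionPipeline.from_pretrained(
    "jimmycarter/LibreFLUX",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```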
But with so many solutions appearing, I think I just need to be patient; at this rate we should be able to get the open-source version of DALL-E 3!
I see that SD3.5 has also been released.
Diffusers’ developers will have a hard time, but the de-distill version will be supported eventually.
Until HF officially supports it, we’ll have to run it locally or on a Zero GPU space. I can’t do it on my local PC, though!
The day before yesterday, they were hard at work adding support for loading in a quantized state.
There is a time lag between Diffusers support and server support, but that can’t be helped.
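For context, the quantized loading looks roughly like this in recent Diffusers (a sketch based on Diffusers’ bitsandbytes integration; bitsandbytes needs a GPU, so this wouldn’t help CPU spaces):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# Load the Flux transformer with 4-bit weights to cut memory use.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```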
How fast things move! That day, so many base models were released, and even a local video model that was better than the previous best local one we had.
But the very next day, the de-distill basically became obsolete, ha!
https://www.reddit.com/r/comfyui/comments/1g9wfbq/simple_way_to_increase_detail_in_flux_and_remove/
Wow!
Turns out Flux Dev is already capable of such detail and creativity without needing more training! The problem was that the schedulers were removing too much noise at each step; if you only remove 95% of it, you get this magic!
I don’t know what it would take for people like us to be able to do this on the Inference API, but I suspect it’d be a matter of setting up a duplicate of Black Forest Labs’ repo for Flux Dev with a scheduler file that scales sigmas by 0.95, the way ComfyUI’s plugins are doing. I think this is something obscure, and most people don’t know what they’re missing.
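If I understand the trick right, in Diffusers terms it would be something like scaling the sigma schedule before handing it to the pipeline. A sketch, assuming your Diffusers version’s Flux pipeline accepts a custom sigmas list (newer releases do); the prompt and step count are just examples:

```python
import numpy as np
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

num_steps = 20
# Flux's default sigma schedule is a simple linspace; scaling it by 0.95 means
# each step removes only 95% of the noise it normally would (the Reddit trick).
sigmas = (np.linspace(1.0, 1.0 / num_steps, num_steps) * 0.95).tolist()

image = pipe("a detailed forest at dawn", num_inference_steps=num_steps, sigmas=sigmas).images[0]
```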
Stable Diffusion 3.5 did not impress me, and I realized why I became a Flux fan. Bad eyes, bad anatomy, bad hands, and bad generations on a regular basis; Flux is so great I get perfect eyes, perfect anatomy, and good generations 95% of the time. So it’s not a matter of parameters: Black Forest Labs knew what they were doing, while Stability AI didn’t. And I know it’s not something that can be fixed; all the SD1.5-era models without those problems had to sacrifice composition, creativity, and interesting poses, and SD3.5 already has boring poses for subjects, so it can’t be the future.
Flux just lacks detail and creativity, but modifying sigmas, or even a new scheduler that removes less noise from the picture by default, may be the answer. Too bad it ruins text, but at least all the fonts I downloaded could be useful again, hehe…
This is simply amazing. I wonder if people around sayakpaul, nyanko7, lllyasviel, DN6, or multimodalart will implement this as soon as they learn about it?
Some of the schedulers can specify sigma-related behavior, but I don’t think it’s been implemented yet in Diffusers’ scheduler for Flux.
Rather than choosing one or the other, you could just use the sigma adjustment together with de-distill or Lite.
So much potential for Flux…
Automatic1111 WebUI CPU spaces have been broken recently. You can see that by duplicating a working space like this:
You will not be able to build it; you will get a build error:
Which reads:
Requested pytorch_lightning==1.7.7 from https://files.pythonhosted.org/packages/00/eb/3b2152f9c3a50d265f3e75529254228ace8a86e9a4397f3004f1e3be7825/pytorch_lightning-1.7.7-py3-none-any.whl (from -r /tmp/requirements.txt (line 20)) has invalid metadata: .* suffix can only be used with `==` or `!=` operators
torch (>=1.9.*)
~~~~~~^
Please use pip<24.1 if you need to use this version.
ERROR: Could not find a version that satisfies the requirement pytorch_lightning==1.7.7 (from versions: 0.0.2, 0.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4.1, 0.2.5, 0.2.5.1, 0.2.5.2, 0.2.6, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.4.1, 0.3.5, 0.3.6, 0.3.6.1, 0.3.6.3, 0.3.6.4, 0.3.6.5, 0.3.6.6, 0.3.6.7, 0.3.6.8, 0.3.6.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.1.2, 0.5.1.3, 0.5.2, 0.5.2.1, 0.5.3, 0.5.3.1, 0.5.3.2, 0.5.3.3, 0.6.0, 0.7.1, 0.7.3, 0.7.5, 0.7.6, 0.8.1, 0.8.3, 0.8.4, 0.8.5, 0.9.0, 0.10.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.7.post0, 1.3.8, 1.4.0rc0, 1.4.0rc1, 1.4.0rc2, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.10.post0, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.5.post0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0rc0, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post0, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.4.post0, 1.8.5, 1.8.5.post0, 1.8.6, 1.9.0rc0, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 2.0.0rc0, 2.0.0, 2.0.1, 2.0.1.post0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.9.post0, 2.1.0rc0, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0rc0, 2.2.0, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.4.0, 2.5.0rc0, 2.5.0, 2.5.0.post0)
ERROR: No matching distribution found for pytorch_lightning==1.7.7
It seems related to this: [Bug]: WARNING: Ignoring version 1.7.6 of pytorch_lightning since it has invalid metadata: · Issue #16213 · AUTOMATIC1111/stable-diffusion-webui · GitHub
However, I seem to be unable to use pip version 24.0 even if I put it at the top of requirements.txt
One can build it by upgrading pytorch_lightning to version 2.0.0, but then some extensions break, like the SuperMerger extension, which is the one I’m interested in using, so right now I’m unable to merge new models with it.
I sent a PR to enable pip 24.0 and later.
Thanks! But now I predict that solution isn’t going to work; something deeper than that is broken. I just found your space:
That one is currently working perfectly as expected, with the SuperMerger tab appearing on it. However, if you try to duplicate it, it will build without error, yet the SuperMerger tab will be gone! You can see this one:
The SuperMerger tab will not appear because of these errors:
*** Error loading script: supermerger.py
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 920, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/ip_adapter.py", line 36, in <module>
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection, SiglipImageProcessor, SiglipVisionModel
ImportError: cannot import name 'SiglipImageProcessor' from 'transformers' (/usr/local/lib/python3.10/site-packages/transformers/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 920, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 24, in <module>
from ...loaders import FromSingleFileMixin, IPAdapterMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 910, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 922, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.loaders.ip_adapter because of the following error (look up to see its traceback):
cannot import name 'SiglipImageProcessor' from 'transformers' (/usr/local/lib/python3.10/site-packages/transformers/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/stable-diffusion-webui/modules/scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/user/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/user/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/supermerger.py", line 23, in <module>
import scripts.mergers.pluslora
File "/home/user/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/mergers/pluslora.py", line 21, in <module>
from scripts.kohyas import extract_lora_from_models as ext
File "/home/user/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/kohyas/extract_lora_from_models.py", line 12, in <module>
from scripts.kohyas import sai_model_spec,model_util,sdxl_model_util,lora
File "/home/user/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/kohyas/model_util.py", line 16, in <module>
from diffusers import AutoencoderKL, DDIMScheduler, StableDiffusionPipeline # , UNet2DConditionModel
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 911, in __getattr__
value = getattr(module, name)
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 911, in __getattr__
value = getattr(module, name)
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 910, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 922, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.ip_adapter because of the following error (look up to see its traceback):
cannot import name 'SiglipImageProcessor' from 'transformers' (/usr/local/lib/python3.10/site-packages/transformers/__init__.py)
In fact, if you restart the space, or it’s restarted to clean up after you exceed the free storage, the SuperMerger tab will disappear! Something at Hugging Face changed around four days to a week ago that broke it, because it’s the same code that was running fine back then.
If you could make a duplicate of your space that shows the SuperMerger tab like the running one does, it would solve all the problems.
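For anyone hitting the same SiglipImageProcessor error, here’s a quick way to check whether the installed diffusers/transformers pair is mismatched (my reading of the traceback above, not a confirmed root cause):

```python
# Check whether the installed transformers actually exposes the class the
# installed diffusers tries to import (per the traceback above).
import diffusers
import transformers

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("has SiglipImageProcessor:", hasattr(transformers, "SiglipImageProcessor"))
```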
UPDATE: We did it! For people from the future who want to use Automatic1111’s SuperMerger: what we did is create a file called pre-requirements.txt with the following line in it:
pip==24.0
Change requirements.txt to have the following:
spaces
torch
torchvision>=0.16.0
torchaudio
torchtext
torchdata
astunparse
blendmodes
accelerate
git+https://github.com/XPixelGroup/BasicSR
git+https://github.com/TencentARC/GFPGAN
fonts
font-roboto
gradio==3.29.0
numpy<2
omegaconf
opencv-contrib-python
requests
piexif
Pillow
pytorch_lightning>=1.7.7
realesrgan
scikit-image>=0.19
timm>=0.4.12
transformers<=4.46.3
diffusers<=0.31.0
einops
jsonmerge
clean-fid
resize-right
torchdiffeq
kornia
lark
inflection
GitPython
git+https://github.com/google-research/torchsde
safetensors
psutil
rich
httpx>=0.24.1
And if that doesn’t work, for good measure copy the contents of app.py from a sample space like this one!:
The SuperMerger extension gets to live for another day, and I’m back in business!