How do you load multiple concepts and tokens at once?

As the title says, how can this be achieved? Is this possible? For example, if you name each bin file after its concept, would the tokenizer and text encoder accept all the concepts in the embeds path, and take all the tokens as well? Does anyone have an example? I’m thinking it’d be cool to have an option to load the whole HF concepts library at once, and then try concepts out individually, or in combination, using their tokens.

This is possible, but not done automatically. In the textual inversion inference notebook there’s code to load one of the trained embeddings. By repeating that process, you can download several trained concepts and use them all in a single prompt.
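For reference, here is a minimal sketch of that loading loop, based on the notebook’s approach; the helper name load_learned_embed, the base model id, and the file paths are just placeholders:

import torch
from diffusers import StableDiffusionPipeline

def load_learned_embed(pipe, embed_path, token=None):
    # each learned_embeds.bin maps the trained placeholder token to its embedding
    loaded = torch.load(embed_path, map_location="cpu")
    trained_token = list(loaded.keys())[0]
    embeds = loaded[trained_token]

    # cast to the text encoder's dtype (important for float16 pipelines)
    dtype = pipe.text_encoder.get_input_embeddings().weight.dtype
    embeds = embeds.to(dtype)

    # register the token, grow the embedding matrix, and write the vector in
    token = token if token is not None else trained_token
    if pipe.tokenizer.add_tokens(token) == 0:
        raise ValueError(f"The tokenizer already contains the token {token}.")
    pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
    token_id = pipe.tokenizer.convert_tokens_to_ids(token)
    pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embeds
    return token

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# repeat for as many downloaded concepts as you like
for path in ["cat-toy/learned_embeds.bin", "alf/learned_embeds.bin"]:
    print("loaded token:", load_learned_embed(pipe, path))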

That’s sorta what I figured, but I wasn’t entirely sure. It looked like the method names actually expected multiple embeds to be loaded, not just one.

Another question. That notebook has a hidden example for loading a manual embed from a URL. But when I do this, I get an invalid value error. Is this feature currently broken? It seems to be related to manually adding the token for the downloaded embed instead of using the token file.

I think that cell should work if you put a URL to an embeddings file in embeds_url and leave the other one empty. For example, you can try this URL: https://huggingface.co/sd-concepts-library/alf/resolve/main/learned_embeds.bin which I took from the “Files” tab of the corresponding concept trained by a community member here.
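If you prefer to do the download outside the notebook, something like this sketch works too (the output filename is arbitrary):

import requests

embeds_url = "https://huggingface.co/sd-concepts-library/alf/resolve/main/learned_embeds.bin"
resp = requests.get(embeds_url)
resp.raise_for_status()
with open("learned_embeds.bin", "wb") as f:
    f.write(resp.content)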

The downside is that you need to know the token that was used for training. You can easily print it in the notebook by adding print(trained_token) as the last line of the function in the next cell.
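Alternatively, you can inspect the downloaded file directly; a minimal sketch, assuming it was saved as learned_embeds.bin:

import torch

# the file is a dict with a single entry: {trained_token: embedding_tensor}
loaded = torch.load("learned_embeds.bin", map_location="cpu")
trained_token = list(loaded.keys())[0]
print(trained_token)  # e.g. <alf> for the concept linked above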

Hi there, I am using two embeddings I trained (one for an object, one for a style). I loaded both embeddings, but the generated image only shows the object; the style is completely lost.

Hi, is there any update on how to load multiple concepts? Can someone provide code examples?
From here: diffusers/examples/textual_inversion at main · huggingface/diffusers · GitHub
I trained one concept and one style (two concepts total), but following the example below, how do I load one concept and one style, or multiple concepts, into a single prompt? Each training run produces a different model_id to load from, so how do I merge them? If anyone knows, please help. Thanks a lot.

import torch
from diffusers import StableDiffusionPipeline

# path produced by the textual inversion training run
model_id = "path-to-your-trained-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A backpack"

image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("cat-backpack.png")

Hi, did you solve the problem you mentioned? If so, could you please share? Thanks.

Hello, did you solve the problem you mentioned? If so, could you please share? Thanks.

Since you are using textual inversion embeddings, you just need to load the base SD model and your trained embeddings. My problem is that my embeddings are under-trained.
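Concretely, something like this sketch (reusing the load_learned_embed helper from earlier in the thread; the paths and prompt are placeholders):

import torch
from diffusers import StableDiffusionPipeline

# load the base model once, then add both trained embeddings to the same
# tokenizer and text encoder
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

object_token = load_learned_embed(pipe, "object/learned_embeds.bin")
style_token = load_learned_embed(pipe, "style/learned_embeds.bin")

# both tokens can now be combined in a single prompt
prompt = f"a photo of {object_token} in the style of {style_token}"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("combined.png")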

Thanks! I was reading the code again last night and I think I understand what you said. Let me find time to try it today; I’ll follow up if I have any questions. Thanks!

Thanks, I tried and I think it works.

Update: this PR is about to be merged and it will make this process easier. Take a look at this example by Patrick: https://github.com/huggingface/diffusers/pull/2009#issuecomment-1481017738
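With that change, loading multiple concepts becomes a one-liner per embedding. A sketch assuming a diffusers version that includes the PR; the two concept repos and their tokens (<cat-toy>, <alf>) are examples from the sd-concepts-library:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# each call downloads the embedding and registers its token automatically
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
pipe.load_textual_inversion("sd-concepts-library/alf")

image = pipe("a <cat-toy> sitting next to <alf>", num_inference_steps=50).images[0]
image.save("multi-concept.png")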