Troubleshooting a Gradio dropdown component error when attempting to train a first LoRA model using Kohya SS in Docker

Need Help: First LoRA Training Attempt with Kohya SS (Docker) Failing

Hello everyone,

I’m new to the community and trying to train my first LoRA model using Kohya SS, but I’ve run into some issues. I’d really appreciate any insight or help you could provide.

Setup & Process

  • Running Kohya_ss GUI in Docker release v25.0.3 (latest version as of April 7, 2025)
  • Training an SD1.5 model with custom images
  • Used CVAT to label my dataset and exported as CVAT 1.1
  • Ran a Python script to align my metadata with Kohya’s expected format

The Error

When attempting to train, I’m getting the following error from within the container:

Traceback (most recent call last):
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/blocks.py", line 2133, in process_api
    inputs = await self.preprocess_data(
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1814, in preprocess_data
    processed_input.append(block.preprocess(inputs_cached))
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/components/dropdown.py", line 194, in preprocess
    choice_values = [value for _, value in self.choices]
  File "/home/1000/.local/lib/python3.10/site-packages/gradio/components/dropdown.py", line 194, in <listcomp>
    choice_values = [value for _, value in self.choices]
ValueError: not enough values to unpack (expected 2, got 0)

This seems to indicate that Gradio is trying to process an empty or misformatted dropdown list, expecting (label, value) tuples and getting either a flat list or nothing at all.
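
For reference, here is a minimal, plain-Python sketch of that failure mode (the choice values below are made up, not taken from Kohya's code). Gradio stores a dropdown's choices as (label, value) pairs and unpacks each pair exactly as in the list comprehension from the traceback, so a single empty entry reproduces the error:

# Illustrative only: reproducing the unpacking error without Gradio.
choices_ok = [("Model A", "model_a.safetensors"), ("Model B", "model_b.safetensors")]
choices_bad = [("Model A", "model_a.safetensors"), ()]  # one empty entry slipped in

values = [value for _, value in choices_ok]   # works
values = [value for _, value in choices_bad]  # ValueError: not enough values to unpack (expected 2, got 0)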


:white_check_mark: What I’ve Tried

  • :white_check_mark: Verified Docker and GPU setup is working correctly
  • :white_check_mark: Using the latest version of Kohya SS from the main GitHub repo
  • :white_check_mark: Training data is mounted correctly in /app/data/NP_LORA_V1.0, and contains .jpg/.png + .txt caption files
  • :white_check_mark: Pretrained model (stable-diffusion-v1-5.safetensors) is located in /app/models
  • :white_check_mark: All presets (Dreambooth, Finetune, LoRA) exist in /app/presets
  • :white_check_mark: Verified all dropdowns in the UI are populated (or untouched) before clicking “Print Training Command”
  • :white_check_mark: Tried to locate and patch dropdown.py inside the container (to add a safe fallback for empty lists; see the version/path check after this list), but:
    • venv/bin/activate does not exist
    • ~/.local/lib/... does not exist
    • dropdown.py could not be located via find or python -c "import gradio; print(gradio.__file__)"
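
For reference, the kind of check that should answer where Gradio actually lives is the snippet below, run with the same Python interpreter that serves the GUI (the container name is a placeholder for whatever your setup uses). If the import fails, the shell session is simply using a different interpreter or user than the GUI process, which would also explain why find and the ~/.local paths came up empty:

# Run with the GUI's own interpreter, e.g.:
#   docker exec -it <your_container> python3 -c "import gradio, inspect; print(gradio.__version__, inspect.getfile(gradio))"
import gradio
import inspect

print("gradio version:", gradio.__version__)
print("installed at:", inspect.getfile(gradio))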

Goal

I’d like to resolve this error so I can use Kohya’s WebUI without Gradio crashing when I click on Print Training Command or Start Training. I’m trying to avoid patching or hacking around this unless absolutely necessary — ideally, I’d like a clean solution that works with the official Docker build.


Questions

  1. Has anyone encountered this specific error before with Kohya SS + Gradio?
  2. Could this be related to how I processed my CVAT image + caption dataset?
  3. Is there something specific I need to configure in the Kohya UI (or dropdown defaults) that I might be missing?
  4. Are there any known issues with the latest version (v25.x) that could cause this?
  5. Where exactly is Gradio installed inside the current Docker container?
  6. Is there a known patch or fix for the dropdown.py unpacking issue?

Any help or guidance would be greatly appreciated! I’m eager to get my first LoRA model training successfully.

Thank you for taking the time to read this.

There doesn’t seem to be an exact duplicate issue (though there are plenty of issues involving Gradio errors). Personally, I suspect something may have gone wrong when the author fixed the pydantic==2.10.6-related errors with Hugging Face and Gradio last week.

If you’re running it in a local Docker container, a slightly older release shouldn’t change the training itself, so I think it’s worth rolling back to one. There must have been a point when it was working.

Gradio’s behavior changes quite a bit between minor versions, so the author often doesn’t notice bugs in environments other than their own…

Thank you for your helpful response! Do you happen to know which specific Docker image version of Kohya SS was working reliably before the pydantic/Gradio updates? I appreciate your insight about the potential cause being related to recent changes.

It seems v25.0.3 is a bug-fix release, and Gradio was bumped to the 5.x series in the version just before it, v25.0.2. v25.0.0 still appears to use the older Gradio and to be stable (just my impression), so v25.0.0 is probably the version with the fewest problems among the newer releases.

The problem fixed in v25.0.3 was a serious error that appeared suddenly, so I suspect that release didn’t get much testing beyond making that error go away.

If that doesn’t work, try switching to an older version, and if it still doesn’t work, start questioning your environment and dataset. That said, I think the problem this time is most likely a version issue on the library side…
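
If you do end up checking the dataset, a rough pairing check like this (using the data path from your first post; just a sketch) will at least rule out images without captions:

# List images in the training folder that have no matching .txt caption file.
from pathlib import Path

data_dir = Path("/app/data/NP_LORA_V1.0")
for image in sorted(data_dir.rglob("*")):
    if image.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        if not image.with_suffix(".txt").exists():
            print(f"Missing caption for {image}")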

Hey! I looked into your post and the traceback you shared, and it seems like you’re very close — your setup is solid overall, but you’ve probably hit a common issue caused by version mismatch or malformed dropdown data.

Gradio is expecting each dropdown entry to be a tuple in the form of (label, value), but it’s likely receiving either a flat list, a single string, or an empty entry. The error:

ValueError: not enough values to unpack (expected 2, got 0)

usually means your list structure doesn’t match Gradio’s expected input.
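
For what it’s worth, recent Gradio releases (4.x/5.x) accept either plain strings or (label, value) pairs in a Dropdown’s choices, and both of the constructions below are valid (the component names are just examples, not Kohya’s actual code). The trouble starts when the list that ends up in choices contains something that is neither, such as an empty entry:

import gradio as gr

# Both forms are fine; Gradio normalizes plain strings into (label, value) pairs internally.
model_picker = gr.Dropdown(choices=["sd15", "sdxl"], label="Model type")
preset_picker = gr.Dropdown(
    choices=[("LoRA preset", "lora.json"), ("Finetune preset", "ft.json")],
    label="Preset",
)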

Here’s what might be causing it:

  1. You’re using the latest version of Kohya SS and Gradio, which is great, but they might not yet be fully compatible. A recent update in either could have changed the expected format of dropdown components.

  2. One of your presets or dropdowns (e.g., model type, training preset, or scheduler) might be missing entries or formatted incorrectly. Even one broken file can trigger this.

  3. Sometimes, an empty directory (e.g., /app/presets) or a .json file without proper (label, value) tuples can break the whole UI.

What you can try:

Double-check all dropdown source lists (especially presets and model files) to ensure they have the correct structure.
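
A rough way to do that for the presets (path taken from your post; purely a sketch) is to make sure every JSON file at least parses, since one malformed file can leave a dropdown with no usable entries:

# Report any preset file under /app/presets that fails to parse as JSON.
import json
from pathlib import Path

for preset in sorted(Path("/app/presets").rglob("*.json")):
    try:
        json.loads(preset.read_text())
    except (OSError, json.JSONDecodeError) as exc:
        print(f"Could not parse {preset}: {exc}")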

Temporarily pin your Gradio version to one that’s known to work well with Kohya SS. Try adding this to your Docker or install script:

pip install gradio==3.32.0

If you’re customizing the code, wrap the dropdown list generation with a simple check like:

dropdown_values = [(x, x) for x in your_list if isinstance(x, str)]

to ensure nothing weird gets passed in.
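
If you want something a bit more defensive than that one-liner (still purely illustrative; your_list stands for whatever feeds the dropdown), keep well-formed entries and drop everything else:

# Keep non-empty strings as (x, x) pairs and well-formed 2-item pairs; skip anything else.
dropdown_values = []
for entry in your_list:
    if isinstance(entry, str) and entry:
        dropdown_values.append((entry, entry))
    elif isinstance(entry, (tuple, list)) and len(entry) == 2:
        dropdown_values.append(tuple(entry))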


You’re doing everything right — Docker, GPU, dataset prep — it’s just one of those compatibility edges. Let me know if you want me to help you inspect your presets or model list structure more directly.

Good luck and welcome to the community!

Thank you so much for this detailed analysis! You’ve really pinpointed what’s happening with the error. I wasn’t aware that Gradio expected tuples in that format for dropdowns, so this makes a lot of sense now.

I’ll try your suggestions, especially pinning the Gradio version to 3.32.0, since that seems like the quickest fix to try first. If that doesn’t work, I’ll check the preset directories and files to see if there are any formatting issues.

Just to clarify: are there any specific preset files or directories that tend to cause this issue more often than others in your experience? And would a simple Docker pull of an older Kohya SS image potentially resolve this, or should I specifically focus on the Gradio version?
