Unable to load Flux LoRA trained with OneTrainer – NotImplementedError in _convert_mixture_state_dict_to_diffusers #11441

Open
iamwavecut opened this issue Apr 28, 2025 · 1 comment
Labels: bug (Something isn't working)

Comments

iamwavecut commented Apr 28, 2025

Describe the bug

Loading a LoRA that was:

  • trained with OneTrainer (master, FLUX-1 mode)
  • exported as a single .safetensors file (on Civitai)

via DiffusionPipeline.load_lora_weights() (or indirectly through Nunchaku v0.2.0's compose_lora) crashes at application start-up with:

File "diffusers/loaders/lora_conversion_utils.py", line 76, in _convert_mixture_state_dict_to_diffusers
    raise NotImplementedError
NotImplementedError

The exception is thrown because the LoRA state-dict contains keys that start with
lora_transformer_single_transformer_blocks_…, a pattern that the conversion helper does not yet handle.

Here are the key patterns in the state_dict of the LoRA in question (keys differing only in a numeric index are collapsed to %d; see the sketch after this list for one way to produce such a listing):

lora_transformer_context_embedder
lora_transformer_norm_out_linear
lora_transformer_proj_out
lora_transformer_single_transformer_blocks_%d_attn_to_k
lora_transformer_single_transformer_blocks_%d_attn_to_q
lora_transformer_single_transformer_blocks_%d_attn_to_v
lora_transformer_single_transformer_blocks_%d_norm_linear
lora_transformer_single_transformer_blocks_%d_proj_mlp
lora_transformer_single_transformer_blocks_%d_proj_out
lora_transformer_time_text_embed_guidance_embedder_linear_%d
lora_transformer_time_text_embed_text_embedder_linear_%d
lora_transformer_time_text_embed_timestep_embedder_linear_%d
lora_transformer_transformer_blocks_%d_attn_add_k_proj
lora_transformer_transformer_blocks_%d_attn_add_q_proj
lora_transformer_transformer_blocks_%d_attn_add_v_proj
lora_transformer_transformer_blocks_%d_attn_to_add_out
lora_transformer_transformer_blocks_%d_attn_to_k
lora_transformer_transformer_blocks_%d_attn_to_out_0
lora_transformer_transformer_blocks_%d_attn_to_q
lora_transformer_transformer_blocks_%d_attn_to_v
lora_transformer_transformer_blocks_%d_ff_context_net_0_proj
lora_transformer_transformer_blocks_%d_ff_context_net_2
lora_transformer_transformer_blocks_%d_ff_net_0_proj
lora_transformer_transformer_blocks_%d_ff_net_2
lora_transformer_transformer_blocks_%d_norm1_context_linear
lora_transformer_transformer_blocks_%d_norm1_linear
lora_transformer_x_embedder
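
A minimal sketch of one way to produce such a listing (assuming a plain safetensors checkpoint; the regex below collapses every numeric segment, so its output may differ slightly from the hand-curated list above):

import re
from safetensors import safe_open

LORA_PATH = "./rus1.3_100k.safetensors"

patterns = set()
with safe_open(LORA_PATH, framework="pt") as f:
    for key in f.keys():
        # Drop the ".lora_down.weight" / ".lora_up.weight" / ".alpha" suffix,
        # then collapse numeric indices so repeated blocks fold into one pattern.
        base = key.split(".")[0]
        patterns.add(re.sub(r"_\d+", "_%d", base))

print("\n".join(sorted(patterns)))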

Reproduction

Download the LoRA file first.

from diffusers import DiffusionPipeline
import torch

MODEL_ID = "black-forest-labs/FLUX.1-dev"
LORA_PATH = "./rus1.3_100k.safetensors"

pipe = DiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(LORA_PATH)          # ← crashes

#  ── OR (with nunchaku) ─────────────────────────────────────────
from nunchaku.lora.flux import compose_lora
compose_lora([(LORA_PATH, 1.0)])           # same stack-trace
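
Until _convert_mixture_state_dict_to_diffusers handles this pattern, here is a rough, illustrative workaround sketch that rewrites only the unhandled single-block keys into diffusers' peft-style layout before loading. The module-name mapping and the lora_down/lora_up → lora_A/lora_B suffix translation are assumptions based on the key list above and on diffusers' FluxTransformer2DModel attribute names; a complete fix would also need to convert the remaining key families and fold in the alpha scales:

import re
import safetensors.torch

# Hypothetical mapping from the OneTrainer/Kohya-style single-block module
# names seen in the key list above to diffusers' attribute paths.
SINGLE_BLOCK_MODULES = {
    "attn_to_k": "attn.to_k",
    "attn_to_q": "attn.to_q",
    "attn_to_v": "attn.to_v",
    "norm_linear": "norm.linear",
    "proj_mlp": "proj_mlp",
    "proj_out": "proj_out",
}
# Assumed suffix translation: Kohya's lora_down/lora_up vs. peft's lora_A/lora_B.
SUFFIXES = {"lora_down.weight": "lora_A.weight", "lora_up.weight": "lora_B.weight"}

KEY_RE = re.compile(r"^lora_transformer_single_transformer_blocks_(\d+)_(\w+)\.(.+)$")

def convert_single_block_keys(state_dict):
    converted = {}
    for key, tensor in state_dict.items():
        m = KEY_RE.match(key)
        if m and m.group(2) in SINGLE_BLOCK_MODULES and m.group(3) in SUFFIXES:
            idx, module, suffix = m.groups()
            new_key = (f"transformer.single_transformer_blocks.{idx}."
                       f"{SINGLE_BLOCK_MODULES[module]}.{SUFFIXES[suffix]}")
            converted[new_key] = tensor
        else:
            converted[key] = tensor  # everything else (incl. alphas) left untouched
    return converted

state_dict = safetensors.torch.load_file(LORA_PATH)
pipe.load_lora_weights(convert_single_block_keys(state_dict))  # illustrative only

Note that load_lora_weights accepts a state dict directly, so a rewrite like this can happen entirely in user code without touching the library.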

Logs

ERROR:    Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 692, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/app/api.py", line 44, in lifespan
    composed_lora = compose_lora(enabled_loras)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nunchaku/lora/flux/compose.py", line 17, in compose_lora
    lora = to_diffusers(lora)
            ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/nunchaku/lora/flux/diffusers_converter.py", line 17, in to_diffusers
    new_tensors, alphas = FluxLoraLoaderMixin.lora_state_dict(tensors, return_alphas=True)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_pipeline.py", line 1886, in lora_state_dict
    state_dict = _convert_kohya_flux_lora_to_diffusers(state_dict)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_conversion_utils.py", line 898, in _convert_kohya_flux_lora_to_diffusers
    return _convert_mixture_state_dict_to_diffusers(state_dict)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/diffusers/loaders/lora_conversion_utils.py", line 731, in _convert_mixture_state_dict_to_diffusers
    raise NotImplementedError
NotImplementedError

ERROR:    Application startup failed. Exiting.

System Info

diffusers@main as of April 28, 2025

Who can help?

@sayakpaul I believe you've been involved in fixing OneTrainer LoRA loading issues recently; would you be so kind as to take an educated guess at this one?

iamwavecut added the bug (Something isn't working) label on Apr 28, 2025
sayakpaul (Member) commented:

Could you provide a full yet minimal reproducible snippet so that I could take a look?
