Describe the bug
UniDiffuser's `enable_model_cpu_offload` fails with the error shown in the Logs section below (reproduction steps in the Reproduction section).

I took a deeper look, and it seems that in this case `self.text_decoder.encode` is called after `text_encoder` and before `image_encoder`. The problem is that this call goes through a submodule of `text_decoder` that is not part of `model_cpu_offload_seq`, so no hook is registered for it by `enable_model_cpu_offload` and it becomes an orphan: its weights stay on CPU while its inputs are already on the accelerator. I don't have a good idea for a fix yet, since it is an embedded submodule of a sub-model and whether it is triggered is a runtime decision based on `reduce_text_emb_dim`, but I'm willing to contribute the fix.

@yiyixuxu @sayakpaul @DN6
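One direction I can imagine, just to make the discussion concrete (a rough sketch, not a proposed patch; the helper name, its placement, and the single `prompt_embeds` argument are my own assumptions about the `encode` call): have the call site move the decoder to the execution device itself, since the offload hook installed by `enable_model_cpu_offload` only intercepts the top-level `forward()` and is never triggered by a direct `.encode()` call.

```python
import torch
from diffusers import UniDiffuserPipeline


def encode_text_latents_on_device(pipe: UniDiffuserPipeline, prompt_embeds: torch.Tensor):
    """Hypothetical workaround helper (name and signature are mine, not diffusers API):
    make sure the text decoder's weights are on the pipeline's execution device before
    using its encode path, because its CPU-offload hook only fires on forward()."""
    device = pipe._execution_device
    if pipe.text_decoder.device != device:
        # A direct .encode() call bypasses the accelerate hook, so move the weights manually.
        pipe.text_decoder.to(device)
    return pipe.text_decoder.encode(prompt_embeds)
```

This is obviously incomplete: moving the decoder manually means it is not offloaded back until its own hook eventually runs, so a real fix would probably need to cooperate with the hook chain (or somehow cover this submodule in the offload sequence).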
Reproduction
pytest -rA tests/pipelines/unidiffuser/test_unidiffuser.py::UniDiffuserPipelineFastTests::test_model_cpu_offload_forward_pass
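For reference, outside the test suite I would expect something along these lines to hit the same failure on CUDA (a hedged sketch: the checkpoint id, mode, and prompt are just examples, and whether `self.text_decoder.encode` is reached depends on `reduce_text_emb_dim` for the loaded checkpoint):

```python
import torch
from diffusers import UniDiffuserPipeline

pipe = UniDiffuserPipeline.from_pretrained(
    "thu-ml/unidiffuser-v1", torch_dtype=torch.float16
)
# Offload hooks are registered only for the models listed in model_cpu_offload_seq;
# the text decoder's encode path is not covered by any of them.
pipe.enable_model_cpu_offload()

# Prompt encoding runs text_encoder on the accelerator, then (when reduce_text_emb_dim
# is True) passes the resulting embeddings to self.text_decoder.encode while the
# decoder weights are still on CPU, producing the device-mismatch error below.
pipe.set_text_to_image_mode()
image = pipe(prompt="an astronaut riding a horse", num_inference_steps=20).images[0]
```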
Running the pytest command above produces the error log below. The same issue happens on CUDA too.

Logs

E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and xpu:0! (when checking argument for argument mat1 in method wrapper_XPU_addmm)
System Info
N/A
Who can help?
No response