
Conversation

@hlky (Contributor) commented Dec 4, 2025

What does this PR do?

In the original code this is not a typical ControlNet: it is integrated into the transformer and relies on operations performed in the transformer's forward. This PR implements it as a typical ControlNet by duplicating the necessary operations from the transformer's forward into the ControlNet's forward, and by passing the transformer to ZImageControlNetModel's forward so it can access the required transformer modules. As a result it is perhaps a little slower than the original implementation, but it keeps things clean and in style.

ZImageTransformer2DModel has minimal changes. controlnet_block_samples is introduced: a Dict[int, torch.Tensor] returned from ZImageControlNetModel, where the int key is the index of the ZImageTransformer2DModel layer the sample is applied to. This is another difference from a typical ControlNet, where every block has the ControlNet output applied.

ZImageControlNetPipeline also has minimal changes compared to ZImagePipeline: it adds a prepare_image function and the control_image and controlnet_conditioning_scale parameters, prepares and encodes control_image, and calls the controlnet to obtain controlnet_block_samples, which are passed to the transformer. control_guidance_start/control_guidance_end is not yet implemented. A minimal sketch of the block-indexing behaviour follows.
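To make that concrete, here is a rough sketch of how a Dict[int, torch.Tensor] of block samples could be consumed inside a transformer's layer loop. It is illustrative only: the function name, the nn.ModuleList argument, and the single-tensor block signature are simplifying assumptions, not the actual ZImageTransformer2DModel code.

from typing import Dict, Optional

import torch
from torch import nn


def apply_blocks_with_controlnet(
    hidden_states: torch.Tensor,
    transformer_blocks: nn.ModuleList,
    controlnet_block_samples: Optional[Dict[int, torch.Tensor]] = None,
) -> torch.Tensor:
    # Only the layer indices present in the dict receive a ControlNet residual,
    # unlike a typical ControlNet where every block gets an output applied.
    for index, block in enumerate(transformer_blocks):
        hidden_states = block(hidden_states)
        if controlnet_block_samples is not None and index in controlnet_block_samples:
            hidden_states = hidden_states + controlnet_block_samples[index]
    return hidden_states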

Test code

python scripts/convert_z_image_controlnet_to_diffusers.py --original_controlnet_repo_id "alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union" --filename "Z-Image-Turbo-Fun-Controlnet-Union.safetensors" --output_path "z-image-controlnet-hf"

import torch
from diffusers import ZImageControlNetPipeline
from diffusers import ZImageControlNetModel
from diffusers.utils import load_image

controlnet_model = "z-image-controlnet-hf"
controlnet = ZImageControlNetModel.from_pretrained(
    controlnet_model, torch_dtype=torch.bfloat16
)
pipe = ZImageControlNetPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")
control_image = load_image("https://huggingface.co/alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union/resolve/main/asset/pose.jpg?download=true")
prompt = "一位年轻女子站在阳光明媚的海岸线上,白裙在轻拂的海风中微微飘动。她拥有一头鲜艳的紫色长发,在风中轻盈舞动,发间系着一个精致的黑色蝴蝶结,与身后柔和的蔚蓝天空形成鲜明对比。她面容清秀,眉目精致,透着一股甜美的青春气息;神情柔和,略带羞涩,目光静静地凝望着远方的地平线,双手自然交叠于身前,仿佛沉浸在思绪之中。在她身后,是辽阔无垠、波光粼粼的大海,阳光洒在海面上,映出温暖的金色光晕。"
image = pipe(
    prompt,
    control_image=control_image,
    controlnet_conditioning_scale=0.75,
    height=1728,
    width=992,
    num_inference_steps=9,
    guidance_scale=0.0,
    generator=torch.Generator("cuda").manual_seed(43),
).images[0]
image.save("zimage.png")

Output

[Side-by-side comparison images: output from this PR vs. the original implementation]

Fixes #12769 (Z-Image Turbo Controlnet Union is out)

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
