Target_size issue

I am using the ImageToImageTargetSize parameter with InferenceClient:

from huggingface_hub.inference._generated.types.image_to_image import ImageToImageTargetSize

target_size=ImageToImageTargetSize(height=256, width=256)

But the output is still the same size as the input image. Can anyone help me figure out what I am doing wrong?
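
For context, the full call was presumably something along these lines (the model id and input image here are placeholders, not the original code):

from huggingface_hub import InferenceClient
from huggingface_hub.inference._generated.types.image_to_image import ImageToImageTargetSize

# hypothetical model and input, just to show where target_size is passed
client = InferenceClient(model="Qwen/Qwen-Image-Edit")
out = client.image_to_image(
    "input.jpg",
    prompt="enhance the photo",
    target_size=ImageToImageTargetSize(height=256, width=256),
)
print(out.size)  # reportedly comes back at the input resolution instead of (256, 256)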


The parameter seems to be ignored…

Depending on the model, resolution constraints or the input image resolution may take precedence, causing the output resolution parameter to be ignored. Or is it a bug?

from huggingface_hub import InferenceClient, ImageToImageTargetSize

# provider is selected on the client, not per call; the hub id for fal is "fal-ai"
client = InferenceClient(model="Qwen/Qwen-Image-Edit", provider="fal-ai")
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_homepage.jpg"  # input is (1312, 800)

img = client.image_to_image(
    url,
    prompt="cinematic lighting",
    target_size=ImageToImageTargetSize(height=256, width=256),
)
print(img.size)  # still (1312, 800): target_size is ignored
img.save("out.jpg")

I have read through the image-to-image files in the inference code and found two relevant classes. ImageToImageTargetSize is the one referenced from the main parameters class.

ImageToImageOutput is the other one, which I guess serves a similar role.

Here you can find it: https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py
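
For reference, the relevant definitions in that file look roughly like this (paraphrased here with plain dataclasses; the actual file is generated and uses the library's own dataclass helper and docstrings):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ImageToImageTargetSize:
    # requested output resolution, in pixels
    height: int
    width: int

@dataclass
class ImageToImageParameters:
    # optional inference parameters; target_size nests the class above
    guidance_scale: Optional[float] = None
    negative_prompt: Optional[str] = None
    num_inference_steps: Optional[int] = None
    target_size: Optional[ImageToImageTargetSize] = None

@dataclass
class ImageToImageOutput:
    # the generated image returned by the endpoint
    image: Any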

I think it is a bug and I have reported it.


Similar behavior was observed with prithivMLmods/Monochrome-Pencil. If the size parameter doesn’t work even with a Flux Kontext LoRA, then there are probably very few Endpoints that support size specification…

Could it be that parameters aren’t being passed correctly when TGI uses Diffusers as the backend…? @michellehbn

The bug has been fixed and released in huggingface_hub==0.35.3.
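
If anyone wants to verify locally, a quick sanity check of the installed version before re-running the snippet above (upgrade with pip install -U huggingface_hub if needed):

import huggingface_hub

# the fix shipped in 0.35.3, so any version at or above that should forward target_size
print(huggingface_hub.__version__)  # expect "0.35.3" or newer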

