Why you’re seeing JSON instead of an image
Even if you send Accept: image/png, you only get an actual PNG when the server returns a successful image response. In practice, when you see JSON it’s almost always one of these:
- It’s an error response (auth, wrong endpoint/provider, gated model not accepted, unsupported model, etc.). Errors are typically returned as JSON even if you asked for image/png.
- You’re not actually sending the payload the endpoint expects (wrong JSON keys like pmt instead of inputs, wrong body mode, missing Content-Type: application/json).
- n8n did receive binary, but you’re looking at the JSON wrapper (n8n always outputs “items” as JSON; binary lives in a separate “Binary” property when configured correctly). n8n’s HTTP Request node explicitly separates these. (docs.n8n.io)
Hugging Face’s Inference Providers “Text to Image” task is specified to return the generated image as raw bytes on success. (Hugging Face)
So: JSON = not a successful image bytes response (or you’re not capturing binary correctly).
The request Hugging Face expects (this matters)
For Hugging Face Inference Providers Text-to-Image, the spec is:
- Header: Authorization: Bearer hf_... where the token has “Inference Providers” permission (Hugging Face)
- JSON body uses inputs (string prompt), not pmt (Hugging Face)
- Successful response body is raw image bytes (Hugging Face)
Minimal correct JSON body
{
"inputs": "Astronaut riding a horse"
}
Optional parameters (examples)
{
"inputs": "Astronaut riding a horse",
"parameters": {
"width": 1024,
"height": 1024,
"num_inference_steps": 20,
"guidance_scale": 3.5,
"seed": 12345
}
}
These parameter names (width/height/steps/guidance/seed/etc.) are exactly the ones listed in the Text-to-Image task spec. (Hugging Face)
If you’re sending something like { "pmt": "..." } or { "inputs": { "pmt": "..." } }, you’re not matching the spec, and the router will likely reject it with a JSON error rather than image bytes.
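A quick local check (no network, no token needed) can confirm your request body actually uses the spec’s inputs key before you wire it into n8n. The prompt text here is just an example:

```shell
# Validate the body shape locally before sending it anywhere.
BODY='{"inputs":"Astronaut riding a horse","parameters":{"width":1024,"height":1024}}'
echo "$BODY" | python3 -c '
import json, sys
d = json.load(sys.stdin)
# The Text-to-Image spec wants a string prompt under "inputs".
assert isinstance(d.get("inputs"), str), "body must have a string \"inputs\" key"
print("body OK")
'
```

If you paste a body with pmt instead, the assertion fires immediately instead of a mystery JSON error later.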
n8n: how to prove what you’re getting back (and why “File” seems to do nothing)
Step 1 — turn on the 2 options that make debugging obvious
In the HTTP Request node:
- Response → “Include Response Headers and Status” (so you can see status code + content-type) (docs.n8n.io)
- (Optionally) “Never Error” so the node still outputs the response body even on non-2xx (docs.n8n.io)
Now you can distinguish:
- content-type: image/png + status 200 ⇒ you got the image bytes
- content-type: application/json or 401/403/404/503 ⇒ you got an error JSON
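That decision rule can be sketched as a tiny shell helper (names and wording are hypothetical, but the logic is exactly the status + content-type check above):

```shell
# Classify a response the same way you'd read the n8n output:
# 2xx + image/* means binary image bytes; JSON means an error to read.
classify_response() {
  local status="$1" ctype="$2"
  case "$status/$ctype" in
    2*/image/*)          echo "image bytes: capture as binary" ;;
    */application/json)  echo "JSON body: read the error message" ;;
    *)                   echo "unexpected: status=$status content-type=$ctype" ;;
  esac
}

classify_response 200 image/png
classify_response 404 application/json
```

The first call prints the “binary” branch, the second the “error JSON” branch; that is the whole debugging fork in four lines.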
Step 2 — set the response to binary the n8n way
In Response settings:
- Response Format: File
- Put Output in Field: choose a name (example: image) (docs.n8n.io)
This is the part people miss: in n8n, “File” means binary data is stored under a binary property name (whatever you set in “Put Output in Field”). (docs.n8n.io)
Step 3 — where the image actually appears in n8n
After executing the node, look at the node output:
- JSON tab: you may still see JSON metadata
- Binary tab: you should see something like image (your field name)
If you never get a Binary tab entry, then the response wasn’t binary (or n8n couldn’t treat it as such).
The 3 most common concrete mistakes in your exact setup
Mistake A — Missing Authorization header
Your description didn’t mention sending Authorization: Bearer hf_....
Without it, you typically get JSON (401/403). Hugging Face’s Text-to-Image spec explicitly requires the bearer token with “Inference Providers” permission. (Hugging Face)
Fix (n8n Headers):
Authorization: Bearer <your_hf_token>
Accept: image/png
Content-Type: application/json
Mistake B — Wrong JSON key (pmt instead of inputs)
The Text-to-Image API expects inputs as the prompt string. (Hugging Face)
If you send pmt, you’re not calling the spec the router expects, so JSON errors are likely.
Fix: Use:
{ "inputs": "your prompt" }
Mistake C — Wrong provider for the model (very likely with FLUX)
black-forest-labs/FLUX.1-schnell is commonly run via providers like Fal AI / Replicate, and the official “first call” guide demonstrates using FLUX with provider fal (via the widget + code snippet). (Hugging Face)
Also, the Hub’s provider listings show FLUX.1-schnell under Fal AI’s text-to-image models. (Hugging Face)
If you force the HF Inference provider route for a model that isn’t actually hosted there, you can get JSON errors (often 404). A similar “wrong endpoint/wrong provider → expected 404” pattern is documented in community debugging writeups. (Hugging Face Forums)
Practical takeaway: if your response headers show 404/JSON, it may not be your n8n config—it may be that that provider route can’t serve that model.
A quick “no-n8n” sanity test (same request, saves to file)
Run this from any shell. If it produces a valid PNG, your endpoint + token + payload are correct; if it saves JSON to the file, you’ll immediately see the real error message.
curl -v \
"https://router.huggingface.co/hf-inference/models/black-forest-labs/FLUX.1-schnell" \
-H "Authorization: Bearer $HF_TOKEN" \
-H "Accept: image/png" \
-H "Content-Type: application/json" \
-d '{"inputs":"Astronaut riding a horse"}' \
--output out.png
- If out.png opens as an image ⇒ your n8n issue is just capture/field settings.
- If out.png contains JSON text ⇒ read it; it will usually say exactly what’s wrong (auth, gated, wrong provider, model not available, etc.).
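If you’re not sure which case you hit, you don’t have to open the file in an image viewer: a real PNG always starts with the same 8-byte signature. A small sketch (the is_png helper name is made up; `file out.png` does the same job if it’s installed):

```shell
# Print the first 8 bytes of a file as hex, e.g. "89504e470d0a1a0a" for PNG.
is_png() { head -c 8 "$1" 2>/dev/null | od -An -v -tx1 | tr -d ' \n'; }

if [ "$(is_png out.png)" = "89504e470d0a1a0a" ]; then
  echo "out.png is a real PNG"
else
  echo "out.png is not a PNG; inspect the body:"
  head -c 300 out.png 2>/dev/null; echo
fi
```

When the check fails, the first 300 bytes are almost always a readable JSON error message telling you why.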
If you want FLUX to work reliably: two working routes
Route 1 — Use the provider that actually serves FLUX via Hugging Face (recommended)
Hugging Face’s own “Your First Inference Provider Call” guide uses FLUX.1-schnell via Inference Providers and highlights selecting a provider (example uses fal). (Hugging Face)
And the Hub API/provider listings show FLUX.1-schnell available under Fal AI. (Hugging Face)
If you want to confirm availability programmatically, the Hub API supports listing models by provider + pipeline tag. (Hugging Face)
Route 2 — Call Fal directly (JSON response with image URLs)
Fal’s synchronous HTTP endpoints are documented as:
POST https://fal.run/{model_id}
- returns JSON containing image URLs and metadata (docs.fal.ai)
Example from Fal docs (note: returns JSON, not raw PNG bytes): (docs.fal.ai)
curl -X POST "https://fal.run/fal-ai/fast-sdxl" \
-H "Authorization: Key $FAL_KEY" \
-H "Content-Type: application/json" \
-d '{"prompt":"a cat"}'
So if you go direct-to-Fal, your n8n workflow becomes two steps:
- generate (JSON with URL)
- download image from the URL (binary)
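The two steps can be sketched end to end. The sample RESPONSE below stands in for Fal’s real JSON (the images[0].url shape follows Fal’s fast-sdxl docs, but treat it as an assumption; the URL itself is made up). python3 is used for extraction instead of jq purely for portability:

```shell
# Step 1 output (simulated): Fal returns JSON with image URLs, not raw bytes.
RESPONSE='{"images":[{"url":"https://fal.example/out/cat.png","width":1024,"height":1024}]}'

# Extract the first image URL from the JSON.
IMG_URL=$(echo "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["images"][0]["url"])')
echo "$IMG_URL"

# Step 2: download the URL as binary. In n8n this is a second HTTP Request
# node with Response Format = File.
# curl -s "$IMG_URL" --output result.png
```

In n8n terms: node 1 parses JSON and exposes the URL as an expression; node 2 fetches it with the same File/binary settings described above.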
Similar cases + good references (worth skimming)
n8n-specific (binary handling)
- n8n HTTP Request node: Response Format = File + Put Output in Field (binary storage) (docs.n8n.io)
- n8n community: FLUX works in HTTP Request node; adding the Accept header mattered; also notes some agent/tool contexts can’t return binary (n8n Community)
Hugging Face-specific (what “success” looks like)
- HF Inference Providers Text-to-Image task spec: inputs prompt + bearer token; response is raw bytes (Hugging Face)
- HF guide using FLUX.1-schnell with providers (shows the intended workflow) (Hugging Face)
- Hub provider listings showing which provider serves which text-to-image models (FLUX appears under Fal AI) (Hugging Face)
Provider direct docs
- Fal synchronous HTTP API (https://fal.run/...) and example curl (docs.fal.ai)
Bottom line: the most likely fix for your configuration
- Add Authorization: Bearer <HF_TOKEN> (token with Inference Providers permission). (Hugging Face)
- Send JSON body with inputs (not pmt), and set Content-Type: application/json. (Hugging Face)
- In n8n: Response Format File + set Put Output in Field (example image), and enable Include Response Headers and Status so you can see what you’re truly getting. (docs.n8n.io)
- If headers show JSON/404: you’re likely hitting the wrong provider for FLUX; use the provider that lists FLUX support (Fal AI) rather than forcing HF Inference. (Hugging Face)