Python Code (App.py on Spaces)
```python
import gradio as gr
from PIL import Image

def segment_image(img):
    # Dummy segmentation (convert to grayscale)
    segmented = img.convert("L")
    return segmented

demo = gr.Interface(
    fn=segment_image,
    inputs=gr.Image(type="pil"),
    outputs=gr.Image(type="pil"),
)
demo.launch()
```
JavaScript Code (run locally on my machine)
I have this code in my JS file that sends an image to my Spaces app, but it's giving an error I can't figure out.
```js
import { Client } from "@gradio/client";
import fs from 'fs';

const app = await Client.connect("user4635/trial-gradio-blank");

const imageBuffer = fs.readFileSync('./Cristiano_Ronaldo_WC2022_-_02.jpg'); // Replace with your path
const base64Image = `data:image/jpeg;base64,${imageBuffer.toString('base64')}`;

// Send to Gradio
const result = await app.predict("/predict", [
  base64Image // Pass base64 string
]);

const output = result.data[0];

// Save the output image
fs.writeFileSync('./output_image.png', Buffer.from(output.data));
```
The error in my Node console:
```
node:internal/process/esm_loader:40
  internalBinding('errors').triggerUncaughtException(
  ^
{
  type: 'status',
  endpoint: '/predict',
  fn_index: 0,
  time: 2025-03-20T02:56:19.193Z,
  original_msg: undefined,
  queue: true,
  title: undefined,
  message: null,
  visible: undefined,
  duration: undefined,
  stage: 'error',
  code: undefined,
  success: false
}
```
Does anyone know why this could be?
For example, this sometimes happens when the @gradio/client version and the Gradio version running on the Space don't match. With npm you can pin the client to an older release, as sketched below.
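Pinning is just `npm install @gradio/client@<version>` (which version matches your Space's Gradio version is something you'd have to check). The other common culprit is the payload format: recent @gradio/client releases expect image inputs as a Blob or File rather than a base64 data URI, and return image outputs as file references with a URL. A minimal sketch of the Blob approach, assuming the Space and endpoint from the question:

```js
// Sketch, not verified against this Space: send the image as a Blob,
// which newer @gradio/client versions expect for image inputs.
import { Client } from "@gradio/client";
import fs from "fs";

const app = await Client.connect("user4635/trial-gradio-blank");

// Node 18+ provides global Blob and fetch; wrap the file bytes in a Blob.
const buffer = fs.readFileSync("./Cristiano_Ronaldo_WC2022_-_02.jpg");
const imageBlob = new Blob([buffer], { type: "image/jpeg" });

const result = await app.predict("/predict", [imageBlob]);

// Recent Gradio versions return image outputs as file references,
// so download the file rather than reading raw bytes from result.data.
const fileData = result.data[0];
const res = await fetch(fileData.url);
fs.writeFileSync("./output_image.png", Buffer.from(await res.arrayBuffer()));
```

If it still fails, `await app.view_api()` lists the endpoints and the parameter types they expect, which helps confirm the payload shape.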
Same error but unrelated issue…
opened 06:39PM - 16 Oct 24 UTC · labels: bug, triage
### Checklist
- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [X] The issue exists on a clean installation of Fooocus
- [X] The issue exists in the current version of Fooocus
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Using the Gradio API, I receive this error:
```
node:internal/process/esm_loader:40
  internalBinding('errors').triggerUncaughtException(
  ^
{
  type: 'status',
  stage: 'error',
  endpoint: '/predict',
  fn_index: 13,
  message: null,
  queue: false,
  time: 2024-10-16T18:26:15.489Z
}
```
### Steps to reproduce the problem
Run the code shown below:
```js
import { client } from "@gradio/client";

const response_0 = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
const exampleImage = await response_0.blob();

const app = await client("https://xxxxxxx.com/", { auth: ['user', 'password'] });
const result = await app.predict(10, [
  exampleImage, // blob in 'Image' Image component
]);
console.log(result.data);
```
### What should have happened?
No error should have been received.
### What browsers do you use to access Fooocus?
Google Chrome
### Where are you running Fooocus?
Locally
### What operating system are you using?
Windows 11
### Console logs
```Shell
C:\Users\Scott\Downloads\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --port 7866 --listen 0.0.0.0
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--port', '7866', '--listen', '0.0.0.0']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir C:\Users\Scott\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 12282 MB, total RAM 65277 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL: http://0.0.0.0:7866
model_type EPS
UNet ADM Dimension 2816
IMPORTANT: You are using gradio version 3.41.2, however version 5.0.1 is available, please upgrade.
--------
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.30 seconds
Started worker with PID 55052
App started successful. Use the app with http://localhost:7866/ or 0.0.0.0:7866
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\gradio\blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\Users\Scott\Downloads\Fooocus_win64_2-5-0\Fooocus\modules\gradio_hijack.py", line 277, in preprocess
assert isinstance(x, str)
AssertionError
```
### Additional information
_No response_
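The traceback in that issue's console log points at the likely cause: Fooocus ships Gradio 3.41.2, whose Image component preprocess does `assert isinstance(x, str)`, i.e. it expects a base64 data-URI string, while newer @gradio/client versions serialize a Blob into a file reference instead. A sketch of a workaround under that assumption, reusing the endpoint index and placeholder URL from the repro code above (untested against Fooocus itself):

```js
// Sketch: for a Gradio 3.x server whose Image preprocess asserts
// isinstance(x, str), send a base64 data URI string instead of a Blob.
import { client } from "@gradio/client";

const response = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
const buffer = Buffer.from(await response.arrayBuffer());
const base64Image = `data:image/png;base64,${buffer.toString("base64")}`;

const app = await client("https://xxxxxxx.com/", { auth: ["user", "password"] });
const result = await app.predict(10, [base64Image]);
console.log(result.data);
```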