Structured output with gpt-oss-120b

Hi guys! I am trying to use structured output with gpt-oss-120b, but the snippet provided by HF on this link keeps failing with this error:

```
/hf_inference/node_modules/openai/core/error.mjs:41
    return new BadRequestError(status, error, message, headers);
    ^

BadRequestError: 400 Failed to format non-streaming choice: Unexpected EOS while waiting for message header to complete
    at APIError.generate (file:///Users/app/development/hf_inference/node_modules/openai/core/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///Users/app/development/hf_inference/node_modules/openai/client.mjs:156:32)
    at OpenAI.makeRequest (file:///Users/app/development/hf_inference/node_modules/openai/client.mjs:301:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///Users/app/development/hf_inference/index.js:9:18 {
  status: 400,
```

Does anybody know how to fix this?


The supported inference methods vary by inference provider. In this case, it might be better to use Fireworks rather than Cerebras for structured tasks.

For structured tasks like data extraction, you can force the model to return a valid JSON object using the response_format parameter. We use the Fireworks AI provider.