Error loading custom Transformers.js model from the Hugging Face Hub

I have trained a custom model and compiled it to ONNX for Transformers.js as outlined here: Use custom models

The model is located here: teapotai/instruct-teapot · Hugging Face

I am running the following Node code:

import { pipeline, env } from '@xenova/transformers';
env.allowLocalModels = false; // always fetch the model from the Hub, never from local files
env.backends.onnx.wasm.numThreads = 1; // run the WASM backend single-threaded
const generative_model = await pipeline('text2text-generation', 'teapotai/instruct-teapot');
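
For completeness, this is roughly how I call the pipeline afterwards (the prompt and options here are just illustrative):

const output = await generative_model('Write a short note about tea.', { max_new_tokens: 64 });
console.log(output); // e.g. [ { generated_text: '...' } ]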

This works when running locally, but once deployed, the model fails to load.

I have found the following error:

Uncaught (in promise) Error: Could not locate file: "{hugging_face_url}/teapotai/instruct-teapot/resolve/main/onnx/decoder_model_merged_quantized.onnx"

I've noticed that
{hugging_face_url}/teapotai/instruct-teapot/blob/main/onnx/decoder_model_merged_quantized.onnx
exists, but the requested
{hugging_face_url}/teapotai/instruct-teapot/resolve/main/onnx/decoder_model_merged_quantized.onnx
does not.
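
A quick way to reproduce the difference (a sketch, assuming Node 18+ for the built-in fetch; substitute the real Hub URL for the placeholder):

const base = '{hugging_face_url}/teapotai/instruct-teapot'; // placeholder, see note at the end
for (const kind of ['blob', 'resolve']) {
  const url = `${base}/${kind}/main/onnx/decoder_model_merged_quantized.onnx`;
  const res = await fetch(url, { method: 'HEAD' });
  console.log(kind, res.status); // blob/ answers 200, resolve/ answers 404 for my repo
}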

However, for the example models in the docs, the resolve/ URL does exist:
{hugging_face_url}/Xenova/LaMini-Flan-T5-783M/resolve/main/onnx/decoder_model_merged_quantized.onnx

Is there a setting or configuration on Hugging Face required to serve this repo? Thanks!

(Apologies for the {hugging_face_url} placeholder, the forum would only let me link 2 URLs.)