Pushing a VisionTransformer Regression Task to hub

I managed to create a regression task from a BeiT Vision Transformer model by setting num_labels=1 (as described in several articles/posts). Before pushing to the hub, the trainer's predictions work as expected, producing a proper regression output.

Once I push the model to the hub though, the inference changes, and it only returns `[{'score': 1.0, 'label': 'LABEL_0'}]` every single time.

This happens both when performing prediction in the Hosted Inference API, and when downloading the model to my notebook from the hub.

Would love some help. Thanks in advance!

UPDATE: I realize that the issue is that the pipeline used for inference applies a softmax to the logits (which, in this case, are just the raw regression outputs).
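To illustrate why the output is always `score: 1.0`: with num_labels=1 there is only a single logit, and softmax over a one-element vector always normalizes it to exactly 1.0, regardless of the regression value. A minimal sketch:

```python
import math

def softmax(logits):
    """Plain softmax over a list of logits."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A single regression logit always collapses to 1.0 under softmax,
# which is exactly the [{'score': 1.0, 'label': 'LABEL_0'}] symptom:
print(softmax([3.7]))     # [1.0]
print(softmax([-120.0]))  # [1.0]
```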

Is there a way to push to the hub such that inference returns the raw logits instead of the softmaxed scores?
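For anyone hitting the same wall, here is a sketch of the postprocessing choice involved. This is a simplified stand-in for what the classification pipeline does internally (an assumption about its internals, not the actual transformers code): the fix is to select "none" instead of the softmax:

```python
import math

def postprocess(logits, function_to_apply="softmax"):
    """Simplified sketch of a classification pipeline's postprocess step."""
    if function_to_apply == "softmax":
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]
    # "none": pass the raw regression output through untouched
    return list(logits)

print(postprocess([2.5]))                            # softmax -> [1.0]
print(postprocess([2.5], function_to_apply="none"))  # raw logit -> [2.5]
```

On the client side, recent transformers versions let you pass the same switch to the pipeline itself, e.g. `pipeline("image-classification", model=..., function_to_apply="none")` (check that your installed version supports this parameter for image pipelines); alternatively, skip the pipeline entirely and read `outputs.logits` from the model's forward pass. Neither changes what the Hosted Inference API widget displays, as far as I can tell.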