Fine-tune a Hugging Face model with only a loss function (without labels)?

For the task I want to do I don’t have any labels, only a loss function to minimize. Is there a way for me to fine-tune a model without target outputs?

More concretely, I want to bootstrap the fine-tuning by inserting the classes I want to predict for token classification at random places in the input text until the model learns to output them. Once the model starts outputting the classifications, I have a function that takes the output and returns the loss. Hopefully this will let the model learn to classify correctly.

The loss is continuous, but it can also be made binary by applying a threshold.

Is this possible to do without going under the hood?
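(For anyone landing here with the same question: with the transformers library the usual route is to subclass `Trainer` and override its `compute_loss` method, but the underlying idea can be sketched in plain PyTorch. As long as the loss is a differentiable function of the model's outputs, no labels are needed; note that a thresholded, binary loss has zero gradient almost everywhere, so the continuous version is the one to train on. Everything in the sketch below — the toy linear "model" and the confidence-style loss — is a hypothetical stand-in for the real setup, just to show the mechanics.)

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for a fine-tunable model: maps 4 input
# features to 3 per-token class scores. A real setup would load a
# pretrained token-classification model here instead.
model = torch.nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def custom_loss(logits):
    # Label-free example loss (purely illustrative): penalize the model
    # for being unconfident, i.e. push the max class probability of
    # each example toward 1. Any differentiable function of the
    # outputs works the same way.
    max_prob = logits.softmax(dim=-1).max(dim=-1).values
    return (max_prob - 1.0).pow(2).mean()

x = torch.randn(8, 4)  # dummy batch of inputs; no labels anywhere
losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = custom_loss(model(x))   # loss depends only on model outputs
    loss.backward()                # gradients flow through the loss fn
    optimizer.step()
    losses.append(loss.item())
```

With `Trainer`, the equivalent is overriding `compute_loss(self, model, inputs, ...)` to run the model on `inputs` and return this kind of custom loss instead of comparing against labels.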

I’m new to this. Any information at all would be appreciated. Thanks :slight_smile: