Model fine-tuning and inference of Bloom 560M

Hi all,

I’m the Co-founder of inferencetraining.ai and openagi.ai

What's the simplest way to fine-tune Bloom 560M and then run inference? I've followed the steps in Finetune BLOOM (Token Classification) | Kaggle, but that seems to be only for Named Entity Recognition. Correct me if I'm wrong.

Essentially, I'm trying to do text generation, predicting the next sequence of characters.
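For text generation you'd want a causal language modeling objective rather than token classification. A minimal sketch of what that could look like with the Transformers `Trainer`, assuming `transformers`, `torch`, and `accelerate` are installed; the `texts` list is a placeholder standing in for your own dataset:

```python
# Hedged sketch: causal-LM fine-tuning of BLOOM 560M, then inference.
# "texts" is a toy stand-in for a real custom dataset.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus standing in for a custom dataset.
texts = ["Hello world, this is a test.",
         "BLOOM is a multilingual language model."]
encodings = [tokenizer(t, truncation=True, max_length=64) for t in texts]

# mlm=False -> the collator builds labels for next-token prediction
# (causal LM) instead of masked LM.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(output_dir="bloom-finetuned",
                         per_device_train_batch_size=2,
                         max_steps=2,  # tiny run just to show the loop
                         report_to=[])
Trainer(model=model, args=args, train_dataset=encodings,
        data_collator=collator).train()

# Inference: generate a continuation for a prompt.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice you'd raise `max_steps` (or use `num_train_epochs`), point `train_dataset` at your real data, and save/reload the model with `trainer.save_model()` before serving it.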

Hi @emmanuelnsanga,
Since you have fine-tuned Bloom 560M on a custom dataset, how are the results? Are they better than benchmark encoder models like BERT or RoBERTa?

Thanks
Deelip

Try LLaMA, it's state-of-the-art…