I made a simple CLI for playing with BLOOM

I made a simple CLI for playing with BLOOM. It lets you open a terminal and enter prompts that get sent to the Hugging Face Inference API. It logs all prompts and generated texts so you can look back at them later.

https://github.com/getorca/bloom-cli

I made it just for fun and to quickly try different short prompts. I'm currently working on a sizeable NLP and information extraction project. Unfortunately it's quite limited due to the current token limit for BLOOM on the Hugging Face Inference API.
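For anyone curious what a call like this looks like, here is a minimal sketch of hitting the Inference API for BLOOM and logging each prompt/response pair. This is not the CLI's actual code; the endpoint URL and payload shape follow the public Inference API docs, while the `HF_API_TOKEN` variable, function names, and JSONL log format are just my own assumptions for illustration:

```python
# Minimal sketch: query the Hugging Face Inference API for BLOOM and log
# each prompt/response pair. Assumes an API token in HF_API_TOKEN; the
# log format (one JSON object per line) is an illustrative choice, not
# what the CLI necessarily does.
import json
import os
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def build_payload(prompt, max_new_tokens=64):
    """Request body for a text-generation call."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def log_entry(prompt, generated, path="prompts.jsonl"):
    """Append one prompt/response pair as a JSON line."""
    with open(path, "a") as f:
        f.write(json.dumps({"prompt": prompt, "generated_text": generated}) + "\n")

def query(prompt):
    """POST the prompt to the Inference API and return the generated text."""
    token = os.environ["HF_API_TOKEN"]
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)[0]["generated_text"]

if __name__ == "__main__" and os.environ.get("HF_API_TOKEN"):
    prompt = r"TLDR: Of Mice and Men \n\n Post:"
    text = query(prompt)
    log_entry(prompt, text)
    print(text)
```

The token limit I mentioned applies to the whole request, so bumping `max_new_tokens` only helps up to whatever cap the hosted endpoint enforces.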

An example of how important prompt engineering is with continuation models like BLOOM:

Enter prompt for completion: TLDR: Of Mice and Men
⠦ Processing... 0:00:08
Done ✔
Bloom generated text:
TLDR: Of Mice and Men is a great book, but it is not a great novel. It is a great book because it
Enter prompt for completion: TLDR: of Mice and Men \n\n Post:
⠇ Processing... 0:00:07
Done ✔
Bloom generated text:
TLDR: of Mice and Men \n\n Post: \n\n The story is about a man named George and a mouse named Lennie. George and L
Enter prompt for completion:

In the first prompt you can see it tries to continue the sentence, but add a couple of line breaks and the prompt "Post:" and it knows it needs to create a post about the TLDR for "Of Mice and Men". Where was this when I was in junior high! It appears the token limit also applies to generated_text.

I would like to hear what everyone thinks and whether you find it helpful. If there's interest I could possibly add more text completion models for comparison.

Are you able to fine-tune with this CLI? Do you know how to fine-tune?

No, fine-tuning is not available in my CLI; it's simply a way to play with prompts and log them along with the responses.

Hugging Face has docs on fine-tuning pre-trained models, "Fine-tune a pretrained model", although I haven't tried it.