Text classification models warn "You should probably TRAIN this model"

I tried a few of the most-downloaded transformer models for text classification: SamLowe/roberta-base-go_emotions and ProsusAI/finbert. The model pages seem to imply they should work out of the box, yielding a sentiment or emotion label for a given sentence. Instead, I get this warning:

If you want to use RobertaLMHeadModel as a standalone, add is_decoder=True.
Some weights of RobertaForCausalLM were not initialized from the model checkpoint at SamLowe/roberta-base-go_emotions and are newly initialized: ['lm_head.bias', 'lm_head.decoder.bias', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Passing is_decoder=True to the pipeline, as in pipeline("text-generation", model="SamLowe/roberta-base-go_emotions", top_k=None, is_decoder=True), fails with an unrecognized-kwarg error.

If I drop is_decoder=True and use the pipeline as is, it appends generated characters to the input text rather than classifying sentiment (see the sketch below). Am I using these models wrong?
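
For reference, this is roughly what I'm running (a minimal sketch; the input sentence is just a placeholder):

```python
from transformers import pipeline

# Loading the classification checkpoint under the "text-generation" task.
# This is the call that triggers the "Some weights of RobertaForCausalLM
# were not initialized ... You should probably TRAIN this model" warning.
generator = pipeline(
    "text-generation",
    model="SamLowe/roberta-base-go_emotions",
    top_k=None,
)

# Adding is_decoder=True to the pipeline(...) call above fails with an
# unrecognized-kwarg error, so it is omitted here.

# Instead of emotion labels, this just appends generated characters
# to the input sentence.
print(generator("I am not having a great day"))
```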
Thanks much