Chapter 1 questions




This was suggested by ChatGPT, but I have not yet gotten an answer as to why and how this changes things. Why is the result "POSITIVE"?


Hi everyone, I am following along with the course on the page Transformers, what can they do? - Hugging Face LLM Course. I am trying to generate text using HuggingFaceTB/SmolLM2-360M with the parameters max_length=30, num_return_sequences=2.
It gives me the following error: ValueError: Greedy methods without beam search do not support num_return_sequences different than 1 (got 2).

Would anyone be able to help me understand what it means and how to resolve this issue?


I don’t understand the logic, but this solved the problem.

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
    do_sample=True,  # sampling allows multiple distinct return sequences
)
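I think the logic is this: greedy decoding always picks the single highest-probability token at each step, so it can only ever produce one sequence, and asking for two identical copies is rejected. Sampling (do_sample=True) instead draws tokens from the probability distribution, so repeated runs can differ; beam search (num_beams > 1) is the other route the error message hints at. A dependency-free sketch of the difference, using toy scores rather than a real model:

```python
import math
import random

# Toy scores for a 3-token vocabulary; purely illustrative, not a real model.
logits = [0.1, 2.5, 0.3]

def greedy_pick(scores):
    # Greedy decoding: always take the argmax, so the result is deterministic.
    return max(range(len(scores)), key=lambda i: scores[i])

def sample_pick(scores, rng):
    # Sampling: draw a token with probability proportional to exp(score),
    # so repeated calls can return different tokens.
    weights = [math.exp(s) for s in scores]
    return rng.choices(range(len(scores)), weights=weights, k=1)[0]

# Two runs of greedy decoding are always identical, which is why generate()
# rejects num_return_sequences > 1 without sampling or beam search: it
# would just return the same sequence twice.
assert greedy_pick(logits) == greedy_pick(logits)

rng = random.Random(0)
samples = [sample_pick(logits, rng) for _ in range(10)]
print(samples)  # with sampling, individual draws can differ
```

So do_sample=True does not change what greedy decoding does; it replaces it with a method that can actually produce two different sequences.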

Can you provide a certificate of completion if I finish this course?


Hi @Pizofreude,

I just published two blog posts about recent RL algorithms for reasoning tasks such as GRPO and Dr. GRPO.
You can find them on Medium:

I hope you find them helpful in some way.

Thank you,
Jen


I am working through the Transformers, what can they do? page, in the Text Generation section. When I try to run

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
)

I get this warning: Both max_new_tokens (=256) and max_length (=15) seem to have been set. max_new_tokens will take precedence.

I read the documentation and it states that max_length "corresponds to the length of the input prompt + max_new_tokens". Should I just use max_new_tokens instead of max_length?


I tend to use max_new_tokens.
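Since the thread settles on max_new_tokens, the relationship between the two parameters as the documentation describes it can be sketched in plain Python (the token counts below are made-up illustrative numbers, not real tokenizer output):

```python
# Hypothetical token counts for illustration; a real prompt length
# comes from the model's tokenizer.
prompt_tokens = 15

# max_new_tokens counts only the tokens the model generates.
max_new_tokens = 256

# Per the docs, max_length corresponds to the length of the input
# prompt + max_new_tokens, i.e. it includes the prompt itself.
equivalent_max_length = prompt_tokens + max_new_tokens

print(equivalent_max_length)  # 271
```

So with max_length=30, part of that budget is already consumed by the prompt, which is why max_new_tokens is usually the less surprising knob to turn.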