This was suggested by ChatGPT, but I haven't gotten an answer yet as to why and how this changes things. Why is the result "POSITIVE"?
Hi everyone, I am following along with the course on the page Transformers, what can they do? - Hugging Face LLM Course. I am trying to generate text using HuggingFaceTB/SmolLM2-360M with the parameters max_length=30 and num_return_sequences=2.
It gives me the following error: ValueError: Greedy methods without beam search do not support num_return_sequences different than 1 (got 2).
Would anyone be able to help me understand what it means and how to resolve this issue?
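For reference, the call I'm running is the course snippet with those parameters:

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")

# Without do_sample or beam search, generation is greedy and deterministic,
# so asking for num_return_sequences=2 raises the ValueError above.
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
)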
I don't understand the logic, but this solved the problem:
from transformers import pipeline
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
    do_sample=True,
)
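If I understand the error correctly: greedy decoding is deterministic, so two return sequences would come out identical, which is why the pipeline refuses num_return_sequences > 1 without sampling. do_sample=True switches to sampling, where each sequence can differ. The error message also hints at the other way out: beam search. A minimal sketch of that alternative, if you'd rather avoid sampling:

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")

# Beam search keeps several candidate continuations in parallel,
# so it can return multiple sequences without sampling.
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_beams=2,  # must be >= num_return_sequences
    num_return_sequences=2,
)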
Can you provide a certificate of completion if I finish this course?
Hi @Pizofreude,
I just published two blog posts about recent RL algorithms for reasoning tasks, such as GRPO and Dr. GRPO.
You can find them on Medium:
- The Evolution of Policy Optimization: Understanding GRPO, DAPO, and Dr. GRPO's Theoretical Foundations
- Bridging Theory and Practice: Understanding GRPO Implementation Details in Hugging Face's TRL Library
I hope you find them helpful in some way.
Thank you,
Jen
I am working on the Text Generation part of the Transformers, what can they do? section. When I try to run
from transformers import pipeline
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
)
I get this warning: Both max_new_tokens(=256) and max_length(=15) seem to have been set. max_new_tokens will take precedence.
I read the documentation and it states that max_length "corresponds to the length of the input prompt + max_new_tokens". Should I just use max_new_tokens instead of max_length?
I tend to use max_new_tokens.
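It only counts the generated tokens, so the prompt length stops mattering. A sketch of the earlier call using it (note do_sample=True is still needed for num_return_sequences=2, as discussed above):

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")

# max_new_tokens bounds only the generated continuation,
# independent of the prompt length.
generator(
    "In this course, we will teach you how to",
    max_new_tokens=30,
    num_return_sequences=2,
    do_sample=True,
)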