This was suggested by ChatGPT, but I haven’t gotten an answer yet as to why and how this changes things. Why is the result “POSITIVE”?
Hi everyone, I am following along with the course on the page Transformers, what can they do? - Hugging Face LLM Course. I am trying to generate text using HuggingFaceTB/SmolLM2-360M with the parameters max_length=30, num_return_sequences=2.
It gives me the following error: ValueError: Greedy methods without beam search do not support num_return_sequences different than 1 (got 2).
Would anyone be able to help me understand what it means and how to resolve this issue?
I don’t understand the logic, but this solved the problem:

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
    do_sample=True,  # greedy decoding is deterministic, so sampling is needed to get multiple distinct sequences
)
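As the error message suggests, beam search is the other way to get several return sequences. A minimal sketch (the num_beams value is illustrative, not from the course):

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_beams=4,  # keep 4 candidate sequences during decoding
    num_return_sequences=2,  # must be <= num_beams
)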
Can you provide a certificate of completion if I finish this course?
Hi @Pizofreude,
I just published two blog posts about recent RL algorithms for reasoning tasks such as GRPO and Dr. GRPO.
You can find them on Medium:
- The Evolution of Policy Optimization: Understanding GRPO, DAPO, and Dr. GRPO’s Theoretical Foundations
- Bridging Theory and Practice: Understanding GRPO Implementation Details in Hugging Face’s TRL Library
I hope you find them helpful in some way.
Thank you,
Jen
I am working on the Text Generation part of the “Transformers, what can they do?” section. When I try to run
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_length=30,
    num_return_sequences=2,
)
I get this warning: Both max_new_tokens (=256) and max_length (=15) seem to have been set. max_new_tokens will take precedence.
I read the documentation and it states that max_length “corresponds to the length of the input prompt + max_new_tokens”. Should I just use max_new_tokens instead of max_length?
I tend to use max_new_tokens.
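For example, replacing max_length with max_new_tokens in the snippet above (a sketch; the 30-token budget is illustrative):

from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M")
generator(
    "In this course, we will teach you how to",
    max_new_tokens=30,  # counts only newly generated tokens, not the prompt
    num_return_sequences=2,
    do_sample=True,  # needed when requesting multiple sequences
)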
I’ve noticed that the GitHub page for this course doesn’t have a translation in my native language, which is spoken by around 100 million people. I was wondering if spending time translating the page into my local language would provide any benefit for me or others who might be interested in taking the course. If so, I might consider translating other courses as well.
Hello! I’ve been trying to get the course commands to run locally on my Mac. I have an Apple M4 Pro on macOS 15.6.1, and I have activated the venv.
You can see my pip freeze here:
$ pip freeze | grep -E "transform|torch|tensor"
safetensors==0.6.2
tensorboard==2.20.0
tensorboard-data-server==0.7.2
tensorflow==2.20.0
torch==2.8.0
torchaudio==2.8.0
torchvision==0.23.0
transformers==4.56.1
Python is the one inside transformers-course/.env/bin/python, and:
$ python --version
Python 3.13.7
Do you know what may be the issue here?
>>> from transformers import pipeline
libc++abi: terminating due to uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
zsh: abort python
I seem to have similar issues with a handful of random import commands… Thank you!
Do you know what may be the issue here?
Python 3.13.7
Python 3.13 or later can quite possibly cause issues with many libraries…
I recommend 3.12 or 3.11.
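For example, to rebuild the virtual environment against an older interpreter (a sketch, assuming python3.12 is already installed on your Mac):

python3.12 -m venv .env         # recreate the venv with Python 3.12
source .env/bin/activate
python --version                # should now print Python 3.12.x
pip install transformers torch  # reinstall whatever the course needs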
Unfortunately, the same issue applies to Python 3.12…
$ python --version
Python 3.12.11
$ pip freeze | grep -E "transform|torch|tensor"
safetensors==0.6.2
tensorboard==2.20.0
tensorboard-data-server==0.7.2
tensorflow==2.20.0
torch==2.8.0
torchaudio==2.8.0
torchvision==0.23.0
transformers==4.56.1
And Python 3.11…
$ python3.11
Python 3.11.11 (main, Jul 13 2025, 04:58:55) [Clang 16.0.0 (clang-1600.0.26.6)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
libc++abi: terminating due to uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
zsh: abort python3.11
Thank you for looking!
Some versions of TensorFlow seem to crash on Mac…
Guess this is the only way?

pip uninstall -y tensorflow tf-keras tf_keras keras onnxruntime  # remove the crashing TF stack
export USE_TF=0  # prevents TF checks inside transformers
python -c "from transformers import pipeline; print('import ok')"  # verify the import now works
Yes, lowering the TensorFlow version worked, thank you!
This combination worked for me.
$ pip freeze | grep -E "tensor|numpy|protobuf"
numpy==2.1.3
protobuf==5.29.5
safetensors==0.6.2
tensorboard==2.19.0
tensorboard-data-server==0.7.2
tensorflow==2.19.1
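If anyone wants to reproduce it, something like this should work (a sketch; versions copied from the freeze above, with pip resolving the remaining dependencies):

pip install "tensorflow==2.19.1" "tensorboard==2.19.0" "numpy==2.1.3" "protobuf==5.29.5"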