Hugging Face Forums
Beam_search bottlenecks inference with only 1 used cpu
🤗Transformers
adelplace
October 13, 2022, 12:06pm
It seems like I am not the only one facing this problem. Any ideas for a solution?
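Part of why beam search is hard to parallelize is the shape of the algorithm itself: each decoding step depends on the beams selected in the previous step, so the outer loop is strictly sequential. A minimal self-contained sketch (a toy log-prob table stands in for the model's forward pass; all names here are illustrative, not the `transformers` implementation):

```python
import math

def beam_search(step_log_probs, num_beams=3, max_len=4):
    """Minimal beam search over a fixed per-step log-prob table.

    step_log_probs[t][token] is the log-probability of `token` at step t
    (a stand-in for one model forward pass). Note the sequential loop:
    step t+1 cannot begin until the beams from step t are chosen, which
    is one reason beam decoding is hard to spread across CPU cores.
    """
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for t in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, lp in enumerate(step_log_probs[t]):
                candidates.append((seq + [token], score + lp))
        # keep only the num_beams best hypotheses
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:num_beams]
    return beams[0]

# toy distribution: token 1 is always the most likely of three tokens
table = [[math.log(0.2), math.log(0.7), math.log(0.1)]] * 4
best_seq, best_score = beam_search(table, num_beams=2, max_len=4)
```

In practice, the per-step model forward pass dominates the cost, so common first checks are whether `torch.get_num_threads()` reports the expected thread count and whether the model can be moved to a GPU before calling `generate()`.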
Related topics

Topic | Category | Replies | Views | Activity
Model.generate() is extremely slow while using beam search | 🤗Transformers | 2 | 5451 | July 24, 2022
Multiple gpu not properly parallelized during model.generate() | 🤗Transformers | 4 | 1654 | October 9, 2022
Big `generate()` refactor | 🤗Transformers | 7 | 3776 | November 26, 2021
BART_LM: Odd Beam Search Output | Intermediate | 18 | 1851 | August 17, 2020
Beam search (FlaxT5) generates PAD tokens mid generation | 🤗Transformers | 1 | 496 | November 25, 2021