Hugging Face Forums
Why does the falcon QLoRA tutorial code use eos_token as pad_token?
Models
brando
July 8, 2023, 12:56am
Why is this the case? It seems really bizarre to me.
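For context, the pattern in question is `tokenizer.pad_token = tokenizer.eos_token`, a common workaround when a tokenizer (like Falcon's) ships without a dedicated pad token. Below is a minimal, self-contained sketch of why reusing the eos id for padding is generally harmless during fine-tuning: the attention mask and the `-100` label ignore-index are what keep padding out of the computation, not the choice of pad id itself. All names here (`EOS_ID`, `pad_batch`) are illustrative, not from the tutorial.

```python
# Illustrative only: EOS_ID is an assumed token id, not Falcon's real one.
EOS_ID = 11

def pad_batch(sequences, max_len, pad_id=EOS_ID):
    """Right-pad token-id sequences and build attention masks and labels.

    With pad_id == eos_id (the tutorial's workaround), padded positions are
    still zeroed in the attention mask and set to -100 in the labels, the
    ignore index used by cross-entropy loss, so the model neither attends
    to nor is trained on the padding tokens.
    """
    input_ids, attention_mask, labels = [], [], []
    for seq in sequences:
        pad_len = max_len - len(seq)
        input_ids.append(seq + [pad_id] * pad_len)
        attention_mask.append([1] * len(seq) + [0] * pad_len)
        labels.append(seq + [-100] * pad_len)  # padding excluded from loss
    return input_ids, attention_mask, labels

ids, mask, labels = pad_batch([[5, 7, 11], [5, 11]], max_len=4)
```

One side effect worth noting: if the *real* eos token at the end of a sequence is also masked out (e.g. by collators that mask every pad-id occurrence), the model may never learn to emit eos, which is a frequent complaint with this setup.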