What is the difference between tokenizer.eos_token_id, model.config.eos_token_id and model.generation_config.eos_token_id?

Can someone explain what the difference between tokenizer.eos_token_id, model.config.eos_token_id and model.generation_config.eos_token_id is in models like Llama, GPT2 and so on? The same question applies to bos and pad tokens as well. Why do we need to define 3 different eos tokens?

Briefly, they start out with the same value, but if you want to update them you have to update each one separately.
Tokenizer and Model are different objects, and each initializes its attributes from a config object. According to the code, these configs are supposed to load from the same files: when you call the from_pretrained method, the tokenizer and the model each read their special-token settings from a config file (config.json, generation_config.json, or something like it). So when you download a model online, its tokenizer and model should agree on the eos token id, if one is set. However, if you want to change that setting for fine-tuning or inference, you have to assign the new value to both the tokenizer and the model (including model.generation_config), because they don't share a single eos token attribute.
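To make the "three independent copies" point concrete, here is a toy sketch using plain namespace objects rather than real transformers classes (the attribute names mirror the real ones, but the objects and the token id values are stand-ins chosen for illustration):

```python
from types import SimpleNamespace

# Toy stand-ins for the three independent objects (NOT real transformers
# classes). Each holds its own copy of eos_token_id, initialized from the
# same value, just as when tokenizer and model are both loaded with
# from_pretrained from the same checkpoint.
tokenizer = SimpleNamespace(eos_token_id=2)
model = SimpleNamespace(
    config=SimpleNamespace(eos_token_id=2),
    generation_config=SimpleNamespace(eos_token_id=2),
)

# Updating one copy does not touch the others...
tokenizer.eos_token_id = 128001
print(model.config.eos_token_id)             # still 2
print(model.generation_config.eos_token_id)  # still 2

# ...so a change has to be applied to every copy explicitly.
model.config.eos_token_id = 128001
model.generation_config.eos_token_id = 128001
```

With the real library, the same pattern applies: after loading via AutoTokenizer.from_pretrained and AutoModelForCausalLM.from_pretrained, you would set tokenizer.eos_token_id, model.config.eos_token_id, and model.generation_config.eos_token_id yourself if you need them changed.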