I’m wondering whether I can add this setting to one of the *config.json files. When I add stop_strings to generation_config.json and try to generate, I get the following error:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained(model_path)
>>> model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
>>> model.generate(**tokenizer("Hi how are you?", return_tensors="pt", return_token_type_ids=False))
...
ValueError: There are one or more stop strings, either in the arguments to `generate` or in the model's generation config, but we could not locate a tokenizer. When generating with stop strings, you must pass the model's tokenizer to the `tokenizer` argument of `generate`.
Is there a way to define stop_strings and a default tokenizer in the config so I can skip passing them manually every time I call model.generate()?
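In case it helps frame the question: the error message says generate() needs the tokenizer passed explicitly when stop strings are used, so as a stopgap I can bind the repeated arguments once with functools.partial. A minimal sketch of that pattern below, using a stand-in generate function (the real call would be partial(model.generate, tokenizer=tokenizer, stop_strings=[...])); the stop string "\n\n" is just an example:

```python
from functools import partial

# Stand-in for model.generate; in practice you would wrap the real bound
# method: generate = partial(model.generate, tokenizer=tokenizer, ...)
def generate(**kwargs):
    return kwargs

# Bind the tokenizer and stop strings once...
generate_with_stops = partial(generate, tokenizer="my-tokenizer", stop_strings=["\n\n"])

# ...so later calls no longer need to repeat them.
result = generate_with_stops(input_ids=[1, 2, 3])
```

This avoids the repetition at each call site, but it's a workaround rather than what I'm after, which is having the config itself supply the defaults.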