Llama 3 tokenizer prints cryptic message about special tokens

>>> import transformers
>>> model_name = "meta-llama/Meta-Llama-3-8B"
>>> tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
  • What does this message mean, where in the library does it come from, and is it something to worry about?
  • The message appears to be printed straight to the console rather than issued through the warnings module or raised as an exception. Why? (See the experiment sketched after this list.)
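
To make the second point concrete, here is a minimal sketch of the check I have in mind (assuming you can access the gated meta-llama repo; any tokenizer that triggers the message should behave the same way). It records every Python warning raised while the tokenizer loads; if the list comes back empty even though the message was printed, the message is not going through the warnings machinery:

import warnings
import transformers

model_name = "meta-llama/Meta-Llama-3-8B"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record every warning raised inside the block
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# If this list is empty although the message appeared on the console,
# the message was not issued via the warnings module.
print(caught)

Similarly, if lowering the verbosity of transformers' own logger makes the message disappear, that would suggest it is routed through the library's logging wrapper rather than a bare print:

import transformers

# Only show error-level messages from the transformers library.
transformers.logging.set_verbosity_error()
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")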