Questions about Mistral and apply_chat_template with Text Generation Inference, the OpenAI API, and the Messages API

  1. The proper way to use Mistral (mistralai/Mistral-7B-Instruct-v0.2) is with the apply_chat_template method defined by the Mistral tokenizer. Is the Mistral tokenizer being called under the hood when I use TGI's Messages API?
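For context, my understanding is that the v0.2 chat template renders alternating user/assistant messages into the [INST] ... [/INST] format. A minimal sketch of what I believe apply_chat_template produces (the helper name and exact whitespace are my own assumptions, not taken from the tokenizer):

```python
# Rough reproduction of the prompt format I believe apply_chat_template
# produces for mistralai/Mistral-7B-Instruct-v0.2. Roles must alternate
# user/assistant; the real template lives in the tokenizer config.
def mistral_chat_prompt(messages, bos="<s>", eos="</s>"):
    prompt = bos
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f"{msg['content']}{eos}"
    return prompt

print(mistral_chat_prompt([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "How are you?"},
]))
```

My question is essentially whether TGI builds this same string for me, or whether I need to apply the template client-side before sending the prompt.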

  2. TGI also has its own Messages API.
    Is this API using the Mistral tokenizer's apply_chat_template method?
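For reference, this is the kind of request I am sending. A minimal sketch of an OpenAI-style chat payload for TGI's /v1/chat/completions endpoint (the localhost URL is a placeholder for my deployment; I am passing raw role/content messages, with no template applied client-side):

```python
import json

# Hypothetical local TGI endpoint serving the Messages API.
TGI_URL = "http://localhost:8080/v1/chat/completions"

# OpenAI-style chat payload: plain role/content messages, no
# [INST] markers added on the client side.
payload = {
    "model": "tgi",
    "messages": [
        {"role": "user", "content": "What is deep learning?"},
    ],
    "max_tokens": 128,
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

So the question is whether, server-side, TGI turns these messages into the Mistral [INST] prompt via the tokenizer's chat template, or does something else.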