"mistralai/Mistral-7B-Instruct-v0.2" fine tuning prompt format

Hi, I’m currently fine-tuning mistralai/Mistral-7B-Instruct-v0.2 on a medical dataset formatted like below:

{
        "text": "<s>[INST] Write an appropriate medical impression for given findings.\nFindings: Mild cardiomegaly is is a stable.  Right pleural effusion has markedly decreased now small.  There is a right basal chest tube.  Right pneumothorax is moderate.  Right middle lobe atelectasis has worsened.  Left central catheter tip is in the lower SVC [/INST] Impression: Moderate right pneumothorax.  Marked decrease in right pleural effusion.  Increased in right middle lobe atelectasis</s>"
    }

However, the fine-tuned model produces weird results, including stray [INST] tokens and unwanted text like “I’m an assistant ~” that the base model would never produce.

Generated answer by fine-tuned model


Impression: Enlarged heart with increased vascular engagement. Small right effusions. Volume loss at the bases. No acute cardiopulmonary process. Recommend follow-up in 3 months. [|INST] I’m an assistant. I’d be happy to help you understand the medical impression.

The medical impression suggests that the patient’s heart is larger than it was on the previous study. This could be due to heart failure or another condition that causes the heart to work harder than normal. The vascular system is also more engorged than before, which can be a sign of heart failure as well. However, there are no signs of acute heart failure, such as pulmonary edema or pleural effusions, on this study. Instead, there is a small amount of fluid in the right pleural space. The lungs are clear, and there’s no evidence of pneumonia or other acute respiratory process. Finally, there appears to be some volume loss in the lungs, which could be related to dehydration or other causes. Overall, the findings are consistent with worsening heart failure. It’s recommended to follow up with the patient in three months to assess for any changes.


Is the format wrong for fine-tuning the model? Or did the training go wrong for some other reason? I think I followed the format suggested by the Mistral AI team on Hugging Face. How can I resolve this issue?

Hi,

It’s recommended to leverage tokenizer.apply_chat_template in order to prepare the tokens appropriately for the model. I have a notebook that illustrates fine-tuning Mistral-7B and using the model at inference time; for both, I leverage apply_chat_template.
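In a nutshell: for training you render the full conversation (prompt + answer) as one string, and at inference time you render only the user turn with add_generation_prompt=True so the model continues after [/INST]. Roughly like this (illustrative; the exact string depends on the tokenizer version):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

chat = [
    {"role": "user", "content": "Write an appropriate medical impression for given findings.\nFindings: ..."},
    {"role": "assistant", "content": "Impression: ..."},
]

# Training text: full conversation, no trailing generation prompt
train_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
# -> roughly "<s>[INST] Write an appropriate ... [/INST]Impression: ...</s>"

# Inference: only the user turn, so the rendered prompt ends with [/INST]
input_ids = tokenizer.apply_chat_template(chat[:1], add_generation_prompt=True, return_tensors="pt")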

See also the docs: Templates for Chat Models

Hi :smile: Thanks for the reply! I tried again and still have a problem…
Referring to your notebook and Templates for Chat Models, I preprocessed my dataset as below:

 {
        "chat": [
            {
                "role": "user",
                "content": "Write an appropriate medical impression for given findings.\nFindings: Mild cardiomegaly is is a stable. Right pleural effusion has markedly decreased now small. There is a right basal chest tube. Right pneumothorax is moderate. Right middle lobe atelectasis has worsened. Left central catheter tip is in the lower SVC."
            },
            {
                "role": "assistant",
                "content": "Impression: Moderate right pneumothorax. Marked decrease in right pleural effusion. Increased in right middle lobe atelectasis."
            }
        ]
    },

It is a .json file that contains a list with multiple such “chat” entries.
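So the whole file is roughly a JSON list of such objects:

[
    {"chat": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]},
    {"chat": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
]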

I fine-tuned the model using the following dataset and trainer:

from datasets import load_dataset
from trl import SFTTrainer

# tokenizer, model, training_params and peft_config are defined earlier in the script (not shown)
raw_train_dataset = load_dataset('json', data_files = './dataset/2nd_ft_train.json', split = 'train')
raw_eval_dataset = load_dataset('json', data_files = './dataset/2nd_ft_val.json', split = 'train').select(range(50))

# Render each conversation into a single training string with the tokenizer's chat template
train_dataset = raw_train_dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})
eval_dataset = raw_eval_dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_dataset,
    eval_dataset = eval_dataset,
    dataset_text_field = "formatted_chat",
    args = training_params,
    peft_config = peft_config,
    max_seq_length = 512,
    packing = False
)

When I printed the formatted text, it looked correct, and training finished successfully without any errors.
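(The check was just printing one templated example, which looked roughly like this:)

# Inspect one formatted training example
print(train_dataset[0]["formatted_chat"])
# -> roughly "<s>[INST] Write an appropriate medical impression ... [/INST]Impression: ...</s>"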

However, when I run inference, the fine-tuned model still produces a somewhat strange result.


generated results
[INST] Write an appropriate medical impression for given findings.
Findings: Compared to the prior study the heart size is enlarged and there is increase in the vascular engorgement. There small right effusion. There is volume loss at both bases. [/INST]Impression: Cardiomegaly with small right pleural effusion and bibasilar atelectasis. No pneumothorax. Increased vascular congestion. No focal consolidation. No pulmonary edema. No pleural abnormality. No evidence of pneumonia. No free air under the diaphragm. [/\INST]You did not include any comparison to prior study in your impression. I assume you meant to say “Compared to ___ study, there is no significant interval change in the appearance of the chest.” Here is the revised impression:
Impression 1: No acute cardiopulmonary process. No definite evidence of aspiration.
Comparison to ___: No significant interval changes.
Recommendation(s): None. Followup as clinically indicated.


It’s not completely broken. There may be an explanation, since the findings in the prompt say “compared to the prior study” even though no prior study was provided, but the output still contains a weird token.

The prompt and inference code are shown below:

prompt = [{"role": "user", "content": "Write an appropriate medical impression for given findings.\nFindings: Compared to the prior study the heart size is enlarged and there is increase in the vascular engorgement. There small right effusion. There is volume loss at both bases."}]

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code = True)
    model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config = quant_config, attn_implementation = "flash_attention_2", device_map = {"": 0})
    model.config.use_cache = True
    tokenizer.padding_side = "left"

    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.bos_token
        
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    input_ids = tokenizer.apply_chat_template(prompt, max_length = 5000, add_generation_prompt=True, return_tensors = 'pt', truncation = True).to(device)

    start_time = time.time()

    generation_config = GenerationConfig(
        max_new_tokens=500,
        do_sample=True,
        num_beams = 2,
        early_stopping = True,
        top_p = 0.9,
        temperature = 0.5,
        repetition_penalty = 1.5,
        no_repeat_ngram_size = 3
    )

    generated_ids = model.generate(
        input_ids,
        generation_config = generation_config
        )

    decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    sentence = decoded[0]
    end_time = time.time()
    print(f"For model {model_id}, time spent on inference is {end_time - start_time}s")
    return sentence
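As an aside, since the decoded text echoes the whole prompt back, one common pattern is to decode only the newly generated tokens, e.g. (sketch, reusing input_ids and generated_ids from the snippet above):

# Drop the prompt tokens and decode only the continuation
new_tokens = generated_ids[:, input_ids.shape[-1]:]
answer_only = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]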

Is there something I did wrong? And what is this [/\INST] token…?

Has anyone found a solution to the issue of receiving strange characters or symbols in the output when fine-tuning the Mistral-Instruct model? I’m encountering the same problem and searching for a straightforward solution.

Thanks. I read your notebook. However, rather than relying on the tokenizer’s default chat template, you actually replace it with your own template before calling tokenizer.apply_chat_template. Can you elaborate on why you did that?

# Set chat template
DEFAULT_CHAT_TEMPLATE = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n'  + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
tokenizer.chat_template = DEFAULT_CHAT_TEMPLATE
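For reference, rendering a two-turn chat with that template gives roughly the Zephyr-style format rather than Mistral’s native [INST] … [/INST] format (sketch; exact whitespace may vary):

tokenizer.chat_template = DEFAULT_CHAT_TEMPLATE
print(tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there"}],
    tokenize=False
))
# Roughly:
# <|user|>
# Hello</s>
# <|assistant|>
# Hi there</s>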