Is it correct to load weights from task A to train task B?

I want to use two modelling tasks, (a) causal language modelling and
(b) masked language modelling, to train my newly added tokens.

My pseudo-code is below.

## add new tokens to the tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "vinai/phobert-large"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
with open("tokenizer_vocab.txt", "r") as vocab_file:
    new_tokens = vocab_file.read().splitlines()
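Before adding them, I deduplicate the candidate tokens against the tokenizer's existing vocabulary (`add_tokens` already skips known tokens, but filtering keeps my list and counts honest). A toy sketch of that filtering, with a made-up stand-in vocabulary in place of `tokenizer.get_vocab()` and the lines from `tokenizer_vocab.txt`:

```python
# Stand-ins for tokenizer.get_vocab() and the file contents.
existing_vocab = {"xin", "chào", "việt", "nam"}
candidates = ["xin", "phở", "bánh_mì", "phở", "nam"]

seen = set(existing_vocab)
new_tokens = []
for tok in candidates:
    tok = tok.strip()
    if tok and tok not in seen:   # skip empty lines and duplicates
        seen.add(tok)
        new_tokens.append(tok)

print(new_tokens)  # ['phở', 'bánh_mì']
```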
_________________
## Train the (a) task - Causal language modeling
num_added_tokens = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
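My understanding of why the training step after this matters: `resize_token_embeddings` keeps all the old embedding rows and appends freshly initialized rows for the new tokens, so those rows only become meaningful through the training tasks. A toy sketch of that behaviour (plain lists standing in for the real embedding matrix; the init scale is made up):

```python
import random

# Toy embedding matrix: one row (vector) per token id.
old_embeddings = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # vocab 3, dim 2

def resize_embeddings(embeddings, new_vocab_size):
    """Keep existing rows; append randomly initialized rows for new tokens."""
    dim = len(embeddings[0])
    resized = [row[:] for row in embeddings]
    while len(resized) < new_vocab_size:
        resized.append([random.gauss(0.0, 0.02) for _ in range(dim)])
    return resized

new_embeddings = resize_embeddings(old_embeddings, 5)
print(len(new_embeddings))                    # 5
print(new_embeddings[:3] == old_embeddings)   # True: old rows untouched
```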

## train the model following the Hugging Face language-modeling notebook:
## https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=JAscNNUD3l-P

## save the model and tokenizer
model.save_pretrained('weights_tokenizer/')
tokenizer.save_pretrained('weights_tokenizer/')
___________________
## Train the (b) task - Masked language modeling
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("weights_tokenizer/")
tokenizer = AutoTokenizer.from_pretrained("weights_tokenizer/")
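My understanding of what `from_pretrained` does here: every saved weight whose name and shape match the new model carries over (the shared encoder, including the new token embeddings trained in task (a)), while any parameters that don't match, typically a task-specific head, stay freshly initialized, and Transformers prints a warning listing them. A toy sketch of that name-matching transfer, with made-up parameter names and shapes standing in for real state dicts:

```python
# Toy state dicts: parameter name -> shape (stand-ins for real tensors).
saved_clm_weights = {
    "encoder.layer.0.weight": (768, 768),
    "embeddings.word_embeddings": (64005, 768),  # includes the new tokens
    "causal_head.decoder": (64005, 768),         # causal-LM head
}
mlm_model_weights = {
    "encoder.layer.0.weight": (768, 768),
    "embeddings.word_embeddings": (64005, 768),
    "mlm_head.decoder": (64005, 768),            # masked-LM head
}

# Transfer every parameter whose name and shape match; report the rest.
transferred, newly_initialized = [], []
for name, shape in mlm_model_weights.items():
    if saved_clm_weights.get(name) == shape:
        transferred.append(name)        # encoder + embeddings carry over
    else:
        newly_initialized.append(name)  # head stays randomly initialized

print(transferred)        # encoder and embedding weights
print(newly_initialized)  # ['mlm_head.decoder']
```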

My question: is it correct to train the (b) task starting from the weights of the (a) task? (I think this can somehow further enrich the new tokens.)

Or is there any way to train my new tokens on both of these tasks, so that I can then use the weights (from training on both tasks above) to train my model on a task (c)?

I do appreciate your time and sharing.