Continual training on a fine-tuned model to learn additional tasks

I have fine-tuned a multi-qa-MiniLM-L6-cos-v1 model for similarity search between user queries and product descriptions for an e-commerce search. Now I want the same fine-tuned model to also learn synonyms from my word corpus. For the first fine-tuning I used MultipleNegativesRankingLoss, and my data consists of positive (query, product_description) pairs and negative (query, product_description) pairs.
For the synonyms task I likewise have positive (word, synonym) pairs. Duplicates can occur, so I have made sure that a batch in my train loader doesn't contain any duplicates.
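
For concreteness, a simplified sketch of this setup, using sentence-transformers' NoDuplicatesDataLoader (the pairs shown are illustrative):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

# Illustrative (word, synonym) pairs; in practice this would be the
# full set of ~100k synonym pairs.
train_examples = [
    InputExample(texts=["sofa", "couch"]),
    InputExample(texts=["sneakers", "trainers"]),
    InputExample(texts=["couch", "settee"]),
]

# MultipleNegativesRankingLoss treats every other positive in the batch
# as a negative, so a duplicated text inside a batch would be scored as
# a false negative. NoDuplicatesDataLoader never repeats a text in a batch.
train_loader = NoDuplicatesDataLoader(train_examples, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)
```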
I have used the Elastic Weight Consolidation (EWC) technique when training on the synonyms task in order to also retain the knowledge from the previous task. My synonym data is quite small: I have only 1 lakh (about 100k) pairs.
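
A sketch of how the EWC side could look: estimate the diagonal Fisher information of the task-A loss and snapshot the task-A weights. Here `task_a_examples` stands in for the (query, product_description) InputExamples from the first fine-tuning, and `estimate_fisher` is a hand-rolled helper, not a library function:

```python
import torch
from sentence_transformers.util import batch_to_device

device = next(model.parameters()).device

# task_a_examples: the (query, product_description) examples from task A.
task_a_loader = NoDuplicatesDataLoader(task_a_examples, batch_size=16)
task_a_loader.collate_fn = model.smart_batching_collate  # tokenizes batches

def estimate_fisher(model, loss_fn, data_loader, n_batches=200):
    """Diagonal Fisher approximation: average the squared gradients of
    the task-A loss over a sample of task-A batches."""
    fisher = {n: torch.zeros_like(p)
              for n, p in model.named_parameters() if p.requires_grad}
    model.train()
    seen = 0
    for features, labels in data_loader:
        if seen >= n_batches:
            break
        features = [batch_to_device(f, device) for f in features]
        model.zero_grad()
        loss_fn(features, labels.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        seen += 1
    return {n: f / max(seen, 1) for n, f in fisher.items()}

fisher = estimate_fisher(model, train_loss, task_a_loader)
# Snapshot of the task-A weights that the EWC penalty anchors to.
star_params = {n: p.detach().clone() for n, p in model.named_parameters()}
```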
The parameters I use are (see the sketch after this list for how they fit together):
batch_size: 16
lambda_ewc: 0.1
optimizer: Adam
learning_rate: 5e-6
loss: MultipleNegativesRankingLoss with EWC penalty
epochs: 20
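
Since model.fit doesn't expose a hook for adding a penalty term to the loss, one way to combine MultipleNegativesRankingLoss with the EWC penalty under these hyperparameters is a manual loop. `fisher`, `star_params`, `train_loader`, and `train_loss` come from the sketches above; this is simplified (no scheduler, warmup, or gradient clipping):

```python
def ewc_penalty(model, fisher, star_params):
    """Quadratic pull toward the task-A weights, weighted per parameter
    by its estimated Fisher importance."""
    penalty = torch.zeros((), device=device)
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - star_params[n]) ** 2).sum()
    return penalty

lambda_ewc = 0.1
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
train_loader.collate_fn = model.smart_batching_collate

for epoch in range(20):
    for features, labels in train_loader:
        features = [batch_to_device(f, device) for f in features]
        optimizer.zero_grad()
        task_loss = train_loss(features, labels.to(device))
        loss = task_loss + lambda_ewc * ewc_penalty(model, fisher, star_params)
        loss.backward()
        optimizer.step()
```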

I am not able to get good accuracy even on the training data (only 27%). Can anyone help me with how to improve accuracy while also retaining the previous learning? If there are any other approaches I could use when fine-tuning for task B, please share.


Did you fine-tune the model by modifying the source code? How did you add EWC to the training?