Question about unsupervised T5 training

Hello! I'm working on a project that involves matching search queries against a contact database. For example, I have a golden set of search queries and their corresponding results:

query: john smith (ACME corporation)
result: Smith, John 123 Main Street etc
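For reference, here's roughly how I'm feeding these pairs to T5 at the moment (a simplified sketch of the classification fine-tuning; the negative example, the "relevant"/"not relevant" label strings, the prefixes, and the hyperparameters are all just placeholders I'm experimenting with):

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Toy golden pair plus a made-up negative (placeholder data)
pairs = Dataset.from_dict({
    "query":  ["john smith (ACME corporation)", "john smith (ACME corporation)"],
    "record": ["Smith, John 123 Main Street",   "Doe, Jane 456 Oak Avenue"],
    "label":  ["relevant", "not relevant"],
})

model_name = "google/t5-v1_1-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(batch):
    # T5-style text-to-text classification: the label is generated as text
    texts = [f"query: {q} record: {r}" for q, r in zip(batch["query"], batch["record"])]
    enc = tokenizer(texts, truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=batch["label"],
                              truncation=True, max_length=8)["input_ids"]
    return enc

tokenized = pairs.map(preprocess, batched=True, remove_columns=pairs.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-golden-finetune",
        per_device_train_batch_size=8,
        num_train_epochs=3,
        learning_rate=1e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```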

So I have a small percentage of “correct” search queries and their results, which I'm fine-tuning T5 on as a text classification task. The problem is that I need to get the remaining ~95% of unobserved contacts into the model somehow, either by supplying them as some form of context, or by fine-tuning T5 in an unsupervised fashion on the contents of the contact database, in the hope that the model will generalize the semantic associations it learns from the golden set of searches to those unseen contacts.
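To make the unsupervised idea concrete, this is the kind of thing I have in mind for the raw contact records (a minimal sketch of T5-style span corruption with sentinel tokens; the masking function is a simplified word-level stand-in for the original noise objective, and the record text is a placeholder):

```python
import random
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def span_corrupt(text, mask_prob=0.15, max_span=3):
    """Simplified T5 span corruption: replace random word spans with sentinel
    tokens in the input and reconstruct them in the target."""
    words = text.split()
    inputs, targets = [], []
    i, sentinel = 0, 0
    while i < len(words):
        if random.random() < mask_prob and sentinel < 100:
            span = words[i:i + random.randint(1, max_span)]
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}> " + " ".join(span))
            i += len(span)
            sentinel += 1
        else:
            inputs.append(words[i])
            i += 1
    if not targets:  # make sure at least one span is corrupted
        targets.append("<extra_id_0> " + inputs[-1])
        inputs[-1] = "<extra_id_0>"
    return " ".join(inputs), " ".join(targets)

model_name = "google/t5-v1_1-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# One raw record from the contact database (placeholder text)
record = "Smith, John 123 Main Street Springfield ACME Corporation"
src, tgt = span_corrupt(record)

enc = tokenizer(src, return_tensors="pt")
labels = tokenizer(text_target=tgt, return_tensors="pt").input_ids
loss = model(**enc, labels=labels).loss  # standard denoising loss
loss.backward()
```

In practice I'd wrap this in a data collator and Trainer loop over all the contact records rather than single examples, but is continued pretraining with this objective a sensible way to get the unseen contacts into the model?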

I am experimenting with both T5 v1.1-large and BeIR/query-gen-msmarco-t5-large-v1 for this.

TIA!