Feeding a Knowledge Base into a Transformer Model

Hey HuggingFace family,
I’m a CS undergrad working in NLP. I’m really fascinated by the idea of incorporating everyday commonsense reasoning into existing language models. There are several commonsense knowledge bases, like ConceptNet, ATOMIC, Open Mind Common Sense (MIT), Cyc, etc., but they exist in the form of knowledge graphs and ontologies rather than plain text.
My question is, how can I go about feeding these knowledge bases into current transformer LMs like BERT and GPT-2?
Is there a way to fine-tune them so that they retain their language modelling capabilities but also learn new commonsense understanding of our physical world?
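
For concreteness, the only direction I’ve thought of so far is to “verbalize” the knowledge-graph triples into plain sentences and then continue ordinary LM fine-tuning on that text. Here’s a rough sketch of what I mean (the triples and relation templates below are made up purely for illustration, and the hyperparameters are just placeholders):

```python
# Rough sketch: turn ConceptNet-style (head, relation, tail) triples into
# sentences, then continue causal-LM fine-tuning of GPT-2 on them.
import torch
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          Trainer, TrainingArguments)

# Hypothetical handful of triples (illustrative only).
triples = [
    ("fork", "UsedFor", "eating food"),
    ("ice", "HasProperty", "cold"),
    ("alarm clock", "CapableOf", "waking a person up"),
]

# Simple relation -> sentence templates (again, just illustrative).
templates = {
    "UsedFor": "A {h} is used for {t}.",
    "HasProperty": "{h} is {t}.",
    "CapableOf": "An {h} is capable of {t}.",
}

sentences = [templates[r].format(h=h, t=t) for h, r, t in triples]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

class TripleDataset(torch.utils.data.Dataset):
    """Tokenized verbalized triples; labels mirror input_ids for causal LM loss."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=32, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        labels = item["input_ids"].clone()
        labels[item["attention_mask"] == 0] = -100  # ignore padding in the loss
        item["labels"] = labels
        return item

args = TrainingArguments(
    output_dir="gpt2-conceptnet-sketch",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,  # kept small so the model (hopefully) doesn't forget its LM ability
)

Trainer(model=model, args=args, train_dataset=TripleDataset(sentences)).train()
```

My worry with this naive approach is catastrophic forgetting, which is exactly why I’m asking whether there’s a better-established way to do it.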


Hello @ShivamArya, did you ever figure out how to do this?