Generating coherent, context-related text with a generative model (GPT-2 etc.)

I am trying to generate sentences conditioned on some context, using one of the datasets from the `datasets` library. I've tried fine-tuning GPT-2 on a portion of the dataset (50% of the train split), but I'm still not able to generate coherent, context-related text. For instance, if a noun is present in the context, the generated sentence should most likely relate to that noun (or resolve a coreference to it), and it should pick up the verb from the context and build on it in the generated sentence. I don't see anything like that happening; the generated sentences are in fact very divergent from the context. I'd appreciate any suggestions of methods or papers (ideally with code) for tackling this problem. Thanks.
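In case the decoding side matters: at generation time I'm sampling from the model's output distribution, and I've been wondering whether restricting sampling to the high-probability "nucleus" of tokens (top-p filtering) would cut down on off-topic continuations. A minimal, model-free NumPy sketch of what I mean (illustrative only, not my actual training/generation code):

```python
import numpy as np

def top_p_filter(logits, p=0.9):
    """Zero out tokens outside the smallest set whose cumulative
    probability reaches p, then renormalize (nucleus sampling filter)."""
    # Softmax over the logits (shift by max for numerical stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Sort tokens by probability, descending, and find the nucleus cutoff.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep_count = np.searchsorted(cum, p) + 1  # tokens needed to reach p
    keep = order[:keep_count]
    # Tokens outside the nucleus get zero probability.
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Toy logits for a 5-token vocabulary.
logits = np.array([4.0, 3.0, 1.0, 0.5, 0.1])
filtered = top_p_filter(logits, p=0.9)
```

Sampling from `filtered` instead of the raw softmax avoids the low-probability tail, which is one place incoherent tokens come from; but if the generations are wholesale divergent from the context, I suspect my problem is upstream of decoding.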