Fine-tuning GPT-2 programmatically on custom data in TensorFlow 2 for text generation

I am trying to use GPT-2 for text generation in TF 2 and to test some basic fine-tuning on custom data, but the examples I have found either rely on the provided command-line scripts or assume a classification task. Are there any examples of fine-tuning the model programmatically in TF 2 over custom text data to produce new generated text? I am also unsure whether I need to set it up like any other sequence model (the way I would an LSTM), or whether GPT-2 can simply take in the data after running it through the GPT-2 tokenizer.
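
To make the question concrete, this is roughly what I'm imagining, assuming the Hugging Face transformers API (the corpus path, block size, batch size, and learning rate below are placeholder values I made up, not something I've verified):

```python
import tensorflow as tf
from transformers import GPT2TokenizerFast, TFGPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

# Tokenize the whole corpus and chop it into fixed-length blocks.
# "my_corpus.txt" and block_size are placeholders.
with open("my_corpus.txt", encoding="utf-8") as f:
    ids = tokenizer(f.read(), return_tensors="tf")["input_ids"][0]

block_size = 128
n_blocks = len(ids) // block_size
blocks = tf.reshape(ids[: n_blocks * block_size], (n_blocks, block_size))

# For causal LM fine-tuning the labels are just the inputs; the model
# shifts them internally. With no loss passed to compile(), recent
# transformers versions fall back to the model's built-in LM loss.
dataset = (
    tf.data.Dataset.from_tensor_slices({"input_ids": blocks, "labels": blocks})
    .shuffle(1000)
    .batch(2)
)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(dataset, epochs=1)

# Sample from the fine-tuned model.
prompt = tokenizer("Once upon a time", return_tensors="tf")["input_ids"]
out = model.generate(prompt, max_length=50, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Is treating it as a plain Keras model like this the right approach, or is there a recommended way to do the fine-tuning step for generation?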