Can I pass a text file to the tokenizer?

Do I have to use the datasets library to be able to fine-tune GPT-2? If yes, should I add the special tokens to the file before passing it to the tokenizer?