Fine-tuning GPT-2 language model error: unexpected keyword argument 'cache_dir'

When fine-tuning GPT-2 using the example script, I’m seeing the following error:

```shell
!python ./transformers/examples/language-modeling/run_language_modeling.py \
    --model_type=gpt2 \
    --model_name_or_path=gpt2 \
    --do_train \
    --train_data_file=../data/train.txt \
    --num_train_epochs 5 \
    --output_dir=./model_output \
    --overwrite_output_dir \
    --save_steps 20000 \
    --per_gpu_train_batch_size 4
```

```
Traceback (most recent call last):
  File "./transformers/examples/language-modeling/run_language_modeling.py", line 313, in <module>
    main()
  File "./transformers/examples/language-modeling/run_language_modeling.py", line 242, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "./transformers/examples/language-modeling/run_language_modeling.py", line 143, in get_dataset
    cache_dir=cache_dir,
TypeError: __init__() got an unexpected keyword argument 'cache_dir'
```
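For context, this kind of `TypeError` usually means the version mismatch is between the example script (from the repo's master branch) and the installed `transformers` package: the script passes `cache_dir` to a dataset class whose installed `__init__` predates that parameter. A quick way to confirm is to inspect the installed class's signature. The sketch below uses a hypothetical stand-in class (`OldTextDataset`) in place of the real `transformers.TextDataset`, purely to illustrate the check:

```python
import inspect

# Hypothetical stand-in for an older TextDataset whose __init__
# does not yet accept a cache_dir argument.
class OldTextDataset:
    def __init__(self, tokenizer=None, file_path=None, block_size=512):
        pass

# Inspect the constructor's signature; if 'cache_dir' is missing,
# calling OldTextDataset(..., cache_dir=...) raises the same TypeError.
sig = inspect.signature(OldTextDataset.__init__)
print("cache_dir" in sig.parameters)  # → False
```

Running the same `inspect.signature` check against the real installed `TextDataset` would show whether the pip-installed library is older than the script being run.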

I’ve followed the suggestion from [issue #185](https://github.com/huggingface/transformers/issues/185).

Would appreciate any pointers. Thank you!