PPLM runtime error with fine-tuned model

I'm getting an error when I run the PPLM example from GitHub with a GPT-2 model I fine-tuned with the language-modeling example:

run_pplm.py
-B /path/to/BOW.txt
--pretrained_model=/user/FindtunedModelOut
--cond_text="potato"
--num_samples=20
--length=150
--stepsize=0.03
--num_iterations=3
--window_length=5
--gamma=1.5
--gm_scale=0.95
--kl_scale=0.01
--colorama
--verbosity=regular
--sample

and I get this error:

Traceback (most recent call last):
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 936, in <module>
    run_pplm_example(**vars(args))
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 768, in run_pplm_example
    unpert_gen_tok_text, pert_gen_tok_texts, _, _ = full_text_generation(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 472, in full_text_generation
    pert_gen_tok_text, discrim_loss, loss_in_time = generate_text_pplm(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 584, in generate_text_pplm
    pert_past, _, grad_norms, loss_this_iter = perturb_past(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 213, in perturb_past
    bow_logits = torch.mm(probs, torch.t(one_hot_bow))
RuntimeError: mat1 dim 1 must match mat2 dim 0

I'm not sure whether I broke something in the fine-tuning or in PPLM. The fine-tuned model does generate text with the run_generation example, and if I switch the model back to plain gpt2, PPLM runs fine with the same BOW.txt. Does anyone know how to fix this error, or what I'm doing wrong?
Thanks.
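For context, the failing line multiplies the model's next-token probability distribution by the transposed one-hot bag-of-words matrix, so both need the same vocabulary dimension. The mismatch can be reproduced in isolation; the sizes below are assumptions (GPT-2's base vocab is 50257, and adding special tokens grows one side but not the other):

```python
import torch

# Shapes are illustrative: one tensor sized to the base GPT-2 vocab,
# the other to a vocab enlarged by 3 added special tokens.
base_vocab = 50257
resized_vocab = 50257 + 3

probs = torch.rand(1, base_vocab)                 # model's softmax output
one_hot_bow = torch.zeros(10, resized_vocab)      # one row per bag-of-words token

try:
    torch.mm(probs, torch.t(one_hot_bow))         # inner dims 50257 vs 50260 disagree
except RuntimeError as e:
    print("RuntimeError:", e)                     # same class of error as in the traceback
```

(Which side ends up with the larger size depends on how the tokenizer and model were saved; in Hugging Face tokenizers, `tokenizer.vocab_size` does not count added tokens, while `len(tokenizer)` does.)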

The problem seems to have to do with the special tokens I added.
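If the added special tokens are indeed the cause, one possible workaround (a sketch, not the PPLM authors' fix) is to build the bag-of-words one-hot matrix with the model's output dimension instead of the tokenizer's base vocab size, so the shapes agree again. The function name and sizes here are illustrative:

```python
import torch

def build_one_hot_bow(bow_ids, model_vocab):
    """Build a [num_words, model_vocab] one-hot matrix sized to the model's
    output dimension (e.g. model.config.vocab_size after resizing), rather
    than the tokenizer's base vocab size."""
    one_hot = torch.zeros(len(bow_ids), model_vocab)
    one_hot.scatter_(1, torch.tensor(bow_ids).unsqueeze(1), 1.0)
    return one_hot

# Assumption: the fine-tuned model's head was resized to 50260
# (50257 base tokens + 3 added special tokens).
one_hot_bow = build_one_hot_bow([42, 1337], 50260)
probs = torch.rand(1, 50260)
bow_logits = torch.mm(probs, torch.t(one_hot_bow))  # shapes now agree: [1, 2]
```

The other direction (making the fine-tuned model's embedding matrix match the tokenizer, e.g. via `model.resize_token_embeddings(len(tokenizer))` before saving) would also keep the two sizes consistent.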