PPLM runtime error with fine-tuned model

I’m getting an error when I run the PPLM example from GitHub with a GPT-2 model I fine-tuned with the language modeling example. I pass the bag-of-words file with

-B /path/to/BOW.txt

and I get the error:

Traceback (most recent call last):
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 936, in <module>
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 768, in run_pplm_example
    unpert_gen_tok_text, pert_gen_tok_texts, _, _ = full_text_generation(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 472, in full_text_generation
    pert_gen_tok_text, discrim_loss, loss_in_time = generate_text_pplm(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 584, in generate_text_pplm
    pert_past, _, grad_norms, loss_this_iter = perturb_past(
  File "/pythonProjects/transformerTest/venv/PPLM/run_pplm.py", line 213, in perturb_past
    bow_logits = torch.mm(probs, torch.t(one_hot_bow))
RuntimeError: mat1 dim 1 must match mat2 dim 0
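For context: `torch.mm` multiplies a `(n, k)` matrix by a `(k, m)` matrix, so the width of `probs` (the model's vocabulary size) must equal the height of `torch.t(one_hot_bow)` (the vocabulary size the one-hot matrix was built with). A minimal sketch of the mismatch in pure Python, with illustrative sizes (the number of added tokens and BOW words are assumptions, not values from my setup):

```python
def mm_shape(a_shape, b_shape):
    """Shape rule for torch.mm: (n, k) @ (k, m) -> (n, m); inner dims must agree."""
    n, k1 = a_shape
    k2, m = b_shape
    if k1 != k2:
        raise RuntimeError("mat1 dim 1 must match mat2 dim 0")
    return (n, m)

base_vocab = 50257        # stock gpt2 vocabulary size
added_tokens = 3          # illustrative: special tokens added before fine-tuning
finetuned_vocab = base_vocab + added_tokens

# probs comes from the fine-tuned model: shape (1, finetuned_vocab).
# one_hot_bow was sized from the stock vocab: (num_bow_words, base_vocab),
# so torch.t(one_hot_bow) is (base_vocab, num_bow_words) -> inner dims differ.
try:
    mm_shape((1, finetuned_vocab), (base_vocab, 40))
except RuntimeError as e:
    print(e)  # mat1 dim 1 must match mat2 dim 0
```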

I’m not sure whether I made a mistake in the fine-tuning or in PPLM, but the model does generate text with the run_generation example, and if I just switch the model back to gpt2, PPLM runs fine with BOW.txt. Does anyone know how to fix this error, or what I am doing wrong?

The problem seems to be related to the special tokens I added: adding tokens enlarges the model's vocabulary, which would explain why the model's output distribution no longer matches the dimensions of the one-hot bag-of-words matrix.
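If that diagnosis is right, the likely culprit is how the one-hot BOW matrix is sized: in the Hugging Face tokenizer API, `tokenizer.vocab_size` does not include added special tokens, while `len(tokenizer)` does. A hedged sketch of a fix (the function name is mine, not from the repo; the idea is that the one-hot matrix should be built with the full, post-addition vocabulary length so its width matches the model's output):

```python
import torch

def build_one_hot_bow(bow_indices, vocab_len):
    """Build a (num_words, vocab_len) one-hot matrix for single-token BOW words.

    vocab_len should be len(tokenizer), which counts added special tokens,
    rather than tokenizer.vocab_size, which stays at the base model's size.
    """
    one_hot = torch.zeros(len(bow_indices), vocab_len)
    for row, token_id in enumerate(bow_indices):
        one_hot[row, token_id] = 1.0
    return one_hot
```

If run_pplm.py sizes its one-hot vectors with `tokenizer.vocab_size`, swapping in `len(tokenizer)` there (and making sure the fine-tuned model was saved after `resize_token_embeddings`) would keep both operands of `torch.mm` at the same vocabulary dimension.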