Nuance in the usage of GPT2 when setting the trainable attribute

from transformers import GPT2LMHeadModel, pipeline

# tokenizer_gpt2, ds (my dataset) and args are defined earlier in my script
gpt2 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir="./cache", local_files_only=True)
gpt2.trainable = False
gpt2.config.pad_token_id = 50256  # use the EOS token id as the padding token
gen_nlp = pipeline("text-generation", model=gpt2, tokenizer=tokenizer_gpt2, device=args.gpu, return_full_text=False)

contents = ds.df_train.sample(10)['content'].tolist()
results_trunk = gen_nlp(contents, max_length=64, do_sample=True, top_p=0.9, top_k=0,
                        repetition_penalty=1.0, num_return_sequences=4, clean_up_tokenization_spaces=True)

I am using an off-the-shelf GPT2 model for open-ended generation. I noticed that the model has an attribute, trainable, which can be set to False or True before use.
Does anyone know the nuance of this setting?
What is the best value for this attribute when the model is only used for generation?
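
For reference, the way I would normally freeze a model in plain PyTorch is sketched below (this is just my current understanding, assuming the standard torch / transformers APIs); I am not sure whether setting trainable is equivalent to this:

import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2')

# Switch off dropout etc. for inference
model.eval()

# Disable gradient tracking so no weights can be updated
for param in model.parameters():
    param.requires_grad = False

# Alternatively, skip building the autograd graph entirely during generation
with torch.no_grad():
    input_ids = torch.tensor([[50256]])  # a single EOS token as a trivial prompt
    output = model.generate(input_ids, max_length=20, do_sample=True, top_p=0.9)

Is setting trainable = False doing the same thing as the above, or something different?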

Thanks.