`attention_mask` missing from `generate()` output

Hi, I am using Transformers installed from source (Git) on Linux. I am passing `output_attentions=True`, `return_dict_in_generate=True`, and `return_attention_mask=True` in my `GenerationConfig`, but the only keys in the output of `generate()` are `sequences`, `attentions`, and `past_key_values`. The `attention_mask` that I am supposed to pass back to the model on the next call is not being returned by `generate()`.
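
Here is a minimal sketch of what I'm doing ("gpt2" is just a placeholder model; I see the same output keys with the models I actually use):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Placeholder model; the behavior does not seem model-specific.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello, world", return_tensors="pt")

gen_config = GenerationConfig(
    max_new_tokens=10,
    output_attentions=True,
    return_dict_in_generate=True,
)

out = model.generate(**inputs, generation_config=gen_config)
print(out.keys())  # odict_keys(['sequences', 'attentions', 'past_key_values'])
```

For now I rebuild the mask myself by extending the prompt's `attention_mask` with ones for the newly generated tokens (this assumes a single, unpadded input sequence), but I'd like to know whether `generate()` is supposed to return it instead:

```python
# Workaround: the generated tokens are all real tokens, so their mask is 1.
num_new = out.sequences.shape[1] - inputs["attention_mask"].shape[1]
attention_mask = torch.cat(
    [
        inputs["attention_mask"],
        torch.ones(
            (inputs["attention_mask"].shape[0], num_new),
            dtype=inputs["attention_mask"].dtype,
        ),
    ],
    dim=-1,
)
```

Is this the intended way to get the mask for a follow-up call, or am I missing a flag?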