Getting output attentions for encoder-decoder attention in decoder layers

Hi, so I am using a Marian MT model and trying to get the multi-headed encoder-decoder attention weights from the decoder layers. I use forward hooks to capture those attention outputs and pass the config: config = MarianConfig.from_pretrained(romance_model_name, output_attentions=True). For the self-attention in the decoder layers, the hooks return a tuple of (attn_value, attn_outputs) as expected; however, for the encoder-decoder attention, they return a tuple of (attn_value, None). Is there a way to get the multi-headed attention outputs for the encoder-decoder attention?
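
For reference, here is a minimal sketch of the hook setup I describe above. The checkpoint name, hook bookkeeping, and the dummy decoder input are illustrative; the layer attribute names (self_attn, encoder_attn) are the ones used in the transformers Marian/BART-style decoder layers:

```python
import torch
from transformers import MarianConfig, MarianMTModel, MarianTokenizer

# Illustrative checkpoint; any Marian MT checkpoint should behave the same.
romance_model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"

config = MarianConfig.from_pretrained(romance_model_name, output_attentions=True)
tokenizer = MarianTokenizer.from_pretrained(romance_model_name)
model = MarianMTModel.from_pretrained(romance_model_name, config=config)
model.eval()

captured = {}

def make_hook(name):
    # The attention modules return a tuple whose second element is the
    # attention weights (or None when they are not propagated).
    def hook(module, inputs, output):
        captured[name] = output
    return hook

# Register hooks on both the self-attention and the encoder-decoder
# (cross) attention of every decoder layer.
for i, layer in enumerate(model.model.decoder.layers):
    layer.self_attn.register_forward_hook(make_hook(f"decoder.{i}.self_attn"))
    layer.encoder_attn.register_forward_hook(make_hook(f"decoder.{i}.encoder_attn"))

batch = tokenizer(["Hello world"], return_tensors="pt")
# Dummy single-step decoder input, just to trigger a forward pass.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    model(**batch, decoder_input_ids=decoder_input_ids)

# The self_attn entries contain per-head weights, but the encoder_attn
# entries come back with None in the second slot.
for name, out in captured.items():
    weights = out[1]
    print(name, None if weights is None else tuple(weights.shape))
```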