Why is get_parameter_names() implemented with named_children()?

In transformers/trainer_pt_utils.py, get_parameter_names() is defined as follows:

def get_parameter_names(model, forbidden_layer_types):
    """
    Returns the names of the model parameters that are not inside a forbidden layer.
    """
    result = []
    for name, child in model.named_children():
        result += [
            f"{name}.{n}"
            for n in get_parameter_names(child, forbidden_layer_types)
            if not isinstance(child, tuple(forbidden_layer_types))
        ]
    # Add model specific parameters (defined with nn.Parameter) since they are not in any child.
    result += list(model._parameters.keys())
    return result
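
To make the recursion concrete, here is a toy example (my own, not from the library) showing how a forbidden layer type is filtered out:

import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)
        self.norm = nn.LayerNorm(4)

model = Toy()
print(get_parameter_names(model, [nn.LayerNorm]))
# ['proj.weight', 'proj.bias'] -- the LayerNorm parameters are excluded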

The linear layers in my model are created with bias=False. When I use the following code to print the model's parameters, no bias entries appear in the output:

for k,v in model.named_parameters():
    print(k)


# output:
base_model.model.model.embed_tokens.weight
base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight
base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight
base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight
base_model.model.model.layers.0.self_attn.k_proj.base_layer.weight

But when I call get_parameter_names() on the same model, bias names show up in the result even though those parameters do not exist:

'base_model.model.model.embed_tokens.weight',
 'base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight',
 'base_model.model.model.layers.0.self_attn.q_proj.base_layer.bias',
 'base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight',
 'base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.bias',
 'base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight',
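
My first guess is that the stray bias entries come from how PyTorch registers a missing bias: nn.Linear(..., bias=False) still calls register_parameter('bias', None), so 'bias' stays a key in the module's _parameters dict. named_parameters() skips None entries, but the list(model._parameters.keys()) line in get_parameter_names() does not. A minimal check (my own sketch):

import torch.nn as nn

# bias=False still registers 'bias' as None, so the key survives in _parameters
layer = nn.Linear(4, 4, bias=False)

print(list(layer._parameters.keys()))
# ['weight', 'bias']   <- what get_parameter_names() picks up

print([name for name, _ in layer.named_parameters()])
# ['weight']           <- named_parameters() skips None entries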

I want to know why get_parameter_names() needs to be implemented with "for name, child in model.named_children():". Wouldn't it be simpler to build the list with for name, param in model.named_parameters() instead?
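
For what it's worth, here is roughly what I had in mind (a hypothetical sketch, untested; get_parameter_names_alt and the prefix-matching approach are my own, not from transformers):

def get_parameter_names_alt(model, forbidden_layer_types):
    # Collect the qualified names of all modules of a forbidden type.
    forbidden_prefixes = [
        f"{name}."
        for name, module in model.named_modules()
        if name and isinstance(module, tuple(forbidden_layer_types))
    ]
    # Keep every parameter that does not live under a forbidden module.
    return [
        name
        for name, _ in model.named_parameters()
        if not any(name.startswith(prefix) for prefix in forbidden_prefixes)
    ]

Because named_parameters() skips parameters registered as None, this version would also not report the phantom bias names above. Is the named_children() recursion deliberately keeping those entries, or is it just an implementation detail?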