DoRA for depthwise-convolutional layers

Hello!

I am currently working on a project on applying PEFT methods to CNNs. I recently implemented LoRA for MobileNetV2, which worked fine. I also want to investigate DoRA, but I failed to get it running on MobileNetV2. The issue seems to be that the DoRA implementation from the peft package is not compatible with depthwise convolutional layers.
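For context, this is roughly my setup. The target_modules regex is simplified here; any pattern that hits the depthwise 3x3 convs (groups == channels) inside the inverted-residual blocks triggers the problem:

    import torch
    from torchvision.models import mobilenet_v2
    from peft import LoraConfig, get_peft_model

    model = mobilenet_v2(weights=None)

    # Match the convs inside the Conv2dNormActivation blocks; the depthwise
    # 3x3 convs among them are the ones that break DoRA.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=r"features\.\d+\.conv\.\d+\.0",
        use_dora=True,  # with use_dora=False the same setup works
    )
    peft_model = get_peft_model(model, config)  # raises during DoRA init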

In the last line of this snippet from dora.py (line 56) I get the following error:

    if weight.data.ndim == 4:  # For handling LoRAs applied to Conv2Ds.
        lora_weight = torch.mm(lora_B.flatten(start_dim=1), lora_A.flatten(start_dim=1))
        lora_weight = lora_weight.reshape(weight.shape)

which fails with:

        lora_weight = lora_weight.reshape(weight.shape)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: shape '[32, 1, 3, 3]' is invalid for input of size 9216
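Redoing the shape arithmetic by hand for the first depthwise conv of MobileNetV2 (32 channels, groups=32, 3x3 kernel, so the base weight is [32, 1, 3, 3]) reproduces the error exactly. The rank below is just an example; the element count of 9216 comes out the same for any r:

    import torch

    weight = torch.empty(32, 1, 3, 3)  # depthwise base weight: [out, in // groups, kH, kW]

    r = 8
    lora_A = torch.empty(r, 32, 3, 3)  # weight of nn.Conv2d(32, r, 3) -- built with groups=1
    lora_B = torch.empty(32, r, 1, 1)  # weight of nn.Conv2d(r, 32, 1)

    lora_weight = torch.mm(lora_B.flatten(start_dim=1), lora_A.flatten(start_dim=1))
    print(lora_weight.shape)  # torch.Size([32, 288]) -> 9216 elements
    print(weight.numel())     # 288, the reshape target
    lora_weight.reshape(weight.shape)  # RuntimeError: invalid for input of size 9216

The 9216 is 32 x 288: lora_A is built as if the base conv saw all 32 input channels, while the depthwise weight only has in_channels // groups = 1 channels per filter.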

It seems like lora_A and lora_B are set up incorrectly (maybe) here (layer.py, line 842):

        self.lora_A[adapter_name] = nn.Conv2d(self.in_features, r, kernel_size, stride, padding, bias=False)
        self.lora_B[adapter_name] = nn.Conv2d(r, self.out_features, (1, 1), (1, 1), bias=False)
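Instantiating these by hand for a depthwise layer makes the mismatch visible: lora_A gets the full 32 input channels, while the base layer's weight only has one channel per filter. A minimal check, with the shapes of MobileNetV2's first depthwise block assumed:

    import torch.nn as nn

    base = nn.Conv2d(32, 32, 3, padding=1, groups=32, bias=False)  # depthwise
    lora_A = nn.Conv2d(32, 8, 3, padding=1, bias=False)            # as in layer.py, groups ignored
    lora_B = nn.Conv2d(8, 32, (1, 1), (1, 1), bias=False)

    print(base.weight.shape)    # torch.Size([32, 1, 3, 3])
    print(lora_A.weight.shape)  # torch.Size([8, 32, 3, 3]) -- 32 input channels, not 1
    print(lora_B.weight.shape)  # torch.Size([32, 8, 1, 1])

As far as I can tell, that is also why plain LoRA still works: in the forward pass lora_A just consumes the layer's 32-channel input and nothing ever has to be reshaped into the depthwise weight, whereas DoRA needs the merged weight to compute its weight norm.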

I can't seem to figure out how the groups parameter should enter here and how I can correct it, especially since plain LoRA works fine. My best guess at a groups-aware version of the merge is sketched below, but I'm not confident in it.
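This is only a sketch of the idea, not a tested fix: split lora_A and lora_B into `groups` blocks and multiply blockwise, which at least produces a delta weight with the right shape. It assumes the adapters themselves are created with groups set (so lora_A's weight is [r, in // groups, kH, kW]) and that r is divisible by groups (here both are 32):

    import torch

    groups = 32
    r = 32  # this scheme needs r % groups == 0

    weight = torch.empty(32, 1, 3, 3)            # depthwise base weight
    lora_A = torch.randn(r, 1, 3, 3)             # weight of nn.Conv2d(32, r, 3, groups=32)
    lora_B = torch.randn(32, r // groups, 1, 1)  # weight of nn.Conv2d(r, 32, 1, groups=32)

    # Blockwise (per-group) matmul instead of one big torch.mm:
    a = lora_A.flatten(start_dim=1).reshape(groups, r // groups, -1)  # [32, 1, 9]
    b = lora_B.flatten(start_dim=1).reshape(groups, -1, r // groups)  # [32, 1, 1]
    lora_weight = torch.bmm(b, a).reshape(weight.shape)               # [32, 1, 3, 3]
    print(lora_weight.shape)

Whether this is actually the right low-rank decomposition for a grouped conv, and whether peft would want the adapters constructed with groups like this, is exactly what I'm unsure about.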

I’m thankful for any hints! :slight_smile: