task_type parameter of LoraConfig

I am training a fine-tune of CodeLlama using PEFT, but I'm not sure how to use the task_type parameter of LoraConfig. Should it be CAUSAL_LM, SEQ_2_SEQ_LM, or something else? Does it have any effect?

The goal of my model is to parse an input sentence for independent clauses. For example, given the sentence “the tea was on the stove and was at high temperature”, it would insert a delimiter separating the independent clause from the subordinate clause. My training data is all in a single column, and each row looks like this (where the → and the clause delimiter are custom tokens I add to the tokenizer vocab, and each row ends with the EOS token):

“the tea was on the stove and was at high temperature → the tea was on the stove and was at high temperature ”
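To make the row format concrete, here is a minimal sketch of how such rows could be assembled. The post's actual delimiter and EOS strings did not survive the forum rendering, so `BREAK` and `EOS` below are hypothetical placeholders, as is the `make_row` helper:

```python
# Sketch of building one causal-LM training row: "input SEP delimited-output EOS".
# SEP is the arrow token from the post; BREAK and EOS are hypothetical
# stand-ins for the custom tokens that were stripped by the forum renderer.
SEP = "→"
BREAK = "<BREAK>"  # hypothetical clause-delimiter token
EOS = "</s>"       # hypothetical end-of-sequence token

def make_row(sentence: str, split_word_idx: int) -> str:
    """Insert BREAK before word index split_word_idx and wrap as a training row."""
    words = sentence.split()
    target = " ".join(words[:split_word_idx] + [BREAK] + words[split_word_idx:])
    return f"{sentence} {SEP} {target} {EOS}"

row = make_row("the tea was on the stove and was at high temperature", 6)
# The target half now reads "... the stove <BREAK> and was at high temperature"
```

Any tokens used this way should also be added to the tokenizer vocabulary (and the embedding matrix resized) before training, as the post describes.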


Does task_type matter in LoraConfig, and if so, in what way?


I have the same question. Does task_type matter?


In the LoraConfig source code, task_type is not even there.


The task_type parameter is defined in the superclass PeftConfig, which LoraConfig inherits from. The accepted values are the members of the TaskType enum:

    class TaskType(str, enum.Enum):
        """
        Overview of the supported task types:
        - SEQ_CLS: Text classification.
        - SEQ_2_SEQ_LM: Sequence-to-sequence language modeling.
        - CAUSAL_LM: Causal language modeling.
        - TOKEN_CLS: Token classification.
        - QUESTION_ANS: Question answering.
        - FEATURE_EXTRACTION: Feature extraction. Provides the hidden states which can be used as embeddings or
          features for downstream tasks.
        """

        SEQ_CLS = "SEQ_CLS"
        SEQ_2_SEQ_LM = "SEQ_2_SEQ_LM"
        CAUSAL_LM = "CAUSAL_LM"
        TOKEN_CLS = "TOKEN_CLS"
        QUESTION_ANS = "QUESTION_ANS"
        FEATURE_EXTRACTION = "FEATURE_EXTRACTION"

See here for source: peft/src/peft/utils/peft_types.py at v0.8.2 · huggingface/peft · GitHub
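For the original poster's setup: CodeLlama is a decoder-only model trained with a plain next-token objective, so CAUSAL_LM is the natural choice. A minimal config sketch, assuming peft is installed; the rank, alpha, dropout, and target_modules values below are illustrative placeholders, not recommendations:

```python
from peft import LoraConfig, TaskType

# Minimal sketch of a LoRA config for a decoder-only model such as CodeLlama.
# Hyperparameters here are illustrative only.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
# Then wrap the base model:
# model = get_peft_model(base_model, lora_config)
```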

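As for whether task_type actually matters: yes. When you call get_peft_model, peft uses the config's task_type to choose which PeftModel subclass wraps your model, and that wrapper controls how inputs and labels are prepared and how generation behaves. A simplified sketch of that dispatch (not the real peft source, but the wrapper class names are peft's actual classes):

```python
from typing import Optional

# Simplified sketch of how get_peft_model dispatches on task_type.
# Each task type maps to a different PeftModel wrapper class in peft.
TASK_TYPE_TO_WRAPPER = {
    "SEQ_CLS": "PeftModelForSequenceClassification",
    "SEQ_2_SEQ_LM": "PeftModelForSeq2SeqLM",
    "CAUSAL_LM": "PeftModelForCausalLM",
    "TOKEN_CLS": "PeftModelForTokenClassification",
    "QUESTION_ANS": "PeftModelForQuestionAnswering",
    "FEATURE_EXTRACTION": "PeftModelForFeatureExtraction",
}

def pick_wrapper(task_type: Optional[str]) -> str:
    # With no recognized task_type, peft falls back to the generic PeftModel.
    return TASK_TYPE_TO_WRAPPER.get(task_type, "PeftModel")
```

So for a decoder-only model like CodeLlama, CAUSAL_LM gets you PeftModelForCausalLM, which handles labels and generate() the way a causal LM expects.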