How to use the PEFT approach to do Prompt Tuning on the DollyV2 model

I am using this link to study Prompt Tuning: Parameter-Efficient Fine-Tuning using 🤗 PEFT

It covers 4 options:

  1. LoRA: LoRA: Low-Rank Adaptation of Large Language Models
  2. Prefix Tuning: P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
  3. Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
  4. P-Tuning: GPT Understands, Too
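
From reading the PEFT docs, these four options seem to map onto config classes in the `peft` library. This is just my understanding (the class names below are from the library; the mapping in the comments is my own):

```python
from peft import (
    LoraConfig,          # 1. LoRA
    PrefixTuningConfig,  # 2. Prefix Tuning (P-Tuning v2)
    PromptTuningConfig,  # 3. Prompt Tuning  <-- the one I want
    PromptEncoderConfig, # 4. P-Tuning
)
```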

I am most interested in Option 3, Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning].

Can I use Prompt Tuning to tune the DollyV2 [databricks/dolly-v2-12b · Hugging Face] model?
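
Something like this is what I have in mind, based on the PEFT examples (a minimal sketch only; hyperparameters like `num_virtual_tokens=20` are placeholder values I picked, not recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "databricks/dolly-v2-12b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # keep the 12B weights in half precision
)

# Initialize the soft prompt from my instruction text, as described
# in the prompt-tuning paper / PEFT docs.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text=(
        "Answer the question as truthfully as possible using only the "
        "provided context, and if the answer is not in the context, say Irrelevant."
    ),
    num_virtual_tokens=20,  # placeholder value
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual-token embeddings train
```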

The use-case is:

I have a Context, which contains a lot of paragraphs, and then a Question; the model has to answer the Question based on the Context in a professional manner. Also, can it classify the Question as relevant if the answer is present in the Context and irrelevant if the answer is not in the Context?

Like this:

1 data sample:

```
{
Context: How to Link Credit Card to ICICI Bank Account Step 1: Login to ICICIBank.com using your existing internet banking credentials. Step 2: Go to the 'Service Request' section. Step 3: Visit the 'Customer Service' option. Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID.

Question: How to add card?
Prompt: Answer the question as truthfully as possible using only the provided context, and if the answer is not contained within the context/text, say Irrelevant
Answer: Relevant. To add your card you can follow these steps:

Step 1: Login to ICICIBank.com using your existing internet banking credentials.
Step 2: Go to the 'Service Request' section.
Step 3: Visit the 'Customer Service' option.
Step 4: Select the Link Accounts/ Policy option to link your credit card to the existing user ID.
}
```
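
For training, I would flatten each sample into a single string, something like this (my own formatting; the field names match the example above):

```python
def build_example(sample: dict) -> str:
    # Concatenate the fields in the order the model should see them;
    # the Answer is what prompt tuning should teach the model to produce.
    return (
        f"Context: {sample['Context']}\n"
        f"Question: {sample['Question']}\n"
        f"Prompt: {sample['Prompt']}\n"
        f"Answer: {sample['Answer']}"
    )
```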

The idea is that since **dollyv2** is a huge model, I want to explore `prompt tuning`.
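
Rough math for why prompt tuning looks attractive here: with 20 virtual tokens and dolly-v2-12b's hidden size of 5120 (it is based on pythia-12b), only about 100k parameters would be trained:

```python
num_virtual_tokens = 20       # placeholder value from the sketch above
hidden_size = 5120            # GPT-NeoX / pythia-12b hidden size
trainable = num_virtual_tokens * hidden_size  # 102,400 parameters
total = 12_000_000_000                        # ~12B base model parameters
print(f"trainable fraction: {trainable / total:.1e}")  # ~8.5e-06
```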