Hello,
I am trying to test out AutoTrain using a Llama 2 chat model and an Alpaca dataset, which I have cut down from 52k rows to 1k. I am running on a T4 Small.
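For reference, I trimmed the dataset with something like this (a rough sketch using the datasets library; the repo names are placeholders for my own copies):

```python
from datasets import load_dataset

# Keep only the first 1k rows of the ~52k-row Alpaca dataset.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

# Push the trimmed copy to the Hub so AutoTrain can pick it up.
dataset.push_to_hub("my-username/alpaca-1k")  # hypothetical repo name
```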
I haven’t changed any configuration options; everything is at the defaults:
```json
{
  "block_size": 1024,
  "model_max_length": 2048,
  "use_flash_attention_2": false,
  "disable_gradient_checkpointing": false,
  "logging_steps": -1,
  "evaluation_strategy": "epoch",
  "save_total_limit": 1,
  "save_strategy": "epoch",
  "auto_find_batch_size": false,
  "mixed_precision": "fp16",
  "lr": 0.00003,
  "epochs": 3,
  "batch_size": 2,
  "warmup_ratio": 0.1,
  "gradient_accumulation": 1,
  "optimizer": "adamw_torch",
  "scheduler": "linear",
  "weight_decay": 0,
  "max_grad_norm": 1,
  "seed": 42,
  "quantization": "int4",
  "target_modules": "",
  "merge_adapter": false,
  "peft": true,
  "lora_r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05
}
```
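If it helps, my understanding is that the LoRA settings above translate into roughly this peft configuration under the hood (a sketch, not AutoTrain's exact code; I assume the empty target_modules falls back to a library default):

```python
from peft import LoraConfig

# Approximate LoRA setup implied by the config above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```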
I get the following runtime error:
```
❌ ERROR | 2023-12-16 17:13:26 | autotrain.trainers.common:wrapper:80 - Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
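If I am reading the message right, is_sm80 and is_sm90 refer to the GPU compute capability (Ampere and Hopper, i.e. A100/H100-class cards), while the T4 is a Turing card that reports sm75. A quick way to confirm this on the running hardware (plain PyTorch, nothing AutoTrain-specific):

```python
import torch

# Print the name and compute capability of the current CUDA device.
# A T4 reports (7, 5), i.e. sm75; the check in the error wants sm80 or sm90.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (7, 5) on a T4
```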
I have tried rebuilding, but with no success.