FastLoRAChat: Instruct-tune LLaMA on consumer hardware with ShareGPT data

Announcing FastLoRAChat: training a ChatGPT-style model without an A100.

Releasing model: icybee/fast_lora_chat_v1_sunlight · Hugging Face

The purpose of this project is to produce results similar to the FastChat model, but on much cheaper hardware (especially non-Ampere GPUs).

This repository combines features of alpaca-lora and FastChat:

  1. Like FastChat, it supports multilingual and multi-round chat.
  2. Like alpaca-lora, it supports training and inference on low-end graphics cards (using LoRA).
  3. Everything is open source, including the dataset, training code, model-export code, and more.
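As a rough illustration of why LoRA makes training feasible on low-end cards: instead of updating a full weight matrix, LoRA trains two small low-rank factors, so the number of trainable parameters shrinks dramatically. A minimal NumPy sketch of the idea (the shapes, rank, and scaling value here are illustrative assumptions, not this repository's actual configuration):

```python
import numpy as np

# Frozen pretrained weight: d_out x d_in (e.g. one attention projection).
d_out, d_in, rank = 4096, 4096, 8
W = np.zeros((d_out, d_in))   # stands in for the frozen base weights

# LoRA trains only two small factors: A (rank x d_in) and B (d_out x rank).
A = np.random.randn(rank, d_in) * 0.01
B = np.zeros((d_out, rank))   # B starts at zero, so the update starts at zero
alpha = 16                    # LoRA scaling hyperparameter (illustrative)

def lora_forward(x):
    # Effective weight is W + (alpha / rank) * B @ A, applied without
    # ever materializing the full d_out x d_in update matrix.
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"trainable fraction: {lora_params / full_params:.4%}")
```

With these example shapes, the trainable parameters drop from roughly 16.8M to 65K per matrix, which is why the optimizer state and gradients fit in consumer-GPU memory.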