Fine-tuning, along with other types of adaptation performed on foundation models after pretraining, is an equally important and complex step in model development. Fine-tuned models are deployed more frequently than base models. Here, we link to some useful and widely used resources for adapting foundation models or otherwise fine-tuning them.
Finetuning Repositories for Foundation Model Training
Axolotl
A repository for chat- or instruction-tuning language models, supporting full fine-tuning as well as parameter-efficient and quantized methods such as LoRA, QLoRA, and GPTQ.
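Axolotl itself is driven by YAML configs, but the core idea behind LoRA, one of the adapter methods listed above, can be sketched in a few lines of NumPy. This is an illustrative sketch of the technique, not Axolotl's API; all names below are made up for the example:

```python
import numpy as np

# LoRA: instead of updating a frozen weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 8, 16

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weights
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero-initialized

def lora_forward(x):
    # Base path plus scaled low-rank path; in training,
    # only A and B would receive gradient updates.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y = lora_forward(x)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(y, W @ x)
```

The zero initialization of `B` is why LoRA training starts from the pretrained model's exact behavior; QLoRA applies the same idea with the frozen weights held in quantized form.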
LLaMA-Factory
A framework for efficiently fine-tuning LLMs using cutting-edge algorithms, with a user-friendly web UI.
Levanter
Levanter is a framework for training large language models (LLMs) and other foundation models that strives for legibility, scalability, and reproducibility.