winglian committed on
Commit
2495909
1 Parent(s): 7af8166

hopefully improve the README (#419)


* hopefully improve the README

* exitcode -9 help

* table of contents

* formatting

Files changed (1): README.md (+44 -2)
README.md CHANGED
@@ -1,10 +1,39 @@
 # Axolotl
 
+Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.
+
+<table>
+<tr>
+<td>
+
+## Table of Contents
+- [Introduction](#axolotl)
+- [Supported Features](#axolotl-supports)
+- [Quickstart](#quickstart-)
+- [Installation](#installation)
+- [Docker Installation](#environment)
+- [Conda/Pip venv Installation](#condapip-venv)
+- [LambdaLabs Installation](#lambdalabs)
+- [Dataset](#dataset)
+- [How to Add Custom Prompts](#how-to-add-custom-prompts)
+- [Config](#config)
+- [Train](#train)
+- [Inference](#inference)
+- [Merge LORA to Base](#merge-lora-to-base)
+- [Common Errors](#common-errors-)
+- [Need Help?](#need-help-)
+- [Badge](#badge-)
+- [Community Showcase](#community-showcase)
+- [Contributing](#contributing-)
+
+</td>
+<td>
+
 <div align="center">
 <img src="image/axolotl.png" alt="axolotl" width="160">
 <div>
 <p>
-<b>One repo to finetune them all! </b>
+<b>Axolotl provides a unified repository for fine-tuning <br />a variety of AI models with ease</b>
 </p>
 <p>
 Go ahead and axolotl questions!!
@@ -14,6 +43,10 @@
 </div>
 </div>
 
+</td>
+</tr>
+</table>
+
 ## Axolotl supports
 
 | | fp16/fp32 | lora | qlora | gptq | gptq w/ lora | gptq w/flash attn | flash attn | xformers attn |
@@ -29,6 +62,8 @@
 
 ## Quickstart ⚡
 
+Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.
+
 **Requirements**: Python >=3.9 and Pytorch >=2.0.
 
 ```bash
@@ -130,6 +165,7 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
 
 ### Dataset
 
+Axolotl supports a variety of dataset formats. Below are some of the formats you can use.
 Have dataset(s) in one of the following format (JSONL recommended):
 
 - `alpaca`: instruction; input(optional)
@@ -622,7 +658,7 @@ CUDA_VISIBLE_DEVICES="" python3 scripts/finetune.py ...
 
 ## Common Errors 🧰
 
-> Cuda out of memory
+> If you encounter a 'Cuda out of memory' error, it means your GPU ran out of memory during the training process. Here's how to resolve it:
 
 Please reduce any below
 - `micro_batch_size`
@@ -630,6 +666,10 @@ Please reduce any below
 - `gradient_accumulation_steps`
 - `sequence_len`
 
+> `failed (exitcode: -9)` usually means your system has run out of system memory.
+Similarly, you should consider reducing the same settings as when you run out of VRAM.
+Additionally, look into upgrading your system RAM which should be simpler than GPU upgrades.
+
 > RuntimeError: expected scalar type Float but found Half
 
 Try set `fp16: true`
@@ -658,6 +698,8 @@ Building something cool with Axolotl? Consider adding a badge to your model card
 
 ## Community Showcase
 
+Check out some of the projects and models that have been built using Axolotl! Have a model you'd like to add to our Community Showcase? Open a PR with your model.
+
 Open Access AI Collective
 - [Minotaur 13b](https://huggingface.co/openaccess-ai-collective/minotaur-13b)
 - [Manticore 13b](https://huggingface.co/openaccess-ai-collective/manticore-13b)
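
The memory guidance in the Common Errors hunks above (reduce `micro_batch_size`, `gradient_accumulation_steps`, and `sequence_len`; set `fp16: true` for the Float/Half error) can be sketched as a config fragment. The parameter names come from the README; the values are illustrative assumptions, not recommendations.

```yaml
# Illustrative memory-saving overrides - values are assumptions, tune per GPU.
micro_batch_size: 1             # smaller per-device batch lowers VRAM use
gradient_accumulation_steps: 1  # README suggests reducing this as well
sequence_len: 1024              # shorter sequences shrink activation memory
fp16: true                      # for "expected scalar type Float but found Half"
```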
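
The `alpaca` dataset format mentioned in the Dataset hunk above can be illustrated with a small JSONL sketch. The field names (`instruction`, `input`, `output`) follow the common alpaca convention and the record text is invented for illustration; check axolotl's dataset documentation for the exact schema it expects.

```python
import json

# Hypothetical alpaca-style record: an "instruction", an optional
# "input", and the target "output". Field names follow the common
# alpaca convention; verify against axolotl's dataset docs.
record = {
    "instruction": "Summarize the following text.",
    "input": "Axolotl streamlines fine-tuning of AI models.",  # optional
    "output": "Axolotl makes fine-tuning models easier.",
}

# JSONL means one JSON object per line, so a record must serialize
# without embedded newlines.
line = json.dumps(record)
print(line)
```

Writing many such lines to a file, one record per line, produces the JSONL layout the README recommends.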