winglian committed on
Commit
34c0a86
1 Parent(s): 5e2d8a4

update readme to point to direct link to runpod template, cleanup install instructions (#532)


* update readme to point to direct link to runpod template, cleanup install instructions

* default install flash-attn and auto-gptq now too

* update readme w flash-attn extra

* fix version in setup

Files changed (5)
  1. .github/workflows/tests.yml +2 -2
  2. README.md +4 -16
  3. docker/Dockerfile +2 -2
  4. requirements.txt +1 -1
  5. setup.py +2 -7
.github/workflows/tests.yml CHANGED
```diff
@@ -24,8 +24,8 @@ jobs:
 
       - name: Install dependencies
         run: |
-          pip install -e .
-          pip install -r requirements-tests.txt
+          pip3 install -e .
+          pip3 install -r requirements-tests.txt
 
       - name: Run tests
         run: |
```
README.md CHANGED
````diff
@@ -90,8 +90,7 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
 ```bash
 docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.10-cu118-2.0.1
 ```
-- `winglian/axolotl-runpod:main-py3.10-cu118-2.0.1`: for runpod
-- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.1-gptq`: for gptq
+- `winglian/axolotl-runpod:main-latest`: for runpod or use this [direct link](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
 
 Or run on the current files for development:
 
@@ -104,19 +103,9 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
 
 2. Install pytorch stable https://pytorch.org/get-started/locally/
 
-3. Install python dependencies with ONE of the following:
-    - Recommended, supports QLoRA, NO gptq/int4 support
+3. Install axolotl along with python dependencies
     ```bash
-    pip3 install -e .
-    pip3 install -U git+https://github.com/huggingface/peft.git
-    ```
-    - gptq/int4 support, NO QLoRA
-    ```bash
-    pip3 install -e .[gptq]
-    ```
-    - same as above but not recommended
-    ```bash
-    pip3 install -e .[gptq_triton]
+    pip3 install -e .[flash-attn]
     ```
 
 - LambdaLabs
@@ -151,10 +140,9 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
 git clone https://github.com/OpenAccess-AI-Collective/axolotl
 cd axolotl
 
-pip3 install -e . # change depend on needs
+pip3 install -e .
 pip3 install protobuf==3.20.3
 pip3 install -U --ignore-installed requests Pillow psutil scipy
-pip3 install git+https://github.com/huggingface/peft.git # not for gptq
 ```
 
 5. Set path
````
docker/Dockerfile CHANGED
```diff
@@ -15,9 +15,9 @@ RUN git clone --depth=1 https://github.com/OpenAccess-AI-Collective/axolotl.git
 # If AXOLOTL_EXTRAS is set, append it in brackets
 RUN cd axolotl && \
     if [ "$AXOLOTL_EXTRAS" != "" ] ; then \
-        pip install -e .[flash-attn,gptq,$AXOLOTL_EXTRAS]; \
+        pip install -e .[flash-attn,$AXOLOTL_EXTRAS]; \
     else \
-        pip install -e .[flash-attn,gptq]; \
+        pip install -e .[flash-attn]; \
     fi
 
 # fix so that git fetch/pull from remote works
```
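The Dockerfile's `if`/`else` only chooses which extras string gets passed to `pip install -e`. A minimal sketch of that selection logic, written in Python for clarity (the helper name is hypothetical; the real Dockerfile does this inline in shell):

```python
def extras_spec(axolotl_extras: str) -> str:
    """Build the pip install target the Dockerfile RUN step uses.

    With AXOLOTL_EXTRAS set, its extras are appended after flash-attn;
    otherwise only the flash-attn extra is installed (gptq was dropped).
    """
    if axolotl_extras != "":
        return f".[flash-attn,{axolotl_extras}]"
    return ".[flash-attn]"

print(extras_spec("deepspeed"))  # .[flash-attn,deepspeed]
print(extras_spec(""))           # .[flash-attn]
```

Passing `--build-arg AXOLOTL_EXTRAS=deepspeed` at build time would thus install `.[flash-attn,deepspeed]`.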
requirements.txt CHANGED
```diff
@@ -12,7 +12,7 @@ evaluate
 fire
 PyYAML>=6.0
 datasets
-flash-attn>=2.0.8
+flash-attn>=2.2.1
 sentencepiece
 wandb
 einops
```
setup.py CHANGED
```diff
@@ -7,9 +7,7 @@ def parse_requirements():
     _install_requires = []
     _dependency_links = []
     with open("./requirements.txt", encoding="utf-8") as requirements_file:
-        lines = [
-            r.strip() for r in requirements_file.readlines() if "auto-gptq" not in r
-        ]
+        lines = [r.strip() for r in requirements_file.readlines()]
         for line in lines:
             if line.startswith("--extra-index-url"):
                 # Handle custom index URLs
@@ -33,11 +31,8 @@ setup(
     install_requires=install_requires,
     dependency_links=dependency_links,
     extras_require={
-        "gptq": [
-            "auto-gptq",
-        ],
         "flash-attn": [
-            "flash-attn==2.0.8",
+            "flash-attn>=2.2.1",
         ],
         "extras": [
             "deepspeed",
```
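The setup.py hunk drops the `auto-gptq` filter when reading requirements.txt, so auto-gptq now flows into the default install set. A minimal sketch of the parsing loop under that change (the helper name and the handling of ordinary requirement lines are assumptions; only the `--extra-index-url` branch appears in the diff context):

```python
def parse_requirement_lines(lines):
    """Split requirement lines into install_requires and dependency_links."""
    install_requires = []
    dependency_links = []
    # Every stripped line is kept now -- the `if "auto-gptq" not in r`
    # filter was removed.
    for line in (raw.strip() for raw in lines):
        if line.startswith("--extra-index-url"):
            # Handle custom index URLs (branch shown in the diff)
            dependency_links.append(line.split()[1])
        elif line and not line.startswith("#"):
            # Assumed: remaining non-comment lines are requirements
            install_requires.append(line)
    return install_requires, dependency_links

reqs, links = parse_requirement_lines([
    "--extra-index-url https://example.org/simple",
    "flash-attn>=2.2.1",
    "auto-gptq",  # no longer filtered out
])
```

Here `reqs` would contain both `flash-attn>=2.2.1` and `auto-gptq`, matching the commit note that auto-gptq is installed by default now too.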