anon8231489123 committed
Commit f267949
1 Parent(s): fadc38f

Update README.md

Files changed (1)
README.md +2 -13
README.md CHANGED
@@ -5,12 +5,7 @@ Okay... Two different models now. One generated in the Triton branch, one genera
 Cuda info (use this one):
 Command:
 
-CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca
-
---wbits 4
---true-sequential
---groupsize 128
---save gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
+CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca --wbits 4 --true-sequential --groupsize 128 --save gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
 
 
 Prev. info
@@ -25,10 +20,4 @@ Because of this, it appears to be incompatible with Oobabooga at the moment. Sta
 
 Command:
 
-CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca
-
---wbits 4
---true-sequential
---act-order
---groupsize 128
---save gpt-x-alpaca-13b-native-4bit-128g.pt
+CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca --wbits 4 --true-sequential --act-order --groupsize 128 --save gpt-x-alpaca-13b-native-4bit-128g.pt
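
For orientation, the two invocations from the README, broken back out one flag per line with a short gloss on each. The flag descriptions follow the GPTQ-for-LLaMa llama.py options and are a hedged annotation, not part of the commit itself:

# CUDA build (the one the README says to use):
#   CUDA_VISIBLE_DEVICES=0   run on GPU 0 only
#   --wbits 4                quantize weights to 4 bits
#   --true-sequential        quantize each transformer block in true sequential order
#   --groupsize 128          share quantization parameters per group of 128 weights
#   --save <file>.pt         write the packed quantized checkpoint under this name
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca \
  --wbits 4 --true-sequential --groupsize 128 \
  --save gpt-x-alpaca-13b-native-4bit-128g-cuda.pt

# The earlier command kept under "Prev. info" additionally passes
# --act-order (apply the activation-order GPTQ heuristic), which the
# previous README text appears to associate with the Oobabooga
# incompatibility it mentions:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca \
  --wbits 4 --true-sequential --act-order --groupsize 128 \
  --save gpt-x-alpaca-13b-native-4bit-128g.pt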