kaiokendev committed
Commit e218269
Parents: e42c0c8, 59ae072

Merge branch 'main' of https://huggingface.co/kaiokendev/supercot-lora

Files changed (1):
  1. README.md +8 -9
README.md CHANGED
@@ -7,20 +7,19 @@ SuperCOT is a LoRA I trained with the aim of making LLaMa follow prompts for Lan
 Trained against LLaMa 30B 4-bit for 3 epochs with cutoff length 1024, using a mixture of the following datasets:
 
 [https://huggingface.co/datasets/QingyiSi/Alpaca-CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
-
-Chain of thought QED
-
-Chain of thought Aqua
-
-CodeAlpaca
+- Chain of thought QED
+- Chain of thought Aqua
+- CodeAlpaca
 
 [https://huggingface.co/datasets/neulab/conala](https://huggingface.co/datasets/neulab/conala)
-
-Code snippets
+- Code snippets
 
 [https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
+- Alpaca GPT4
 
-Alpaca GPT4
+### Merged Models
+- [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
+- [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
 
 ### Compatibility
 This LoRA is compatible with any 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins