Text Generation
English
sft
jordiclive committed on
Commit 2e769c2
1 Parent(s): 3ae2899

Update README.md

Files changed (1)
  1. README.md +3 -15
README.md CHANGED
@@ -15,6 +15,8 @@ widget:
   - text: <|prompter|>Write a story about future of AI development</s><|assistant|>
 ---
 
+# LoRA Adapter for LLaMA 7B trained on more datasets than tloen/alpaca-lora-7b
+
 This repo contains a low-rank adapter for **LLaMA-7b** fit on
 - `Nebulous/gpt4all_pruned`
 - `sahil2801/CodeAlpaca-20k`
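The hunk above adds a title framing this repo as a LoRA adapter for **LLaMA-7b**. As a rough, hedged sketch of how such an adapter is usually applied with the PEFT library (the base-model path and adapter id below are placeholders rather than values taken from this card, and the extra token embeddings the card mentions later are not handled here):

```
# Minimal sketch (not from this card): applying a LoRA adapter to LLaMA-7b with PEFT.
# Both model ids below are placeholders.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "path/to/llama-7b-hf"      # placeholder: local or hub LLaMA-7b checkpoint
adapter_id = "path/to/this-lora-adapter"   # placeholder: this repo's adapter weights

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)

# Wrap the base model so the low-rank adapter weights are applied on top of it.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```

The card's own example code (further below) additionally loads several embeddings, which this sketch omits.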
@@ -35,20 +37,6 @@ This version of the weights was trained with the following hyperparameters:
 The model was trained with flash attention and gradient checkpointing.
 
 
----
-license: apache-2.0
-
-
-# Open-Assistant SFT-1 12B Model
-
-
-This is the first iteration English supervised-fine-tuning (SFT) model of
-the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project.
-It is based on a Pythia 12B that was fine-tuned on ~22k human demonstrations
-of assistant conversations collected through the
-[https://open-assistant.io/](https://open-assistant.io/) human feedback web
-app before March 7, 2023.
-
 ## Model Details
 
 - **Developed** as part of the OpenAssistant Project
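The context line above states that the model was trained with flash attention and gradient checkpointing. The card does not show how either was configured; as a hedged illustration only, recent transformers releases expose both switches at load time and in TrainingArguments:

```
# Illustration only; the card does not state that this exact configuration was used.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-7b-hf",                    # placeholder base checkpoint
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # requires the flash-attn package
)

# Recompute activations in the backward pass to trade compute for memory.
model.gradient_checkpointing_enable()

training_args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,  # the same switch at the Trainer level
)
```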
@@ -68,7 +56,7 @@ The input ends with the `<|assistant|>` token to signal that the model should
 start generating the assistant reply.
 
 
-**Example Code** (Note several embeddings need to be loaded along with the LoRA weights):
+##**Example Code** (Note several embeddings need to be loaded along with the LoRA weights):
 
 ```
 from typing import List, NamedTuple
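The card's example code is truncated by the hunk context above; only its first import line is visible. Independently of that example, the prompt format the card describes (a `<|prompter|>...</s>` turn followed by the `<|assistant|>` token) can be sketched as follows, assuming `model` and `tokenizer` were prepared as in the earlier sketch and already carry the extra special-token embeddings the card says must be loaded with the LoRA weights:

```
# Sketch of the stated prompt format; the widget text from the card is reused as the prompt.
prompt = "<|prompter|>Write a story about future of AI development</s><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```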
 