---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.3
---

_Not tested yet, use if you want though!_

### Overview

This is a qlora fine-tuned 13b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2) with a few enhancements:

- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca-style reasoning instructions, this time with the reasoning first, then the answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to stay compatible with previous full fine-tune versions:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```

In other words: the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can contain multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (again with a single space after the colon).

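For clarity, here is a minimal sketch of assembling that prompt string in Python (the helper name and the example instruction are purely illustrative, not part of the repo):

```
# Assemble the vicuna-style prompt described above (helper name is illustrative).
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(instruction: str) -> str:
    # Preamble + " USER: " + instruction + " ASSISTANT:", with single spaces
    # throughout, matching the template shown above.
    return f"{PREAMBLE} USER: {instruction} ASSISTANT:"

print(build_prompt("Write a haiku about a grey parrot."))
```
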
### Usage

To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows multi-line prompts and adds a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch!

Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
  --model-path airoboros-13b-gpt4-1.3 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```
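
If you would rather call the model directly from Python instead of the FastChat CLI, a minimal sketch with Hugging Face transformers might look like the following (the local path and the generation settings are assumptions mirroring the CLI example above, not an official recipe):

```
# Minimal sketch: load the weights with transformers and generate using the
# vicuna-style template described earlier. Path and settings mirror the CLI example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "airoboros-13b-gpt4-1.3"  # local directory with the downloaded weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

preamble = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)
prompt = f"{preamble} USER: Write a limerick about llamas. ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.5, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```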