Text Generation
Transformers
PyTorch
Safetensors
English
llama
conversational
text-generation-inference
Doctor-Shotgun committed on
Commit 1546977
1 Parent(s): d8337a8

Create README.md

Files changed (1)
1. README.md +47 -0
README.md ADDED
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
datasets:
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---

# Norobara-ZLoss-8x7B

This is an instruct tune of [TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k) on several open-source instruct datasets, intended primarily for use as a draft model in speculative decoding.

## Usage
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}

### Input:
{user message}

### Response:
{model response}

### Input:
{user message}

### Response:
{model response}

(etc.)
```
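
As a concrete illustration, here is a minimal sketch of assembling that format in Python. The `build_prompt` helper and the example messages are hypothetical, not part of this repo:

```python
def build_prompt(system_prompt: str, turns: list[tuple[str, str]]) -> str:
    """Assemble the modified multi-turn Alpaca format described above.

    `turns` is a list of (user message, model response) pairs; leave the
    final response empty ("") when prompting the model for a new reply.
    """
    parts = [f"### Instruction:\n{system_prompt}"]
    for user_msg, model_resp in turns:
        parts.append(f"### Input:\n{user_msg}")
        parts.append(f"### Response:\n{model_resp}")
    # Strip the trailing newline of an empty final response so the prompt
    # ends right after "### Response:", ready for generation.
    return "\n\n".join(parts).rstrip() + "\n"

# Example: one completed turn plus a fresh user message awaiting a response.
prompt = build_prompt(
    "You are a helpful assistant.",
    [
        ("What is speculative decoding?", "It is a technique that ..."),
        ("Give me a one-line summary.", ""),
    ],
)
print(prompt)
```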

## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.

## Training Details
This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours.
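
Since the card pitches this model for speculative decoding, here is a minimal sketch of assisted generation with the `transformers` library, where this model serves as the draft for a larger target. Both repo IDs are placeholders assumed for illustration, and assisted generation requires the draft and target to share a tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo IDs: swap in the actual target model and this draft model.
target_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed target
draft_id = "Doctor-Shotgun/Norobara-ZLoss-8x7B"     # assumed draft (this model)

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "### Instruction:\nYou are a helpful assistant.\n\n"
    "### Input:\nExplain speculative decoding in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# `assistant_model` enables transformers' assisted (speculative) generation:
# the draft proposes tokens that the target verifies in a single forward pass.
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(
    tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
)
```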