---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Model Card for OpenBezoar-HH-RLHF-SFT

OpenBezoar-HH-RLHF-SFT is an LLM that has been further instruction fine-tuned from the [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) model on a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT)
- Dataset used for SFT: First 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset (see the loading sketch below)
- Epochs: 1

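For reference, here is a minimal sketch of how such a subset can be selected with the Hugging Face `datasets` library; the exact split string and any preprocessing applied before training are assumptions, not the authors' exact recipe.

```python
from datasets import load_dataset

# Take the first 100K training examples of HH-RLHF.
# The precise subset and preprocessing used for SFT are assumptions here.
subset = load_dataset("Anthropic/hh-rlhf", split="train[:100000]")
print(subset)  # Dataset with 'chosen' and 'rejected' columns, num_rows: 100000
```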
### Model Description

The primary purpose of performing SFT on [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) is to minimize the distribution shift before applying Direct Preference Optimization (DPO) for alignment with human preferences. For more information, please refer to our paper.

### Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]

## Instruction Format

We follow the typical format for instruction-based prompt templates, with a system prompt followed by the user prompt. Both begin with a prefix and end with two newline characters, as described below. It is important to use this template in order to obtain the best responses on instruction-following tasks.

```
### System: {system}

### Instruction: {instruction}

### Response:
```

Notice that **no** end-of-sentence (eos) token is appended.

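As an illustration, the sketch below builds a prompt in this format and generates a response with `transformers`. The repository id and the generation settings are assumptions, not the authors' documented inference setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt exactly as in the template above; no eos token is appended.
prompt = (
    "### System: {system}\n\n"
    "### Instruction: {instruction}\n\n"
    "### Response:"
).format(
    system="You are a helpful assistant.",
    instruction="Explain supervised fine-tuning in one sentence.",
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```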
## Limitations

- The model may not consistently show improved instruction-following ability, and it can respond inappropriately or get stuck in loops.
- This model is not aligned to human preferences, so it may generate harmful or uncensored content.
- Caution is urged against relying on this model for production or adjacent use cases.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]