chansurgeplus committed · Commit 06d2089 · verified · 1 Parent(s): f4863c2

Added the ReadMe

Files changed (1):
  1. README.md +56 -0
README.md ADDED
---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Model Card for OpenBezoar-HH-RLHF-DPO

OpenBezoar-HH-RLHF-DPO is an LLM that has been fine-tuned for alignment with human preferences using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290), on top of the [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model, using a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT)
- Dataset used for SFT: First 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset (see the sketch after this list)
- Alignment Method: [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290)
- Epochs: 1

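As a concrete illustration, such a subset can be selected with the `datasets` library. This is a minimal sketch only; reading "first 100K examples" as the first 100,000 rows of the `train` split is our assumption, not a detail stated on this card:

```python
# Minimal sketch: select the data subset described above.
# Assumption: "first 100K examples" means the first 100,000 rows
# of the "train" split of Anthropic/hh-rlhf.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf", split="train")
subset = hh.select(range(100_000))  # keep the first 100K preference pairs

# Each row holds a "chosen" and a "rejected" dialogue for the same prompt.
print(subset)
print(subset[0]["chosen"][:200])
```
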
### Model Description

OpenBezoar-HH-RLHF-DPO is an LLM built upon the OpenLLaMA 3B v2 architecture. It has been fine-tuned for alignment with human preferences using [DPO](https://arxiv.org/abs/2305.18290), performed on top of the [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model. For more information, please refer to our paper.

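For reference, the DPO objective from the cited paper trains the policy $\pi_\theta$ directly on preference pairs against a frozen reference model $\pi_{\text{ref}}$ (here, the SFT checkpoint), where $y_w$ and $y_l$ are the chosen and rejected responses and $\beta$ controls how far the policy may drift from the reference:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

This replaces the reward-model-plus-RL loop of classic RLHF with a single logistic loss over preference pairs.
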
### Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]

## Instruction Format

We follow the typical format for instruction-based prompt templates: a system prompt followed by the user prompt. Each begins with a prefix and ends with two newline characters, as described below. It is important to use this template in order to obtain the best responses for instruction-following tasks.
```
### System: {system}

### Instruction: {instruction}

### Response:
```

Notice that **no** end-of-sequence (eos) token is appended.

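The sketch below shows one way to apply this template with the `transformers` library. It is illustrative only: the repository id `SurgeGlobal/OpenBezoar-HH-RLHF-DPO` is inferred from the model name and the organization of the linked SFT model, and the generation settings are arbitrary:

```python
# Minimal sketch: build a prompt with the template above and generate.
# Assumption: the model lives at "SurgeGlobal/OpenBezoar-HH-RLHF-DPO".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO"
# OpenLLaMA's docs recommend the slow tokenizer to avoid fast-tokenizer issues.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

def build_prompt(system: str, instruction: str) -> str:
    # Each segment starts with a "### <Name>:" prefix; the system and
    # instruction segments end with two newline characters, and no eos
    # token is appended, matching the template above.
    return (
        f"### System: {system}\n\n"
        f"### Instruction: {instruction}\n\n"
        "### Response:"
    )

prompt = build_prompt(
    system="You are a helpful, honest assistant.",
    instruction="Explain in one sentence what DPO is.",
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the newly generated response.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
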
## Limitations

- The model might not consistently show improved instruction-following ability, and it could respond inappropriately or get stuck in loops.
- Although this model is aligned to human preferences and has been evaluated for performance, there is no guarantee that it will **refrain** from generating harmful content.
- Caution is urged against relying on this model for production or adjacent use cases.

## Citation

If you find our work useful, please cite our paper as follows:

```
[More Information Needed]
```