ynakashima committed
Commit b8ede3c
1 Parent(s): 4afe202

Update README.md

Files changed (1)
  1. README.md +19 -1
README.md CHANGED
@@ -26,7 +26,25 @@ base_model: google/gemma-7b
  - **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
  - **Contact**: For questions and comments about the model, please email `karakuri-rd@karakuri.ai`
 
- ## Use in 🤗 Transformers
+ ## Usage
+
+ KARAKURI LM 7B APM v0.1 is an attribute prediction model that rates model responses on various aspects that make a response desirable.
+
+ Given a conversation with multiple turns between a user and an assistant, the model rates the following attributes (between 0 and 4) for every assistant turn, as illustrated below.
+
+ - helpfulness: Overall helpfulness of the response to the prompt.
+ - correctness: Inclusion of all pertinent facts without errors.
+ - coherence: Consistency and clarity of expression.
+ - complexity: Intellectual depth required to write the response (i.e., whether the response can be written by anyone with basic language competency or requires deep domain expertise).
+ - verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.
+ - quality: Perceived goodness of the response.
+ - toxicity: Undesirable elements such as vulgar, harmful, or potentially biased content in the response.
+ - humor: Sense of humor within the response.
+ - creativity: Willingness to generate a non-conventional response.
+
+ The first five attributes are derived from HelpSteer, while the remaining four are derived from OASST2.
+
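+ For illustration, the ratings for a single assistant turn can be thought of as a mapping from each attribute listed above to an integer score. The sketch below is hypothetical (the variable name and score values are made up; the model's actual output format is produced by the usage code that follows):
+
+ ```python
+ # Hypothetical example only: per-turn attribute scores on the 0-4 scale described above.
+ example_turn_scores = {
+     "helpfulness": 3,
+     "correctness": 3,
+     "coherence": 4,
+     "complexity": 1,
+     "verbosity": 2,
+     "quality": 3,
+     "toxicity": 0,
+     "humor": 0,
+     "creativity": 1,
+ }
+ ```
+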
+ You can run the model using the 🤗 Transformers library:
 
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer