leaderboard-pr-bot committed
Commit 50746c8 (parent: afa5a11)

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +60 -46
README.md CHANGED
@@ -1,46 +1,60 @@
- ---
- language: en
- license: mit
- ---
- # GPT-J 6B - Shinen
- ## Model Description
- GPT-J 6B-Shinen is a finetune of EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on sexual content.
- **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
- ## Training data
- The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
- ```
- [Theme: <theme1>, <theme2>, <theme3>]
- <Story goes here>
- ```
- ### How to use
- You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
- ```py
- >>> from transformers import pipeline
- >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
- >>> generator("She was staring at me", do_sample=True, min_length=50)
- [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
- ```
- ### Limitations and Biases
-
- The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
-
- GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
-
- As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
-
- ### BibTeX entry and citation info
- This model uses the following model as its base:
- ```bibtex
- @misc{gpt-j,
- author = {Wang, Ben and Komatsuzaki, Aran},
- title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
- howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
- year = 2021,
- month = May
- }
- ```
-
- ## Acknowledgements
-
- This project would not have been possible without compute generously provided by Google through the
- [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
+ ---
+ language: en
+ license: mit
+ ---
+ # GPT-J 6B - Shinen
+ ## Model Description
+ GPT-J 6B-Shinen is a finetune of EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on sexual content.
+ **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
+ ## Training data
+ The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
+ ```
+ [Theme: <theme1>, <theme2>, <theme3>]
+ <Story goes here>
+ ```
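Since every training story begins with such a tag line, a prompt can mirror the same header to steer generation toward particular themes. The snippet below is an editorial sketch, not guidance from the original card or this PR, and the theme values are made-up placeholders:

```py
# Editorial sketch: build a prompt that mirrors the training data's theme-tag header.
# The theme names below are hypothetical examples, not tags documented by the card.
themes = ["Adventure", "Romance"]
prompt = f"[Theme: {', '.join(themes)}]\n"  # the story text follows the tag line
prompt += "She was staring at me"
print(prompt)
```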
+ ### How to use
+ You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
+ ```py
+ >>> from transformers import pipeline
+ >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
+ >>> generator("She was staring at me", do_sample=True, min_length=50)
+ [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
+ ```
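For finer control over tokenization and sampling, the checkpoint can also be loaded through the standard `AutoModelForCausalLM` interface. This is a minimal sketch added for illustration, not part of the original card or this PR; the sampling settings and CPU-only setup are assumptions you may want to adjust:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Shinen")
model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Shinen")

inputs = tokenizer("She was staring at me", return_tensors="pt")
# Sampling settings here are illustrative defaults, not recommendations from the card.
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```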
+ ### Limitations and Biases
+
+ The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
+
+ GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
+
+ As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
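As one illustration of the kind of automated pre-screening that could sit alongside human review, the sketch below flags generations containing caller-supplied terms. It is an editorial example, not tooling shipped with the model or added by this PR; the blocklist contents and the decision to filter at all are left to the deployer:

```py
def flag_output(text: str, blocklist: list[str]) -> bool:
    """Return True if the generated text contains any blocklisted term.

    Deliberately simple editorial sketch; real deployments would pair human
    review with more robust moderation tooling.
    """
    lowered = text.lower()
    return any(term.lower() in lowered for term in blocklist)

# Hypothetical usage: the blocked term is a placeholder chosen by the deployer.
print(flag_output("an example generation", ["placeholder-term"]))
```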
+
+ ### BibTeX entry and citation info
+ This model uses the following model as its base:
+ ```bibtex
+ @misc{gpt-j,
+ author = {Wang, Ben and Komatsuzaki, Aran},
+ title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
+ howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+ year = 2021,
+ month = May
+ }
+ ```
+
+ ## Acknowledgements
+
+ This project would not have been possible without compute generously provided by Google through the
+ [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__GPT-J-6B-Shinen).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 34.62 |
+ | ARC (25-shot)       | 39.85 |
+ | HellaSwag (10-shot) | 67.06 |
+ | MMLU (5-shot)       | 27.72 |
+ | TruthfulQA (0-shot) | 36.94 |
+ | Winogrande (5-shot) | 64.09 |
+ | GSM8K (5-shot)      | 1.97  |
+ | DROP (3-shot)       | 4.71  |
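The Avg. row appears to be the arithmetic mean of the seven benchmark scores. A quick check of that reading, added here for clarity and not part of the automated PR:

```py
# Verify that the reported Avg. matches the mean of the individual benchmark scores.
scores = [39.85, 67.06, 27.72, 36.94, 64.09, 1.97, 4.71]
average = sum(scores) / len(scores)
print(round(average, 2))  # 34.62, matching the Avg. row above
```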