mingfengxue committed on
Commit 1a718db
1 Parent(s): b609220

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -6,7 +6,9 @@ language:
 - en
 ---
 
-This is the OccuLLaMA-7B model in [OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models](https://arxiv.org/abs/2310.16517)
+This is the OccuLLaMA-7B model in [OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models](https://arxiv.org/abs/2310.16517).
+
+The dataset is on [OccuQuest](https://huggingface.co/datasets/OFA-Sys/OccuQuest).
 
 Abstract:
 The emergence of large language models (LLMs) has revolutionized natural language processing tasks.
@@ -22,7 +24,7 @@ By fine-tuning LLaMA on a mixture of OccuQuest and Tulu datasets, we introduce P
 Among the different LLaMA variants, the 7B and 13B ProLLaMA models achieve the highest performance on MMLU and GSM8K, with the 7B ProLLaMA model demonstrating an improvement of more than 4 points over the other 7B variants on GSM8K.
 We open release the dataset and models.
 
-Please cite if you use this dataset:
+Please cite if you use this model:
 ```
 @misc{xue2023occuquest,
   title={OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models},