---
license: openrail
widget:
- text: I am totally a human, trust me bro.
  example_title: default
- text: >-
    In Finnish folklore, all places and things, and also human beings, have a
    haltija (a genius, guardian spirit) of their own. One such haltija is called
    etiäinen—an image, doppelgänger, or just an impression that goes ahead of a
    person, doing things the person in question later does. For example, people
    waiting at home might hear the door close or even see a shadow or a
    silhouette, only to realize that no one has yet arrived. Etiäinen can also
    refer to some kind of a feeling that something is going to happen. Sometimes
    it could, for example, warn of a bad year coming. In modern Finnish, the
    term has detached from its shamanistic origins and refers to premonition.
    Unlike clairvoyance, divination, and similar practices, etiäiset (plural)
    are spontaneous and can't be induced. Quite the opposite, they may be
    unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic
    and may concern everyday events, although ones related to e.g. deaths are
    common. As these phenomena are still reported today, they can be considered
    a living tradition, as a way to explain the psychological experience of
    premonition.
  example_title: real wikipedia
- text: >-
    In Finnish folklore, all places and things, animate or inanimate, have a
    spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms,
    but is usually described as a kind, elderly woman with white hair. She is
    the guardian of natural places and often helps people in need. Etiäinen has
    been a part of Finnish culture for centuries and is still widely believed in
    today. Folklorists study etiäinen to understand Finnish traditions and how
    they have changed over time.
  example_title: generated wikipedia
- text: >-
    This paper presents a novel framework for sparsity-certifying graph
    decompositions, which are important tools in various areas of computer
    science, including algorithm design, complexity theory, and optimization.
    Our approach is based on the concept of "cut sparsifiers," which are sparse
    graphs that preserve the cut structure of the original graph up to a certain
    error bound. We show that cut sparsifiers can be efficiently constructed
    using a combination of spectral techniques and random sampling, and we use
    them to develop new algorithms for decomposing graphs into sparse subgraphs.
  example_title: from ChatGPT
- text: >-
    Recent work has demonstrated substantial gains on many NLP tasks and
    benchmarks by pre-training on a large corpus of text followed by fine-tuning
    on a specific task. While typically task-agnostic in architecture, this
    method still requires task-specific fine-tuning datasets of thousands or
    tens of thousands of examples. By contrast, humans can generally perform a
    new language task from only a few examples or from simple instructions -
    something which current NLP systems still largely struggle to do. Here we
    show that scaling up language models greatly improves task-agnostic,
    few-shot performance, sometimes even reaching competitiveness with prior
    state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an
    autoregressive language model with 175 billion parameters, 10x more than any
    previous non-sparse language model, and test its performance in the few-shot
    setting. For all tasks, GPT-3 is applied without any gradient updates or
    fine-tuning, with tasks and few-shot demonstrations specified purely via
    text interaction with the model. GPT-3 achieves strong performance on many
    NLP datasets, including translation, question-answering, and cloze tasks, as
    well as several tasks that require on-the-fly reasoning or domain
    adaptation, such as unscrambling words, using a novel word in a sentence, or
    performing 3-digit arithmetic. At the same time, we also identify some
    datasets where GPT-3's few-shot learning still struggles, as well as some
    datasets where GPT-3 faces methodological issues related to training on
    large web corpora. Finally, we find that GPT-3 can generate samples of news
    articles which human evaluators have difficulty distinguishing from articles
    written by humans. We discuss broader societal impacts of this finding and
    of GPT-3 in general.
  example_title: GPT-3 paper
datasets:
- NicolaiSivesind/human-vs-machine
- gfissore/arxiv-abstracts-2021
language:
- en
pipeline_tag: text-classification
tags:
- mgt-detection
- ai-detection
---

Machine-generated text-detection by fine-tuning of language models
===

This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (not yet published) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*.

It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models were able to achieve an accuracy of 100% on real and *GPT-3*-generated Wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples).

The dataset card for the dataset created for this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine).

**NOTE**: The hosted inference on this site only works for the RoBERTa models, not for the Bloomz models. Outside the hosted widget, the Bloomz models can produce wrong predictions if the attention mask from the tokenizer is not explicitly passed to the model during inference. The [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) API seems to produce the most consistent results.
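
A minimal usage sketch (not taken from the thesis code) of running one of the detectors through the pipeline API, which handles tokenization and the attention mask internally; the exact label names depend on the checkpoint:

```python
# Minimal sketch: classify a text with one of the fine-tuned detectors.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="andreas122001/roberta-academic-detector",  # or any detector from the table below
)

print(detector("I am totally a human, trust me bro."))
# -> [{'label': ..., 'score': ...}] where the label names are defined by the checkpoint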
```


## Fine-tuned detectors

This project includes 12 fine-tuned models, built from four base models (RoBERTa-base and three sizes of Bloomz), each fine-tuned on one of the three dataset variants listed below.

| Base model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b |
|------------|--------------|-------------|------------|-----------|
| Wiki       | [roberta-wiki](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) |
| Academic   | [roberta-academic](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) |
| Mixed      | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) |

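If one of the Bloomz detectors is loaded directly rather than through the pipeline, the attention mask returned by the tokenizer should be passed to the model explicitly, per the note above. A rough sketch of such direct inference (standard `transformers`/`torch` usage; the example text and checkpoint choice are arbitrary):

```python
# Sketch: direct inference with the attention mask passed explicitly,
# as recommended in the note above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "andreas122001/bloomz-560m-wiki-detector"  # any detector from the table above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "In Finnish folklore, all places and things have a spirit of their own.",
    return_tensors="pt",
    truncation=True,
)

with torch.no_grad():
    # **inputs forwards both input_ids and attention_mask to the model.
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # label names are defined by the checkpoint
```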

### Datasets

The models were trained on selections from the [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro) and ChatGPT-Research-Abstracts datasets, and are separated into three types, **wiki**-detectors, **academic**-detectors and **mixed**-detectors, respectively (a sketch of how the published dataset can be loaded follows the list below).

- **Wiki-detectors**:
  - Trained on 30'000 datapoints (10%) of GPT-wiki-intro.
  - Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%.
- **Academic-detectors**:
  - Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%.
- **Mixed-detectors**:
  - Trained on 15'000 datapoints (5%) of GPT-wiki-intro and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts.
  - Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%.

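The dataset created for this project is published as [NicolaiSivesind/human-vs-machine](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine) (linked above). A purely illustrative loading sketch; the configuration name used here is an assumption and should be checked against the dataset card:

```python
# Illustrative sketch only: loading the human-vs-machine dataset referenced above.
# The configuration name "wiki-labeled" is an assumption -- check the dataset card.
from datasets import load_dataset

dataset = load_dataset("NicolaiSivesind/human-vs-machine", "wiki-labeled")

print(dataset)              # available splits and columns
print(dataset["train"][0])  # one labeled example
```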

### Hyperparameters

All models were trained using the same hyperparameters:

```python
{
    "num_train_epochs": 1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "batch_size": 8,
    "adam_epsilon": 1e-08,
    "optim": "adamw_torch",         # the optimizer (AdamW)
    "learning_rate": 5e-05,         # learning rate (LR)
    "lr_scheduler_type": "linear",  # scheduler type for the LR
    "seed": 42,                     # seed for the PyTorch RNG
}
```
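
For reference, a sketch of how these values can be expressed as `transformers.TrainingArguments`; mapping `batch_size` to `per_device_train_batch_size` and the output directory name are assumptions, not taken from the thesis:

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# "batch_size" is assumed to mean per_device_train_batch_size.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="detector-finetune",  # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=5e-05,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    seed=42,
)
```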

### Metrics

Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072. In the tables below, the best value in each column is marked with an asterisk (*).


In-domain performance of wiki-detectors:

| Base model  | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.973    | *1.000    | 0.945  | 0.972    |
| Bloomz-1b7  | 0.972    | *1.000    | 0.945  | 0.972    |
| Bloomz-3b   | *1.000   | *1.000    | *1.000 | *1.000   |
| RoBERTa     | 0.998    | 0.999     | 0.997  | 0.998    |


In-domain performance of academic-detectors:

| Base model  | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.964    | 0.963     | 0.965  | 0.964    |
| Bloomz-1b7  | 0.946    | 0.941     | 0.951  | 0.946    |
| Bloomz-3b   | *0.984   | *0.983    | 0.985  | *0.984   |
| RoBERTa     | 0.982    | 0.968     | *0.997 | 0.982    |


F1-scores of the mixed-detectors on all three datasets (CRA = ChatGPT-Research-Abstracts):

| Base model  | Mixed  | Wiki   | CRA    |
|-------------|--------|--------|--------|
| Bloomz-560m | 0.948  | 0.972  | *0.848 |
| Bloomz-1b7  | 0.929  | 0.964  | 0.816  |
| Bloomz-3b   | 0.988  | 0.996  | 0.772  |
| RoBERTa     | *0.993 | *0.997 | 0.829  |

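For completeness, a small sketch of how metrics of this kind can be computed from a detector's predictions; this is generic scikit-learn usage with toy labels, not the evaluation code from the thesis:

```python
# Generic sketch: accuracy, precision, recall and F1 for binary
# human-vs-machine labels (here 1 = machine-generated). Toy data only.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 1, 0, 1, 0]  # toy ground-truth labels
y_pred = [0, 1, 0, 0, 1, 0]  # toy detector predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"Accuracy={accuracy:.3f} Precision={precision:.3f} Recall={recall:.3f} F1={f1:.3f}")
```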

## Credits

- [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
- [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo Fissore
- [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience
- [RoBERTa](https://huggingface.co/roberta-base), by Liu et al.

## Citation

Please use the following citation:

```
@misc{sivesind_2023,
    author    = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} },
    title     = { Machine-generated text-detection by fine-tuning of language models },
    url       = { https://huggingface.co/andreas122001/roberta-academic-detector },
    year      = 2023,
    publisher = { Hugging Face }
}
```