# Honor Model Card

Freddie Zhang

Last updated: December 2022

Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993), this model card provides accompanying information about the Honor model.

## Model Details

Honor is a binary text classification model built on BertForSequenceClassification. It was created to explore zero-shot classification of text across a wide range of domains.

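As a concrete illustration of the architecture, the snippet below is a minimal sketch of how a BertForSequenceClassification checkpoint such as this one can be loaded and queried for class probabilities. The label names come from the checkpoint's `id2label` mapping, which is not documented here, so treat the printed labels as placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and the fine-tuned binary classifier from the Hub.
tokenizer = AutoTokenizer.from_pretrained("freddiezhang/honor")
model = AutoModelForSequenceClassification.from_pretrained("freddiezhang/honor")

text = "An example passage whose origin we want to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# A binary classification head returns two logits; softmax converts them
# into probabilities over the two classes.
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()

# What each index means depends on the checkpoint's id2label mapping.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```
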
### Model date
December 2022

### Model type
Transformer-based binary text classifier

### Model version
v.1

### Paper & samples
A research paper and samples are currently unavailable for this project.

## Model Use

The intended direct users of Honor are untrained readers who access its capabilities via the Hugging Face Hub. Through the Hub, the model can be used by those who may not have AI development experience to efficiently and accurately detect machine-generated text, as in the sketch below.

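For readers without AI development experience, the high-level `pipeline` API is the simplest entry point. This is a sketch, assuming the checkpoint published at `freddiezhang/honor`; the exact label strings in the output depend on the model's configuration.

```python
from transformers import pipeline

# One-line setup: the model is downloaded from the Hub on first use.
detector = pipeline("text-classification", model="freddiezhang/honor")

# Returns one {"label": ..., "score": ...} dict per input text.
results = detector([
    "A passage suspected of being machine-generated.",
    "A passage written by a person.",
])
print(results)
```
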
## Data, Performance, and Limitations

### Data

Honor's training dataset is composed of text posted to the internet, or of text uploaded to the internet (e.g., books). The internet data it has been trained on and evaluated against to date includes [*The Pile*](https://pile.eleuther.ai/), a dataset compiled by Gao et al. that combines 22 smaller, high-quality datasets.

Given its training data, Honor's performance is more representative of internet-connected populations than of those steeped in verbal, non-digital culture. The internet-connected population skews toward developed countries and wealthy, younger, and male views, and is mostly U.S.-centric; wealthier nations and populations in developed countries show higher internet penetration.<sup>[[1]](#fn1)</sup> The digital gender divide also means fewer women are represented online worldwide.<sup>[[2]](#fn2)</sup> Additionally, because different parts of the world have different levels of internet penetration and access, the dataset underrepresents less-connected communities.<sup>[[3]](#fn3)</sup>

### Performance

Honor's performance has been evaluated and validated on detection across the 22 categories of text represented in its training data; a hypothetical per-domain scoring sketch follows below.

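The per-domain evaluation protocol has not been published, so the sketch below only illustrates one plausible way to score such a detector domain by domain. The example records, the `POSITIVE` label string, and the metric choices are all assumptions, not the authors' actual procedure.

```python
from collections import defaultdict
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

detector = pipeline("text-classification", model="freddiezhang/honor")

# Hypothetical evaluation records: (domain, text, gold_label), 1 = machine-generated.
eval_set = [
    ("academic", "An abstract produced by a language model ...", 1),
    ("news", "A wire report written by a journalist ...", 0),
    # ... one record per labeled example, ideally covering all 22 domains
]

per_domain = defaultdict(lambda: ([], []))
for domain, text, gold in eval_set:
    # "POSITIVE" is a placeholder; substitute the checkpoint's machine-text label.
    pred = 1 if detector(text)[0]["label"] == "POSITIVE" else 0
    per_domain[domain][0].append(gold)
    per_domain[domain][1].append(pred)

for domain, (y_true, y_pred) in sorted(per_domain.items()):
    acc = accuracy_score(y_true, y_pred)
    f1 = f1_score(y_true, y_pred, zero_division=0)
    print(f"{domain}: accuracy={acc:.3f}, f1={f1:.3f}")
```
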
## Limitations

Honor has a number of limitations. Some of these limitations are inherent to any model with machine learning (ML) components that can have high-bandwidth, open-ended interactions with people (e.g., via natural language): ML components have limited robustness; ML components are biased; open-ended systems have large surface areas for risk; and safety is a moving target for ML systems. Like any model with ML components, it can only be expected to provide reasonable detection when given inputs similar to the ones present in its training data.

Lack of world grounding: Honor, like other large pretrained language models, is not grounded in other modalities of experience, such as video, real-world physical interaction, or human feedback, and thus lacks a large amount of context about the world.<sup>[[4]](#fn4)</sup>

Predominantly English: Honor is trained largely on text in the English language and is best suited to classifying such text. Honor will by default perform worse on inputs that differ from the data distribution it was trained on, including non-English languages as well as specific dialects of English that are not as well represented in its training data.

Interpretability & predictability: The capacity to interpret or predict how Honor will behave is very limited, a limitation common to most deep learning systems.

Creation date of training corpora: The December 2022 version of Honor was trained on a dataset created in December 2020, so it has not been trained on any data more recent than that.

Biases: Honor, like all models trained on internet corpora, is biased. The model has the propensity to retain and magnify biases it inherited from any part of its training, from the datasets the authors selected to the training techniques they chose.<sup>[[5]](#fn5)</sup>

## Where to send questions or comments about the model

Please use the [Community](https://huggingface.co/freddiezhang/honor/discussions) section of the model's Hugging Face page.

-------

<a name="fn1">[1]</a> International Telecommunication Union (ITU), World Telecommunication/ICT Indicators Database. "Individuals using the Internet (% of population)." https://data.worldbank.org/indicator/IT.NET.USER.ZS?end=2018&start=2002.

<a name="fn2">[2]</a> Organisation for Economic Co-operation and Development. "Bridging the Digital Gender Divide." http://www.oecd.org/internet/bridging-the-digital-gender-divide.pdf.

<a name="fn3">[3]</a> Telecommunication Development Bureau. "Manual for Measuring ICT Access and Use by Households and Individuals." https://www.itu.int/pub/D-IND-ITCMEAS-2014.

<a name="fn4">[4]</a> Bisk, Yonatan, et al. "Experience Grounds Language." arXiv preprint [arXiv:2004.10151](https://arxiv.org/abs/2004.10151), 2020.

<a name="fn5">[5]</a> Crawford, Kate. [The Trouble with Bias](https://www.youtube.com/watch?v=fMym_BKWQzk). NeurIPS 2017 Keynote, 2017.