Text Classification
Transformers
PyTorch
bert
Inference Endpoints
rttl committed on
Commit 53a298c
1 Parent(s): d67b3f5

Update README.md

Files changed (1)
  1. README.md +1 -73
README.md CHANGED
@@ -16,11 +16,7 @@ Foody-bert results from the second round of fine-tuning on the text classificati
  Continuation of fine-tuning of [senty-bert](https://huggingface.co/rttl-ai/senty-bert), which is fine-tuned on Yelp reviews and the Stanford Sentiment Treebank with ternary labels (neutral, positive, negative).
 
 
-
- - analysis. Ms., Stanford University and Facebook AI Research.
- - **Shared by [Optional]:** Hugging Face
- - **Model type:** Language model
- - **Language(s) (NLP):** More information needed
+ - **Language(s) (NLP):** English
  - **License:** bigscience-bloom-rail-1.0
  - **Related Models:** More information needed
  - **Parent Model:** More information needed
@@ -40,30 +36,14 @@ Continuation of fine-tuning of [senty-bert](https://huggingface.co/rttl-ai/senty
  - We urge caution about using these models for sentiment prediction in other domains. For example, sentiment expression in medical contexts and professional evaluations can be different from sentiment expression in product/service reviews.
 
 
- ## Downstream Use [Optional]
-
- More information needed
-
-
-
-
- ## Out-of-Scope Use
-
- More information needed
-
-
-
-
  # Bias, Risks, and Limitations
 
-
  Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
 
 
  ## Recommendations
 
 
-
  - We recommend careful study of how these models behave, even when they are used in the domain on which they were trained and assessed. The models are deep learning models about which it is challenging to gain full analytic command; two examples that appear synonymous to human readers can receive very different predictions from these models, in ways that are hard to anticipate or explain, and so it is crucial to do continual qualitative and quantitative evaluation as part of any deployment.
 
  - We advise even more caution when using these models in new domains, as sentiment expression can shift in subtle (and not-so-subtle) ways across different domains, and this could lead specific phenomena to be mis-handled in ways that could have dramatic and pernicious consequences.
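One concrete way to act on the two recommendations above is a paraphrase-stability probe. Below is a minimal sketch, assuming the fine-tuned checkpoint is published on the Hub as `rttl-ai/foody-bert` (a repo id inferred from the card's naming, not confirmed in this diff) and using made-up review sentences; it compares predictions on pairs a human reader would call synonymous:

```python
from transformers import pipeline

# Minimal sketch of the paraphrase-stability check recommended above.
# Assumption: the checkpoint id "rttl-ai/foody-bert" is inferred, not confirmed.
classifier = pipeline("text-classification", model="rttl-ai/foody-bert")

# Pairs a human reader would call synonymous; the card warns these can
# nonetheless receive very different predictions.
paraphrase_pairs = [
    ("The food was not bad at all.", "The food was actually pretty good."),
    ("Service could have been faster.", "The service was a bit slow."),
]

for a, b in paraphrase_pairs:
    pred_a, pred_b = classifier(a)[0], classifier(b)[0]
    marker = "" if pred_a["label"] == pred_b["label"] else "  <-- disagreement"
    print(f"{a!r}: {pred_a['label']} | {b!r}: {pred_b['label']}{marker}")
```

Running such probes regularly, alongside quantitative metrics, is one way to implement the continual evaluation the card calls for.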
@@ -78,44 +58,6 @@ The model was trained on product/service reviews from Yelp, reviews from Amazon,
  Extensive details on these datasets are included in the [associated paper](https://arxiv.org/abs/2012.15349).
 
 
- ## Training Procedure
-
-
- ### Preprocessing
-
- More information needed
-
- ### Speeds, Sizes, Times
-
-
- More information needed
-
- # Evaluation
-
-
- ## Testing Data, Factors & Metrics
-
- ### Testing Data
-
- More information needed
-
-
- ### Factors
-
- More information needed
-
- ### Metrics
-
- More information needed
-
- ## Results
-
- More information needed
-
- # Model Examination
-
- More information needed
-
  # Environmental Impact
 
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
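Emissions can also be measured while training runs, rather than estimated afterwards. A minimal sketch using the open-source codecarbon package (a different tool from the calculator cited above; the project name and `train()` stub are hypothetical):

```python
from codecarbon import EmissionsTracker

def train():
    """Stand-in for the actual fine-tuning loop (hypothetical)."""
    pass

# Track energy use and estimated CO2-eq while the training loop runs.
tracker = EmissionsTracker(project_name="foody-bert-finetune")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq

print(f"Estimated emissions: {emissions_kg:.3f} kg CO2-eq")
```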
@@ -160,20 +102,6 @@ More information needed
  year={2020}}
  ```
 
- # Glossary [optional]
- More information needed
-
- # More Information [optional]
-
- More information needed
-
- # Model Card Authors [optional]
-
- [Christopher Potts](http://web.stanford.edu/~cgpotts/), [Zhengxuan Wu](http://zen-wu.social), Atticus Geiger, and [Douwe Kiela](https://douwekiela.github.io). 2020. DynaSent: A dynamic benchmark for sentiment analysis. Ms., Stanford University and Facebook AI Research, in collaboration with the Hugging Face team.
-
- # Model Card Contact
-
- More information needed
 
  # How to Get Started with the Model
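A minimal usage sketch, assuming the checkpoint is published on the Hub as `rttl-ai/foody-bert` (a repo id inferred from the card's naming, not shown in this diff):

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned BERT classifier from the Hub.
# Assumption: the repo id "rttl-ai/foody-bert" mirrors the card's naming.
classifier = pipeline("text-classification", model="rttl-ai/foody-bert")

# The card describes ternary sentiment labels: neutral, positive, negative.
print(classifier("The ramen was fantastic, but the service was slow."))
# e.g. [{'label': 'positive', 'score': 0.97}] -- exact label strings depend on the repo's config
```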
179
 
 