chrlukas committed
Commit: af683f4
Parent: dc34c64

Update README.md

Files changed (1):
  1. README.md +18 -15
README.md CHANGED
@@ -1,13 +1,14 @@
 ---
 library_name: transformers
-tags: []
+language:
+- en
 ---
 
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-
+This model is intended to predict emotions (valence, arousal) in written stories. For all details see [the paper (TODO)](#) and [the accompanying github repo (TODO)](#).
 
 ## Model Details
 
@@ -15,23 +16,27 @@ tags: []
 
 <!-- Provide a longer summary of what this model is. -->
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+As described in [the paper (TODO)](#), this model is finetuned from [DeBERTaV3-large](https://huggingface.co/microsoft/deberta-v3-large) and predicts sentence-wise valence/arousal values between 0 and 1.
+
+This particular checkpoint was trained with a window size of $4$.
+
+All available checkpoints and their performance:
+
+
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+Technically, this model is predicting token-wise values. Sentences are concatenated via the ``<s>`` token, where the valence/arousal predictions for an ``<s>`` token
+are meant to be the predictions for the sentence preceding it. All other tokens' predictions should be ignored. For reference, see the figure in the paper:
 
-### Model Sources [optional]
+
+The [accompanying repo](TODO) provides a convenient script to use the model for prediction.
+
+### Model Sources
 
 <!-- Provide the basic links for the model. -->
 
 - **Repository:** [More Information Needed]
 - **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
+-
 
 ## Uses
 
@@ -196,6 +201,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 ## Model Card Contact
 
-[More Information Needed]
-
-
+[More Information Needed]
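
For orientation, a minimal sketch of the ``<s>``-based prediction scheme the updated card describes. This is not the script from the accompanying repo: the repository id is a placeholder, the use of a token-classification head with two outputs (valence, arousal) is an assumption, and ``<s>`` is assumed to be registered as a special token in the checkpoint's tokenizer.

```python
# Illustrative sketch only; the repo id, head type, and <s> handling are assumptions.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "org/story-emotion-deberta-v3-large-w4"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
model.eval()

sentences = [
    "The night was quiet.",
    "Then a scream tore through the dark.",
]

# Join sentences with the <s> separator; each <s> is meant to carry the
# (valence, arousal) prediction for the sentence preceding it.
text = "<s>".join(sentences) + "<s>"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    preds = model(**inputs).logits.squeeze(0)  # shape: (seq_len, 2)

# Only the predictions at <s> positions are meaningful; ignore all other tokens.
sep_id = tokenizer.convert_tokens_to_ids("<s>")
sep_positions = (inputs["input_ids"].squeeze(0) == sep_id).nonzero(as_tuple=True)[0]

for sentence, pos in zip(sentences, sep_positions):
    valence, arousal = preds[pos].tolist()
    print(f"{sentence!r}: valence={valence:.3f}, arousal={arousal:.3f}")
```

For actual use, prefer the prediction script in the accompanying repo once its link is filled in.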