dennlinger committed
Commit baafa9c
1 Parent(s): af9ae97

Extended explanation in README.

Added information on training specifics, as well as a note that this is topically specific (legal texts). Further clarified the prediction targets.

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -8,8 +8,14 @@ tokenizer = AutoTokenizer.from_pretrained('dennlinger/roberta-cls-consec')
 model = AutoModel.from_pretrained('dennlinger/roberta-cls-consec')
 ```
 
+# Input Format
+The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.
+
 # Training objective
-The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "togetherness" of two models.
+The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
+If you are experimenting via the Huggingface Model API, the following are interpretations of the `LABEL`s:
+* `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic.
+* `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic.
 
 # Performance
 The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.
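As a rough illustration of the input format and label semantics described in the added README sections, the sketch below joins two segments with `[SEP]` and maps the pipeline's output labels to their meaning. It is not part of the commit; the helper `join_segments` and the `LABEL_MEANING` table are hypothetical names introduced here for clarity.

```python
def join_segments(segment_a: str, segment_b: str) -> str:
    """Join two paragraphs with the [SEP] token, as the model expects."""
    return f"{segment_a} [SEP] {segment_b}"

# Interpretation of the Huggingface pipeline's output labels, per the README:
LABEL_MEANING = {
    "LABEL_0": "different topical sections",
    "LABEL_1": "same topical section",
}

text = join_segments("First paragraph of a section.", "A follow-up paragraph.")
# With the `transformers` library installed and network access, the model
# could then be queried roughly like this (not run here, since it downloads
# the model weights):
#   from transformers import pipeline
#   classifier = pipeline("text-classification", model="dennlinger/roberta-cls-consec")
#   label = classifier(text)[0]["label"]
#   print(LABEL_MEANING[label])
```

Consecutive paragraph pairs of a document could be fed through this in sequence; each `LABEL_0` prediction would then mark a candidate section boundary.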