cbdb committed on
Commit b6a6172 (1 parent: 14e7c81)

update authors and license

Files changed (1): README.md (+19, -2)
README.md CHANGED

@@ -3,6 +3,7 @@ language:
   - zh
 tags:
   - SequenceClassification
+  - Lepton
   - 古文
   - 文言文
   - ancient
@@ -15,7 +16,7 @@ license: cc-by-nc-sa-4.0
 # <font color="IndianRed"> BertForSequenceClassification model (Classical Chinese) </font>
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jVu2LrNwkLolItPALKGNjeT6iCfzF8Ic?usp=sharing/)
 
-This BertForSequenceClassification Classical Chinese model is intended to predict whether a Classical Chinese sentence is <font color="IndianRed"> a letter title (书信标题) </font> or not. This model is first inherited from the BERT base Chinese model (MLM), and finetuned using a large corpus of Classical Chinese language (3GB textual dataset), then concatenated with the BertForSequenceClassification architecture to perform a binary classification task.
+Our model LEPTON (Letter Prediction) is a BertForSequenceClassification Classical Chinese model intended to predict whether a Classical Chinese sentence is <font color="IndianRed"> a letter title (书信标题) </font> or not. The model was initialized from the BERT base Chinese (MLM) model, fine-tuned on a large corpus of Classical Chinese (a 3GB textual dataset), and then combined with the BertForSequenceClassification architecture to perform a binary classification task.
 * <font color="Salmon"> Labels: 0 = non-letter, 1 = letter </font>
 
 ## <font color="IndianRed"> Model description </font>
@@ -106,4 +107,20 @@ print(f'The predicted class is: {pred_class}')
 ```
 <font color="IndianRed"> Output: </font> The predicted class is: letter
 
-Author: Queenie Luo (queenieluo[at]g.harvard.edu)
+### <font color="IndianRed">Authors </font>
+Queenie Luo (queenieluo[at]g.harvard.edu)
+<br>
+Katherine Enright
+<br>
+Hongsu Wang
+<br>
+Peter Bol
+<br>
+CBDB Group
+
+### <font color="IndianRed">License </font>
+Copyright (c) 2023 CBDB
+
+Except where otherwise noted, content on this repository is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
+To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or
+send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
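
For context, the snippet below is a minimal usage sketch of the binary classifier described in the README hunk above, written against the Hugging Face `transformers` API. The model id `cbdb/lepton` and the example sentence are placeholder assumptions, not taken from this commit; only the label mapping (0 = non-letter, 1 = letter) and the output format come from the README.

```python
# Minimal usage sketch. Assumption: "cbdb/lepton" is a hypothetical placeholder id;
# substitute the actual Hugging Face repository id of the LEPTON checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "cbdb/lepton"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# Example Classical Chinese sentence (illustrative only): a candidate letter title.
sentence = "與友人書"

# Tokenize and run a forward pass without gradient tracking.
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Per the README, label 0 = non-letter and label 1 = letter.
pred_class = "letter" if logits.argmax(dim=-1).item() == 1 else "non-letter"
print(f"The predicted class is: {pred_class}")
```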