ikuyamada committed
Commit a1e1514
1 parent: 29167de

Update README.md

Files changed (1):
  1. README.md (+10 −13)
README.md CHANGED
@@ -12,22 +12,17 @@ license: apache-2.0
 
 ## luke-japanese
 
-**luke-japanese** is the Japanese version of **LUKE** (**L**anguage
-**U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained
-_knowledge-enhanced_ contextualized representation of words and entities based
-on transformer. LUKE treats words and entities in a given text as independent
-tokens, and outputs contextualized representations of them. Please refer to our
-[GitHub repository](https://github.com/studio-ousia/luke) for more details and
-updates.
-
-**luke-japanese** is the Japanese version of **LUKE**, a knowledge-enhanced pre-trained model of words and entities. LUKE treats words and entities as independent tokens and outputs contextualized representations of them. For details, please refer to the [GitHub repository](https://github.com/studio-ousia/luke).
+**luke-japanese** is the Japanese version of **LUKE** (**L**anguage **U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained _knowledge-enhanced_ contextualized representation of words and entities. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Please refer to our [GitHub repository](https://github.com/studio-ousia/luke) for more details and updates.
+
+This model contains Wikipedia entity embeddings which are not used in general NLP tasks. Please use the [lite version](https://huggingface.co/studio-ousia/luke-japanese-base-lite/) for tasks that do not use Wikipedia entities as inputs.
+
+**luke-japanese** is the Japanese version of **LUKE**, a knowledge-enhanced pre-trained Transformer model of words and entities. LUKE treats words and entities as independent tokens and outputs contextualized representations of them. For details, please refer to the [GitHub repository](https://github.com/studio-ousia/luke).
+
+This model contains Wikipedia entity embeddings that are not used in ordinary NLP tasks. For tasks that take only words as input, please use the [lite version](https://huggingface.co/studio-ousia/luke-japanese-base-lite/).
 
 ### Experimental results on JGLUE
 
-The performance of luke-japanese evaluated on the dev set of
+The experimental results evaluated on the dev set of
 [JGLUE](https://github.com/yahoojapan/JGLUE) is shown as follows:
 
 | Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
@@ -40,6 +35,8 @@ The performance of luke-japanese evaluated on the dev set of
 | Waseda RoBERTa base | 0.962 | 0.901/0.865 | 0.895 | 0.840 |
 | XLM RoBERTa base | 0.961 | 0.870/0.825 | 0.893 | 0.687 |
 
+The baseline scores are obtained from [here](https://github.com/yahoojapan/JGLUE/tree/9f650417195ec54a080411f44e2395012979d42e#baseline-scores).
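As context for the model card being edited above, loading the checkpoint can be sketched with the Hugging Face `transformers` library. This is a minimal sketch, not part of the README itself: the model id `studio-ousia/luke-japanese-base` is an assumption inferred from the lite-version URL in the text, and the generic `AutoTokenizer`/`AutoModel` entry points are used rather than any class named in this diff.

```python
# Hedged usage sketch; MODEL_ID is an assumption inferred from the
# lite-version URL in the README, not stated in this diff.
MODEL_ID = "studio-ousia/luke-japanese-base"

def load(model_id: str = MODEL_ID):
    """Return (tokenizer, model); downloads the weights on first call."""
    # Imported lazily so the sketch can be read/checked without transformers installed.
    from transformers import AutoModel, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load()
    # Encode a Japanese sentence and inspect the contextualized representations.
    inputs = tokenizer("LUKEは単語とエンティティの知識拡張型モデルです。", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)
```

Tasks that feed Wikipedia entities as inputs need this full checkpoint; per the note above, word-only tasks can use the lite version instead.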
+
 ### Citation
 
 ```latex