---
license: mit
---
# This model is a fine-tuned version of luke-japanese-base-lite

This model was fine-tuned from studio-ousia/luke-japanese-base-lite and classifies Japanese text as positive or negative. The training data consists of sentences from Natsume Soseki's works (for example Kokoro, Bocchan, and Sanshiro), labeled as positive or negative using a Japanese polarity dictionary.
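The exact labeling procedure is not described in this card, but dictionary-based polarity labeling usually amounts to summing per-word scores. The following is a minimal illustrative sketch only: the tiny polarity dictionary, the substring matching, and the >= 0 threshold are all assumptions, not the actual pipeline.

```python
# Illustrative sketch of dictionary-based polarity labeling (assumptions:
# a small word -> score mapping stands in for the Japanese polarity
# dictionary, and sentences are labeled by a sum-then-threshold rule).
polarity = {"嬉しい": 1.0, "美しい": 0.8, "悲しい": -1.0, "苦しい": -0.9}

def label_sentence(sentence: str) -> str:
    # Naive substring matching; a real pipeline would tokenize with a
    # morphological analyzer such as MeCab before dictionary lookup.
    score = sum(s for word, s in polarity.items() if word in sentence)
    return "positive" if score >= 0 else "negative"

print(label_sentence("今日はとても嬉しい。"))  # -> positive
```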

# What is LUKE? [1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
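As a concrete illustration of the word-plus-entity input format, here is a short sketch in the style of the transformers documentation. It uses the English studio-ousia/luke-base checkpoint purely for illustration (an assumption; this card's model is a Japanese sequence-classification variant).

```python
# Sketch: LUKE returns separate contextualized representations for word
# tokens and entity tokens (illustrated with the English luke-base checkpoint).
from transformers import LukeTokenizer, LukeModel

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character span of the entity mention "Beyoncé"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)         # word token representations
print(outputs.entity_last_hidden_state.shape)  # entity representations
```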

LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing).

# How to use
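The card does not include a usage snippet, so the following is a minimal inference sketch. Assumptions: MODEL_ID is a placeholder that must be replaced with this model's actual Hub repository id, and the label mapping (index 1 = positive) is a guess that should be checked against the model's config.

```python
# Minimal inference sketch (assumptions: MODEL_ID is a placeholder for this
# model's Hub repository id, and label index 1 = positive is unverified).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "<this-model's-hub-id>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

text = "私はこの本が好きだ。"  # "I like this book."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")
```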

# Citation
[1]

```bibtex
@inproceedings{yamada2020luke,
  title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
  author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
  booktitle={EMNLP},
  year={2020}
}
```