{ "paper_id": "D19-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:11:15.079227Z" }, "title": "Low-Resource Name Tagging Learned with Weakly Labeled Data", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "caoyixin2011@gmail.com" }, { "first": "Zikun", "middle": [], "last": "Hu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "zikunhu@u.nus.edu" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois Urbana-Champaign", "location": { "country": "U.S.A" } }, "email": "hengji@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Name tagging in low-resource languages or domains suffers from inadequate training data. Existing work heavily relies on additional information, while leaving those noisy annotations unexplored that extensively exist on the web. In this paper, we propose a novel neural model for name tagging solely based on weakly labeled (WL) data, so that it can be applied in any low-resource settings. To take the best advantage of all WL sentences, we split them into high-quality and noisy portions for two modules, respectively: (1) a classification module focusing on the large portion of noisy data can efficiently and robustly pretrain the tag classifier by capturing textual context semantics; and (2) a costly sequence labeling module focusing on high-quality data utilizes Partial-CRFs with nonentity sampling to achieve global optimum. Two modules are combined via shared parameters. Extensive experiments involving five low-resource languages and fine-grained food domain demonstrate our superior performance (6% and 7.8% F1 gains on average) as well as efficiency 1 .", "pdf_parse": { "paper_id": "D19-1025", "_pdf_hash": "", "abstract": [ { "text": "Name tagging in low-resource languages or domains suffers from inadequate training data. Existing work heavily relies on additional information, while leaving those noisy annotations unexplored that extensively exist on the web. In this paper, we propose a novel neural model for name tagging solely based on weakly labeled (WL) data, so that it can be applied in any low-resource settings. To take the best advantage of all WL sentences, we split them into high-quality and noisy portions for two modules, respectively: (1) a classification module focusing on the large portion of noisy data can efficiently and robustly pretrain the tag classifier by capturing textual context semantics; and (2) a costly sequence labeling module focusing on high-quality data utilizes Partial-CRFs with nonentity sampling to achieve global optimum. Two modules are combined via shared parameters. 
Extensive experiments involving five low-resource languages and fine-grained food domain demonstrate our superior performance (6% and 7.8% F1 gains on average) as well as efficiency 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Name tagging 2 is the task of identifying the boundaries of entity mentions in texts and classifying them into pre-defined entity types (e.g., person). It plays a fundamental role by providing the essential inputs for many IE tasks, such as Entity Linking (Cao et al., 2018a) and Relation Extraction (Lin et al., 2017).", "cite_spans": [ { "start": 257, "end": 276, "text": "(Cao et al., 2018a)", "ref_id": "BIBREF1" }, { "start": 301, "end": 319, "text": "(Lin et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many recent methods utilize a neural network (NN) with Conditional Random Fields (CRFs) (Lafferty et al., 2001) by treating name tagging as a sequence labeling problem (Lample et al., 2016), which has become a basic architecture due to its superior performance. Nevertheless, NN-CRFs require exhaustive human effort for training annotations, and may not perform well in low-resource settings (Ni et al., 2017). Many approaches thus focus on transferring cross-domain, cross-task and cross-lingual knowledge into name tagging (Yang et al., 2017; Peng and Dredze, 2016; Mayhew et al., 2017; Pan et al., 2017; Lin et al., 2018; Xie et al., 2018). However, they are usually limited by the extra knowledge resources, which are effective only in specific languages or domains. Actually, in many low-resource settings, there exist extensive noisy annotations on the web that are yet to be explored (Ni et al., 2017). In this paper, we propose a novel model for name tagging that maximizes the potential of weakly labeled (WL) data. As shown in Figure 1, s_2 is weakly labeled, since only Formula Shell and Barangay Ginebra are annotated, leaving the remaining words unannotated.", "cite_spans": [ { "start": 88, "end": 111, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF17" }, { "start": 168, "end": 189, "text": "(Lample et al., 2016)", "ref_id": "BIBREF18" }, { "start": 392, "end": 409, "text": "(Ni et al., 2017)", "ref_id": "BIBREF24" }, { "start": 525, "end": 544, "text": "(Yang et al., 2017;", "ref_id": "BIBREF38" }, { "start": 545, "end": 567, "text": "Peng and Dredze, 2016;", "ref_id": "BIBREF27" }, { "start": 568, "end": 588, "text": "Mayhew et al., 2017;", "ref_id": "BIBREF23" }, { "start": 589, "end": 606, "text": "Pan et al., 2017;", "ref_id": "BIBREF26" }, { "start": 607, "end": 624, "text": "Lin et al., 2018;", "ref_id": "BIBREF21" }, { "start": 625, "end": 642, "text": "Xie et al., 2018)", "ref_id": "BIBREF35" }, { "start": 891, "end": 908, "text": "(Ni et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 1037, "end": 1045, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 Our project can be found in https://github.com/zig-kwin-hu/Low-Resource-Name-Tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Some may call it Named Entity Recognition (NER).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: An example of weakly labeled sentences; annotated words carry labels such as B-ORG, I-ORG, B-NT and I-NT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "WL data is more practical to obtain, since it is difficult for people to accurately annotate those entities that they do not know or are not interested in. We can construct such data from online resources, such as the anchors in Wikipedia. However, the following properties of WL data make learning name tagging from it more challenging:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Partially-Labeled Sequence Automatically derived WL data does not contain complete annotations, and thus cannot be directly used for training. Ni et al. (2017) select the sentences with the highest confidence and assume missing labels to be O (i.e., non-entity), but this introduces a bias towards recognizing mentions as non-entity. 
Another line of work is to replace CRFs with Partial-CRFs (T\u00e4ckstr\u00f6m et al., 2013), which assign unlabeled words all possible labels and maximize the total probability (Shang et al., 2018). However, they still rely on seed annotations or domain dictionaries for high-quality training.", "cite_spans": [ { "start": 140, "end": 156, "text": "Ni et al. (2017)", "ref_id": "BIBREF24" }, { "start": 378, "end": 402, "text": "(T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF31" }, { "start": 494, "end": 514, "text": "(Shang et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Massive Noisy Data WL corpora usually come with massive noisy data, including missing labels and incorrect boundaries and types. Previous work filtered out WL sentences by statistical methods (Ni et al., 2017) or by the output of a trainable classifier. However, abandoning training data may exacerbate the issue of inadequate annotation. Therefore, maximizing the potential of the massive noisy data as well as the high-quality part, while remaining efficient, is challenging.", "cite_spans": [ { "start": 192, "end": 209, "text": "(Ni et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these issues, we first differentiate noisy data from high-quality WL sentences via a lightweight scoring strategy, which accounts for the annotation confidence as well as the coverage of all mentions in one sentence. To take the best advantage of all WL data, we then propose a unified neural framework that solves name tagging from two perspectives, sequence labeling and classification, for the two types of data, respectively. Specifically, the classification module focuses on noisy data to efficiently pre-train the tag classifier by capturing textual context semantics. It is trained only on annotated words, without the noisy unannotated words, and is thus robust and efficient during training. The costly sequence labeling module aims to achieve a sequential optimum among word tags. It further alleviates the burden of seed annotations in Partial-CRFs and increases randomness via a Non-entity Sampling strategy, which samples O words according to several linguistic features. These two modules are combined via shared parameters. Our main contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a novel neural name tagging model that relies merely on WL data, without feature engineering. It can thus be adapted to both low-resource languages and domains, while no previous work deals with both at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We consider name tagging from the two perspectives of sequence labeling and classification, to efficiently take the best advantage of both high-quality and noisy WL data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct extensive experiments in five low-resource languages and a fine-grained domain. Since little work has been done in the two types of low-resource settings simultaneously, we derive two types of baselines from state-of-the-art methods. 
Our model achieves significant improvements (6% and 7.8% F1 on average) while remaining efficient, as demonstrated in further ablation studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Name tagging is a fundamental task of extracting entity information, which benefits many applications, such as information extraction (Kuang et al., 2019; Cao et al., 2019a) and recommendation (Wang et al., 2019; Cao et al., 2019b). It can be treated as either a multi-class classification problem (Hammerton, 2003) or a sequence labeling problem (Collobert et al., 2011), but very little work has combined the two. The difference between them mainly lies in whether the method models sequential label constraints, which have been demonstrated effective in many NN-CRFs models (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016). However, these models require a large amount of human-annotated corpora, which are usually expensive to obtain. The above issue motivates a lot of work on name tagging in low-resource languages or domains. A typical line of effort focuses on introducing external knowledge via transfer learning (Fritzler et al., 2018; Hofer et al., 2018), such as the use of cross-domain (Yang et al., 2017), cross-task (Peng and Dredze, 2016; Lin et al., 2018) and cross-lingual resources (Ni et al., 2017; Xie et al., 2018; Zafarian et al., 2015; Zhang et al., 2016; Mayhew et al., 2017; Tsai et al., 2016; Feng et al., 2018; Pan et al., 2017). Although they achieve promising results, there is a large amount of weak annotations on the Web that has not been well studied (Nothman et al., 2008; Ehrmann et al., 2011). Shang et al. (2018) utilized Partial-CRFs (T\u00e4ckstr\u00f6m et al., 2013) to model incomplete annotations for specific domains, but they still rely on seed annotations or a domain dictionary. Therefore, we aim at filling the gap in low-resource name tagging research by using only WL data, and adapt our model to arbitrary low-resource languages or domains, where it can be further improved by the above transfer-based methods.", "cite_spans": [ { "start": 135, "end": 159, "text": "(Kuang et al., 2019;", "ref_id": "BIBREF16" }, { "start": 160, "end": 178, "text": "Cao et al., 2019a)", "ref_id": "BIBREF4" }, { "start": 198, "end": 217, "text": "(Wang et al., 2019;", "ref_id": "BIBREF34" }, { "start": 218, "end": 236, "text": "Cao et al., 2019b)", "ref_id": "BIBREF5" }, { "start": 303, "end": 320, "text": "(Hammerton, 2003)", "ref_id": "BIBREF13" }, { "start": 352, "end": 376, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF7" }, { "start": 585, "end": 606, "text": "(Lample et al., 2016;", "ref_id": "BIBREF18" }, { "start": 607, "end": 625, "text": "Ma and Hovy, 2016;", "ref_id": "BIBREF22" }, { "start": 626, "end": 649, "text": "Chiu and Nichols, 2016)", "ref_id": "BIBREF6" }, { "start": 939, "end": 962, "text": "(Fritzler et al., 2018;", "ref_id": "BIBREF10" }, { "start": 963, "end": 982, "text": "Hofer et al., 2018)", "ref_id": "BIBREF15" }, { "start": 1016, "end": 1035, "text": "(Yang et al., 2017)", "ref_id": "BIBREF38" }, { "start": 1049, "end": 1072, "text": "(Peng and Dredze, 2016;", "ref_id": "BIBREF27" }, { "start": 1073, "end": 1090, "text": "Lin et al., 2018)", "ref_id": "BIBREF21" }, { "start": 1119, "end": 1136, "text": "(Ni et al., 2017;", "ref_id": "BIBREF24" }, { "start": 1137, "end": 1154, "text": "Xie et al., 2018;", "ref_id": "BIBREF35" }, { "start": 1155, "end": 1177, "text": "Zafarian et al., 2015;", "ref_id": "BIBREF39" }, { "start": 1178, "end": 1197, "text": "Zhang et al., 2016;", "ref_id": "BIBREF40" }, { "start": 1198, "end": 1218, "text": "Mayhew et al., 2017;", "ref_id": "BIBREF23" }, { "start": 1219, "end": 1237, "text": "Tsai et al., 2016;", "ref_id": "BIBREF32" }, { "start": 1238, "end": 1256, "text": "Feng et al., 2018;", "ref_id": "BIBREF9" }, { "start": 1257, "end": 1274, "text": "Pan et al., 2017)", "ref_id": "BIBREF26" }, { "start": 1408, "end": 1430, "text": "(Nothman et al., 2008;", "ref_id": "BIBREF25" }, { "start": 1431, "end": 1452, "text": "Ehrmann et al., 2011)", "ref_id": "BIBREF8" }, { "start": 1457, "end": 1476, "text": "Shang et al. (2018)", "ref_id": "BIBREF29" }, { "start": 1499, "end": 1523, "text": "(T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Figure 2: Framework. Rectangles denote the main components of the two steps, and rounded rectangles denote the two modules of the neural model. In the input sentences, bold fonts denote labeled words, and the corresponding outputs are shown at the top. We use Partial-CRFs to model all possible label sequences (red paths from left to right, picking one label per column), controlled by non-entity sampling (struck-through labels according to the distribution). We replace the \"UN\" and \"x-NT\" labels with the corresponding possible labels to clarify the principle of PCRFs.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" },
(2018)", "ref_id": "BIBREF29" }, { "start": 1499, "end": 1523, "text": "(T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 1733, "end": 1741, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "data, and adapt it to arbitrary low-resource languages or domains, which can be further improved by the above transfer-based methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Preliminaries and Framework", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We formally define the name tagging task as follows: given a sequence of words", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "X = x 1 , \u2022 \u2022 \u2022 , x i , \u2022 \u2022 \u2022 , x |X| , it aims to infer a sequence of labels Y = y 1 , \u2022 \u2022 \u2022 , y i , \u2022 \u2022 \u2022 , y |X| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "where |X| is the length of the sequence, y i \u2208 Y is the label of the word x i , each label consists of the boundary and type information, such as B-ORG indicating that the word is Begin of an ORGanization entity. To make notations consistent, we use\u1ef8 = Y {UN,B-NT,I-NT} to denote the label set of WL data, where UN indicates that the word is unlabeled, and NT denote only the type is unlabeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "In other words, the word with UN may be any one of the label in Y, and the word with NT may be any type. We define\u1ef8 for notation clarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "To deal with the issue of limited annotations, we construct WL data D = {(X,\u1ef8 )} based on Wikipedia anchors and taxonomy, where\u1ef8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "= \u1ef9 1 , \u2022 \u2022 \u2022 ,\u1ef9 i , \u2022 \u2022 \u2022 ,\u1ef9 |X| and\u1ef9 i \u2208\u1ef8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "An anchor m, e \u2208 A links a mention m to an entity e \u2208 E, where m contains one or several consecutive words of length |m|. Particularly, we define A(X) as the set of anchors in X. Most entities are mapped to hierarchically organized categories, namely taxonomy T , which provides category information C = {c}. We define C(e) as the category set of e, and T \u2193 (c) as the children of c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.1" }, { "text": "The goal of our method is to extract WL data from Wikipedia and use them as training corpora for name tagging. As shown in Figure 2 , there are two steps in our framework:", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Framework", "sec_num": "3.2" }, { "text": "Weakly Labeled Data Generation generates as many WL data as possible for higher tagging recall. It contains two components of label induction and data selection scheme. First, the label induction assigns each word a label based on Wikipedia anchors and taxonomy. 
{ "text": "The goal of our method is to extract WL data from Wikipedia and use it as a training corpus for name tagging. As shown in Figure 2, there are two steps in our framework:", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Framework", "sec_num": "3.2" }, { "text": "Weakly Labeled Data Generation generates as much WL data as possible for higher tagging recall. It contains two components: label induction and a data selection scheme. First, the label induction assigns each word a label based on Wikipedia anchors and taxonomy. Then, the data selection scheme computes quality scores for the WL sentences by considering the coverage of mentions as well as the label confidence. According to the scores, we split the entire set into two parts: a small set of high-quality data for the sequence labeling module, and a large amount of noisy data for the classification module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3.2" }, { "text": "Neural Name Tagging Model aims at efficiently and robustly utilizing both high-quality and noisy WL data, ensuring satisfactory tagging precision. It makes the best use of labeled words via the sequence labeling module and the classification module. More specifically, we pre-train the classification module to capture textual context semantics from the massive noisy data, and then the sequence labeling module further fine-tunes the shared neural networks using a Partial-CRFs layer with Non-Entity Sampling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3.2" }, { "text": "Existing methods use Wikipedia (Ni et al., 2017; Pan et al., 2017; Gei\u00df et al., 2017) to train an extra classifier to predict entity categories for name tagging training. Instead, we aim at lowering the requirements on additional resources in order to support more low-resource settings. We thus utilize a lightweight strategy to generate WL data, including label induction and a data selection scheme.", "cite_spans": [ { "start": 31, "end": 48, "text": "(Ni et al., 2017;", "ref_id": "BIBREF24" }, { "start": 49, "end": 66, "text": "Pan et al., 2017;", "ref_id": "BIBREF26" }, { "start": 67, "end": 85, "text": "Gei\u00df et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Weakly Labeled Data Generation", "sec_num": "4" }, { "text": "Given a sentence X including anchors A(X) and a taxonomy T, we aim at inducing a label \u1ef9 \u2208 \u1ef8 for each word x \u2208 X. Obviously, the words outside of anchors should be labeled with UN, indicating that they are unlabeled and could be O or part of unannotated mentions. For the words in an anchor \u27e8m, e\u27e9, we label them according to the entity categories. For example, the words Formula and Shell (Figure 1) in s_2 are labeled as B-ORG and I-ORG, respectively, because the mention Formula Shell is linked to the entity Shell Turbo Chargers, which belongs to the category Basketball teams. We trace it along the taxonomy T: Basketball teams\u2192...\u2192Organizations, and find that it is a child of Organizations. According to a manually defined mapping \u0393(Y) \u2192 C (e.g., \u0393(ORG) = Organizations), we assign all such classes and their children the same type (e.g., ORG).", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 378, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Label Induction", "sec_num": "4.1" }, { "text": "However, there are two minor issues. First, for the entities without category information (C(e) = \u2205), we label them as B-NT or I-NT, indicating that they have no type information. Second, for the entities referring to multiple categories, we induce the label that maximizes the conditional probability: argmax_{y*} p(y*|C(e)) = argmax_{y*} \u2211_{c \u2208 C(e)} 1(c \u2208 T\u2193(\u0393(y*))) / |C(e)| (1) where 1(\u00b7) equals 1 if the condition holds true, and 0 otherwise. By doing so, we obtain a set of WL sentences D = {(X, \u1ef8)}. However, the induction process may introduce incorrect boundaries and types of labels due to the crowdsourced nature of the source data. We thus design a data selection scheme to deal with these issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Induction", "sec_num": "4.1" },
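{ "text": "To make the induction procedure concrete, here is a minimal sketch, reusing the illustrative structures sketched in Section 3.1; descendants and gamma (the mapping \u0393 as a type -> root category dict) are assumed helper names.

def descendants(taxonomy, root):
    # All categories reachable from root, i.e. root plus its T\u2193 closure.
    seen, stack = {root}, [root]
    while stack:
        for child in taxonomy.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def induce_type(entity, entity_categories, taxonomy, gamma):
    # Pick the type y* maximizing Eq. (1).
    cats = entity_categories.get(entity, set())
    if not cats:
        return 'NT'  # no category information: fall back to B-NT / I-NT
    scores = {y: sum(c in descendants(taxonomy, root) for c in cats) / len(cats)
              for y, root in gamma.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else 'NT'

def induce_labels(sent, entity_categories, taxonomy, gamma):
    # Words outside anchors stay UN; anchored words get B-/I- plus the induced type.
    labels = ['UN'] * len(sent.words)
    for a in sent.anchors:
        t = induce_type(a.entity, entity_categories, taxonomy, gamma)
        labels[a.start] = 'B-' + t
        for i in range(a.start + 1, a.start + a.length):
            labels[i] = 'I-' + t
    return labels", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label Induction", "sec_num": "4.1" },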
{ "text": "Following Ni et al. (2017), we compute quality scores for sentences to distinguish high-quality from noisy data based on two aspects: the annotation confidence and the annotation coverage.", "cite_spans": [ { "start": 10, "end": 26, "text": "Ni et al. (2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" }, { "text": "The annotation confidence measures the likelihood of the text spans being mentions (i.e., the correctness of boundaries) and being assigned the correct types. We define it as follows: q(X, \u1ef8) = \u2211_{(x_i, \u1ef9_i)} 1(\u1ef9_i \u2208 Y) p(\u1ef9_i|C(e)) p(C(e)|x_i) / |X| (2) where p(C(e)|x_i) is the conditional probability of x_i linking to an entity belonging to category C(e); we compute it based on statistical frequencies among Wikipedia anchors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" }, { "text": "The annotation coverage measures the ratio of words that are labeled in the sentence: n(X, \u1ef8) = \u2211_{(x_i, \u1ef9_i)} 1(\u1ef9_i \u2208 Y) / |X| (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" }, { "text": "We select high-quality sentences D_hq satisfying:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q(X, \u1ef8) \u2265 \u03b8_q; n(X, \u1ef8) \u2265 \u03b8_n", "eq_num": "(4)" } ], "section": "Data Selection Scheme", "sec_num": "4.2" }, { "text": "where \u03b8_q and \u03b8_n are hyperparameters. Thus, the remaining sentences form the noisy set D_noise. For example (Figure 2), the sentence ... Barangay Ginebra and Formula Shell ... is high-quality, and The team is owned by Ginebra is noisy. This is because more anchors link Formula Shell to an organization entity, and the anchors within the former sentence account for a large proportion of its words, leading to a higher quality score. Note that Barangay and Ginebra are labeled with B-NT and I-NT, indicating that the type information is missing. Our model may learn the textual semantics for classifying Ginebra to ORG from the noisy sentence, where Ginebra is labeled with B-ORG.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 114, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" },
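{ "text": "A minimal sketch of this selection scheme, assuming the per-word confidences p(\u1ef9_i|C(e)) p(C(e)|x_i) have been pre-computed from anchor statistics (the helper names are illustrative):

def is_complete(y):
    # 1(y \u2208 Y): the label carries both boundary and type information.
    return y not in ('UN', 'B-NT', 'I-NT')

def quality_score(labels, confidences):
    # Eq. (2): total confidence of completely labeled words, averaged over |X|.
    return sum(confidences[i] for i, y in enumerate(labels) if is_complete(y)) / len(labels)

def coverage_score(labels):
    # Eq. (3): fraction of words carrying a complete label.
    return sum(is_complete(y) for y in labels) / len(labels)

def split_corpus(corpus, theta_q=0.1, theta_n=0.9):
    # Eq. (4): a sentence is high-quality only if both thresholds are met
    # (default thresholds follow the heuristic setting in Section 6.1).
    d_hq, d_noise = [], []
    for labels, confidences in corpus:
        ok = (quality_score(labels, confidences) >= theta_q
              and coverage_score(labels) >= theta_n)
        (d_hq if ok else d_noise).append((labels, confidences))
    return d_hq, d_noise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Scheme", "sec_num": "4.2" },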
{ "text": "Our neural model contains two modules that share the same NN architecture except for the Partial-CRFs layer. Given D_hq and D_noise, we first pre-train the classification module using the massive noisy data D_noise to efficiently capture textual semantics. Then, we use the sequence labeling module to fine-tune the classification module on the high-quality data D_hq by considering the transitional constraints among sequential labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Name Tagging Model", "sec_num": "5" }, { "text": "Before describing the NN of the classification module, we first introduce the sequence labeling module. Different from conventional NN-CRFs models, we utilize the Partial-CRFs layer to maximize the probability of all possible sequential labels for the sentence under transitional constraints, where the probability of missing word labels is controlled by non-entity sampling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Labeling Module", "sec_num": "5.1" }, { "text": "Partial-CRFs (PCRFs) were first proposed in the field of Part-of-Speech Tagging (T\u00e4ckstr\u00f6m et al., 2013). They can be trained when the coupled word and label constraints provide only a partial signal, by assuming that the uncoupled words may refer to multiple labels. Given (X, \u1ef8), we traverse all possible labels in Y for each unannotated word {x_i | \u1ef9_i \u2208 {UN, B-NT, I-NT}} (e.g., the red paths in Figure 2), and compute the total probability of the possible fully labeled sentences Y(X, \u1ef8) = {(X, Y)}:", "cite_spans": [ { "start": 79, "end": 103, "text": "(T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 389, "end": 398, "text": "Figure 2)", "ref_id": null } ], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(\u1ef8|X) = \u2211_{(X,Y) \u2208 Y(X,\u1ef8)} p(Y|X)", "eq_num": "(5)" } ], "section": "Partial-CRFs", "sec_num": null }, { "text": "where p(Y|X) = softmax(s(X, Y)), the same as in CRFs, and the score function s(X, Y) is: s(X, Y) = \u2211_{i=0}^{|X|} A_{y_i, y_{i+1}} + \u2211_{i=1}^{|X|} P_{x_i, y_i} (6) where P_{x_i, y_i} is the score indicating how likely x_i is to be labeled with y_i, which is defined as the output of the NN and will be detailed in the next section, and A_{y_i, y_{i+1}} is the transition score from label y_i to y_{i+1} that is learned in this layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null },
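{ "text": "The total probability of Eq. (5) can be computed with a forward algorithm whose label choices are constrained by \u1ef8; a minimal numpy sketch, with start/end transitions omitted for brevity, where allowed[i] holds the label ids compatible with \u1ef9_i (all labels when \u1ef9_i = UN):

import numpy as np

def log_sum_exp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def constrained_log_z(emissions, transitions, allowed):
    # Forward algorithm restricted to the allowed labels per position.
    # emissions: [T, L] scores P_{x_i, y}; transitions: [L, L] scores A.
    T, L = emissions.shape
    alpha = np.full(L, -np.inf)
    for y in allowed[0]:
        alpha[y] = emissions[0, y]
    for i in range(1, T):
        nxt = np.full(L, -np.inf)
        for y in allowed[i]:
            nxt[y] = emissions[i, y] + log_sum_exp(alpha + transitions[:, y])
        alpha = nxt
    return log_sum_exp(alpha)

def pcrf_log_prob(emissions, transitions, allowed):
    # log p(\u1ef8|X) of Eq. (5): mass of compatible paths over the mass of all paths.
    full = [range(emissions.shape[1])] * emissions.shape[0]
    return (constrained_log_z(emissions, transitions, allowed)
            - constrained_log_z(emissions, transitions, full))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null },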
{ "text": "Instead of the probability of the single correct label sequence as in CRFs, the loss function of Partial-CRFs minimizes the negative log-probability of the ground truth summed over all possible labeled sequences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = - \u2211_{(X, \u1ef8) \u2208 D_hq} log p(\u1ef8|X)", "eq_num": "(7)" } ], "section": "Partial-CRFs", "sec_num": null }, { "text": "Non-entity Sampling A crucial drawback of using Partial-CRFs for WL sentences is that there are no words labeled with O (i.e., non-entity words) for training (Section 6.5). To further alleviate the reliance on seed annotations, we introduce non-entity sampling, which samples O labels from unlabeled words as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y_i = O | x_i, \u1ef9_i = UN) = (\u03b1/3)(\u03bb_1 f_1 + \u03bb_2 (1 - f_2) + \u03bb_3 f_3)", "eq_num": "(8)" } ], "section": "Partial-CRFs", "sec_num": null }, { "text": "where \u03b1 is the non-entity ratio balancing how many unlabeled words are sampled as O; we set \u03b1 = 0.9 in experiments according to Augenstein et al. (2017). The weighting parameters satisfy 0 \u2264 \u03bb_1, \u03bb_2, \u03bb_3 \u2264 1, and f_1, f_2, f_3 are feature scores. We define f_1 = 1(x_i adjoins an entity), which implies that the words around a mention are likely to be O; f_2 is the ratio of the number of occurrences of x_i inside labeled entities to its total number of occurrences, reflecting how frequently a word appears in mentions; and f_3 = tf * df, where tf is the term frequency and df is the document frequency in Wikipedia articles.", "cite_spans": [ { "start": 125, "end": 149, "text": "Augenstein et al. (2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null }, { "text": "As shown in Figure 2, the three words and, forming and an are labeled with UN since they are outside of anchors. During training, they would be assigned all labels of Y in Partial-CRFs, but we sample some of them as O words according to Equation 8. Thus, and and an are instead treated as O words, because they do not appear in any anchor and are too general, as indicated by a high f_3 value.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null },
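{ "text": "A minimal sketch of this sampling step, assuming f1, f2 and f3 are pre-computed per-word feature scores in [0, 1]:

import random

def sample_non_entities(labels, f1, f2, f3, alpha=0.9, lambdas=(0.0, 0.9, 0.1)):
    # Relabel unlabeled (UN) words as O with the probability of Eq. (8);
    # the default weights follow the heuristic setting in Section 6.1.
    l1, l2, l3 = lambdas
    out = list(labels)
    for i, y in enumerate(labels):
        if y != 'UN':
            continue
        p_o = alpha / 3 * (l1 * f1[i] + l2 * (1 - f2[i]) + l3 * f3[i])
        if random.random() < p_o:
            out[i] = 'O'
    return out", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partial-CRFs", "sec_num": null },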
{ "text": "To efficiently utilize the noisy WL sentences, this module regards name tagging as a multi-label classification problem. On one hand, it predicts each word's label separately, naturally addressing the issue of inconsecutive labels. On the other hand, we focus only on the labeled words, so that the module is robust to noise, since most noise arises from the unlabeled words, and enjoys an efficient training procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Module", "sec_num": "5.2" }, { "text": "Formally, given a noisy sentence (X, \u1ef8) \u2208 D_noise, we classify the words {x_i | \u1ef9_i \u2208 Y} by capturing textual semantics within the context. Independently of languages and domains, we combine the character and word embeddings for each word, then feed them into an encoder layer to capture contextual information for the classification layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Module", "sec_num": "5.2" }, { "text": "Character and Word Embeddings As inputs, we introduce character information to enhance word representations, improving robustness to morphological and misspelling noise, following Ma and Hovy (2016). Concretely, we represent a word x by concatenating its word embedding w and a Convolutional Neural Network (CNN) (LeCun et al., 1989) based character embedding c, which is obtained through convolution operations over the characters in a word, followed by max pooling and dropout.", "cite_spans": [ { "start": 181, "end": 199, "text": "Ma and Hovy (2016)", "ref_id": "BIBREF22" }, { "start": 313, "end": 333, "text": "(LeCun et al., 1989)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Character and Word Embeddings", "sec_num": null }, { "text": "Encoder Layer Given a sentence X of arbitrary length, this component encodes the semantics of words as well as their compositionality into a low-dimensional vector space. The most common encoders are CNN, Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Transformer (Vaswani et al., 2017). We use the bi-directional LSTM (Bi-LSTM) due to its superior performance; we discuss this choice in Section 6.2. Bi-LSTM (Graves et al., 2013) has been widely used for modeling sequential words, as it captures both past and future input features for a given word. It stacks a forward LSTM and a backward LSTM, so that the output for a word x_i is h_i = [\u2190h_i; \u2192h_i], where \u2192h_i = LSTM(X_{1:i}) and \u2190h_i = LSTM(X_{i:|X|}).", "cite_spans": [ { "start": 235, "end": 269, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF14" }, { "start": 286, "end": 308, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF33" }, { "start": 431, "end": 452, "text": "(Graves et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder Layer", "sec_num": null }, { "text": "Classification Layer The classification layer makes independent labeling decisions for each word, so that we can focus only on labeled words while robustly and efficiently skipping the noisy unlabeled words. In this layer, we estimate the score P_{x_i, y_i} (Equation 6) for word x_i being labeled with y_i. We use a fully connected layer followed by softmax to output a probability-like score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Layer", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{x_i, y_i} = softmax(W h_i + b)", "eq_num": "(9)" } ], "section": "Classification Layer", "sec_num": null }, { "text": "where W \u2208 R^{|Y|}. Note that we have no training instances for O words; thus, we also use non-entity sampling (Section 5.1). Given (X, \u1ef8) \u2208 D_noise, this module is trained to minimize the cross-entropy between the predicted and ground-truth labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Layer", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_c = - \u2211_{(X, \u1ef8) \u2208 D_noise} \u2211_i 1(\u1ef9_i \u2208 Y) \u1ef9_i log P_{x_i, y_i}", "eq_num": "(10)" } ], "section": "Classification Layer", "sec_num": null },
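{ "text": "A minimal PyTorch sketch of this module; the character-level CNN is omitted for brevity, and the masking in masked_loss mirrors Eq. (10) so that unlabeled words contribute no gradient:

import torch
import torch.nn as nn

class ClassificationModule(nn.Module):
    # Bi-LSTM encoder with a per-word softmax classifier (Eq. 9).
    def __init__(self, vocab_size, n_labels, emb_dim=100, hidden=150):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_labels)

    def forward(self, word_ids):              # word_ids: [batch, T]
        h, _ = self.lstm(self.emb(word_ids))  # h: [batch, T, 2 * hidden]
        return self.proj(h)                   # emission scores P_{x_i, y}

def masked_loss(logits, labels, labeled_mask):
    # Eq. (10): cross-entropy over annotated (and sampled O) words only.
    loss = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction='none')
    mask = labeled_mask.view(-1).float()
    return (loss * mask).sum() / mask.sum()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Layer", "sec_num": null },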
{ "text": "To distill the knowledge derived from noisy data, we first pre-train the classification module, and then share the overall NN with the sequence labeling module. If we choose loose thresholds \u03b8_q and \u03b8_n, there is no noisy data and our model degrades to the sequential model without the pre-trained classifier. When the thresholds are strict, there is no high-quality data and our model degrades to the classification module only (Section 6.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "5.3" }, { "text": "For inference, we use the sequence labeling module to predict the output label sequence with the largest score as in Equation 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "5.3" },
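{ "text": "The two-stage schedule of this section then amounts to the following sketch; the data loaders and a pcrf module exposing the log-probability of Eq. (7) are assumed, and the optimizer settings follow Section 6.1:

def train(model, pcrf, d_noise_loader, d_hq_loader, epochs_pre=1, epochs_ft=1):
    params = list(model.parameters()) + list(pcrf.parameters())
    opt = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-9)
    for _ in range(epochs_pre):          # stage 1: pre-train the shared encoder
        for word_ids, labels, mask in d_noise_loader:
            loss = masked_loss(model(word_ids), labels, mask)
            opt.zero_grad(); loss.backward(); opt.step()
    for _ in range(epochs_ft):           # stage 2: fine-tune with Partial-CRFs
        for word_ids, allowed in d_hq_loader:
            loss = -pcrf.log_prob(model(word_ids), allowed)  # Eq. (7)
            opt.zero_grad(); loss.backward(); opt.step()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference", "sec_num": "5.3" },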
(2017)", "ref_id": "BIBREF26" }, { "start": 1047, "end": 1154, "text": "56,571, 16,718, 4,131, 8,332, 6,266, 11,297 high-quality and 49,970, 50,197, 32,417, 10,918, 12,434, 16,501", "ref_id": null } ], "ref_spans": [ { "start": 1458, "end": 1465, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiment Settings", "sec_num": "6.1" }, { "text": "Training Details For tuning of hyper-parameters, we set nonentity feature weights to \u03bb 1 = 0, \u03bb 2 = 0.9, \u03bb 3 = 0.1 heuristically. We pre-train word embeddings using Glove (Pennington et al., 2014) , and finetune embeddings during training. We set the dimension of words and characters as 100 and 30, respectively. We use 30 filter kernels, where each kernel has the size of 3 in character CNN, and dropout rate is set to 0.5. For bi-LSTM, the hidden state has 150 dimensions. The batch size is set to 32 and 64 for sequence labeling module and classification module. We adopt Adam with L2 regularization for optimization, and set the learning rate and weight decay to 0.001 and 1e \u22129 . Baselines Since most low-resource name tagging methods introduce external knowledge (Section 2), which has limited availability and is out of the scope for this paper, we arrive at two types of baselines from weakly supervised models:", "cite_spans": [ { "start": 171, "end": 196, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "6.1" }, { "text": "Typical NN-CRFs models (Ni et al., 2017) by selecting high-quality WL data and regarding unlabeled words as O, which usually achieve very competitive results. NN denotes CNN, Transformer (Trans for short) or Bi-LSTM. NN-PCRFs model Shang et al., 2018) . Although they achieves state-ofthe-art performance, methods of this type are only evaluated in specific domains and require a small set of seed annotations or a domain dictionary. We thus carefully adapt them to low-resource languages and domains by selecting the highestquality WL data (\u03b8 n > 0.3) as seeds 4 . 3 The statistics includes noisy data, which greatly increases the size but cannot be used for evaluation. 4 We adopt the common part of their models related to 6.2 Results on Low-Resource Languages Table 2 shows the overall performance of our proposed model as well as the baseline methods (P and R denote Precision and Recall). We can see:", "cite_spans": [ { "start": 23, "end": 40, "text": "(Ni et al., 2017)", "ref_id": "BIBREF24" }, { "start": 232, "end": 251, "text": "Shang et al., 2018)", "ref_id": "BIBREF29" }, { "start": 566, "end": 567, "text": "3", "ref_id": null }, { "start": 672, "end": 673, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 764, "end": 771, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiment Settings", "sec_num": "6.1" }, { "text": "Our method consistently outperforms all baselines in five languages w.r.t F1, mainly because we greatly improve recall (2.7% to 9.34% on average) by taking best advantage of WL data and being robust to noise via two modules. As for the precision, partial-CRFs perform poorly compared with CRFs due to the uncertainty of unlabeled words, while our method alleviates this issue by introducing linguistic features in non-entity sampling. An exception occurs in CY, because it has the most training data, which may bring more accurate information than sampling. 
Actually, we can tune the non-entity ratio hyper-parameter \u03b1 to improve precision 5 ; more studies can be found in Section 6.5. Besides, the sampling technique can utilize more prior features if available; we leave this for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Low-Resource Languages", "sec_num": "6.2" }, { "text": "Among all encoders, Bi-LSTM has the greatest ability for feature abstraction and achieves the highest precision in most languages. An unexpected exception is Yoruba, where CNN achieves higher performance. This indicates that the three encoders capture textual semantics from different perspectives, and thus it is better to choose the encoder by considering the linguistic natures of the language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Low-Resource Languages", "sec_num": "6.2" }, { "text": "As for the impacts of resources, all the models perform the worst in Yoruba. Interestingly, we conclude that the performance of name tagging in low-resource languages does not depend entirely on the absolute number of mentions in the training data, but largely on the average number of annotations per sentence. For example, Bengali has 1.9 mentions per sentence and all methods achieve their best results, while the opposite holds for Welsh with 1.4 mentions per sentence. This verifies our data selection scheme (e.g., annotation coverage n(\u00b7)), and we give more discussion in Section 6.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Low-Resource Languages", "sec_num": "6.2" }, { "text": "Table 3 shows the overall performance in the food domain, where D, M, V, C and B denote Drinks, Meat, Vegetables, Condiments and Breads. We can observe that there is a performance drop compared to the low-resource languages, mainly because of the larger number of types and the sparser training data. Our model outperforms all of the baselines in all food types by 7.8% on average. The performance on condiments is relatively low, because most of them are composed of meat or vegetables, such as steak sauce, which overlaps with other types and makes the recognition more difficult. Here is a representative case demonstrating that our model is robust to the noise induced by unlabeled words. In Figure 4 , the sentence is from the noisy WL training data of the food domain, and only Maize is labeled as B-V. Although our model is trained on this sentence, it successfully predicts yams as B-V. This example shows that our two-module design can utilize the noisy data while avoiding the side effects caused by incomplete annotation.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": "TABREF7" }, { "start": 696, "end": 704, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results on Food Domain", "sec_num": "6.3" }, { "text": "We utilize \u03b8_n, the main factor in annotation quality (Section 6.2), to trade off between high-quality and noisy WL data. As shown in Figure 3(a) , the red curve denotes the training time and the blue curve denotes F1. We can see that the performance of our model is relatively stable when \u03b8_n \u2208 [0, 0.15), while the time cost drops dramatically (from 90 to 20 minutes), demonstrating the robustness and efficiency of the two-module design. 
When \u03b8_n \u2208 [0.15, 0.3], the performance decreases greatly due to less available high-quality data for the sequence labeling module; meanwhile, little time is saved through the classification module. Thus, we pick \u03b8_n = 0.1 in the experiments. A special case happens when \u03b8_n = 0: our model degrades to sequence labeling without the pre-trained classifier. We can see that the performance is worse than that of \u03b8_n = 0.1 due to the massive noisy data.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 146, "text": "Figure 3(a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Efficiency Analysis", "sec_num": "6.4" }, { "text": "We use the non-entity ratio \u03b1 to control the sampling; a higher \u03b1 denotes that more unlabeled words are labeled with O. As shown in Figure 3(b) , the precision increases as more words are assigned labels, while the recall achieves two peaks (\u03b1 = 0.4, 0.9), leading to the highest F1 when \u03b1 = 0.9, which conforms to the statistics in Augenstein et al. (2017). There are two special cases. When \u03b1 = 0, our model degrades to an NN-PCRFs model without non-entity sampling, and there are no seed annotations for training. We can see that the model performs poorly due to the dominant unlabeled words (Section 5.1). When \u03b1 = 1, indicating that all unlabeled words are sampled as O, our model degrades to the NN-CRFs model, which has higher precision at the cost of recall. Clearly, the model suffers from the bias towards O labeling.", "cite_spans": [ { "start": 334, "end": 358, "text": "Augenstein et al. (2017)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 128, "end": 139, "text": "Figure 3(b)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Impact of Non-Entity Sampling Ratio", "sec_num": "6.5" }, { "text": "We propose three features for non-entity sampling: nearby entities (f_1), ever within entities (f_2) and term/document frequency (f_3). We now investigate how effective each feature is. Figure 3(c) shows the performance of our model when sampling non-entity words using each feature as well as their combinations. The first bar denotes the performance of sampling without any features. It is not satisfying but competitive, indicating the importance of non-entity sampling to Partial-CRFs. The single feature f_2 contributes the most, and is enhanced by f_3, because they provide complementary information. Surprisingly, f_1 seems better than f_3, but it makes the model worse when combined with f_2 and f_3; thus we set \u03bb_1 = 0.", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 197, "text": "Figure 3(c)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Impact of Non-Entity Features", "sec_num": "6.6" }, { "text": "In this paper, we propose a novel name tagging model that consists of two modules for sequence labeling and classification, combined via shared parameters. We automatically construct WL data from Wikipedia anchors and split it into high-quality and noisy portions for training each module. The sequence labeling module focuses on the high-quality data and is costly due to the Partial-CRFs layer with non-entity sampling, which models all possible label combinations. The classification module focuses on the annotated words in the noisy data to pre-train the tag classifier efficiently. 
The experimental results on five low-resource languages and a specific domain demonstrate both the effectiveness and the efficiency of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In the future, we are interested in incorporating entity structural knowledge to enhance text representation (Cao et al., 2018b) , or transfer learning (Sun et al., 2019) to deal with massive rare words and entities for low-resource name tagging, or introducing external knowledge for further improvement.", "cite_spans": [ { "start": 109, "end": 129, "text": "(Cao et al., 2018b)", "ref_id": "BIBREF2" }, { "start": 153, "end": 171, "text": "(Sun et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In this table, we show the performance using the same hyper-parameters across different languages for fairness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "NExT++ research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generalisation in named entity recognition: A quantitative analysis", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2017, "venue": "Computer Speech & Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. Computer Speech & Language.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural collective entity linking", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018a. Neural collective entity linking. In COLING.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint representation learning of cross-lingual words and entities via attentive distant supervision", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengjiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tiansi", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018b. Joint representation learning of cross-lingual words and entities via attentive distant supervision.
In EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bridge text and knowledge by learning multi-prototype entity mention embedding", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multi-channel graph neural network for entity alignment", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengjiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Zhiyuan Liu, Chengjiang Li, Juanzi Li, and Tat-Seng Chua. 2019a. Multi-channel graph neural network for entity alignment. In ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Zikun", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2019, "venue": "WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. 2019b. Unifying knowledge graph learning and recommendation: Towards a better un- derstanding of user preferences. In WWW.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Named entity recognition with bidirectional lstm-cnns", "authors": [ { "first": "Jason", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. 
TACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Building a multilingual named entityannotated corpus using annotation projection", "authors": [ { "first": "Maud", "middle": [], "last": "Ehrmann", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Steinberger", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entity- annotated corpus using annotation projection. In Proceedings of the International Conference Recent Advances in Natural Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving low resource named entity recognition using cross-lingual knowledge transfer", "authors": [ { "first": "Xiaocheng", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Xiachong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Zhangyin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaocheng Feng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. 2018. Improving low resource named entity recognition using cross-lingual knowl- edge transfer. In IJCAI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Few-shot classification in named entity recognition task", "authors": [ { "first": "Alexander", "middle": [], "last": "Fritzler", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Maksim", "middle": [], "last": "Kretov", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.06158" ] }, "num": null, "urls": [], "raw_text": "Alexander Fritzler, Varvara Logacheva, and Mak- sim Kretov. 2018. Few-shot classification in named entity recognition task. 
arXiv preprint arXiv:1812.06158.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neckar: a named entity classifier for wikidata", "authors": [ { "first": "Johanna", "middle": [], "last": "Gei\u00df", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Spitz", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gertz", "suffix": "" } ], "year": 2017, "venue": "International Conference of the German Society for Computational Linguistics and Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johanna Gei\u00df, Andreas Spitz, and Michael Gertz. 2017. Neckar: a named entity classifier for wikidata. In International Conference of the German Society for Computational Linguistics and Language Technol- ogy. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Speech recognition with deep recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Abdel-Rahman", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "2013 IEEE international conference on acoustics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In 2013 IEEE international conference on acoustics, speech and signal process- ing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Named entity recognition with long short-term memory", "authors": [ { "first": "James", "middle": [], "last": "Hammerton", "suffix": "" } ], "year": 2003, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Hammerton. 2003. Named entity recognition with long short-term memory. In NAACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Few-shot learning for named entity recognition in medical text", "authors": [ { "first": "Maximilian", "middle": [], "last": "Hofer", "suffix": "" }, { "first": "Andrey", "middle": [], "last": "Kormilitzin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Alejo", "middle": [], "last": "Nevado-Holgado", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.05468" ] }, "num": null, "urls": [], "raw_text": "Maximilian Hofer, Andrey Kormilitzin, Paul Goldberg, and Alejo Nevado-Holgado. 2018. Few-shot learn- ing for named entity recognition in medical text. 
arXiv preprint arXiv:1811.05468.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving neural relation extraction with implicit mutual relations", "authors": [ { "first": "Jun", "middle": [], "last": "Kuang", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jianbing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Aoying", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.05333" ] }, "num": null, "urls": [], "raw_text": "Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, and Aoying Zhou. 2019. Improving neu- ral relation extraction with implicit mutual relations. arXiv preprint arXiv:1907.05333.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando Cn", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In ICML.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Backpropagation applied to handwritten zip code recognition", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Boser", "suffix": "" }, { "first": "S", "middle": [], "last": "John", "suffix": "" }, { "first": "Donnie", "middle": [], "last": "Denker", "suffix": "" }, { "first": "Richard", "middle": [ "E" ], "last": "Henderson", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Lawrence", "middle": [ "D" ], "last": "Hubbard", "suffix": "" }, { "first": "", "middle": [], "last": "Jackel", "suffix": "" } ], "year": 1989, "venue": "Neural computation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, Bernhard Boser, John S Denker, Don- nie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation ap- plied to handwritten zip code recognition. 
Neural computation.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural relation extraction with multi-lingual attention", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual atten- tion. In ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A multi-lingual multi-task architecture for low-resource sequence labeling", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shengqi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In ACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Cheap translation for cross-lingual named entity recognition", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In EMNLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection", "authors": [ { "first": "Jian", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representa- tion projection. 
In ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Transforming wikipedia into named entity training data", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Tara", "middle": [], "last": "James R Curran", "suffix": "" }, { "first": "", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, James R Curran, and Tara Murphy. 2008. Transforming wikipedia into named entity training data. In Proceedings of the Australasian Language Technology Association Workshop 2008.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Crosslingual name tagging and linking for 282 languages", "authors": [ { "first": "Xiaoman", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Boliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual name tagging and linking for 282 languages. In ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving named entity recognition for chinese social media with word segmentation representation learning", "authors": [ { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Learning named entity tagger using domain-specific dictionary", "authors": [ { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaotao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Teng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. 
In EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Tat-Seng Chua, and Bernt Schiele. 2019. Meta-transfer learning for few-shot learning", "authors": [ { "first": "Qianru", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yaoyao", "middle": [], "last": "Liu", "suffix": "" } ], "year": null, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. 2019. Meta-transfer learning for few-shot learning. In CVPR.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Token and type constraints for cross-lingual part-of-speech tagging", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. TACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Cross-lingual named entity recognition via wikification", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikifica- tion. In CoNLL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Explainable reasoning over knowledge graphs for recommendation", "authors": [ { "first": "Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dingxian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Canran", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "5329--5336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. 2019. 
Explain- able reasoning over knowledge graphs for recom- mendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5329- 5336.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural crosslingual named entity recognition with minimal resources", "authors": [ { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In EMNLP.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A local detection approach for named entity recognition and mention detection", "authors": [ { "first": "Mingbin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Sedtawut", "middle": [], "last": "Watcharawittayakul", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingbin Xu, Hui Jiang, and Sedtawut Watcharawit- tayakul. 2017. A local detection approach for named entity recognition and mention detection. In ACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Distantly supervised ner with partial annotation learning and reinforcement learning", "authors": [ { "first": "Yaosheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhengqiu", "middle": [], "last": "He", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly su- pervised ner with partial annotation learning and re- inforcement learning. In COLING.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Transfer learning for sequence tagging with hierarchical recurrent networks", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks. 
In ICLR.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Semi-supervised learning for named entity recognition using weakly labeled training data", "authors": [ { "first": "Atefeh", "middle": [], "last": "Zafarian", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Rokni", "suffix": "" }, { "first": "Shahram", "middle": [], "last": "Khadivi", "suffix": "" }, { "first": "Sonia", "middle": [], "last": "Ghiasifard", "suffix": "" } ], "year": 2015, "venue": "AISP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atefeh Zafarian, Ali Rokni, Shahram Khadivi, and So- nia Ghiasifard. 2015. Semi-supervised learning for named entity recognition using weakly labeled train- ing data. In AISP.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Name tagging for low-resource incident languages based on expectation-driven learning", "authors": [ { "first": "Boliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaoman", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Tianlu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2016, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boliang Zhang, Xiaoman Pan, Tianlu Wang, Ashish Vaswani, Heng Ji, Kevin Knight, and Daniel Marcu. 2016. Name tagging for low-resource incident lan- guages based on expectation-driven learning. In NAACL.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Xlink: An unsupervised bilingual entity linking system", "authors": [ { "first": "Jing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hai-Tao", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2017, "venue": "CCL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Zhang, Yixin Cao, Lei Hou, Juanzi Li, and Hai- Tao Zheng. 2017. Xlink: An unsupervised bilingual entity linking system. In CCL.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Example of weakly labeled data. B-NT and I-NT denote incomplete labels without types.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "(a) Efficiency analysis.(b) Impact of non-entity sampling ratio. (c) Impact of non-entity features.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Ablation study of our model in Mongolian.", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Our predictions on a noisy WL sentence.", "num": null, "uris": null }, "TABREF1": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
[Architecture figure residue; recoverable labels only: a Classification Module and a Sequence Labeling Module built on neural network encoders with shared parameters; the sequence labeling side applies Partial-CRFs with Non-Entity Sampling to the label scores (B-ORG, I-ORG, ..., O); noisy weakly labeled data produced by Label Induction feeds the classification module, while high-quality weakly labeled data selected by the Data Selection Scheme feeds the sequence labeling module; example text: "The team is owned ... Barangay Ginebra and Formula Shell forming an ..."]
" }, "TABREF3": { "type_str": "table", "text": "The statistics of weakly labeled dataset.", "num": null, "html": null, "content": "" }, "TABREF4": { "type_str": "table", "text": "noisy WL sentences for language 76.2 80.1 92.0 89.1 90.5 80.9 68.9 74.4 87.3 85.5 86.3 88.6 86.7 87.6 BiLSTM-CRFs 86.0 77.8 81.6 93.3 91.5 92.3 74.1 68.9 71.3 89.0 85.5 87.1 89.5 88.5 89.0 Trans-CRFs 83.7 73.2 78.1 93.0 85.9 89.3 80.2 60.5 69.0 88.0 80.0 83.8 88.9 83.2 85.9 BiLSTM-PCRFs 85.2 79.6 82.3 91.2 92.7 91.9 68.1 70.2 69.1 82.5 91.2 86.6 84.0 90.7 87.1 Ours 82.8 82.5 82.6 93.4 93.5 93.4 73.5 76.8 75.1 86.9 93.6 90.1 87.7 91.5 89.5", "num": null, "html": null, "content": "
CYBNYOMNARZ
PRF1PRF1PRF1PRF1PRF1
CNN-CRFs84.4
" }, "TABREF5": { "type_str": "table", "text": "Performance (%) on low-resource languages.", "num": null, "html": null, "content": "" }, "TABREF7": { "type_str": "table", "text": "F1-score (%) on food domain.", "num": null, "html": null, "content": "
" } } } }