{
    "paper_id": "O07-2012",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:08:27.249068Z"
    },
    "title": "Word sense induction using independent component analysis",
    "authors": [
        {
            "first": "Jia-Fei",
            "middle": [],
            "last": "Hong",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Institute of Linguistics, Academia Sinica",
                "location": {
                    "country": "Taiwan"
                }
            },
            "email": ""
        },
        {
            "first": "Petr",
            "middle": [],
            "last": "\u0160imon",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Institute of Linguistics, Academia Sinica",
                "location": {
                    "country": "Taiwan"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper explores the possibilities of using independent component analysis (ICA) for feature extraction that could be applied to word sense induction. Two different methods for using the features derived by ICA are introduced and the results evaluated. Our goal in this paper is to observe whether ICA-based feature vectors can be efficiently used for word context encoding and subsequently for clustering. We show that it is possible; further research is, however, necessary to obtain more reliable results.",
    "pdf_parse": {
        "paper_id": "O07-2012",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper explores the possibilities of using independent component analysis (ICA) for feature extraction that could be applied to word sense induction. Two different methods for using the features derived by ICA are introduced and the results evaluated. Our goal in this paper is to observe whether ICA-based feature vectors can be efficiently used for word context encoding and subsequently for clustering. We show that it is possible; further research is, however, necessary to obtain more reliable results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Word senses are known to be difficult to discriminate, and even though discrete definitions are usually sufficient for humans, they may pose problems for computer systems. Word sense induction is a task in which the word senses are not known in advance, as opposed to the more popular word sense disambiguation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Word senses can be analyzed by observing the behaviour of words in text. In other words, the syntagmatic and paradigmatic characteristics of a word give us enough information to describe all its senses, given that all of them appear in the text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Based on this assumption, many techniques for word sense induction have been proposed, all of them based on word co-occurrence statistics. There are two strategies for creating the vectors that encode each word: a global encoding strategy, which encodes the co-occurrence of word types with other word types, and a local encoding strategy, which encodes the co-occurrence of word tokens with word types. The global encoding strategy is more popular because it provides more information and does not suffer from data sparseness, and most of the research has focused on sense analysis of words of different forms, i.e. on phenomena such as synonymy. However, by encoding word types, we naturally merge all the possible sense distinctions hidden in a word's context, i.e. the context of a token. For more details cf. (3; 11; 10).",
                "cite_spans": [
                    {
                        "start": 791,
                        "end": 802,
                        "text": "(3; 11; 10)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The problem of high dimensionality, which would be computationally restricting, is usually solved by one of several methods: principal component analysis (PCA), singular value decomposition (SVD), or random projection (RP). Latent semantic analysis, also known as latent semantic indexing, is a special application of dimensionality reduction in which both SVD and PCA can be used. See (1; 2) for an overview and critical analysis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The classical approach to word context analysis is the vector space model, which simply uses the whole co-occurrence vectors when measuring word similarity. This approach also suffers from a problem similar to data sparseness: the similarity of words is based on word forms and therefore fails in cases where a synonym rather than a similar word form is used in the vector encoding (11; 10).",
                "cite_spans": [
                    {
                        "start": 380,
                        "end": 388,
                        "text": "(11; 10)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A major problem with the classical simple vector space model is the superficial nature of the information provided by mere co-occurrence frequency, which can only account for observed variables. One of the most popular approaches to word context analysis, latent semantic analysis (LSA), alleviates this limitation by creating a latent semantic space using SVD performed on a word-by-document matrix. Each entry w_ij in the matrix is the frequency of occurrence of word i in document j; thus, the whole document serves as a context. A document is, naturally, some meaningful portion of text. SVD then decomposes the original matrix into three matrices: a word-by-concept matrix, a concept-by-concept matrix and a concept-by-document matrix. The results produced by LSA are, however, difficult for humans to interpret (9), i.e. there is no way of explaining their meaning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Independent component analysis (ICA) (7) is a statistical method that takes into account higher-order statistical dependencies. It can be compared to PCA in the sense that both are related to factor analysis, but PCA uses only second-order statistics, assuming a Gaussian distribution, while ICA can only be performed on non-Gaussian data (6). A comparison with SVD on a word context analysis task is provided by (12).",
                "cite_spans": [
                    {
                        "start": 335,
                        "end": 338,
                        "text": "(6)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
            {
                "text": "ICA is capable of finding emergent linguistic knowledge without predefined categories, as shown in (4; 5) and elsewhere.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
            {
                "text": "As a method for feature extraction and dimensionality reduction, it provides results that are interpretable by human readers. A major advantage of ICA is that it looks for factors that are statistically independent and is therefore able to find meaningful representations of multivariate data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
            {
                "text": "ICA can be defined in matrix form as x = As, where s = (s_1, s_2, ..., s_n)^T represents the independent variables (components), x = (x_1, x_2, ..., x_n)^T represents the original data, and A is an n \u00d7 n square mixing matrix.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
            {
                "text": "Both the mixing matrix A and the independent components s are learned by an unsupervised process from the observed data x. For a more rigorous explanation see (7).",
                "cite_spans": [
                    {
                        "start": 151,
                        "end": 154,
                        "text": "(7)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
            {
                "text": "We have used the FastICA algorithm as implemented in the R language 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ICA",
                "sec_num": "2"
            },
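The paper uses the FastICA implementation available for R; purely as an illustration, the sketch below performs the analogous step with scikit-learn's FastICA in Python. The toy matrix, its dimensions, and the log(a_ij + 1) preprocessing shown here are assumptions standing in for the authors' actual data pipeline.

```python
# Sketch: extract independent components from a word-by-context matrix with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Hypothetical word-by-context co-occurrence counts (rows = word types).
X = np.log1p(rng.poisson(1.0, size=(500, 200)).astype(float))  # log(a_ij + 1)

ica = FastICA(n_components=100, random_state=0, max_iter=1000)
S = ica.fit_transform(X)   # word-by-component scores, shape (500, 100)
A = ica.mixing_            # mixing matrix, shape (200, 100)
print(S.shape, A.shape)
```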
            {
                "text": "The context matrix was constructed from words in the Sinica Corpus with a frequency higher than 150. This restriction yielded 5969 word types. We chose this limited lexicon to lower the complexity of the task. The whole corpus was stripped of everything but words whose word class tag started with N, V, A or D. This means that our data consisted of nouns (N, including pronouns), verbs (V), adjectives (A) and adverbs (D) 2.",
                "cite_spans": [
                    {
                        "start": 436,
                        "end": 437,
                        "text": "2",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
            {
                "text": "Then we collected co-occurrence statistics for all words from a window of 4 preceding and 4 following words, but only if these were within the same sentence. We defined a sentence simply as a string of words delimited by an ideographic full stop, comma, exclamation mark or question mark (\u3002, \uff0c, \uff01 and \uff1f). In cases where the context was shorter than 4 words, the remaining slots were filled with zeros indicating that no data was available.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
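A minimal sketch of the within-sentence, ±4-word co-occurrence counting described above. The tokenization (whitespace-split, pre-segmented text), the vocabulary filter, and all variable names are illustrative assumptions, not the authors' code.

```python
# Sketch: collect +-4-word co-occurrence counts, restricted to single sentences.
import re
from collections import defaultdict

WINDOW = 4
SENT_DELIM = re.compile("[\u3002\uff0c\uff01\uff1f]")  # ideographic . , ! ?

def cooccurrence_counts(lines, vocab):
    """Symmetric (word, context_word) -> count, collected within sentences only."""
    counts = defaultdict(int)
    for line in lines:
        for sentence in SENT_DELIM.split(line):
            words = [w for w in sentence.split() if w in vocab]
            for i, w in enumerate(words):
                lo, hi = max(0, i - WINDOW), min(len(words), i + WINDOW + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[(w, words[j])] += 1
    return counts
```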
            {
                "text": "We normalized the data by taking the logarithm of each data point a_ij in the context matrix. Since this is a sparse matrix and many data points are zero, one was added to each data point before taking the logarithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
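The add-one log normalization, sketched with NumPy on a toy array (the actual matrix in the paper is the co-occurrence matrix over the 5969 word types).

```python
# Sketch: each entry a_ij is replaced by log(a_ij + 1).
import numpy as np

counts = np.array([[0.0, 3.0, 150.0],
                   [1.0, 0.0,   7.0]])  # toy co-occurrence counts
normalized = np.log1p(counts)           # equivalent to np.log(counts + 1)
print(normalized)
```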
            {
                "text": "After the extraction of the independent components, we encoded the contexts of word tokens for each word type selected for analysis using these independent components. Thus we are able to provide a reliable encoding for words that is based on global properties. Note that there is no need to pursue the orthogonality of different word types that is sometimes required in context encoding. The similarities between different word types are based on the strength of the independent components for each word type, and therefore much better similarity measurements can be expected than one would get from the binary random encoding introduced in (8).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
            {
                "text": "We could experiment with several strategies for context matrix construction: different word classes in the context and different sizes of feature vectors. The context in our experiments is defined by the four words that precede and the four words that follow each keyword. We then study the feature similarities across different words. To aid the analysis, hierarchical clustering is used to determine the closeness of relation among feature vectors of a specified dimension. This step is intended to find the most reliable feature vector dimension for subsequent experiments. As mentioned before, the features can be traced back and their nature determined, i.e. they can be labelled.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
            {
                "text": "Due to time constraints, we predetermined the feature vector size beforehand. We extracted 100 and 1000 independent components and used them in two separate experiments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
            {
                "text": "Having determined the size of the feature vectors, we take the original word context of each word token and encode the context using these vectors. This means that each word in the context of a particular keyword is replaced by its respective feature vector, a vector of quantified relations to each of the independent components extracted by ICA from the global co-occurrence matrix.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
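A rough sketch of this token-context encoding: each of the up-to-eight context words is replaced by its ICA feature vector, with zero vectors for empty slots. The concatenated layout and the names feature_vectors and encode_context are assumptions for illustration.

```python
# Sketch: encode one token's context (4 preceding + 4 following words) with
# ICA feature vectors; empty slots are zero-padded as described in the paper.
import numpy as np

def encode_context(context_words, feature_vectors, n_components):
    """context_words: list of up to 8 words, with None marking an empty slot."""
    parts = []
    for w in context_words:
        if w is not None and w in feature_vectors:
            parts.append(feature_vectors[w])      # ICA feature vector for the word
        else:
            parts.append(np.zeros(n_components))  # zero padding for missing slots
    return np.concatenate(parts)                  # shape: (8 * n_components,)
```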
            {
                "text": "We then use maximum-linkage hierarchical clustering to find related words and, based on the features present in the vectors, determine the characteristics that provide clues to their word senses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data collection",
                "sec_num": "3"
            },
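The paper clusters with the maximum-linkage hierarchical algorithm from the Pycluster package; the sketch below substitutes SciPy's complete-linkage (i.e. maximum-linkage) implementation, which is an assumed equivalent setup rather than the authors' code.

```python
# Sketch: maximum-linkage (complete-linkage) hierarchical clustering of
# encoded token-context vectors, cut into two clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
token_vectors = rng.normal(size=(30, 100))       # hypothetical encoded contexts

Z = linkage(token_vectors, method="complete")    # maximum linkage
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)
```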
            {
                "text": "We ran two experiments, one with 100 independent components and the other with 1000 components. For the experiments we manually selected 9 words that we expected to be easier to analyze. We failed, however, to find Chinese words that would allow for such obvious sense distinctions as English plant, palm, bank, etc. Such words are typically used in word sense related tasks to test new algorithms. The failure to find words with similarly clear-cut sense distinctions might have influenced our initial results. The words we selected are (the number in brackets indicates the number of senses according to Chinese Wordnet) 3 ",
                "cite_spans": [
                    {
                        "start": 650,
                        "end": 651,
                        "text": "3",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "When the ICA algorithm retrieves the specified number of independent components, each of them can be labelled by creating a descending list of the words that are most responsive for that component (5; 4). Only the most responsive word could be assigned to each component as a label, but this way we would not be able to determine the characteristics of the components with sufficient clarity. As we will see, even listing several items from the top of the list of the most responsive words will not always provide a clear explanation of the nature of the component in question. This is due to the fact that the independent components are not yet very well understood, i.e. it is not yet entirely obvious how the components are created (5).",
                "cite_spans": [
                    {
                        "start": 742,
                        "end": 745,
                        "text": "(5)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 203,
                        "end": 209,
                        "text": "(5; 4)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Independent components",
                "sec_num": "4.1"
            },
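A small sketch of the labelling procedure described above: for each independent component, the words are sorted by their component scores in descending order and the top of the list is kept as the component's description. The word-by-component matrix S and the vocabulary list are assumed inputs.

```python
# Sketch: list the k most responsive words for each independent component.
import numpy as np

def top_words_per_component(S, vocabulary, k=20):
    """S: word-by-component score matrix, one row per word type in vocabulary."""
    labels = []
    for c in range(S.shape[1]):
        order = np.argsort(-S[:, c])[:k]           # strongest responses first
        labels.append([vocabulary[i] for i in order])
    return labels
```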
            {
                "text": "Below are a few example independent components and the labels assigned to each of them. We list up to the 20 most responsive words for each component to provide information for human judgment. These are examples from the 100 independent components experiment. For future research, an automatic way of determining the number of labels required to explain each independent component might perhaps be proposed using time series analysis, but for that, more research is needed to better understand the nature of independent components in order to justify such a step.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Independent components",
                "sec_num": "4.1"
            },
            {
                "text": "The first ten independent components can be seen in Table 1. As we can see, the independent components cannot be regarded as synsets as known from WordNet, since they clearly contain words from multiple classes. We can perhaps call them collocation sets, colsets, but this term will have to be revised based on subsequent research into the nature of independent components. Table 4.1 shows an example of how a particular word type is encoded. The independent components in this example are sorted by the most important features. We can see how the encoding in Table 4.1 contrasts with Table 4.1, which shows the ten least salient features for the word type yuyan \u8a9e\u8a00.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 360,
                        "end": 367,
                        "text": "Table 4",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 548,
                        "end": 580,
                        "text": "Table 4.1 contrasts with Table 4",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Independent components",
                "sec_num": "4.1"
            },
            {
                "text": "We used the maximum-linkage hierarchical clustering algorithm from the Pycluster package 4 to cluster word token contexts. The use of hierarchical clustering is motivated by the attempt to provide a gradual sense analysis in which subsenses could be identified within partial senses.",
                "cite_spans": [
                    {
                        "start": 75,
                        "end": 76,
                        "text": "4",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sense clustering",
                "sec_num": "4.2"
            },
            {
                "text": "Our goal in this paper is to observe whether ICA-based feature vectors can be efficiently used for word context encoding and subsequently for clustering. Clustering results were evaluated by a native speaker with linguistic knowledge, who labelled all the sentences according to Chinese Wordnet; the numbers of senses used in this paper were also determined this way. We then assigned a sense label to each cluster according to the most prevalent sense in the cluster. For example, the word fangui \u72af\u898f has two senses in Chinese Wordnet. We cut the tree produced by the hierarchical clustering algorithm into two clusters, and our expectation is that word tokens manually labelled as sense 1 will be in one of the clusters and word tokens labelled as sense 2 will be in the other. Naturally, some incorrect classifications can be expected as well, and therefore we assign the sense label according to the label most frequent in the particular cluster. If both clusters receive the same label, the sense induction has failed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sense clustering",
                "sec_num": "4.2"
            },
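A minimal sketch of the majority-sense labelling and failure criterion described above; the cluster ids and gold sense labels are assumed inputs.

```python
# Sketch: give each cluster its most frequent manually assigned sense; the
# induction counts as failed when both clusters end up with the same label.
from collections import Counter

def label_clusters(cluster_ids, gold_senses):
    by_cluster = {}
    for cid, sense in zip(cluster_ids, gold_senses):
        by_cluster.setdefault(cid, []).append(sense)
    majority = {cid: Counter(s).most_common(1)[0][0] for cid, s in by_cluster.items()}
    failed = len(set(majority.values())) < len(majority)
    return majority, failed

# Example: two clusters over tokens manually labelled with senses 0/1.
print(label_clusters([1, 1, 1, 2, 2], [0, 0, 1, 1, 1]))
```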
            {
                "text": "In this experiment we did not pursue the correct classification of all the words; we therefore leave the evaluation of those results out.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sense clustering",
                "sec_num": "4.2"
            },
            {
                "text": "For reference we include tables with the results for several words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sense clustering",
                "sec_num": "4.2"
            },
            {
                "text": "The major advantage of our approach is that it uses global characteristics of words, based on their co-occurrence with other words in the language, which are then applied to derive a local encoding of word contexts. Thus we retrieve reliable characteristics of a word's behaviour in the language and do not lose the word sense information, which allows us to analyze the semantic characteristics of similar word forms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "Our current results are not very satisfying. It can be observed from Table 8, however, that an increased number of independent components improves the sense induction considerably. We will pursue this track in our subsequent research. On the other hand, this result is not surprising. Considering the nature of the independent components, which are rather symbolic features similar to synonym sets, synsets, or rather collocation sets, colsets, it can be expected that a much larger number of these components would be required to encode semantic information.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "Using manually semantically tagged word tokens, we will try to automatically estimate the number of independent components sufficient to improve the precision of sense clustering.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future work",
                "sec_num": "6"
            },
            {
                "text": "Another approach we intend to try is to sum the feature vectors of all the context words and cluster the resulting vectors. This approach should emphasize the more important features in the given contexts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future work",
                "sec_num": "6"
            },
            {
                "text": "We will also do more careful preprocessing and apply dimensionality reduction (typically done with PCA) before running ICA, as has been done in some of the previous studies.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Future work",
                "sec_num": "6"
            },
            {
                "text": "1 http://www.stats.ox.ac.uk/ marchini/software.html 2 For a complete list see: http://wordsketch.ling.sinica.edu.tw/gigaword pos tags.html",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://cwn.ling.sinica.edu.tw",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://bonsai.ims.u-tokyo.ac.jp/ mdehoon/software/cluster/software.htm",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Advances in Independent Component Analysis with Applications to Data Mining",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Bingham",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Bingham. Advances in Independent Component Analysis with Applica- tions to Data Mining. PhD thesis, Helsinki University of Technology, 2003.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Random projection in dimensionality reduction: applications to image and text data",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Bingham",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mannila",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "KDD '01: Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
                "volume": "",
                "issue": "",
                "pages": "245--250",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Bingham and H. Mannila. Random projection in dimensionality reduc- tion: applications to image and text data. In KDD '01: Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 245-250, New York, NY, USA, 2001. ACM Press.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Word sense induction: Triplet-based clustering and automatic evaluation",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Bordag",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "11 th Conference of the European Chapter of the Association for Computational Linguistics: EACL 2006",
                "volume": "",
                "issue": "",
                "pages": "137--144",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Bordag. Word sense induction: Triplet-based clustering and automatic evaluation. In 11 th Conference of the European Chapter of the Association for Computational Linguistics: EACL 2006, pages 137-144, 2006.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Linguistic feature extraction using independent component analysis",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Honkela",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Hyv\u00e4rinen",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of IJCNN",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Honkela and A. Hyv\u00e4rinen. Linguistic feature extraction using indepen- dent component analysis. In Proc. of IJCNN 2004, 2004.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Emergence of linguistic features: Independent component analysis of context",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Honkela",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Hyv\u00e4rinen",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "V\u00e4yrynen",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of NCPW9",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Honkela, A. Hyv\u00e4rinen, and J. V\u00e4yrynen. Emergence of linguistic fea- tures: Independent component analysis of context. In A. C. et al., editor, Proceedings of NCPW9, 2005.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Independent component analysis",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Hyv\u00e4rinen",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Karhunen",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Oja",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Hyv\u00e4rinen, J. Karhunen, and E. Oja. Independent component analysis. Wiley, 2001.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Independent component analysis: Algorithms and applications",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Hyv\u00e4rinen",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Oja",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Neural networks",
                "volume": "13",
                "issue": "4",
                "pages": "411--430",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Hyv\u00e4rinen and E. Oja. Independent component analysis: Algorithms and applications. Neural networks, 13(4):411-430, 2001.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Dimensionality reduction by random mapping: Fast similarity computation for clustering",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Kaski",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of IJCNN'98, International Joint Conference on Neural Networks",
                "volume": "1",
                "issue": "",
                "pages": "413--418",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Kaski. Dimensionality reduction by random mapping: Fast similarity computation for clustering. In Proceedings of IJCNN'98, International Joint Conference on Neural Networks, volume 1, pages 413-418. IEEE Service Center, Piscataway, NJ, 1998.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Landauer",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Dumais",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Psychological Review",
                "volume": "104",
                "issue": "2",
                "pages": "211--240",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Landauer and S. Dumais. A solution to plato's problem: The latent seman- tic analysis theory of acquisition, induction, and representation of knowl- edge. Psychological Review, 104(2):211-240, 2001.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Fully automatic word sense induction by semantic clustering",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "B"
                        ],
                        "last": "Neill",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Master's thesis",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. B. Neill. Fully automatic word sense induction by semantic clustering. Master's thesis, Cambridge University, 2002.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A practical solution to the problem of automatic word sense induction",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Rapp",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "The Companion Volume to the Proceedings of 42st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "194--197",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Rapp. A practical solution to the problem of automatic word sense in- duction. In The Companion Volume to the Proceedings of 42st Annual Meeting of the Association for Computational Linguistics, pages 194-197, Barcelona, Spain, July 2004. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Comparison of independent component analysis and singular value decomposition in word context analysis",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "V\u00e4yrynen",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Honkela",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "AKRR'05",
                "volume": "",
                "issue": "",
                "pages": "135--140",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. V\u00e4yrynen and T. Honkela. Comparison of independent component analy- sis and singular value decomposition in word context analysis. In AKRR'05, pages 135-140.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": ": \u5c71\u982d (3), \u540d\u724c(2), \u72af\u898f(2), \u7d04\u8ac7(2), \u63aa\u8fad(2), \u5805\u786c(2), \u5bcc \u6709(2), \u5c6c\u4e0b(2), \u5929\u6c23(3).",
                "type_str": "figure",
                "num": null,
                "uris": null
            },
            "TABREF0": {
                "content": "<table><tr><td>Label</td><td>IC Responsive words (descending order)</td></tr><tr><td>TIME</td><td>0</td></tr></table>",
                "num": null,
                "text": "\u6642\u9593 \u5e74 \u6708 \u5c0f\u6642 \u5929 \u6bb5 \u534a \u7d93\u904e \u671f\u9593 \u9031 \u5206\u9418 \u5f8c \u661f \u671f \u4e45 \u6301\u7e8c \u5167 \u65e5 \u4e4b\u5f8c \u5de5\u4f5c \u7d50\u675f TIME 1 \u4e09\u5341 \u4e8c\u5341 \u4e94\u5341 \u4e00\u767e \u56db\u5341 \u5341 \u516c\u5c3a \u516c\u91cc \u6b72 \u5341\u4e94 \u4ee5\u4e0a \u516d\u5341 \u8d85\u904e \u7d04 \u5927\u7d04 \u5de6\u53f3 \uff0e \u5206\u9418 \u5341\u4e8c \u516b\u5341 FAMILY 2 \u5abd\u5abd \u6bcd\u89aa \u5b69\u5b50 \u7238\u7238 \u7236\u89aa \u5973\u5152 \u7236\u6bcd \u6b72 \u5152\u5b50 \u5bb6 \u5c0f\u5b69 \u5f1f\u5f1f \u56de\u5bb6 \u59b9\u59b9 \u54e5\u54e5 \u5e36 \u592a\u592a \u7167\u9867 \u56de\u4f86 \u5bb6 \u6b66\u5668 \u79d1\u5a01\u7279 \u6ce2\u65af\u7063 \u884c\u52d5 \u806f\u5408\u570b \u7f8e \u7f8e\u8ecd \u4ee5 \u8272\u5217 \u570b\u9632\u90e8 \u6d77\u73ca \u4e2d\u6771 WARNING 8 \u6ce8\u610f \u61c9 \u4e0d\u8981 \u7279\u5225 \u907f\u514d \u91cd\u8981 \u61c9\u8a72 \u7d50\u679c \u63d0\u9192 \u6700 \u597d \u8981 \u9ede \u9078\u64c7 \u6e96\u5099 \u5b89\u5168 \u5c0f\u5fc3 \u5065\u5eb7 \u4fdd\u6301 \u98f2\u98df \u547c \u7c72 COMPETITION 9 \u9078\u624b \u6bd4\u8cfd \u51a0\u8ecd \u904b\u52d5 \u5c46 \u4e2d\u83ef \u9326\u6a19\u8cfd \u5973\u5b50 \u4e9e\u904b \u53c3\u52a0 \u4e16\u754c \u5354\u6703 \u91d1\u724c \u9ad4\u80b2 \u7537\u5b50 \u7403\u54e1 \u570b \u6211\u570b \u570b \u969b \u6559\u7df4 PRODUCTION 10 \u751f\u7522 \u6280\u8853 \u5de5\u696d \u88fd\u9020 \u8a2d\u5099 \u5de5\u5ee0 \u7522\u696d \u79d1\u6280 \u7522\u54c1 \u6a5f\u68b0 \u6750\u6599 \u96fb\u5b50 \u5ee0 \u7814\u767c \u5316\u5b78 \u539f\u6599 \u77e5\u8b58 \u79d1\u5b78 \u52a0 \u5de5 \u8fb2\u696d ECONOMY 11 \u5143 \u7d93\u8cbb \u8cbb\u7528 \u88dc\u52a9 \u9810\u7b97 \u7b46 \u9322 \u7f8e\u5143 \u8ca0\u64d4 \u91d1\u984d \u652f \u51fa \u6536\u5165 \u652f\u4ed8 \u65b0\u53f0\u5e63 \u6210\u672c \u8cb8\u6b3e \u6bcf \u8cc7\u91d1 \u7d66 \u82b1\u8cbb RESEARCH 12 \u8cc7\u6599 \u8abf\u67e5 \u5831\u544a \u7d50\u679c \u7d71\u8a08 \u986f\u793a \u5206\u6790 \u505a \u7814\u7a76 \u4efd \u9032\u5165 \u4f9d\u64da \u6307\u51fa \u6578\u64da \u9810\u6e2c \u5c08\u5bb6 \u5730\u9707 \u767c\u73fe \u8a55\u4f30 \u6b63\u78ba",
                "type_str": "table",
                "html": null
            },
            "TABREF1": {
                "content": "<table/>",
                "num": null,
                "text": "Independent components: 100 IC set, first 10 IC Feature strength Responsive words (descending order) 7.55055952072 \u7528 \u5b57 \u807d \u8a9e\u8a00 \u9996 \u82f1\u6587 \u53e5 \u5531 \u97f3\u6a02 \u8a5e \u8868\u9054 \u6b4c \u5fc3 \u570b\u8a9e \u7372\u5f97 \u5beb \u8072\u97f3 \u4f7f\u7528 \u8a69 \u6b4c\u66f2 6.93665552139 \u7279\u8272 \u5177\u6709 \u5177 \u539f\u4f4f\u6c11 \u7279\u6b8a \u6587\u5316 \u7368\u7279 \u8a9e\u8a00 \u98a8 \u683c \u8272\u5f69 \u8c50\u5bcc \u80cc\u666f \u4e0d\u540c \u7279\u6027 \u8868\u73fe \u5f88\u591a \u7576\u5730 \u50b3\u7d71 \u6b77\u53f2 \u6700 6.20834875107 \u6559\u5b78 \u82f1\u8a9e \u570b\u5c0f \u570b\u4e2d \u6559\u80b2 \u5b78\u7fd2 \u8001\u5e2b \u8ab2\u7a0b \u6559 \u5e2b \u5c0f\u5b78 \u9ad8\u4e2d \u5b78\u6821 \u5b69\u5b50 \u5c0f\u670b\u53cb \u5b78\u751f \u5bb6\u9577 \u6559 \u6750 \u6578\u5b78 \u6559\u79d1\u66f8 \u82f1\u6587 3.42819428444 \u5979 \u5f97 \u6211 \u4ed6 \u5feb \u5b69\u5b50 \u73a9 \u5403 \u6df1 \u614b\u5ea6 \u5168 \u8d77\u4f86 \u7236 \u89aa \u7236\u6bcd \u8dd1 \u5bb6\u5ead \u6bcd\u89aa \u5171\u540c \u76f8\u7576 \u4e00\u8d77 3.34706568718 \u54c1\u8cea \u63d0\u9ad8 \u9ad8 \u63d0\u5347 \u6c34\u6e96 \u6210\u672c \u964d\u4f4e \u6548\u7387 \u9054\u5230 \u4f4e \u63d0\u6607 \u6539\u5584 \u5b89\u5168 \u6574\u9ad4 \u4fdd\u969c \u670d\u52d9 \u904e \u570b\u6c11 \u4eab \u53d7 \u8003\u91cf",
                "type_str": "table",
                "html": null
            },
            "TABREF2": {
                "content": "<table><tr><td>Feature strength</td><td>Responsive words (descending order)</td></tr><tr><td>0.157807931304</td><td/></tr></table>",
                "num": null,
                "text": "Partial example of encoded word \u8a9e\u8a00 (five most salient features) \u7533\u8acb \u898f\u5b9a \u6628\u5929 \u4e0d\u5f97 \u4e0b\u5348 \u53d6\u5f97 \u6cd5\u9662 \u4efb\u4f55 \u8fa6 \u7406 \u8b49\u660e \u884c\u70ba \u8a31 \u540c\u610f \u9055\u53cd \u4e0a\u5348 \u662f\u5426 \u63a5\u53d7 \u6a5f \u95dc \u591a \u51cc\u6668 0.157353967428 \u4e86\u89e3 \u4e0d\u540c \u89c0\u5bdf \u53bb \u601d\u8003 \u770b \u5206\u6790 \u91cd\u65b0 \u8abf\u6574 \u770b\u770b \u6df1\u5165 \u8abf\u67e5 \u91cd\u8981 \u8f03 \u77ad\u89e3 \u9762\u5c0d \u4e00\u4e0b \u63a2\u8a0e \u5f9e \u9ad4\u6703 0.152415782213 \u8d77 \u4e5d\u6708 \u4e09\u6708 \u4e03\u6708 \u516d\u6708 \u4e00\u65e5 \u4e94\u6708 \u56db\u6708 \u4e8c\u6708 \u5341\u4e8c\u6708 \u81ea \u5341\u6708 \u6c11\u570b \u516b\u6708 \u81f3 \u5341\u4e00\u6708 \u5e95 \u4e00\u6708 \u6b62 \u5341\u4e94\u65e5 0.0953392237425 \u5f88 \u6700 \u975e\u5e38 \u76f8\u7576 \u8f03 \u592a \u6bd4\u8f03 \u66f4 \u5341\u5206 \u6108 \u6bd4 \u8d8a \u6975 \u5f97 \u90a3\u9ebc \u9019\u9ebc \u4e00\u9ede \u6108\u4f86\u6108 \u8d8a\u4f86\u8d8a \u751a 0.0753756538033 \u9078\u624b \u6bd4\u8cfd \u51a0\u8ecd \u904b\u52d5 \u5c46 \u4e2d\u83ef \u9326\u6a19\u8cfd \u5973\u5b50 \u4e9e \u904b \u53c3\u52a0 \u4e16\u754c \u5354\u6703 \u91d1\u724c \u9ad4\u80b2 \u7537\u5b50 \u7403\u54e1 \u570b \u6211 \u570b \u570b\u969b \u6559\u7df4",
                "type_str": "table",
                "html": null
            },
            "TABREF3": {
                "content": "<table><tr><td/><td>\u72af\u898f IC 100</td><td/><td/><td>\u72af\u898f IC 1000</td><td/></tr><tr><td colspan=\"3\">Cluster Sense Count</td><td colspan=\"3\">Cluster Sense Count</td></tr><tr><td>a</td><td>0 1</td><td>5 28</td><td>a</td><td>0 1</td><td>1 0</td></tr><tr><td>b</td><td>0 1</td><td>3 1</td><td>b</td><td>0 1</td><td>6 30</td></tr></table>",
                "num": null,
                "text": "Partial example of encoded word \u8a9e\u8a00 (five least salient features)",
                "type_str": "table",
                "html": null
            },
            "TABREF4": {
                "content": "<table><tr><td/><td>\u63aa\u8fad IC 100</td><td/><td/><td>\u63aa\u8fad IC 1000</td><td/></tr><tr><td colspan=\"3\">Cluster Sense Count</td><td colspan=\"3\">Cluster Sense Count</td></tr><tr><td>a</td><td>0 1</td><td>9 1</td><td>a</td><td>0 1</td><td>0 1</td></tr><tr><td>b</td><td>0 1</td><td>1 8</td><td>b</td><td>0 1</td><td>9 9</td></tr></table>",
                "num": null,
                "text": "Results for word \u72af\u898f",
                "type_str": "table",
                "html": null
            },
            "TABREF6": {
                "content": "<table><tr><td>\u5c71\u982d IC 100</td></tr><tr><td>Cluster Sense Count</td></tr></table>",
                "num": null,
                "text": "Results for word \u63aa\u8fad",
                "type_str": "table",
                "html": null
            },
            "TABREF7": {
                "content": "<table><tr><td colspan=\"3\">Word IC100 IC1000</td></tr><tr><td>\u5c71\u982d</td><td>0</td><td>0</td></tr><tr><td>\u540d\u724c</td><td>0</td><td>0</td></tr><tr><td>\u72af\u898f</td><td>1</td><td>1</td></tr><tr><td>\u7d04\u8ac7</td><td>0</td><td>1</td></tr><tr><td>\u63aa\u8fad</td><td>1</td><td>1</td></tr><tr><td>\u5805\u786c</td><td>0</td><td>1</td></tr><tr><td>\u5bcc\u6709</td><td>0</td><td>0</td></tr><tr><td>\u5c6c\u4e0b</td><td>0</td><td>1</td></tr><tr><td>\u5929\u6c23</td><td>0</td><td>0</td></tr></table>",
                "num": null,
                "text": "Results for word \u63aa\u8fad",
                "type_str": "table",
                "html": null
            },
            "TABREF8": {
                "content": "<table/>",
                "num": null,
                "text": "Overall results",
                "type_str": "table",
                "html": null
            }
        }
    }
}