Dataset columns: Titles (string, 6-220 characters), Abstracts (string, 37-3.26k characters), Years (int64), Categories (string, 1 class)
Challenges for Distributional Compositional Semantics
This paper summarises the current state of the art in the study of compositionality in distributional semantics, and the major challenges for this area. We single out generalised quantifiers and intensional semantics as areas on which to focus attention for the development of the theory. Once suitable theories have been developed, algorithms will be needed to apply the theory to tasks. Evaluation is a major problem; we single out application to recognising textual entailment and machine translation for this purpose.
2012
Computation and Language
Distinct word length frequencies: distributions and symbol entropies
The distribution of frequency counts of distinct words by length in a language's vocabulary will be analyzed using two methods. The first will look at the empirical distributions of several languages and derive a distribution that reasonably explains the number of distinct words as a function of length. We will be able to derive the frequency count, mean word length, and variance of word length based on the marginal probability of letters and spaces. The second, based on information theory, will demonstrate that the conditional entropies can also be used to estimate the frequency of distinct words of a given length in a language. In addition, it will be shown how these techniques can also be applied to estimate higher-order entropies using vocabulary word length.
2012
Computation and Language
Clustering based approach extracting collocations
The following study presents a collocation extraction approach based on a clustering technique. The study uses a combination of several classical measures that cover all aspects of a given corpus, and then suggests separating the bigrams found in the corpus into several disjoint groups according to the probability that they contain collocations. This makes it possible to exclude the groups in which collocations are very unlikely, and thus to reduce the search space in a meaningful way.
2012
Computation and Language
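The abstract above combines several "classical measures" for bigram scoring without naming them. As a minimal, hedged illustration of the kind of association measure typically used in collocation extraction, the sketch below computes pointwise mutual information (PMI) for adjacent bigrams in a toy corpus; the corpus, tokenisation, and any threshold are illustrative assumptions, not details from the paper.

```python
import math
from collections import Counter

def pmi_scores(tokens):
    """Score each adjacent bigram with pointwise mutual information (PMI)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    scores = {}
    for (w1, w2), c in bigrams.items():
        p_bigram = c / n_bi
        p_w1 = unigrams[w1] / n_uni
        p_w2 = unigrams[w2] / n_uni
        scores[(w1, w2)] = math.log2(p_bigram / (p_w1 * p_w2))
    return scores

# Toy corpus; real input would be the tokenised study corpus.
tokens = "the new york times reported that new york is large".split()
for bigram, score in sorted(pmi_scores(tokens).items(), key=lambda x: -x[1]):
    print(bigram, round(score, 2))
```

Bigrams such as ("new", "york") score high because they co-occur more often than their unigram frequencies predict; grouping bigrams by such scores is one way the "disjoint groups" described above could be formed.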
Automatic Segmentation of Manipuri (Meiteilon) Word into Syllabic Units
The automatic segmentation of Manipuri (or Meiteilon) words into syllabic units is demonstrated in this paper. Manipuri is a scheduled Indian language of Tibeto-Burman origin and is highly agglutinative. The language uses two scripts, the Bengali script and Meitei Mayek; the present work is based on the latter. An algorithm is designed to identify mainly the syllables of words of Manipuri origin. The algorithm achieves a Recall of 74.77, a Precision of 91.21 and an F-Score of 82.18, which is a reasonable result for a first attempt of this kind for the language.
2012
Computation and Language
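As a quick consistency check, the F-Score quoted in the abstract above follows from the standard harmonic-mean formula applied to the reported Precision and Recall:

```latex
F = \frac{2PR}{P + R} = \frac{2 \times 91.21 \times 74.77}{91.21 + 74.77} \approx 82.18
```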
Frame Interpretation and Validation in a Open Domain Dialogue System
Our goal in this paper is to establish a means for a dialogue platform to cope with open domains, considering the possible interaction between an embodied agent and humans. To this end we present an algorithm capable of processing natural language utterances and validating them against knowledge structures in an intelligent agent's mind. Our algorithm leverages dialogue techniques in order to resolve ambiguities and acquire knowledge about unknown entities.
2012
Computation and Language
Appropriate Nouns with Obligatory Modifiers
The notion of appropriate sequence as introduced by Z. Harris provides a powerful syntactic way of analysing the detailed meaning of various sentences, including ambiguous ones. In an adjectival sentence like 'The leather was yellow', the introduction of an appropriate noun, here 'colour', specifies which quality the adjective describes. In some other adjectival sentences with an appropriate noun, that noun plays the same part as 'colour' and seems to be relevant to the description of the adjective. These appropriate nouns can usually be used in elementary sentences like 'The leather had some colour', but in many cases they have a more or less obligatory modifier. For example, you can hardly mention that an object has a colour without qualifying that colour at all. About 300 French nouns are appropriate in at least one adjectival sentence and have an obligatory modifier. They enter into a number of sentence structures related by several syntactic transformations. The appropriateness of the noun and the fact that the modifier is obligatory are reflected in these transformations. The description of these syntactic phenomena provides a basis for a classification of these nouns. It also concerns the lexical properties of thousands of predicative adjectives, and in particular the relations between the sentence without the noun: 'The leather was yellow', and the adjectival sentence with the noun: 'The colour of the leather was yellow'.
1995
Computation and Language
A prototype for projecting HPSG syntactic lexica towards LMF
The comparative evaluation of Arabic HPSG grammar lexica requires a deep study of their linguistic coverage. The complexity of this task results mainly from the heterogeneity of the descriptive components within those lexica (underlying linguistic resources and different data categories, for example). It is therefore essential to define more homogeneous representations, which in turn will enable us to compare them and eventually merge them. In this context, we present a method for comparing HPSG lexica based on a rule system. This method is implemented within a prototype for the projection from Arabic HPSG to a normalised pivot language compliant with LMF (ISO 24613 - Lexical Markup Framework) and serialised using a TEI (Text Encoding Initiative) based representation. The design of this system is based on an initial study of the HPSG formalism looking at its adequacy for the representation of Arabic, and from this, we identify the appropriate feature structures corresponding to each Arabic lexical category and their possible LMF counterparts.
2012
Computation and Language
FST Based Morphological Analyzer for Hindi Language
Hindi being a highly inflectional language, an FST (Finite State Transducer) based approach is the most efficient for developing a morphological analyzer for it. The work presented in this paper uses the SFST (Stuttgart Finite State Transducer) tool for generating the FST. A lexicon of root words is created, and rules are then added for generating inflectional and derivational words from these root words. The morphological analyzer developed was used in a Part-Of-Speech (POS) tagger based on the Stanford POS Tagger. The system was first trained using a manually tagged corpus, and the MAXENT (Maximum Entropy) approach of the Stanford POS tagger was then used for tagging input sentences. The morphological analyzer gives approximately 97% correct results. The POS tagger gives an accuracy of approximately 87% for sentences whose words are known to the trained model file, and 80% accuracy for sentences containing words unknown to the trained model file.
2012
Computation and Language
Adaptation of pedagogical resources description standard (LOM) with the specificity of Arabic language
In this article we focus first on the principles of pedagogical indexing and the characteristics of the Arabic language, and second on the possibility of adapting the standard used for describing learning resources (the LOM and its Application Profiles) to learning conditions such as students' educational levels and levels of understanding, ... and to the educational context, taking into account representative elements of the text, text length, ... In particular, we highlight the specificity of the Arabic language, a complex language characterized by its inflection, its vowelization and its agglutination.
2012
Computation and Language
A Method for Selecting Noun Sense using Co-occurrence Relation in English-Korean Translation
Sense analysis is still a critical problem in machine translation systems, especially for pairs such as English-Korean, in which the syntactic differences between the source and target languages are very great. We suggest a method for selecting noun senses using contextual features in English-Korean translation.
2012
Computation and Language
More than Word Frequencies: Authorship Attribution via Natural Frequency Zoned Word Distribution Analysis
With the increasing popularity and availability of digital text data, the authorship of digital texts cannot be taken for granted, due to the ease of copying and parsing. This paper presents a new text style analysis called natural frequency zoned word distribution analysis (NFZ-WDA), and then a basic authorship attribution scheme and an open authorship attribution scheme for digital texts based on this analysis. NFZ-WDA is based on the observation that all authors leave distinct intrinsic word usage traces on the texts they write, and that these intrinsic styles can be identified and employed to analyze authorship. The intrinsic word usage styles can be estimated through the analysis of word distribution within a text, which goes beyond ordinary word frequency analysis and can be expressed as: which groups of words are used in the text; how frequently each group of words occurs; and how the occurrences of each group of words are distributed in the text. The basic authorship attribution scheme and the open authorship attribution scheme then provide solutions for the closed and open authorship attribution problems, respectively. Through analysis and extensive experimental studies, this paper demonstrates the efficiency of the proposed method for authorship attribution.
2012
Computation and Language
Recent Technological Advances in Natural Language Processing and Artificial Intelligence
Recent advances in computer technology have permitted scientists to implement and test algorithms that had been known for quite some time (or not) but which were computationally expensive. Two such projects are IBM's Jeopardy system, part of its DeepQA project [1], and Wolfram's WolframAlpha [2]. Both implement natural language processing (another goal of AI scientists) and try to answer questions as asked by the user. Though the goals of the two projects are similar, each has a different procedure at its core. In the following sections, the mechanism and history of IBM's Jeopardy system and WolframAlpha are explained, followed by the implications of these projects for realizing Ray Kurzweil's [3] dream of passing the Turing test by 2029. A recipe for taking the above projects to a new level is also explained.
2012
Computation and Language
Introduction of the weight edition errors in the Levenshtein distance
In this paper, we present a new approach dedicated to correcting spelling errors in the Arabic language. The approach corrects typographical errors such as insertion, deletion, and permutation. Our method is inspired by the Levenshtein algorithm, and allows a finer and better ordering of correction candidates than the standard Levenshtein distance. The results obtained are very satisfactory and encouraging, which shows the interest of our new approach.
2012
Computation and Language
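The paper above does not publish its weighting scheme. The sketch below is a minimal, hedged illustration of a Levenshtein-style edit distance extended with adjacent transposition (permutation) and per-operation weights; the specific weight values and the example strings are illustrative assumptions only.

```python
def weighted_edit_distance(s, t, w_ins=1.0, w_del=1.0, w_sub=1.0, w_swap=0.5):
    """Damerau-Levenshtein-style distance with per-operation weights.

    w_ins / w_del / w_sub / w_swap are illustrative; a real spelling
    corrector would tune them on error data.
    """
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + w_del
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # deletion
                          d[i][j - 1] + w_ins,      # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
            if (i > 1 and j > 1 and s[i - 1] == t[j - 2]
                    and s[i - 2] == t[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + w_swap)  # transposition
    return d[m][n]

print(weighted_edit_distance("كتاب", "كتبا"))  # adjacent swap costs 0.5 here
```

Giving transpositions a lower weight than two independent substitutions is one way a weighted scheme can rank permutation-type misspellings closer to the intended word.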
Average word length dynamics as indicator of cultural changes in society
The dynamics of average word length in Russian and English are analysed in this article. Words belonging to the diachronic text corpus Google Books Ngram and dating back to the last two centuries are studied. It was found that average word length increased slightly in the 19th century, grew rapidly through most of the 20th century, and then started decreasing over the period from the end of the 20th century to the beginning of the 21st century. The words that contributed most to the increase or decrease of average word length were identified, with content words and function words analysed separately. Long content words contribute most to average word length; as shown, these words reflect the main tendencies of social development and are therefore used frequently. Changes in the frequency of personal pronouns also contribute significantly to changes in average word length. Other parameters connected with average word length were also analysed.
2015
Computation and Language
Authorship Identification in Bengali Literature: a Comparative Analysis
Stylometry is the study of the unique linguistic styles and writing behaviors of individuals. It belongs to the core tasks of text categorization, such as authorship identification and plagiarism detection. Though a reasonable number of studies have been conducted for English, no major work has been done so far in Bengali. In this work, we present a demonstration of authorship identification for documents written in Bengali. We adopt a set of fine-grained stylistic features for the analysis of the text and use them to develop two different models: a statistical similarity model consisting of three measures and their combination, and a machine learning model with Decision Tree, Neural Network and SVM. Experimental results show that SVM outperforms the other state-of-the-art methods under 10-fold cross-validation. We also validate the relative importance of each stylistic feature, showing that some of them remain consistently significant in every model used in this experiment.
2012
Computation and Language
Robopinion: Opinion Mining Framework Inspired by Autonomous Robot Navigation
Data association methods are used by autonomous robots to find matches between the current landmarks and a new set of observed features. We seek a framework for opinion mining that benefits from advancements in autonomous robot navigation, in both research and development.
2012
Computation and Language
Input Scheme for Hindi Using Phonetic Mapping
Written communication on computers requires knowing how to enter text in the desired language. Most people do not use any language besides English for this, which creates a barrier. To resolve this issue we have developed a scheme to input text in Hindi using phonetic mapping. Using this scheme we generate intermediate code strings and match them with the pronunciations of the input text. Our system shows significant success over other available input systems.
2012
Computation and Language
Evaluation of Computational Grammar Formalisms for Indian Languages
Natural language parsing has been the most prominent research area since the genesis of Natural Language Processing. Probabilistic parsers are being developed to make the process of parser development easier, more accurate and faster. In the Indian context, identifying which computational grammar formalism to use is still a question that needs to be answered. In this paper we focus on this problem and analyze different formalisms for Indian languages.
2012
Computation and Language
Identification of Fertile Translations in Medical Comparable Corpora: a Morpho-Compositional Approach
This paper defines a method for bilingual lexicon extraction in the biomedical domain from comparable corpora. The method is based on compositional translation and exploits morpheme-level translation equivalences. It can generate translations for a large variety of morphologically constructed words and can also generate 'fertile' translations. We show that fertile translations increase the overall quality of the extracted lexicon for English-to-French translation.
2012
Computation and Language
Stemmer for Serbian language
In linguistic morphology and information retrieval, stemming is the process of reducing inflected (or sometimes derived) words to their stem, base or root form, generally a written word form. In this work, a suffix-stripping stemmer for Serbian, one of the highly inflectional languages, is presented.
2012
Computation and Language
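No rules are given in the abstract above. As a hedged illustration of the general suffix-stripping idea, the sketch below removes the longest matching suffix from a small, invented suffix list; the suffixes shown are placeholders, not the actual Serbian rules from the paper.

```python
# Placeholder suffix list; the real stemmer uses Serbian-specific rules.
SUFFIXES = sorted(["ovima", "ama", "ima", "om", "em", "a", "e", "i", "o", "u"],
                  key=len, reverse=True)

def stem(word, min_stem_len=3):
    """Strip the longest listed suffix, keeping at least `min_stem_len` characters."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
            return word[: -len(suffix)]
    return word

print(stem("knjigama"))  # -> "knjig" with this toy suffix list
```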
Natural Language Processing - A Survey
The utility and power of Natural Language Processing (NLP) seem destined to change our technological society in profound and fundamental ways. However, there are, to date, few accessible descriptions of the science of NLP written for a popular audience, or even for an audience of intelligent but uninitiated scientists. This paper aims to provide just such an overview. In short, the objective of this article is to describe the purpose, procedures and practical applications of NLP in a clear, balanced, and readable way. We examine the most recent literature describing the methods and processes of NLP, analyze some of the challenges that researchers face, and briefly survey some of the current and future applications of this science to IT research in general.
2012
Computation and Language
A Linguistic Model for Terminology Extraction based Conditional Random Fields
In this paper, we show the possibility of using a linear Conditional Random Field (CRF) model for terminology extraction from a specialized text corpus.
2014
Computation and Language
Detecting multiword phrases in mathematical text corpora
We present an approach for detecting multiword phrases in mathematical text corpora. The method used is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo, which allows words to be identified by means of previously defined dictionaries for specific word classes such as adjectives, personal names or nouns. The detection of multiword groups is done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
2012
Computation and Language
Quick Summary
Quick Summary is an innovative implementation of an automatic document summarizer that takes a document in English as input and evaluates each sentence. The scanner, or evaluator, determines criteria based on each sentence's grammatical structure and place in the paragraph. The program then asks the user to specify the number of sentences the person wishes to highlight. For example, should the user ask for the three most important sentences, it would highlight the first and most important sentence in green; commonly this is the sentence containing the conclusion. Quick Summary then finds the second most important sentence, usually called a satellite, and highlights it in yellow; this is usually the topic sentence. The program then finds the third most important sentence and highlights it in red. This technology is useful in a society of information overload, when a person typically receives 42 emails a day (Microsoft). The paper is also a candid look at the difficulty that machine learning has with textual translation, and it discusses how to overcome the obstacles that historically prevented progress. The paper proposes mathematical meta-data criteria that justify a sentence's place of importance. Like tools for the study of relational symmetry in bioinformatics, this tool seeks to classify words with greater clarity. "Survey Finds Workers Average Only Three Productive Days per Week." Microsoft News Center. Microsoft. Web. 31 Mar. 2012.
2012
Computation and Language
Inference of Fine-grained Attributes of Bengali Corpus for Stylometry Detection
Stylometry, the science of inferring characteristics of an author from the characteristics of documents written by that author, is a problem with a long history and belongs to the core tasks of text categorization, which involve authorship identification, plagiarism detection, forensic investigation, computer security, copyright and estate disputes, etc. In this work, we present a strategy for stylometry detection of documents written in Bengali. We adopt a set of fine-grained attribute features with a set of lexical markers for the analysis of the text and use three semi-supervised measures for making decisions. Finally, a majority voting approach is taken for the final classification. The system is fully automatic and language-independent. Evaluation results of our attempt at Bengali authors' stylometry detection show reasonably promising accuracy in comparison to the baseline model.
2011
Computation and Language
Opinion Mining for Relating Subjective Expressions and Annual Earnings in US Financial Statements
Financial statements contain quantitative information and managers' subjective evaluations of a firm's financial status; we use information released in U.S. 10-K filings. Both qualitative and quantitative appraisals are crucial for quality financial decisions. To extract such opinionated statements from the reports, we built tagging models based on conditional random field (CRF) techniques, considering a variety of combinations of linguistic factors including morphology, orthography, predicate-argument structure, syntax, and simple semantics. Our results show that the CRF models are reasonably effective at finding opinion holders in experiments when we adopted the popular MPQA corpus for training and testing. The contribution of our paper is to identify opinion patterns in multiword expression (MWE) form rather than in single-word form. We find that the managers of corporations attempt to use more optimistic words to obfuscate negative financial performance and to accentuate positive financial performance. Our results also show that decreasing earnings were often accompanied by ambiguous and mild statements in the reporting year, whereas increasing earnings were stated in an assertive and positive way.
2012
Computation and Language
Learning Attitudes and Attributes from Multi-Aspect Reviews
The majority of online reviews consist of plain-text feedback together with a single numeric score. However, there are multiple dimensions to products and opinions, and understanding the `aspects' that contribute to users' ratings may help us to better understand their individual preferences. For example, a user's impression of an audiobook presumably depends on aspects such as the story and the narrator, and knowing their opinions on these aspects may help us to recommend better products. In this paper, we build models for rating systems in which such dimensions are explicit, in the sense that users leave separate ratings for each aspect of a product. By introducing new corpora consisting of five million reviews, rated with between three and six aspects, we evaluate our models on three prediction tasks: First, we use our model to uncover which parts of a review discuss which of the rated aspects. Second, we use our model to summarize reviews, which for us means finding the sentences that best explain a user's rating. Finally, since aspect ratings are optional in many of the datasets we consider, we use our model to recover those ratings that are missing from a user's evaluation. Our model matches state-of-the-art approaches on existing small-scale datasets, while scaling to the real-world datasets we introduce. Moreover, our model is able to `disentangle' content and sentiment words: we automatically learn content words that are indicative of a particular aspect as well as the aspect-specific sentiment words that are indicative of a particular rating.
2012
Computation and Language
Gender identity and lexical variation in social media
We present a study of the relationship between gender, linguistic style, and social networks, using a novel corpus of 14,000 Twitter users. Prior quantitative work on gender often treats this social variable as a female/male binary; we argue for a more nuanced approach. By clustering Twitter users, we find a natural decomposition of the dataset into various styles and topical interests. Many clusters have strong gender orientations, but their use of linguistic resources sometimes directly conflicts with the population-level language statistics. We view these clusters as a more accurate reflection of the multifaceted nature of gendered language styles. Previous corpus-based work has also had little to say about individuals whose linguistic styles defy population-level gender patterns. To identify such individuals, we train a statistical classifier, and measure the classifier confidence for each individual in the dataset. Examining individuals whose language does not match the classifier's model for their gender, we find that they have social networks that include significantly fewer same-gender social connections and that, in general, social network homophily is correlated with the use of same-gender language markers. Pairing computational methods and social theory thus offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms.
2014
Computation and Language
Semantic Understanding of Professional Soccer Commentaries
This paper presents a novel approach to the problem of semantic parsing via learning the correspondences between complex sentences and rich sets of events. Our main intuition is that correct correspondences tend to occur more frequently. Our model benefits from a discriminative notion of similarity to learn the correspondence between a sentence and an event, and from a ranking machinery that scores the popularity of each correspondence. Our method can discover a group of events (called macro-events) that best describes a sentence. We evaluate our method on our novel dataset of professional soccer commentaries. The empirical results show that our method significantly outperforms the state-of-the-art.
2012
Computation and Language
Diffusion of Lexical Change in Social Media
Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity -- especially with regard to race -- plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified "netspeak" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.
2014
Computation and Language
The origin of Mayan languages from Formosan language group of Austronesian
Basic body-part names (BBPNs) were defined as the body-part names in the Swadesh basic 200-word list. Non-Mayan cognates of Mayan (MY) BBPNs were extensively searched for by comparison with non-MY vocabulary, including ca. 1300 basic words of 82 AN languages listed by Tryon (1985), etc. The cognates (CGs) thus found in non-MY are listed in Table 1, classified by the language groups to which the most similar cognates (MSCs) of MY BBPNs belong. The CGs of MY are classified into 23 mutually unrelated CG-items, of which 17.5 CG-items have their MSCs in Austronesian (AN), giving its closest similarity score (CSS), CSS(AN) = 17.5, which consists of 10.33 MSCs in Formosan, 1.83 MSCs in Western Malayo-Polynesian (W.MP), 0.33 in Central MP, 0.0 in SHWNG, and 5.0 in Oceanic [i.e., CSS(FORM) = 10.33, CSS(W.MP) = 1.88, ..., CSS(OC) = 5.0]. These CSSs for language (sub)groups are also listed in the underlined portion of every section (Section 1 - Section 6) in Table 1. A chi-square test (degrees of freedom = 1) using [Eq 1] and [Eqs.2] revealed that MSCs of MY BBPNs are distributed in Formosan with significantly higher frequency (P < 0.001) than in other subgroups of AN, as well as than in non-AN languages. MY is thus concluded to have been derived from Formosan of AN. Eskimo shows some BBPN similarities to FORM and MY.
2012
Computation and Language
A Lightweight Stemmer for Gujarati
Gujarati is a resource-poor language with almost no language processing tools available. In this paper we present an implementation of a rule-based stemmer for Gujarati. We show the creation of rules for stemming and the richness in morphology that Gujarati possesses. We have also evaluated our results by verifying them with a human expert.
2012
Computation and Language
Design of English-Hindi Translation Memory for Efficient Translation
Developing parallel corpora is an important and difficult activity for Machine Translation, requiring manual annotation by human translators, and translating the same text again is wasted effort. Tools exist to avoid this for European languages, but no such tool is available for Indian languages. In this paper we present a tool for Indian languages which not only reuses previously available translations automatically but also, in cases where a sentence has multiple translations, provides a ranked list of suggested translations for the sentence. Moreover, the tool gives translators global and local options for saving their work, so that they may share it with others, which further lightens the task.
2012
Computation and Language
Hidden Trends in 90 Years of Harvard Business Review
In this paper, we demonstrate and discuss the results of mining the abstracts of publications in the Harvard Business Review between 1922 and 2012. Techniques for computing n-grams, collocations, basic sentiment analysis, and named-entity recognition were employed to uncover trends hidden in the abstracts. We present findings about international relationships, sentiment in HBR's abstracts, important international companies, influential technological inventions, renowned researchers in management theory, and US presidents, via chronological analyses.
2012
Computation and Language
Extraction of domain-specific bilingual lexicon from comparable corpora: compositional translation and ranking
This paper proposes a method for extracting translations of morphologically constructed terms from comparable corpora. The method is based on compositional translation and exploits translation equivalences at the morpheme level, which allows for the generation of "fertile" translations (translation pairs in which the target term has more words than the source term). Ranking methods relying on corpus-based and translation-based features are used to select the best candidate translation. We obtain an average precision of 91% on the Top1 candidate translation. The method was tested on two language pairs (English-French and English-German) and with small specialized comparable corpora (400k words per language).
2012
Computation and Language
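As a hedged sketch of the morpheme-level compositional idea described above, the code below greedily decomposes a source term into known morphemes and concatenates their target-language equivalents. The tiny English-French morpheme table and the example term are illustrative assumptions, not the paper's actual resources; generating genuinely "fertile" translations would additionally require mapping some morphemes to multi-word equivalents and reordering them.

```python
# Illustrative English->French morpheme table (not the paper's lexicon).
MORPHEMES = {"cyto": "cyto", "toxic": "toxique", "post": "post",
             "menopause": "ménopause", "anti": "anti"}

def compositional_translate(term):
    """Greedily split `term` into known morphemes and translate each one."""
    parts, rest = [], term
    while rest:
        match = next((m for m in sorted(MORPHEMES, key=len, reverse=True)
                      if rest.startswith(m)), None)
        if match is None:
            return None  # term is not compositionally covered
        parts.append(MORPHEMES[match])
        rest = rest[len(match):]
    return "".join(parts)

print(compositional_translate("cytotoxic"))  # -> "cytotoxique"
```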
Some Chances and Challenges in Applying Language Technologies to Historical Studies in Chinese
We report applications of language technology to analyzing historical documents in the Database for the Study of Modern Chinese Thoughts and Literature (DSMCTL). We studied two historical issues with the reported techniques: the conceptualization of "huaren" (Chinese people) and the attempt to institute constitutional monarchy in the late Qing dynasty. We also discuss research challenges for supporting sophisticated issues using our experience with DSMCTL, the Database of Government Officials of the Republic of China, and the Dream of the Red Chamber. Advanced techniques and tools for lexical, syntactic, semantic, and pragmatic processing of language information, along with more thorough data collection, are needed to strengthen the collaboration between historians and computer scientists.
2011
Computation and Language
Classification Analysis Of Authorship Fiction Texts in The Space Of Semantic Fields
The use of the naive Bayes classifier (NB) and the k-nearest-neighbours classifier (kNN) in the classification-based semantic analysis of authors' texts in English fiction is analysed. The authors' works are considered in a vector space whose basis is formed by the frequency characteristics of the semantic fields of nouns and verbs. Highly precise classification of authors' texts in the vector space of semantic fields indicates the presence in this space of particular spheres of the author's idiolect which characterize the individual author's style.
2012
Computation and Language
The Hangulphabet: A Descriptive Alphabet
This paper describes the Hangulphabet, a new writing system that should prove useful in a number of contexts. Using the Hangulphabet, a user can instantly see the voicing, manner and place of articulation of any phoneme found in human language. The Hangulphabet places consonant graphemes on a grid, with the x-axis representing place of articulation and the y-axis representing manner of articulation. Each individual grapheme contains radicals from both axes at the point where they intersect. The top radical represents manner of articulation while the bottom represents place of articulation. A horizontal line running through the middle of the bottom radical represents voicing. For vowels, place of articulation is located on a grid that represents the position of the tongue in the mouth. This grid is similar to that of the IPA vowel chart (International Phonetic Association, 1999), the difference being that in the Hangulphabet the trapezoid representing the vocal apparatus is on a slight tilt. Place of articulation for a vowel is represented by a breakout figure from the grid. This system can be used as an alternative to the International Phonetic Alphabet (IPA) or as a complement to it. Beginning students of linguistics may find it particularly useful. A Hangulphabet font has been created to facilitate switching between the Hangulphabet and the IPA.
2012
Computation and Language
The Model of Semantic Concepts Lattice For Data Mining Of Microblogs
A model of a semantic concept lattice for data mining of microblogs is proposed in this work. It is shown that the use of this model is effective for the analysis of semantic relations and for the detection of association rules among keywords.
2012
Computation and Language
Optimal size, freshness and time-frame for voice search vocabulary
In this paper, we investigate how to optimize the vocabulary for a voice search language model. The metric we optimize over is the out-of-vocabulary (OoV) rate since it is a strong indicator of user experience. In a departure from the usual way of measuring OoV rates, web search logs allow us to compute the per-session OoV rate and thus estimate the percentage of users that experience a given OoV rate. Under very conservative text normalization, we find that a voice search vocabulary consisting of 2 to 2.5 million words extracted from 1 week of search query data will result in an aggregate OoV rate of 1%; at that size, the same OoV rate will also be experienced by 90% of users. The number of words included in the vocabulary is a stable indicator of the OoV rate. Altering the freshness of the vocabulary or the duration of the time window over which the training data is gathered does not significantly change the OoV rate. Surprisingly, a significantly larger vocabulary (approximately 10 million words) is required to guarantee OoV rates below 1% for 95% of the users.
2012
Computation and Language
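A minimal sketch of the per-session out-of-vocabulary computation described above, under the assumption that the log is a list of sessions, each a list of query strings; the data structures, vocabulary, and the 1% threshold in the usage line are illustrative, not the paper's.

```python
def per_session_oov(sessions, vocabulary):
    """Return the fraction of query words outside `vocabulary` for each session."""
    rates = []
    for queries in sessions:
        words = [w for q in queries for w in q.split()]
        if words:
            oov = sum(1 for w in words if w not in vocabulary)
            rates.append(oov / len(words))
    return rates

vocabulary = {"weather", "in", "new", "york", "pizza", "near", "me"}
sessions = [["weather in new york", "pizza near me"],
            ["cheap flights to reykjavik"]]
rates = per_session_oov(sessions, vocabulary)
# e.g. share of sessions whose OoV rate stays at or below 1%
print(rates, sum(r <= 0.01 for r in rates) / len(rates))
```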
Large Scale Language Modeling in Automatic Speech Recognition
Large language models have proven quite beneficial for a variety of automatic speech recognition tasks at Google. We summarize results on Voice Search and a few YouTube speech transcription tasks to highlight the impact that one can expect from increasing both the amount of training data and the size of the language model estimated from such data. Depending on the task, the availability and amount of training data used, the language model size, and the amount of work and care put into integrating them in the lattice rescoring step, we observe reductions in word error rate between 6% and 10% relative, for systems on a wide range of operating points between 17% and 52% word error rate.
2012
Computation and Language
Transition-Based Dependency Parsing With Pluggable Classifiers
In principle, the design of transition-based dependency parsers makes it possible to experiment with any general-purpose classifier without other changes to the parsing algorithm. In practice, however, it often takes substantial software engineering to bridge between the different representations used by two software packages. Here we present extensions to MaltParser that allow the drop-in use of any classifier conforming to the interface of the Weka machine learning package, a wrapper for the TiMBL memory-based learner to this interface, and experiments on multilingual dependency parsing with a variety of classifiers. While earlier work had suggested that memory-based learners might be a good choice for low-resource parsing scenarios, we cannot support that hypothesis in this work. We observed that support-vector machines give better parsing performance than the memory-based learner, regardless of the size of the training set.
2012
Computation and Language
Verbalizing Ontologies in Controlled Baltic Languages
Controlled natural languages (mostly English-based) have recently emerged as a seemingly informal supplementary means for OWL ontology authoring, compared with the formal notations used by professional knowledge engineers. In this paper we present, by means of examples, a controlled Latvian language that has been designed to be compliant with the state-of-the-art Attempto Controlled English. We also discuss its relation to a controlled Lithuanian language that is being designed in parallel.
2010
Computation and Language
Detecting English Writing Styles For Non-native Speakers
Analyzing writing styles of non-native speakers is a challenging task. In this paper, we analyze the comments written in the discussion pages of the English Wikipedia. Using learning algorithms, we are able to detect native speakers' writing style with an accuracy of 74%. Given the diversity of the English Wikipedia users and the large number of languages they speak, we measure the similarities among their native languages by comparing the influence they have on their English writing style. Our results show that languages known to have the same origin and development path have similar footprint on their speakers' English writing style. To enable further studies, the dataset we extracted from Wikipedia will be made available publicly.
2012
Computation and Language
Dating Texts without Explicit Temporal Cues
This paper tackles temporal resolution of documents, such as determining when a document is about or when it was written, based only on its text. We apply techniques from information retrieval that predict dates via language models over a discretized timeline. Unlike most previous works, we rely {\it solely} on temporal cues implicit in the text. We consider both document-likelihood and divergence based techniques and several smoothing methods for both of them. Our best model predicts the mid-point of individuals' lives with a median of 22 and mean error of 36 years for Wikipedia biographies from 3800 B.C. to the present day. We also show that this approach works well when training on such biographies and predicting dates both for non-biographical Wikipedia pages about specific years (500 B.C. to 2010 A.D.) and for publication dates of short stories (1798 to 2008). Together, our work shows that, even in absence of temporal extraction resources, it is possible to achieve remarkable temporal locality across a diverse set of texts.
2012
Computation and Language
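A hedged sketch of the document-likelihood idea described above: each discretized time period gets a unigram language model, and a document is dated by the period whose (add-one smoothed) model gives it the highest log-likelihood. The toy period models and test document are illustrative assumptions, not the paper's training data or smoothing choice.

```python
import math
from collections import Counter

def score_period(doc_tokens, period_counts, vocab_size):
    """Add-one-smoothed unigram log-likelihood of the document under one period."""
    total = sum(period_counts.values())
    return sum(math.log((period_counts.get(w, 0) + 1) / (total + vocab_size))
               for w in doc_tokens)

def predict_period(doc_tokens, period_models):
    vocab = {w for counts in period_models.values() for w in counts}
    return max(period_models,
               key=lambda p: score_period(doc_tokens, period_models[p], len(vocab)))

# Toy per-period unigram counts; real models are trained on dated corpora.
period_models = {
    "1800-1850": Counter({"steam": 8, "carriage": 6, "telegraph": 2}),
    "1950-2000": Counter({"television": 7, "computer": 6, "satellite": 4}),
}
print(predict_period("the computer and the satellite".split(), period_models))
```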
A Hindi Speech Actuated Computer Interface for Web Search
Aiming to increase system simplicity and flexibility, an audio-evoked system was developed by integrating a simplified headphone and user-friendly software design. This paper describes a Hindi Speech Actuated Computer Interface for Web Search (HSACIWS), which accepts spoken queries in the Hindi language and presents the search results on the screen. The system recognizes spoken queries by large vocabulary continuous speech recognition (LVCSR), retrieves relevant documents by text retrieval, and provides the search results on the Web through the integration of the Web and voice systems. The LVCSR in this system showed sufficient performance levels for speech, with acoustic and language models derived from a query corpus with target contents.
2012
Computation and Language
A Principled Approach to Grammars for Controlled Natural Languages and Predictive Editors
Controlled natural languages (CNL) with a direct mapping to formal logic have been proposed to improve the usability of knowledge representation systems, query interfaces, and formal specifications. Predictive editors are a popular approach to solve the problem that CNLs are easy to read but hard to write. Such predictive editors need to be able to "look ahead" in order to show all possible continuations of a given unfinished sentence. Such lookahead features, however, are difficult to implement in a satisfying way with existing grammar frameworks, especially if the CNL supports complex nonlocal structures such as anaphoric references. Here, methods and algorithms are presented for a new grammar notation called Codeco, which is specifically designed for controlled natural languages and predictive editors. A parsing approach for Codeco based on an extended chart parsing algorithm is presented. A large subset of Attempto Controlled English (ACE) has been represented in Codeco. Evaluation of this grammar and the parser implementation shows that the approach is practical, adequate and efficient.
2013
Computation and Language
Semantic Polarity of Adjectival Predicates in Online Reviews
Web users produce more and more documents expressing opinions. Because these have become important resources for customers and manufacturers, much attention has been focused on them. Opinions are often expressed through adjectives with positive or negative semantic values. When extracting information from users' opinions in online reviews, exact recognition of the semantic polarity of adjectives is one of the most important requirements. Since adjectives have different semantic orientations according to context, it is not satisfactory to extract opinion information without considering the semantic and lexical relations between the adjectives and the feature nouns appropriate to a given domain. In this paper, we present a classification of adjectives by polarity, and we analyze adjectives that remain undetermined in the absence of context. Our research should be useful for accurately predicting the semantic orientations of opinion sentences, and should be taken into account before relying on automatic methods.
2010
Computation and Language
A Rule-Based Approach For Aligning Japanese-Spanish Sentences From A Comparable Corpora
The performance of a Statistical Machine Translation (SMT) system is directly proportional to the quality and size of the parallel corpus it uses; however, for some language pairs there is a considerable lack of such corpora. Our long-term goal is to construct a Japanese-Spanish parallel corpus to be used for SMT, since useful Japanese-Spanish parallel corpora are lacking. To address this problem, in this study we propose a method for extracting Japanese-Spanish parallel sentences from Wikipedia using POS tagging and a rule-based approach. The main focus of this approach is the syntactic features of both languages. Human evaluation was performed over a sample and shows promising results in comparison with the baseline.
2012
Computation and Language
Automating rule generation for grammar checkers
In this paper, I describe several approaches to the automatic or semi-automatic development of symbolic rules for grammar checkers from the information contained in corpora. The rules obtained this way are an important addition to the manually created rules that seem to dominate in rule-based checkers. The manual process of rule creation is costly, time-consuming and error-prone, so it seems advisable to use machine-learning algorithms to create the rules automatically or semi-automatically. The results obtained corroborate my initial hypothesis that symbolic machine learning algorithms can be useful for acquiring new rules for grammar checking. It turns out, however, that for practical uses, error corpora cannot be the sole source of information used in grammar checking. I suggest therefore that only by combining different approaches will grammar checkers, or more generally computer-aided proofreading tools, be able to cover the most frequent and severe mistakes and avoid false alarms that tend to distract users.
2012
Computation and Language
Two Algorithms for Finding $k$ Shortest Paths of a Weighted Pushdown Automaton
We introduce efficient algorithms for finding the $k$ shortest paths of a weighted pushdown automaton (WPDA), a compact representation of a weighted set of strings with potential applications in parsing and machine translation. Both of our algorithms are derived from the same weighted deductive logic description of the execution of a WPDA using different search strategies. Experimental results show our Algorithm 2 adds very little overhead vs. the single shortest path algorithm, even with a large $k$.
2013
Computation and Language
Using external sources of bilingual information for on-the-fly word alignment
In this paper we present a new and simple language-independent method for word-alignment based on the use of external sources of bilingual information such as machine translation systems. We show that the few parameters of the aligner can be trained on a very small corpus, which leads to results comparable to those obtained by the state-of-the-art tool GIZA++ in terms of precision. Regarding other metrics, such as alignment error rate or F-measure, the parametric aligner, when trained on a very small gold-standard (450 pairs of sentences), provides results comparable to those produced by GIZA++ when trained on an in-domain corpus of around 10,000 pairs of sentences. Furthermore, the results obtained indicate that the training is domain-independent, which enables the use of the trained aligner 'on the fly' on any new pair of sentences.
2012
Computation and Language
The Clustering of Author's Texts of English Fiction in the Vector Space of Semantic Fields
The clustering of text documents in the vector space of semantic fields and in a semantic space with an orthogonal basis is analysed. It is shown that using the vector space model with a basis of semantic fields is effective in cluster analysis algorithms for authors' texts in English fiction. The analysis of the distribution of authors' texts in the cluster structure showed the presence of areas of the semantic space that represent the idiolects of individual authors. SVD factorization of the semantic fields matrix makes it possible to reduce significantly the dimension of the semantic space in the cluster analysis of authors' texts.
2012
Computation and Language
A Novel Feature-based Bayesian Model for Query Focused Multi-document Summarization
Both supervised learning methods and LDA-based topic models have been successfully applied in the field of query-focused multi-document summarization. In this paper, we propose a novel supervised approach that can incorporate rich sentence features into Bayesian topic models in a principled way, thus taking advantage of both topic models and feature-based supervised learning methods. Experiments on TAC2008 and TAC2009 demonstrate the effectiveness of our approach.
2013
Computation and Language
Query-focused Multi-document Summarization: Combining a Novel Topic Model with Graph-based Semi-supervised Learning
Graph-based semi-supervised learning has proven to be an effective approach for query-focused multi-document summarization. The problem with previous semi-supervised learning is that sentences are ranked without considering higher-level information beyond the sentence level. Research on general summarization has illustrated that the addition of a topic level can effectively improve summary quality. Inspired by previous research, we propose a two-layer (i.e. sentence-layer and topic-layer) graph-based semi-supervised learning approach. At the same time, we propose a novel topic model which makes full use of the dependence between sentences and words. Experimental results on DUC and TAC data sets demonstrate the effectiveness of our proposed approach.
2014
Computation and Language
On the complexity of learning a language: An improvement of Block's algorithm
Language learning is thought to be a highly complex process. One of the hurdles in learning a language is learning its rules of syntax. Rules of syntax are often ordered, in that before one rule can be applied one must apply another. It has been thought that to learn the order of n rules one must go through all n! permutations; thus to learn the order of 27 rules would require 27! steps, or about 1.08889x10^{28} steps, a number much greater than the number of seconds since the beginning of the universe! In an insightful analysis the linguist Block ([Block 86], pp. 62-63, p. 238) showed that with the assumption of transitivity this vast number of learning steps reduces to a mere 377 steps. We present a mathematical analysis of the complexity of Block's algorithm, which has a complexity of order n^2 given n rules. In addition, we improve Block's result exponentially by introducing an algorithm with complexity of order less than n log n.
2012
Computation and Language
Mining the Web for the Voice of the Herd to Track Stock Market Bubbles
We show that power-law analyses of financial commentaries from newspaper web-sites can be used to identify stock market bubbles, supplementing traditional volatility analyses. Using a four-year corpus of 17,713 online, finance-related articles (10M+ words) from the Financial Times, the New York Times, and the BBC, we show that week-to-week changes in power-law distributions reflect market movements of the Dow Jones Industrial Average (DJI), the FTSE-100, and the NIKKEI-225. Notably, the statistical regularities in language track the 2007 stock market bubble, showing emerging structure in the language of commentators, as progressively greater agreement arose in their positive perceptions of the market. Furthermore, during the bubble period, a marked divergence in positive language occurs as revealed by a Kullback-Leibler analysis.
2012
Computation and Language
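For reference, the Kullback-Leibler analysis mentioned above compares two word-frequency distributions P and Q (for example, the distributions of positive language in two periods) with the standard divergence:

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{w} P(w) \log \frac{P(w)}{Q(w)}
```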
Identifying Metaphor Hierarchies in a Corpus Analysis of Finance Articles
Using a corpus of over 17,000 financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP- and DOWN-verbs used to describe movements of indices, stocks, and shares. Using measures of the overlap in the argument distributions of these verbs and k-means clustering of their distributions, we advance evidence for the proposal that the metaphors referred to by these verbs are organised into hierarchical structures of superordinate and subordinate groups.
2011
Computation and Language
Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles
Using a corpus of 17,000+ financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP and DOWN verbs used to describe movements of indices, stocks and shares. In Study 1 participants identified antonyms of these verbs in a free-response task and a matching task from which the most commonly identified antonyms were compiled. In Study 2, we determined whether the argument-distributions for the verbs in these antonym-pairs were sufficiently similar to predict the most frequently-identified antonym. Cosine similarity correlates moderately with the proportions of antonym-pairs identified by people (r = 0.31). More impressively, 87% of the time the most frequently-identified antonym is either the first- or second-most similar pair in the set of alternatives. The implications of these results for distributional approaches to determining metaphoric knowledge are discussed.
2011
Computation and Language
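A minimal sketch of the cosine measure used above, applied to toy argument-count vectors for an UP/DOWN verb pair; the argument counts are invented for illustration and are not taken from the corpus described in the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two argument-count dictionaries."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Invented argument counts for a candidate antonym pair.
rise = {"shares": 40, "index": 25, "profits": 10}
fall = {"shares": 35, "index": 30, "oil": 5}
print(round(cosine(rise, fall), 3))  # high overlap -> plausible antonym pair
```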
Diachronic Variation in Grammatical Relations
We present a method of finding and analyzing shifts in grammatical relations found in diachronic corpora. Inspired by the econometric technique of measuring return and volatility instead of relative frequencies, we propose them as a way to better characterize changes in grammatical patterns like nominalization, modification and comparison. To exemplify the use of these techniques, we examine a corpus of NIPS papers and report trends which manifest at the token, part-of-speech and grammatical levels. Building up from frequency observations to a second-order analysis, we show that shifts in frequencies overlook deeper trends in language, even when part-of-speech information is included. Examining token, POS and grammatical levels of variation enables a summary view of diachronic text as a whole. We conclude with a discussion about how these methods can inform intuitions about specialist domains as well as changes in language use as a whole.
2012
Computation and Language
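For concreteness, the econometric quantities borrowed above can be written for the relative frequency f_t of a grammatical pattern in period t as follows (a standard formulation assumed here, not quoted from the paper), with volatility taken as the standard deviation of returns over a window of k periods:

```latex
r_t = \frac{f_t - f_{t-1}}{f_{t-1}}, \qquad
\sigma_t = \sqrt{\frac{1}{k}\sum_{i=t-k+1}^{t} \left(r_i - \bar{r}\right)^2}
```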
Language Without Words: A Pointillist Model for Natural Language Processing
This paper explores two separate questions: Can we perform natural language processing tasks without a lexicon? And should we? Existing natural language processing techniques are either based on words as units or use units such as grams only for basic classification tasks. How close can a machine come to reasoning about the meanings of words and phrases in a corpus without using any lexicon, based only on grams? Our own motivation for posing this question is based on our efforts to find popular trends in words and phrases from online Chinese social media. This form of written Chinese uses so many neologisms, creative character placements, and combinations of writing systems that it has been dubbed the "Martian Language." Readers must often use visual cues, audible cues from reading out loud, and their knowledge and understanding of current events to understand a post. For analysis of popular trends, the specific problem is that it is difficult to build a lexicon when the invention of new ways to refer to a word or concept is easy and common. For natural language processing in general, we argue in this paper that new uses of language in social media will challenge machines' ability to operate with words as the basic unit of understanding, not only in Chinese but potentially in other languages.
2012
Computation and Language
Sentence Compression in Spanish driven by Discourse Segmentation and Language Models
Previous work has demonstrated that Automatic Text Summarization (ATS) by sentence extraction may be improved using sentence compression. In this work we present a sentence compression approach guided by sentence-level discourse segmentation and probabilistic language models (LM). The results presented here show that the proposed solution is able to generate coherent summaries with grammatical compressed sentences. The approach is simple enough to be transposed to other languages.
2012
Computation and Language
A comparative study of root-based and stem-based approaches for measuring the similarity between arabic words for arabic text mining applications
A representation of the semantic information contained in words is needed for any Arabic text mining application. More precisely, the purpose is to better take into account the semantic dependencies between words expressed by the co-occurrence frequencies of these words. There have been many proposals for computing similarities between words based on their distributions in contexts. In this paper, we compare and contrast the effect of two preprocessing techniques applied to an Arabic corpus, the root-based (stemming) and stem-based (light stemming) approaches, for measuring the similarity between Arabic words with the well-known abstractive model, Latent Semantic Analysis (LSA), using a wide variety of distance functions and similarity measures, such as the Euclidean distance, cosine similarity, Jaccard coefficient, and Pearson correlation coefficient. The results obtained show that, on the one hand, the variety of the corpus produces more accurate results; on the other hand, the stem-based approach outperforms the root-based one, because the latter affects the meanings of words.
2012
Computation and Language
Assessing Sentiment Strength in Words Prior Polarities
Many approaches to sentiment analysis rely on lexica where words are tagged with their prior polarity - i.e. if a word out of context evokes something positive or something negative. In particular, broad-coverage resources like SentiWordNet provide polarities for (almost) every word. Since words can have multiple senses, we address the problem of how to compute the prior polarity of a word starting from the polarity of each sense and returning its polarity strength as an index between -1 and 1. We compare 14 such formulae that appear in the literature, and assess which one best approximates the human judgement of prior polarities, with both regression and classification models.
2,012
Computation and Language
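To make the problem concrete, the sketch below shows two of the simpler families of formulae the abstract alludes to: a plain mean of per-sense polarities and a rank-weighted mean, each returning a strength in [-1, 1]. The fourteen formulae compared in the paper are not reproduced here, and the sense scores are invented for illustration.

```python
def mean_polarity(sense_scores):
    """Unweighted mean of per-sense polarities (each in [-1, 1])."""
    return sum(sense_scores) / len(sense_scores)

def rank_weighted_polarity(sense_scores):
    """Weight senses by 1/rank so that frequent (low-rank) senses dominate.

    Assumes sense_scores is ordered from most to least frequent sense.
    """
    weights = [1.0 / rank for rank in range(1, len(sense_scores) + 1)]
    total = sum(w * s for w, s in zip(weights, sense_scores))
    return total / sum(weights)

# Toy per-sense polarities for an ambiguous word (invented values).
senses = [-0.5, 0.0, -0.125, 0.25]
print(mean_polarity(senses))           # -0.09375
print(rank_weighted_polarity(senses))  # leans towards the first, dominant sense
```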
Natural Language Understanding Based on Semantic Relations between Sentences
In this paper, we define event expressions over sentences of natural language and semantic relations between events. Based on this definition, we formally model the text understanding process, taking events as the basic unit.
2,012
Computation and Language
Good parts first - a new algorithm for approximate search in lexica and string databases
We present a new efficient method for approximate search in electronic lexica. Given an input string (the pattern) and a similarity threshold, the algorithm retrieves all entries of the lexicon that are sufficiently similar to the pattern. Search is organized in subsearches that always start with an exact partial match where a substring of the input pattern is aligned with a substring of a lexicon word. Afterwards this partial match is extended stepwise to larger substrings. For aligning further parts of the pattern with corresponding parts of lexicon entries, more errors are tolerated at each subsequent step. For supporting this alignment order, which may start at any part of the pattern, the lexicon is represented as a structure that enables immediate access to any substring of a lexicon word and permits the extension of such substrings in both directions. Experimental evaluations of the approximate search procedure are given that show significant efficiency improvements compared to existing techniques. Since the technique can be used for large error bounds, it offers interesting possibilities for approximate search in special collections of "long" strings, such as phrases, sentences, or book titles.
2,015
Computation and Language
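The paper's data structure (bidirectional extension of exact substring alignments) is not reproduced here. As a much simpler stand-in for the same filter-and-verify idea, the sketch below indexes lexicon words by their character q-grams, uses exact q-gram seeds from the pattern to collect candidates, and verifies candidates with a Levenshtein bound. The function names and index layout are my own.

```python
from collections import defaultdict

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def build_qgram_index(lexicon, q=3):
    """Map every character q-gram to the set of lexicon words containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for i in range(max(1, len(word) - q + 1)):
            index[word[i:i + q]].add(word)
    return index

def approximate_search(pattern, index, q=3, max_dist=2):
    candidates = set()
    for i in range(max(1, len(pattern) - q + 1)):
        candidates |= index.get(pattern[i:i + q], set())   # exact seeds
    return sorted(w for w in candidates if levenshtein(pattern, w) <= max_dist)

lexicon = ["garden", "gardens", "harden", "guarded", "pardon", "warden"]
index = build_qgram_index(lexicon)
print(approximate_search("gardon", index, max_dist=2))
```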
Syntactic Analysis Based on Morphological Characteristic Features of the Romanian Language
This paper refers to the syntactic analysis of phrases in Romanian, as an important process of natural language processing. We suggest a real-time solution based on the idea of using certain words or groups of words that indicate a grammatical category, together with specific endings of some parts of the sentence. Our idea is based on some characteristics of the Romanian language, where some prepositions, adverbs or specific endings can provide a lot of information about the structure of a complex sentence. Such characteristics can be found in other languages too, such as French. Using a special grammar, we developed a system (DIASEXP) that can perform a dialogue in natural language with assertive and interrogative sentences about a "story" (a set of sentences describing some events from real life).
2,013
Computation and Language
TEI and LMF crosswalks
The present paper explores various arguments in favour of making the Text Encoding Initiative (TEI) guidelines an appropriate serialisation for ISO standard 24613:2008 (LMF, Lexical Markup Framework). It also identifies the issues that would have to be resolved in order to reach an appropriate implementation of these ideas, in particular in terms of informational coverage. We show how the customisation facilities offered by the TEI guidelines can provide an adequate background, not only to cover missing components within the current Dictionary chapter of the TEI guidelines, but also to allow specific lexical projects to deal with local constraints. We expect this proposal to be a basis for a future ISO project in the context of the ongoing revision of LMF.
2,015
Computation and Language
Determining token sequence mistakes in responses to questions with open text answer
When learning the grammar of a new language, a teacher must routinely check students' exercises for grammatical correctness. The paper describes a method of automatically detecting and reporting grammar mistakes concerning the order of tokens in the response. It can report extra tokens, missing tokens and misplaced tokens. The method is useful when teaching languages where the order of tokens is important, which includes most formal languages and some natural ones (like English). The method was implemented in the question type plug-in CorrectWriting for the widely used learning management system Moodle.
2,013
Computation and Language
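A minimal sketch of the kind of token-order diagnosis described above, using Python's difflib to align the student response with the reference answer and report missing, extra and (crudely) misplaced tokens. The CorrectWriting plug-in itself is Moodle/PHP code; this only illustrates the alignment idea, with invented example sentences.

```python
from difflib import SequenceMatcher

def token_mistakes(reference, response):
    """Report missing, extra and misplaced tokens in a student response."""
    ref, resp = reference.split(), response.split()
    matcher = SequenceMatcher(a=ref, b=resp, autojunk=False)
    missing, extra = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("delete", "replace"):
            missing.extend(ref[i1:i2])     # expected but not aligned
        if tag in ("insert", "replace"):
            extra.extend(resp[j1:j2])      # produced but not aligned
    # A token reported both as missing and extra is really just misplaced.
    misplaced = [t for t in missing if t in extra]
    missing = [t for t in missing if t not in misplaced]
    extra = [t for t in extra if t not in misplaced]
    return {"missing": missing, "extra": extra, "misplaced": misplaced}

print(token_mistakes("she has been working here since 2010",
                     "she been has working here since 2010"))
# 'been' is reported as misplaced rather than missing or extra.
```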
Cutting Recursive Autoencoder Trees
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. In this paper, we rely on empirical tests to see whether a particular structure makes sense. We present an analysis of the Semi-Supervised Recursive Autoencoder, a well-known model that produces structural representations of text. We show that for certain tasks, the structure of the autoencoder can be significantly reduced without loss of classification accuracy and we evaluate the produced structures using human judgment.
2,013
Computation and Language
SpeedRead: A Fast Named Entity Recognition Pipeline
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named entity extraction systems. However, the progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. This pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close to state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
2,013
Computation and Language
The Manifold of Human Emotions
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities.
2,013
Computation and Language
A Rhetorical Analysis Approach to Natural Language Processing
The goal of this research was to find a way to extend the capabilities of computers through the processing of language in a more human way, and present applications which demonstrate the power of this method. This research presents a novel approach, Rhetorical Analysis, to solving problems in Natural Language Processing (NLP). The main benefit of Rhetorical Analysis, as opposed to previous approaches, is that it does not require the accumulation of large sets of training data, but can be used to solve a multitude of problems within the field of NLP. The NLP problems investigated with Rhetorical Analysis were the Author Identification problem - predicting the author of a piece of text based on its rhetorical strategies, Election Prediction - predicting the winner of a presidential candidate's re-election campaign based on rhetorical strategies within that president's inaugural address, Natural Language Generation - having a computer produce text containing rhetorical strategies, and Document Summarization. The results of this research indicate that an Author Identification system based on Rhetorical Analysis could predict the correct author 100% of the time, that a re-election predictor based on Rhetorical Analysis could predict the correct winner of a re-election campaign 55% of the time, that a Natural Language Generation system based on Rhetorical Analysis could output text with up to 87.3% similarity to Shakespeare in style, and that a Document Summarization system based on Rhetorical Analysis could extract highly relevant sentences. Overall, this study demonstrated that Rhetorical Analysis could be a useful approach to solving problems in NLP.
2,013
Computation and Language
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than n-gram language models and their smoothed variants. This paper investigates its application in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the potential of NPLMs to complement 'resource-constrained' bilingual resources with potentially 'huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate the application to various tasks, we propose a joint space model of the ngram-HMM language model. We present a system combination experiment in the area of SMT. One finding was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLM.
2,017
Computation and Language
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
2,013
Computation and Language
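One common statement of the neural tensor network scoring function is g(e1, R, e2) = u_R^T f(e1^T W_R^[1:k] e2 + V_R [e1; e2] + b_R), with f = tanh. The numpy sketch below implements that score for a single relation with random parameters. It is my reading of the model equation, not the authors' code, and the dimensions and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 3                      # entity dimension, number of tensor slices

# Random parameters for one relation R (illustrative only).
W = rng.normal(size=(k, d, d))   # tensor W_R^[1:k]
V = rng.normal(size=(k, 2 * d))  # standard-layer weights V_R
b = rng.normal(size=k)           # bias b_R
u = rng.normal(size=k)           # output weights u_R

def ntn_score(e1, e2):
    """g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    standard = V @ np.concatenate([e1, e2]) + b
    return u @ np.tanh(bilinear + standard)

e_dog, e_mammal = rng.normal(size=d), rng.normal(size=d)
print(ntn_score(e_dog, e_mammal))   # higher score = more plausible triple
```

In the paper the entity vectors would be initialized from unsupervised word vectors rather than drawn at random, which is what allows querying relations for entities unseen in the knowledge base.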
Two SVDs produce more focal deep learning representations
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012).
2,013
Computation and Language
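The sketch below shows the plain mechanics of applying two consecutive truncated SVDs to a word-context co-occurrence matrix, which is one way to read the method's name. Note the row normalisation between the two factorizations is my own assumption (without some transformation in between, the second SVD would simply repeat the first truncation); the paper's actual intermediate step and its focality measure are not reproduced, and the matrix is random.

```python
import numpy as np

def truncated_svd(M, k):
    """Rank-k truncated SVD; returns U_k * S_k as row representations."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * S[:k]

rng = np.random.default_rng(1)
cooc = rng.poisson(2.0, size=(1000, 300)).astype(float)   # word x context counts

first = truncated_svd(np.log1p(cooc), k=100)               # first SVD on log counts
rows = first / np.linalg.norm(first, axis=1, keepdims=True)  # assumed intermediate step
second = truncated_svd(rows, k=50)                          # second SVD on normalised rows
print(second.shape)                                         # (1000, 50) word representations
```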
Efficient Estimation of Word Representations in Vector Space
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
2,013
Computation and Language
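The abstract above describes the models that became widely known as word2vec. As a usage illustration only (via the gensim library, not the authors' original tool), the snippet below trains a tiny skip-gram model and queries nearest neighbours. Parameter names assume gensim 4.x, and the toy corpus is far too small to yield meaningful vectors.

```python
from gensim.models import Word2Vec

# Toy corpus; a real run would use hundreds of millions of tokens.
sentences = [
    "the king rules the country".split(),
    "the queen rules the country".split(),
    "the man walks the dog".split(),
    "the woman walks the dog".split(),
]

model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the word vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=200,
    seed=42,
)

print(model.wv.most_similar("king", topn=3))
```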
Language learning from positive evidence, reconsidered: A simplicity-based approach
Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes are corrected. Such learning from positive evidence has been viewed as raising logical problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this logical problem can be dissolved. Here, we review recent formal results showing that the learner has sufficient data to learn successfully from positive evidence, if it favours the simplest encoding of the linguistic input. Results include the ability to learn linguistic prediction, grammaticality judgements, language production, and form-meaning mappings. The simplicity approach can also be scaled down to analyse the ability to learn specific linguistic constructions, and is amenable to empirical test as a framework for describing human language acquisition.
2,013
Computation and Language
Transfer Topic Modeling with Ease and Scalability
The increasing volume of short texts generated on social media sites, such as Twitter or Facebook, creates a great demand for effective and efficient topic modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it is not optimal due to its weakness in handling short texts with fast-changing topics and scalability concerns. In this paper, we propose a transfer learning approach that utilizes abundant labeled documents from other domains (such as Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting and result interpretation. Specifically, we develop Transfer Hierarchical LDA (thLDA) model, which incorporates the label information from other domains via informative priors. In addition, we develop a parallel implementation of our model for large-scale applications. We demonstrate the effectiveness of our thLDA model on both a microblogging dataset and standard text collections including AP and RCV1 datasets.
2,013
Computation and Language
Multi-Step Regression Learning for Compositional Distributional Semantics
We present a model for compositional distributional semantics related to the framework of Coecke et al. (2010), and emulating formal semantics by representing functions as tensors and arguments as vectors. We introduce a new learning method for tensors, generalising the approach of Baroni and Zamparelli (2010). We evaluate it on two benchmark data sets, and find it to outperform existing leading methods. We argue in our analysis that the nature of this learning method also renders it suitable for solving more subtle problems compositional distributional models might face.
2,013
Computation and Language
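In the lexical-function tradition this model generalises (Baroni and Zamparelli, 2010, which the abstract cites), an adjective is represented as a matrix learned by regressing observed adjective-noun vectors on the corresponding noun vectors. The sketch below does exactly that with ridge regression on random stand-in vectors; the paper's multi-step generalisation to higher-order tensors is not shown.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d, n_pairs = 20, 300

# Stand-ins for corpus-derived vectors: nouns and the observed "red <noun>" vectors.
nouns = rng.normal(size=(n_pairs, d))
true_matrix = rng.normal(size=(d, d)) / np.sqrt(d)
red_noun = nouns @ true_matrix.T + 0.1 * rng.normal(size=(n_pairs, d))

# Learn the matrix for "red": effectively one ridge regression per output dimension.
reg = Ridge(alpha=1.0, fit_intercept=False).fit(nouns, red_noun)
red_matrix = reg.coef_                        # shape (d, d)

new_noun = rng.normal(size=d)
predicted_red_new_noun = red_matrix @ new_noun
print(predicted_red_new_noun.shape)           # composed vector for an unseen pair
```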
PyPLN: a Distributed Platform for Natural Language Processing
This paper presents a distributed platform for Natural Language Processing called PyPLN. PyPLN leverages a vast array of NLP and text processing open source tools, managing the distribution of the workload on a variety of configurations: from a single server to a cluster of Linux servers. PyPLN is developed using Python 2.7.3 but makes it very easy to incorporate other software for specific tasks as long as a Linux version is available. PyPLN facilitates analyses both at document and corpus level, simplifying management and publication of corpora and analytical results through an easy-to-use web interface. In the current (beta) release, it supports the English and Portuguese languages, with support for other languages planned for future releases. To support the Portuguese language PyPLN uses the PALAVRAS parser (Bick, 2000). Currently PyPLN offers the following features: text extraction with encoding normalization (to UTF-8), part-of-speech tagging, token frequency, semantic annotation, n-gram extraction, word and sentence repertoire, and full-text search across corpora. The platform is licensed under GPL-v3.
2,013
Computation and Language
Large Scale Distributed Acoustic Modeling With Back-off N-grams
The paper revives an older approach to acoustic modeling that borrows from n-gram language modeling in an attempt to scale up both the amount of training data and model size (as measured by the number of parameters in the model), to approximately 100 times larger than current sizes used in automatic speech recognition. In such a data-rich setting, we can expand the phonetic context significantly beyond triphones, as well as increase the number of Gaussian mixture components for the context-dependent states that allow it. We have experimented with contexts that span seven or more context-independent phones, and up to 620 mixture components per state. Dealing with unseen phonetic contexts is accomplished using the familiar back-off technique used in language modeling due to implementation simplicity. The back-off acoustic model is estimated, stored and served using MapReduce distributed computing infrastructure. Speech recognition experiments are carried out in an N-best list rescoring framework for Google Voice Search. Training big models on large amounts of data proves to be an effective way to increase the accuracy of a state-of-the-art automatic speech recognition system. We use 87,000 hours of training data (speech along with transcription) obtained by filtering utterances in Voice Search logs on automatic speech recognition confidence. Models ranging in size between 20--40 million Gaussians are estimated using maximum likelihood training. They achieve relative reductions in word-error-rate of 11% and 6% when combined with first-pass models trained using maximum likelihood, and boosted maximum mutual information, respectively. Increasing the context size beyond five phones (quinphones) does not help.
2,013
Computation and Language
Towards the Rapid Development of a Natural Language Understanding Module
When developing a conversational agent, there is often an urgent need to have a prototype available in order to test the application with real users. A Wizard of Oz is a possibility, but sometimes the agent should simply be deployed in the environment where it will be used. Here, the agent should be able to capture as many interactions as possible and to understand how people react to failure. In this paper, we focus on the rapid development of a natural language understanding module by non-experts. Our approach follows the learning paradigm and sees the process of understanding natural language as a classification problem. We test our module with a conversational agent that answers questions in the art domain. Moreover, we show how our approach can be used by a natural language interface to a cinema database.
2,011
Computation and Language
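A minimal sketch of the "understanding as classification" idea: map an utterance to one of a handful of hand-labelled interpretation classes with a bag-of-words classifier. The intent labels and example utterances below are invented for an art-domain agent; the paper's actual feature set and classifier may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set (invented examples).
utterances = [
    "who painted this picture", "who is the author of this work",
    "when was this painted", "what year is this painting from",
    "what is this painting about", "what does this work represent",
]
labels = ["ask_author", "ask_author", "ask_date", "ask_date",
          "ask_theme", "ask_theme"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)

print(clf.predict(["who made this painting"]))   # maps to the closest trained intent
```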
Sémantique des déterminants dans un cadre richement typé
The variation of word meaning according to the context leads us to enrich the type system of our syntactic and semantic analyser of French, based on categorial grammars and Montague semantics (or lambda-DRT). The main advantage of a deep semantic analysis is to represent meaning by logical formulae that can easily be used, e.g., for inferences. Determiners and quantifiers play a fundamental role in the construction of those formulae. But in our rich type system the usual semantic terms do not work. We propose a solution inspired by the tau and epsilon operators of Hilbert, kinds of generic elements and choice functions. This approach unifies the treatment of the different determiners and quantifiers as well as the dynamic binding of pronouns. Above all, this fully computational view fits in well with the wide-coverage parser Grail, both from a theoretical and a practical viewpoint.
2,013
Computation and Language
Lexical Access for Speech Understanding using Minimum Message Length Encoding
The Lexical Access Problem consists of determining the intended sequence of words corresponding to an input sequence of phonemes (basic speech sounds) that come from a low-level phoneme recognizer. In this paper we present an information-theoretic approach based on the Minimum Message Length Criterion for solving the Lexical Access Problem. We model sentences using phoneme realizations seen in training, and word and part-of-speech information obtained from text corpora. We show results on multiple-speaker, continuous, read speech and discuss a heuristic using equivalence classes of similar sounding words which speeds up the recognition process without significant deterioration in recognition accuracy.
2,013
Computation and Language
Building a reordering system using tree-to-string hierarchical model
This paper describes our submission to the First Workshop on Reordering for Statistical Machine Translation. We decided to build a reordering system based on a tree-to-string model, using only publicly available tools to accomplish this task. With the provided training data we built a translation model using the Moses toolkit, and then applied a chart decoder, implemented in Moses, to reorder the sentences. Even though our submission only covered the English-Farsi language pair, we believe that the approach itself should work regardless of the choice of languages, so we also carried out experiments for English-Italian and English-Urdu. For these language pairs we noticed a significant improvement over the baseline in the BLEU, Kendall-Tau and Hamming metrics. A detailed description is given, so that everyone can reproduce our results. Also, some possible directions for further improvements are discussed.
2,013
Computation and Language
Termhood-based Comparability Metrics of Comparable Corpus in Special Domain
Cross-Language Information Retrieval (CLIR) and machine translation (MT) resources, such as dictionaries and parallel corpora, are scarce and hard to come by for special domains. Besides, these resources are limited to a few languages, such as English, French, and Spanish. Obtaining comparable corpora automatically for such domains could therefore be an effective answer to this problem. Comparable corpora, in which the subcorpora are not translations of each other, can easily be obtained from the web. Therefore, building and using comparable corpora is often a more feasible option in multilingual information processing. Comparability metrics are one of the key issues in the field of building and using comparable corpora. Currently, there is no widely accepted definition or metric of corpus comparability. In fact, different definitions or metrics of comparability might be given to suit various natural language processing tasks. A new comparability metric, namely a termhood-based metric oriented to the task of bilingual terminology extraction, is proposed in this paper. In this method, words are ranked by termhood rather than frequency, and the cosine similarity, calculated from the ranking lists of word termhood, is used as the comparability score. Experimental results show that the termhood-based metric performs better than the traditional frequency-based metric.
2,013
Computation and Language
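To make the metric concrete, the sketch below scores each word's termhood (crudely, as a smoothed ratio of its relative frequency in a subcorpus versus a general background corpus), and takes the cosine of the two termhood vectors over the vocabulary. This is a simplified stand-in: the paper's actual termhood measure and the cross-lingual mapping between subcorpora (e.g., via a bilingual dictionary) are not reproduced, and the counts are toy data.

```python
from collections import Counter
import math

def termhood(domain_counts, background_counts, total_dom, total_bg):
    """Crude termhood: relative-frequency ratio of domain vs. background."""
    scores = {}
    for w, c in domain_counts.items():
        p_dom = c / total_dom
        p_bg = (background_counts.get(w, 0) + 1) / (total_bg + 1)   # smoothed
        scores[w] = p_dom / p_bg
    return scores

def cosine(u, v):
    dot = sum(u.get(w, 0.0) * v.get(w, 0.0) for w in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

background = Counter("the of and to in a is that".split() * 50)
sub_a = Counter("gene protein expression the of cell pathway".split() * 5)
sub_b = Counter("protein gene cell the and receptor binding".split() * 5)

th_a = termhood(sub_a, background, sum(sub_a.values()), sum(background.values()))
th_b = termhood(sub_b, background, sum(sub_b.values()), sum(background.values()))
print(round(cosine(th_a, th_b), 3))   # termhood-based comparability score
```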
Bilingual Terminology Extraction Using Multi-level Termhood
Purpose: Terminology is the set of technical words or expressions used in specific contexts, which denotes the core concepts in a formal discipline and is usually applied in the fields of machine translation, information retrieval, information extraction and text categorization, etc. Bilingual terminology extraction plays an important role in the application of bilingual dictionary compilation, bilingual ontology construction, machine translation and cross-language information retrieval, etc. This paper addresses the issues of monolingual terminology extraction and bilingual term alignment based on multi-level termhood. Design/methodology/approach: A method based on multi-level termhood is proposed. The new method computes the termhood of a terminology candidate as well as of the sentence that includes the terminology by comparing corpora. Since terminologies and general words usually have different distributions in the corpus, termhood can also be used to constrain and enhance the performance of term alignment when aligning bilingual terms on a parallel corpus. In this paper, bilingual term alignment based on termhood constraints is presented. Findings: Experimental results show that multi-level termhood obtains better performance than existing methods for terminology extraction. If termhood is used as a constraining factor, the performance of bilingual term alignment can also be improved.
2,012
Computation and Language
Compactified Horizontal Visibility Graph for the Language Network
A compactified horizontal visibility graph for the language network is proposed. It was found that the networks constructed in this way are scale-free and have the property that, among the nodes with the largest degrees, there are words that determine not only the communicative structure of a text but also its informational structure.
2,013
Computation and Language
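For reference, the standard horizontal visibility graph construction that the abstract starts from is sketched below on a toy numeric series: nodes are positions, and two positions are linked whenever every value strictly between them is lower than both endpoints. The compactification step of the abstract (plausibly, merging nodes that correspond to the same word) is not reproduced here, nor is the particular series used to represent a text.

```python
def horizontal_visibility_edges(series):
    """Edges (i, j) of the horizontal visibility graph of a numeric series.

    Positions i and j are connected iff every value strictly between them
    is lower than both series[i] and series[j].
    """
    edges = []
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            between = series[i + 1:j]
            if all(x < series[i] and x < series[j] for x in between):
                edges.append((i, j))
            # Once a value >= series[i] blocks the view, no later j is visible.
            if series[j] >= series[i]:
                break
    return edges

# Toy series, e.g. word lengths along a sentence.
lengths = [3, 1, 4, 1, 5, 9, 2, 6]
print(horizontal_visibility_edges(lengths))
```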
Towards a Semantic-based Approach for Modeling Regulatory Documents in Building Industry
Regulations in the Building Industry are becoming increasingly complex and involve more than one technical area. They cover products, components and project implementation. They also play an important role in ensuring the quality of a building and in minimizing its environmental impact. In this paper, we are particularly interested in the modeling of the regulatory constraints derived from the Technical Guides issued by CSTB and used to validate Technical Assessments. We first describe our approach for modeling regulatory constraints in the SBVR language and formalizing them in the SPARQL language. Second, we describe how we model the compliance-checking processes described in the CSTB Technical Guides. Third, we show how we implement these processes to assist industry practitioners in drafting Technical Documents in order to obtain a Technical Assessment; a compliance report is automatically generated to explain the compliance or non-compliance of these Technical Documents.
2,012
Computation and Language
Probabilistic Frame Induction
In natural-language discourse, related events tend to appear near each other to describe a larger scenario. Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort.
2,013
Computation and Language
NLP and CALL: integration is working
In the first part of this article, we explore the background of computer-assisted learning from its beginnings in the early XIXth century and the first teaching machines, founded on theories of learning, at the start of the XXth century. With the arrival of the computer, it became possible to offer language learners different types of language activities such as comprehension tasks, simulations, etc. However, these have limits that cannot be overcome without some contribution from the field of natural language processing (NLP). In what follows, we examine the challenges faced and the issues raised by integrating NLP into CALL. We hope to demonstrate that the key to success in integrating NLP into CALL is to be found in multidisciplinary work between computer experts, linguists, language teachers, didacticians and NLP specialists.
2,006
Computation and Language
A Labeled Graph Kernel for Relationship Extraction
In this paper, we propose an approach for Relationship Extraction (RE) based on labeled graph kernels. The kernel we propose is a particularization of a random walk kernel that exploits two properties previously studied in the RE literature: (i) the words between the candidate entities or connecting them in a syntactic representation are particularly likely to carry information regarding the relationship; and (ii) combining information from distinct sources in a kernel may help the RE system make better decisions. We performed experiments on a dataset of protein-protein interactions and the results show that our approach obtains effectiveness values that are comparable with the state-of-the-art kernel methods. Moreover, our approach is able to outperform the state-of-the-art kernels when combined with other kernel methods.
2,013
Computation and Language
Role of temporal inference in the recognition of textual inference
This project is part of the field of natural language processing and aims to develop a textual inference recognition system called TIMINF. Given two portions of text, this type of system can detect whether one text can be semantically inferred from the other. We focus on integrating temporal inference into this type of system. To this end, we built and analysed a corpus of questions collected on the web. This study enabled us to classify different types of temporal inference and to design the architecture of TIMINF, which integrates a temporal inference module into a textual inference detection system. We also assess the performance of the TIMINF system's output on a test corpus, using the same strategy adopted in the RTE challenge.
2,013
Computation and Language
Development of Yes/No Arabic Question Answering System
Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight, semantic-based, open-domain yes/no Arabic question answering system based on paragraph retrieval with variable length. We propose a constrained semantic representation using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves better precision by retrieving more paragraphs that contain relevant answers to the question; it also significantly reduces the amount of text to be processed by the system.
2,013
Computation and Language
Non-simplifying Graph Rewriting Termination
So far, a very large amount of work in Natural Language Processing (NLP) relies on trees as the core mathematical structure to represent linguistic information (e.g. in Chomsky's work). However, some linguistic phenomena are not adequately represented by trees. In a former paper, we showed the benefit of encoding linguistic structures with graphs and of using graph rewriting rules to compute on those structures. Justified by linguistic considerations, graph rewriting is characterized by two features: first, there is no node creation along computations, and second, there are non-local edge modifications. Under these hypotheses, we show that uniform termination is undecidable and that non-uniform termination is decidable. We describe two termination techniques based on weights and we give complexity bounds on the derivation length for these rewriting systems.
2,013
Computation and Language
Ending-based Strategies for Part-of-speech Tagging
Probabilistic approaches to part-of-speech tagging rely primarily on whole-word statistics about word/tag combinations as well as contextual information. But experience shows that about 4 per cent of the tokens encountered in test sets are unknown even when the training set is as large as a million words. Unseen words are tagged using secondary strategies that exploit word features such as endings, capitalization and punctuation marks. In this work, word-ending statistics are primary and whole-word statistics are secondary. First, a tagger was trained and tested on word endings only. Subsequent experiments added back whole-word statistics for the words occurring most frequently in the training set. As this set of frequent words grew larger, performance was expected to improve, in the limit performing the same as word-based taggers. Surprisingly, the ending-based tagger initially performed nearly as well as the word-based tagger; in the best case, its performance significantly exceeded that of the word-based tagger. Lastly, and unexpectedly, an effect of negative returns was observed: as the set grew larger, performance generally improved and then declined. By varying factors such as ending length and tag-list strategy, we achieved a success rate of 97.5 per cent.
2,013
Computation and Language
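A minimal sketch of the ending-first strategy, assuming a training set of (word, tag) pairs: tag by the most frequent tag of the word's last few characters, and fall back to whole-word statistics only for the most frequent training words. The training pairs below are invented, and the paper's contextual model and exact ending length are not reproduced.

```python
from collections import Counter, defaultdict

def train(tagged_words, ending_len=3, n_frequent=2):
    ending_tags = defaultdict(Counter)
    word_tags = defaultdict(Counter)
    word_freq = Counter()
    for word, tag in tagged_words:
        ending_tags[word[-ending_len:]][tag] += 1
        word_tags[word][tag] += 1
        word_freq[word] += 1
    frequent = {w for w, _ in word_freq.most_common(n_frequent)}
    return ending_tags, word_tags, frequent

def tag(word, ending_tags, word_tags, frequent, ending_len=3, default="NN"):
    # Whole-word statistics only for the most frequent training words.
    if word in frequent:
        return word_tags[word].most_common(1)[0][0]
    ending = word[-ending_len:]
    if ending in ending_tags:
        return ending_tags[ending].most_common(1)[0][0]
    return default

train_data = [("running", "VBG"), ("walking", "VBG"), ("nothing", "NN"),
              ("the", "DT"), ("the", "DT"), ("quickly", "RB"), ("lovely", "JJ")]
et, wt, freq = train(train_data)
print(tag("jumping", et, wt, freq))   # tagged from the "ing" ending statistics
print(tag("the", et, wt, freq))       # tagged from whole-word statistics
```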
KSU KDD: Word Sense Induction by Clustering in Topic Space
We describe our language-independent unsupervised word sense induction system. This system uses only topic features to cluster different word senses in their global context topic space. Using unlabeled data, the system trains a latent Dirichlet allocation (LDA) topic model and then uses it to infer the topic distributions of the test instances. By clustering these topic distributions in their topic space we group the instances into different senses. Our hypothesis is that closeness in topic space reflects similarity between different word senses. This system participated in the SemEval-2 word sense induction and disambiguation task and achieved the second highest V-measure score among all systems.
2,010
Computation and Language
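A small end-to-end sketch of the pipeline as described: fit an LDA topic model on unlabelled contexts, represent each occurrence of the target word by its inferred topic distribution, and cluster those distributions into senses. It uses scikit-learn rather than whatever toolkit the KSU system used, and the contexts are invented; the cluster count would normally be induced rather than fixed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

# Contexts of the ambiguous target "bank" (invented examples).
contexts = [
    "the bank approved the loan and the mortgage",
    "deposit the money at the bank before the interest changes",
    "the bank raised the interest rate on savings accounts",
    "we walked along the river bank at sunset",
    "the fisherman sat on the grassy bank of the river",
    "erosion of the river bank after the flood",
]

counts = CountVectorizer(stop_words="english").fit_transform(contexts)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
topic_dist = lda.fit_transform(counts)          # one topic distribution per context

senses = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topic_dist)
print(senses)   # contexts grouped into induced senses of "bank"
```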
Structure-semantics interplay in complex networks and its effects on the predictability of similarity in texts
There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts mean those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between the various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of the quality of machine translated texts and authorship recognition. We shall show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the golden standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which combines semantics and structure in the comparison of texts, achieved the highest correlation with the NIST measurement, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, again the topological features were relevant in some contexts, though for the books and authors analyzed good results were obtained with semantic features as well. Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may be useful to enhance text classification considerably, as it combines well-established strategies.
2,012
Computation and Language
Statistical sentiment analysis performance in Opinum
The classification of opinion texts into positive and negative is becoming a subject of great interest in sentiment analysis. The existence of many labeled opinions motivates the use of statistical and machine-learning methods. First-order statistics have proven to be very limited in this field. The Opinum approach is based on the order of the words, without using any syntactic or semantic information. It consists of building one probabilistic model for the positive opinions and another one for the negative opinions. The test opinions are then compared to both models, and a decision and a confidence measure are calculated. In order to reduce the complexity of the training corpus we first lemmatize the texts and replace most named entities with wildcards. Opinum achieves an accuracy above 81% for Spanish opinions in the financial products domain. In this work we discuss the most important factors that impact classification performance.
2,013
Computation and Language
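A minimal sketch of the two-model idea, assuming lemmatisation and named-entity masking have already been applied: train an add-one-smoothed bigram model on positive opinions and another on negative ones, then classify a new opinion by the sign of the log-likelihood difference, whose magnitude serves as a confidence score. The training texts below are toy English examples, not the Spanish financial-domain corpus of the paper, and the smoothing scheme is a simplification.

```python
import math
from collections import Counter

class BigramLM:
    def __init__(self, texts):
        self.bigrams, self.unigrams = Counter(), Counter()
        for text in texts:
            tokens = ["<s>"] + text.split()
            self.unigrams.update(tokens)
            self.bigrams.update(zip(tokens, tokens[1:]))
        self.vocab_size = len(self.unigrams)

    def log_prob(self, text):
        tokens = ["<s>"] + text.split()
        lp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            num = self.bigrams[(prev, cur)] + 1          # add-one smoothing
            den = self.unigrams[prev] + self.vocab_size
            lp += math.log(num / den)
        return lp

pos = BigramLM(["great product works perfectly", "excellent service very happy"])
neg = BigramLM(["terrible product stopped working", "very disappointed bad service"])

def classify(text):
    confidence = pos.log_prob(text) - neg.log_prob(text)
    return ("positive" if confidence > 0 else "negative"), confidence

print(classify("excellent product very happy"))
print(classify("bad service very disappointed"))
```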