| question | text | source |
|---|---|---|
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
Indicate what type of constraints are (resp. are not) taken into account by the grammar $G$, and, for each c... | E.g., constraints may be semantic, rejecting "The apple is angry."; or syntactic, rejecting "Red is apple the." Constraints are often represented by a grammar. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
Indicate what type of constraints are (resp. are not) taken into account by the grammar $G$, and, for each c... | It is proposed that children will apply the morphophonological constraint to subclasses of alternating verbs that are all from the native class (monosyllabic). If the set of alternating verbs are not all from the native class, then the child will not apply the morphophonological constraint. This account correctly predi... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | Semeval-2012 task 6: A pilot on semantic textual similarity. E. Agirre, D. Cer, M. Diab, A. Gonzalez-Agirre. *SEM 2012: The First Joint Conference on Lexical and Computational Semantics–Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Sem... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We'd like to do some sentence topic classification using a Naive-Bayes model. Consider the following toy learning corpus, where each sentence has been assigned a topic, either "Medical" or "Computer":
\item Medical: plastic surgery process initial consultation can be scheduled by sending an email to the administration.... | There is an increasing interest in text mining and information extraction strategies applied to the biomedical and molecular biology literature due to the increasing number of electronically available publications stored in databases such as PubMed. Decision tree learning – Sentence extraction – Terminology extraction ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We'd like to do some sentence topic classification using a Naive-Bayes model. Consider the following toy learning corpus, where each sentence has been assigned a topic, either "Medical" or "Computer":
\item Medical: plastic surgery process initial consultation can be scheduled by sending an email to the administration.... | Here is another model, with a different set of issues. This is an implementation of an unsupervised Naive Bayes model for document clustering. That is, we would like to classify documents into multiple categories (e.g. "spam" or "non-spam", or "scientific journal article", "newspaper article about finance", "newspaper ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following sentence:
High-energy pulsed laser beams are used in soft-tissue surgery.
Using a 2-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), what is the probability of the above sentence? Provide your answer as a formula, but clearly explaining each ... | Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \(p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}} p_{KN}\)... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following sentence:
High-energy pulsed laser beams are used in soft-tissue surgery.
Using a 2-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), what is the probability of the above sentence? Provide your answer as a formula, but clearly explaining each ... | Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \(p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}} p_{KN}\)... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
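As a concrete illustration of the question above, here is a minimal sketch of the tokenization and the 2-gram chain-rule probability. The tokenizer regex, the count dictionaries, and the handling of the sentence start are assumptions of this sketch, not part of the original question.

```python
import re

def tokenize(sentence):
    # Split on whitespace and punctuation, including hyphens, as the
    # question specifies; punctuation itself is discarded in this sketch.
    return [t for t in re.split(r"[^\w]+", sentence) if t]

def bigram_probability(tokens, unigram_counts, bigram_counts):
    # Chain rule under a 2-gram model, with MLE estimates:
    # P(w1..wn) = P(w1) * prod_i P(wi | wi-1),
    # P(wi | wi-1) = c(wi-1, wi) / c(wi-1).
    # (Courses often condition w1 on a sentence-start marker instead;
    # the plain unigram estimate here is one possible convention.)
    total = sum(unigram_counts.values())
    p = unigram_counts.get(tokens[0], 0) / total
    for prev, cur in zip(tokens, tokens[1:]):
        denom = unigram_counts.get(prev, 0)
        p *= bigram_counts.get((prev, cur), 0) / denom if denom else 0.0
    return p

tokens = tokenize("High-energy pulsed laser beams are used in soft-tissue surgery.")
print(tokens)  # 11 tokens: hyphens and the final period are split away
```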
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | In ontologies designed to serve natural language processing (NLP) and natural language understanding (NLU) systems, ontology concepts are usually connected and symbolized by terms. This kind of connection represents a linguistic realization. Terms are words or a combination of words (multi-word units), in different lan... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Why is natural language processing difficult? Select all that apply. You will get a penalty for wrong answers. | Missing punctuation and the use of non-standard words can often hinder standard natural language processing tools such as part-of-speech tagging and parsing. Techniques to both learn from the noisy data and then to be able to process the noisy data are only now being developed. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Why is natural language processing difficult? Select all that apply. You will get a penalty for wrong answers. | Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However such systems are vulnerable to overfitting and require some kind of smoothing to be effective. Parsing algorithms for natural language cannot rely on the gram... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then... | I had just read an article on writing adventures, and I thought about doing my own article on adventure writing. I did start on the article, and one of the examples of how varied puzzles can be is a mathematical adventure where the player has to "use a probability function to cross a field of improbability to get to a ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then... | An individual or small business might have this publishing process: brainstorm content ideas to publish, where to publish, and when to publish; write each piece of content based on the publication schedule; edit each piece of content; publish each piece of content. A larger group might have this publishing process: brainsto... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? | Naive Bayes classifiers are a popular statistical technique of e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things), with spam and no... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? | In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can a... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following CFG
\(\text{S} \rightarrow \text{NP VP PNP}\)
\(\text{NP} \rightarrow \text{Det N}\)
\(\text{NP} \rightarrow \text{Det Adj N}\)
\(\text{VP} \rightarrow \text{V}\)
\(\text{VP} \rightarrow \text{Aux Ving}\)
\(\text{VP} \rightarrow \text{VP NP}\)
\(\text{VP} \rightarrow \text{VP PNP}\)
\(\text{PNP}... | This is an example grammar: S ⟶ NP VP; VP ⟶ VP PP; VP ⟶ V NP; VP ⟶ eats; PP ⟶ P NP; NP ⟶ Det N; NP ⟶ she; V ⟶ eats; P ⟶ with; N ⟶ fish; N ⟶ fork; Det ⟶ a | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
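Since every nonterminal rule in the example grammar above is binary and every lexical rule is unary, membership can be checked with a CYK-style recogniser without converting to Chomsky normal form. The test sentence ("she eats a fish with a fork") is our own illustration, not taken from the question.

```python
# CYK-style recogniser for the example grammar above.
lexical = {"she": {"NP"}, "eats": {"VP", "V"}, "a": {"Det"},
           "fish": {"N"}, "fork": {"N"}, "with": {"P"}}
binary = [("S", "NP", "VP"), ("VP", "VP", "PP"), ("VP", "V", "NP"),
          ("PP", "P", "NP"), ("NP", "Det", "N")]

def cyk(words):
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between i..k and k+1..j
                for lhs, b, c in binary:
                    if b in table[i][k] and c in table[k + 1][j]:
                        table[i][j].add(lhs)
    return table[0][n - 1]

print(cyk("she eats a fish with a fork".split()))  # contains 'S'
```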
Consider the following CFG
\(\text{S} \rightarrow \text{NP VP PNP}\)
\(\text{NP} \rightarrow \text{Det N}\)
\(\text{NP} \rightarrow \text{Det Adj N}\)
\(\text{VP} \rightarrow \text{V}\)
\(\text{VP} \rightarrow \text{Aux Ving}\)
\(\text{VP} \rightarrow \text{VP NP}\)
\(\text{VP} \rightarrow \text{VP PNP}\)
\(\text{PNP}... | Input: Received word y = ( y 0 , … , y 2 n − 1 ) {\displaystyle y=(y_{0},\dots ,y_{2^{n}-1})} For each i ∈ { 1 , … , n } {\displaystyle i\in \{1,\dots ,n\}}: Pick j ∈ { 0 , … , 2 n − 1 } {\displaystyle j\in \{0,\dots ,2^{n}-1\}} uniformly at random. Pick k ∈ { 0 , … , 2 n − 1 } {\displaystyle k\in \{0,\dots ,2^{n}-1\}}... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
We will consider ... | Naive Bayes classifiers are a popular statistical technique of e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things), with spam and no... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
We will consider ... | In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can a... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
Precisely define the type of grammar G is corresponding to (for that, consider at least the following aspect... | Chomsky, N. (1959). "On certain formal properties of grammars". Information and Control. 2 (2): 137–167. doi:10.1016/S0019-9958(59)90362-6.Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
Precisely define the type of grammar G is corresponding to (for that, consider at least the following aspect... | Generative grammars can be described and compared with the aid of the Chomsky hierarchy (proposed by Chomsky in the 1950s). This sets out a series of types of formal grammars with increasing expressive power. Among the simplest types are the regular grammars (type 3); Chomsky argues that these are not adequate as model... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The objective of this question is to illustrate the use of a lexical semantics resource to compute
lexical cohesion.
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns:
<ontology>
<node text='all'>
<children>
<node text='animate entities'>
... | Cohesion is analysed in the context of both lexical and grammatical as well as intonational aspects with reference to lexical chains and, in the speech register, tonality, tonicity, and tone. The lexical aspect focuses on sense relations and lexical repetitions, while the grammatical aspect looks at repetition of meani... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Select which statements are true regarding SCFGs. A penalty will be applied for any incorrect answers. | In languages that allow procedural parameters, the scoping rules are usually defined in such a way that procedural parameters are executed in their native scope. More precisely, suppose that the function actf is passed as argument to P, as its procedural parameter f; and f is then called from inside the body of P. Whil... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Select which statements are true regarding SCFGs. A penalty will be applied for any incorrect answers. | In computer languages it is expected that any truth-valued expression be permitted as the selection condition rather than restricting it to be a simple comparison. In SQL, selections are performed by using WHERE definitions in SELECT, UPDATE, and DELETE statements, but note that the selection condition can result in an... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a 3-gram language model. Select all possible ways we can compute the maximum likelihood of the word sequence: "time flies like an arrow". You will get a penalty for wrong ticks. | Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \(p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}} p_{KN}\)... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a 3-gram language model. Select all possible ways we can compute the maximum likelihood of the word sequence: "time flies like an arrow". You will get a penalty for wrong ticks. | Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \(p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}} p_{KN}\)... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
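Under a 3-gram model, the maximum-likelihood estimate of the sequence factors by the chain rule with a second-order Markov assumption. The sketch below only enumerates the conditional factors; the counts needed to estimate them are not part of this illustration, and the treatment of the first two words (plain unigram/bigram factors rather than start-of-sentence markers) is one possible convention.

```python
def trigram_factors(tokens):
    # Chain rule under a 3-gram (order-2 Markov) assumption:
    # P(w1..wn) = P(w1) * P(w2 | w1) * prod_{i>=3} P(wi | wi-2, wi-1)
    factors = [(tokens[0], ()), (tokens[1], (tokens[0],))]
    for i in range(2, len(tokens)):
        factors.append((tokens[i], (tokens[i - 2], tokens[i - 1])))
    return factors

for word, context in trigram_factors("time flies like an arrow".split()):
    print(f"P({word} | {' '.join(context) or '<start>'})")
```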
Following are token counts that appear in 3 documents (D1, D2, and D3):
D1 – tablet: 7; memory: 5; app: 8; sluggish: 7
D2 – memory: 5; app: 3
D3 – tablet: 3; sluggish: 3
Based on the cosine similarity, which 2 documents are the most similar?
| Cosine similarity is a widely used measure to compare the similarity between two pieces of text. It calculates the cosine of the angle between two document vectors in a high-dimensional space. Cosine similarity ranges between -1 and 1, where a value closer to 1 indicates higher similarity, and a value closer to -1 indi... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Following are token counts that appear in 3 documents (D1, D2, and D3):
D1 – tablet: 7; memory: 5; app: 8; sluggish: 7
D2 – memory: 5; app: 3
D3 – tablet: 3; sluggish: 3
Based on the cosine similarity, which 2 documents are the most similar?
| Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from 0 → 1 {\displaystyle 0\to 1} , since the term frequencies cannot be negative. This remains true when using TF-IDF weights. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Select what is true about the Baum-Welch algorithm. A penalty will be applied for any incorrect answers. | In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Select what is true about the Baum-Welch algorithm. A penalty will be applied for any incorrect answers. | The Baum–Welch algorithm is often used to estimate the parameters of HMMs in deciphering hidden or noisy information and consequently is often used in cryptanalysis. In data security an observer would like to extract information from a data stream without knowing all the parameters of the transmission. This can involve... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
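The expectation step of Baum-Welch runs the forward-backward algorithm; the sketch below shows only the forward pass, which already yields the likelihood of an observation sequence. All HMM parameters here are invented for illustration.

```python
# Forward pass of a toy 2-state HMM over the observation alphabet {0, 1}.
# Baum-Welch's E-step runs this pass (plus the symmetric backward pass)
# to collect the expected counts used in re-estimation.
pi = [0.6, 0.4]                   # initial state distribution (invented)
A = [[0.7, 0.3], [0.4, 0.6]]      # transition probabilities (invented)
B = [[0.9, 0.1], [0.2, 0.8]]      # emission probabilities (invented)

def forward_likelihood(obs):
    # alpha[t] = P(o1..ot, state t) summed out at the end
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
                 for t in range(len(pi))]
    return sum(alpha)

print(forward_likelihood([0, 1, 0]))  # ≈ 0.10893
```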
Consider the following toy corpus: the cat cut the hat
How many different bigrams of characters (including whitespace) do you have in that corpus? | The text identifier itself consists of multiple constituent parts. Sequences of whitespace are treated as equivalent to a single space. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following toy corpus: the cat cut the hat
How many different bigrams of characters (including whitespace) do you have in that corpus? | For example, the following two nine character long strings, FAREMVIEL and FARMVILLE, have 8 matching characters. 'F', 'A' and 'R' are in the same position in both strings. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Your aim is to evaluate a Tweet analysis system, the
purpose of which is to detect whether a tweet is offensive. For each Tweet processed, such a system outputs one of the following classes: "hateful",
"offensive" and "neutral".To perform your evaluation, you
collect a large set of Tweets and have it annotated by tw... | Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Your aim is to evaluate a Tweet analysis system, the
purpose of which is to detect whether a tweet is offensive. For each Tweet processed, such a system outputs one of the following classes: "hateful",
"offensive" and "neutral".To perform your evaluation, you
collect a large set of Tweets and have it annotated by tw... | Their work explains in detail an attempt to detect inauthentic texts and identify pernicious problems of inauthentic texts in cyberspace. The site has a means of submitting text that assesses, based on supervised learning, whether a corpus is inauthentic or not. Many users have submitted incorrect types of data and hav... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture):
...some picture...
Explanation of (some) tags:
\begin{center}
\begin{tabular}{l|l|l|l}
Tag & English expl. & Expl. française & Example(s) \\
\hline
JJ & Adjective & adjectif & yellow \... | In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture):
...some picture...
Explanation of (some) tags:
\begin{center}
\begin{tabular}{l|l|l|l}
Tag & English expl. & Expl. française & Example(s) \\
\hline
JJ & Adjective & adjectif & yellow \... | In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Using a 4-gram character model, comparing "banana" and "ananas"... | These models compare the letters of words rather than their phonetics. Dunn et al. studied 125 typological characters across 16 Austronesian and 15 Papuan languages. They compared their results to an MP tree and one constructed by traditional analysis. Significant differences were found. Similarly Wichmann and Saunders... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Using a 4-gram character model, comparing "banana" and "ananas"... | The results depended on the data set used. It was found that weighting the characters was important, which requires linguistic judgement. Saunders (2005) compared NJ, MP, GA and Neighbor-Net on a combination of lexical and typological data. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
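A minimal sketch of extracting the 4-grams of characters from the two words above; the comparison measure itself (e.g. overlap or Dice) is left open, as in the question.

```python
def char_ngrams(word, n=4):
    # All contiguous character n-grams of the word
    return {word[i:i + n] for i in range(len(word) - n + 1)}

a, b = char_ngrams("banana"), char_ngrams("ananas")
print(sorted(a))       # 4-grams of "banana"
print(sorted(b))       # 4-grams of "ananas"
print(sorted(a & b))   # shared 4-grams
```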
A query \(q\) has been submitted to two distinct Information Retrieval engines operating on the same document collection containing 1'000 documents, with 50 documents being truly relevant for \(q\). The following result lists have been produced by the two IR engines, \(S_1\) and \(S_2\) respectively:
\(S_1\text{:}\)
\(... | The mathematics of universal IR evaluation is a fairly new subject since the relevance metrics P,R,F,M were not analyzed collectively until recently (within the past decade). A lot of the theoretical groundwork has already been formulated, but new insights in this area await discovery. For a detailed mathematical analy... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A query \(q\) has been submitted to two distinct Information Retrieval engines operating on the same document collection containing 1'000 documents, with 50 documents being truly relevant for \(q\). The following result lists have been produced by the two IR engines, \(S_1\) and \(S_2\) respectively:
\(S_1\text{:}\)
\(... | For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of rele... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There
will be a penalty for wrong assertions ticked. Using a 3-gram character model, which of the following expressions are equal to \( P(\text{opossum}) \)? | /^.*?px/ will match the substring 165px in 165px 17px instead of matching 165px 17px. In certain implementations of the BASIC programming language, the ? | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There
will be a penalty for wrong assertions ticked. Using a 3-gram character model, which of the following expressions are equal to \( P(\text{opossum}) \)? | Similar to 1.03, 1.16 and 1.17. A very long demonstration was required here.) ✸2.16 (p → q) → (~q → ~p) (If it's true that "If this rose is red then this pig flies" then it's true that "If this pig doesn't fly then this rose isn't red.") | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Using the same set of transformations as in the previous question, what is the final value you get for the edit distance between execution and exceuton, i.e. D(execution, exceuton)? Give your answer as a numerical value. | To execute O2 after O1, O2 must be transformed against O1 to become: O2' = Delete, whose positional parameter is incremented by one due to the insertion of one character "x" by O1. Executing O2' on "xabc" deletes the correct character "c" and the document becomes "xab". However, if O2 is executed without transformation... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Using the same set of transformations as in the previous question, what is the final value you get for the edit distance between execution and exceuton, i.e. D(execution, exceuton)? Give your answer as a numerical value. | For the task of correcting OCR output, merge and split operations have been used which replace a single character with a pair of them or vice versa. Other variants of edit distance are obtained by restricting the set of operations. Longest common subsequence (LCS) distance is edit distance with insertion and deletion as... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
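Assuming the standard unit-cost operations (insertion, deletion, substitution); the question's "same set of transformations" refers to a previous question and may use different costs. Under unit costs, a dynamic-programming sketch gives D(execution, exceuton) = 3:

```python
def levenshtein(s, t):
    # Rolling one-row DP over the standard unit-cost edit-distance table:
    # d[j] holds the distance between the current prefix of s and t[:j].
    d = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (cs != ct))  # substitution/match
    return d[len(t)]

print(levenshtein("execution", "exceuton"))  # 3 under unit costs
```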
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others: using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real number between 0 and 1? | Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) \(x_i/N\), and the uniform probability \(1/d\). Invoking Laplace's rule of succession, some authors have argued that... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others: using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real number between 0 and 1? | Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian approach. Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
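With additive smoothing, the estimate \((x_i + \alpha)/(N + \alpha d)\) shrinks the relative frequency toward the uniform \(1/d\), so an unseen 4-gram gets probability \(\alpha/(N + \alpha d)\). A minimal sketch, with the observation total and event count invented for illustration:

```python
def additive_smoothed_prob(count, N, alpha, d):
    # Additive (Lidstone/Dirichlet) smoothing:
    #   P(event) = (count + alpha) / (N + alpha * d)
    # N = total observations, d = number of possible events.
    # An unseen event (count = 0) gets alpha / (N + alpha * d).
    return (count + alpha) / (N + alpha * d)

# Invented numbers: 1000 observed 4-grams, 26**4 possible values
N, d, alpha = 1000, 26 ** 4, 0.5
print(additive_smoothed_prob(0, N, alpha, d))
```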
A major specificity of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account in the NLP perspective?
(penalty for wrong ticks) | A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together,... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A major specificity of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account in the NLP perspective?
(penalty for wrong ticks) | A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together,... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture):
...some picture...
Explanation of (some) tags:
\begin{center}
\begin{tabular}{l|l|l|l}
Tag & English expl. & Expl. française & Example(s) \\
\hline
JJ & Adjective & adjectif & yellow \... | In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let \(A\), \(B\), and \(C\) denote the values stored by the Viterbi algorithm in the nodes associated with N, V, and Adj, respectively, for the word "time". If \(C > B > A\) and \(10 A \geq 9 C\), what would be the tag of "time" in the most probable tagging, if the tag of "control" is N (in the most probable t... | This version of the halting problem is among the simplest, most-easily described undecidable decision problems: Given an arbitrary positive integer n and a list of n+1 arbitrary words P1,P2,...,Pn,Q on the alphabet {1,2,...,n}, does repeated application of the tag operation t: ijX → XPi eventually convert Q into a word... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let \(A\), \(B\), and \(C\) denote the values stored by the Viterbi algorithm in the nodes associated with N, V, and Adj, respectively, for the word "time". If \(C > B > A\) and \(10 A \geq 9 C\), what would be the tag of "time" in the most probable tagging, if the tag of "control" is N (in the most probable t... | A statistical tagger looks for the most probable tag sequence for an ambiguously tagged text \(\sigma_1 \ldots \sigma_n\): \(\gamma_1^* \ldots \gamma_n^* = \operatorname{arg\,max}_{\gamma \in T(\sigma)}\, p(\gamma_1 \ldots \gamma_n \mid \sigma_1 \ldots \sigma_n)\) ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
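A minimal Viterbi decoder over a toy HMM shows what values like \(A\), \(B\), and \(C\) are: the probability of the best tag sequence ending in each tag at a given word. All probabilities below are invented for illustration, not the course's figures.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observations.
    V[t][s] holds the probability of the best path ending in state s at
    position t (the role played by A, B, C in the question)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] * trans_p[p][s])
            V[t][s] = V[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

# Toy model with invented probabilities
states = ["N", "V"]
start_p = {"N": 0.6, "V": 0.4}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit_p = {"N": {"time": 0.5, "flies": 0.2}, "V": {"time": 0.1, "flies": 0.6}}
```

The backpointers are what make it possible to read off the most probable tagging once the final column of values is known.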
How is it possible to compute the average Precision/Recall curves? Explain in detail the
various steps of the computation. | Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
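One standard way to average Precision/Recall curves, sketched below: interpolate each query's precision at fixed recall levels (taking the max precision at any rank whose recall meets the level), then average the interpolated curves element-wise across queries. The query data here is invented.

```python
def interpolated(pr_points, levels):
    """Interpolated precision: at each recall level, take the max precision
    achieved at any rank with recall >= that level."""
    return [max((p for r, p in pr_points if r >= lvl), default=0.0)
            for lvl in levels]

def average_curve(per_query_points, levels):
    """Average the interpolated curves element-wise across queries."""
    curves = [interpolated(pts, levels) for pts in per_query_points]
    return [sum(vals) / len(curves) for vals in zip(*curves)]

levels = [0.0, 0.5, 1.0]
# (recall, precision) measured at each rank, for two invented queries
q1 = [(0.5, 1.0), (0.5, 0.5), (1.0, 2 / 3)]
q2 = [(1.0, 1.0)]
```

In practice the standard choice is eleven recall levels (0.0, 0.1, ..., 1.0), but any fixed grid works the same way.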
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
What type of rules does the provided grammar $G$ consist of?
What type of rules should $G$ be complemented w... | Context-free grammars are represented as a set of rules inspired from attempts to model natural languages. The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal { a , b } {\displaystyle \left\{a,b\right\}} and non-terminal S symbols and a bla... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
What type of rules does the provided grammar $G$ consist of?
What type of rules should $G$ be complemented w... | An intermediate class of grammars known as conjunctive grammars allows conjunction and disjunction, but not negation. The rules of a Boolean grammar are of the form A → α 1 & … & α m & ¬ β 1 & … & ¬ β n {\displaystyle A\to \alpha _{1}\And \ldots \And \alpha _{m}\And \lnot \beta _{1}\And \ldots \And \lnot \beta _{n}} wh... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider 3 regular expressions \(A\), \(B\), and \(C\), such that: the set of strings recognized by each of the regular expressions is non-empty; the set of strings recognized by \(B\) is included in the set of strings recognized by \(A\); some strings are recognized simultaneously by \(A\) and by \(C\); and no string is ... | matches any character. For example, a.b matches any string that contains an "a", and then any character and then "b". a.*b matches any string that contains an "a", and then the character "b" at some later point. These constructions can be combined to form arbitrarily complex expressions, much like one can construct ari... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider 3 regular expressions \(A\), \(B\), and \(C\), such that: the set of strings recognized by each of the regular expressions is non-empty; the set of strings recognized by \(B\) is included in the set of strings recognized by \(A\); some strings are recognized simultaneously by \(A\) and by \(C\); and no string is ... | Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined as regular expression... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Assume that the texts to be tagged contain 1.5% of unknown words and that the performance
of the tagger to be used is 98% on known words.
What will be its typical overall performance in the following situation:
all unknown words are systematically wrongly tagged? | However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at p... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Assume that the texts to be tagged contain 1.5% of unknown words and that the performance
of the tagger to be used is 98% on known words.
What will be its typical overall performance in the following situation:
all unknown words are systematically wrongly tagged? | However, many significant taggers are not included (perhaps because of the labor involved in reconfiguring them for this particular dataset). Thus, it should not be assumed that the results reported here are the best that can be achieved with a given approach; nor even the best that have been achieved with a given appr... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
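The arithmetic behind this sub-question is easy to check directly: known words make up 98.5% of the tokens and are tagged with 98% accuracy, while the 1.5% of unknown words are always wrong.

```python
known_share, unknown_share = 0.985, 0.015
acc_known, acc_unknown = 0.98, 0.0  # unknown words systematically wrongly tagged
overall = known_share * acc_known + unknown_share * acc_unknown
# overall is roughly 0.965, i.e. about 96.5% typical overall performance
```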
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct
at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. The duke were also presented with a book
commemorated his visit’s mother. | Second, if the wrong answers were blind guesses, there would be no information to be found among these answers. On the other hand, if wrong answers reflect interpretation departures from the expected one, these answers should show an ordered relationship to whatever the overall test is measuring. This departure should ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct
at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. The duke were also presented with a book
commemorated his visit’s mother. | The second sentence is an echo question; it would be uttered only after receiving an unsatisfactory or confusing answer to a question. One could replace the word wen (which indicates that this sentence is a question) with an identifier such as Mark: 'Kate liebt Mark?' . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct
at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. The mouse lost a feather as it took off. | Failing both of the above, capture a box that touches at least one other box held by any player. Any time a contestant answers a question incorrectly, other than on the first question or any puzzle, that player is locked out from answering for two questions (originally three). If a question was answered incorrectly, pla... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct
at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked. The mouse lost a feather as it took off. | Also, scrolling text too fast can make it unreadable to some people, particularly those with visual impairments. This can easily frustrate users. To combat this, client-side scripting allows marquees to be programmed to stop when the mouse is over them. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following document:
D = 'the exports from Switzerland to the USA are increasing in 2006'
Propose a possible indexing set for this document. Justify your answer. | As it happens, ημν = ημν. This is referred to as raising an index. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following document:
D = 'the exports from Switzerland to the USA are increasing in 2006'
Propose a possible indexing set for this document. Justify your answer. | More than 3,000 academic papers used data from the index. The effect of improving regulations on economic growth is claimed to be very strong. Moving from the worst one-fourth of nations to the best one-fourth implies a 2.3 percentage point increase in annual growth. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
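One plausible indexing set for D keeps the content-bearing terms and drops function words. The stop-word list below is a hypothetical choice made for illustration, not a prescribed one; a real pipeline would typically also stem or lemmatize the remaining tokens.

```python
STOP_WORDS = {"the", "from", "to", "are", "in"}  # hypothetical stop list

def index_set(document):
    """Lowercase, tokenize on whitespace, drop stop words."""
    return {tok for tok in document.lower().split() if tok not in STOP_WORDS}

D = "the exports from Switzerland to the USA are increasing in 2006"
```

The resulting set {exports, switzerland, usa, increasing, 2006} is one defensible answer: each kept term discriminates the document's topic, while the dropped words carry no indexing value.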
Consider the following grammar:
S -> NP VP
NP -> Det N
VP -> VBe Adj
NP -> NP PP
VP -> V
N -> Adj N
VP -> VP PP
Adj -> Adj PP
V -> VBe
Adj -> Ving
PP -> Prep NP
and the following lexicon:
at:Prep is:VBe old:Adj
black:Adj looking:Ving the:Det
cat:N mouse:N under:Prep
former:Adj nice:Adj with:Prep
This grammar also ... | Like with all other types of phrases, theories of syntax render the syntactic structure of adpositional phrases using trees. The trees that follow represent adpositional phrases according to two modern conventions for rendering sentence structure, first in terms of the constituency relation of phrase structure grammars... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Select what statements are true about probabilistic parsing. A penalty will be applied for any wrong answers selected. | (The probability associated with a grammar rule may be induced, but the application of that grammar rule within a parse tree and the computation of the probability of the parse tree based on its component rules is a form of deduction.) Using this concept, statistical parsers make use of a procedure to search over a spa... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What are possible morphological analyses of "drinks"? (Penalty for wrong ticks) | "Spatiotemporal variation in a Lyme disease host and vector: black-legged ticks on white-footed mice". Vector-Borne and Zoonotic Diseases. 1 (2): 129–138. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What are possible morphological analyses of "drinks"? (Penalty for wrong ticks) | 2014. Bat ticks revisited: Ixodes ariadnae sp. nov. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider two Information Retrieval systems S1 and S2 that produced the following outputs for
the 4 reference queries q1, q2, q3, q4:
S1: | referential:
q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04
q2: d06 dXX dXX dXX dXX | q2: d05 d06
q3: dXX d07 d09 ... | Written as a formula: r e l e v a n t _ r e t r i e v e d _ i n s t a n c e s a l l _ r e l e v a n t _ i n s t a n c e s {\displaystyle {\frac {relevant\_retrieved\_instances}{all\_{\mathbf {relevant}}\_instances}}} . Both precision and recall are therefore based on relevance. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider two Information Retrieval systems S1 and S2 that produced the following outputs for
the 4 reference queries q1, q2, q3, q4:
S1: | referential:
q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04
q2: d06 dXX dXX dXX dXX | q2: d05 d06
q3: dXX d07 d09 ... | Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
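Per-query precision and recall for outputs like those of S1 can be computed directly from the listings above; here dXX stands for an irrelevant document, so the list is counted positionally rather than as a set.

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / nb retrieved;
    recall = relevant retrieved / nb relevant."""
    rel = set(relevant)
    hits = sum(1 for d in retrieved if d in rel)
    return hits / len(retrieved), hits / len(relevant)

# S1 on q1, taken from the row above (dXX = some irrelevant document)
p, r = precision_recall("d01 d02 d03 d04 dXX dXX dXX dXX".split(),
                        "d01 d02 d03 d04".split())
```

For S1 on q1 this gives precision 4/8 and recall 4/4; averaging such per-query figures is how the two systems would then be compared.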
Give some arguments justifying why evaluation is especially important for NLP. In particular, explain the role of evaluation when a corpus-based approach is used. | As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems: Task-based (extrinsic) evaluation: give the generated text to a person, and assess how well it helps them perform a ta... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Provide a formal definition of a transducer. Give some good reasons to use such a tool for morphological processing. | Finite State Transducers (FSTs) are a popular technique for the computational handling of morphology, esp., inflectional morphology. In rule-based morphological parsers, both lexicon and rules are normally formalized as finite state automata and subsequently combined. They thus require morphological dictionaries with s... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Provide a formal definition of a transducer. Give some good reasons to use such a tool for morphological processing. | A transducer is a device that takes energy from one domain as input and converts it to another energy domain as output. They are often reversible, but are rarely used in that way. Transducers have many uses and there are many kinds, in electromechanical systems they can be used as actuators and sensors. In audio electr... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider we use the set of transformations: insertion, deletion, substitution, and transposition. We want to compute the edit distance between words execution and exceuton, i.e. D(execution, exceuton). When computing the above, what is the value you get for D(exec,exce)? Give your answer as a numerical value. | The closeness of a match is measured in terms of the number of primitive operations necessary to convert the string into an exact match. This number is called the edit distance between the string and the pattern. The usual primitive operations are: insertion: cot → coat deletion: coat → cot substitution: coat → cost. The... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider we use the set of transformations: insertion, deletion, substitution, and transposition. We want to compute the edit distance between words execution and exceuton, i.e. D(execution, exceuton). When computing the above, what is the value you get for D(exec,exce)? Give your answer as a numerical value. | There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance, the Levenshtein distance allows deletion, insertion and substitution; the Damerau–Levenshtein distance allows insertion, deletion, substitution, and the transposition of two adjacent... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
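With insertion, deletion, substitution, and transposition as the allowed operations, the distance asked for is the (restricted) Damerau–Levenshtein distance. A standard dynamic-programming sketch:

```python
def damerau_levenshtein(a, b):
    """Restricted edit distance: insertion, deletion, substitution,
    and transposition of two adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

For the sub-question, D(exec, exce) = 1 (one transposition of the adjacent "ec"), whereas plain Levenshtein distance without transposition would give 2.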
You are responsible for a project aiming at providing on-line recommendations to the customers of
an on-line book-selling company.
The general idea behind this recommendation system is to cluster books according to both customer
and content similarities, so as to propose books similar to the books already bought by a g... | One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into rather than defining what the categories are in advance. Single membership models: these models automatically cluster texts into different categories that are mutually exclusive, and documents ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are responsible for a project aiming at providing on-line recommendations to the customers of
an on-line book-selling company.
The general idea behind this recommendation system is to cluster books according to both customer
and content similarities, so as to propose books similar to the books already bought by a g... | Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others: If a 4-gram has a probability estimated to be \( p \) with Maximum-Likelihood estimation, what would
be its probability if estimated using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \al... | Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) x i / N {\textstyle \textstyle {x_{i}/N}} , and the uniform probability 1 / d {\textstyle \textstyle {1/d}} . Invoking Laplace's rule of succession, some authors have argued that... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others: If a 4-gram has a probability estimated to be \( p \) with Maximum-Likelihood estimation, what would
be its probability if estimated using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \al... | Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian approach. Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following SCFG with the following probabilities:
S → NP VP    0.8
S → NP VP PP    0.2
VP → Vi    {a1}
VP → Vt NP    {a2}
VP → VP PP    a
NP → Det NN    0.3
NP → NP PP    0.7
PP → Prep NP    1.0
Vi → sleeps    1.0
Vt → saw    1.0
NN → man    {b1}
NN → dog    b
NN → telescope ... | \(\Pr(G=T,S=T,R=T)=\Pr(G=T\mid S=T,R=T)\,\Pr(S=T\mid R=T)\,\Pr(R=T)=0.99\times 0.01\times 0.2=0.00198.\) Then the numerical results (subscripted by the associated variable values) are \(\Pr(R=T\mid G=T)=\frac{0.00198_{TTT}+0.1584_{TFT}}{0.00198_{TTT}+0.288_{TTF}+0.1584_{T}}\) ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following SCFG with the following probabilities:
S → NP VP    0.8
S → NP VP PP    0.2
VP → Vi    {a1}
VP → Vt NP    {a2}
VP → VP PP    a
NP → Det NN    0.3
NP → NP PP    0.7
PP → Prep NP    1.0
Vi → sleeps    1.0
Vt → saw    1.0
NN → man    {b1}
NN → dog    b
NN → telescope ... | One question is whether to treat the range of obtained values for \(\left|M^{0\nu}\right|\) as the theoretical uncertainty and whether this is then to be understood as a statistical uncertainty. Different approaches are being chosen here. The obtained values for \(\left|M^{0\nu}\right|\)... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
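In an SCFG, the probability of a parse tree is the product of the probabilities of the rules used in it. A minimal sketch: the rule table below mixes the grammar's numeric probabilities with invented values for the free parameters (a1, b1, and the Det lexicon entry are hypothetical stand-ins).

```python
from functools import reduce

rule_probs = {
    ("S", ("NP", "VP")): 0.8,
    ("NP", ("Det", "NN")): 0.3,
    ("VP", ("Vi",)): 0.5,      # hypothetical value for a1
    ("Vi", ("sleeps",)): 1.0,
    ("NN", ("man",)): 0.4,     # hypothetical value for b1
    ("Det", ("the",)): 1.0,    # lexicon entry assumed for illustration
}

def tree_prob(tree):
    """tree = (label, [children]); a leaf is a plain string."""
    if isinstance(tree, str):
        return 1.0
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    return rule_probs[(label, rhs)] * reduce(
        lambda acc, c: acc * tree_prob(c), children, 1.0)

t = ("S", [("NP", [("Det", ["the"]), ("NN", ["man"])]),
           ("VP", [("Vi", ["sleeps"])])])
```

With these illustrative numbers, the tree for "the man sleeps" has probability 0.8 × 0.3 × 1.0 × 0.4 × 0.5 × 1.0 = 0.048.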
What is the formal relation between accuracy and the error rate? In which cases would you recommend using one or the other? | A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured. Whichever metric is used, however, one major t... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What is the formal relation between accuracy and the error rate? In which cases would you recommend using one or the other? | Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements (observations or readings) are to their true value, while precision is how close the measurements are to each other. In other words, precision is a description of random errors, a measure of statistical var... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following CF grammar \(G_1\)
\( R_1: \text{S} \rightarrow \text{NP VP} \)
\( R_2: \text{S} \rightarrow \text{NP VP PNP} \)
\( R_3: \text{PNP} \rightarrow \text{Prep NP} \)
\( R_4: \text{NP} \rightarrow \text{N} \)
\( R_5: \text{NP} \rightarrow \text{Det N} \)
\( R_6: \text{NP} \rightarrow \text{Det N PNP}... | A probabilistic context free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal S ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following CF grammar \(G_1\)
\( R_1: \text{S} \rightarrow \text{NP VP} \)
\( R_2: \text{S} \rightarrow \text{NP VP PNP} \)
\( R_3: \text{PNP} \rightarrow \text{Prep NP} \)
\( R_4: \text{NP} \rightarrow \text{N} \)
\( R_5: \text{NP} \rightarrow \text{Det N} \)
\( R_6: \text{NP} \rightarrow \text{Det N PNP}... | Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple: G = ( M , T , R , S , P ) {\displaystyle G=(M,T,R,S,P)} where M is the set of non-terminal symbols T is the set of terminal symbols R is the set of production rules S is the start symbol P is the set of probabilities on production r... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then... | An individual or small business might have this publishing process: Brainstorm content ideas to publish, where to publish, and when to publish Write each piece of content based on the publication schedule Edit each piece of content Publish each piece of contentA larger group might have this publishing process: Brainsto... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then... | The main drawback of the evaluation systems so far is that we need a reference summary (for some methods, more than one), to compare automatic summaries with models. This is a hard and expensive task. Much effort has to be made to create corpora of texts and their corresponding summaries. Furthermore, some methods requ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Why is natural language processing difficult?
Select all that apply. A penalty will be applied for wrong answers. | Missing punctuation and the use of non-standard words can often hinder standard natural language processing tools such as part-of-speech tagging and parsing. Techniques to both learn from the noisy data and then to be able to process the noisy data are only now being developed. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Why is natural language processing difficult?
Select all that apply. A penalty will be applied for wrong answers. | Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However such systems are vulnerable to overfitting and require some kind of smoothing to be effective.Parsing algorithms for natural language cannot rely on the gram... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following CF grammar \(G_1\)
\( R_1: \text{S} \rightarrow \text{NP VP} \)
\( R_2: \text{S} \rightarrow \text{NP VP PNP} \)
\( R_3: \text{PNP} \rightarrow \text{Prep NP} \)
\( R_4: \text{NP} \rightarrow \text{N} \)
\( R_5: \text{NP} \rightarrow \text{Det N} \)
\( R_6: \text{NP} \rightarrow \text{Det N PNP}... | Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple: G = ( M , T , R , S , P ) {\displaystyle G=(M,T,R,S,P)} where M is the set of non-terminal symbols T is the set of terminal symbols R is the set of production rules S is the start symbol P is the set of probabilities on production r... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following CF grammar \(G_1\)
\( R_1: \text{S} \rightarrow \text{NP VP} \)
\( R_2: \text{S} \rightarrow \text{NP VP PNP} \)
\( R_3: \text{PNP} \rightarrow \text{Prep NP} \)
\( R_4: \text{NP} \rightarrow \text{N} \)
\( R_5: \text{NP} \rightarrow \text{Det N} \)
\( R_6: \text{NP} \rightarrow \text{Det N PNP}... | A probabilistic context free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal S ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
What is the number $N$ of additional rules that should be added to $G$ to make it applicable to any sequence... | Thus, a prescribed sequence controlled grammar is at least approximately a 5-tuple G = ( N , T , S , P , R ) {\displaystyle G=(N,T,S,P,R)} where everything except R is the same as in a CFG, and R is an infinite set of valid derivation sequences p 1 p 2 . . . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the (toy) grammar $G$ consisting of the following rules:
R1: S --> NP VP
R2: NP --> NN
R3: NP --> Det NN
R4: NN --> N
R5: NN --> NN NN
R6: NN --> NN PNP
R7: PNP --> Prep NP
R8: VP --> V
R9: VP --> Adv V
What is the number $N$ of additional rules that should be added to $G$ to make it applicable to any sequence... | The theorem can be used in analytic combinatorics to estimate the number of words of length n generated by a given unambiguous context-free grammar, as n grows large. The following example is given by Gruber, Lee & Shallit (2012): the unambiguous context-free grammar G over the alphabet {0,1} has start symbol S and the... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |