file
stringclasses
75 values
start
int64
12
8.27k
end
int64
49
8.32k
label
stringclasses
4 values
user
stringclasses
5 values
text
stringlengths
2
4.16k
paper_100.txt
594
966
Coherence
Ed
Zhang et al. (2019) improves an LSTM- based encoder-decoder model with online vocabulary adaptation. For abbreviated pinyin, CoCAT (Huang et al., 2015) uses machine translation technology to reduce the number of the typing letters. Huang and Zhao (2018) propose an LSTM-based encoder-decoder approach with the concatenation of context words and abbreviated pinyin as input
paper_100.txt
1,110
1,646
Coherence
Ed
In addition, there are some works handling pinyin with typing errors. Chen and Lee (2000) investigate a typing model which handles spelling correction in sentence-based pinyin input method. CHIME (Zheng et al., 2011) is a error-tolerant Chinese pinyin input method. It finds similar pinyin which will be further ranked with Chinese specific features. Jia and Zhao (2014) propose a joint graph model to globally optimize the tasks of pinyin input method and typo correction. We leave error-tolerant pinyin input method as a future work.
paper_100.txt
2,088
2,529
Coherence
Ed
Zhang et al. (2021a) add a pinyin embedding layer and learns to predict characters from similarly pronounced candidates. PLOME (Liu et al., 2021) add two embedding layers implemented with two GRU networks to inject both pinyin and shape of characters, respectively. Xu et al. (2021) add a hierarchical encoder to inject the pinyin letters at character and sentence levels, and add a ResNet encoder to use graphic features of character image.
paper_100.txt
2,471
2,478
Unsupported claim
Ed
ResNet
paper_100.txt
1,929
1,934
Unsupported claim
Ed
BERT
paper_100.txt
1,815
1,820
Unsupported claim
Ed
BERT
paper_11.txt
797
960
Unsupported claim
Ed
alternative end-to-end approach that can tackle the problem purely cross-lingually, i.e., without involving MT, would clearly be more efficient and cost-effective
paper_13.txt
3,365
3,395
Unsupported claim
Ed
In contrast to most prior work
paper_13.txt
14
105
Unsupported claim
Ed
Few-shot learning is the problem of learning classifiers with only a few training examples.
paper_13.txt
445
932
Lacks synthesis
Ed
In recent years, there has been a surge in zeroshot and few-shot approaches to text classification. One approach (Yin et al., 2019, 2020; Halder et al., 2020;Wang et al., 2021) makes use of entailment models. Textual entailment (Dagan et al., 2006), also known as natural language inference (NLI) (Bowman et al., 2015), is the problem of predicting whether a textual premise implies a textual hypothesis in a logical sense. For example, Emma loves apples implies that Emma likes apples.
paper_13.txt
933
1,190
Lacks synthesis
Ed
The entailment approach for text classification sets the input text as the premise and the text repre-senting the label as the hypothesis. A NLI model is applied to each input pair and the entailment probability is used to identify the best matching label.
paper_13.txt
1,713
1,881
Unsupported claim
Ed
In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label.
paper_14.txt
182
310
Unsupported claim
Ed
Unfortunately, for many languages, and especially low-resource languages, such taskspecific labelled data is often not available
paper_14.txt
2,549
2,644
Unsupported claim
Ed
as this is the only task for which high-quality data is available in a large number of language
paper_14.txt
2,833
2,980
Unsupported claim
Ed
a base understanding of syntactic structure in both the source and target language is necessary for any meaningful natural language processing task
paper_15.txt
897
902
Format
Ed
2021)
paper_15.txt
14
105
Unsupported claim
Ed
Multimodal machine translation is a cross-domain task in the filed of machine translation.
paper_16.txt
713
1,264
Lacks synthesis
Ed
Researchers recently explore the peer review domain data for a few tasks, such as PeerRead (Kang et al., 2018) for paper decision predictions, AM-PERE for proposition classification in reviews, and RR (Cheng et al., 2020) for paired-argument extraction from review-rebuttal pairs. Additionally, a meta-review dataset is introduced by Bhatia et al. (2020) without any annotation. There are also some explorations on research articles (Teufel et al., 1999;Liakata et al., 2010;Lauscher et al., 2018), which differ in nature from the peer review domain.
paper_16.txt
13
416
Coherence
Ed
To facilitate the study of text summarization, earlier datasets are mostly in the news domain with relatively short input passages, such as NYT (Sandhaus, 2008), Gigaword (Napoles et al., 2012), CNN/Daily Mail (Hermann et al., 2015), NEWSROOM (Grusky et al., 2018) and XSUM (Narayan et al., 2018). Datasets for long docu-ments include Sharma et al. (2019), Cohan et al. (2018), andFisas et al. (2016).
paper_17.txt
14
1,286
Lacks synthesis
Ed
Fully supervised event extraction. Event extraction has been studied for over a decade (Ahn, 2006;Ji and Grishman, 2008) and most traditional event extraction works follow the fully supervised setting (Nguyen et al., 2016;Sha et al., 2018;Nguyen and Nguyen, 2019;Yang et al., 2019;Lin et al., 2020;Li et al., 2020). Many of them use classification-based models and use pipeline-style frameworks to extract events (Nguyen et al., 2016;Yang et al., 2019;Wadden et al., 2019). To better leverage shared knowledge in event triggers and arguments, some works propose to incorporate global features to jointly decide triggers and arguments (Lin et al., 2020;Li et al., 2013;Yang and Mitchell, 2016). Recently, few generation-based event extraction models have been proposed. TANL (Paolini et al., 2021) treats event extraction as translation tasks between augmented natural languages. Their predicted targetaugmented language embed labels into the input passage via using brackets and vertical bar symbols, hindering the model from fully leveraging label semantics. BART-Gen is also a generation-based model focusing on documentlevel event argument extraction. Yet, similar to TANL, they solve event extraction with a pipeline, which prevents knowledge sharing across subtasks.
paper_17.txt
1,726
2,044
Lacks synthesis
Ed
Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence
paper_17.txt
1,726
2,044
Coherence
Ed
Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence
paper_17.txt
1,074
1,169
Unsupported claim
Ed
BART-Gen is also a generation-based model focusing on documentlevel event argument extraction.
paper_17.txt
1,397
1,587
Unsupported claim
Ed
However, their designs are not specific for low-resource scenarios, hence, these models can not enjoy all the benefits that DEGREE obtains for low-resource event extraction at the same time,
paper_18.txt
952
970
Format
Ed
(Li et al., 2020a;
paper_18.txt
3,495
3,574
Unsupported claim
Ed
three large-scale benchmark datasets (OntoNotes V4.0, OntoNotes V5.0, and MSRA)
paper_18.txt
3,769
3,792
Unsupported claim
Ed
medical dataset (CBLUE)
paper_19.txt
1,566
1,654
Unsupported claim
Ed
shown promising for AL in NLP due to its good qualitative and computational performance
paper_19.txt
1,801
1,824
Format
Ed
Shelmanov et al. (2021
paper_20.txt
252
631
Coherence
Ed
Following Chen et al. (2020c), other works adopt PLMs for few-shot D2T generation (Chang et al., 2021b;Su et al., 2021a). Kale and Rastogi (2020b) and Ribeiro et al. (2020) showed that PLMs using linearized representations of data can outperform graph neural networks on graph-to-text datasets, recently surpassed again by graph-based models (Ke et al., 2021;Chen et al., 2020a)
paper_20.txt
3,514
3,533
Format
Ed
Jiang et al., 2020)
paper_20.txt
1,781
1,870
Unsupported claim
Ed
Recently, have shown that using a content plan leads to improved quality of PLM outputs.
paper_20.txt
39
631
Lacks synthesis
Ed
Large neural language models pretrained on self-supervised tasks (Lewis et al., 2020;Liu et al., 2019;Devlin et al., 2019) have recently gained a lot of traction in D2T generation research (Ferreira et al., 2020). Following Chen et al. (2020c), other works adopt PLMs for few-shot D2T generation (Chang et al., 2021b;Su et al., 2021a). Kale and Rastogi (2020b) and Ribeiro et al. (2020) showed that PLMs using linearized representations of data can outperform graph neural networks on graph-to-text datasets, recently surpassed again by graph-based models (Ke et al., 2021;Chen et al., 2020a)
paper_20.txt
1,003
1,051
Format
Ed
(Heidari et al., 2021;Kale and Rastogi, 2020a;.
paper_20.txt
2,107
2,491
Lacks synthesis
Ed
Sentence ordering is the task of organizing a set of natural language sentences to increase the coherence of a text (Barzilay et al., 2001;Lapata, 2003). Several neural methods for this task were proposed, using either interactions between pairs of sentences Li and Jurafsky, 2017), global interactions (Gong et al., 2016;Wang and Wan, 2019), or combination of both (Cui et al., 2020)
paper_20.txt
4,026
4,047
Format
Ed
(Botha et al., 2018;.
paper_22.txt
169
179
Unsupported claim
Ed
STS tasks
paper_22.txt
635
964
Coherence
Ed
Specifically, Reimers and Gurevych (2019) mainly use the classification objective for an NLI dataset, and Wu et al. (2020) adopt contrastive learning to utilize self-supervision from a large corpus. Yan et al. (2021); Gao et al. (2021) incorporate a parallel corpus such as NLI datasets into their contrastive learning framework.
paper_22.txt
1,101
1,201
Unsupported claim
Ed
One related task is interpretable STS, which aims to predict chunk alignment between two sentences .
paper_22.txt
1,291
1,313
Format
Ed
(Konopík et al., 2016;
paper_22.txt
1,863
1,881
Format
Ed
(Li et al., 2020;
paper_22.txt
2,408
2,746
Coherence
Ed
. To get the solution efficiently, Cuturi (2013) provides a regularizer inspired by a probabilistic theory and then uses Sinkhorn's algorithm. Kusner et al. (2015) relax the problem to get the quadratic-time solution by removing one of the constraints, and Wu et al. (2018) introduce a kernel method to approximate the optimal transport.
paper_23.txt
1,377
1,380
Unsupported claim
Ed
GPT
paper_23.txt
1,382
1,387
Unsupported claim
Ed
GPT-3
paper_23.txt
1,854
1,967
Unsupported claim
Ed
Discriminator-based methods alleviate the training cost problem, as discriminators are easier to train than a LM.
paper_23.txt
3,910
3,915
Unsupported claim
Ed
GPT-3
paper_23.txt
4,275
4,299
Unsupported claim
Ed
myopic decoding strategy
paper_23.txt
2,272
2,283
Unsupported claim
Ed
beam search
paper_23.txt
3,593
3,626
Unsupported claim
Ed
pre-trained transformer-based LM
paper_24.txt
767
1,355
Lacks synthesis
Ed
It is crucial to understand human morality to develop beneficial AI (Soares and Fallenstein, 2017;Russell, 2019). As artificial agents live and operate among humans (Akata et al., 2020), they must be able to comprehend and recognize the moral values that drive the differences in human behavior (Gabriel, 2020). The ability to understand moral rhetoric can be instrumental for, e.g., facilitating human-agent trust (Chhogyal et al., 2019;Mehrotra et al., 2021) and engineering value-aligned sociotechnical systems (Murukannaiah et al., 2020;Serramia et al., 2020;Montes and Sierra, 2021).
paper_24.txt
1,357
1,804
Lacks synthesis
Ed
There are survey instruments to estimate individual value profiles (Schwartz, 2012;Graham et al., 2013). However, reasoning about moral values is challenging for humans (Le Dantec et al., 2009;Pommeranz et al., 2012). Further, in practical applications, e.g., to conduct meaningful conversations (Tigunova et al., 2019) or to identify online trends (Mooijman et al., 2018), artificial agents should be able to understand moral rhetoric on the fly.
paper_24.txt
1,922
1,945
Format
Ed
Mooijman et al., 2018;
paper_24.txt
1,806
2,218
Lacks synthesis
Ed
The growing capabilities of natural language processing (NLP) enable the estimation of moral rhetoric from discourse Mooijman et al., 2018;Rezapour et al., 2019;Hoover et al., 2020;Araque et al., 2020). Value classifiers can be used to identify the moral values underlying a piece of text on the fly. For instance, Mooijman et al. (2018) show that detecting moral values from tweets can predict violent protests.
paper_24.txt
2,220
2,353
Unsupported claim
Ed
Existing value classifiers are evaluated on a specific dataset, without re-training or testing the classifier on a different dataset.
paper_24.txt
2,354
2,494
Unsupported claim
Ed
This shows the ability of the classifier to predict values from text, but not the ability to transfer the learned knowledge across datasets.
paper_24.txt
4,396
4,424
Format
Ed
(BERT Devlin et al. (2019))
paper_37.txt
1,213
1,280
Format
Ed
Radford et al., 2021;Schick and Schütze, 2020a,b;Brown et al., 2020
paper_37.txt
1,544
1,572
Format
Ed
Schick and Schütze, 2020a,b)
paper_37.txt
992
1,071
Unsupported claim
Ed
they are impractical to use in real-world applications due to their model sizes
paper_37.txt
1,100
1,692
Lacks synthesis
Ed
Providing prompts or task descriptions play an vital role in improving pre-trained language models in many tasks Radford et al., 2021;Schick and Schütze, 2020a,b;Brown et al., 2020). Among them, GPT models (Radford et al., 2019;Brown et al., 2020) achieved great success in prompting or task demonstrations in NLP tasks. In light of this direction, prompt-based approaches improve small pre-trained models in few-shot text classification tasks Schick and Schütze, 2020a,b). CLIP (Radford et al., 2021) also explores prompt templates for image classification which affect zero-shot performance
paper_37.txt
49
932
Lacks synthesis
Ed
Recently, several few-shot learners on vision-language tasks were proposed including GPT (Radford et al., 2019;Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), PICa , and SimVLM . Frozen (Tsimpoukelli et al., 2021) is a large language model based on GPT-2 (Radford et al., 2019), and is transformed into a multimodal few-shot learner by extending the soft prompting to incorporate a set of images and text. Their approach shows the fewshot capability on visual question answering and image classification tasks. Similarly, PICa uses GPT-3 (Brown et al., 2020) to solve VQA tasks in a few-shot manner by providing a few in-context VQA examples. It converts images into textual descriptions so that GPT-3 can understand the images. SimVLM is trained with prefix language modeling on weakly-supervised datasets. It demonstrates its effectiveness on a zero-shot captioning task
paper_38.txt
24
794
Lacks synthesis
Ed
pre-training a transformer model on a large corpus with language modeling tasks and finetuning it on different downstream tasks has become the main transfer learning paradigm in natural language processing (Devlin et al., 2019). Notably, this paradigm requires updating and storing all the model parameters for every downstream task. As the model size proliferates (e.g., 330M parameters for BERT (Devlin et al., 2019) and 175B for GPT-3 (Brown et al., 2020)), it becomes computationally expensive and challenging to fine-tune the entire pre-trained language model (LM). Thus, it is natural to ask the question of whether we can transfer the knowledge of a pre-trained LM into downstream tasks by tuning only a small portion of its parameters with most of them freezing.
paper_38.txt
872
1,390
Lacks synthesis
Ed
One line of research (Li and Liang, 2021) suggests to augment the model with a few small trainable mod-ules and freeze the original transformer weight. Take Adapter (Houlsby et al., 2019;Pfeiffer et al., 2020a,b) and Compacter (Mahabadi et al., 2021) for example, both of them insert a small set of additional modules between each transformer layer. During fine-tuning, only these additional and taskspecific modules are trained, reducing the trainable parameters to ∼ 1-3% of the original transformer model per task.
paper_38.txt
1,434
1,921
Lacks synthesis
Ed
The GPT-3 models (Brown et al., 2020;Schick and Schütze, 2020) find that with proper manual prompts, a pre-trained LM can successfully match the fine-tuning performance of BERT models. LM-BFF (Gao et al., 2020), EFL (Wang et al., 2021), and AutoPrompt (Shin et al., 2020) further this direction by insert prompts in the input embedding layer. However, these methods rely on grid-search for a natural language-based prompt from a large search space, resulting in difficulties to optimize.
paper_38.txt
2,461
2,602
Unsupported claim
Ed
all existing prompt-tuning methods have thus far focused on task-specific prompts, making them incompatible with the traditional LM objective
paper_38.txt
2,617
2,711
Unsupported claim
Ed
it is unlikely to see many different sentences with the same prefix in the pre-training corpus
paper_39.txt
129
213
Format
Ed
(Eric et al., 2017;Wu et al., 2019; and collections of largescale annotation corpora
paper_39.txt
355
377
Format
Ed
(El Asri et al., 2017
paper_39.txt
530
533
Unsupported claim
Ed
SGD
paper_39.txt
874
909
Format
Ed
Quan et al., 2020;Lin et al., 2021)
paper_39.txt
998
1,151
Unsupported claim
Ed
vast majority of existing multilingual ToD datasets do not consider the real use cases when using a ToD system to search for local entities in a country.
paper_40.txt
396
437
Format
Ed
[Levy et al., 2017, Elsahar et al., 2018
paper_41.txt
1,890
2,370
Lacks synthesis
Ed
Previous work has shown that SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) have annotation artifacts (e.g., negation is a strong indicator of contradictions) (Gururangan et al., 2018). The literature has also shown that simple adversarial attacks including negation cues are very effective (Naik et al., 2018;Wallace et al., 2019). Kovatchev et al. (2019) analyze 11 paraphrasing systems and show that they obtain substantially worse results when negation is present
paper_41.txt
3,201
3,272
Format
Ed
Bar-Haim et al., 2006;Giampiccolo et al., 2007;Bentivogli et al., 2009)
paper_43.txt
465
524
Unsupported claim
Ed
Machine Translation (MT) is the mainstream approach for GEC
paper_43.txt
918
966
Unsupported claim
Ed
recent powerful Transformer-based Seq2Seq model
paper_44.txt
75
92
Format
Ed
(Li et al., 2015;
paper_44.txt
521
656
Unsupported claim
Ed
As external knowledge supplements the background to the inputs and decides what to say, knowledge selection is a key ingredient in KGC.
paper_44.txt
1,224
1,592
Unsupported claim
Ed
A crucial point is, they often make assumption that the golden knowledge is distinguishable as long as the dialogue context is known, yet this is not always held true because there exists a one-to-many relationship in conversation and the past utterance history in a dialogue session is insufficient to decide the knowledge selection or the future trend of a dialogue.
paper_44.txt
2,046
2,143
Unsupported claim
Ed
In other words, there exists a mapping from one's personal memory to its selection of knowledge.
paper_45.txt
533
969
Coherence
Ed
Mathew et al. (2019) collect and handcode 6,898 counter hate comments from YouTube videos targeting Jews, Blacks and LGBT communities. Ziems et al. (2020) use a collection of hate and counter hate keywords relevant to COVID-19 and create a dataset containing 359 counter hate tweets targeting Asians. Garland et al. (2020) work with German tweets and define hate and counter speech based on the communities to which the authors belong.
paper_45.txt
1,244
1,339
Unsupported claim
Ed
Even if it were, conclusions and models from synthetic data may not transfer to the real world.
paper_45.txt
1,834
2,159
Coherence
Ed
Gao and Huang (2017) annotate hateful comments in the nested structures of 10 Fox News discussion threads. Vidgen et al. (2021) Utilizing conversational context has also been explored in text classification tasks such as sentiment analysis (Ren et al., 2016), stance (Zubiaga et al., 2018) and sarcasm (Ghosh et al., 2020).
paper_46.txt
410
415
Unsupported claim
Ed
LSTM
paper_46.txt
201
231
Unsupported claim
Ed
conditional random field (CRF)
paper_46.txt
709
824
Unsupported claim
Ed
Nested NER allows a token to belong to multiple entities, which conflicts with the plain sequence tagging framework
paper_46.txt
826
1,280
Coherence
Ed
Ju et al. (2018) proposed to use stacked LSTM-CRFs to predict from inner to outer entities. Straková et al. (2019) concatenated the BILOU tags for each token inside the nested entities, which allows the LSTM-CRF to work as for flat entities. Li et al. (2020b) reformulated nested NER as a machine reading comprehension task. Shen et al. (2021) proposed to recognize nested entities by the two-stage object detection method widely used in computer vision.
paper_46.txt
2,065
2,736
Lacks synthesis
Ed
Label Smoothing Szegedy et al. (2016) proposed the label smoothing as a regularization technique to improve the accuracy of the Inception networks on ImageNet. By explicitly assigning a small probability to non-ground-truth labels, label smoothing can prevent the models from becoming too confident about the predictions, and thus improve generalization. It turned out to be a useful alternative to the standard cross entropy loss, and has been widely adopted to fight against the over-confidence (Zoph et al., 2018;Chorowski and Jaitly, 2017;Vaswani et al., 2017), improve the model calibration (Müller et al., 2019), and denoise incorrect labels (Lukasik et al., 2020).
paper_46.txt
2,844
2,969
Unsupported claim
Ed
This is driven by the observation that entity boundaries are more ambiguous and inconsistent to annotate in NER engineering.
paper_47.txt
633
637
Format
Ed
2018
paper_49.txt
1,006
1,043
Format
Ed
Wang et al., 2016aDai and Song, 2019
paper_49.txt
177
1,179
Lacks synthesis
Ed
In contrast, Aspect-based Sentiment Analysis (ABSA) is an aspect or entity oriented fine-grained sentiment analysis task. The most three basic subtasks are Aspect Term Extraction (ATE) (Hu and Liu, 2004;Yin et al., 2016;Li et al., 2018b;Xu et al., 2018;Ma et al., 2019;Chen and Qian, 2020;, Aspect Sentiment Classification (ASC) (Wang et al., 2016b;Tang et al., 2016;Ma et al., 2017;Fan et al., 2018;Li et al., 2018a;Li et al., 2021) and Opinion Term Extraction (OTE) Cardie, 2012, 2013;Fan et al., 2019;Wu et al., 2020b). The studies solve these tasks separately and ignore the dependency between these subtasks. Therefore, some efforts devoted to couple the two subtasks and proposed effective models to jointly extract aspect-based pairs. This kind of work mainly has two tasks: Aspect and Opinion Term Co-Extraction (AOTE) (Wang et al., 2016aDai and Song, 2019; Wang and Pan, 2019;Wu et al., 2020a) and Aspect-Sentiment Pair Extraction (ASPE) (Ma et al., 2018;Li et al., 2019a,b;He et al., 2019).
paper_49.txt
1,535
1,540
Unsupported claim
Ed
BERT
paper_49.txt
1,739
1,756
Format
Ed
Wu et al., 2020a
paper_49.txt
1,865
1,944
Unsupported claim
Ed
limitations related to existing works by enriching the expressiveness of labels
paper_49.txt
1,181
2,322
Lacks synthesis
Ed
Most recently, Peng et al. (2020) first proposed the ASTE task and developed a two-stage pipeline framework to couple together aspect extraction, aspect sentiment classification and opinion extraction. To further explore this task, (Mao et al., 2021;Chen et al., 2021a) transformed ASTE to a machine reading comprehension problem and utilized the shared BERT encoder to obatin the triplets after multiple stages decoding. Another line of research focuses on designing a new tagging scheme that makes the model can extract the triplets in an endto-end fashion Wu et al., 2020a;Xu et al., 2021;Yan et al., 2021). For instance, proposed a positionaware tagging scheme, which solves the limitations related to existing works by enriching the expressiveness of labels. Wu et al. (2020a) proposed a grid tagging scheme, similar to table filling (Miwa and Sasaki, 2014;Gupta et al., 2016), to solve this task in an end-to-end manner. Yan et al. (2021) converted ASTE task into a generative formulation. However, these approaches generally ignore the relations between words and linguistic features which effectively promote the triplet extraction.
paper_50.txt
13
134
Unsupported claim
Ed
Grapheme-to-phoneme conversion (G2P) is the task of converting grapheme sequences into corresponding phoneme sequences.
paper_50.txt
134
278
Unsupported claim
Ed
Many languages have the difficulty that some grapheme sequences correspond to more than one different phoneme sequence depending on the context.
paper_50.txt
280
384
Unsupported claim
Ed
G2P plays a key role in speech and text processing systems, especially in text-to-speech (TTS) systems.
paper_50.txt
746
835
Unsupported claim
Ed
each syllable is composed of characters following the orthography rules of that language.