{
"paper_id": "I13-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:15:30.070428Z"
},
"title": "Tuning SMT with A Large Number of Features via Online Feature Grouping",
"authors": [
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer from serious sparsity problems: features appearing in the tuning data may not appear in the test data, and such features may be over-tuned on the tuning data. As a result, we face an over-fitting problem, which limits the generalization ability of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and is thus efficient for large-scale learning (in both features and examples) in our scenario. Experimental results on IWSLT translation tasks show that the proposed method significantly outperforms state-of-the-art tuning methods.",
"pdf_parse": {
"paper_id": "I13-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer from serious sparsity problems: features appearing in the tuning data may not appear in the test data, and such features may be over-tuned on the tuning data. As a result, we face an over-fitting problem, which limits the generalization ability of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and is thus efficient for large-scale learning (in both features and examples) in our scenario. Experimental results on IWSLT translation tasks show that the proposed method significantly outperforms state-of-the-art tuning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since the introduction of log-linear based SMT (Och and Ney, 2002), tuning has been a hot topic. Various methods have been explored: their objectives are either error rates (Och, 2003), hinge loss (Watanabe et al., 2007; Chiang et al., 2008) or ranking loss (Hopkins and May, 2011), and they are either batch or online training methods. In this paper, we consider tuning translation models with a large number of features, such as lexical, n-gram-level and rule-level features, where the number of features is far greater than the number of bilingual sentences. (This joint work was done while the first author visited NICT.) Practically, existing tuning methods such as PRO and MIRA might",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF9"
},
{
"start": 174,
"end": 185,
"text": "(Och, 2003)",
"ref_id": "BIBREF10"
},
{
"start": 199,
"end": 222,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF14"
},
{
"start": 223,
"end": 243,
"text": "Chiang et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 260,
"end": 283,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 622,
"end": 640,
"text": "PRO and MIRA might",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "be applied in our scenario; however, they suffer from several pitfalls as well, which have received little attention in previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the pitfalls is that these features are so sparse that many features which are potentially useful for a test set may not be included in a given tuning set, while many features useless for testing are over-tuned on the development set. As a result, the generalization ability of the features is limited by the mismatch between the test data and the tuning data, and over-fitting occurs. One practice is to tune translation models on a larger tuning set, such as the entire training data (Xiao et al., 2011; Simianer et al., 2012), in the hope that more features will be covered during tuning. However, tuning robust weights for translation models imposes additional requirements on a tuning set. First, multiple reference translations in the tuning data are helpful for better tuning, especially when the test data contains multiple reference translations. Second, closeness between the tuning set and the test set is also important for better test performance (Li et al., 2010). These requirements explain why tuning on the training data leads to unsatisfactory performance on the IWSLT translation task, as will be shown in our experiments. Therefore, enlarging a tuning set is not always a sufficient solution for robust tuning, since it would be impractical to create a large-scale tuning set meeting these requirements.",
"cite_spans": [
{
"start": 508,
"end": 527,
"text": "(Xiao et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 528,
"end": 550,
"text": "Simianer et al., 2012)",
"ref_id": "BIBREF12"
},
{
"start": 989,
"end": 1006,
"text": "(Li et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel tuning method that groups a large number of features to address the above pitfalls. Instead of directly taking the large number of atomic features into the translation model, we first learn their group structure on the training data to alleviate their serious sparsity. Then, we tune the translation model consisting of grouped features on a multi-reference development set to ensure robust tuning. Unlike unsupervised clustering methods such as k-means (MacQueen, 1967) for feature clustering, we group the features with the OSCAR (Octagonal Shrinkage and Clustering Algorithm for Regression) method (Bondell and Reich, 2008), which directly relates the objective of feature grouping to translation evaluation metrics such as BLEU (Papineni et al., 2002), so the grouped features are optimized with respect to BLEU. Due to the large number of features and the large number of training examples, efficient grouping is nontrivial. We apply an online gradient projection method under the FOBOS (forward-backward splitting) framework (Duchi and Singer, 2009) to accelerate feature grouping.",
"cite_spans": [
{
"start": 472,
"end": 488,
"text": "(MacQueen, 1967)",
"ref_id": "BIBREF8"
},
{
"start": 619,
"end": 644,
"text": "(Bondell and Reich, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 751,
"end": 774,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 1047,
"end": 1071,
"text": "(Duchi and Singer, 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We employ a large number of features by treating each translation rule in a synchronous CFG as a single feature. Experiments on IWSLT Chinese-to-English translation tasks show that, with the help of grouping these features, our method can overcome the above pitfalls and thus achieves significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel tuning method for translation models with a large number of features, which incorporates feature grouping. Our assumption is that even if a feature that is useful for a test set does not appear in the tuning set, a similar feature may; grouping similar features can therefore alleviate sparsity. The proposed tuning method consists of two steps: first, it learns a group structure over the atomic features; second, it treats each feature group as a single feature and tunes the translation model on a given tuning set using off-the-shelf toolkits such as PRO. In the first step, we learn the group structure of atomic features on the large training data for better coverage. In the second step, we tune the translation model with the grouped features on a given development set with multiple references to ensure robust tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Method",
"sec_num": "2"
},
{
"text": "Before describing our tuning algorithm, we introduce the notation used in the rest of this paper. Suppose H is a feature set consisting of atomic features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Method",
"sec_num": "2"
},
{
"text": "{h_1, h_2, ..., h_d}, or their index set {1, 2, ..., d} for simplicity; H = (h_1, h_2, ..., h_d) is a d-dimensional feature vector function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Method",
"sec_num": "2"
},
{
"text": "G = {g_1, g_2, ..., g_M} is a grouping of H, where each element g_i is a subset of H. Similarly, G = (g_1, g_2, ..., g_M) is an M-dimensional feature",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Method",
"sec_num": "2"
},
{
"text": "vector function with respect to G, and W_G is its weight vector with components W_G,i. In this paper, we consider disjoint groupings G, i.e. g_i ∩ g_j = ∅ if i ≠ j. Further, suppose Δ(W) is the set of indices i such that W_i ≠ 0, and |·| denotes either the number of elements in a set or the absolute value of a real number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Method",
"sec_num": "2"
},
{
"text": "Input: training data, dev, W ini , T 1: Initialize W 1 = W ini 2: for all i such that 1 \u2264 i \u2264 T do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Tuning Algorithm",
"sec_num": null
},
{
"text": "Decode on training data with W i to obtain a k-best-list and merge k-best-lists 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Tuning Algorithm",
"sec_num": null
},
{
"text": "Update the group set G based on the merged k-best-list Call Algorithm 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Tuning Algorithm",
"sec_num": null
},
{
"text": "Tune the translation model with G as the feature set on dev with PRO to update W G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Unpack W_G to W_{i+1} 7: end for 8: W = W_{T+1} Output: W. Algorithm 1 describes our two-step tuning procedure for a translation model with H as its feature set. Its inputs are a training data set, a development set, an initial weight W_1 with respect to H, and a maximum number of iterations T; its output is a weight W. It initializes with W_1 in line 1; from line 2 to line 7, it iteratively obtains a k-best-list by decoding with W_i, updates the group set G, tunes the translation weight W_G based on G, and unpacks W_G to obtain W_{i+1}. At the end, it returns the final weight W. In particular, the k-best-list is obtained using H as the feature vector with its weight vector W derived from the grouped weights W_G through unpacking: if h_j ∈ g_k, then W_j = W_G,k. The grouping algorithm in line 4 will be introduced in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
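The unpacking step just described (if h_j ∈ g_k, then W_j = W_G,k) can be sketched in a few lines. This is an illustrative sketch with our own naming (`unpack`, plain Python indices), not code from the paper:

```python
def unpack(groups, group_weights):
    """Expand grouped weights W_G into atomic weights W.

    groups: list of disjoint sets; groups[k] holds the atomic
            feature indices belonging to group g_k.
    group_weights: group_weights[k] is the tuned weight W_G,k.
    Returns a dict mapping atomic index j -> W_j, where every
    member of a group shares its group's weight.
    """
    w = {}
    for k, g in enumerate(groups):
        for j in g:
            w[j] = group_weights[k]
    return w

# Two groups over five atomic features: members share weights.
w = unpack([{0, 2, 4}, {1, 3}], [0.5, -1.0])
```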
{
"text": "In this paper, we use a hierarchical phrase-based translation model, which consists of 8 default features: translation probabilities, lexical translation probabilities, word penalty, glue rule penalty, synchronous rule penalty and language model. In addition, we also employ a large number of rule identity (id) features: each rule itself is a feature, and if a translation contains a rule x times, then the value of that rule's id feature is x. In line 4 we group these id features and impose that each default feature forms its own group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Algorithm 2 Feature Grouping Algorithm Input: λ_1, λ_2, k-best-list, W_1, n 1: Collect a set of tuples ⟨f, e', e*⟩ from the k-best-list 2: for all t such that 1 ≤ t ≤ n do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Randomly select ⟨f, e', e*⟩ from the tuple set 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "W_{t+1/2} = W_t + ∇_W δ(f, e', e*, W_t)/t 5: Minimize Q(W; 2W_{t+1/2}, t + 1, λ_1, λ_2) to obtain (W_{t+1}, G)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "Group optimization 6: end for Output: G. Suppose f is a sentence in the development set, C is a set of candidate translations for f, and r is the set of reference translations for f. Following PRO, we define the ranking loss function as follows, where e', e* ∈ C such that BLEU(e*, r) > BLEU(e', r), and N is the number of all tuples ⟨e*, e', f⟩. To achieve a group structure and avoid the sparsity in H, we apply OSCAR to this loss function and obtain the objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "L(W) = (1/N) Σ_f Σ_{⟨e*, e'⟩} δ(f, e', e*, W),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "L(W) + λ_1 Σ_{i=1}^{d} |W_i| + λ_2 Σ_{1≤i<j≤d} max{|W_i|, |W_j|}, (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "where d is the dimension of the feature vector H (or its weight W), and λ_1 and λ_2 are two positive hyperparameters for the two regularizers. Minimizing Eq.2 makes some components of W equal and thus achieves a feature grouping effect. In other words, W_i = W_j means that h_i and h_j lie in the same group, i.e. h_i, h_j ∈ g_k for some g_k ∈ D(W), where D(W) denotes the grouping derived from W as follows. Given W, we first sort its components W_i to obtain a permutation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
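The OSCAR regularizer of Eq.2 can be evaluated directly from its definition. A minimal sketch (the function name `oscar_penalty` is ours); note how tying two components reduces the pairwise max term while leaving the L1 term unchanged, which is exactly the pressure that produces equal weights:

```python
def oscar_penalty(w, lam1, lam2):
    """Eq.2's penalty: lam1 * sum_i |w_i|
    + lam2 * sum_{i<j} max(|w_i|, |w_j|)."""
    d = len(w)
    l1 = sum(abs(x) for x in w)
    pairwise = sum(max(abs(w[i]), abs(w[j]))
                   for i in range(d)
                   for j in range(i + 1, d))
    return lam1 * l1 + lam2 * pairwise

# Same L1 mass, but the tied vector pays a smaller pairwise term:
spread = oscar_penalty([2.0, 0.0], 0.0, 1.0)  # max(2, 0) = 2
tied = oscar_penalty([1.0, 1.0], 0.0, 1.0)    # max(1, 1) = 1
```

Because the pairwise term is minimized when components share magnitudes, the regularizer favors solutions with tied weights, i.e. groups.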
{
"text": "{i_k}_{k=1}^{d} such that W_{i_1} ≤ W_{i_2} ≤ ... ≤ W_{i_d} with 1 ≤ i_k ≤ d; then we can easily obtain D(W) by traversing {W_{i_k}}_{k=1}^{d}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "For example, if W = ⟨1, 3, 1, 3, 1⟩, then D(W) = {{1, 3, 5}, {2, 4}}. One advantage of OSCAR over unsupervised clustering methods (e.g. k-means) is that it relates the objective of grouping to an error metric, such as BLEU, and thus can achieve a grouping that is optimal towards BLEU. Bondell and Reich (2008) first proposed two approaches for OSCAR. The first casts the problem into a quadratic program (QP) with O(d^2) variables and O(d^2) constraints. The second optimizes a sequence of (potentially smaller) QPs with more constraints, up to O(d!) of them in the worst case. Zhong and Kwok (2011) explored a much faster approach based on an accelerated gradient and projection method, with complexity reduced to O(d log d). Since the dimension d of H in our scenario is large, up to hundreds of thousands, these existing optimization methods are too inefficient to minimize Eq.2. Here, based on (Zhong and Kwok, 2011), we employ an online gradient projection algorithm under the FOBOS framework for faster learning. FOBOS is an online learning framework that is theoretically guaranteed to solve problems of the form of Equation 2: objectives consisting of two additive terms, one non-smooth but convex and the other smooth and convex 1 . FOBOS proceeds in two steps: it first performs a gradient descent step, and then updates the weight by a proximity (or projection) operator.",
"cite_spans": [
{
"start": 275,
"end": 299,
"text": "Bondell and Reich (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
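The derivation of D(W) just described (sort the components, then traverse and collect equal values) amounts to bucketing indices by value. A sketch under our own naming, reproducing the paper's example with 1-based indices:

```python
from collections import defaultdict

def derive_groups(w):
    """D(W): partition the 1-based indices of w by equal value.

    Sorting w and traversing adjacent entries, as in the text,
    is equivalent to bucketing indices by component value.
    """
    buckets = defaultdict(set)
    for idx, val in enumerate(w, start=1):
        buckets[val].add(idx)
    # Return groups ordered by their shared component value.
    return [buckets[v] for v in sorted(buckets)]

# The paper's example: W = <1, 3, 1, 3, 1>.
groups = derive_groups([1, 3, 1, 3, 1])  # [{1, 3, 5}, {2, 4}]
```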
{
"text": "Algorithm 2 describes the online training for feature grouping. Its inputs are: two regularization parameters λ_1 and λ_2; a k-best-list of translations; an initial weight W_1; and a maximum number of iterations n. It first collects a set of tuples encoding translation pairs from the k-best-list, following the sampling strategy implemented in the PRO toolkit (line 1). It then repeatedly updates the weight W_t and the feature grouping G from line 2 to line 6: it randomly samples a tuple ⟨f, e', e*⟩ from the collected tuple set (line 3), performs a gradient descent step (line 4), where ∇_W δ(f, e', e*, W_t) denotes the subgradient of δ(f, e', e*, W_t) at the current weight W_t, and obtains (W_{t+1}, G) by the proximity operator for group optimization (line 5). Finally it returns the grouping G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
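Algorithm 2's alternation of a subgradient step and a proximity step is the standard FOBOS pattern. The sketch below substitutes a plain L1 soft-thresholding prox for the OSCAR projection of line 5 (the actual projection is Algorithm 3); the pair sampling, the hinge-style ranking subgradient, and all names are our own simplifications, not the paper's implementation:

```python
import random

def soft_threshold(w, tau):
    """L1 proximity operator: shrink each component toward 0 by tau."""
    return [max(abs(x) - tau, 0.0) * (1 if x > 0 else -1) for x in w]

def fobos_rank_tune(pairs, d, lam, n, seed=0):
    """pairs: list of (h_good, h_bad) sparse feature dicts, where
    h_good should outrank h_bad (higher BLEU). Each iteration takes
    a hinge-style ranking subgradient step, then the prox step."""
    rng = random.Random(seed)
    w = [0.0] * d
    for t in range(1, n + 1):
        h_good, h_bad = rng.choice(pairs)
        score = lambda h: sum(w[i] * v for i, v in h.items())
        if score(h_good) - score(h_bad) < 1.0:  # margin violated
            for i, v in h_good.items():
                w[i] += v / t                   # push the good one up
            for i, v in h_bad.items():
                w[i] -= v / t                   # push the bad one down
        w = soft_threshold(w, lam / t)          # proximity step
    return w
```

The decaying step size 1/t mirrors the /t factor in line 4 of Algorithm 2; replacing `soft_threshold` with the group-merging projection yields the full method.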
{
"text": "In particular, the subgradient of δ(f, e', e*, W_t) in line 4 is defined via the following equation. The main technique is the proximity operator for group optimization in line 5, which minimizes the function Q(·; a, t, λ_1, λ_2) with a = 2(W_t + ∇_W δ(f, e', e*, W_t)/t) = 2W_{t+1/2}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "∇_W δ(f, e', e*, W_t) = H(f, e*) − H(f, e') if the pair ⟨e*, e'⟩ is mis-ranked under W_t, and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q(W; a, t, λ_1, λ_2) = (W − a)^T W + (2/t) [λ_1 Σ_{i=1}^{d} |W_i| + λ_2 Σ_{1≤i<j≤d} max{|W_i|, |W_j|}].",
"eq_num": "(3)"
}
],
"section": "6:",
"sec_num": null
},
{
"text": "In the next section, we present the details of this proximity operator for group optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "To derive an algorithm for group optimization that is efficient for large d, we present the following lemma, whose proof is given in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Lemma 1. In Eq.3, if a_k = 0, then the minimal solution Ŵ satisfies Ŵ_k = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Suppose W_t is sparse, i.e. d is much greater than the number of its non-zero components |Δ(W_t)|; then W_{t+1/2} in line 4 is also sparse, since H is sparse. The above lemma states that the optimal solution W_{t+1} in line 5 of Algorithm 2 is then also a sparse vector. Therefore, it is desirable to optimize W_{t+1} with a complexity independent of d. If so, setting W_1 to a sparse vector makes W_t sparse for all t > 1, by mathematical induction. Based on this analysis, the efficiency of the proximity operator for group optimization only requires that the proximity step can be solved with a complexity independent of the large d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Let u = |Δ(a)|, and let p be a one-to-one map",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "p: {1, ..., u} → {1, ..., d}, s.t. |a_{p(1)}| ≥ |a_{p(2)}| ≥ ... ≥ |a_{p(u)}| > 0. By Lemma 1, the zero components of a remain zero components in the optimal solutions of both Eq.3 and the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Q̃(W; a, t, λ_1, λ_2) = Σ_{i'=1}^{u} W_{p(i')}^2 − Σ_{i'=1}^{u} a_{p(i')} W_{p(i')} + Σ_{i'=1}^{u} (2(λ_1 + λ_2(d − u))/t) |W_{p(i')}| + (2λ_2/t) Σ_{1≤i'<j'≤u} max{|W_{p(i')}|, |W_{p(j')}|}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "The advantage of optimizing Q̃(W; a, t, λ_1, λ_2) instead of Q(W; a, t, λ_1, λ_2) is that it explicitly reduces the number of active components in W from d to u, which makes a faster optimization algorithm easier to derive. Further, Proposition 1 of (Zhong and Kwok, 2011) states that the minimal solution Ŵ of an equation of the form Q̃(W; a, t, λ_1, λ_2) satisfies the constraint",
"cite_spans": [
{
"start": 269,
"end": 291,
"text": "(Zhong and Kwok, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "|Ŵ_{p(1)}| ≥ ... ≥ |Ŵ_{p(u)}|. Therefore, minimizing Q̃(W; a, t, λ_1, λ_2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "is also equivalent to solving the following constrained program:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "minimize_W Q̃(W; a, t, λ_1, λ_2) subject to |W_{p(1)}| ≥ ... ≥ |W_{p(u)}|,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "where Q̃(W; a, t, λ_1, λ_2), restricted to this constraint, can be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Q̃(W; a, t, λ_1, λ_2) = Σ_{i'=1}^{u} W_{p(i')}^2 − Σ_{i'=1}^{u} a_{p(i')} W_{p(i')} + Σ_{i'=1}^{u} (2(λ_1 + λ_2(d − i'))/t) |W_{p(i')}|. (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Now, we can implement line 5 of Algorithm 2 as summarized in Algorithm 3, after some modifications of the projection algorithm in (Zhong and Kwok, 2011). Algorithm 3 takes λ_1, λ_2, a and t as inputs. First, it sorts |a_i| for the indices in Δ(a) to obtain the map p (line 1), and initializes G as {p(1)} (line 2). From line 3 to line 10, it runs a merging loop that repeatedly merges group members to pre-calculate G: for each i', it iteratively merges the member g, initialized as {p(i')}, with the top member of the stack, updates g with the merged member, and substitutes the top member of the stack with g, whenever the v value (defined later) of g is at least that of the top member. Then it calculates W, initialized as 0, and G. For each index i in each member g of G, it assigns W_i Algorithm 3 Group Optimization Input: λ_1, λ_2, a, t 1: Sort |a_i| : i ∈ Δ(a) to obtain p (see the definition of Δ in Section 2) 2: Initialize the stack of group set G = {p(1)} 3: for all i' such that 2 ≤ i' ≤ |Δ(a)| do 4:",
"cite_spans": [
{
"start": 132,
"end": 154,
"text": "(Zhong and Kwok, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "g = {p(i')} 5: while G ≠ ∅ and v(g) ≥ v(top(G)) do 6: g = g ∪ top(G) (Merge g) 7: Pop top(G) 8: end while 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "Push g onto G 10: end for Pre-calculate G 11: W = 0 12: for all g \u2208 G do 13:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "for all i \u2208 g do 14:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "W_i = sign(a_i) · max{v(g), 0} 15: end for 16: end for (Calculate W) 17: G = D(W) (Calculate G) Output: W, G (W minimizes Eq.3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "according to the sign 3 of a_i and v(g) in line 14. In line 17 it calculates G = D(W) as discussed in Section 3. Finally it returns the pair (W, G). In particular, the v value v(g) in line 5 is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "v(g) = ( Σ_{i∈g} ( |a_i| − 2(λ_1 + λ_2(d − p*(i)))/t ) ) / (2|g|),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "where p*(i) denotes the inverse of p, such that p(p*(i)) = i. Intuitively, v(g) can be interpreted as the group-averaged sub-gradient of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "( Σ_{i'=1}^{u} W_{p(i')}^2 − Q̃(W; a, t, λ_1, λ_2) ) / 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "In addition, an intuitive explanation of the merging loop is that the value of the objective in Eq.4 decreases after each merging step in line 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
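Algorithm 3's stack-based merging can be sketched as below. This is a simplified reimplementation under our own naming, following the v(g) formula and the merge condition v(g) ≥ v(top(G)) given above; with λ_1 = λ_2 = 0 it reduces to W = a/2, the unconstrained minimizer of (W − a)^T W, which gives a quick sanity check:

```python
import math

def group_prox(a, t, lam1, lam2):
    """Proximity operator for the OSCAR term (Algorithm 3 sketch).

    a: dense list of reals (the point 2*W_{t+1/2} in the paper).
    Returns (w, groups): w approximately minimizes Eq.3 on the
    support of a; groups partitions the 0-based support indices.
    """
    d = len(a)
    support = [i for i in range(d) if a[i] != 0.0]
    # p: support indices sorted by decreasing |a_i|;
    # p_star[i] is the 1-based rank of index i, used inside v(g).
    p = sorted(support, key=lambda i: -abs(a[i]))
    p_star = {i: r + 1 for r, i in enumerate(p)}

    def v(g):
        num = sum(abs(a[i]) - 2.0 * (lam1 + lam2 * (d - p_star[i])) / t
                  for i in g)
        return num / (2.0 * len(g))

    stack = [[p[0]]] if p else []
    for i in p[1:]:
        g = [i]
        while stack and v(g) >= v(stack[-1]):
            g = g + stack.pop()      # merge with the top member
        stack.append(g)

    w = [0.0] * d
    for g in stack:
        for i in g:
            w[i] = math.copysign(1.0, a[i]) * max(v(g), 0.0)
    return w, [set(g) for g in stack]
```

With λ_2 > 0, adjacent ranks whose v values violate the ordering are merged, producing tied weight magnitudes, i.e. feature groups.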
{
"text": "In summary, if we use a sparse representation for the vector a in Algorithm 3, then its complexity is O(|Δ(a)| log |Δ(a)|), which is independent of d. Therefore, the whole tuning algorithm (Algorithm 1) with feature grouping is efficient even for large d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group Optimization",
"sec_num": "4"
},
{
"text": "We conduct experiments on the IWSLT 2008 Chinese-to-English translation tasks, whose training data consists of about 30K bilingual sentence (footnote 3: the reason is attributed to Eq.5 in (Zhong and Kwok, 2011))",
"cite_spans": [
{
"start": 139,
"end": 140,
"text": "3",
"ref_id": null
},
{
"start": 181,
"end": 203,
"text": "(Zhong and Kwok, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "pairs. Test sets 2003, 2004 and 2008 are used as the development set, the development test (devtest) set and the test set, respectively; all of them contain 16 references. A 5-gram language model is trained on the training data with the SRILM toolkit, and word alignment is obtained with GIZA++. In our experiments, translation performance is measured by the case-insensitive BLEU4 metric. Significance testing is performed by paired bootstrap re-sampling (Koehn, 2004).",
"cite_spans": [
{
"start": 7,
"end": 21,
"text": "Test sets 2003",
"ref_id": null
},
{
"start": 22,
"end": 38,
"text": "Test sets , 2004",
"ref_id": null
},
{
"start": 39,
"end": 57,
"text": "Test sets and 2008",
"ref_id": null
},
{
"start": 483,
"end": 496,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use an in-house developed hierarchical phrase-based translation (Chiang, 2005) decoder as our baseline, and we use the state-of-the-art tuning methods MERT and PRO as our comparison methods 4 . Based on our in-house decoder, we implement three translation models with different feature sets: default features (default); default features plus rule id features (+id); and default features plus grouped rule id features (+group). On the IWSLT training data, the number of rule id features is 500K, i.e. d = 500K, which is much greater than the number of bilingual sentences (30K). Our proposed tuning method uses the following settings, tuned on the devtest set: λ_1 = 1e−10, λ_2 = 3e−8, T = 15, and n = 20 × N, i.e. 20 passes over the k-best-lists.",
"cite_spans": [
{
"start": 67,
"end": 81,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "From Table 1, we can see that tuning the translation model on the development set is much better (an improvement of 4.3 BLEU points) than tuning on the training data under the default feature setting. The main reason, as discussed in Section 1, may be that multiple references and the closeness 5 of the tuning set are very helpful for translation tasks. Further, the id features do not achieve improvements, and even decrease BLEU by 0.9 points when tuned on the development set, due to their serious sparsity. However, after grouping the id features, the groups learned by our method alleviate the feature sparsity and thus obtain significant gains of 0.7 BLEU points over the default feature setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Further, we implement another tuning method 6 for comparison, i.e. the L_1 regularization method (Tsuruoka et al., 2009), tuning the model with the +id feature setting on both the development set and the training data set, respectively; its hyperparameters are tuned on the devtest set. As depicted in Table 1, our method significantly outperforms the L_1 method. In addition, Table 1 presents the number of both active and reused features for each method under different settings. We can see that the active features (503 grouped features) of the OSCAR method are far fewer than those (11081 features) of PRO with the +id setting, which means that OSCAR has lower model complexity. Further, most (84.5%) of the active features tuned on the dev set are reused during testing for OSCAR, which means that OSCAR addresses the feature sparsity problem more effectively than both L_1 and PRO.",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Tsuruoka et al., 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 362,
"end": 369,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Finally, Table 1 also shows the runtime of each tuning method. Tuning on the training data is much less efficient than tuning on the dev set, since it requires repeated decoding on a much larger dataset. Furthermore, the efficiency of our OSCAR method is comparable to that of tuning on the training data. Nevertheless, distributed training is a reasonable approach to improve the efficiency of OSCAR, as suggested by Simianer et al. (2012).",
"cite_spans": [
{
"start": 411,
"end": 433,
"text": "Simianer et al. (2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "This paper proposes a novel training method for a translation model with a large number of features, which is the main contribution of this paper. The method is based on automatic feature grouping, implemented within an online learning framework, and is thus efficient for large-scale training in SMT. The other contribution is that we successfully extend OSCAR to a large-scale learning setting. In future work, we will investigate distributed learning for OSCAR and then validate it on larger-scale training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "For Eq.2, the non-smooth but convex term is the entirety of Eq.2, and the smooth and convex term can be taken to be 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both of them are derived from the Moses toolkit: http://www.statmt.org/moses/. 5 If the tuning set and the test set are close enough or identically distributed, it is possible to obtain gains from sparse discriminative features without using feature grouping (Chiang et al., 2009). 6 It is similar to dtrain implemented in the cdec toolkit: http://cdec-decoder.org/, except that it does not use the distributed learning framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank our colleagues in both HIT and NICT for insightful discussions, and three anonymous reviewers for many invaluable comments and suggestions to improve our paper. This work is supported by National Natural Science Foundation of China (61173073, 61100093, 61073130, 61272384), and the Key Project of the National High Technology Research and Development Program of China (2011AA01A207).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Proof. Suppose Ŵ k ≠ 0, and thus |Ŵ k | > 0. Set W ′ as another weight such that W ′ j = Ŵ j for all j (j ≠ k), and W ′ k = 0. Then, for each i, j the following equations hold: … and … Thus, the following equations hold based on the above equations by simple algebraic operations: … Therefore, we conclude that Q(Ŵ ; a, t, λ 1 , λ 2 ) > Q(W ′ ; a, t, λ 1 , λ 2 ). This contradicts the assumption that Ŵ is the minimal solution of Eq.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with oscar",
"authors": [
{
"first": "H",
"middle": [
"D"
],
"last": "Bondell",
"suffix": ""
},
{
"first": "B",
"middle": [
"J"
],
"last": "Reich",
"suffix": ""
}
],
"year": 2008,
"venue": "Biometrics",
"volume": "64",
"issue": "1",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. D. Bondell and B. J. Reich. 2008. Simultaneous regression shrinkage, variable selection, and super- vised clustering of predictors with oscar. Biomet- rics, 64(1):115-123.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Online large-margin training of syntactic and structural translation features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and struc- tural translation features. In Proc. of EMNLP. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "001 new features for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL, NAACL '09",
"volume": "11",
"issue": "",
"pages": "218--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine transla- tion. In NAACL, NAACL '09, pages 218-226.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL, ACL '05",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL, ACL '05, pages 263-270.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient online and batch learning using forward backward splitting",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2009,
"venue": "J. Mach. Learn. Res",
"volume": "10",
"issue": "",
"pages": "2899--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res., 10:2899-2934, December.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tuning as ranking",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In EMNLP, pages 1352-1362, July.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptive development data selection for log-linear model in statistical machine translation",
"authors": [
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yinggong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING, COLING '10",
"volume": "",
"issue": "",
"pages": "662--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mu Li, Yinggong Zhao, Dongdong Zhang, and Ming Zhou. 2010. Adaptive development data selection for log-linear model in statistical machine transla- tion. In COLING, COLING '10, pages 662-670.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Some methods for classification and analysis of multivariate observations",
"authors": [
{
"first": "J",
"middle": [
"B"
],
"last": "Macqueen",
"suffix": ""
}
],
"year": 1967,
"venue": "Proc. of 5-th Berkeley Symposium on Mathematical Statistics and Probability",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. B. MacQueen. 1967. Some methods for classifi- cation and analysis of multivariate observations. In Proc. of 5-th Berkeley Symposium on Mathematical Statistics and Probability.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for sta- tistical machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Joint feature selection in distributed stochastic learning for large-scale discriminative training in smt",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Simianer",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL, ACL '12",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Simianer, Stefan Riezler, and Chris Dyer. 2012. Joint feature selection in distributed stochastic learn- ing for large-scale discriminative training in smt. In ACL, ACL '12, pages 11-21.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP, ACL '09",
"volume": "",
"issue": "",
"pages": "477--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ana- niadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In ACL-IJCNLP, ACL '09, pages 477-485.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Online large-margin training for statistical machine translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin train- ing for statistical machine translation. In Proc. of EMNLP-CoNLL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fast generation of translation forest for largescale smt discriminative training",
"authors": [
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "880--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyan Xiao, Yang Liu, Qun Liu, and Shouxun Lin. 2011. Fast generation of translation forest for large- scale smt discriminative training. In EMNLP, pages 880-888.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient sparse modeling with automatic feature grouping",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Kwok",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML, ICML '11",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Zhong and James Kwok. 2011. Efficient sparse modeling with automatic feature grouping. In ICML, ICML '11, pages 9-16.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "e \u03b4(f, e , e * , W ), (1) with \u03b4(f, e , e * , W ) = max W H(f, e ) \u2212 H(f, e * ) \u2022 W + 1, 0 ,",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "e , e * , W ) = H(f, e ) \u2212 H(f, e * ), if \u03b4(f, e , e * , W ) > 0; 0, else.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Following Lemma 1, minimizing Eq.2 is equivalent to minimizing the following equation if we ignore the zero components. 2 For easier understanding, i (j ) denotes the index in {1, \u2022 \u2022 \u2022 , u}, while i(j) denotes the index in {1, \u2022 \u2022 \u2022 , d}.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"text": "The translation BLEU scores on the test set and tuning runtimes (minutes) for the different tuning methods with different settings. Tuning sets dev and train denote the development and training data sets, respectively. \"Active\" denotes the number of active features for all methods except OSCAR, or active grouped features for OSCAR; \"Reused\" denotes the number of active (or grouped) features which also appear during 1000-best decoding on the test set. Boldface BLEU means our method OSCAR is significantly better than the other methods with p < 0.05.",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Methods Tuning set Feature set</td><td colspan=\"4\"># Features Active Reused devtest test BLEU4</td><td>Runtimes</td></tr><tr><td>MERT</td><td>dev</td><td>default</td><td>8</td><td>8</td><td>45.7</td><td>40.6</td><td>15</td></tr><tr><td>PRO</td><td>dev</td><td>default</td><td>8</td><td>8</td><td>46.3</td><td>41.1</td><td>34</td></tr><tr><td>PRO</td><td>train</td><td>default</td><td>8</td><td>8</td><td>42.8</td><td>36.8</td><td>834</td></tr><tr><td>PRO</td><td>dev</td><td>+id</td><td>11081</td><td>4534</td><td>45.5</td><td>40.2</td><td>47</td></tr><tr><td>L 1</td><td>train</td><td>+id</td><td>584</td><td>71</td><td>42.7</td><td>36.9</td><td>975</td></tr><tr><td>L 1</td><td>dev</td><td>+id</td><td>443</td><td>248</td><td>46.2</td><td>41.0</td><td>39</td></tr><tr><td>OSCAR</td><td>-</td><td>+group</td><td>503</td><td>425</td><td>46.9</td><td>41.8</td><td>1256</td></tr></table>"
}
}
}
}