{ "paper_id": "I13-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:15:30.070428Z" }, "title": "Tuning SMT with A Large Number of Features via Online Feature Grouping", "authors": [ { "first": "Lemao", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "settlement": "Kyoto", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.", "pdf_parse": { "paper_id": "I13-1032", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since the introduction of log-linear based SMT (Och and Ney, 2002) , tuning has been a hot topic. 
Various methods have been explored: their objectives are either error rates (Och, 2003), hinge loss (Watanabe et al., 2007; Chiang et al., 2008), or ranking loss (Hopkins and May, 2011), and they are either batch or online training methods. In this paper, we consider tuning translation models with a large number of features such as lexical, n-gram level and rule level features, where the number of features is much greater than the number of bilingual sentences. Practically, existing tuning methods such as PRO and MIRA might be applied in our scenario; however, they will suffer from some pitfalls as well, which have been less investigated in previous works. (This joint work was done while the first author visited NICT.)", "cite_spans": [ { "start": 47, "end": 66, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF9" }, { "start": 174, "end": 185, "text": "(Och, 2003)", "ref_id": "BIBREF10" }, { "start": 199, "end": 222, "text": "(Watanabe et al., 2007;", "ref_id": "BIBREF14" }, { "start": 223, "end": 243, "text": "Chiang et al., 2008)", "ref_id": "BIBREF1" }, { "start": 260, "end": 283, "text": "(Hopkins and May, 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the pitfalls is that these features are so sparse that many features which are potentially useful for a test set may not be included in a given tuning set, while many features that are useless for testing will be over-tuned on the development set. As a result, the generalization abilities of features are limited due to the mismatch between the testing data and the tuning data, and over-fitting occurs. One practice is to tune translation models on a larger tuning set, such as the entire training data (Xiao et al., 2011; Simianer et al., 2012), in the hope that more features would be included during tuning. However, tuning robust weights for translation models imposes additional requirements on a tuning set. Firstly, multiple reference translations in the tuning data are helpful for better tuning, especially when the testing data contains multiple reference translations. Secondly, the closeness between the tuning set and a test set is also important for better testing performance (Li et al., 2010). These requirements can explain why tuning on the training data leads to unsatisfactory performance on the IWSLT translation task, as will be shown in our experiments later. Therefore, enlarging a tuning set is not always a sufficient solution for robust tuning, since it would be impractical to create a large-scale tuning set that satisfies these requirements.", "cite_spans": [ { "start": 508, "end": 527, "text": "(Xiao et al., 2011;", "ref_id": "BIBREF15" }, { "start": 528, "end": 550, "text": "Simianer et al., 2012)", "ref_id": "BIBREF12" }, { "start": 989, "end": 1006, "text": "(Li et al., 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a novel tuning method that addresses the above pitfalls by grouping the large number of features. Instead of directly taking the large number of atomic features into the translation model, we first learn their group structure on the training data to alleviate their serious sparsity.
Then, we tune the translation model consisting of grouped features on a multi-reference development set to ensure robust tuning. Unlike unsupervised clustering methods such as k-means (MacQueen, 1967) for feature clustering, we group the features with the OSCAR (Octagonal Shrinkage and Clustering Algorithm for Regression) method (Bondell and Reich, 2008), which directly relates the objective of feature grouping to translation evaluation metrics such as BLEU (Papineni et al., 2002), so that the grouped features are optimized with respect to BLEU. Due to the large number of features and the large number of training examples, efficient grouping is not simple. We apply an online gradient projection method under the FOBOS (forward-backward splitting) framework (Duchi and Singer, 2009) to accelerate feature grouping.", "cite_spans": [ { "start": 472, "end": 488, "text": "(MacQueen, 1967)", "ref_id": "BIBREF8" }, { "start": 619, "end": 644, "text": "(Bondell and Reich, 2008)", "ref_id": "BIBREF0" }, { "start": 751, "end": 774, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF11" }, { "start": 1047, "end": 1071, "text": "(Duchi and Singer, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We employ a large number of features by treating each translation rule in a synchronous CFG as a single feature. Experiments on IWSLT Chinese-to-English translation tasks show that, with the help of grouping these features, our method can overcome the above pitfalls and thus achieves significant improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a novel tuning method for translation models with a large number of features, which incorporates feature grouping. Our assumption is that even if a feature which is useful for a test set does not appear in the tuning set, a similar feature may exist there. Therefore, grouping similar features can alleviate sparsity. The proposed tuning method consists of two steps: first, it tries to learn a group structure for the atomic features; second, it treats each feature group as a single feature and tunes the translation model on a given tuning set using off-the-shelf toolkits such as PRO. In the first step, we learn the group structure of atomic features on the large training data for better coverage. In the second step, we tune a translation model with the grouped features on a given development set with multiple references to ensure robust tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Method", "sec_num": "2" }, { "text": "Before describing our tuning algorithm, we present the notation used in the rest of this paper. Suppose H is a feature set consisting of atomic features {h_1, h_2, ..., h_d}, or their index set {1, 2, ..., d} for simplicity; H = (h_1, h_2, ..., h_d) is a d-dimensional feature vector function with respect to H, and W is its weight vector with components W_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Method", "sec_num": "2" }, { "text": "G = {g_1, g_2, ..., g_M} is a grouping of H, where each element g_i is a subset of H (an element of the power set of H).
Similarly, G = (g_1, g_2, ..., g_M) is an M-dimensional feature vector function with respect to G, and W^G is its weight vector with components W^G_i. In this paper, we consider a disjoint G, i.e., g_i ∩ g_j = ∅ if i ≠ j. Further, suppose Δ(W) is the set of indices i such that W_i ≠ 0, and |·| denotes either the number of elements in a set or the absolute value of a real number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Method", "sec_num": "2" }, { "text": "Input: training data, dev, W_ini, T. 1: Initialize W_1 = W_ini; 2: for all i such that 1 ≤ i ≤ T do; 3: Decode on the training data with W_i to obtain a k-best list and merge the k-best lists; 4: Update the group set G based on the merged k-best list (call Algorithm 2); 5: Tune the translation model with G as the feature set on dev with PRO to update W^G; 6: Unpack W^G to W_{i+1}; 7: end for; 8: W = W_{T+1}. Output: W.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 Tuning Algorithm", "sec_num": null }, { "text": "Algorithm 1 describes our two-step tuning procedure for a translation model with H as its feature set. It takes as input a training data set, a development set, an initial weight W_1 with respect to H, and a maximal number of iterations T, and it outputs a weight W. It initializes with W_1 in line 1; from line 2 to line 7, it iteratively obtains a k-best list by decoding with W_i, updates the group set G, tunes the translation weight W^G based on G, and unpacks W^G to obtain W_{i+1}. At the end, it returns the final weight W. In particular, the k-best list is obtained using H as the feature vector, with its weight vector W derived from the grouped weights W^G through unpacking: if h_j ∈ g_k, then W_j = W^G_k. The grouping algorithm called in line 4 will be introduced in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6:", "sec_num": null }, { "text": "In this paper, we use a hierarchical phrase-based translation model, which consists of 8 default features: translation probabilities, lexical translation probabilities, word penalty, glue rule penalty, synchronous rule penalty and language model. In addition, we also employ a large number of rule identity (id) features: each rule itself is a feature, and if a translation uses a rule x times, then the value of this rule's id feature is x. In line 4 of Algorithm 1, we group these id features and impose that each default feature forms a group by itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6:", "sec_num": null },
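To make the unpacking step of Algorithm 1 and the rule id features concrete, the following Python fragment gives a minimal sketch. It is an illustration only, not the authors' implementation; the function names (rule_id_features, unpack, score), the toy rule names, and the dictionary-based sparse representation are assumptions of this sketch.

```python
from collections import Counter

def rule_id_features(derivation_rules):
    """Rule id features: each rule is a feature whose value is the number of
    times the rule is used in the derivation of a translation."""
    return Counter(derivation_rules)

def unpack(group_weights, groups):
    """Unpack grouped weights W^G to atomic weights W: if h_j is in g_k, then W_j = W^G_k."""
    atomic = {}
    for k, members in groups.items():      # groups: {group id k: set of atomic feature names}
        for j in members:
            atomic[j] = group_weights[k]
    return atomic

def score(features, weights):
    """Model score W . H(f, e) for one candidate, with features as a sparse dict."""
    return sum(value * weights.get(name, 0.0) for name, value in features.items())

# Toy usage (hypothetical rule names): two id features tied into one group behave
# like a single feature whose value is the sum of the members' counts.
groups = {"g1": {"rule_A", "rule_B"}, "g2": {"rule_C"}}
W_G = {"g1": 0.7, "g2": -0.2}
W = unpack(W_G, groups)
feats = rule_id_features(["rule_A", "rule_B", "rule_A", "rule_C"])
assert abs(score(feats, W) - (0.7 * 3 - 0.2 * 1)) < 1e-12
```

Because all members of a group share one weight, decoding with the unpacked atomic weights is equivalent to scoring with the grouped feature vector G, whose k-th component is the sum of the member features' values.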
{ "text": "Algorithm 2 Feature Grouping Algorithm. Input: λ_1, λ_2, k-best-list, W_1, n. 1: Collect a set of tuples ⟨f, e', e*⟩ from the k-best-list; 2: for all t such that 1 ≤ t ≤ n do; 3: Randomly select ⟨f, e', e*⟩ from the tuple set; 4: W_{t+1/2} = W_t − ∇_W δ(f, e', e*, W_t)/t; 5: Minimize Q(W; 2W_{t+1/2}, t+1, λ_1, λ_2) to obtain (W_{t+1}, G) (group optimization); 6: end for. Output: G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6:", "sec_num": null }, { "text": "Suppose f is a sentence in a development set, C is a set of translations for f, and r is a set of reference translations for f. Following PRO, we define the ranking loss function as L(W) = \\frac{1}{N} \\sum_f \\sum_{e^*, e'} \\delta(f, e', e^*, W), (1) with \\delta(f, e', e^*, W) = \\max\\{(H(f, e') - H(f, e^*)) \\cdot W + 1, 0\\}, where e', e^* ∈ C such that BLEU(e^*, r) > BLEU(e', r), and N is the number of all tuples ⟨e^*, e', f⟩. To achieve a group structure and avoid the sparsity in H, we apply the OSCAR penalty to the above loss function and obtain the objective L(W) + \\lambda_1 \\sum_{i=1}^{d} |W_i| + \\lambda_2 \\sum_{1 \\le i < j \\le d} \\max\\{|W_i|, |W_j|\\}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6:", "sec_num": null }, { "text": "[...] by a mathematical induction. Based on this analysis, the efficiency of the proximity operator for group optimization only requires the assumption that its proximity step can be solved efficiently, with a low complexity independent of the large d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Group Optimization", "sec_num": "4" }, { "text": "Let u = |Δ(a)|, and let p be a one-to-one map p : {1, ..., u} → {1, ..., d}, s.t. |a_{p(1)}| ≥ |a_{p(2)}| ≥ ... ≥ |a_{p(u)}| > 0. Following Lemma 1, minimizing Eq. 2 is equivalent to minimizing the following equation if we ignore the zero components in the optimal solutions of both equations: Q(W; a, t, λ_1, λ_2) = \\sum_{i'=1}^{u} W_{p(i')}^2 - \\sum_{i'=1}^{u} a_{p(i')} W_{p(i')} + \\sum_{i'=1}^{u} \\frac{2(\\lambda_1 + \\lambda_2 (d - u))}{t} |W_{p(i')}| + \\frac{2\\lambda_2}{t} \\sum_{1 \\le i' < j' \\le u} \\max\\{|W_{p(i')}|, |W_{p(j')}|\\}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Group Optimization", "sec_num": "4" },
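The two half-steps of Algorithm 2 and the group optimization can be sketched as follows. This is not the authors' implementation: the dense list representation, the function names, and the use of a sorted-ℓ1 / isotonic-regression routine to evaluate the OSCAR proximal step are assumptions of this sketch (the paper's own group-optimization procedure is only partially recoverable from this parse). The forward step uses the subgradient of δ as recorded in the parsed figure entries (∇_W δ(f, e', e*, W) = H(f, e') − H(f, e*) when δ > 0, and 0 otherwise), and the backward step uses the fact that minimizing Q(W; 2W_{t+1/2}, t+1, λ_1, λ_2) is, up to an additive constant, the proximal step for the OSCAR penalty scaled by 1/(t+1).

```python
def isotonic_nonincreasing(z):
    """Least-squares projection of z onto non-increasing sequences (pool-adjacent-violators)."""
    blocks = []  # each block holds [sum, count]
    for value in z:
        blocks.append([value, 1])
        # merge while a block's mean exceeds the mean of the block before it
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] < blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

def prox_oscar(v, eta, lam1, lam2):
    """Proximal step for eta * (lam1 * sum_i |w_i| + lam2 * sum_{i<j} max(|w_i|, |w_j|)).
    Uses the sorted-l1 view of OSCAR: the k-th largest magnitude is penalized by
    lam1 + lam2 * (d - k), so the prox is a soft-threshold of the sorted magnitudes
    followed by an isotonic (non-increasing) projection."""
    d = len(v)
    order = sorted(range(d), key=lambda i: -abs(v[i]))
    z = [abs(v[order[k]]) - eta * (lam1 + lam2 * (d - 1 - k)) for k in range(d)]
    z = isotonic_nonincreasing(z)
    w = [0.0] * d
    for k, i in enumerate(order):
        mag = max(z[k], 0.0)
        w[i] = mag if v[i] >= 0.0 else -mag
    return w

def pro_hinge_subgrad(h_worse, h_better, w):
    """Subgradient of delta(f, e', e*, W) = max((H(f, e') - H(f, e*)) . W + 1, 0)."""
    diff = [a - b for a, b in zip(h_worse, h_better)]
    margin = sum(di * wi for di, wi in zip(diff, w)) + 1.0
    return diff if margin > 0.0 else [0.0] * len(w)

def grouping_step(w, h_worse, h_better, t, lam1, lam2):
    """One pass over lines 3-5 of Algorithm 2 for a sampled tuple <f, e', e*>."""
    g = pro_hinge_subgrad(h_worse, h_better, w)
    w_half = [wi - gi / t for wi, gi in zip(w, g)]        # forward (gradient) step
    return prox_oscar(w_half, 1.0 / (t + 1), lam1, lam2)  # backward (group) step

def groups_from(w):
    """Features whose weights share the same non-zero magnitude form one group."""
    groups = {}
    for i, wi in enumerate(w):
        if wi != 0.0:
            groups.setdefault(abs(wi), []).append(i)
    return list(groups.values())
```

Features whose weights end up with the same non-zero magnitude after the proximal step are tied into one group; these groups form the set G returned to Algorithm 1. The pairwise OSCAR term is never enumerated explicitly here, and the sort dominates the cost, which keeps the step cheap even for a large d, though whether this matches the authors' own proximity-step solver cannot be confirmed from this parse.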
{ "text": "[...] Set Ŵ' as another weight such that Ŵ'_j = Ŵ_j for all j (j ≠ k), and Ŵ'_k = 0. Then, for each i, j the following equations hold: [...] and [...]. Thus, the following equations hold based on the above equations by simple algebraic operations: [...]. Therefore, we conclude that Q(Ŵ; a, t, λ_1, λ_2) > Q(Ŵ'; a, t, λ_1, λ_2). This contradicts the assumption that Ŵ is the minimal solution of Eq. 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with oscar", "authors": [ { "first": "H", "middle": [ "D" ], "last": "Bondell", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Reich", "suffix": "" } ], "year": 2008, "venue": "Biometrics", "volume": "64", "issue": "1", "pages": "115--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. D. Bondell and B. J. Reich. 2008. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with oscar. Biometrics, 64(1):115-123.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Online large-margin training of syntactic and structural translation features", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proc. of EMNLP. ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "11,001 new features for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2009, "venue": "NAACL, NAACL '09", "volume": "", "issue": "", "pages": "218--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In NAACL, NAACL '09, pages 218-226.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "ACL, ACL '05", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL, ACL '05, pages 263-270.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Efficient online and batch learning using forward backward splitting", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2009, "venue": "J. Mach. Learn. Res", "volume": "10", "issue": "", "pages": "2899--2934", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res., 10:2899-2934, December.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tuning as ranking", "authors": [ { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1352--1362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hopkins and Jonathan May. 2011. 
Tuning as ranking. In EMNLP, pages 1352-1362, July.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proc. of EMNLP. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP. ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adaptive development data selection for log-linear model in statistical machine translation", "authors": [ { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yinggong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dongdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2010, "venue": "COLING, COLING '10", "volume": "", "issue": "", "pages": "662--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mu Li, Yinggong Zhao, Dongdong Zhang, and Ming Zhou. 2010. Adaptive development data selection for log-linear model in statistical machine transla- tion. In COLING, COLING '10, pages 662-670.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Some methods for classification and analysis of multivariate observations", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Macqueen", "suffix": "" } ], "year": 1967, "venue": "Proc. of 5-th Berkeley Symposium on Mathematical Statistics and Probability", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. B. MacQueen. 1967. Some methods for classifi- cation and analysis of multivariate observations. In Proc. of 5-th Berkeley Symposium on Mathematical Statistics and Probability.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for sta- tistical machine translation. In Proc. of ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. 
of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Joint feature selection in distributed stochastic learning for large-scale discriminative training in smt", "authors": [ { "first": "Patrick", "middle": [], "last": "Simianer", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2012, "venue": "ACL, ACL '12", "volume": "", "issue": "", "pages": "11--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Simianer, Stefan Riezler, and Chris Dyer. 2012. Joint feature selection in distributed stochastic learn- ing for large-scale discriminative training in smt. In ACL, ACL '12, pages 11-21.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty", "authors": [ { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2009, "venue": "ACL-IJCNLP, ACL '09", "volume": "", "issue": "", "pages": "477--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ana- niadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In ACL-IJCNLP, ACL '09, pages 477-485.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Online large-margin training for statistical machine translation", "authors": [ { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hajime", "middle": [], "last": "Tsukada", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" } ], "year": 2007, "venue": "Proc. of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin train- ing for statistical machine translation. In Proc. of EMNLP-CoNLL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Fast generation of translation forest for largescale smt discriminative training", "authors": [ { "first": "Xinyan", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "880--888", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyan Xiao, Yang Liu, Qun Liu, and Shouxun Lin. 2011. Fast generation of translation forest for large- scale smt discriminative training. 
In EMNLP, pages 880-888.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient sparse modeling with automatic feature grouping", "authors": [ { "first": "Wenliang", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "James", "middle": [], "last": "Kwok", "suffix": "" } ], "year": 2011, "venue": "ICML, ICML '11", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenliang Zhong and James Kwok. 2011. Efficient sparse modeling with automatic feature grouping. In ICML, ICML '11, pages 9-16.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "e \u03b4(f, e , e * , W ), (1) with \u03b4(f, e , e * , W ) = max W H(f, e ) \u2212 H(f, e * ) \u2022 W + 1, 0 ,", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "e , e * , W ) = H(f, e ) \u2212 H(f, e * ), if \u03b4(f, e , e * , W ) > 0; 0, else.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "Followed by Lemma 1, minimizing Eq.2 is equivalent to minimizing the following equation if we ignore the zero compo-2 For easier understanding, i (j ) denotes the index in {1, \u2022 \u2022 \u2022 , u}, while i(j) denotes the index in {1, \u2022 \u2022 \u2022 , d}.", "type_str": "figure" }, "TABREF1": { "num": null, "html": null, "text": "based on the ranking loss L(W ) defined in Eq.1. We tune the translation BLEU scores on the test set and tuning runtimes (minutes) for the different tuning methods with different settings. Tuning sets dev and train denote the development and training data sets, respectively. \"Active\" denotes the number of active features for all methods except OSCAR or active grouped features for OSCAR; and \"Reused\" denotes the number of active (or grouped) features which also appear during 1000-best decoding on the test set. Boldface BLEU means our method OSCAR is significantly better than other methods with p < 0.05.", "type_str": "table", "content": "
Methods | Tuning set | Feature set | Active features | Reused features | devtest BLEU4 | test BLEU4 | Runtime (min)
MERT | dev | default | 8 | 8 | 45.7 | 40.6 | 15
PRO | dev | default | 8 | 8 | 46.3 | 41.1 | 34
PRO | train | default | 8 | 8 | 42.8 | 36.8 | 834
PRO | dev | +id | 110814 | 534 | 45.5 | 40.2 | 47
L1 | train | +id | 5847 | 1 | 42.7 | 36.9 | 975
L1 | dev | +id | 4432 | 48 | 46.2 | 41.0 | 39
OSCAR | - | +group | 503 | 425 | 46.9 | 41.8 | 1256
" } } } }