{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:58.023095Z"
},
"title": "Machine Translation in Low Resource Setting",
"authors": [
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"settlement": "Bombay"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "AI now and in future will have to grapple continuously with the problem of low resource. AI will increasingly be ML intensive. But ML needs data often with annotation. However, annotation is costly. Over the years, through work on multiple problems, we have developed insight into how to do language processing in low resource setting. Following 6 methods-individually and in combination-seem to be the way forward:",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "AI now and in future will have to grapple continuously with the problem of low resource. AI will increasingly be ML intensive. But ML needs data often with annotation. However, annotation is costly. Over the years, through work on multiple problems, we have developed insight into how to do language processing in low resource setting. Following 6 methods-individually and in combination-seem to be the way forward:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Linguistic embellishment (e.g. factor based MT, source reordering)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linguistic embellishment (e.g. factor based MT, source reordering)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Coref and NER, Sentiment and Emotion: each task helping the other to either boost accuracy or reduce resource requirement)",
"authors": [],
"year": null,
"venue": "Joint Modeling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joint Modeling (e.g., Coref and NER, Sentiment and Emotion: each task helping the other to either boost accuracy or reduce resource requirement)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "eye tracking based NLP",
"authors": [
{
"first": "",
"middle": [],
"last": "Multimodality",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multimodality (e.g., eye tracking based NLP, also picture+text+speech based Sentiment Analysis)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "embedding from multiple languages helping MT, close to 2 above) The present talk will focus on low resource machine translation. We describe the use of techniques from the above list and bring home the seriousness and methodology of doing Machine Translation in low resource settings",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cross Lingual Embedding (e.g., embedding from multiple languages helping MT, close to 2 above) The present talk will focus on low resource machine translation. We describe the use of techniques from the above list and bring home the seriousness and methodology of doing Machine Translation in low resource settings.",
"links": null
}
},
"ref_entries": {}
}
}