\pdfoutput=1

\documentclass[11pt]{article}

\usepackage[review]{EMNLP2023}

\usepackage{times}
\usepackage{latexsym}

\usepackage[T1]{fontenc}

\usepackage[utf8]{inputenc}

\usepackage{microtype}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{inconsolata}

\usepackage{amsfonts}
\usepackage{soul}

\title{Schema Guided Dialogue for HR}

\author{First Author \\
Affiliation / Address line 1 \\
Affiliation / Address line 2 \\
Affiliation / Address line 3 \\
\texttt{email@domain} \\\And
Second Author \\
Affiliation / Address line 1 \\
Affiliation / Address line 2 \\
Affiliation / Address line 3 \\
\texttt{email@domain} \\}

\begin{document}
\maketitle
\begin{abstract}
This document is a supplement to the general instructions for *ACL authors. It contains instructions for using the \LaTeX{} style file for EMNLP 2023.
The document itself conforms to its own specifications, and is, therefore, an example of what your manuscript should look like.
These instructions should be used both for papers submitted for review and for final versions of accepted papers.
\end{abstract}

\section{Introduction}

Dialogue State Tracking (DST) is the task of monitoring and predicting a user's ongoing intent and its specific details over the course of a conversation (cite). Traditional DST methods rely on a fixed ontology and designate a unique classifier for every slot (cite). However, these methods face challenges in dynamic real-world scenarios where services change, necessitating frequent model updates. To address this, recent work has shifted towards more adaptable schema-guided techniques, which use natural language descriptions of all potential intents and slots and can therefore adapt to new services without prior training (cite).

People analytics is a data-driven approach to improving people-related decisions for the purpose of advancing the success not only of the organization but also of individual employees (cite Emilio J. Castilla). Collecting individual employee information is a crucial part of people analytics and can save employees a lot of time on repetitive work. For training and onboarding, the company needs to collect information to provide training recommendations and onboarding guidance. Employees also spend a lot of time requesting time off, scheduling meetings, submitting tickets for IT issues, and filing medical claims. Schema-guided dialogue can additionally be used for employee recognition, feedback collection, performance reviews, and time sheet submission.

Many schema-guided dialogue datasets have been proposed, such as MultiWOZ, Schema-Guided Dialogue, and SGD-X, but their tasks concern services for customers. (Expand)
Moreover, the transfer learning ability of existing DST algorithms is very low: the Joint Goal Accuracy under transfer learning for TRADE, SUMBT, SimpleTOD++, and T5DST is below 35\% on MultiWOZ 2.0. Thus, to handle schema-guided dialogue in people analytics, we need an additional dataset.

On the other hand, the time complexity of many of these methods is high. Strong methods either rely on larger language models that require one inference pass for each slot in the schema (DaP; Lee et al., 2021) or separately encode the dialogue history, slots, and slot values (LUNA; Wang et al., 2022). In practice, if the response time exceeds a certain threshold (cite some work), users are less interested in continuing the conversation. Additionally, an extractive dialogue state tracking model alone is not enough to integrate the collected information into the system: if an employee files a time-off request, the algorithm may capture that the time off starts ``next Friday'', but the system needs to interpret this and convert it to a specific date. Furthermore, the company may already store some employee information and may frequently alter the dialogue tracking system, so the data representation must be flexible. Last but not least, to hold a conversation with an individual employee, the system should show empathy and ask the right questions.

In this work, we propose a framework for designing a schema-guided dialogue system. Our contributions include designing a system to collect relevant schemas, identifying methods to generate high-quality synthetic data for our use cases, and identifying the best method for each component of the system.

\section{Related Work}
\textbf{Schema-Guided Dialogue (SGD)~\citep{rastogi2020scalable}}: Unlike many task-oriented dialogue datasets with a fixed ontology, SGD introduces new slots and services in the test set. This design tests not only DST performance but also zero-shot generalization. The dataset provides descriptions for all intents and slots. \textbf{SGD-X~\citep{Lee_2022}}: an extension of SGD, offering five schema variants with varying linguistic styles. \textbf{MultiWOZ~\citep{budzianowski2020multiwoz}}: human-human dialogues collected in a Wizard-of-Oz setup. Unlike SGD, its ontology remains constant, with no new services at test time. However, all of these scenarios involve customers filling a request, such as renting a car or booking a restaurant, which makes them less transferable to our use case. While these works provide guidance on data collection, most of it requires human labeling, which is time-consuming and costly.

SGD-baseline, SGP-DST, and DS-DST encode the utterance and slot schema either separately or jointly to predict the relevant slot. Multi-Task BERT adopts a slot-carryover mechanism and encodes only the preceding system utterance and the current utterance. LUNA separately encodes the dialogue history, slots, and slot values, and learns to first predict the correct utterance on which to condition the slot value prediction. However, this is not useful in practice because we cannot capture all conversations upfront.

Seq2Seq-DU (Feng et al., 2021) creates a state representation from schema elements and utterance tokens. AG-DST (Tian et al., 2021) generates a new state in two passes without schema conditioning. DaP (Lee et al., 2021) offers two variants, one decoding all slot values collectively and the other individually; the individual variant achieves strong performance but requires a separate pass for each slot, which is time-consuming.
Lastly, D3ST (Zhao et al., 2022) decodes the entire dialogue state in a single pass, using an index-picking mechanism for categorical slots. D3ST achieves the best performance across datasets and metrics, but it encodes all information in the input, which makes real-world inference slow because the input is very long. Overall, generative methods perform best in JGA for schema-guided dialogue.

\section{Methods}
Given a schema, we first convert each entity to a question. [We can use the self-instruct paper to generate questions.] We follow (2022 paper T5), since a question can provide enough information about the entity. For each utterance, we use a multiple-choice question to select the entities that can be answered by that utterance. We then format the question and utterance as an entity extraction task and select the best entity span from the utterance. As the last step, we use Claude to generate the final schema values that can actually be used, for example converting a fuzzy time such as ``next Friday'' to an exact date.

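The pipeline above (entity-to-question conversion, multiple-choice selection, extraction, and normalization) can be sketched as follows. This is a minimal illustration only: the slot names, question wording, and the rule-based date resolver are assumptions standing in for the model calls described in the text.

```python
import datetime

# Hypothetical slot schema for an HR time-off request; the slot names and
# question wording are illustrative, not taken from the paper.
SLOT_QUESTIONS = {
    "start_date": "What is the start date of the time off?",
    "end_date": "What is the end date of the time off?",
    "reason": "What is the reason for the time off?",
}

def multiple_choice_prompt(utterance: str, slot_questions: dict) -> str:
    """Build the multiple-choice prompt asking which slot questions the
    utterance answers (the paper sends this to a model; here we only
    construct the text)."""
    options = "\n".join(
        f"{i}. {q}" for i, q in enumerate(slot_questions.values(), start=1)
    )
    return (f"Utterance: {utterance}\n"
            f"Which of these questions does the utterance answer?\n{options}")

def extraction_prompt(utterance: str, question: str) -> str:
    """Format the selected question as an extractive QA task."""
    return f"Context: {utterance}\nQuestion: {question}\nAnswer span:"

def normalize_date(value: str, today: datetime.date) -> str:
    """Resolve a fuzzy time to an exact date (the paper uses Claude for this
    step; a toy rule for 'next Friday' stands in here)."""
    if value.lower() == "next friday":
        days_ahead = (4 - today.weekday()) % 7 + 7  # Friday is weekday 4
        return (today + datetime.timedelta(days=days_ahead)).isoformat()
    return value
```

The prompts are plain strings so any backbone model can be swapped in; only the normalization step is stateful (it needs the current date).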
\subsection{Dataset Generation}
While many datasets exist, as discussed in Section 2, they are not relevant to people analytics. Generating a schema-guided dialogue dataset following previous work requires extensive labeling effort. While we can leverage a language model to generate synthetic conversations, we also need another model to generate the current schema for each utterance. This causes several problems: (1) it requires many guidelines to generate a conversation in a way that captures the schema; (2) the captured schema is not always accurate, or the extracted text does not come from the answer; (3) the cost of synthetic data generation is high if multiple agents interact with each other and an additional agent extracts entities. However, in our case, we only require synthetic data to train two models: the first selects the right entities answered by an utterance, and the second extracts the right entity. Thus, we can generate the synthetic data in a different format. We ask the model to provide: an utterance, some questions from the same domain, the relevant questions that are answered by the utterance, and the extracted entity for each answer.

Following [cite alpaca paper, self-instruct and textbook is all you need], we batch the process by explicitly asking the model to produce multiple samples. We randomly select a domain from a fixed list of domains, and sample a random number of questions and answers. After generating synthetic data, we filter low-quality and incorrect samples as follows: (1) we remove entities that cannot be extracted from the utterance; (2) we remove questions that are paraphrases of each other; (3) we validate the correctness of each sample by splitting it into utterance-question pairs and asking two small models whether the selected question is answered; (4) we remove questions that are completely irrelevant to the utterance; (5) we validate question-answer correctness with three extraction models and use human validation when they disagree.

We add the different methods incrementally and compare synthetic data quality through human labelers.

We split the data into an uncleaned set, a cleaned set, and a test set.

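The deterministic part of the filtering pipeline above can be sketched as follows. Filters (1) and (2) are implemented directly; filters (3)-(5) require model calls and human validation and are only noted in comments. The sample field names are illustrative assumptions.

```python
from difflib import SequenceMatcher

def filter_sample(sample: dict, min_overlap: float = 0.9) -> dict:
    """Apply filters (1) and (2) to one synthetic sample.
    Filters (3)-(5) (small-model validation, relevance check, and
    extraction-model cross-validation with human tie-breaking) would call
    external models and are out of scope for this sketch."""
    utterance = sample["utterance"]
    kept, seen_questions = [], []
    for qa in sample["qa_pairs"]:
        # (1) drop entities that cannot be extracted from the utterance
        if qa["entity"] not in utterance:
            continue
        # (2) drop questions that are near-paraphrases of a kept question
        if any(SequenceMatcher(None, qa["question"], q).ratio() >= min_overlap
               for q in seen_questions):
            continue
        seen_questions.append(qa["question"])
        kept.append(qa)
    return {"utterance": utterance, "qa_pairs": kept}
```

The paraphrase check here is a cheap string-similarity proxy; a learned paraphrase detector could replace `SequenceMatcher` without changing the interface.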
\subsection{Slots Finder}
For entity selection, we need to select the relevant entities that can be answered by the utterance.
Since an entity name by itself is not informative, we can either write a description of it [2022 paper SGD] or generate a question about it [2021 paper T5]. To determine which format is better, we compare the two with different templates. For each template, we vary how the question or description is phrased, randomize the order, and average over 10 different phrasings. We show an example template in the appendix. We also compare different models of the same size (FlanT5-3B, Falcon-3B, LLaMA2-3B, Vicuna-3B, and MPT-3B). We choose 3B models because they are faster at inference. (Give data point to compare response time)

Experiment 1:
On 200 human-labeled synthetic samples, compare these methods on recall and select the best one.

Based on the results, we finetune the selected top models and compare their performance. We compare different finetuning methods, such as finetuning the whole model and parameter-efficient finetuning.
Experiment 2:
Finetune on 1k and 10k samples with all methods.
On 200 human-labeled synthetic samples, compare these methods on recall and select the best one.

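The recall metric used to compare slot-selection methods in the experiments above can be computed as a simple micro-average; this sketch assumes gold and predicted slots are given as sets of slot names per utterance.

```python
def slot_recall(gold: list, predicted: list) -> float:
    """Micro-averaged recall for slot selection: fraction of gold slots
    that the model picked, summed over all utterances.
    `gold` and `predicted` are parallel lists of sets of slot names."""
    hit = sum(len(g & p) for g, p in zip(gold, predicted))
    total = sum(len(g) for g in gold)
    return hit / total if total else 0.0
```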
\begin{table*}[h]
\begin{center}
\begin{tabular}{l|l|l|l|l|l|l}
\hline
 & Flan T5 XL & Falcon & MPT & Deberta & Roberta & Flan T5 Trained \\
\hline
Size & 3B & 7B & 7B & & & 220M \\
Rouge 1 & 0.639 & 0.091 & 0.321 & 0.657 & 0.636 & 0.713 \\
Rouge 2 & 0.02 & 0.004 & 0.015 & 0.02 & 0.015 & 0.02 \\
Rouge L & 0.637 & 0.091 & 0.319 & 0.657 & 0.636 & 0.715 \\
\hline
\end{tabular}
\end{center}
\end{table*}

\subsection{Entity Extraction}
For entity extraction, our goal is to identify the best entity extraction method without finetuning and then finetune the best method. (Add a literature review of entity extraction methods, such as DeBERTa, etc.)

Experiment 1:

Experiment 2:
Based on the results, we finetune the selected top models and compare their performance. We compare different finetuning methods, such as finetuning the whole model and parameter-efficient finetuning.
Finetune on 1k and 10k samples with LoRA, prefix tuning, and full finetuning.
On 200 human-labeled synthetic samples, compare these methods on accuracy and select the best one.

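Extraction quality can be scored with a token-level F1 between the extracted entity and the reference, in the spirit of the ROUGE-1 numbers reported above. This is a simplified stand-in, not the official ROUGE implementation.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between an extracted entity and its reference,
    case-insensitive, with multiset token overlap."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if not pred or not ref or not overlap:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```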
\subsection{Production Considerations}
In production, we employ a large language model (LLM) to generate accurate and relevant content. An integral component is a robust fact-checking mechanism that validates the generated answers, bolstering the reliability and credibility of the output. Responses should also convey empathy, fostering a more human-centric and relatable interaction. The system should maintain transparency by surfacing the information captured during processing, which contributes to accountability and trust. Future extensions include generating resumes and emails, interfacing with various APIs, answering queries, and identifying relevant tickets.

\subsection{Future Work}
To enhance the model's capabilities, it is crucial to amass a substantial dataset for finetuning a larger model that can execute multiple tasks concurrently. Prerequisites include handling longer sequences and providing clear, upfront instructions for entity extraction and task execution. Furthermore, illustrating the connections to other agents is vital, offering a comprehensive and interconnected approach to task management and execution.

\section{Conclusion}

\section{Preamble}
\begin{table*}
\centering
\begin{tabular}{lll}
\hline
\textbf{Output} & \textbf{natbib command} & \textbf{Old ACL-style command}\\
\hline
\citep{ct1965} & \verb|\citep| & \verb|\cite| \\
\citealp{ct1965} & \verb|\citealp| & no equivalent \\
\citet{ct1965} & \verb|\citet| & \verb|\newcite| \\
\citeyearpar{ct1965} & \verb|\citeyearpar| & \verb|\shortcite| \\
\citeposs{ct1965} & \verb|\citeposs| & no equivalent \\
\citep[FFT;][]{ct1965} & \verb|\citep[FFT;][]| & no equivalent\\
\hline
\end{tabular}
\caption{\label{citation-guide}
Citation commands supported by the style file.
The style is based on the natbib package and supports all natbib citation commands.
It also supports commands defined in previous ACL-style files for compatibility.
}
\end{table*}
The first line of the file must be
\begin{quote}
\begin{verbatim}
\documentclass[11pt]{article}
\end{verbatim}
\end{quote}
To load the style file in the review version:
\begin{quote}
\begin{verbatim}
\usepackage[review]{EMNLP2023}
\end{verbatim}
\end{quote}
For the final version, omit the \verb|review| option:
\begin{quote}
\begin{verbatim}
\usepackage{EMNLP2023}
\end{verbatim}
\end{quote}
To use Times Roman, put the following in the preamble:
\begin{quote}
\begin{verbatim}
\usepackage{times}
\end{verbatim}
\end{quote}
(Alternatives like txfonts or newtx are also acceptable.)
Please see the \LaTeX{} source of this document for comments on other packages that may be useful.
Set the title and author using \verb|\title| and \verb|\author|. Within the author list, format multiple authors using \verb|\and| and \verb|\And| and \verb|\AND|; please see the \LaTeX{} source for examples.
By default, the box containing the title and author names is set to the minimum of 5 cm. If you need more space, include the following in the preamble:
\begin{quote}
\begin{verbatim}
\setlength\titlebox{<dim>}
\end{verbatim}
\end{quote}
where \verb|<dim>| is replaced with a length. Do not set this length smaller than 5 cm.

\section{Document Body}

\subsection{Footnotes}

Footnotes are inserted with the \verb|\footnote| command.\footnote{This is a footnote.}

\subsection{Tables and figures}

See Table~\ref{tab:accents} for an example of a table and its caption.
\textbf{Do not override the default caption sizes.}

\subsection{Hyperlinks}

Users of older versions of \LaTeX{} may encounter the following error during compilation:
\begin{quote}
\tt\verb|\pdfendlink| ended up in different nesting level than \verb|\pdfstartlink|.
\end{quote}
This happens when pdf\LaTeX{} is used and a citation splits across a page boundary. The best way to fix this is to upgrade \LaTeX{} to 2018-12-01 or later.

\subsection{Citations}

Table~\ref{citation-guide} shows the syntax supported by the style files.
We encourage you to use the natbib styles.
You can use the command \verb|\citet| (cite in text) to get ``author (year)'' citations, like this citation to a paper by \citet{Gusfield:97}.
You can use the command \verb|\citep| (cite in parentheses) to get ``(author, year)'' citations \citep{Gusfield:97}.
You can use the command \verb|\citealp| (alternative cite without parentheses) to get ``author, year'' citations, which is useful for using citations within parentheses (e.g. \citealp{Gusfield:97}).

\subsection{References}

\nocite{Ando2005,borschinger-johnson-2011-particle,andrew2007scalable,rasooli-tetrault-2015,goodman-etal-2016-noise,harper-2014-learning}

The \LaTeX{} and Bib\TeX{} style files provided roughly follow the American Psychological Association format.
If your own bib file is named \texttt{custom.bib}, then placing the following before any appendices in your \LaTeX{} file will generate the references section for you:
\begin{quote}
\begin{verbatim}
\bibliographystyle{acl_natbib}
\bibliography{custom}
\end{verbatim}
\end{quote}
You can obtain the complete ACL Anthology as a Bib\TeX{} file from \url{https://aclweb.org/anthology/anthology.bib.gz}.
To include both the Anthology and your own .bib file, use the following instead of the above.
\begin{quote}
\begin{verbatim}
\bibliographystyle{acl_natbib}
\bibliography{anthology,custom}
\end{verbatim}
\end{quote}
Please see Section~\ref{sec:bibtex} for information on preparing Bib\TeX{} files.

\subsection{Appendices}

Use \verb|\appendix| before any appendix section to switch the section numbering over to letters. See Appendix~\ref{sec:appendix} for an example.

\section{Bib\TeX{} Files}
\label{sec:bibtex}

Unicode cannot be used in Bib\TeX{} entries, and some ways of typing special characters can disrupt Bib\TeX's alphabetization. The recommended way of typing special characters is shown in Table~\ref{tab:accents}.

Please ensure that Bib\TeX{} records contain DOIs or URLs when possible, and for all the ACL materials that you reference.
Use the \verb|doi| field for DOIs and the \verb|url| field for URLs.
If a Bib\TeX{} entry has a URL or DOI field, the paper title in the references section will appear as a hyperlink to the paper, using the hyperref \LaTeX{} package.

\section*{Limitations}
EMNLP 2023 requires all submissions to have a section titled ``Limitations'', for discussing the limitations of the paper as a complement to the discussion of strengths in the main text. This section should occur after the conclusion, but before the references. It will not count towards the page limit.

The discussion of limitations is mandatory. Papers without a limitation section will be desk-rejected without review.
ARR-reviewed papers that did not include ``Limitations'' section in their prior submission, should submit a PDF with such a section together with their EMNLP 2023 submission.

While we are open to different types of limitations, just mentioning that a set of results have been shown for English only probably does not reflect what we expect.
Mentioning that the method works mostly for languages with limited morphology, like English, is a much better alternative.
In addition, limitations such as low scalability to long text, the requirement of large GPU resources, or other things that inspire crucial further investigation are welcome.

\section*{Ethics Statement}
Scientific work published at EMNLP 2023 must comply with the \href{https://www.aclweb.org/portal/content/acl-code-ethics}{ACL Ethics Policy}. We encourage all authors to include an explicit ethics statement on the broader impact of the work, or other ethical considerations after the conclusion but before the references. The ethics statement will not count toward the page limit (8 pages for long, 4 pages for short papers).

\section*{Acknowledgements}
This document has been adapted by Yue Zhang, Ryan Cotterell and Lea Frermann from the style files used for earlier ACL and NAACL proceedings, including those for
ACL 2020 by Steven Bethard, Ryan Cotterell and Rui Yan,
ACL 2019 by Douwe Kiela and Ivan Vuli\'{c},
NAACL 2019 by Stephanie Lukin and Alla Roskovskaya,
ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu,
NAACL 2018 by Margaret Mitchell and Stephanie Lukin,
Bib\TeX{} suggestions for (NA)ACL 2017/2018 from Jason Eisner,
ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell,
ACL 2012 by Maggie Li and Michael White,
ACL 2010 by Jing-Shin Chang and Philipp Koehn,
ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui,
ACL 2005 by Hwee Tou Ng and Kemal Oflazer,
ACL 2002 by Eugene Charniak and Dekang Lin,
and earlier ACL and EACL formats written by several people, including
John Chen, Henry S. Thompson and Donald Walker.
Additional elements were taken from the formatting instructions of the \emph{International Joint Conference on Artificial Intelligence} and the \emph{Conference on Computer Vision and Pattern Recognition}.

\bibliography{anthology,custom}
\bibliographystyle{acl_natbib}

\appendix

\section{Example Appendix}
\label{sec:appendix}

This is a section in the appendix.

\end{document}