The evaluation of this project is to answer trivia questions. You do
not need to do well at this task, but you should submit a system that
completes the task or create adversarial questions in that setting. This will help the whole class share data and
resources.
If you focus on something other than predicting answers, *that's fine*!
About the Data
==============
Quiz bowl is an academic competition between schools in
English-speaking countries; hundreds of teams compete in dozens of
tournaments each year. Quiz bowl is different from Jeopardy, a recent
application area. While Jeopardy also uses signaling devices, these
are only usable after a question is completed (interrupting Jeopardy's
questions would make for bad television). Thus, Jeopardy is rapid
classification followed by a race---among those who know the
answer---to punch a button first.
Here's an example of a quiz bowl question:
Expanding on a 1908 paper by Smoluchowski, he derived a formula for
the intensity of scattered light in media with fluctuating densities
that reduces to Rayleigh's law for ideal gases in The Theory of the
Opalescence of Homogeneous Fluids and Liquid Mixtures near the Critical
State. That research supported his theories of matter first developed
when he calculated the diffusion constant in terms of fundamental
parameters of the particles of a gas undergoing Brownian Motion. In
that same year, 1905, he also published On a Heuristic Point of View
Concerning the Production and Transformation of Light. That
explication of the photoelectric effect won him the 1921 Nobel Prize in Physics.
For ten points, name this German physicist best known for his theory
of Relativity.
*ANSWER*: Albert _Einstein_
Two teams listen to the same question. Teams interrupt the question at
any point by "buzzing in"; if the answer is correct, the team gets
points and the next question is read. Otherwise, the team loses
points and the other team can answer.
You are welcome to use any *automatic* method to choose an answer. It
need not be similar to nor build on our provided systems. In addition to
the data we provide, you are welcome to use any external data *except*
our test quiz bowl questions (i.e., don't hack our server!). You are
welcome (and encouraged) to use any publicly available software, but
you may want to check on Piazza for suggestions as many tools are
better (or easier to use) than others.
If you don't like the interruptibility of questions, you can also just answer entire questions. However, you must also output a confidence.
Competition
==================
We will use the Dynabench website (https://dynabench.org/tasks/qa). If you remember the past workshop on Dynabench submission, this is the same process. The specific task name is "Grounded QA". There, with the help of the video tutorial, you submit your QA model and assess how it did compared to others. The assessment will take place by testing your QA model on several QA test datasets, and your results and your competitors' will be visible on the leaderboard. Your goal is to rank the highest in terms of expected wins: you buzz in with probability proportional to your confidence, and if you're more right than the competition, you win.
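To make that scoring concrete, here is a minimal sketch of one plausible reading of "expected wins"; the function and its tie-handling are assumptions for illustration, not the official Dynabench scoring code:

```python
import random

def expected_wins(my_answers, opp_answers, answer_key, trials=1000):
    """Estimate how many questions you win against one opponent when each
    system buzzes with probability equal to its confidence.  `my_answers`
    and `opp_answers` map question ids to (guess, confidence) pairs;
    `answer_key` maps question ids to gold answers.  This is an assumed
    reading of the protocol, not the leaderboard's implementation."""
    wins = 0
    for _ in range(trials):
        for qid, gold in answer_key.items():
            my_guess, my_conf = my_answers[qid]
            opp_guess, opp_conf = opp_answers[qid]
            # Each side buzzes with probability proportional to its confidence;
            # you score only when you buzz and are correct.
            i_score = random.random() < my_conf and my_guess == gold
            opp_score = random.random() < opp_conf and opp_guess == gold
            if i_score and not opp_score:
                wins += 1
    return wins / trials
```

The practical takeaway: under a scheme like this, a well-calibrated confidence matters as much as raw accuracy.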
Writing Questions
==================
Alternatively, you can also *write* 50 adversarial questions that
challenge modern NLP systems. These questions must be diverse in the
subjects asked about, the skills computers need to answer the
questions, and the entities in those questions. Remember that your questions should be *factual* and
*specific* enough for humans to answer, because your task is to stump
the computers relative to humans!
In addition to the raw questions, you will also need to create citations describing:
* Why the question is difficult for computers: include citations from the NLP/AI/ML literature
* Why the information in the question is correct: include citations from the sources you drew on to write the question
* Why the question is interesting: include scholarly / popular culture artifacts to prove that people care about this
* Why the question is pyramidal: discuss why your first clues are harder than your later clues
**Category**
We want questions from many domains such as Art, Literature, Geography, History,
Science, TV and Film, Music, Lifestyle, and Sport. The questions
should cover all of the topics above (five questions for each
category, plus five more spread across categories of your choice, for
fifty total). Indicate in your writeup which category you chose for
each question.
Art:
* Questions about works: Mona Lisa, Raft of the Medusa
* Questions about forms: color, contour, texture
* Questions about artists: Picasso, Monet, Leonardo da Vinci
* Questions about context: Renaissance, post-modernism, expressionism, surrealism
Literature:
* Questions about works: novels (1984), plays (The Lion and the Jewel), poems (Rubaiyat), criticism (Poetics)
* Questions about major characters or events in literature: The Death of Anna Karenina, Noboru Wataya, the Marriage of Hippolyta and Theseus
* Questions about literary movements (Sturm und Drang)
* Questions about translations
* Cross-cutting questions (appearances of Overcoats in novels)
* Common link questions (the literary output of a country/region)
Geography:
* Questions about location: names of capital, state, river
* Questions about the place: temperature, wind flow, humidity
History:
* When: When did the First World war start?
* Who: Who is called Napoleon of Iran?
* Where: Where was the first Summer Olympics held?
* Which: Which is the oldest civilization in the world?
Science:
* Questions about terminology: The concept of gravity was discovered by which famous physicist?
* Questions about the experiment
* Questions about theory: The social action theory believes that individuals are influenced by this theory.
TV and Film:
* Quotes: What are the dying words of Charles Foster Kane in Citizen Kane?
* Title: What 1927 musical was the first "talkie"?
* Plot: In The Matrix, does Neo take the blue pill or the red pill?
Music:
* Singer: What singer has had a Billboard No. 1 hit in each of the last four decades?
* Band: Before Bleachers and fun., Jack Antonoff fronted what band?
* Title: What was Madonna's first top 10 hit?
* History: Which classical composer was deaf?
Lifestyle:
* Clothes: What clothing company, founded by a tennis player, has an alligator logo?
* Decoration: What was the first perfume sold by Coco Chanel?
Sport:
* Known facts: What sport is best known as the ‘king of sports’?
* Nationality: What’s the national sport of Canada?
* Sport player: The classic 1980 movie called Raging Bull is about which real-life boxer?
* Country: What country has competed the most times in the Summer Olympics yet hasn’t won any kind of medal?
**Diversity**
Other than category diversity, if you find an ingenious way of writing questions about underrepresented countries, you will get bonus points (indicate in your writeup which questions include the diversity component). You may decide which countries count as underrepresented using your own reasonable criteria (e.g., a smaller population may indicate underrepresentation), but make sure to articulate this in your writeup.
* Run state-of-the-art QA systems on the questions to show they struggle; give individual results for each question and a summary over all questions (one way to do this is sketched below)
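As a hypothetical illustration of the bullet above, you might loop your drafts through an off-the-shelf model; the model choice, file name, and JSON field names here are assumptions to adapt to your setup:

```python
import json

from transformers import pipeline  # pip install transformers

# A small generative model as a stand-in for "state of the art";
# swap in whichever systems you actually evaluate.
qa = pipeline("text2text-generation", model="google/flan-t5-small")

with open("draft_questions.json") as f:  # hypothetical file of your drafts
    questions = json.load(f)

correct = 0
for q in questions:
    guess = qa(q["question"], max_new_tokens=16)[0]["generated_text"]
    right = q["answer"].lower() in guess.lower()  # crude string match
    correct += right
    print(f'{q["question"][:60]}... -> {guess} [{"RIGHT" if right else "WRONG"}]')

print(f"Computer accuracy: {correct}/{len(questions)}")
```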
For an example of what the writeup for a single question should look like, see the adversarial HW:
https://github.com/Pinafore/nlp-hw/blob/master/adversarial/question.tex
Proposal
==================
The project proposal is a one page PDF document that describes:
* Who is on your team (team sizes can be between three and six
  students, but six is really too big to be effective; my suggestion
  is that most groups should be four or five).
* What techniques you will explore
* Your timeline for completing the project (be realistic; you should
have your first submission in a week or two)
Submit the proposal on Gradescope, but make sure to include all group
members. If all group members are not included, you will lose points. Late days cannot be used on this
assignment.
Milestone 1
======================
You'll have to report on how things are going: what's
working, what isn't, and how does it change your timeline? How does it change your division of labor?
*Question Writing*: You'll need to have answers selected for all of
your questions and first drafts of at least 15 questions. This must
be submitted as a JSON file so that we can run computer QA systems on it.
*Project*: You'll need to have made a submission to the leaderboard with something that satisfies the API.
Submit a PDF updating on your progress to Gradescope. If all team
members are not on the submission, you will lose points.
Milestone 2
===================
As before, provide an updated timeline / division of labor along with your intermediate results.
*Question Writing*: You'll need to have incorporated the feedback from the first questions and completed a first draft of at least 30 questions. You'll also need machine results for your questions and an overall evaluation of your human/computer accuracy.
*Project*: You'll need to have made a submission to the leaderboard with a working system (i.e., one that doesn't just obey the API but actually gets reasonable answers).
Submit a PDF updating on your progress.
Final Presentation
======================
The final presentation will be virtual (uploading a video). In
the final presentation you will:
* Explain what you did
* Who did what. For example, for the question writing project a team of five people might write: A wrote the first draft of questions. B and C verified they were initially answerable by a human. B ran computer systems to verify they were challenging to a computer. C edited the questions and increased the computer difficulty. D and E verified that the edited questions were still answerable by a human. D and E checked all of the questions for factual accuracy and created citations and the writeup.
* What challenges you had
* Review how well you did (based on the competition or your own metrics). If you do not use the course infrastructure to evaluate your project's work, you should talk about what alternative evaluations you used, why they're appropriate/fair, and how well you did on them.
* Provide an error analysis. An error analysis must contain examples from the
development set that you get wrong. You should show those sentences
and explain why (in terms of features or the model) they have the
wrong answer. You should have been doing this all along as you
derive new features, but this is your final inspection of
your errors. The feature or model problems you discover should not
be trivial features you could add easily. Instead, these should be
features or models that are difficult to correct. An error analysis
is not the same thing as simply presenting the error matrix, as it
does not inspect any individual examples. If you're writing questions, talk about examples of questions that didn't work out as intended.
* The linguistic motivation for your features / how you wrote the questions. This is a
computational linguistics class, so you should give precedence to
features / techniques that we use in this class (e.g., syntax,
morphology, part of speech, word sense, etc.). Given two features
that work equally well and one that is linguistically motivated,
we'll prefer the linguistically motivated one.
* Presumably you did many different things; how did they each
individually contribute to your final result?
Each group has 10 minutes to deliver their presentation. Please record the video, upload it to Google Drive, and include the link in your writeup submission.
Final Question Submission
======================
Because we need to get the questions ready for the systems, upload your raw questions on May 10. This doesn't include the citations or other parts of the writeup.
System Submission
======================
You must submit a version of your system by May 12. It may not be perfect, but this is what the question writing teams will use to test their results.
Your system should be sent directly to the professor and TAs as a zip file, including the correct dependencies and working inference code. Your inference code should run successfully from the root directory (as extracted from the zip) with the command:
```
> python3 inference.py --data=evaluation_set.json
```
The input will be a .json file in the same format as the file the adversarial question writing teams submit. The output should also be a string.
If you have any notes or comments that we should be aware of while running your code, please include them in the folder as a .txt file. Also, dependency information should be included as a .txt file.
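For reference, a minimal skeleton of what `inference.py` could look like; the JSON key and the `answer_question` hook are placeholders you should replace with your own system:

```python
import argparse
import json

def answer_question(question_text):
    """Hypothetical hook: replace with a call into your trained model."""
    raise NotImplementedError

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", required=True,
                        help="path to the evaluation .json file")
    args = parser.parse_args()

    with open(args.data) as f:
        questions = json.load(f)

    # Emit one answer string per question, as described above.
    for q in questions:
        print(answer_question(q["question"]))
```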
Please prepend your email title with [2024-CMSC 470 System Submission].
Project Writeup and JSON file
======================
By May 17, submit your project writeup explaining what
you did and what results you achieved. This document should
make it clear:
* Why this is a good idea
* What you did
* Who did what
* Whether your technique worked or not
For systems, please do not go over 2500 words unless you have a really good reason.
Images are a much better use of space than words, usually (there's no
limit on including images, but use judgement and be selective).
For question writing, you have one page (single spaced, two column) per question plus a two page summary of results. Talk about how you organized the question writing, how you evaluated the questions, and a summary of the results. Along with your writeup, turn in a json file including the raw text of each question, its answer, and its category. An example json file is included in this directory. Make sure your json file is in the correct format and can be loaded via the code below. Your submission will not be graded if it does not follow the format of the example json file.
```
import json

with open('path to your json file', 'r') as f:
    data = json.load(f)
```
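If you want an extra sanity check before submitting, something along these lines may help; the field names (`question`, `answer`, `category`) follow the description above, but verify them against the example file:

```python
import json

with open('path to your json file', 'r') as f:
    data = json.load(f)

# Assumed fields per the description above; check against the example file.
for i, entry in enumerate(data):
    for field in ('question', 'answer', 'category'):
        assert field in entry, f"entry {i} is missing '{field}'"

print(f"Loaded and validated {len(data)} questions.")
```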
Grade
======================
The grade will be out of 25 points, broken into five areas:
* _Presentation_: For your oral presentation, do you highlight what
you did and make people care? Did you use time well during the
presentation?
* _Writeup_: Does the writeup explain what you did in a way that is
clear and effective?
The final three areas are different between the system and the questions.
| | System | Questions |
|----------|----------|----------|
| _Technical Soundness_ | Did you use the right tools for the job, and did you use them correctly? Were they relevant to this class? | Were your questions correct and accurately cited? |
| _Effort_ | Did you do what you said you would, and was it the right amount of effort? | Are the questions well-written, interesting, and thoroughly edited? |
| _Performance_ | How did your techniques perform in terms of accuracy, recall, etc.? | Is the human accuracy substantially higher than the computer accuracy? |
All members of the group will receive the same grade. It's impossible for the course staff to adjudicate Rashomon-style accounts of who did what, and the goal of a group project is for all team members to work together to create a cohesive project. While it makes sense to divide the work into distinct areas of responsibility, at grading time we have no way to know who really did what, so it's the group's responsibility to create a piece of output that reflects well on the whole group.