vladbogo posted an update Mar 1
"What Evidence Do Language Models Find Convincing?" is a new paper that explores what types of evidence and argumentation techniques language models find convincing when presented with ambiguous, open-domain questions that have conflicting answers online.

Key points:
* Dataset: It introduces "ConflictingQA," a dataset of controversial questions and real-world evidence paragraphs supporting both "yes" and "no" answers.
* Convincingness Metric: the "paragraph win rate" measures, when a model is shown two conflicting paragraphs, how often it predicts the answer that aligns with a given paragraph's stance (see the sketch after this list).
* Current models rely mainly on how relevant a paragraph is to the query, while largely ignoring stylistic features such as whether the text contains scientific references or is written in a neutral tone.
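
To make the metric concrete, here is a minimal sketch of how a paragraph win rate could be computed. The helper `predict_answer` and the data fields are illustrative assumptions, not the authors' actual implementation (see their repo for the real code):

```python
# Sketch of a "paragraph win rate" computation.
# Assumes a hypothetical predict_answer(question, yes_paragraph, no_paragraph)
# that returns "yes" or "no" after the model reads both conflicting paragraphs.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ConflictingPair:
    question: str
    yes_paragraph: str  # evidence supporting the "yes" answer
    no_paragraph: str   # evidence supporting the "no" answer


def paragraph_win_rate(
    pairs: List[ConflictingPair],
    predict_answer: Callable[[str, str, str], str],
    side: str = "yes",
) -> float:
    """Fraction of pairs where the model's answer matches the stance of
    the paragraph on the given side ("yes" or "no")."""
    if not pairs:
        return 0.0
    wins = sum(
        1
        for pair in pairs
        if predict_answer(pair.question, pair.yes_paragraph, pair.no_paragraph) == side
    )
    return wins / len(pairs)
```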

Congrats to the authors for their work!

Paper: What Evidence Do Language Models Find Convincing? (2402.11782)
Code: https://github.com/AlexWan0/rag-convincingness