vladbogo posted an update (Feb 16):
A new paper from Google DeepMind explores the effect of premise ordering on large language models (LLMs) in reasoning tasks. Although, logically, the order in which premises are stated should not affect the validity of the conclusion, the study finds that LLM performance varies substantially with how the premises are arranged. Here's a summary:

The research investigates how the order of premises affects LLMs in logical and mathematical reasoning tasks, challenging the assumption that premise sequence is irrelevant to the outcome.

Key Findings:
* Logical Reasoning: LLMs perform best when the premises are presented in a forward order that mirrors the progression of the proof; deviations from this order cause significant performance drops (see the sketch below).
* Mathematical Reasoning: The newly introduced R-GSM benchmark, built by reordering the statements of grade-school math word problems, reveals a similar sensitivity: accuracy drops when the problem statements no longer follow their natural order.
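
To make the effect concrete, here is a minimal sketch (not code from the paper) of how one might build two logically equivalent prompts that differ only in premise order. The premises, question, and `build_prompt` helper are illustrative assumptions, not artifacts from the study.

```python
# Minimal sketch (illustrative only): forward vs. shuffled premise order
# for a simple modus-ponens chain A -> B -> C -> D.
import random

premises = [
    "If A then B.",  # forward order: each premise appears in the
    "If B then C.",  # same sequence in which the proof needs it
    "If C then D.",
]
question = "Given that A is true, is D true?"

def build_prompt(premise_list, question):
    """Concatenate premises and the question into a single prompt string."""
    return " ".join(premise_list) + " " + question

forward_prompt = build_prompt(premises, question)

shuffled = premises.copy()
random.shuffle(shuffled)  # permuted order, logically equivalent
shuffled_prompt = build_prompt(shuffled, question)

print(forward_prompt)
print(shuffled_prompt)
# The paper reports that LLM accuracy drops on shuffled variants like the
# second prompt, even though the two prompts are logically equivalent.
```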

Congrats to the authors for their work!

Paper: Premise Order Matters in Reasoning with Large Language Models (2402.08939).