arxiv:2402.03620

Self-Discover: Large Language Models Self-Compose Reasoning Structures

Published on Feb 6 · Featured in Daily Papers on Feb 7

Abstract

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x less inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.
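As the community comments below note, the self-discovery stage proceeds in three steps (SELECT relevant reasoning modules, ADAPT them to the task, IMPLEMENT them as an explicit step-by-step structure), and the discovered structure is then reused when solving each task instance. The following is a minimal Python sketch of that pipeline, not the authors' code: `call_llm` is a hypothetical placeholder for any chat-completion client, the prompt wording is paraphrased, and the module list is a short illustrative excerpt rather than the full set of 39 seed modules mentioned in the comments.

```python
# Minimal sketch of the SELF-DISCOVER pipeline (SELECT -> ADAPT -> IMPLEMENT).
# `call_llm` is a hypothetical placeholder for any chat-completion API, and the
# module list below is a small illustrative excerpt, not the full set of 39.

REASONING_MODULES = [
    "How could I devise an experiment to help solve the problem?",
    "How can I simplify the problem so that it is easier to solve?",
    "Critical thinking: analyze the problem from different perspectives.",
    "Let's think step by step.",
]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to GPT-4, PaLM 2, Llama 2, etc."""
    raise NotImplementedError("plug in a real LLM client here")

def self_discover(task_examples: list[str]) -> str:
    """One-time pass per task type: compose an explicit reasoning structure."""
    tasks = "\n".join(task_examples)
    # SELECT: pick the atomic reasoning modules relevant to this task.
    selected = call_llm(
        "Select the reasoning modules that are crucial for solving these tasks:\n"
        + "\n".join(REASONING_MODULES) + "\n\nTasks:\n" + tasks
    )
    # ADAPT: rephrase the selected modules so they are task-specific.
    adapted = call_llm(
        "Rephrase each selected module so it better fits these tasks:\n"
        + selected + "\n\nTasks:\n" + tasks
    )
    # IMPLEMENT: operationalize the adapted modules into a step-by-step plan.
    return call_llm(
        "Turn the adapted modules into an explicit step-by-step reasoning "
        "structure (e.g. a JSON plan) to follow when solving such tasks:\n"
        + adapted + "\n\nTasks:\n" + tasks
    )

def solve(task: str, structure: str) -> str:
    """Per-instance pass: the model follows and fills in the discovered structure."""
    return call_llm(
        "Follow this reasoning structure, filling in each step, to solve the task:\n"
        + structure + "\n\nTask: " + task
    )
```

Because the structure is discovered once per task type and then reused across instances, the per-instance cost stays close to a single decoding pass, which is consistent with the efficiency comparison against self-consistency in the abstract.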

Community

Introducing my new blog series, "AI Research Chronicle: Exploring the Latest in AI". The ever-expanding world of AI research is an exciting journey, full of promise and potential, and with this series I aim to inspire and inform, unpacking the most interesting and visionary ideas in this rapidly evolving area. I covered this paper on my blog: https://ajithp.com/2024/02/11/self-discover-large-language-models/

I created a custom GPT which automatically implements this Self-Discover 'Select - Adapt - Implement' approach, based on the paper; it also utilizes the 39 reasoning modules provided in the paper!

I'd love to get some feedback, I hope you all find it useful!
https://chat.openai.com/g/g-36cJS50di-self-discovering-gpt

I've had a little trouble getting it to reason through the SVG problem properly, so if anyone has success or an idea for improving the GPT instructions, I'd love to hear it!

@Flynnbo

I got some interesting results using your prompt lol

@Flynnbo I don't have GPT Plus to try out the custom GPT.

So I have open-sourced a GitHub repository for using the Self-Discover 'Select - Adapt - Implement' approach in your own apps.

https://github.com/sudhanshu746/self-discover-prompt

I looked at the GitHub repo briefly and asked myself whether it could really be so simple.
I ran over some ideas on this with Gemini, if you are interested: https://github.com/meta-introspector/self-discover-prompt/issues/1. My work on UniMath and MetaCoq proof interpretation via the LLM is how I would like to approach this, with MetaCoq/UniMath as one of the infinitely many universal meta-languages.

I implemented it in Python. It works with OpenAI models, Gemini models, and local GGUF models. You can even mix and match which models do the self-discovery and which ones do the solving with the reasoning structure; a rough sketch of that setup follows below.

https://github.com/waszumteufel/autologic
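A rough sketch of that mix-and-match setup, purely for illustration (this is not autologic's actual API; the class name and prompts here are invented):

```python
# Illustrative only -- NOT autologic's real API. One callable discovers the
# reasoning structure, a different (possibly cheaper or local) one solves with it.
from dataclasses import dataclass
from typing import Callable, List

LLM = Callable[[str], str]  # any function mapping a prompt to a completion

@dataclass
class MixAndMatchRunner:
    discover_model: LLM  # e.g. a strong hosted model, called once per task type
    solve_model: LLM     # e.g. a local GGUF model, called for every instance

    def discover(self, task_examples: List[str]) -> str:
        # Compose the explicit reasoning structure with the stronger model.
        return self.discover_model(
            "Compose a step-by-step reasoning structure for tasks like these:\n"
            + "\n".join(task_examples)
        )

    def solve(self, structure: str, task: str) -> str:
        # Reuse the structure with the cheaper model for each task instance.
        return self.solve_model(
            "Follow this reasoning structure, filling in each step, to solve:\n"
            + structure + "\nTask: " + task
        )
```

The appeal of splitting it this way is that the expensive model only runs once per task type to produce the structure, while the cheap or local model handles every individual instance.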

Will the code be made available?

