arxiv:2312.04474

Chain of Code: Reasoning with a Language Model-Augmented Code Emulator

Published on Dec 7, 2023 · Featured in Daily Papers on Dec 8, 2023
Abstract

Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter -- we hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not only for logic and arithmetic tasks, but also for linguistic ones (and in particular, those that are a mix of both). For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation of "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable). However, LMs may still produce a valid solution if they are used not only to write the code, but also to selectively "emulate" the interpreter by generating the expected output of "detect_sarcasm(string)" and other lines of code that the interpreter cannot execute. In this work, we propose Chain of Code (CoC), a simple yet surprisingly effective extension that improves LM code-driven reasoning. The key idea is to encourage LMs to format linguistic sub-tasks in a program as flexible pseudocode whose undefined behaviors the interpreter can explicitly catch and hand off to an LM to simulate (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales well with large and small models alike, and broadens the scope of reasoning questions that LMs can correctly answer by "thinking in code". Project webpage: https://chain-of-code.github.io/.
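
For intuition, the control flow the abstract describes can be sketched in a few lines of Python. This is an illustrative reading of the idea, not the authors' implementation: run_chain_of_code, lm_simulate, and fake_lm are hypothetical names, and a real LMulator would query an actual language model rather than the hard-coded stub used here.

```python
# Sketch of the Chain-of-Code execution loop (our reading of the abstract,
# not the authors' code). The interpreter runs each line of LM-written code;
# any line it cannot execute (e.g., a call to the undefined detect_sarcasm)
# is handed off to an LM that predicts the resulting program state.

def run_chain_of_code(code_lines, lm_simulate):
    """Execute code line by line, falling back to an LM on failure.

    lm_simulate(line, state) stands in for a language-model call that
    returns the updated program state after "running" the line.
    """
    state = {}
    for line in code_lines:
        try:
            exec(line, {}, state)      # let the real interpreter try first
        except Exception:              # undefined behavior: hand off to the LM
            state = lm_simulate(line, state)
    return state

# Hypothetical stub playing the LMulator's role for this one call.
def fake_lm(line, state):
    if "detect_sarcasm" in line:
        state["is_sarcastic"] = True   # the LM's semantic judgment
    return state

program = [
    "count = 0",
    "is_sarcastic = detect_sarcasm('Oh, great, another meeting.')",
    "count += 1 if is_sarcastic else 0",
]
print(run_chain_of_code(program, fake_lm))  # {'count': 1, 'is_sarcastic': True}
```

The design point is the division of labor: everything the interpreter can execute exactly (arithmetic, control flow, bookkeeping) stays in the interpreter, and only the semantic calls are delegated to the LM.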

Community

I love this.

It seems like a natural next step to create a language-agnostic pseudo-code compiler.

Language in. Binary out.

And vice versa.

Binary as an extra modality

Hi Authors,

First off - excellent work, congratulations.

Your video on https://chain-of-code.github.io/ has the prompt "Q: How many countries have I been to? I have been to Mumbai, London, Washington, Grand Canyon ...".
You say "With direct prompting, the model predicts a number, leading to mistakes".

I just tried the above question with direct prompting in Claude-3 and got the correct answer of 3.
Could you perhaps provide a more complex example?

"Based on the locations you mentioned:

Mumbai is a city in India
London is a city in the United Kingdom
Washington likely refers to Washington D.C. in the United States
Grand Canyon is a national park in the United States

So from the information provided, you have been to 3 different countries:

  1. India
  2. United Kingdom
  3. United States

Therefore, the total number of countries you have been to based on those locations is 3 countries."
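
For what it's worth, the contrast with direct prompting is easiest to see once the counting itself is offloaded to the interpreter. Below is a minimal sketch (not from the paper) of how Chain of Code might format this query; get_country is a hypothetical helper that, in the actual method, would be left unimplemented and simulated by the LMulator, while the interpreter does the deduplication and counting exactly. The stub here hard-codes the LM's would-be answers so the example runs.

```python
# Hypothetical LM-simulated lookup; in Chain of Code this call would be
# left undefined and handed to the LMulator instead of implemented.
def get_country(place):
    return {"Mumbai": "India", "London": "United Kingdom",
            "Washington": "United States", "Grand Canyon": "United States"}[place]

places = ["Mumbai", "London", "Washington", "Grand Canyon"]
countries = {get_country(p) for p in places}   # set deduplicates the two US places
print(len(countries))                          # 3 -- counted by the interpreter
```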

