taesiri committed
Commit: 681f057
1 Parent(s): 4d65ead

Upload abstract/2212.09196.txt with huggingface_hub

Files changed (1)
  1. abstract/2212.09196.txt +1 -0
abstract/2212.09196.txt ADDED
@@ -0,0 +1 @@
+ The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
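
The commit message above states the file was uploaded with huggingface_hub. The following is a minimal sketch of how such an upload is typically performed with HfApi.upload_file; the repo_id and repo_type below are placeholders, not details taken from this commit.

```python
# Minimal sketch: uploading a single abstract file with huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from a prior `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="abstract/2212.09196.txt",   # local file to upload
    path_in_repo="abstract/2212.09196.txt",      # destination path inside the repo
    repo_id="<user>/<repo-name>",                # placeholder repo id (assumption)
    repo_type="dataset",                         # assuming a dataset repo (assumption)
    commit_message="Upload abstract/2212.09196.txt with huggingface_hub",
)
```

This produces a single commit containing only the added file, matching the +1 -0 diff shown above.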