arxiv:2303.10130

GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models

Published on Mar 17, 2023
Authors: Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock

Abstract

We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.
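To get a feel for how task-level exposure labels roll up into the headline workforce numbers above (the roughly 80% and 19% figures), here is a minimal sketch in Python. The occupations, task labels, and employment counts are invented purely for illustration; the paper's actual analysis uses the O*NET task database and BLS employment data.

# Sketch: rolling task-level exposure labels up to workforce-level shares.
# All occupations, labels, and employment counts below are hypothetical.

occupations = {
    # 1 = task judged exposed to LLMs, 0 = not exposed (made-up labels)
    "interpreter":  {"employment": 50_000,  "tasks": [1, 1, 1, 0]},
    "survey_clerk": {"employment": 80_000,  "tasks": [1, 1, 0, 0, 0]},
    "dishwasher":   {"employment": 400_000, "tasks": [0, 0, 0, 0]},
}

def exposure_share(tasks):
    """Fraction of an occupation's tasks labeled as exposed."""
    return sum(tasks) / len(tasks) if tasks else 0.0

def share_of_workers_above(threshold):
    """Employment-weighted share of workers in occupations where at least
    `threshold` of the tasks are exposed."""
    total = sum(o["employment"] for o in occupations.values())
    hit = sum(o["employment"] for o in occupations.values()
              if exposure_share(o["tasks"]) >= threshold)
    return hit / total

print(f"workers with >=10% of tasks exposed: {share_of_workers_above(0.10):.0%}")
print(f"workers with >=50% of tasks exposed: {share_of_workers_above(0.50):.0%}")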

Community

I love how they're talking about impacted "tasks" as a way to distance themselves from the redundancies that may happen as a consequence. The more serious impacts of GPT-4, like job displacement, are left for future research.

In order to keep up with the two-GPTs pun they made in the title, the paper got a bit confusing to read, IMO.

Though I do think the concept of exposure is interesting:

We define exposure as a proxy for potential economic impact without distinguishing between labor-augmenting or labor-displacing effects.

However, one thing that is not clear to me from the paper is this part:

To ensure the quality of these annotations, the authors personally labeled a large sample of tasks and DWAs and enlisted experienced human annotators who have extensively reviewed GPT outputs as part of OpenAI’s alignment work (Ouyang et al., 2022).

Why would someone who has reviewed a lot of GPT outputs be qualified to assess which industries it can affect the most, without experts on each industry involved? But if I understood it correctly, I guess they try to validate that with the fact that GPT-4 agrees for the most part with the human labels.
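On that validation point, a quick way to see what "GPT-4 agrees for the most part with the human labels" could look like in practice is to compute a raw agreement rate and Cohen's kappa over the same set of tasks. The labels below are invented for illustration, using the paper's E0 / E1 / E2 exposure categories (no exposure, exposed to the LLM alone, exposed given LLM-powered software); this is just a sketch, not the paper's actual evaluation code.

# Sketch: measuring agreement between human and GPT-4 exposure labels.
# The task labels here are hypothetical.
from sklearn.metrics import cohen_kappa_score

human_labels = ["E1", "E0", "E2", "E1", "E0", "E2", "E1", "E0"]
gpt4_labels  = ["E1", "E0", "E2", "E2", "E0", "E2", "E1", "E1"]

raw_agreement = sum(h == g for h, g in zip(human_labels, gpt4_labels)) / len(human_labels)
kappa = cohen_kappa_score(human_labels, gpt4_labels)  # chance-corrected agreement

print(f"raw agreement: {raw_agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")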

To me, the most interesting chart of the paper is this one:
[chart: industry-level exposure as rated by the human annotators]
(It was flipped in the paper and a bit low quality to read, but it's interesting to see which industries ranked high on exposure according to the human annotators.)

The list seems to be quite randomly ordered; Telecommunications is separated from Utilities by quite a bit - but they are similar industries in many ways (have to have a lot of folks digging lots of holes in the road).

Also I see barbers as being up to 50% exposed (Table 6) and Mathematicians 100% (Table 4) - my limited understanding of both those jobs makes me think that they are actually 0% exposed.

And here I thought I wouldn't live this moment for another 20 years. This is so exciting...

"Securities, commodities and financial investments"... yeah as if governments wont regulate the use of these technologies in this space (especially bond markets) in a heartbeat..

Don't think that these wild AIs won't be regulated (possibly banned entirely) for large banks and other critical institutions...

