arxiv:2302.13971

LLaMA: Open and Efficient Foundation Language Models

Published on Feb 27, 2023
Authors:
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faysal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample

Abstract

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
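
The released checkpoints are commonly used through the Hugging Face transformers library, which includes a LLaMA architecture implementation. Below is a minimal sketch of loading one of the 7B-65B checkpoints and generating text; the repo id is a placeholder (access to the official weights is gated), and the half-precision and device-map settings are assumptions about a single-GPU setup, not anything specified in the paper.

```python
# Hedged sketch: load a LLaMA checkpoint with transformers and generate text.
# "your-org/llama-7b" is a placeholder repo id, not an official release path;
# substitute whichever converted checkpoint you have access to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama-7b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision so the 7B model fits on one GPU
    device_map="auto",          # requires `accelerate`; places layers on available devices
)

prompt = "Training compact models on more tokens is attractive because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```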

Community

Amazing model!

Yes, but would love to get access to it :D

absolutely incredible based on all my testing so far!

So appreciative of this work!
Would love to have access hehe

It's amazing work. Sharing the weights for research is better than not sharing at all, but they should release them under a license that allows commercial usage too. The feedback from wider use would be educational, because LeCun struggles to see the full potential LLMs have. Multimodality is handy, but it is not the key.

Petition to free the weights 🎉 (src tweet)

Meta AI, please upload the weights under a more permissive license 😘
Yours truly,
AI Community

Models citing this paper 298

Datasets citing this paper 6

Spaces citing this paper 196

Collections including this paper 12
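
The counts above can also be reproduced programmatically. A rough sketch with the huggingface_hub client, assuming these "citing" lists are derived from the arxiv:2302.13971 tag on each Hub repo:

```python
# Rough sketch, assuming the "citing" lists come from the arxiv:2302.13971 tag
# on each Hub repo: enumerate a few of the citing models and datasets.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(filter="arxiv:2302.13971", limit=10):
    print("model:", model.id)
for dataset in api.list_datasets(filter="arxiv:2302.13971", limit=10):
    print("dataset:", dataset.id)
```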