model_name_string
string
model_name_url
string
model_size_string
string
dataset
string
data_type
string
research_field
string
risks_and_limitations
string
risk_types
string
publication_date
string
organization_and_url
string
institution_type
float64
country
string
license
string
paper_name_url
string
model_description
string
organization_info
string
AudioLM
[AudioLM](https://arxiv.org/abs/2209.03143) 📒
Not specified
The train split of [unlab-60k](https://github.com/facebookresearch/libri-light), consisting of 60k hours of English speech
Audio
Natural Language Processing, Speech Recognition
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts, Technological Unemployment
7/26/2023
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[AudioLM: a Language Modeling Approach to Audio Generation](https://arxiv.org/abs/2209.03143)
[AudioLM](https://arxiv.org/abs/2209.03143) is a framework for high-quality audio generation with long-term consistency that maps input audio to a sequence of discrete tokens and casts audio generation as a language modeling task. According to Google Research, models trained via this framework learn to generate natural and coherent continuations given short audio prompts. Furthermore, this approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music. Models trained via the AudioLM framework combine several components: [SoundStream](https://arxiv.org/abs/2107.03312) (a neural audio codec that efficiently compresses speech), [w2v-BERT](https://arxiv.org/abs/2108.06209), a k-means quantizer for the w2v-BERT embeddings, and a decoder-only Transformer. Models were trained on the unlab-60k train split of [Libri-Light](https://github.com/facebookresearch/libri-light), consisting of 60k hours of English speech. AudioLM models are not available to the public, but demos of their capabilities are available at the following [URL](https://google-research.github.io/seanet/audiolm/examples/).
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google Research is responsible for developing new technologies and products that can be used by Google and its users. The division has several research areas, including natural language processing, computer vision, robotics, and more. Google Research, like other Google-related organizations, abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
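AudioLM's core move, turning continuous audio into discrete tokens so that generation reduces to next-token prediction, can be sketched with a toy nearest-centroid quantizer. The centroid count, frame count, and embedding width below are illustrative assumptions, not AudioLM's actual SoundStream/w2v-BERT setup:

```python
import numpy as np

def quantize_frames(frames, centroids):
    """Map each continuous frame embedding to the id of its nearest
    centroid, producing a discrete token sequence for a language model."""
    # (T, 1, D) - (1, K, D) -> squared distances of shape (T, K)
    d2 = ((frames[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)              # (T,) token ids in [0, K)

rng = np.random.default_rng(0)
centroids = rng.normal(size=(8, 4))       # K=8 "semantic" token centroids
frames = rng.normal(size=(16, 4))         # 16 frames of audio embeddings
tokens = quantize_frames(frames, centroids)
```

Once audio is a token sequence like `tokens`, generating a continuation is ordinary next-token prediction with a decoder-only Transformer, which is exactly the casting AudioLM exploits.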
BlenderBot 3
[BlenderBot 3](https://arxiv.org/abs/2208.03188) 📚
175B
Approximately 1.3B training tokens. A complete list of training datasets can be found in this [data card](https://github.com/facebookresearch/ParlAI/blob/main/parlai/zoo/bb3/data_card.md)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
8/5/2022
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
BB3-175B License
[BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage](https://arxiv.org/abs/2208.03188)
[BlenderBot 3](https://about.fb.com/news/2022/08/blenderbot-ai-chatbot-improves-through-conversation/) is a 175B parameter dialogue model capable of open-domain conversation with access to the internet and long-term memory. The model was developed using [OPT-175B](https://arxiv.org/abs/2205.01068) as its foundation, being the continuation of the [BlenderBot series](https://ai.meta.com/blog/state-of-the-art-open-source-chatbot/), and approximately 58 times the size of [BlenderBot 2](https://ai.meta.com/blog/blender-bot-2-an-open-source-chatbot-that-builds-long-term-memory-and-searches-the-internet/). The result is a conversational model that [learns from interactions and feedback](https://parl.ai/projects/fits). BlenderBot 3 is designed to improve its capabilities through natural conversations and feedback from its users, being one of the first chatbots able to build long-term memory and continuously access and search the web. BlenderBot 3 comes in three sizes: 3B, 30B, and 175B parameters. While the 3B and 30B models are available in the [ParlAI model zoo](https://parl.ai/docs/zoo.html), the 175B parameter version requires registration and acceptance of the terms and conditions of the BB3-175B License Agreement.
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
BLOOM
[BLOOM](https://huggingface.co/bigscience/bloom) 📚
176B
[ROOTS corpus](https://arxiv.org/abs/2303.03915), a dataset comprising hundreds of sources in 46 natural and 13 programming languages (366B tokens)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
11/9/2022
[BigScience](https://bigscience.huggingface.co/) (Academic/Research Institution)
null
France
BigScience RAIL License
[BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)
BigScience Large Open-science Open-access Multilingual Language Model ([BLOOM](https://huggingface.co/bigscience/bloom)) is a transformer-based large language model created by over 1,000 AI researchers. BLOOM was trained on around 366 billion tokens from March through July 2022, and it was one of the first open alternatives to large language models like GPT-3. BLOOM uses a decoder-only transformer architecture modified from Megatron-LM GPT-2. It can output coherent text in 46 natural languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for. BLOOM was trained in six sizes ranging from 560 million to 176 billion parameters. All models are [publicly available](https://huggingface.co/bigscience) and released under the Responsible AI License.
[BigScience](https://bigscience.huggingface.co/) is an open and collaborative workshop around studying and creating very large language models, gathering more than 1000 researchers worldwide. According to their [homepage](https://bigscience.notion.site/Introduction-5facbf41a16848d198bda853485e23a0), the BigScience project is inspired by existing partnership projects in other fields, such as [CERN](https://home.web.cern.ch/), [LIGO]( https://www.ligo.caltech.edu/), and [ITER](https://www.iter.org/), in which research collaborations are open, facilitating large-scale results.
ChatGPT
[ChatGPT](https://openai.com/blog/chatgpt/) 📚
175B
Improved version of GPT-3 dataset + human demonstrations/evaluations
Text
Reinforcement Learning, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
11/30/2022
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Introducing ChatGPT: OpenAI's blog](https://openai.com/blog/chatgpt)
[ChatGPT](https://openai.com/blog/chatgpt/) is a large language model based on an improved version of GPT-3, developed in a similar way to [InstructGPT](https://arxiv.org/abs/2203.02155), which is trained to follow an instruction in a prompt. ChatGPT was trained by OpenAI with reinforcement learning from human feedback ([RLHF](https://huggingface.co/blog/rlhf)). The basic idea behind RLHF is that a language model can be considered a policy over a vocabulary, which allows one to use RL techniques (e.g., optimizing over a reward model) for training purposes. In RLHF, this reward model is trained to be a surrogate for human evaluations. Hence, this synthetic version of human approval becomes the learning signal for the policy during training. ChatGPT can provide answers to complex questions, [utilize plugins](https://openai.com/blog/chatgpt-plugins), and query other OpenAI models (e.g., DALL-E 3), with the [Chat completions API](https://platform.openai.com/docs/api-reference/chat) from OpenAI as its core engine. Currently, the model is available for use via the OpenAI platform ([chat.openai](https://chat.openai.com/)) and API, where users can interact with the model using GPT-3.5 and GPT-4 as the base model.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
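The RLHF idea described above, a reward model standing in for human judgment and supplying the learning signal to the policy, can be illustrated with a deliberately tiny toy: a softmax "policy" over two candidate responses, updated with REINFORCE toward the response a hypothetical reward model scores higher. Nothing here is OpenAI's actual training setup; the rewards, learning rate, and step count are made up:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.zeros(2)            # "policy" parameters, one per candidate response
reward = np.array([0.1, 1.0])   # toy reward model: response 1 is preferred

rng = np.random.default_rng(0)
lr = 0.5
for _ in range(200):
    p = softmax(logits)
    a = rng.choice(2, p=p)           # sample a response from the policy
    grad = -p                        # REINFORCE gradient of log pi(a)
    grad[a] += 1.0
    logits += lr * reward[a] * grad  # push up log-prob of high-reward responses

p_final = softmax(logits)
```

After training, the policy concentrates its probability on the response the reward model prefers, which is the essence of using a surrogate for human approval as the learning signal.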
CICERO
[CICERO](https://ai.meta.com/research/cicero/) 📚🕹️
2.7B
A dataset of almost 13M messages from online Diplomacy games
Text
Reinforcement Learning, Natural Language Processing
No
Algorithmic Discrimination, Social Engineering, Environmental Impacts
11/22/2022
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
CC BY-NC-SA 4.0
[Human-level play in the game of Diplomacy by combining language models with strategic reasoning](https://www.science.org/doi/10.1126/science.ade9097)
[Cicero](https://github.com/facebookresearch/diplomacy_cicero) is an AI agent that can achieve human-level performance in [Diplomacy](https://en.wikipedia.org/wiki/Diplomacy_(game)), a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players. Cicero integrates several models, from language models (R2C2-based transformer encoder-decoder with 2.7B parameters) to reinforcement learning policies and value networks (transformer encoder), combining strategic reasoning and natural language processing. According to the authors, across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game. The source code used to create Cicero and the weights of the models involved are released under the [MIT](https://github.com/facebookresearch/diplomacy_cicero/blob/main/LICENSE.md) and [CC BY-NC-SA 4.0](https://github.com/facebookresearch/diplomacy_cicero/blob/main/LICENSE_FOR_MODEL_WEIGHTS.txt) licenses, respectively.
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
CLIP
[CLIP](https://openai.com/blog/clip/) 📚🖼️
Not specified
Trained on publicly available image-caption data
Text, Image
Computer Vision, Natural Language Processing
Yes
Algorithmic Discrimination, Surveillance and Social Control, Environmental Impacts
2/26/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
MIT License
[Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
[CLIP](https://openai.com/blog/clip/) (Contrastive Language-Image Pre-training) is a neural network capable of associating natural language snippets with images, learning the relationship between sequences of tokens and images. According to its [model card](https://github.com/openai/CLIP/blob/main/model-card.md#model-type), "_the base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer_." According to its developers, one of the critical insights behind the CLIP model was to leverage natural language as a flexible prediction space, thus allowing for greater generalization and transfer. Hence, CLIP can be instructed in natural language, with a simple prompt agnostic training task, without needing specific phrases and labeled images. The dataset used to train CLIP has 400 million records split between images and text collected from the internet. CLIP has been evaluated on 30 benchmarks, covering tasks like OCR, video recognition, geolocation, and other fine-grained object classification tasks. Like other foundation models, by not directly optimizing for a given benchmark/task, CLIP proves to be more general. CLIP is open source and can be used following the [linked tutorial](https://github.com/openai/CLIP).
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
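The contrastive objective quoted from the model card, maximizing the similarity of matched (image, text) pairs, can be written down directly: embed both modalities, build an N×N cosine-similarity matrix, and apply a symmetric cross-entropy whose targets are the diagonal. The toy embeddings and temperature below are illustrative stand-ins, not CLIP's trained encoders:

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: matched (image, text) pairs sit on
    the diagonal of the similarity matrix and act as the targets."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) scaled cosine similarities
    targets = np.arange(len(logits))
    loss_i2t = -log_softmax(logits)[targets, targets].mean()   # image -> text
    loss_t2i = -log_softmax(logits.T)[targets, targets].mean() # text -> image
    return (loss_i2t + loss_t2i) / 2

# Perfectly aligned toy embeddings vs. deliberately mismatched ones.
aligned = np.eye(4)
loss_matched = clip_contrastive_loss(aligned, aligned)
loss_shuffled = clip_contrastive_loss(aligned, np.roll(aligned, 1, axis=0))
```

Training drives the loss toward the `loss_matched` regime: each image's embedding becomes most similar to its own caption's embedding and dissimilar to every other caption in the batch.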
Code Llama
[Code Llama](https://github.com/facebookresearch/codellama) 📚
34B
500B tokens of publicly available code plus a small portion of natural language datasets related to code
Text
Natural Language Processing
Yes
Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
8/24/2023
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
LLaMA 2 Community License Agreement
[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
[Code Llama](https://github.com/facebookresearch/codellama) is a family of large language models for code based on Llama 2, released in sizes ranging from 7B to 34B parameters, with zero-shot instruction-following ability for programming tasks. Code Llama was developed by fine-tuning Llama 2 using 500B tokens of publicly available code plus a small portion of natural language datasets related to code. Meta AI provides multiple flavors of the Code Llama series, from foundation models (Code Llama) to Python specialists (Code Llama - Python) and even instruction-following models (Code Llama - Instruct). Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All Code Llama models are available under the LLaMA 2 Community License Agreement.
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
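The infilling capability of the 7B and 13B models works by rearranging a document into a prefix, a suffix, and a middle to be generated. A minimal sketch of assembling such a prompt follows; the sentinel strings `<PRE>`, `<SUF>`, and `<MID>` are plain-text stand-ins for the tokenizer's actual special tokens:

```python
# Hypothetical sentinel strings; the released models use dedicated special
# tokens for the prefix-suffix-middle (PSM) infilling format.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble an infilling prompt: the model sees the code before and
    after a gap and is asked to generate the missing middle."""
    return f"{PRE} {prefix} {SUF}{suffix} {MID}"

prompt = build_infill_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
```

The model's completion after the final sentinel is the code that fills the gap, which is what powers editor-style "fill in the middle" completion.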
Codex
[Codex](https://openai.com/blog/openai-codex/) 📚
12B
159GB from public software repositories hosted on GitHub
Text
Natural Language Processing
Yes
Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
7/7/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
[Codex](https://openai.com/blog/openai-codex/) is a GPT model with up to 12B parameters fine-tuned on code, capable of translating natural language sentences into programmatic code (e.g., Python). In an initial investigation of the GPT-3 model, it turned out that it could generate simple programs from docstrings. According to the OpenAI researchers, although rudimentary, this capability was exciting because GPT-3 was not explicitly trained for code generation, showing that code generation could be a downstream application of this foundational model via fine-tuning. Codex can deal with languages like Python, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, and even Shell. Its data was compiled in May 2020 from 54 million GitHub repositories, totaling 159GB of text/code. Codex served as the engine behind the initial versions of [GitHub Copilot](https://copilot.github.com/). As of March 2023, the Codex models are deprecated and have been substituted by the newer [Chat models](https://platform.openai.com/docs/guides/gpt/chat-completions-api) from OpenAI.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
DALL-E 2
[DALL-E 2](https://openai.com/dall-e-2/) 📚🖼️
3.5B
Encoder dataset: DALL-E and CLIP dataset (approximately 650M images). Decoder dataset: DALL-E dataset (approximately 250M images)
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
4/13/2022
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125)
[DALL-E 2](https://openai.com/dall-e-2) is a text-to-image model composed of two main parts: an encoder that generates a CLIP image embedding given a text caption and a decoder that generates an image conditioned on the image embedding. The result of this combination is DALL-E 2, a multimodal model that can generate photo-realistic images from simple natural language prompts. DALL-E 2 was trained on pairs of images and their corresponding captions drawn from a combination of publicly available sources and sources licensed by OpenAI. In addition to generating images based on text description prompts, DALL-E 2 can modify existing images as prompted using a text description. It can also take an existing image as an input and be prompted to produce a creative variation on it. Currently, DALL-E 2 can be accessed via API in a restricted and controlled way by OpenAI.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
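The two-stage design described above, a prior mapping the caption to a CLIP image embedding and a decoder mapping that embedding to pixels, can be sketched as a pipeline of stubs. The embedding width, output resolution, and both function bodies are placeholders, not OpenAI's models:

```python
import numpy as np

EMB_DIM = 512     # assumed CLIP embedding width
IMG_SIZE = 64     # assumed base decoder resolution (before upsampling)

def prior(caption: str) -> np.ndarray:
    """Stub for the prior: caption -> CLIP image embedding."""
    rng = np.random.default_rng(abs(hash(caption)) % 2**32)
    return rng.normal(size=EMB_DIM)

def decoder(image_embedding: np.ndarray) -> np.ndarray:
    """Stub for the diffusion decoder: embedding -> RGB image in [0, 1]."""
    rng = np.random.default_rng(int(abs(image_embedding[0]) * 1e6))
    return rng.uniform(size=(IMG_SIZE, IMG_SIZE, 3))

def generate(caption: str) -> np.ndarray:
    # Stage 1: caption -> image embedding; stage 2: embedding -> pixels.
    return decoder(prior(caption))

image = generate("a corgi playing a trumpet")
```

Splitting generation this way is the key design choice: the prior handles text-to-embedding semantics, while the decoder only has to render any valid CLIP image embedding into pixels.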
DALL-E 3
[DALL-E 3](https://openai.com/dall-e-3/) 📚🖼️
Not specified
Not specified
Text, Image
Computer Vision, Natural Language Processing
No
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
9/20/2023
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[DALL-E 3: OpenAI's Blog](https://openai.com/dall-e-3)
[DALL-E 3](https://openai.com/dall-e-3) is the successor of DALL-E 2, which, according to OpenAI's release, can follow significantly more nuanced and detailed instructions than previous versions, diminishing the need for prompt engineering in its use. Not much is known about the architecture, training protocol, or dataset involved in the development of this technology. However, one can speculate that the system is an improved version of the encoder-decoder type model used in DALL-E 2, but now based on the GPT-4 model and trained with an improved dataset. Currently, DALL-E 3 powers the Bing Chat assistant and is built natively into ChatGPT, following similar release protocols to earlier versions of the technology (i.e., restricted access via the OpenAI API).
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
DALL-E
[DALL-E](https://openai.com/blog/dall-e/) 📚🖼️
12B
250 million text-image pairs from Wikipedia, and a filtered subset from YFCC100M
Text, Image
Computer Vision, Natural Language Processing
No
Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
2/24/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Zero-Shot Text-to-Image Generation](https://arxiv.org/abs/2102.12092)
[DALL-E](https://openai.com/blog/dall-e/) is a text-to-image model comprised of two main components. The first one is a discrete variational autoencoder (dVAE) that compresses 256Γ—256 RGB images into a 32 Γ— 32 grid of image tokens with a vocabulary of size 8192. The second component is an autoregressive transformer that takes concatenated sequences of text tokens and image tokens to model the joint distribution over this joint latent space. DALL-E uses the standard causal mask for the text tokens, and sparse attention for the image tokens with either a row, column, or convolutional attention pattern, depending on the layer. DALL-E can create plausible images for a great variety of sentences that explore the compositional structure of language. Also, given this compositional nature, DALL-E is capable of putting together concepts to describe both real and imaginary things. Just like other large foundation models, such as GPT-3, DALL-E can perform several kinds of image-to-image translation tasks when prompted in the right way, extending the capability of large-scale neural networks to perform tasks in a zero/few-shot manner. DALL-E is not an open-source model. However, there are some open-source alternatives to DALL-E, such as [DALL-E Mini](https://huggingface.co/dalle-mini/dalle-mini), available on Hugging Face. Regarding the images generated by DALL-E, [according to OpenAI](https://help.openai.com/en/articles/6425277-can-i-sell-images-i-create-with-dall-e), "_you own the images you create with DALL-E_." Hence, one is allowed to reprint, sell, and merchandise them.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
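The description above fixes the key sequence arithmetic: the dVAE turns a 256×256 image into a 32×32 grid of tokens over an 8192-entry codebook, and the transformer models the concatenation of text and image tokens autoregressively. A shape-level sketch follows; the text vocabulary size and maximum text length are assumptions, not quoted from the source:

```python
import numpy as np

VOCAB_IMAGE = 8192     # dVAE codebook size (from the description above)
GRID = 32              # a 256x256 image becomes a 32x32 grid of tokens
TEXT_LEN = 256         # assumed maximum number of text tokens
VOCAB_TEXT = 16384     # assumed text (BPE) vocabulary size

rng = np.random.default_rng(0)
text_tokens = rng.integers(0, VOCAB_TEXT, size=TEXT_LEN)       # toy text ids
image_tokens = rng.integers(0, VOCAB_IMAGE, size=GRID * GRID)  # toy dVAE ids

# The autoregressive transformer models the joint sequence
# [text tokens ; image tokens], predicting each token from those before it.
joint_sequence = np.concatenate([text_tokens, image_tokens])
```

Sampling image tokens conditioned on the text prefix and decoding them through the dVAE is what turns a caption into pixels.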
ESM-2
[ESM-2](https://www.science.org/doi/abs/10.1126/science.ade2574) 🧬
15B
Uniref50 (UR50) dataset: a biological dataset taken from the [Uniprot database](https://www.uniprot.org/)
Biological Data
Pattern Recognition, Forecasting
No
Biological Risks, Environmental Impacts
3/16/2023
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
MIT License
[Evolutionary-scale prediction of atomic level protein structure with a language model](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v3)
[ESM-2](https://github.com/facebookresearch/esm) is a SOTA general-purpose protein language model that can be used to predict structure, function, and other protein properties directly from individual sequences of amino acids. The ESM-2 series was trained in sizes ranging from 8M to 15B parameters, being the continuation of other models developed by Meta AI (ESMFold, ESM-1, etc.), created to tackle the [protein folding problem](https://en.wikipedia.org/wiki/Protein_folding). ESM-2 was trained on an unsupervised masked language modeling task (i.e., predicting the identity of randomly selected amino acids in a protein sequence by observing their context in the rest of the sequence). One of the significant contributions this technology makes is in developing a prediction method that eliminates costly aspects of current state-of-the-art structure prediction methods by removing the need for multiple sequence alignment while greatly simplifying the neural architecture used for inference. This results in up to 60x speed improvement in inference, completely removing the related protein search process, which can take over 10 minutes with models such as [AlphaFold](https://www.nature.com/articles/s41586-021-03819-2) and [RosettaFold](https://www.science.org/doi/10.1126/science.abj8754). All ESM models are licensed under an MIT License and available on Meta's Research [GitHub repository](https://github.com/facebookresearch/esm).
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
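The unsupervised masked language modeling task ESM-2 was trained on, hide random amino acids and predict them from the surrounding context, can be sketched as a masking function over a protein string. The mask rate, mask symbol, and example sequence are illustrative choices, not ESM-2's actual training configuration:

```python
import numpy as np

def mask_sequence(seq, mask_rate=0.15, rng=None):
    """Hide random positions in a protein sequence; a masked language
    model is trained to recover them from the surrounding context."""
    rng = rng or np.random.default_rng(0)
    seq = list(seq)
    n_mask = max(1, int(len(seq) * mask_rate))
    idx = rng.choice(len(seq), size=n_mask, replace=False)
    targets = {int(i): seq[i] for i in idx}   # what the model must predict
    for i in idx:
        seq[i] = "<mask>"
    return seq, targets

masked, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

The model's loss is computed only at the masked positions, so learning to fill them in forces the network to internalize the statistics of real protein sequences.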
Galactica
[Galactica](https://galactica.org/explore/) 📚
120B
106 billion tokens of articles, reference materials, encyclopedias, and other scientific sources
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Intellectual Fraud
11/16/2022
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
CC BY-NC-SA 4.0
[Galactica: A Large Language Model for Science](https://arxiv.org/abs/2211.09085)
[Galactica](https://github.com/paperswithcode/galai) is a large language model that can store, combine, and reason about scientific knowledge. Galactica was trained in five sizes (from 125M to 120B), using a large scientific corpus of papers, reference material, knowledge bases, and many other sources. It can perform scientific NLP tasks at a high level, like citation prediction, mathematical reasoning, molecular property prediction, protein annotation, summarization, and entity extraction. Galactica is not instruction-tuned, which means it requires the use of prompts and special tokens (e.g., [START_REF], [END_REF]) to produce the intended behavior. These capabilities were developed via prompt pre-training, enabling the model to work out of the box for popular tasks. Galactica is available as an open-source model and can be used through the [Galai](https://github.com/paperswithcode/galai) module (Python). Models are also available for download on [Hugging Face](https://huggingface.co/facebook/galactica-1.3b).
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
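Because Galactica is not instruction-tuned, its behavior is steered with prompts and special tokens such as `[START_REF]` (citation prediction) and `<work>` (step-by-step working memory). A small sketch of building such prompts as plain strings; the exact tokenization is handled by the model's tokenizer and is not shown here:

```python
def citation_prompt(claim: str) -> str:
    # Ending the prompt with [START_REF] asks the model to
    # complete the citation for the preceding claim.
    return f"{claim} [START_REF]"

def working_memory_prompt(question: str) -> str:
    # The <work> token prompts Galactica to reason step by step
    # before producing an answer.
    return f"Question: {question}\n\n<work>"

ref = citation_prompt("The Transformer architecture was introduced in")
work = working_memory_prompt("What is the derivative of x**2?")
```

Prompts shaped this way are what the paper calls prompt pre-training paying off: the special tokens act as task switches rather than requiring a separate fine-tuned model per task.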
Github Copilot
[GitHub Copilot](https://github.com/features/copilot) 📚
Not specified
Not specified
Text
Natural Language Processing
No
Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
10/29/2021
[Github](https://github.com/) (Subsidiary)
null
United States of America
Proprietary License
[Your AI pair programmer](https://github.com/features/copilot)
[GitHub Copilot](https://github.com/features/copilot) is a cloud-based artificial intelligence tool developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. GitHub Copilot is powered by the OpenAI Codex, a modified, production version of the Generative Pre-trained Transformer 3 (GPT-3). When provided with a programming problem in natural language, Copilot generates a solution in code. It can also describe input code in English, autocomplete code sequences, and translate code between programming languages.
[GitHub](https://github.com/) is a platform for hosting source code and other files with version control using [Git](https://git-scm.com/). GitHub was created by Chris Wanstrath, P. J. Hyett, Tom Preston-Werner, and Scott Chacon in 2008. Currently (2023), the company is a subsidiary of Microsoft, which bought the platform for $7.5 billion in 2018.
GLIDE
[GLIDE](https://gpt3demo.com/apps/openai-glide) 📚🖼️
3.5B
Not specified
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
12/20/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
MIT License*
[GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741v1)
[GLIDE](https://gpt3demo.com/apps/openai-glide) (Guided Language to Image Diffusion for Generation and Editing) is a [diffusion model](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/) that generates images from natural language. GLIDE also allows edits to be made to existing images using natural language prompts. These edits include inserting new objects, adding shadows and reflections, and performing image inpainting, among others. According to OpenAI, human evaluators preferred the output images of GLIDE (3.5 billion parameters) to those of [DALL-E](https://openai.com/blog/dall-e/) (12 billion parameters), even though GLIDE is a considerably smaller model. OpenAI released a smaller version of GLIDE (GLIDE filtered), trained with filtered data, whose code and weights are available on the project's [GitHub](https://github.com/openai/glide-text2im). Other models are not available to the public.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P., founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that Artificial General Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
GPT-2
[GPT-2](https://github.com/openai/gpt-2) 📚
1.5B
[WebText](https://github.com/openai/gpt-2/blob/master/domains.txt)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
2/24/2019
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
MIT License
[Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
GPT-2 is a large language model developed by OpenAI that can generate human-like text from minimal prompts. The model is a decoder-only transformer pretrained on a large corpus of English data in a self-supervised fashion, meaning it learned from raw text without any human labels. It was trained on a dataset of over 8 million web pages. The resulting dataset (called [WebText](https://github.com/openai/gpt-2/blob/master/domains.txt)) amounts to 40GB of text but has not been publicly released. GPT-2 has had a significant impact on the field. The open-sourcing of the models (from [124M](https://huggingface.co/gpt2) to the [1.5B](https://huggingface.co/gpt2-xl) model) allowed for the creation of many fine-tuned versions and services based on this technology. The full 1.5B model was released in November 2019 and was followed by the 175-billion-parameter GPT-3 in 2020.
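The self-supervised, next-token-prediction objective described above can be made concrete with a toy example. The sketch below is not GPT-2 (which is a transformer over BPE tokens); it is a hypothetical bigram counter that illustrates the autoregressive idea: learn next-token statistics from raw text, then generate by repeatedly predicting the next token.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies: the simplest instance of the
    next-token-prediction objective language models are trained on."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length):
    """Greedy autoregressive decoding: repeatedly pick the most
    likely next token given the previous one."""
    out = [start]
    for _ in range(length):
        candidates = counts[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return out

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(generate(model, "the", 3))  # → ['the', 'cat', 'sat', 'on']
```

GPT-2 replaces the frequency table with a transformer that conditions on the whole preceding context, but the training signal (predict the next token) and the decoding loop are the same in spirit.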
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P., founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that Artificial General Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
GPT-3
[GPT-3](https://arxiv.org/abs/2005.14165) 📚
175B
570 GB of text-format data from CommonCrawl, WebText2, Books1, Books2, and Wikipedia
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
5/28/2020
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
GPT-3 is a transformer-based autoregressive language model with 175 billion parameters that achieves high performance, without any gradient updates or fine-tuning, on a wide range of NLP tasks (in a [zero-shot](https://en.wikipedia.org/wiki/Zero-shot_learning) or few-shot fashion), including translation, Q&A, word unscrambling, 3-digit arithmetic, text classification, sentiment analysis, and many others. The capabilities of GPT-3 have also powered various other applications, such as [code generation](https://openai.com/blog/openai-codex/) and [chat applications](https://chat.openai.com/), making it the foundation of many modern AI systems. The model was released in May 2020 and was followed by the fourth iteration (GPT-4) in March 2023. GPT-3 has been discussed by numerous [researchers and philosophers](https://dailynous.com/2020/07/30/philosophers-gpt-3/), such as David Chalmers, who describes it as "_one of the most interesting and important systems ever produced_".
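Few-shot prompting, as described above, means conditioning the model on a handful of solved examples rather than updating its weights. The helper below is a hypothetical illustration of how such a prompt is assembled (the function name and `Input:`/`Output:` format are assumptions for this sketch, not an OpenAI API):

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: a task description, a handful of
    solved input/output pairs, then the new input left unanswered
    for the model to complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Unscramble the word.",
    [("tca", "cat"), ("odg", "dog")],
    "risdb",
)
print(prompt)
```

The model then continues the text after the final `Output:`, which is why GPT-3 can perform tasks like word unscrambling with no gradient updates at all.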
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P., founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that Artificial General Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
GPT-4
[GPT-4](https://arxiv.org/abs/2303.08774) 📚🖼️
Not specified
Not specified
Text, Image
Reinforcement Learning, Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
3/15/2023
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)
[GPT-4](https://arxiv.org/abs/2303.08774) is a [generative pre-trained transformer](https://paperswithcode.com/method/gpt) model and the successor of the GPT-3(3.5) series. Besides handling several NLP tasks, GPT-4 also accepts image inputs, making it, unlike its predecessors, a multimodal model. Also unlike its predecessors, GPT-4 already comes tuned via Reinforcement Learning from Human Feedback ([RLHF](https://huggingface.co/blog/rlhf)), like ChatGPT. The information provided in the [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774) and the [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf) that accompanied the model release is limited. These documents disclose nothing about the model's architecture, size, hardware, training protocol, or datasets, nor any other information that could be used to replicate the technology. According to its developers, this lack of disclosure is due to the competitive landscape and the safety implications of large-scale models. Although less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on several professional and academic benchmarks. As demonstrated by the ChatGPT technology, the post-training alignment process (RLHF) results in performance improvements on measures of factuality and adherence to the desired behavior.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P., founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that Artificial General Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
GPT-J
[GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) 📚
6B
[The Pile](https://pile.eleuther.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
6/9/2021
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B)
[GPT-J](https://huggingface.co/EleutherAI/gpt-j-6b) is a 6B-parameter autoregressive language model with 28 layers, a model dimension of 4096, and a feedforward dimension of 16384. Like the GPT-Neo series, GPT-J uses [Rotary Position Embeddings](https://huggingface.co/docs/transformers/model_doc/roformer). The model is trained with a vocabulary of 50257 tokens, using the same set of BPEs as GPT-2/GPT-3. GPT-J was trained on the Pile and is an addition to the EleutherAI collection of GPT models that can perform various language processing tasks without fine-tuning, such as translation, code completion, chatting, blog posting, and information retrieval. GPT-J was trained on the [TPU Research Cloud](https://sites.research.google/trc/about/), and its weights are licensed under version 2.0 of the Apache License.
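The Rotary Position Embeddings mentioned above encode position by rotating pairs of dimensions in the query and key vectors, so that attention scores depend only on the relative offset between tokens. The sketch below is a simplified pure-Python illustration of that idea, not GPT-J's actual implementation (which, among other details, applies the rotation only to part of each attention head):

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotate each consecutive pair of dimensions of `vec` by an
    angle that grows with the token position `pos`."""
    dim = len(vec)
    out = []
    for i in range(0, dim, 2):
        theta = pos * base ** (-i / dim)
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * cos_t - y * sin_t, x * sin_t + y * cos_t]
    return out

q, k = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0, 0.0]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# The query/key score depends only on the position offset (7-5 == 3-1):
print(abs(dot(rope(q, 5), rope(k, 7)) - dot(rope(q, 1), rope(k, 3))) < 1e-9)  # → True
```

Because each rotation is orthogonal, vector norms are preserved, and shifting both positions by the same amount leaves the dot product unchanged, which is exactly the relative-position property the scheme is designed for.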
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission statement](https://www.eleuther.ai/about), EleutherAI seeks to (1) advance research on the interpretability and alignment of foundation models, (2) ensure that the ability to study foundation models is not restricted to a handful of companies, and (3) educate people about the capabilities, limitations, and risks associated with these technologies.
GPT-NeoX
[GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) 📚
20B
[The Pile](https://pile.eleuther.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
4/14/2022
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745)
GPT-Neo is a series of autoregressive large language models, from [125M](https://huggingface.co/EleutherAI/gpt-neo-125m) to [20B](https://huggingface.co/EleutherAI/gpt-neox-20b) parameters, trained on the Pile and openly available to the public under a permissive license. The GPT-Neo series is EleutherAI's replication of the GPT-3 architecture, and GPT-NeoX is its 20B-parameter version. At the time of submission, GPT-NeoX was the largest dense autoregressive model with publicly available weights. EleutherAI used model parallelism to train GPT-NeoX across multiple GPUs, combined with techniques such as gradient checkpointing, pipeline parallelism, and mixed-precision training to train efficiently. Instead of the learned positional embeddings used in GPT models, the GPT-Neo series uses [rotary embeddings](https://arxiv.org/abs/2104.09864). The final result is GPT-NeoX, which achieves state-of-the-art results on several benchmarks, such as LAMBADA, SuperGLUE, and Pile-MNLI. All GPT-Neo models are open-sourced and licensed under the Apache 2.0 license.
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission statement](https://www.eleuther.ai/about), EleutherAI seeks to (1) advance research on the interpretability and alignment of foundation models, (2) ensure that the ability to study foundation models is not restricted to a handful of companies, and (3) educate people about the capabilities, limitations, and risks associated with these technologies.
Imagen
[Imagen](https://imagen.research.google/) 📚🖼️
14B
860 million text-image pairs from Google's internal datasets and the [Laion](https://huggingface.co/datasets/laion/laion400m) dataset
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
5/23/2022
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding](https://arxiv.org/abs/2205.11487)
[Imagen](https://imagen.research.google/) is a text-to-image diffusion model that builds on the language understanding capabilities of large transformer language models and the capacity of diffusion models for high-fidelity image generation. Imagen uses a large frozen [T5-XXL](https://huggingface.co/google/t5-efficient-xxl) encoder to encode the input text into embeddings. A conditional diffusion model then maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the 64×64 image to 256×256, and then to 1024×1024. Imagen has not been released for public use due to concerns regarding the responsible open-sourcing of code and demos.
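The cascade described above (base generation followed by two super-resolution stages) can be visualized with shapes alone. The sketch below is purely illustrative: it replaces each diffusion stage with nearest-neighbor upsampling just to show the ×4 resolution progression (64 → 256 → 1024); the real stages are full text-conditional diffusion models.

```python
def upsample_nn(img, factor):
    """Nearest-neighbor upsampling: a stand-in for a text-conditional
    super-resolution diffusion stage, used here only to show shapes."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in img
        for _ in range(factor)
    ]

base = [[0] * 64 for _ in range(64)]      # stand-in for a 64x64 base sample
mid = upsample_nn(base, 4)                # 256x256 stage
final = upsample_nn(mid, 4)               # 1024x1024 stage
print(len(base), len(mid), len(final))    # → 64 256 1024
```

Cascading small diffusion models this way is cheaper than generating at 1024×1024 directly, since each stage only has to add detail at its own scale.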
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google Research is responsible for developing new technologies and products that can be used by Google and its users. The division has several research areas, including natural language processing, computer vision, robotics, and more. Google Research, like other Google-related organizations, abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
InstructGPT
[InstructGPT](https://arxiv.org/abs/2203.02155) 📚
175B
Prompt/completion pairs submitted to the OpenAI API
Text
Reinforcement Learning, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
3/4/2022
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[Training Language Models to Follow Instructions with Human Feedback](https://arxiv.org/abs/2203.02155)
[InstructGPT](https://arxiv.org/abs/2203.02155) is a fine-tuned version of OpenAI's [GPT-3](https://arxiv.org/abs/2005.14165), achieved via a mix of supervised fine-tuning and reinforcement learning from human feedback. While GPT-3 was trained via causal language modeling, that is, to predict the next token in a sequence, InstructGPT was first trained to model (by minimizing cross-entropy loss) the distribution of its fine-tuning dataset, comprised of human demonstrations of appropriate instruction-following behavior, and later to maximize the cumulative reward of a learning signal that serves as a proxy for human evaluations. In general, this process can be divided into three steps: (1) Prompt/completion pairs are collected (written by human contractors) and used to fine-tune the base model (GPT-3). (2) After fine-tuning, the model generates several outputs per prompt. Crowdsourced human evaluators then score each output, and these scores are used to train a reward model. (3) The reward model is then used to update the fine-tuned model again via [proximal policy optimization](https://openai.com/blog/openai-baselines-ppo/). The final result of this process is InstructGPT, which comes in three sizes (1.3B, 6B, and 175B parameters), fine-tuned from the Babbage, Curie, and Davinci versions of GPT-3 (currently deprecated). InstructGPT represents a significant advance in alignment compared to other LLMs, producing fewer imitative falsehoods, making up facts less frequently, and generating more acceptable outputs. InstructGPT can be considered a predecessor of more capable and aligned models, like [ChatGPT](https://openai.com/blog/chatgpt/).
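Step (2) above fits a reward model on human comparisons. A common way to express this objective (a pairwise ranking loss over preferred vs. rejected completions, as used in the InstructGPT paper) is the negative log-sigmoid of the score gap. A minimal sketch:

```python
import math

def pairwise_ranking_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected): near zero when the
    preferred completion scores far above the rejected one, and
    exactly log(2) when the reward model cannot tell them apart."""
    gap = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-gap)))

print(round(pairwise_ranking_loss(0.0, 0.0), 4))  # → 0.6931 (= log 2)
print(pairwise_ranking_loss(3.0, 0.0) < pairwise_ranking_loss(1.0, 0.0))  # → True
```

Minimizing this loss over many human-ranked pairs teaches the reward model to score outputs the way the evaluators did; that scalar reward is what PPO then maximizes in step (3).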
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P., founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that Artificial General Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
LaMDA
[LaMDA](https://arxiv.org/abs/2201.08239) 📚
137B
2.81T tokens from public dialog data and other public web documents
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
2/10/2022
[Google](https://about.google/) (Subsidiary)
null
United States of America
Proprietary License
[LaMDA: Language Models for Dialog Applications](https://arxiv.org/abs/2201.08239)
[LaMDA](https://arxiv.org/abs/2201.08239) is a family of Transformer-based neural language models specialized for dialog. The models range from 2B to 137B parameters and are pre-trained on a dataset containing 1.56T words of public dialog data and web text. LaMDA was pre-trained via causal language modeling and can be used as a general generative language model before fine-tuning. To improve performance, Google uses a series of classifiers and information retrieval systems to enhance LaMDA's generative capabilities, with rejection sampling techniques filtering the completions for quality, safety, and groundedness. LaMDA is not available to the public.
[Google](https://about.google/) is an American multinational technology company focusing on artificial intelligence, online advertising, search engine technology, cloud computing, computer software, quantum computing, e-commerce, and consumer electronics. Google is also a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
LLaMA 2
[LLaMA 2](https://arxiv.org/abs/2307.09288) 📚
70B
2 trillion tokens with over 1 million human annotations
Text
Reinforcement Learning, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
7/18/2023
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
LLaMA 2 Community License Agreement
[Llama 2: Open Foundation and Fine-Tuned Chat Models](https://arxiv.org/abs/2307.09288)
[Llama 2](https://arxiv.org/abs/2307.09288) is a collection of pre-trained and fine-tuned large language models ranging in scale from 7B to 70B parameters. It is an updated version of [Llama 1](https://arxiv.org/abs/2302.13971), trained on a new mix of publicly available data (increasing the size of the pretraining corpus by 40%), doubling the context length of Llama 1, and adopting [grouped-query attention](https://arxiv.org/abs/2305.13245) (allowing faster inference). Meanwhile, the fine-tuned versions of Llama 2, called Llama 2-Chat, are optimized for dialogue use cases. They were obtained via preference-modeling techniques, i.e., [Reinforcement Learning from Human Feedback](https://huggingface.co/blog/rlhf). The RLHF approaches used by the authors were variants of [Proximal Policy Optimization](https://arxiv.org/abs/1707.06347) and [Rejection Sampling](https://arxiv.org/abs/2204.05862) fine-tuning. Llama 2 models are available under the [LLaMA 2 Community License Agreement](https://ai.meta.com/llama/license/), which permits commercial and research use.
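The grouped-query attention mentioned above speeds up inference by letting several query heads share one key/value head, shrinking the KV cache. The sketch below shows only the head-to-group mapping, under the simplifying assumption of equally sized groups; the attention math itself is unchanged.

```python
def kv_head_for_query_head(q_head, n_q_heads, n_kv_heads):
    """Map a query head to the key/value head its group shares.
    E.g., with 8 query heads and 2 KV heads, query heads 0-3 share
    KV head 0 and query heads 4-7 share KV head 1."""
    assert n_q_heads % n_kv_heads == 0, "groups must divide evenly"
    group_size = n_q_heads // n_kv_heads
    return q_head // group_size

print([kv_head_for_query_head(h, 8, 2) for h in range(8)])
# → [0, 0, 0, 0, 1, 1, 1, 1]
```

Setting the number of KV heads equal to the number of query heads recovers standard multi-head attention, and setting it to 1 recovers multi-query attention, so GQA interpolates between the two.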
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
LLaMA
[LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) 📚
65B
1.4 trillion tokens drawn from publicly available data sources and text from 20 different languages
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
2/27/2023
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
Non-commercial Research License
[LLaMA: A foundational, 65-billion-parameter large language model](https://arxiv.org/abs/2302.13971)
[LLaMA](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) is a collection of foundation language models ranging from 7B to 65B parameters, trained on over a trillion tokens of publicly available data. According to Meta AI, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. LLaMA models are GPT-style autoregressive transformers trained on substantially more data than comparable language models, following the [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556), i.e., a set of empirical rules that describe the optimal trade-off between model size and training data for LLMs. LLaMA models can perform several NLP tasks in a zero/few-shot fashion, like closed-book question answering, mathematical reasoning, reading comprehension, and code generation. LLaMA models were released under a noncommercial license focused on research use cases. [Access to the models](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) is granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.
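The Chinchilla scaling laws referenced above are often summarized by a rule of thumb of roughly 20 training tokens per parameter for compute-optimal training. The snippet below applies that rule of thumb (an approximation of the paper's fitted laws, not an exact result):

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Rough compute-optimal token budget: ~20 tokens per parameter,
    a common reading of the Chinchilla scaling laws."""
    return n_params * tokens_per_param

# A 65B-parameter model would want on the order of 1.3 trillion
# training tokens under this rule; LLaMA-65B was trained on ~1.4T.
print(chinchilla_optimal_tokens(65e9) / 1e12)  # → 1.3
```

This is the sense in which LLaMA "follows" Chinchilla: rather than scaling parameters further, Meta AI scaled the token count to roughly match the compute-optimal ratio.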
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
Midjourney
[Midjourney](https://www.midjourney.com/) 📚🖼️
Not specified
Not specified
Text, Image
Computer Vision, Natural Language Processing
No
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
7/12/2022
[Midjourney, Inc.](https://www.midjourney.com/) (Independent Research Lab)
null
United States of America
Proprietary License
[Midjourney Documentation](https://docs.midjourney.com/)
[Midjourney](https://www.midjourney.com/) is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions (prompts), similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. Midjourney is currently only accessible through a Discord bot on their official Discord server, by messaging the bot directly, or by inviting the bot to a third-party server. To generate images, users type the /imagine command followed by a prompt; in response, the bot returns a set of four images, which users may then choose to upscale. Beyond the /imagine command, Midjourney offers other commands for the Discord bot, like /blend (which blends two images) and /shorten (which suggests how to make a long prompt shorter).
[Midjourney](https://www.midjourney.com/) is an independent research lab involved in generative AI. Midjourney develops text-to-image models similar to OpenAI’s DALL-E and Stability AI’s Stable Diffusion.
Muse
[Muse](https://muse-model.github.io/) 📚🖼️
3B
[Imagen](https://imagen.research.google/) dataset consisting of 460M text-image pairs
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
1/2/2023
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[Muse: Text-To-Image Generation via Masked Generative Transformers](https://arxiv.org/abs/2301.00704)
[Muse](https://muse-model.github.io/) is a text-to-image Transformer model trained on a masked modeling task: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens, similar to how language models like BERT and RoBERTa are trained via MLM (masked language modeling). Muse models achieve SOTA on benchmarks like CC3M and COCO and can directly enable several image editing applications without the need to fine-tune or invert the model. Muse models range from 632M to 3B parameters. Each model consists of several sub-models: (1) a pair of VQGAN tokenizer models that can encode an input image into a sequence of discrete tokens and decode a token sequence back into an image; (2) a base masked image model conditioned on the unmasked tokens and a T5-XXL text embedding; and (3) a "_superres_" transformer model that translates (unmasked) low-resolution tokens into high-resolution tokens (also conditioned on the unmasked tokens and a T5-XXL text embedding). Muse models are not open to the public, but a demo can be found at this [URL](https://muse-model.github.io/).
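The masked modeling setup described above can be sketched generically: hide a subset of discrete token positions and keep the originals as prediction targets. This is an illustration of token masking in general, not Muse's exact scheme (which masks image tokens at a variable ratio):

```python
import random

def mask_tokens(tokens, mask_ratio, mask_id=-1, seed=0):
    """Replace a random subset of tokens with `mask_id`, returning the
    corrupted sequence and a {position: original_token} target map that
    the model is trained to predict."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_ratio))
    positions = rng.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    targets = {}
    for p in positions:
        targets[p] = corrupted[p]
        corrupted[p] = mask_id
    return corrupted, targets

image_tokens = list(range(100, 116))        # stand-in for VQGAN token ids
corrupted, targets = mask_tokens(image_tokens, 0.5)
print(corrupted.count(-1), len(targets))    # → 8 8
```

At inference time, Muse starts from a fully masked grid and fills in tokens over a few refinement steps, which is why it can generate images in far fewer iterations than a pixel-space diffusion model.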
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google Research is responsible for developing new technologies and products that can be used by Google and its users. The division has several research areas, including natural language processing, computer vision, robotics, and more. Google Research, like other Google-related organizations, abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
OPT-175B
[OPT-175B](https://arxiv.org/abs/2205.01068) 📚
175B
Approximately 180B tokens corresponding to 800 GB of data. A complete list of datasets used is listed in [Appendix C](https://arxiv.org/abs/2205.01068)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
5/2/2022
[Meta AI](https://ai.meta.com/) (Public Company)
null
United States of America
OPT-175B License
[OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068)
[OPT-175B](https://ai.meta.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/) is a language model with 175 billion parameters, following an architectural design similar to that of GPT models, trained on publicly available datasets to allow for more community engagement in understanding foundation models. According to Meta AI, OPT-175B is comparable to GPT-3 while requiring only 1/7th of the carbon footprint to develop. Like other foundation models, OPT-175B can perform several NLP tasks out of the box in a zero/few-shot fashion. OPT models range from 125M to 175B parameters, and all of them are openly released under the [OPT-175B License Agreement](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md).
[Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, and initially directed by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun "Yann LeCun"). When it comes to matters of AI Ethics, [Meta AI expresses](https://ai.meta.com/about/) that "_Our commitment to responsible AI is driven by the belief that everyone should have equitable access to information, services, and opportunities_". Meta AI also claims to adhere to Meta's core principles: Privacy and security, Fairness and inclusion, Robustness and safety, Transparency and control, Accountability and governance, among other key principles.
PaLM 2
[PaLM 2](https://ai.google/discover/palm2) πŸ“š
Not specified
A diverse set of sources containing web documents, books, code, mathematics, and conversational data
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
5/1/2023
[Google AI](https://ai.google/) (Subsidiary)
null
United States of America
Proprietary License
[PaLM 2 Technical Report](https://ai.google/static/documents/palm2techreport.pdf)
[PaLM 2](https://ai.google/discover/palm2) is a state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor ([PaLM](https://arxiv.org/abs/2204.02311)). The largest model in the PaLM 2 series, PaLM 2-L, is significantly smaller than the largest PaLM model but uses more training compute. The actual sizes (parameter counts) of the PaLM 2 series are unknown. According to Google AI, results show that PaLM 2 models significantly outperform PaLM on a variety of tasks, including natural language generation, translation, and reasoning. These results suggest that model scaling is not the only way to improve performance. Instead, performance can be unlocked by meticulous data selection and efficient architectures/objectives. PaLM 2 is the foundation that powers other state-of-the-art models, like [Sec-PaLM](https://cloud.google.com/blog/products/identity-security/rsa-google-cloud-security-ai-workbench-generative-ai) and [Bard](https://bard.google.com/).
[Google AI](https://ai.google/) is a research division at Google that focuses on developing artificial intelligence. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google AI offers a range of machine learning products, solutions, and services that are powered by its research and technology. According to Google AI, "_While we are optimistic about the potential of AI, we recognize that advanced technologies can raise important challenges that must be addressed, thoughtfully, and affirmatively. These AI Principles describe our commitment to developing technology responsibly and work to establish specific application areas we will not pursue._" More on the principles that guide Google AI can be found on their [website](https://ai.google/responsibility/principles/).
PaLM
[PaLM](https://arxiv.org/abs/2204.02311) πŸ“š
540B
780 billion tokens that represent a wide range of natural language use cases
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
10/5/2022
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[PaLM: Scaling Language Modeling with Pathways](https://arxiv.org/abs/2204.02311)
The Pathways Language Model, or [PaLM](https://arxiv.org/abs/2204.02311), consists of a series of large language models, with sizes of 8 billion, 62 billion, and 540 billion parameters. The development of PaLM was made possible through the utilization of Pathways, a machine learning system introduced in the [Pathways](https://arxiv.org/abs/2203.01253) paper. Pathways is designed to facilitate the highly efficient training of large neural networks, leveraging the power of thousands of accelerator chips. PaLM's training data draws from diverse sources, including English and multilingual datasets that encompass high-quality web documents, books, Wikipedia, conversations, and GitHub code. PaLM's capabilities include language understanding, generation, reasoning, and code-related tasks. Currently (2023), PaLM is not released to the public.
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google Research is responsible for developing new technologies and products that can be used by Google and its users. The division has several research areas, including natural language processing, computer vision, robotics, and more. Google Research, like other Google-related organizations, abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
Parti
[Parti](https://github.com/google-research/parti) πŸ“šπŸ–ΌοΈ
20B
[LAION-400M dataset](https://huggingface.co/datasets/laion/laion400m), [ALIGN training data](https://arxiv.org/abs/2102.05918), and the [JFT-4B dataset](https://paperswithcode.com/paper/scaling-vision-transformers)
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
6/22/2022
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[Scaling Autoregressive Models for Content-Rich Text-to-Image Generation](https://arxiv.org/abs/2206.10789)
Pathways Autoregressive Text-to-Image model ([Parti](https://github.com/google-research/parti)) is a series of autoregressive text-to-image generation models, from 350M to 20B parameters, that achieves high-fidelity photorealistic image generation. Unlike Google's [Imagen](https://imagen.research.google/), a diffusion model, Parti is an autoregressive model. Parti treats text-to-image generation as a sequence-to-sequence modeling problem, analogous to machine translation, which allows it to benefit from advances in large language models, especially capabilities that are unlocked by scaling data and model sizes. In this case, the target outputs are sequences of image tokens instead of text tokens in another language. Parti uses an image tokenizer, [ViT-VQGAN](https://ai.googleblog.com/2022/05/vector-quantized-image-modeling-with.html), to encode images as sequences of discrete tokens, and then to reconstruct image token sequences as images. Parti models, code, and data are not available to the public.
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google Research is responsible for developing new technologies and products that can be used by Google and its users. The division has several research areas, including natural language processing, computer vision, robotics, and more. Google Research, like other Google-related organizations, abides by [Google's AI Principles](https://ai.google/responsibility/principles/).
Polyglot-Ko
[Polyglot-Ko](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) πŸ“š
12.8B
863 GB of Korean language data curated by [TUNiB](https://tunib.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
4/3/2023
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models](https://arxiv.org/abs/2306.02254)
[Polyglot-Ko](https://huggingface.co/EleutherAI/polyglot-ko-12.8b/tree/main) is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team, available in sizes from 1.3B to 12.8B. The 12.8B model consists of 40 transformer layers with a model dimension of 5120 and a feedforward dimension of 20480. The model dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding ([RoPE](https://arxiv.org/abs/2104.09864)) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30003. Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://www.tunib.ai/). The data collection process abided by South Korean laws, and the dataset will not be released for public use. All Polyglot-Ko models are released under the Apache 2.0 License.
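As a minimal illustration of "RoPE applied to 64 dimensions of each head," the NumPy sketch below rotates only the first 64 of a head's 128 dimensions and passes the rest through unchanged. The frequency schedule follows the RoPE paper; the function name, pairing convention (GPT-NeoX-style "rotate-half"), and variable names are illustrative assumptions, not Polyglot-Ko's actual code.

```python
import numpy as np

def apply_partial_rope(x, position, rotary_dims=64):
    """Apply Rotary Position Embedding to the first `rotary_dims`
    dimensions of a per-head vector `x`; the remaining dimensions
    pass through unchanged (partial-rotary, GPT-NeoX style)."""
    x = np.asarray(x, dtype=np.float64)
    rot, rest = x[:rotary_dims], x[rotary_dims:]
    half = rotary_dims // 2
    # theta_i = 10000^(-2i / rotary_dims), as in the RoPE paper
    freqs = 10000.0 ** (-np.arange(half) * 2.0 / rotary_dims)
    angles = position * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    # Pair dimension i with dimension i + half and rotate each pair
    # by a position-dependent angle.
    x1, x2 = rot[:half], rot[half:]
    rotated = np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])
    return np.concatenate([rotated, rest])

# One 128-dimensional head vector, rotated for token position 5.
head_dim = 128
q = np.random.default_rng(0).standard_normal(head_dim)
q_pos5 = apply_partial_rope(q, position=5)
```

Because each pair is rotated (not scaled), the transformation preserves the vector's norm, and at position 0 it is the identity; only the first 64 dimensions carry positional information.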
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission statement](https://www.eleuther.ai/about), EleutherAI seeks to (1) advance research on the interpretability and alignment of foundation models, (2) ensure that the ability to study foundation models is not restricted to a handful of companies, and (3) educate people about the capabilities, limitations, and risks associated with these technologies.
Pythia
[Pythia](https://huggingface.co/EleutherAI/pythia-12b) πŸ“š
12B
[The Pile](https://pile.eleuther.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
4/3/2023
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling](https://arxiv.org/abs/2304.01373)
[Pythia](https://huggingface.co/EleutherAI/pythia-12b) is a collection of models developed to facilitate interpretability research, trained at sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the same data, in the same order. EleutherAI also provides 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, Pythia models [match or exceed](https://huggingface.co/EleutherAI/pythia-12b#evaluations) the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. All Pythia models are released under the Apache 2.0 License.
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission statement](https://www.eleuther.ai/about), EleutherAI seeks to (1) advance research on the interpretability and alignment of foundation models, (2) ensure that the ability to study foundation models is not restricted to a handful of companies, and (3) educate people about the capabilities, limitations, and risks associated with these technologies.
WebGPT
[WebGPT](https://arxiv.org/abs/2112.09332) πŸ“š
175B
A collection of demonstrations and comparisons made by freelance contractors from [Upwork](https://www.upwork.com) and [Surge AI](https://www.surgehq.ai)
Text
Reinforcement Learning, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts, Technological Unemployment
12/17/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[WebGPT: Browser-assisted question-answering with human feedback](https://arxiv.org/abs/2112.09332)
[WebGPT](https://openai.com/research/webgpt) is a fine-tuned version of [GPT-3](https://arxiv.org/abs/2005.14165). While GPT-3 tends to hallucinate information when performing tasks requiring real-world knowledge, WebGPT was trained to search the web via a text-based web browser and generate responses from the retrieved information. The model was tuned through a combination of imitation learning and behavior cloning (using human experts as a reference signal), reinforcement learning from human feedback, and rejection sampling, where a reward model trained on human feedback was used to select the best responses generated by the model. The comparison data collected for WebGPT can be found in [this repository](https://huggingface.co/datasets/openai/webgpt_comparisons). WebGPT was evaluated against benchmarks such as [ELI5](https://arxiv.org/abs/1907.09190) and [TruthfulQA](https://arxiv.org/abs/2109.07958), demonstrating superior performance to its foundation in matters of response preferability and factual groundedness.
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.
Whisper
[Whisper](https://arxiv.org/abs/2212.04356) πŸ“’πŸ“š
1.55B
680,000 hours of labeled audio
Text, Audio
Natural Language Processing, Speech Recognition
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Surveillance and Social Control, Technological Unemployment
12/6/2022
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
MIT License
[Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
[Whisper](https://arxiv.org/abs/2212.04356) is a general-purpose speech recognition model based on the original [encoder-decoder transformer](https://arxiv.org/abs/1706.03762) architecture, trained on 96 languages besides English. Whisper can perform multilingual speech recognition, translation, and language identification, among other tasks. These multitask capabilities come from task-specification techniques (i.e., conditional training) that enable a single input signal (and model) to generate different outputs depending on the task context. OpenAI trained several models of different sizes, from the smallest (39M parameters) to the largest (1550M), called [large-v2](https://github.com/openai/whisper/discussions/661). These models were released on different dates, the first in September 2022 and the largest in December 2022. In their publication, the researchers report that Whisper's performance on tasks such as multilingual speech recognition, speech translation, and language identification continues to increase with model size, English speech recognition being the exception. Increases in dataset size also improve performance on all tasks, although improvement rates vary significantly across tasks and sizes. Whisper is an open-source model intended to "_serve as a basis for building useful applications and for further research on speech processing_."
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba. Their mission "_is to ensure that General Artificial Intelligence benefits all of humanity_." The organization maintains an active research and development agenda in AI Safety. In "_[Our approach to AI safety](https://openai.com/blog/our-approach-to-ai-safety)_", OpenAI states that it is committed to keeping Artificial Intelligence safe and broadly beneficial.

Model Library DB

Dataset Summary

The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the Model Library.

Supported Tasks and Leaderboards

This dataset serves as a catalog of machine learning models, all displayed in the Model Library.

Languages

English.

Dataset Structure

Data Instances

Features available are: model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types, publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info.
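As a hypothetical sketch of what one instance looks like, the record below abridges the Whisper entry from this database (long field values are truncated with "..."; the exact serialization used by the Model Library may differ):

```python
import pandas as pd

# One illustrative record with all 16 features of the Model Library DB.
record = {
    "model_name_string": "Whisper",
    "model_name_url": "[Whisper](https://arxiv.org/abs/2212.04356)",
    "model_size_string": "1.55B",
    "dataset": "680,000 hours of labeled audio",
    "data_type": "Text, Audio",
    "research_field": "Natural Language Processing, Speech Recognition",
    "risks_and_limitations": "Yes",
    "risk_types": "Disinformation, ...",
    "publication_date": "12/6/2022",
    "organization_and_url": "[OpenAI Inc.](https://openai.com/) (Non-profit)",
    "institution_type": None,
    "country": "United States of America",
    "license": "MIT License",
    "paper_name_url": "[Robust Speech Recognition via ...](https://arxiv.org/abs/2212.04356)",
    "model_description": "...",
    "organization_info": "...",
}

# Instances can be collected into a dataframe for filtering/analysis.
df = pd.DataFrame([record])
```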

Data Fields

See Data Instances.

Data Splits

The "main" split is the current version displayed in the Model Library.

Dataset Creation

Curation Rationale

This dataset is maintained as part of a research project to catalog risks related to ML models.

Source Data

Initial Data Collection and Normalization

All data was collected manually.

Who are the source language producers?

More information can be found here.

Annotations

Annotation process

More information can be found here.

Who are the annotators?

Members of the AI Robotics Ethics Society (AIRES).

Personal and Sensitive Information

No personal or sensitive information is part of this dataset.

Considerations for Using the Data

Social Impact of Dataset

No considerations.

Discussion of Biases

No considerations.

Other Known Limitations

No considerations.

Additional Information

Dataset Curators

Members of the AI Robotics Ethics Society (AIRES).

Licensing Information

This dataset is licensed under the Apache License, version 2.0.

Citation Information


@misc{correa24library,
    author = {Nicholas Kluge Corr{\^e}a and Faizah Naqvi and Robayet Rossain},
    title = {Model Library},
    year = {2024},
    howpublished = {\url{https://github.com/Nkluge-correa/Model-Library}}
}

Contributions

If you would like to add a model, read our documentation and submit a PR on GitHub!
