AI & ML interests

How-To Collections for Learning and Teaching AI - Brain Development


Version 1

Perform a deep-dive synopsis, in Markdown, describing the datasets and input datasets used by two models in comparison, delving much deeper into recent papers and information on these datasets, and ideally finding the URL where each dataset or dataset paper can be viewed. Fix the article I have written below, starting with the datasets that were used to train the two models:

Language Models πŸ—£οΈ

πŸ† Bloom sets new record for most performant and efficient AI model in science! 🌸

Comparison of Large Language Models

| Model Name           | Model Size (Parameters) |
|----------------------|-------------------------|
| BigScience-tr11-176B | 176 billion             |
| GPT-3                | 175 billion             |
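
As a concrete starting point, here is a minimal sketch assuming the Hugging Face `transformers` library. It loads `bigscience/bloom-560m`, a small public sibling of BLOOM (the full `bigscience/bloom` checkpoint is the 176B model in the table, far too large for a single machine), and reproduces the "model size" column by counting parameters:

```python
# Minimal sketch: load a small BLOOM checkpoint and count its parameters.
# Assumes the `transformers` library and the public "bigscience/bloom-560m"
# checkpoint; swap in "bigscience/bloom" for the full 176B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Count parameters to reproduce the "model size" column above.
n_params = sum(p.numel() for p in model.parameters())
print(f"{model_id}: {n_params / 1e9:.2f}B parameters")

# Quick generation smoke test.
inputs = tokenizer("The BigScience project trained BLOOM on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```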

GPT-3 Datasets πŸ“š

Per the GPT-3 paper, "Language Models are Few-Shot Learners" (https://arxiv.org/abs/2005.14165), the training mix was:

  • Common Crawl (filtered) - about 60% of the training tokens
  • WebText2 - an expanded version of OpenAI's WebText corpus of pages linked from Reddit (~22%)
  • Books1 and Books2 - two internet-based book corpora that were never publicly released (~8% each)
  • English Wikipedia - about 3% of the training tokens

ChatGPT Datasets - Details πŸ“š

OpenAI has not publicly disclosed ChatGPT's training datasets; it is known to build on a GPT-3.5 base model, fine-tuned with supervised demonstrations and reinforcement learning from human feedback (see the Deep RL strategy section below).

Big Science Model πŸš€

Datasets used to train and evaluate BLOOM include:

    • Universal Dependencies: a collection of treebanks annotated for dependency parsing across a wide range of languages (https://universaldependencies.org/).
    • WMT 2014: shared-task data from the 2014 Workshop on Statistical Machine Translation, covering translation between English and several other languages (https://www.statmt.org/wmt14/).
    • The Pile: an 825 GB English-language corpus of diverse text drawn from 22 sources (https://arxiv.org/abs/2101.00027).
    • HumanEval: 164 hand-written Python programming problems for evaluating code generation (https://github.com/openai/human-eval).
    • FLORES-101: a benchmark of parallel sentences in 101 languages, designed for multilingual machine translation (https://arxiv.org/abs/2106.03193).
    • CrowS-Pairs: a challenge set of paired sentences for measuring social biases in language models (https://arxiv.org/abs/2010.00133).
    • WikiLingua: article-summary pairs in 18 languages, sourced from WikiHow, for cross-lingual abstractive summarization (https://arxiv.org/abs/2010.03093).
    • MTEB: the Massive Text Embedding Benchmark, which aggregates dozens of embedding tasks and datasets (https://github.com/embeddings-benchmark/mteb).
    • xP3: a multilingual collection of prompts and datasets covering 46 languages, used to fine-tune BLOOMZ (https://huggingface.co/datasets/bigscience/xP3).
    • DiaBLa: an English-French corpus of bilingual, spontaneous written dialogues for evaluating machine translation in context.

Deep RL ML Strategy 🧠

The core RLHF (Reinforcement Learning from Human Feedback) strategies are:

  • Language model preparation: supervised fine-tuning on human-written demonstrations πŸ€–
  • Reward model training: multiple models generate responses to a prompts dataset, and humans rank them (see the sketch after this list) 🎁
  • Fine-tuning with the reinforcement reward plus a distribution-distance (e.g., KL) penalty 🎯
  • Proximal Policy Optimization (PPO) fine-tuning 🀝
  • Variations: preference model pretraining πŸ€”
  • Ranking datasets built from sentiment signals: thumbs up/down distributions πŸ“Š
  • Online versions that gather feedback continuously πŸ’¬
  • OpenAI's InstructGPT: humans generate the LM training text πŸ”
  • DeepMind's Sparrow and GopherCite: advantage actor-critic methods 🦜
  • Reward models trained on human preference feedback πŸ†

For more information on specific techniques and implementations, check out the following resources:

  • OpenAI's GPT-3 paper, "Language Models are Few-Shot Learners" (https://arxiv.org/abs/2005.14165), which details their language model preparation approach
  • DeepMind's paper "Asynchronous Methods for Deep Reinforcement Learning" (https://arxiv.org/abs/1602.01783), which introduces the advantage actor-critic family of algorithms
  • OpenAI's paper "Fine-Tuning Language Models from Human Preferences" (https://arxiv.org/abs/1909.08593), which explains their approach to training reward models
  • OpenAI's InstructGPT paper, "Training language models to follow instructions with human feedback" (https://arxiv.org/abs/2203.02155), which describes the full fine-tuning process

Version 2:
