AI-ZeroToHero-031523
AI & ML interests
AGI and ML Pipelines, Ambient IoT AI, Behavior Cognitive and Memory AI, Clinical Medical and Nursing AI, Genomics AI, GAN Gaming GAIL AR VR XR and Simulation AI, Graph Ontology KR KE AI, Languages and NLP AI, Quantum Compute GPU TPU NPU AI, Vision Image Document and Audio/Video AI
Classroom Examples for Today:
HF Features to Check Out First - Boost your Speed:
- Create an HF_TOKEN - Why? If you hit the quota on free usage you will see errors; a token solves this. It also lets Spaces read and write to the Hub as you.
- Model Easy Button with Gradio
- https://huggingface.co/spaces/awacke1/Model-Easy-Button1-ZeroShotImageClassifier-Openai-clip-vit-large-patch14
- https://huggingface.co/spaces/awacke1/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli
- https://huggingface.co/spaces/awacke1/Model-Easy-Button-Generative-Images-runwayml-stable-diffusion-v1-5
- https://huggingface.co/spaces/awacke1/Model-Easy-Button-Generative-Text-bigscience-bloom
- Check out the API link at the bottom of each Space - Gradio auto-generates an API for you, along with usage documentation (see the sketch after this list).
- Spaces Embed Button
- Bring all four together now into a dashboard!
- Space Duplicate Button
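The API link is worth trying right away. Below is a minimal sketch of calling a Space's auto-generated Gradio API with plain `requests`, following the "Use via API" pattern; the `*.hf.space` subdomain and the input payload here are assumptions, so check the Space's own API page for the exact components it expects.

```python
# Hedged sketch: call a Space's auto-generated Gradio API over HTTP.
# The subdomain below is hypothetical (copy the real one from the Space),
# and the "data" list must match the Space's input components.
import requests

SPACE_URL = "https://awacke1-easy-button-zero-shot-text-classifier.hf.space"  # hypothetical subdomain
payload = {"data": ["I loved this class!", "positive, negative, neutral"]}   # assumed inputs

response = requests.post(SPACE_URL + "/run/predict", json=payload, timeout=60)
print(response.json()["data"])
```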
Examples 03_16_2023:
- HTML5 - Build AI Dashboards with HTML5 Spaces. Spaces Context Menu. Mediapipe. https://huggingface.co/spaces/awacke1/AI.Dashboard.HEDIS.Terminology.Vocabulary.Codes
- ChatGPT - Demonstrates three modes, including GPT-4, which launched this week. https://chat.openai.com/chat
- Wikipedia Crowdsource Human Feedback (HF) and Headless URL: https://awacke1-streamlitwikipediachat.hf.space https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat
- Cognitive Memory - AI Human Feedback (HF), Wikichat, Tweet Sentiment Dash: https://huggingface.co/spaces/awacke1/AI.Dashboard.Wiki.Chat.Cognitive.HTML5
- Twitter Sentiment Graph Example: https://awacke1-twitter-sentiment-live-realtime.hf.space/ Exercise: modify it to split the URL with ChatGPT.
- ASR Comparative Review (a transcription sketch follows this sub-list):
- Multilingual Models: jonatasgrosman/wav2vec2-large-xlsr-53-english Space: https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test
- Speech to Text and Back to Speech in Voice Models: https://huggingface.co/spaces/awacke1/TTS-STT-Blocks Model: https://huggingface.co/facebook/wav2vec2-base-960h
- Gradio Live Mode: https://huggingface.co/spaces/awacke1/2-LiveASR Models: facebook/blenderbot-400M-distill, nvidia/stt_en_conformer_transducer_xlarge
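To reproduce the ASR comparison locally, here is a short sketch using the `transformers` pipeline with the multilingual model named above; `sample.wav` is a placeholder audio file you supply.

```python
# Minimal ASR sketch with the wav2vec2 model from the comparison above;
# requires ffmpeg for audio decoding and a local "sample.wav" file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="jonatasgrosman/wav2vec2-large-xlsr-53-english")
print(asr("sample.wav")["text"])
```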
- Bloom Example:
- Step By Step w Bloom: https://huggingface.co/spaces/EuroPython2022/Step-By-Step-With-Bloom
- ChatGPT with Key Example (a minimal API-call sketch follows this list): https://huggingface.co/spaces/awacke1/chatgpt-demo
- Get or revoke your keys here: https://platform.openai.com/account/api-keys
- Example fake: tsk-H2W4lEeT4Aonxe2tQnUzT3BlbkFJq1cMwMANfYc0ftXwrJSo12345t
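For the key-based demo above, this is roughly what the Space does under the hood: a minimal sketch using the `openai` package and the `gpt-3.5-turbo` chat model, with the key read from an environment variable rather than hard-coded.

```python
# Minimal chat-completion sketch; set OPENAI_API_KEY in your environment
# (never commit a real key, and revoke any key you accidentally expose).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello to the class."}],
)
print(response["choices"][0]["message"]["content"])
```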
Components for a dashboard - use the Embed Space button to get the IFRAME code:
- Summarizer: https://huggingface.co/spaces/awacke1/Health.Assessments.Summarizer
- HEDIS-related dashboard with clinical terminology (CT): https://huggingface.co/spaces/awacke1/AI.Dashboard.HEDIS
Two easy ways to turbo-boost your AI learning journey!
AI Pair Programming
Open 2 Browsers to:
YouTube University Method:
2023 AI/ML Advanced Learning Playlists:
- 2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics
- 2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques
- 2023 Best Minds in AGI AI Gamification and Large Language Models
- 2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification
- 2023 QA Models and Long Form Question Answering NLP
Cloud Patterns - Dataset architecture patterns for cloud-optimal datasets (a usage sketch follows this list):
- Azure Blob/DataLake adlfs: https://huggingface.co/docs/datasets/filesystems
- AWS: Amazon S3 s3fs: https://s3fs.readthedocs.io/en/latest/
- Google Cloud Storage gcsfs: https://gcsfs.readthedocs.io/en/latest/
- Google Drive gdrivefs: https://github.com/intake/gdrivefs
Apache Beam: https://huggingface.co/docs/datasets/beam Datasets docs: https://huggingface.co/docs/datasets/index
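As a usage sketch of the pattern above (per the filesystems docs): save a dataset to S3 and reload it from there. The bucket name and credentials are placeholders.

```python
# Cloud dataset sketch: write a dataset to S3 and read it back.
# Requires `pip install datasets s3fs`; bucket and keys are placeholders.
from datasets import load_dataset, load_from_disk

storage_options = {"key": "YOUR_AWS_ACCESS_KEY_ID", "secret": "YOUR_AWS_SECRET_ACCESS_KEY"}

ds = load_dataset("imdb", split="train")
ds.save_to_disk("s3://my-bucket/imdb-train", storage_options=storage_options)

reloaded = load_from_disk("s3://my-bucket/imdb-train", storage_options=storage_options)
print(reloaded)
```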
Datasets Spaces - High Performance Cloud Dataset Patterns
- Health Care AI Datasets: https://huggingface.co/spaces/awacke1/Health-Care-AI-and-Datasets
- Dataset Analyzer: https://huggingface.co/spaces/awacke1/DatasetAnalyzer
- Shared Memory with Github LFS: https://huggingface.co/spaces/awacke1/Memory-Shared
- CSV Dataset Analyzer: https://huggingface.co/spaces/awacke1/CSVDatasetAnalyzer
- Pandas Profiling report for EDA on datasets (a Streamlit sketch follows this list): https://huggingface.co/spaces/awacke1/WikipediaProfilerTestforDatasets
- Datasets High Performance IMDB Patterns for AI: https://huggingface.co/spaces/awacke1/SaveAndReloadDataset
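A hypothetical Streamlit sketch of the CSV/profiler pattern above, assuming `pandas-profiling` and `streamlit-pandas-profiling` are listed in the Space's requirements.txt:

```python
# Upload a CSV and render a pandas-profiling EDA report inside Streamlit.
import pandas as pd
import streamlit as st
from pandas_profiling import ProfileReport
from streamlit_pandas_profiling import st_profile_report

uploaded = st.file_uploader("Upload a CSV for EDA", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st_profile_report(ProfileReport(df, minimal=True))
```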
ChatGPT Prompts Datasets (a dataset-loading sketch follows the example below)
- https://huggingface.co/datasets/fka/awesome-chatgpt-prompts
- https://github.com/f/awesome-chatgpt-prompts
- Example with role based behavior: I want you to act as a stand-up comedian. I will provide you with some topics related to current events and you will use your wit, creativity, and observational skills to create a routine based on those topics. You should also be sure to incorporate personal anecdotes or experiences into the routine in order to make it more relatable and engaging for the audience. My first request is "I want a humorous story and jokes to talk about the funny things about AI development and executive presentation videos"
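A short sketch of pulling the prompts dataset from the Hub and grabbing a role-based prompt like the comedian example above; it assumes the dataset's `act` and `prompt` columns, so verify them on the dataset page.

```python
# Load awesome-chatgpt-prompts and pick out a role-based prompt by its "act".
from datasets import load_dataset

prompts = load_dataset("fka/awesome-chatgpt-prompts", split="train")
comedian = next(row for row in prompts if "Comedian" in row["act"])
print(comedian["prompt"])
```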
Language Models
BLOOM set a new milestone as the largest open-access multilingual language model produced by an open-science collaboration.
Comparison of Large Language Models
| Model Name | Model Size (in Parameters) |
|---|---|
| BigScience-tr11-176B | 176 billion |
| GPT-3 | 175 billion |
| OpenAI's DALL-E 2.0 | 500 million |
| NVIDIA's Megatron | 8.3 billion |
| Transformer-XL | 250 million |
| XLNet | 210 million |
ChatGPT Datasets
- WebText
- Common Crawl
- BooksCorpus
- English Wikipedia
- Toronto Books Corpus
- OpenWebText
ChatGPT Datasets - Details (a streaming sketch follows this list)
- WebText: A dataset of web pages scraped from outbound Reddit links with at least 3 karma. This dataset was used to pretrain GPT-2.
- Common Crawl: A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3.
- Language Models are Few-Shot Learners by Brown et al.
- BooksCorpus: A dataset of over 11,000 books from a variety of genres.
- Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books by Zhu et al.
- English Wikipedia: A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017.
- Improving Language Understanding by Generative Pre-Training by Radford et al. (see also the Wikipedia search Space above)
- Toronto Books Corpus: A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto.
- OpenWebText: An open-source recreation of the WebText dataset: web pages filtered to remove content likely to be low-quality or spammy.
- OpenWebText Corpus by Gokaslan and Cohen.
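Several of these corpora have community mirrors on the Hub. Below is a sketch of streaming OpenWebText without downloading the whole corpus; the `openwebtext` dataset id is an assumption worth verifying on the Hub.

```python
# Stream a few OpenWebText examples instead of downloading ~40 GB up front.
from datasets import load_dataset

ds = load_dataset("openwebtext", split="train", streaming=True)
for example in ds.take(3):
    print(example["text"][:120])
```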
Big Science Model
Papers:
- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model Paper
- Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism Paper
- 8-bit Optimizers via Block-wise Quantization Paper
- Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation Paper
- Other papers related to Big Science
- 217 other models optimized for use with BLOOM (a small generation sketch follows this list)
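A small generation sketch with a BLOOM checkpoint; `bloom-560m` is used here so the example runs on modest hardware, while the full 176B model is normally reached through the hosted Inference API instead.

```python
# Generate text with a small BLOOM checkpoint via the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
out = generator("The BigScience workshop trained", max_new_tokens=30)
print(out[0]["generated_text"])
```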
Datasets:
- Universal Dependencies: A collection of annotated corpora for natural language processing in a range of languages, with a focus on dependency parsing.
- WMT 2014: The ninth Workshop on Statistical Machine Translation, featuring shared tasks on translating between English and various other languages.
- The Pile: An English language corpus of diverse text, sourced from various places on the internet.
- HumanEval: A benchmark of 164 hand-written Python programming problems for evaluating code generation from language models.
- Evaluating Large Language Models Trained on Code by Chen et al.
- FLORES-101: A dataset of parallel sentences in 101 languages, designed for multilingual machine translation.
- The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation by Goyal et al.
- CrowS-Pairs: A dataset of paired sentences for measuring social biases in masked language models.
- CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models by Nangia et al.
- WikiLingua: A dataset of article-summary pairs in 18 languages, sourced from WikiHow, for cross-lingual abstractive summarization.
- WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization by Ladhak et al.
- MTEB: The Massive Text Embedding Benchmark, spanning a wide range of embedding tasks and datasets across many languages.
- MTEB: Massive Text Embedding Benchmark by Muennighoff et al.
- xP3: A multilingual collection of prompts and datasets across 46 languages, used to finetune BLOOMZ for crosslingual instruction following.
- Crosslingual Generalization through Multitask Finetuning by Muennighoff et al.
- DiaBLa: A dataset of bilingual English-French written dialogues for evaluating machine translation in context.
- DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation by Bawden et al.
Dataset Papers with Code
Deep RL ML Strategy
The AI strategies are:
- Language model preparation using human-augmented data with supervised fine-tuning
- Reward model training with a prompts dataset: multiple models generate data to rank
- Fine-tuning with reinforcement reward and distance-distribution regret score
- Proximal Policy Optimization (PPO) fine-tuning
- Variations: preference model pretraining
- Use of ranking datasets with sentiment: thumbs up/down, distribution
- Online version gathering feedback
- OpenAI: InstructGPT, where humans generate LM training text
- DeepMind: Advantage Actor-Critic in Sparrow and GopherCite
- Reward model trained on human preference feedback (a loss sketch follows this list)
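For the reward-model step, here is a minimal PyTorch sketch of the InstructGPT-style pairwise ranking loss: the reward model scores a human-preferred and a rejected response, and training minimizes -log(sigmoid(r_chosen - r_rejected)).

```python
# Pairwise reward-model ranking loss (InstructGPT-style), in plain PyTorch.
import torch
import torch.nn.functional as F

def reward_ranking_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """r_chosen / r_rejected: scalar rewards per comparison pair, shape (batch,)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: random scores stand in for a reward model's outputs.
r_chosen = torch.randn(8, requires_grad=True)
r_rejected = torch.randn(8, requires_grad=True)
loss = reward_ranking_loss(r_chosen, r_rejected)
loss.backward()
print(float(loss))
```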
For more information on specific techniques and implementations, check out the following resources:
- OpenAI's GPT-3 paper (Language Models are Few-Shot Learners), which details their language model preparation approach
- DeepMind's A3C paper (Asynchronous Methods for Deep Reinforcement Learning), which describes the advantage actor-critic family of algorithms
- OpenAI's work on reward learning (Deep Reinforcement Learning from Human Preferences), which explains their approach to training reward models
- OpenAI's blog post on GPT-3's fine-tuning process