# Orca – Progressive Learning from Complex Explanation Traces of GPT-4

Author: Maxime Labonne
Published: August 7, 2024

> **Tip:** Orca is a 13B-parameter model with ChatGPT-level performance, thanks to a huge dataset of 5M samples with step-by-step explanations.

πŸ“ Paper: https://arxiv.org/abs/2306.02707

Orca will probably never be released by Microsoft, but open-source projects try to replicate it (OpenOrca, Dolphin).

The authors note that while Vicuna-13B displays excellent performance when evaluated with GPT-4, it performs quite poorly on reasoning benchmarks like SAT, LSAT, GRE, and GMAT.

Previous approaches to instruction tuning:

* **Self-Instruct** involves using an initial set of prompts to ask an LLM to create new instructions. Low-quality or overly similar responses are removed, and the remaining ones are recycled back into the task pool for further iterations. However, the queries generated this way can lack diversity and complexity. Alpaca was trained with this technique.
* **WizardLM** uses a variant of Self-Instruct that introduces the concept of Evol-Instruct, which gradually rewrites instructions into more complex versions using BFS and DFS.
* **Vicuna** and **Koala** demonstrate impressive performance thanks to their human-like, natural conversation data (ShareGPT). Problem: these imitation models capture the style of the teacher, not its reasoning process. This motivates the creation of Orca.

Auto-evaluation with GPT-4 has several drawbacks, such as limited test set sizes (for example, 80 instructions in Vicuna and 218 in WizardLM) and inherent biases: GPT-4 tends to favor models instruction-tuned on its own outputs, resulting in a preference for longer texts over shorter ones. It also exhibits a bias in the order of the candidate responses and overestimates the abilities of smaller models.

**Contributions:**

1. **Explanation tuning**: augmenting query-response pairs with detailed explanations that outline the reasoning process, guided by system instructions.
2. **Scaling tasks and instructions**: FLANv2 is used because it offers a wide variety of tasks. They created a dataset of 5 million ChatGPT responses and 1 million GPT-4 responses.
3. **Evaluation**: the model's comprehension abilities are assessed under various settings.

The authors focus a lot on how system instructions guide the model into adopting the right tone and format. I believe the same effect can be achieved with user prompts (maybe slightly less effectively). They sampled a diverse set of system instructions, including chain-of-thought steps, "explain like I'm five," being helpful and informative, etc.

## Dataset construction

Each sample is a triplet (system message, user query, response).

The raw FLAN-v2 Collection consists of five sub-collections: CoT, NiV2, T0 (training only), Flan 2021, and Dialogue.

* **CoT** is the most interesting one (150K samples, used entirely).
* From **NiV2** and **Flan 2021**, queries were randomly sampled (~10% was selected).
* **Dialogue** was completely skipped because it lacks the context needed to generate useful responses.

These queries are then used as inputs to generate high-quality responses with ChatGPT (5M) and GPT-4 (1M). They are prompted together with one of 16 handcrafted system messages to ensure different kinds of responses from the AI assistant, including:

* "You are an AI assistant. Provide a detailed answer so user don't need to search outside to understand the answer."
* "You are an AI assistant. You will be given a task. You must generate a detailed and long answer."
* "You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old."
* "You are an AI assistant that follows instruction extremely well. Help as much as you can."
* "You are an AI assistant that helps people find information."
* "You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps."
* "You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old."
* "Explain how you used the definition to come up with the answer."
* "You might need to use additional knowledge to answer the question."
* "User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can."
* "You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer."
* "You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task."
* "Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part #: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria."
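To make the construction concrete, here is a minimal sketch of how such a triplet could be generated with the OpenAI Python client. This is my own illustration, not the authors' pipeline: `SYSTEM_MESSAGES` and `build_sample` are hypothetical names, and using `gpt-3.5-turbo`/`gpt-4` as the teacher models is an assumption.

```python
import random
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

# Two of the 16 handcrafted system messages quoted above
SYSTEM_MESSAGES = [
    "You are an AI assistant. Provide a detailed answer so user don't need "
    "to search outside to understand the answer.",
    "You are a helpful assistant, who always provide explanation. "
    "Think like you are answering to a five year old.",
]

def build_sample(query: str, teacher: str = "gpt-3.5-turbo") -> dict:
    """Generate one (system message, user query, response) triplet.

    `teacher` would be "gpt-3.5-turbo" for the 5M split and
    "gpt-4" for the 1M split.
    """
    system = random.choice(SYSTEM_MESSAGES)
    completion = client.chat.completions.create(
        model=teacher,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": query},
        ],
    )
    return {
        "system": system,
        "query": query,
        "response": completion.choices[0].message.content,
    }

# Example with one FLAN-style query
sample = build_sample("Premise: 'A man inspects a uniform.' Does it entail "
                      "'The man is sleeping'? Explain your reasoning.")
```

Pairing each query with a randomly drawn system message is what produces the varied tones and reasoning formats (step-by-step, ELI5, etc.) in the final dataset.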
This two-step generation (ChatGPT first, then GPT-4) is motivated by curriculum learning, but also by big technical reasons (cost and time).

## Tuning

* Orca uses the LLaMA BPE tokenizer with an added padding token (vocabulary size = 32,001).
* Multiple examples are packed into a single sequence to maximize the use of the context length (2,048 tokens) and get a uniform sequence length (see the sketch at the end of this note).
* The model was trained for 160h on 20 A100 GPUs (4 epochs) on the ChatGPT-generated samples, plus 40h on the GPT-4-generated samples.

## Experiments

* **Open-ended generation**: Orca is significantly better than Vicuna-13B.
* **AGIEval**: Orca doesn't perform as well as ChatGPT.
* **BigBench-Hard**: on par with ChatGPT.
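As a side note on the packing step mentioned in the Tuning section, here is a minimal sketch of the idea, assuming each example has already been tokenized and terminated with an EOS token. The function and constant names are mine, and `PAD_ID = 32000` is a hypothetical id for the added padding token; this is not the authors' implementation.

```python
MAX_LEN = 2048   # Orca's maximum sequence length
PAD_ID = 32000   # hypothetical id of the added [PAD] token (vocab size = 32,001)

def pack_examples(tokenized_examples, max_len=MAX_LEN, pad_id=PAD_ID):
    """Greedily concatenate tokenized examples into sequences of exactly
    `max_len` tokens, never splitting an example across two sequences."""
    sequences, current = [], []
    for tokens in tokenized_examples:
        if len(tokens) > max_len:
            continue  # an example longer than the context window is dropped
        if len(current) + len(tokens) > max_len:
            # Flush: pad the current sequence to the uniform length
            sequences.append(current + [pad_id] * (max_len - len(current)))
            current = []
        current.extend(tokens)
    if current:
        sequences.append(current + [pad_id] * (max_len - len(current)))
    return sequences
```

Packing several short examples into each sequence wastes far less compute on padding than padding every example individually, which matters at the scale of 5M training samples.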