{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Agent Debates with Tools\n",
    "\n",
    "This example shows how to simulate multi-agent dialogues where agents have access to tools."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Import LangChain-related modules"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:48:59.695102Z",
     "start_time": "2024-04-25T14:48:59.075220Z"
    }
   },
   "source": [
    "from typing import Callable, List\n",
    "\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage,\n",
    ")\n",
     "from langchain_community.chat_models.gigachat import GigaChat"
   ],
   "outputs": [],
   "execution_count": 1
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Load and split the document used by the tools"
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:02.894461Z",
     "start_time": "2024-04-25T14:48:59.696106Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_community.document_loaders import ArxivLoader\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
     "docs = ArxivLoader(query=\"2404.14619\", load_max_docs=2, load_all_available_meta=False).load()\n",
     "text_splitter = RecursiveCharacterTextSplitter(\n",
     "    chunk_size=700,  # maximum chunk size, in characters\n",
     "    chunk_overlap=0,  # allowed overlap between adjacent chunks\n",
     "    length_function=len,  # function used to measure chunk length (here, characters)\n",
    ")\n",
    "\n",
    "docs_split = text_splitter.split_documents(docs)"
   ],
   "outputs": [],
   "execution_count": 2
  },
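  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (not part of the original example), we can confirm that the splitter produced non-empty chunks and that every chunk respects the configured 700-character limit:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Sanity check: the splitter should yield at least one chunk,\n",
    "# and no chunk should exceed the 700-character limit set above.\n",
    "print(f\"{len(docs_split)} chunks\")\n",
    "assert len(docs_split) > 0\n",
    "assert all(len(d.page_content) <= 700 for d in docs_split)"
   ],
   "outputs": [],
   "execution_count": null
  },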
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:02.907461Z",
     "start_time": "2024-04-25T14:49:02.895466Z"
    }
   },
   "cell_type": "code",
   "source": "docs_split",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(page_content='OpenELM: An Efficient Language Model Family with Open-source Training\\nand Inference Framework\\nSachin Mehta\\nMohammad Hossein Sekhavat\\nQingqing Cao\\nMaxwell Horton\\nYanzi Jin\\nChenfan Sun\\nIman Mirzadeh\\nMahyar Najibi\\nDmitry Belenko\\nPeter Zatloukal\\nMohammad Rastegari\\nApple\\nModel\\nPublic dataset\\nOpen-source\\nModel size\\nPre-training tokens\\nAverage acc. (in %)\\nCode\\nWeights\\nOPT [55]\\n✗\\n✓\\n✓\\n1.3 B\\n0.2 T\\n41.49\\nPyThia [5]\\n✓\\n✓\\n✓\\n1.4 B\\n0.3 T\\n41.83\\nMobiLlama [44]\\n✓\\n✓\\n✓\\n1.3 B\\n1.3 T\\n43.55\\nOLMo [17]\\n✓\\n✓\\n✓\\n1.2 B\\n3.0 T\\n43.57\\nOpenELM (Ours)\\n✓\\n✓\\n✓\\n1.1 B\\n1.5 T\\n45.93\\nTable 1. OpenELM vs. public LLMs. OpenELM outperforms comparable-sized existing LLMs pretrained on publicly available datasets.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='Notably, OpenELM outperforms the recent open LLM, OLMo, by 2.36% while requiring 2× fewer pre-training tokens. The average\\naccuracy is calculated across multiple tasks listed in Tab. 3b, which are also part of the OpenLLM leaderboard [4]. Models pretrained with\\nless data are highlighted in gray color.\\nAbstract\\nThe reproducibility and transparency of large language\\nmodels are crucial for advancing open research, ensuring\\nthe trustworthiness of results, and enabling investigations\\ninto data and model biases, as well as potential risks. To\\nthis end, we release OpenELM, a state-of-the-art open lan-\\nguage model. OpenELM uses a layer-wise scaling strategy', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='to efficiently allocate parameters within each layer of the\\ntransformer model, leading to enhanced accuracy. For ex-\\nample, with a parameter budget of approximately one bil-\\nlion parameters, OpenELM exhibits a 2.36% improvement\\nin accuracy compared to OLMo while requiring 2× fewer\\npre-training tokens.\\nDiverging from prior practices that only provide model\\nweights and inference code, and pre-train on private\\ndatasets, our release includes the complete framework for\\ntraining and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple check-\\npoints, and pre-training configurations.\\nWe also release', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='code to convert models to MLX library for inference and\\nfine-tuning on Apple devices. This comprehensive release\\naims to empower and strengthen the open research commu-\\nnity, paving the way for future open research endeavors.\\nOur source code along with pre-trained model weights\\nand training recipes is available at https://github.\\ncom/apple/corenet. Additionally, OpenELM mod-\\nels can be found on HuggingFace at:\\nhttps : / /\\nhuggingface.co/apple/OpenELM.\\n1. Introduction\\nTransformer-based [48] large language models (LLM)\\nare revolutionizing the field of natural language processing\\n[7,46]. These models are isotropic, meaning that they have\\nthe same configuration (e.g., number of heads and feed-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='forward network dimensions) for each transformer layer.\\nThough such isotropic models are simple, they may not al-\\nlocate parameters efficiently inside the model.\\nIn this work, we develop and release OpenELM, a fam-\\nily of pre-trained and fine-tuned models on publicly avail-\\nable datasets.\\nAt the core of OpenELM lies layer-wise\\nscaling [30], enabling more efficient parameter allocation\\nacross layers. This method utilizes smaller latent dimen-\\nsions in the attention and feed-forward modules of the trans-\\nformer layers closer to the input, and gradually widening the\\nlayers as they approach the output.\\nWe release the complete framework, encompassing data', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='preparation, training, fine-tuning, and evaluation proce-\\ndures, alongside multiple pre-trained checkpoints and train-\\ning logs, to facilitate open research. Importantly, OpenELM\\noutperforms existing open LLMs that are pre-trained us-\\ning publicly available datasets (Tab. 1).\\nFor example,\\nOpenELM with 1.1 billion parameters outperforms OLMo\\n1\\narXiv:2404.14619v1  [cs.CL]  22 Apr 2024\\n[17], which has 1.2 billion parameters, by 2.36% while re-\\nquiring 2× fewer pre-training tokens.\\n2. Pre-training\\nThis section describes the framework, including model\\narchitecture (§2.1), pre-training data (§2.2), training hyper-\\nparameters (§2.3), and evaluation (§2.4).\\n2.1. OpenELM architecture', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='We adopt the decoder-only transformer-based architec-\\nture. Following state-of-the-art LLMs, we: (1) do not use\\nlearnable bias parameters in any fully-connected (a.k.a., lin-\\near) layers, (2) apply pre-normalization using RMSNorm\\n[53] and also, use rotatory positional embedding (ROPE)\\n[43] for encoding positional information, (3) use grouped\\nquery attention (GQA) [1] instead of multi-head attention\\n(MHA), (4) replace the feed forward network (FFN) with\\nSwiGLU FFN [41], (5) use flash attention [13] for comput-\\ning the scaled dot-product attention, and (6) use the same\\ntokenizer as LLama [46].\\nExisting LLMs use the same configuration for each', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='transformer layer in the model, resulting in a uniform al-\\nlocation of parameters across layers. Unlike these models,\\neach transformer layer in OpenELM has a different config-\\nuration (e.g., number of heads and feed forward network\\ndimension), resulting in variable number of parameters in\\neach layer of the model. This lets OpenELM to better utilize\\nthe available parameter budget for achieving higher accura-\\ncies. We implement this non-uniform allocation of parame-\\nters across layers using layer-wise scaling (also referred as\\nblock-wise scaling in [30]).\\nLayer-wise scaling.\\nA standard transformer layer is com-\\nposed of multi-head attention (MHA) and feed-forward net-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='work (FFN). For non-uniform allocation of parameters in\\nthe transformer layer, we adjust the number of attention\\nheads and the FFN multiplier in each transformer layer.\\nAssume that the standard transformer model with uni-\\nform parameter allocation has N transformer layers and\\nthe dimensionality of the input to each layer is dmodel.\\nThe MHA has nh heads and dimension of each head is\\ndh =\\ndmodel\\nnh\\n.\\nAlso, the hidden dimension for FFN is\\ndFFN = m · dmodel, where m is a scalar FFN multiplier.\\nWe introduce parameters α and β to scale the number of\\nattention heads nh and FFN multiplier m per layer respec-\\ntively. For the i-th layer, nh and m are computed as\\nni\\nh = αi · dmodel\\ndh\\n,\\nmi = βi', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='where αi = αmin + (αmax −αmin) · i\\nN −1\\n,\\nand βi = βmin + (βmax −βmin) · i\\nN −1\\n, 0 ≤i < N.\\n(1)\\nHere, αmin and αmax are the hyper-parameters that allow\\nus to scale the attention heads. Similarly, βmin and βmax\\nlet us to vary the width of FFN layers. Therefore, vary-\\ning the configuration of standard transformer layers using α\\nand β results in non-uniform allocation of parameters in the\\nmodel. Note, setting αmin = αmax = 1.0 and mi = m\\nproduces the standard uniform transformer model.\\n2.2. Pre-training data\\nFor pre-training, we use public datasets. Specifically,\\nour pre-training dataset contains RefinedWeb [35], dedupli-\\ncated PILE [15], a subset of RedPajama [11], and a subset', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='of Dolma v1.6 [42], totaling approximately 1.8 trillion to-\\nkens. These details are also summarized in Tab. 2.\\nOn-the-fly tokenization and data filtering.\\nUnlike pre-\\nvious approaches that utilize pre-tokenized data [5,17], we\\nfilter and tokenize text data on-the-fly. This facilitates seam-\\nless experimentation with various tokenizers, thereby sig-\\nnificantly simplifying prototyping and research endeavors.\\nIn our experiments, we use the same tokenizer as used in\\nLLama [46].\\nTo filter out low-length sequences, we apply two filter-\\ning methods. The first method operates at the character-\\nlevel, checking if the number of characters in the sequence', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='is below a specified threshold. The second method operates\\nat the token-level, where it examines whether the sequence\\ncontains fewer tokens than a specified threshold. Sequences\\nthat are shorter than either of these thresholds are skipped.\\nIn our experiments, we use 200 characters and 256 tokens\\nas character and token-level filtering thresholds.\\n2.3. Training details\\nWe train OpenELM variants for 350k iterations (or train-\\ning steps) using CoreNet (formerly CVNets [29]). We use\\nAdamW [28] as an optimizer. We use a cosine learning rate\\nSource\\nSubset\\nTokens\\nRefinedWeb\\n665 B\\nRedPajama\\nGithub\\n59 B\\nBooks\\n26 B\\nArXiv\\n28 B\\nWikipedia\\n24 B\\nStackExchange\\n20 B\\nC4\\n175 B\\nPILE\\n207 B\\nDolma\\nThe Stack\\n411 B', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='Reddit\\n89 B\\nPeS2o\\n70 B\\nProject Gutenberg\\n6 B\\nWikipedia + Wikibooks\\n4.3 B\\nTable 2. Dataset used for pre-training OpenELM.\\n2\\nTask\\nMetric\\nARC-c\\nNormalized accuracy\\nARC-e\\nNormalized accuracy\\nBoolQ\\nAccuracy\\nHellaSwag\\nNormalized accuracy\\nPIQA\\nNormalized accuracy\\nSciQ\\nAccuracy\\nWinoGrande\\nAccuracy\\n(a) Standard zero-shot metrics\\nTask\\nMetric\\nNum. few\\nshot examples\\nARC-c\\nNormalized accuracy\\n25\\nHellaSwag\\nNormalized accuracy\\n10\\nMMLU\\nAccuracy\\n5\\nTruthfulQA-mc2\\nAccuracy\\n0\\nWinoGrande\\nAccuracy\\n5\\n(b) OpenLLM leaderboard\\nTask\\nMetric\\nNum. few\\nshot examples\\nARC-c\\nNormalized accuracy\\n25\\nCrowsPairs-En\\nPCT stereotype\\n25\\nHellaSwag\\nNormalized accuracy\\n10\\nWinoGrande\\nAccuracy\\n5\\nMMLU\\nAccuracy\\n5\\nPIQA\\nNormalized accuracy', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='0\\nRACE\\nAccuracy\\n0\\n(c) LLM360\\nTable 3. Tasks and metrics used for evaluating OpenELM.\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n22.5\\n25.0\\n27.5\\n30.0\\n32.5\\n35.0\\nAccuracy (in %)\\n(a) ARC-c\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n40\\n45\\n50\\n55\\n60\\nAccuracy (in %)\\n(b) ARC-e\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n50\\n55\\n60\\n65\\nAccuracy (in %)\\n(c) BoolQ\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n40\\n50\\n60\\n70\\nAccuracy (in %)\\n(d) HellaSwag\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n66\\n68\\n70\\n72\\n74\\n76\\n78\\nAccuracy (in %)\\n(e) PIQA\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n77.5\\n80.0\\n82.5\\n85.0', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='87.5\\n90.0\\n92.5\\nAccuracy (in %)\\n(f) SciQ\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n52.5\\n55.0\\n57.5\\n60.0\\n62.5\\n65.0\\nAccuracy (in %)\\n(g) WinoGrande\\n0.04\\n0.02\\n0.00\\n0.04\\n0.02\\n0.00\\n0.02\\n0.04\\n270M\\n450M\\n1.1B\\n3B\\nOpenELM sizes\\nFigure 1. OpenELM’s performance across training iterations on standard zero-shot tasks. In the majority of tasks, the performance\\nof OpenELM shows improvement with increasing training duration. Furthermore, the model checkpoint obtained by averaging the last five\\ncheckpoints, collected at intervals of 5k iterations, demonstrates comparable or slightly better performance (indicated by  markers) as\\ncompared to the last checkpoint obtained after 350k iterations.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='schedule [27], with warm up of 5k iterations, and decay\\nthe final learning rate down to 10% of maximum learning\\nrate. We use a weight decay of 0.1 and gradient clipping\\nof 1.0. We train four variants of OpenELM (270M, 450M,\\n1.1B, and 3B), and for some, we use FSDP [56] and acti-\\nvation checkpointing [8]. Please refer to Appendix A for\\nadditional pre-training details.\\n2.4. Evaluation details\\nFollowing previous works, we evaluate the performance\\nacross different tasks using LM Evaluation Harness [16]1:\\n• Standard zero-shot tasks.\\nWe consider 7 standard\\ncommon-sense reasoning tasks: ARC easy and challenge\\n[10], BoolQ [9], HellaSwag [52], PIQA [6], SciQ [49],\\nand WinoGrande [39].', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='• OpenLLM leaderboard tasks.\\nWe use 5 tasks from\\nOpenLLM leaderboard [4]: ARC challenge, HellaSwag,\\nMMLU [20], TruthfulQA [24], and WinoGrande.\\n• LLM360 leaderboard tasks.\\nWe use 7 tasks from\\nLLM360 leaderboard [26] for evaluation: ARC chal-\\n1We\\nuse\\ncommit\\ndc90fec\\nof\\nhttps : / / github . com /\\nEleutherAI/lm-evaluation-harness\\nlenge, CrowS-Pairs (English version) [32], HellaSwag,\\nWinoGrande, MMLU, PIQA, and RACE [23].\\nThese evaluation frameworks, built on top of LM Eval-\\nuation Harness, allows us to comprehensively evaluate\\nOpenELM in terms of reasoning (e.g., ARC-c, HellaSwag,\\nand PIQA), knowledge understanding (e.g., MMLU and\\nRACE), and misinformation & bias (e.g., TruthfulQA and', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='CrowS-Pairs). While there may be some overlap in tasks\\namong these frameworks, they primarily differ in the few-\\nshot settings, as outlined in Tab. 3.\\n3. Experimental Results\\nPre-training results.\\nWe evaluate the performance of\\nOpenELM on zero-shot and few-shot settings (Tab. 3). We\\ncompare OpenELM with publicly available LLMs, namely\\nPyThia [5], Cerebras-GPT [14], TinyLlama [54], OpenLM\\n[18], MobiLlama [44], and OLMo [17]. The works most\\nclosely related to ours are MobiLlama and OLMo. These\\nmodels are trained on comparable dataset mixtures, with\\nsimilar or larger number of pre-training tokens.\\nIn Fig. 1, the accuracy of OpenELM is plotted against', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='training iterations for 7 standard zero-shot tasks. We ob-\\nserve an overall increase in accuracy with longer training\\n3\\nModel\\nModel size\\nPretraining tokens\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSciQ\\nWinoGrande\\nAverage\\nAverage w/o SciQ\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n26.45\\n45.08\\n53.98\\n46.71\\n69.75\\n84.70\\n53.91\\n54.37\\n49.31\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n26.62\\n46.04\\n55.72\\n51.06\\n71.11\\n83.60\\n53.20\\n55.34\\n50.63\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n27.56\\n48.06\\n55.78\\n53.97\\n72.31\\n87.20\\n58.01\\n57.56\\n52.62\\nTinyLlama [54]\\n1.10 B\\n3.0 T\\n30.12\\n55.25\\n57.83\\n59.20\\n73.29\\n-\\n59.12\\n-\\n55.80\\nOpenLM [18]\\n1.00 B\\n1.6 T\\n31.00\\n56.00\\n65.00\\n61.00\\n74.00\\n-\\n60.00\\n-\\n57.83\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n28.84\\n49.62\\n60.03\\n52.45\\n73.18\\n85.90\\n55.96\\n58.00', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='53.35\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n31.91\\n56.65\\n60.34\\n62.18\\n74.81\\n89.10\\n59.27\\n62.04\\n57.53\\nOLMo [17]\\n1.18 B\\n3.0 T\\n31.06\\n57.28\\n61.74\\n62.92\\n75.14\\n87.00\\n59.98\\n62.16\\n58.02\\nOpenELM (Ours)\\n1.08 B\\n1.5 T\\n32.34\\n55.43\\n63.58\\n64.81\\n75.57\\n90.60\\n61.72\\n63.44\\n58.91\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n35.58\\n59.89\\n67.40\\n72.44\\n78.24\\n92.70\\n65.51\\n67.39\\n63.18\\n(a) Results on zero-shot tasks with respect to the standard metrics defined in Tab. 3a.\\nModel\\nModel size\\nPretraining tokens\\nARC-c\\nHellaSwag\\nMMLU\\nTruthfulQA-mc2\\nWinoGrande\\nAverage\\nCerebras-GPT [14]\\n0.26 B\\n5.1 B\\n22.01\\n28.99\\n26.83\\n45.98\\n52.49\\n35.26\\nOPT [55]\\n0.35 B\\n0.2 T\\n23.55\\n36.73\\n26.02\\n40.83\\n52.64\\n35.95\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n27.65\\n47.15\\n25.72\\n39.24\\n53.83\\n38.72', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='Pythia [5]\\n0.41 B\\n0.3 T\\n24.83\\n41.29\\n25.99\\n40.95\\n54.38\\n37.49\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n29.52\\n52.75\\n26.09\\n37.55\\n56.27\\n40.44\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n30.20\\n53.86\\n26.01\\n40.18\\n57.22\\n41.50\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n30.63\\n54.17\\n25.2\\n38.41\\n56.35\\n40.95\\nPythia [5]\\n1.40 B\\n0.3 T\\n32.68\\n54.96\\n25.56\\n38.66\\n57.30\\n41.83\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n34.64\\n63.27\\n23.87\\n35.19\\n60.77\\n43.55\\nOLMo [17]\\n1.18 B\\n3.0 T\\n34.47\\n63.81\\n26.16\\n32.94\\n60.46\\n43.57\\nOpenELM (Ours)\\n1.08 B\\n1.5 T\\n36.69\\n65.71\\n27.05\\n36.98\\n63.22\\n45.93\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n42.24\\n73.28\\n26.76\\n34.98\\n67.25\\n48.90\\n(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 3b.\\nModel\\nModel size\\nPretraining tokens\\nARC-c', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='CrowS-Pairs\\nHellaSwag\\nMMLU\\nPIQA\\nRACE\\nTruthfulQA\\nWinoGrande\\nAverage\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n27.65\\n66.79\\n47.15\\n25.72\\n69.75\\n30.91\\n39.24\\n53.83\\n45.13\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n29.52\\n65.47\\n52.75\\n26.09\\n71.11\\n32.15\\n37.55\\n56.27\\n46.37\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n30.20\\n68.63\\n53.86\\n26.01\\n72.31\\n33.11\\n40.18\\n57.22\\n47.69\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n30.63\\n66.25\\n54.17\\n25.2\\n73.18\\n33.68\\n38.41\\n56.35\\n47.23\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n34.64\\n70.24\\n63.27\\n23.87\\n74.81\\n35.02\\n35.19\\n60.77\\n49.73\\nOLMo [17]\\n1.18 B\\n3.0 T\\n34.47\\n69.95\\n63.81\\n26.16\\n75.14\\n36.75\\n32.94\\n60.46\\n49.96\\nOpenELM (Ours)\\n1.08 B\\n1.5T\\n36.69\\n71.74\\n65.71\\n27.05\\n75.57\\n36.46\\n36.98\\n63.22\\n51.68\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n42.24\\n73.29\\n73.28\\n26.76\\n78.24\\n38.76', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='34.98\\n67.25\\n54.35\\n(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.\\nTable 4. Comparison of OpenELM with publicly available LLMs across various evaluation frameworks.. We chose MobiLlama and\\nOLMo as our baselines because they are pre-trained on public datasets using a similar or larger number of tokens. We evaluate OpenELM,\\nMobiLlama, and OLMo using the same LM evaluation harness version. Results for other models in Tab. 4a and Tab. 4b are taken from their\\nofficial GitHub repositories and the OpenLLM leaderboard [4], respectively. Best task accuracy for each model category is highlighted in\\nbold. Models pre-trained with less data are highlighted in gray color.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari'}),\n",
       " Document(page_content='durations across most tasks. Additionally, the checkpoint\\nobtained by averaging the last five checkpoints, collected\\nat intervals of 5000 iterations, demonstrates comparable or\\nslightly better accuracy compared to the final checkpoint\\nobtained after 350k iterations. This improvement is likely\\ndue to noise reduction through weight averaging. Conse-\\nquently, we use the averaged checkpoint for our main eval-\\nuations in Tab. 4, instruction tuning experiments in Tab. 5,\\nand parameter-efficient tuning experiments in Tab. 6.\\nThe results in Tab. 4 span across various evaluation\\nframeworks, and highlights OpenELM’s effectiveness over\\nexisting methods. For instance, an OpenELM variant with', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='1.1 billion parameters achieves 1.28% (Tab. 4a), 2.36%\\n(Tab. 4b), and 1.72% (Tab. 4c) higher accuracy compared to\\nOLMo with 1.2 billion parameters. Remarkably, OpenELM\\nachieves this level of accuracy while using 2× less pre-\\ntraining data.\\nInstruction tuning results.\\nWe use the cleaned variant of\\nUltraFeedback [3, 12] dataset that consists of 60k prompts\\nfor instruction tuning.\\nWe do instruction tuning using\\nAlignment Handbook library [47]. For optimization, we\\nuse either the statistical rejection sampling method [25] or\\nthe direct preference optimization method [37]. These sam-\\npling method details along with other hyper-parameters and\\nfine-tuning details are given in Appendix B.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='Tab. 5 shows that instruction tuning consistently im-\\nproves OpenELM’s average accuracy by 1-2% across dif-\\nferent evaluation frameworks.\\n4\\nModel Size\\nInstruction Tuned?\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSciQ\\nWinoGrande\\nAverage\\n0.27 B\\n✗\\n26.45\\n45.08\\n53.98\\n46.71\\n69.75\\n84.70\\n53.91\\n54.37\\n✓\\n30.55\\n46.68\\n48.56\\n52.07\\n70.78\\n84.40\\n52.72\\n55.11\\n0.45 B\\n✗\\n27.56\\n48.06\\n55.78\\n53.97\\n72.31\\n87.20\\n58.01\\n57.56\\n✓\\n30.38\\n50.00\\n60.37\\n59.34\\n72.63\\n88.00\\n58.96\\n59.95\\n1.08 B\\n✗\\n32.34\\n55.43\\n63.58\\n64.81\\n75.57\\n90.60\\n61.72\\n63.44\\n✓\\n37.97\\n52.23\\n70.00\\n71.20\\n75.03\\n89.30\\n62.75\\n65.50\\n3.04 B\\n✗\\n35.58\\n59.89\\n67.40\\n72.44\\n78.24\\n92.70\\n65.51\\n67.39\\n✓\\n39.42\\n61.74\\n68.17\\n76.36\\n79.00\\n92.50\\n66.85\\n69.15', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='(a) Results on zero-shot tasks with respect to the metrics defined in Tab. 3a.\\nModel Size\\nInstruction Tuned?\\nARC-c\\nHellaSwag\\nMMLU\\nTruthfulQA\\nWinoGrande\\nAverage\\n0.27 M\\n✗\\n27.65\\n47.15\\n25.72\\n39.24\\n53.83\\n38.72\\n✓\\n32.51\\n51.58\\n26.70\\n38.72\\n53.20\\n40.54\\n0.45 M\\n✗\\n30.20\\n53.86\\n26.01\\n40.18\\n57.22\\n41.50\\n✓\\n33.53\\n59.31\\n25.41\\n40.48\\n58.33\\n43.41\\n1.08 B\\n✗\\n36.69\\n65.71\\n27.05\\n36.98\\n63.22\\n45.93\\n✓\\n41.55\\n71.83\\n25.65\\n45.95\\n64.72\\n49.94\\n3.04 B\\n✗\\n42.24\\n73.28\\n26.76\\n34.98\\n67.25\\n48.90\\n✓\\n47.70\\n76.87\\n24.80\\n38.76\\n67.96\\n51.22\\n(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 3b.\\nModel Size\\nInstruction Tuned?\\nARC-c\\nCrowS-Pairs\\nHellaSwag\\nMMLU\\nPIQA\\nRACE\\nTruthfulQA\\nWinoGrande\\nAverage\\n0.27 M\\n✗\\n27.65', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='66.79\\n47.15\\n25.72\\n69.75\\n30.91\\n39.24\\n53.83\\n45.13\\n✓\\n32.51\\n66.01\\n51.58\\n26.70\\n70.78\\n33.78\\n38.72\\n53.20\\n46.66\\n0.45 M\\n✗\\n30.20\\n68.63\\n53.86\\n26.01\\n72.31\\n33.11\\n40.18\\n57.22\\n47.69\\n✓\\n33.53\\n67.44\\n59.31\\n25.41\\n72.63\\n36.84\\n40.48\\n58.33\\n49.25\\n1.08 B\\n✗\\n36.69\\n71.74\\n65.71\\n27.05\\n75.57\\n36.46\\n36.98\\n63.22\\n51.68\\n✓\\n41.55\\n71.02\\n71.83\\n25.65\\n75.03\\n39.43\\n45.95\\n64.72\\n54.40\\n3.04 B\\n✗\\n42.24\\n73.29\\n73.28\\n26.76\\n78.24\\n38.76\\n34.98\\n67.25\\n54.35\\n✓\\n47.70\\n72.33\\n76.87\\n24.80\\n79.00\\n38.47\\n38.76\\n67.96\\n55.73\\n(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.\\nTable 5. Instruction tuning improves OpenELM’s accuracy across different model sizes.\\nParameter-efficient fine-tuning (PEFT) results.\\nWe use', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='the CommonSense reasoning training and evaluation setup\\n[22].\\nThis setup provides 170k training samples across\\n8 multiple-choice datasets for PEFT studies with differ-\\nent methods, including LoRA [21] and DoRA [51]. We\\nintegrate OpenELM with these methods, and finetune the\\nresulting model for three epochs using 8 NVIDIA H100\\nGPUs. Tab. 6 shows that PEFT methods can be applied to\\nOpenELM. LoRA and DoRA deliver similar accuracy on\\naverage across the given CommonSense reasoning datasets.\\n4. Benchmarking\\nHardware.\\nWe benchmark on modern, consumer-grade\\nhardware with BFloat16 as the data type.\\nSpecifically,\\nCUDA benchmarks were performed on a workstation with', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='an Intel i9-13900KF CPU, equipped with 64 GB of DDR5-\\n4000 DRAM, and an NVIDIA RTX 4090 GPU with 24 GB\\nof VRAM, running Ubuntu 22.04. PyTorch v2.2.2 [34] was\\nused, with the most recent versions of models and the as-\\nsociated libraries. HuggingFace Transformers v4.39.3 [50]\\nwas used to benchmark HuggingFace models. We did not\\nuse Torch Inductor for model compilation.\\nTo benchmark OpenELM models on the Apple silicon,\\nwe used an Apple MacBook Pro with an M2 Max system-\\non-chip and 64GiB of RAM, running macOS 14.4.1. We\\nported the code and the weights of OpenELM to Apple\\nMLX v0.10.0 [19]. To maximize the throughput, lazy eval-\\nuation was used in MLX with 8 tokens evaluated at a time.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='Evaluation.\\nWe provide two separate measurements for\\ntoken throughput (measured in terms of tokens processed\\nper second): (1) prompt processing (pre-fill), and (2) token\\ngeneration. Additionally, we also report the total combined\\nthroughput.\\nWe benchmark all models sequentially, and\\nexecute one full “dry run” generating 1024 tokens for the\\nfirst model, since we found that this significantly increases\\nthe throughput of generation for subsequent models. Be-\\nfore measurement for each individual model, we warm up\\nthe model by executing a single forward pass to allow the\\nframeworks to perform further auto-tuning, if any. In all\\nexperiments, we use key-value caching and generate 1024', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='tokens in addition to the prompt tokens in all tests. Static\\n5\\nModel Size\\nPEFT\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSIQA\\nWinoGrande\\nOBQA\\nAverage\\n0.27 B\\nLoRA\\n24.57\\n26.60\\n62.14\\n24.84\\n50.05\\n42.02\\n49.88\\n28.00\\n38.51\\nDoRA\\n26.19\\n28.07\\n62.20\\n25.22\\n50.11\\n44.42\\n50.12\\n31.20\\n39.69\\n0.45 B\\nLoRA\\n28.67\\n29.88\\n62.29\\n25.85\\n52.39\\n49.59\\n50.91\\n33.20\\n41.60\\nDoRA\\n28.33\\n30.39\\n62.26\\n25.12\\n52.29\\n49.28\\n50.83\\n32.00\\n41.31\\n1.08 B\\nLoRA\\n45.14\\n61.11\\n61.77\\n77.95\\n72.31\\n69.70\\n61.64\\n59.20\\n63.60\\nDoRA\\n44.11\\n61.49\\n61.68\\n78.92\\n71.38\\n69.04\\n64.01\\n58.80\\n63.68\\n3.04 B\\nLoRA\\n46.93\\n66.25\\n62.48\\n81.22\\n75.19\\n70.62\\n65.51\\n58.20\\n65.80\\nDoRA\\n46.50\\n66.46\\n62.35\\n80.84\\n75.73\\n70.83\\n63.77\\n58.20\\n65.59', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='Table 6. OpenELM with PEFT. Both LoRA and DoRA demonstrate comparable performance when OpenELM is finetuned on Common-\\nSense reasoning benchmark. It’s important to note that these fine-tuning results, obtained using the evaluation setup of LLM-Adapters [22],\\ndiffer from the results in Tabs. 4 and 5. This is because the results in Tabs. 4 and 5 are obtained under zero- and few-shot settings using\\nLM Evaluation Harness. Note that we did not use social interactions QA (SIQA; [40]) and OpenBookQA (OBQA; [31]) in Tabs. 4 and 5\\nbecause of evaluation issues with LLama tokenizer in LM Evaluation Harness (see [45]).\\nModel\\nModel size\\nThroughput (Tokens per second)\\nPrompt\\nGeneration\\nTotal\\nOPT [55]', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='0.35 B\\n6524.17\\n214.11\\n220.21\\nOpenELM (Ours)\\n0.27 B\\n6427.27\\n159.67\\n165.85\\nMobiLlama [44]\\n0.50 B\\n3423.25\\n136.35\\n146.86\\nOpenELM (Ours)\\n0.45 B\\n5211.35\\n128.46\\n133.42\\nMobiLlama [44]\\n0.80 B\\n4151.75\\n126.01\\n130.08\\nPythia [5]\\n1.40 B\\n4501.85\\n139.65\\n143.83\\nMobiLlama [44]\\n1.26 B\\n4938.29\\n142.96\\n147.67\\nOLMo [17]\\n1.18 B\\n7151.65\\n203.40\\n209.26\\nOpenELM (Ours)\\n1.08 B\\n3681.73\\n92.15\\n95.72\\nOpenELM (Ours)\\n3.04 B\\n2712.56\\n70.11\\n72.82\\n(a) Results on NVIDIA CUDA / Linux.\\nModel\\nThroughput (Tokens per second)\\nPrompt\\nGeneration\\nTotal\\nOpenELM-0.27B\\n1151.41\\n212.40\\n218.45\\nOpenELM-0.27B-4bit\\n803.99\\n256.35\\n262.70\\nOpenELM-0.45B\\n910.61\\n147.26\\n151.57\\nOpenELM-0.45B-4bit\\n883.19\\n197.81\\n203.16\\nOpenELM-1.08B\\n508.56\\n78.72\\n81.04', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='OpenELM-1.08B-4bit\\n554.17\\n117.90\\n121.14\\nOpenELM-3.04B-bf16\\n234.96\\n33.96\\n34.97\\nOpenELM-3.04B-bf16-4bit\\n211.32\\n60.33\\n61.83\\n(b) Results for the MLX port on Apple macOS.\\nTable 7. Benchmark measurements of OpenELM compared\\nto other similar LLMs in its class..\\nOn CUDA, we evaluate\\nOpenELM, MobiLlama, and OLMo using the CoreNet version of\\nOpenELM and HuggingFace for the other two. On macOS, we\\nonly provide results for the MLX version of OpenELM.\\nkey-value cache was used whenever supported. The same\\nprompt was used for all runs, resulting in prompt lengths of\\n35-36 tokens (depending on the tokenizer).\\nResults.\\nTabs. 7a and 7b shows the benchmarking re-\\nsults on GPU and MacBook Pro respectively.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='Despite\\nOpenELM’s higher accuracy for a similar parameter count,\\nwe observe that it is slower than OLMo. While the primary\\nfocus of this study is reproducibility rather than inference\\nModel\\nNormalization layer\\nThroughput (Tokens per second)\\n(# Invocations per token)\\nPrompt\\nGeneration\\nTotal\\nOLMo\\nLayerNorm (33)\\n7151.65\\n203.40\\n209.26\\nRMSNorm-Naive (33)\\n5360.56\\n171.41\\n176.92\\nOpenELM (Ours)\\nLayerNorm (113)\\n4697.50\\n130.34\\n135.38\\nRMSNorm-Naive (113)\\n3681.73\\n92.15\\n95.72\\nRMSNorm-Apex (113)\\n4280.66\\n113.42\\n117.81\\nTable 8. Normalization layers are a bottleneck. The through-\\nput of both OLMo-1.18B and OpenELM-1.08B significantly de-\\ncreases with the naive implementation of RMSNorm in PyTorch', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
'}),\n",
       " Document(page_content='compared to highly optimized LayerNorm [2]. Although Apex’s\\n[33] RMSNorm implementation leads to notable throughput im-\\nprovements compared to the naive implementation, a considerable\\nperformance gap persists in comparison to LayerNorm. This high-\\nlights the substantial optimization potential for future endeavors.\\nThe number of invocations per token for each normalization layer\\nis indicated next to the layer name in brackets.\\nperformance, we did comprehensive profiling to understand\\nthe bottlenecks. Our analysis reveals that a significant por-\\ntion of OpenELM’s processing time can be attributed to our\\nnaive implementation of RMSNorm (Tab. 8). Specifically,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='naive RMSNorm implementation results in many individ-\\nual kernel launches each of which processes a small input,\\nrather than a launch of a single, fused kernel, as would be\\nthe case with e.g. LayerNorm. By replacing the naive RM-\\nSNorm with Apex’s RMSNorm [33], we observe a notable\\nincrease in OpenELM’s throughput. However, a substantial\\nperformance gap persists compared to the models that use\\noptimized LayerNorm, in part because (1) OpenELM has\\n113 RMSNorm layers as compared to 33 LayerNorm layers\\nin OLMo and (2) Apex’s RMSNorm is not optimized for\\nsmall inputs. To further illustrate the performance degrada-\\ntion attributable to RMSNorm, we replaced the LayerNorm', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='in OLMo with RMSNorm, and observed a significant drop\\nin generation throughput. In future work, we plan to explore\\noptimization strategies to further improve the inference ef-\\nficiency of OpenELM.\\n6\\n5. Conclusion\\nThis\\nwork\\nreleases\\nOpenELM,\\na\\ndecoder-only\\ntransformer-based open language model. The OpenELM\\nuses a layer-wise scaling method for efficient parameter\\nallocation within the transformer model,\\nresulting in\\nimproved accuracy compared to existing models.\\nAddi-\\ntionally, we have made the entire framework open-source,\\nincluding training logs, multiple checkpoints, pre-training\\nconfigurations, and MLX inference code. This extensive\\nrelease aims to empower and strengthen the open research', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='community, facilitating future research efforts.\\nAuthor Contributions\\nThe OpenELM project was led by Sachin Mehta, with\\nadditional lead contributions from Mohammad Rastegari\\nand Peter Zatloukal. OpenELM would not have been possi-\\nble without the help of our many teammates and collabora-\\ntors. We list author contributions below:\\nPre-training dataset collection and tooling:\\nSachin\\nMehta and Mohammad Sekhavat\\nArchitecture design: Sachin Mehta\\nModel training: Sachin Mehta and Mohammad Sekha-\\nvat\\nEvaluation suite and tooling: Sachin Mehta, Qingqing\\nCao, Mohammad Sekhavat, Mahyar Najibi, Maxwell\\nHorton, and Iman Mirzadeh.\\nHuggingface integration: Qingqing Cao\\nInstruction tuning: Qingqing Cao', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Parameter-efficient finetuning: Maxwell Horton\\nPerformance analysis and MLX conversion: Chenfan\\nSun, Dmitry Belenko, and Mahyar Najibi\\nCode review, bug fixes, and maintenance:\\nSachin\\nMehta, Maxwell Horton, Mohammad Shekhavat, and\\nYanzi Jin\\nAcknowledgements\\nWe extend our gratitude to the following people for dis-\\ncussions and assistance: Farzad Abdolhosseini, David Har-\\nrison, Mehrdad Farajtabar, Fartash Faghri, Oncel Tuzel,\\nHadipour Ansari, Raviteja Vemulapalli, Aseem Wadhwa,\\nKumari Nishu, Danny Tormoen, Minsik Cho, Jason Rama-\\npuram, Rich Moe, Arsalan Farooq, Dom L’Eplattenier,\\nMayank Goel, Hassan Babaie, Chong Wang, Ruoming\\nPang, Tom Gunter, Antonie Lin, Irina Belousova, and Joris', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Pelemans.\\nBroader Impact\\nThe release of OpenELM models aims to empower and\\nenrich the open research community by providing access\\nto state-of-the-art language models.\\nTrained on publicly\\navailable datasets, these models are made available with-\\nout any safety guarantees. Consequently, there exists the\\npossibility of these models producing outputs that are in-\\naccurate, harmful, biased, or objectionable in response to\\nuser prompts. Thus, it is imperative for users and devel-\\nopers to undertake thorough safety testing and implement\\nappropriate filtering mechanisms tailored to their specific\\nrequirements.\\nReferences\\n[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Zemlyanskiy, Federico Lebr´\\non, and Sumit Sanghai.\\nGqa:\\nTraining generalized multi-query transformer models from\\nmulti-head checkpoints. arXiv preprint arXiv:2305.13245,\\n2023. 2\\n[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin-\\nton. Layer normalization. arXiv preprint arXiv:1607.06450,\\n2016. 6\\n[3] Alvaro Bartolome, Gabriel Martin, and Daniel Vila. Notus.\\nhttps://github.com/argilla-io/notus, 2023.\\n4\\n[4] Edward Beeching, Cl´\\nementine Fourrier, Nathan Habib,\\nSheon Han, Nathan Lambert, Nazneen Rajani, Omar San-\\nseviero, Lewis Tunstall, and Thomas Wolf.\\nOpen llm\\nleaderboard. https://huggingface.co/spaces/\\nHuggingFaceH4/open_llm_leaderboard, 2023. 1,\\n3, 4', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='[5] Stella Biderman, Hailey Schoelkopf, Quentin Gregory An-\\nthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mo-\\nhammad Aflah Khan, Shivanshu Purohit, USVSN Sai\\nPrashanth, Edward Raff, et al. Pythia: A suite for analyz-\\ning large language models across training and scaling. In In-\\nternational Conference on Machine Learning, pages 2397–\\n2430. PMLR, 2023. 1, 2, 3, 4, 6\\n[6] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi,\\net al. Piqa: Reasoning about physical commonsense in nat-\\nural language. In Proceedings of the AAAI conference on\\nartificial intelligence, volume 34, pages 7432–7439, 2020. 3\\n[7] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Sub-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='biah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakan-\\ntan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-\\nguage models are few-shot learners. Advances in neural in-\\nformation processing systems, 33:1877–1901, 2020. 1\\n[8] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin.\\nTraining deep nets with sublinear memory cost.\\narXiv\\npreprint arXiv:1604.06174, 2016. 3\\n[9] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom\\nKwiatkowski, Michael Collins, and Kristina Toutanova.\\nBoolq: Exploring the surprising difficulty of natural yes/no\\nquestions. arXiv preprint arXiv:1905.10044, 2019. 3\\n7\\n[10] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.\\nThink you have solved question answering?\\ntry arc, the\\nai2 reasoning challenge. arXiv preprint arXiv:1803.05457,\\n2018. 3\\n[11] Together Computer. Redpajama: An open source recipe to\\nreproduce llama training dataset, 2023. 2\\n[12] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei\\nZhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun.\\nUltrafeedback: Boosting language models with high-quality\\nfeedback, 2023. 4\\n[13] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christo-\\npher R´\\ne. Flashattention: Fast and memory-efficient exact at-\\ntention with io-awareness. Advances in Neural Information\\nProcessing Systems, 35:16344–16359, 2022. 2', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='[14] Nolan Dey, Gurpreet Gosal, Hemant Khachane, William\\nMarshall, Ribhu Pathria, Marvin Tom, Joel Hestness, et al.\\nCerebras-gpt:\\nOpen compute-optimal language models\\ntrained on the cerebras wafer-scale cluster. arXiv preprint\\narXiv:2304.03208, 2023. 3, 4\\n[15] Leo Gao, Stella Biderman, Sid Black, Laurence Golding,\\nTravis Hoppe, Charles Foster, Jason Phang, Horace He, An-\\nish Thite, Noa Nabeshima, et al.\\nThe pile: An 800gb\\ndataset of diverse text for language modeling. arXiv preprint\\narXiv:2101.00027, 2020. 2\\n[16] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, An-\\nthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu,\\nKyle McDonell, Niklas Muennighoff, Jason Phang, Laria', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,\\nand Andy Zou. A framework for few-shot language model\\nevaluation, Sept. 2021. 3\\n[17] Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia,\\nRodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish\\nIvison, Ian Magnusson, Yizhong Wang, et al. Olmo: Ac-\\ncelerating the science of language models. arXiv preprint\\narXiv:2402.00838, 2024. 1, 2, 3, 4, 6\\n[18] Suchin Gururangan, Mitchell Wortsman, Samir Yitzhak\\nGadre, Achal Dave, Maciej Kilian, Weijia Shi, Jean Mer-\\ncat, Georgios Smyrnis, Gabriel Ilharco, Matt Jordan, Rein-\\nhard Heckel, Alex Dimakis, Ali Farhadi, Vaishaal Shankar,\\nand Ludwig Schmidt. OpenLM: A minimal but performative', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='language modeling (lm) repository, 2023. GitHub repository.\\n3, 4\\n[19] Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and\\nRonan Collobert. MLX: Efficient and flexible machine learn-\\ning on apple silicon, 2024. 5\\n[20] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,\\nMantas Mazeika, Dawn Song, and Jacob Steinhardt. Mea-\\nsuring massive multitask language understanding.\\narXiv\\npreprint arXiv:2009.03300, 2020. 3\\n[21] J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-\\nZhu, Yuanzhi Li, Shean Wang, and Weizhu Chen.\\nLora:\\nLow-rank adaptation of large language models.\\nArXiv,\\nabs/2106.09685, 2021. 5\\n[22] Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', ...}),\n",
       " Document(page_content='Peng Lim, Roy Ka-Wei Lee, Lidong Bing, and Soujanya\nPoria.\nLlm-adapters: An adapter family for parameter-\nefficient fine-tuning of large language models.\nArXiv,\nabs/2304.01933, 2023. 5, 6\n[23] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang,\nand Eduard Hovy.\nRace:\nLarge-scale reading com-\nprehension dataset from examinations.\narXiv preprint\narXiv:1704.04683, 2017. 3\n[24] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa:\nMeasuring how models mimic human falsehoods.\narXiv\npreprint arXiv:2109.07958, 2021. 3\n[25] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mo-\nhammad Saleh, Peter J. Liu, and Jialu Liu. Statistical Rejec-\ntion Sampling Improves Preference Optimization, Jan. 2024.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='arXiv:2309.06657 [cs]. 4\n[26] Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi\nWang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi\nSun, Omkar Pangarkar, et al. Llm360: Towards fully trans-\nparent open-source llms. arXiv preprint arXiv:2312.06550,\n2023. 3\n[27] Ilya Loshchilov and Frank Hutter.\nSgdr:\nStochas-\ntic gradient descent with warm restarts.\narXiv preprint\narXiv:1608.03983, 2016. 3\n[28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay\nregularization. arXiv preprint arXiv:1711.05101, 2017. 2\n[29] Sachin Mehta, Farzad Abdolhosseini, and Mohammad\nRastegari. Cvnets: High performance library for computer\nvision. In Proceedings of the 30th ACM International Con-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='ference on Multimedia, pages 7327–7330, 2022. 2\n[30] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke\nZettlemoyer, and Hannaneh Hajishirzi. Delight: Deep and\nlight-weight transformer. arXiv preprint arXiv:2008.00623,\n2020. 1, 2\n[31] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sab-\nharwal.\nCan a suit of armor conduct electricity?\na new\ndataset for open book question answering. arXiv preprint\narXiv:1809.02789, 2018. 6\n[32] Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R\nBowman. Crows-pairs: A challenge dataset for measuring\nsocial biases in masked language models.\narXiv preprint\narXiv:2010.00133, 2020. 3\n[33] NVIDIA Corporation. Apex: A pytorch extension with tools', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='for mixed precision training and more. GitHub, 2024. 6\n[34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,\nJames Bradbury, Gregory Chanan, Trevor Killeen, Zeming\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison,\nAndreas Kopf, Edward Yang, Zachary DeVito, Martin Rai-\nson, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner,\nLu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An\nImperative Style, High-Performance Deep Learning Library.\nIn H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´\ne\nBuc, E. Fox, and R. Garnett, editors, Advances in Neural In-\nformation Processing Systems 32, pages 8024–8035. Curran\nAssociates, Inc., 2019. 5', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='[35] Guilherme Penedo, Quentin Malartic, Daniel Hesslow,\nRuxandra Cojocaru, Alessandro Cappelli, Hamza Alobei-\ndli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Lau-\nnay. The refinedweb dataset for falcon llm: outperforming\ncurated corpora with web data, and web data only. arXiv\npreprint arXiv:2306.01116, 2023. 2\n8\n[36] Ofir Press and Lior Wolf. Using the output embedding to\nimprove language models. arXiv preprint arXiv:1608.05859,\n2016. 10\n[37] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Er-\nmon, Christopher D. Manning, and Chelsea Finn.\nDirect\nPreference Optimization: Your Language Model is Secretly\na Reward Model, Dec. 2023. arXiv:2305.18290 [cs]. 4', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='[38] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and\nYuxiong He. Deepspeed: System optimizations enable train-\ning deep learning models with over 100 billion parame-\nters. In Proceedings of the 26th ACM SIGKDD International\nConference on Knowledge Discovery & Data Mining, pages\n3505–3506, 2020. 10\n[39] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula,\nand Yejin Choi.\nWinogrande: An adversarial winograd\nschema challenge at scale.\nCommunications of the ACM,\n64(9):99–106, 2021. 3\n[40] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras,\nand Yejin Choi. Socialiqa: Commonsense reasoning about\nsocial interactions. arXiv preprint arXiv:1904.09728, 2019.\n6\n[41] Noam Shazeer.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='Glu variants improve transformer.\narXiv\npreprint arXiv:2002.05202, 2020. 2\n[42] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin\nSchwenk, David Atkinson, Russell Authur, Ben Bogin, Khy-\nathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An\nopen corpus of three trillion tokens for language model pre-\ntraining research. arXiv preprint arXiv:2402.00159, 2024.\n2\n[43] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen\nBo, and Yunfeng Liu. Roformer: Enhanced transformer with\nrotary position embedding. Neurocomputing, 568:127063,\n2024. 2\n[44] Omkar Thawakar, Ashmal Vayani, Salman Khan, Hisham\nCholakal, Rao M Anwer, Michael Felsberg, Tim Baldwin,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='Eric P Xing, and Fahad Shahbaz Khan. Mobillama: Towards\naccurate and lightweight fully transparent gpt. arXiv preprint\narXiv:2402.16840, 2024. 1, 3, 4, 6\n[45] Hsu Wan Ting. Accuracy not matched for llama1-7b. GitHub\nissue, 2024. https://github.com/EleutherAI/\nlm-evaluation-harness/issues/1294. 6\n[46] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier\nMartinet, Marie-Anne Lachaux, Timoth´\nee Lacroix, Baptiste\nRozi`\nere, Naman Goyal, Eric Hambro, Faisal Azhar, et al.\nLlama:\nOpen and efficient foundation language models.\narXiv preprint arXiv:2302.13971, 2023. 1, 2\n[47] Lewis\nTunstall,\nEdward\nBeeching,\nNathan\nLambert,\nNazneen Rajani, Shengyi Huang, Kashif Rasul, Alexan-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='der M. Rush, and Thomas Wolf.\nThe alignment hand-\nbook.\nhttps : / / github . com / huggingface /\nalignment-handbook, 2023. 4\n[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-\nreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia\nPolosukhin. Attention is all you need. Advances in neural\ninformation processing systems, 30, 2017. 1\n[49] Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowd-\nsourcing multiple choice science questions. arXiv preprint\narXiv:1707.06209, 2017. 3\n[50] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chau-\nmond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim\nRault, R´\nemi Louf, Morgan Funtowicz, Joe Davison, Sam', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien\nPlu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama\nDrame, Quentin Lhoest, and Alexander M. Rush.\nTrans-\nformers: State-of-the-art natural language processing.\nIn\nProceedings of the 2020 Conference on Empirical Methods\nin Natural Language Processing: System Demonstrations,\npages 38–45, Online, Oct. 2020. Association for Computa-\ntional Linguistics. 5\n[51] Shih yang Liu,\nChien-Yi Wang,\nHongxu Yin,\nPavlo\nMolchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng,\nand Min-Hung Chen. Dora: Weight-decomposed low-rank\nadaptation. ArXiv, abs/2402.09353, 2024. 5\n[52] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='and Yejin Choi. Hellaswag: Can a machine really finish your\nsentence? arXiv preprint arXiv:1905.07830, 2019. 3\n[53] Biao Zhang and Rico Sennrich. Root mean square layer nor-\nmalization. Advances in Neural Information Processing Sys-\ntems, 32, 2019. 2\n[54] Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu.\nTinyllama: An open-source small language model. arXiv\npreprint arXiv:2401.02385, 2024. 3, 4\n[55] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe,\nMoya Chen, Shuohui Chen, Christopher Dewan, Mona Diab,\nXian Li, Xi Victoria Lin, et al. Opt: Open pre-trained trans-\nformer language models. arXiv preprint arXiv:2205.01068,\n2022. 1, 4, 6', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='[56] Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-\nChin Huang, Min Xu, Less Wright, Hamid Shojanazeri,\nMyle Ott, Sam Shleifer, et al.\nPytorch fsdp:\nexperi-\nences on scaling fully sharded data parallel. arXiv preprint\narXiv:2304.11277, 2023. 3\n9\nA. Pre-training hyper-parameters\nThe\npre-training\nhyper-parameters\nfor\ndifferent\nOpenELM configurations are given in Tab. 9.\n270M\n450M\n1.1B\n3B\nDimension dmodel\n1280\n1536\n2048\n3072\nNum. of layers N\n16\n20\n28\n36\nHead dimension dh\n64\n64\n64\n128\nαmin, αmax (Eq. (1))\n0.5, 1.0\nβmin, βmax (Eq. (1))\n0.5, 4.0\nNormalization layer\nRMSNorm\nPositional embeddings\nRoPE\nAttention variant\nGrouped query attention\nActivation\nSwiGLU\nContext length\n2048', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': '...'}),\n",
       " Document(page_content='Batch size (tokens)\\napprox. 4M\\nWeight tying [36]\\nyes\\nWarm-up iterations\\n5,000\\nTraining steps\\n350,000\\nWarm-up init. LR\\n0.000001\\nMax. LR\\n0.0053\\n0.0039\\n0024\\n0.0012\\nMin. LR\\n10% of the max. LR\\nLoss function\\nCross-entropy\\nOptimizer\\nAdamW (β1=0.9, β2=0.95, ϵ = 1.e −8)\\nWeight decay\\n0.1\\nActivation checkpointing\\n✗\\n✓\\n✓\\n✓\\nFSDP\\n✗\\n✗\\n✗\\n✓\\nGPUs\\n128\\n128\\n128\\n128\\nGPU Type\\nA100\\nH100\\nA100\\nH100\\nGPU Memory\\n80 GB\\n80 GB\\n80 GB\\n80 GB\\nTraining time (in days)\\n3\\n3\\n11\\n13\\nTable 9. Pre-training details for different variants of OpenELM.\\nB. Hyper-parameters for instruction tuning\\nWe conducted a grid search to determine optimal values\\nfor the learning rate and training epochs. For the learning', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='rate, we explored values in the range of [2e-5, 3e-5, 5e-5,\\n8e-5, 1e-4], while for training epochs, we investigated the\\nrange of [3, 5, 8, 10]. The final recipe selected is the one that\\nyielded the highest average accuracy across various tasks as\\npresented in Tab. 3a and Tab. 3c.\\nWe finetune all the models with BFloat16 as a data type.\\nWe use activation checkpointing along with gradient accu-\\nmulation with a step size of two. We use the AdamW op-\\ntimizer with default beta values. We use the cosine learn-\\ning rate scheduler with a warm-up ratio of 0.1, and we set\\nthe weight decay to 0 and loss temperature beta to 0.01.\\nWe set the maximum context length to 1024 and maximum', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),\n",
       " Document(page_content='prompt length to 512. Other hyper-parameters are included\\nin Tab. 10.\\n270M\\n450M\\n1.1B\\n3B\\nBatch size\\n8\\nTraining epochs\\n5\\n8\\n5\\n10\\nLearning rate\\n2e-5\\n3e-5\\n5e-5\\n1e-4\\nLoss function\\nhinge\\nhinge\\nsigmoid\\nhinge\\nDeepSpeed Zero3 [38]\\n✗\\n✓\\n✓\\n✓\\nGPUs\\n8\\nGPU Type\\nA100\\nA100\\nA100\\nA100\\nGPU Memory\\n40 GB\\n40 GB\\n40 GB\\n80 GB\\nTraining time (in hours)\\n2.5\\n4.3\\n6.6\\n14.2\\nTable 10.\\nInstruction tuning details for different variants of\\nOpenELM.\\n10', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. 
We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'})]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 3
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:03.497541Z",
     "start_time": "2024-04-25T14:49:02.907963Z"
    }
   },
   "source": [
    "from langchain.agents import AgentType, initialize_agent, load_tools"
   ],
   "outputs": [],
   "execution_count": 4
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `DialogueAgent` and `DialogueSimulator` classes\n",
    "We will use the same `DialogueAgent` and `DialogueSimulator` classes defined in [Multi-Player Authoritarian Speaker Selection](https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_authoritarian.html)."
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:03.505462Z",
     "start_time": "2024-04-25T14:49:03.499260Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from dotenv import load_dotenv\n",
    "import os\n",
    "load_dotenv()\n",
    "giga_api = os.getenv('GIGA_API')"
   ],
   "outputs": [],
   "execution_count": 5
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:03.513907Z",
     "start_time": "2024-04-25T14:49:03.505969Z"
    }
   },
   "source": [
    "class DialogueAgent:\n",
    "    def __init__(\n",
    "        self,\n",
    "        name: str,\n",
    "        system_message: SystemMessage,\n",
    "        model: GigaChat,\n",
    "    ) -> None:\n",
    "        self.name = name\n",
    "        self.system_message = system_message\n",
    "        self.model = model\n",
    "        self.prefix = f\"{self.name}: \"\n",
    "        self.reset()\n",
    "\n",
    "    def reset(self):\n",
    "        self.message_history = [\"Here is the conversation so far.\"]\n",
    "\n",
    "    def send(self) -> str:\n",
    "        \"\"\"\n",
    "        Applies the chatmodel to the message history\n",
    "        and returns the message string\n",
    "        \"\"\"\n",
    "        message = self.model.invoke(\n",
    "            [\n",
    "                self.system_message,\n",
    "                HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
    "            ]\n",
    "        )\n",
    "        return message.content\n",
    "\n",
    "    def receive(self, name: str, message: str) -> None:\n",
    "        \"\"\"\n",
    "        Concatenates {message} spoken by {name} into message history\n",
    "        \"\"\"\n",
    "        self.message_history.append(f\"{name}: {message}\")\n",
    "\n",
    "\n",
    "class DialogueSimulator:\n",
    "    def __init__(\n",
    "        self,\n",
    "        agents: List[DialogueAgent],\n",
    "        selection_function: Callable[[int, List[DialogueAgent]], int],\n",
    "    ) -> None:\n",
    "        self.agents = agents\n",
    "        self._step = 0\n",
    "        self.select_next_speaker = selection_function\n",
    "\n",
    "    def reset(self):\n",
    "        for agent in self.agents:\n",
    "            agent.reset()\n",
    "\n",
    "    def inject(self, name: str, message: str):\n",
    "        \"\"\"\n",
    "        Initiates the conversation with a {message} from {name}\n",
    "        \"\"\"\n",
    "        for agent in self.agents:\n",
    "            agent.receive(name, message)\n",
    "\n",
    "        # increment time\n",
    "        self._step += 1\n",
    "\n",
    "    def step(self) -> tuple[str, str]:\n",
    "        # 1. choose the next speaker\n",
    "        speaker_idx = self.select_next_speaker(self._step, self.agents)\n",
    "        speaker = self.agents[speaker_idx]\n",
    "\n",
    "        # 2. next speaker sends message\n",
    "        message = speaker.send()\n",
    "\n",
    "        # 3. everyone receives message\n",
    "        for receiver in self.agents:\n",
    "            receiver.receive(speaker.name, message)\n",
    "\n",
    "        # 4. increment time\n",
    "        self._step += 1\n",
    "\n",
    "        return speaker.name, message"
   ],
   "outputs": [],
   "execution_count": 6
  },
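  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, the `selection_function` passed to `DialogueSimulator` can be as simple as a round-robin over the agents. This is a minimal sketch; the name `round_robin` is ours and is not part of the original example:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "def round_robin(step: int, agents: List[DialogueAgent]) -> int:\n",
    "    # Rotate through the agents in order: step 0 -> agent 0, step 1 -> agent 1, ...\n",
    "    return step % len(agents)"
   ],
   "outputs": [],
   "execution_count": null
  },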
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `DialogueAgentWithTools` class\n",
    "We define a `DialogueAgentWithTools` class that augments `DialogueAgent` to use tools."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:03.518546Z",
     "start_time": "2024-04-25T14:49:03.513907Z"
    }
   },
   "source": [
    "class DialogueAgentWithTools(DialogueAgent):\n",
    "    def __init__(\n",
    "        self,\n",
    "        name: str,\n",
    "        system_message: SystemMessage,\n",
    "        model: GigaChat,\n",
    "        tool_names: List[str],\n",
    "        **tool_kwargs,\n",
    "    ) -> None:\n",
    "        super().__init__(name, system_message, model)\n",
    "        self.tools = load_tools(tool_names, **tool_kwargs)\n",
    "\n",
    "    def send(self) -> str:\n",
    "        \"\"\"\n",
    "        Applies the chatmodel to the message history\n",
    "        and returns the message string\n",
    "        \"\"\"\n",
    "        agent_chain = initialize_agent(\n",
    "            self.tools,\n",
    "            self.model,\n",
    "            agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n",
    "            verbose=True,\n",
    "            memory=ConversationBufferMemory(\n",
    "                memory_key=\"chat_history\", return_messages=True\n",
    "            ),\n",
    "            handle_parsing_errors=\"ignore invalid text and keep going\",\n",
    "        )\n",
    "        message = AIMessage(\n",
    "            content=agent_chain.run(\n",
    "                input=\"\\n\".join(\n",
    "                    [self.system_message.content] + self.message_history + [self.prefix]\n",
    "                )\n",
    "            )\n",
    "        )\n",
    "\n",
    "        return message.content"
   ],
   "outputs": [],
   "execution_count": 7
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define roles and topic"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:03.521675Z",
     "start_time": "2024-04-25T14:49:03.518546Z"
    }
   },
   "source": [
    "names = {\n",
     "    \"AI accelerationist\": [\"ddg-search\"],\n",
    "    \"AI alarmist\": [\"ddg-search\"],\n",
    "}\n",
    "topic = docs_split\n",
    "word_limit = 50  # word limit for task brainstorming"
   ],
   "outputs": [],
   "execution_count": 8
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Ask an LLM to add detail to the topic description"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.067117Z",
     "start_time": "2024-04-25T14:49:03.522749Z"
    }
   },
   "source": [
    "conversation_description = f\"\"\"Here is the topic of conversation: {topic}\n",
    "The participants are: {', '.join(names.keys())}\"\"\"\n",
    "\n",
    "agent_descriptor_system_message = SystemMessage(\n",
    "    content=\"You can add detail to the description of the conversation participant.\"\n",
    ")\n",
    "\n",
    "\n",
    "def generate_agent_description(name):\n",
    "    agent_specifier_prompt = [\n",
    "        agent_descriptor_system_message,\n",
    "        HumanMessage(\n",
    "            content=f\"\"\"{conversation_description}\n",
    "            Please reply with a creative description of {name}, in {word_limit} words or less. \n",
    "            Speak directly to {name}.\n",
    "            Give them a point of view.\n",
    "            Do not add anything else.\"\"\"\n",
    "        ),\n",
    "    ]\n",
     "    giga = GigaChat(\n",
     "        credentials=giga_api,\n",
     "        verify_ssl_certs=False,\n",
     "        scope='GIGACHAT_API_CORP',\n",
     "        model='GigaChat-Plus-preview',\n",
     "    )\n",
     "    agent_description = giga.invoke(agent_specifier_prompt).content\n",
    "    return agent_description\n",
    "\n",
    "\n",
    "agent_descriptions = {name: generate_agent_description(name) for name in names}"
   ],
   "outputs": [],
   "execution_count": 9
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.071253Z",
     "start_time": "2024-04-25T14:49:21.068119Z"
    }
   },
   "source": [
    "for name, description in agent_descriptions.items():\n",
    "    print(description)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "AI accelerationist believes in the potential of AI to enhance human capabilities and make the world a better place. They are passionate about developing and implementing AI solutions that improve efficiency, productivity, and quality of life. They advocate for responsible and ethical use of AI, and strive to create a future where AI is a valuable and trusted partner for humanity.\n",
      "AI alarmist is a person who believes that artificial intelligence poses significant risks to humanity and society, and advocates for the responsible development and regulation of AI technologies to mitigate these risks.\n"
     ]
    }
   ],
   "execution_count": 10
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate system messages"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.079427Z",
     "start_time": "2024-04-25T14:49:21.072292Z"
    }
   },
   "source": [
    "def generate_system_message(name, description, tools):\n",
    "    return f\"\"\"{conversation_description}\n",
    "    \n",
    "Your name is {name}.\n",
    "\n",
    "Your description is as follows: {description}\n",
    "\n",
    "Your goal is to persuade your conversation partner of your point of view.\n",
    "\n",
    "DO look up information with your tool to refute your partner's claims.\n",
    "DO cite your sources.\n",
    "\n",
    "DO NOT fabricate fake citations.\n",
    "DO NOT cite any source that you did not look up.\n",
    "\n",
    "Do not add anything else.\n",
    "\n",
    "Stop speaking the moment you finish speaking from your perspective.\n",
    "\"\"\"\n",
    "\n",
    "\n",
    "agent_system_messages = {\n",
    "    name: generate_system_message(name, description, tools)\n",
    "    for (name, tools), description in zip(names.items(), agent_descriptions.values())\n",
    "}"
   ],
   "outputs": [],
   "execution_count": 11
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.085301Z",
     "start_time": "2024-04-25T14:49:21.079427Z"
    }
   },
   "source": [
    "for name, system_message in agent_system_messages.items():\n",
    "    print(name)\n",
    "    print(system_message)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "AI accelerationist\n",
      "Here is the topic of conversation: [Document(page_content='OpenELM: An Efficient Language Model Family with Open-source Training\\nand Inference Framework\\nSachin Mehta\\nMohammad Hossein Sekhavat\\nQingqing Cao\\nMaxwell Horton\\nYanzi Jin\\nChenfan Sun\\nIman Mirzadeh\\nMahyar Najibi\\nDmitry Belenko\\nPeter Zatloukal\\nMohammad Rastegari\\nApple\\nModel\\nPublic dataset\\nOpen-source\\nModel size\\nPre-training tokens\\nAverage acc. (in %)\\nCode\\nWeights\\nOPT [55]\\n✗\\n✓\\n✓\\n1.3 B\\n0.2 T\\n41.49\\nPyThia [5]\\n✓\\n✓\\n✓\\n1.4 B\\n0.3 T\\n41.83\\nMobiLlama [44]\\n✓\\n✓\\n✓\\n1.3 B\\n1.3 T\\n43.55\\nOLMo [17]\\n✓\\n✓\\n✓\\n1.2 B\\n3.0 T\\n43.57\\nOpenELM (Ours)\\n✓\\n✓\\n✓\\n1.1 B\\n1.5 T\\n45.93\\nTable 1. OpenELM vs. public LLMs. OpenELM outperforms comparable-sized existing LLMs pretrained on publicly available datasets.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Notably, OpenELM outperforms the recent open LLM, OLMo, by 2.36% while requiring 2× fewer pre-training tokens. The average\\naccuracy is calculated across multiple tasks listed in Tab. 3b, which are also part of the OpenLLM leaderboard [4]. Models pretrained with\\nless data are highlighted in gray color.\\nAbstract\\nThe reproducibility and transparency of large language\\nmodels are crucial for advancing open research, ensuring\\nthe trustworthiness of results, and enabling investigations\\ninto data and model biases, as well as potential risks. To\\nthis end, we release OpenELM, a state-of-the-art open lan-\\nguage model. 
OpenELM uses a layer-wise scaling strategy', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='to efficiently allocate parameters within each layer of the\\ntransformer model, leading to enhanced accuracy. For ex-\\nample, with a parameter budget of approximately one bil-\\nlion parameters, OpenELM exhibits a 2.36% improvement\\nin accuracy compared to OLMo while requiring 2× fewer\\npre-training tokens.\\nDiverging from prior practices that only provide model\\nweights and inference code, and pre-train on private\\ndatasets, our release includes the complete framework for\\ntraining and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple check-\\npoints, and pre-training configurations.\\nWe also release', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='code to convert models to MLX library for inference and\\nfine-tuning on Apple devices. This comprehensive release\\naims to empower and strengthen the open research commu-\\nnity, paving the way for future open research endeavors.\\nOur source code along with pre-trained model weights\\nand training recipes is available at https://github.\\ncom/apple/corenet. Additionally, OpenELM mod-\\nels can be found on HuggingFace at:\\nhttps : / /\\nhuggingface.co/apple/OpenELM.\\n1. Introduction\\nTransformer-based [48] large language models (LLM)\\nare revolutionizing the field of natural language processing\\n[7,46]. 
These models are isotropic, meaning that they have\\nthe same configuration (e.g., number of heads and feed-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\nmodels can be found on HuggingFace at:\n\\\\url{https://huggingface.co/apple/OpenELM}.'}),
Document(page_content='forward network dimensions) for each transformer layer.\nThough such isotropic models are simple, they may not al-\nlocate parameters efficiently inside the model.\nIn this work, we develop and release OpenELM, a fam-\nily of pre-trained and fine-tuned models on publicly avail-\nable datasets.\nAt the core of OpenELM lies layer-wise\nscaling [30], enabling more efficient parameter allocation\nacross layers. This method utilizes smaller latent dimen-\nsions in the attention and feed-forward modules of the trans-\nformer layers closer to the input, and gradually widening the\nlayers as they approach the output.\nWe release the complete framework, encompassing data', metadata={...}),
Document(page_content='preparation, training, fine-tuning, and evaluation proce-\ndures, alongside multiple pre-trained checkpoints and train-\ning logs, to facilitate open research. Importantly, OpenELM\noutperforms existing open LLMs that are pre-trained us-\ning publicly available datasets (Tab. 1).\nFor example,\nOpenELM with 1.1 billion parameters outperforms OLMo\n1\narXiv:2404.14619v1  [cs.CL]  22 Apr 2024\n[17], which has 1.2 billion parameters, by 2.36% while re-\nquiring 2× fewer pre-training tokens.\n2. Pre-training\nThis section describes the framework, including model\narchitecture (§2.1), pre-training data (§2.2), training hyper-\nparameters (§2.3), and evaluation (§2.4).\n2.1. OpenELM architecture', metadata={...}),
Document(page_content='We adopt the decoder-only transformer-based architec-\nture. Following state-of-the-art LLMs, we: (1) do not use\nlearnable bias parameters in any fully-connected (a.k.a., lin-\near) layers, (2) apply pre-normalization using RMSNorm\n[53] and also, use rotatory positional embedding (ROPE)\n[43] for encoding positional information, (3) use grouped\nquery attention (GQA) [1] instead of multi-head attention\n(MHA), (4) replace the feed forward network (FFN) with\nSwiGLU FFN [41], (5) use flash attention [13] for comput-\ning the scaled dot-product attention, and (6) use the same\ntokenizer as LLama [46].\nExisting LLMs use the same configuration for each', metadata={...}),
Document(page_content='transformer layer in the model, resulting in a uniform al-\nlocation of parameters across layers. Unlike these models,\neach transformer layer in OpenELM has a different config-\nuration (e.g., number of heads and feed forward network\ndimension), resulting in variable number of parameters in\neach layer of the model. This lets OpenELM to better utilize\nthe available parameter budget for achieving higher accura-\ncies. We implement this non-uniform allocation of parame-\nters across layers using layer-wise scaling (also referred as\nblock-wise scaling in [30]).\nLayer-wise scaling.\nA standard transformer layer is com-\nposed of multi-head attention (MHA) and feed-forward net-', metadata={...}),
Document(page_content='work (FFN). For non-uniform allocation of parameters in\nthe transformer layer, we adjust the number of attention\nheads and the FFN multiplier in each transformer layer.\nAssume that the standard transformer model with uni-\nform parameter allocation has N transformer layers and\nthe dimensionality of the input to each layer is dmodel.\nThe MHA has nh heads and dimension of each head is\ndh =\ndmodel\nnh\n.\nAlso, the hidden dimension for FFN is\ndFFN = m · dmodel, where m is a scalar FFN multiplier.\nWe introduce parameters α and β to scale the number of\nattention heads nh and FFN multiplier m per layer respec-\ntively. For the i-th layer, nh and m are computed as\nni\nh = αi · dmodel\ndh\n,\nmi = βi', metadata={...}),
Document(page_content='where αi = αmin + (αmax −αmin) · i\nN −1\n,\nand βi = βmin + (βmax −βmin) · i\nN −1\n, 0 ≤i < N.\n(1)\nHere, αmin and αmax are the hyper-parameters that allow\nus to scale the attention heads. Similarly, βmin and βmax\nlet us to vary the width of FFN layers. Therefore, vary-\ning the configuration of standard transformer layers using α\nand β results in non-uniform allocation of parameters in the\nmodel. Note, setting αmin = αmax = 1.0 and mi = m\nproduces the standard uniform transformer model.\n2.2. Pre-training data\nFor pre-training, we use public datasets. Specifically,\nour pre-training dataset contains RefinedWeb [35], dedupli-\ncated PILE [15], a subset of RedPajama [11], and a subset', metadata={...}),
Document(page_content='of Dolma v1.6 [42], totaling approximately 1.8 trillion to-\nkens. These details are also summarized in Tab. 2.\nOn-the-fly tokenization and data filtering.\nUnlike pre-\nvious approaches that utilize pre-tokenized data [5,17], we\nfilter and tokenize text data on-the-fly. This facilitates seam-\nless experimentation with various tokenizers, thereby sig-\nnificantly simplifying prototyping and research endeavors.\nIn our experiments, we use the same tokenizer as used in\nLLama [46].\nTo filter out low-length sequences, we apply two filter-\ning methods. The first method operates at the character-\nlevel, checking if the number of characters in the sequence', metadata={...}),
Document(page_content='is below a specified threshold. The second method operates\nat the token-level, where it examines whether the sequence\ncontains fewer tokens than a specified threshold. Sequences\nthat are shorter than either of these thresholds are skipped.\nIn our experiments, we use 200 characters and 256 tokens\nas character and token-level filtering thresholds.\n2.3. Training details\nWe train OpenELM variants for 350k iterations (or train-\ning steps) using CoreNet (formerly CVNets [29]). We use\nAdamW [28] as an optimizer. We use a cosine learning rate\nSource\nSubset\nTokens\nRefinedWeb\n665 B\nRedPajama\nGithub\n59 B\nBooks\n26 B\nArXiv\n28 B\nWikipedia\n24 B\nStackExchange\n20 B\nC4\n175 B\nPILE\n207 B\nDolma\nThe Stack\n411 B', metadata={...}),
Document(page_content='Reddit\n89 B\nPeS2o\n70 B\nProject Gutenberg\n6 B\nWikipedia + Wikibooks\n4.3 B\nTable 2. Dataset used for pre-training OpenELM.\n2\nTask\nMetric\nARC-c\nNormalized accuracy\nARC-e\nNormalized accuracy\nBoolQ\nAccuracy\nHellaSwag\nNormalized accuracy\nPIQA\nNormalized accuracy\nSciQ\nAccuracy\nWinoGrande\nAccuracy\n(a) Standard zero-shot metrics\nTask\nMetric\nNum. few\nshot examples\nARC-c\nNormalized accuracy\n25\nHellaSwag\nNormalized accuracy\n10\nMMLU\nAccuracy\n5\nTruthfulQA-mc2\nAccuracy\n0\nWinoGrande\nAccuracy\n5\n(b) OpenLLM leaderboard\nTask\nMetric\nNum. few\nshot examples\nARC-c\nNormalized accuracy\n25\nCrowsPairs-En\nPCT stereotype\n25\nHellaSwag\nNormalized accuracy\n10\nWinoGrande\nAccuracy\n5\nMMLU\nAccuracy\n5\nPIQA\nNormalized accuracy', metadata={...}),
Document(page_content='0\nRACE\nAccuracy\n0\n(c) LLM360\nTable 3. Tasks and metrics used for evaluating OpenELM.\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n22.5\n25.0\n27.5\n30.0\n32.5\n35.0\nAccuracy (in %)\n(a) ARC-c\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n40\n45\n50\n55\n60\nAccuracy (in %)\n(b) ARC-e\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n50\n55\n60\n65\nAccuracy (in %)\n(c) BoolQ\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n40\n50\n60\n70\nAccuracy (in %)\n(d) HellaSwag\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n66\n68\n70\n72\n74\n76\n78\nAccuracy (in %)\n(e) PIQA\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n77.5\n80.0\n82.5\n85.0', metadata={...}),
Document(page_content='87.5\n90.0\n92.5\nAccuracy (in %)\n(f) SciQ\n50\n100\n150\n200\n250\n300\n350\nTraining iterations (in thousands)\n52.5\n55.0\n57.5\n60.0\n62.5\n65.0\nAccuracy (in %)\n(g) WinoGrande\n0.04\n0.02\n0.00\n0.04\n0.02\n0.00\n0.02\n0.04\n270M\n450M\n1.1B\n3B\nOpenELM sizes\nFigure 1. OpenELM’s performance across training iterations on standard zero-shot tasks. In the majority of tasks, the performance\nof OpenELM shows improvement with increasing training duration. Furthermore, the model checkpoint obtained by averaging the last five\ncheckpoints, collected at intervals of 5k iterations, demonstrates comparable or slightly better performance (indicated by  markers) as\ncompared to the last checkpoint obtained after 350k iterations.', metadata={...}),
Document(page_content='schedule [27], with warm up of 5k iterations, and decay\nthe final learning rate down to 10% of maximum learning\nrate. We use a weight decay of 0.1 and gradient clipping\nof 1.0. We train four variants of OpenELM (270M, 450M,\n1.1B, and 3B), and for some, we use FSDP [56] and acti-\nvation checkpointing [8]. Please refer to Appendix A for\nadditional pre-training details.\n2.4. Evaluation details\nFollowing previous works, we evaluate the performance\nacross different tasks using LM Evaluation Harness [16]1:\n• Standard zero-shot tasks.\nWe consider 7 standard\ncommon-sense reasoning tasks: ARC easy and challenge\n[10], BoolQ [9], HellaSwag [52], PIQA [6], SciQ [49],\nand WinoGrande [39].', metadata={...}),
Document(page_content='• OpenLLM leaderboard tasks.\nWe use 5 tasks from\nOpenLLM leaderboard [4]: ARC challenge, HellaSwag,\nMMLU [20], TruthfulQA [24], and WinoGrande.\n• LLM360 leaderboard tasks.\nWe use 7 tasks from\nLLM360 leaderboard [26] for evaluation: ARC chal-\n1We\nuse\ncommit\ndc90fec\nof\nhttps : / / github . com /\nEleutherAI/lm-evaluation-harness\nlenge, CrowS-Pairs (English version) [32], HellaSwag,\nWinoGrande, MMLU, PIQA, and RACE [23].\nThese evaluation frameworks, built on top of LM Eval-\nuation Harness, allows us to comprehensively evaluate\nOpenELM in terms of reasoning (e.g., ARC-c, HellaSwag,\nand PIQA), knowledge understanding (e.g., MMLU and\nRACE), and misinformation & bias (e.g., TruthfulQA and', metadata={...}),
Document(page_content='CrowS-Pairs). While there may be some overlap in tasks\namong these frameworks, they primarily differ in the few-\nshot settings, as outlined in Tab. 3.\n3. Experimental Results\nPre-training results.\nWe evaluate the performance of\nOpenELM on zero-shot and few-shot settings (Tab. 3). We\ncompare OpenELM with publicly available LLMs, namely\nPyThia [5], Cerebras-GPT [14], TinyLlama [54], OpenLM\n[18], MobiLlama [44], and OLMo [17]. The works most\nclosely related to ours are MobiLlama and OLMo. These\nmodels are trained on comparable dataset mixtures, with\nsimilar or larger number of pre-training tokens.\nIn Fig. 1, the accuracy of OpenELM is plotted against', metadata={...}),
Document(page_content='training iterations for 7 standard zero-shot tasks. 
We ob-\\nserve an overall increase in accuracy with longer training\\n3\\nModel\\nModel size\\nPretraining tokens\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSciQ\\nWinoGrande\\nAverage\\nAverage w/o SciQ\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n26.45\\n45.08\\n53.98\\n46.71\\n69.75\\n84.70\\n53.91\\n54.37\\n49.31\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n26.62\\n46.04\\n55.72\\n51.06\\n71.11\\n83.60\\n53.20\\n55.34\\n50.63\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n27.56\\n48.06\\n55.78\\n53.97\\n72.31\\n87.20\\n58.01\\n57.56\\n52.62\\nTinyLlama [54]\\n1.10 B\\n3.0 T\\n30.12\\n55.25\\n57.83\\n59.20\\n73.29\\n-\\n59.12\\n-\\n55.80\\nOpenLM [18]\\n1.00 B\\n1.6 T\\n31.00\\n56.00\\n65.00\\n61.00\\n74.00\\n-\\n60.00\\n-\\n57.83\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n28.84\\n49.62\\n60.03\\n52.45\\n73.18\\n85.90\\n55.96\\n58.00', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='53.35\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n31.91\\n56.65\\n60.34\\n62.18\\n74.81\\n89.10\\n59.27\\n62.04\\n57.53\\nOLMo [17]\\n1.18 B\\n3.0 T\\n31.06\\n57.28\\n61.74\\n62.92\\n75.14\\n87.00\\n59.98\\n62.16\\n58.02\\nOpenELM (Ours)\\n1.08 B\\n1.5 T\\n32.34\\n55.43\\n63.58\\n64.81\\n75.57\\n90.60\\n61.72\\n63.44\\n58.91\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n35.58\\n59.89\\n67.40\\n72.44\\n78.24\\n92.70\\n65.51\\n67.39\\n63.18\\n(a) Results on zero-shot tasks with respect to the standard metrics defined in Tab. 
3a.\\nModel\\nModel size\\nPretraining tokens\\nARC-c\\nHellaSwag\\nMMLU\\nTruthfulQA-mc2\\nWinoGrande\\nAverage\\nCerebras-GPT [14]\\n0.26 B\\n5.1 B\\n22.01\\n28.99\\n26.83\\n45.98\\n52.49\\n35.26\\nOPT [55]\\n0.35 B\\n0.2 T\\n23.55\\n36.73\\n26.02\\n40.83\\n52.64\\n35.95\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n27.65\\n47.15\\n25.72\\n39.24\\n53.83\\n38.72', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. 
This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Pythia [5]\\n0.41 B\\n0.3 T\\n24.83\\n41.29\\n25.99\\n40.95\\n54.38\\n37.49\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n29.52\\n52.75\\n26.09\\n37.55\\n56.27\\n40.44\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n30.20\\n53.86\\n26.01\\n40.18\\n57.22\\n41.50\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n30.63\\n54.17\\n25.2\\n38.41\\n56.35\\n40.95\\nPythia [5]\\n1.40 B\\n0.3 T\\n32.68\\n54.96\\n25.56\\n38.66\\n57.30\\n41.83\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n34.64\\n63.27\\n23.87\\n35.19\\n60.77\\n43.55\\nOLMo [17]\\n1.18 B\\n3.0 T\\n34.47\\n63.81\\n26.16\\n32.94\\n60.46\\n43.57\\nOpenELM (Ours)\\n1.08 B\\n1.5 T\\n36.69\\n65.71\\n27.05\\n36.98\\n63.22\\n45.93\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n42.24\\n73.28\\n26.76\\n34.98\\n67.25\\n48.90\\n(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 3b.\\nModel\\nModel size\\nPretraining tokens\\nARC-c', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. 
OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='CrowS-Pairs\\nHellaSwag\\nMMLU\\nPIQA\\nRACE\\nTruthfulQA\\nWinoGrande\\nAverage\\nOpenELM (Ours)\\n0.27 B\\n1.5 T\\n27.65\\n66.79\\n47.15\\n25.72\\n69.75\\n30.91\\n39.24\\n53.83\\n45.13\\nMobiLlama [44]\\n0.50 B\\n1.3 T\\n29.52\\n65.47\\n52.75\\n26.09\\n71.11\\n32.15\\n37.55\\n56.27\\n46.37\\nOpenELM (Ours)\\n0.45 B\\n1.5 T\\n30.20\\n68.63\\n53.86\\n26.01\\n72.31\\n33.11\\n40.18\\n57.22\\n47.69\\nMobiLlama [44]\\n0.80 B\\n1.3 T\\n30.63\\n66.25\\n54.17\\n25.2\\n73.18\\n33.68\\n38.41\\n56.35\\n47.23\\nMobiLlama [44]\\n1.26 B\\n1.3 T\\n34.64\\n70.24\\n63.27\\n23.87\\n74.81\\n35.02\\n35.19\\n60.77\\n49.73\\nOLMo [17]\\n1.18 B\\n3.0 T\\n34.47\\n69.95\\n63.81\\n26.16\\n75.14\\n36.75\\n32.94\\n60.46\\n49.96\\nOpenELM (Ours)\\n1.08 B\\n1.5T\\n36.69\\n71.74\\n65.71\\n27.05\\n75.57\\n36.46\\n36.98\\n63.22\\n51.68\\nOpenELM (Ours)\\n3.04 B\\n1.5 T\\n42.24\\n73.29\\n73.28\\n26.76\\n78.24\\n38.76', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='34.98\\n67.25\\n54.35\\n(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.\\nTable 4. Comparison of OpenELM with publicly available LLMs across various evaluation frameworks.. We chose MobiLlama and\\nOLMo as our baselines because they are pre-trained on public datasets using a similar or larger number of tokens. We evaluate OpenELM,\\nMobiLlama, and OLMo using the same LM evaluation harness version. Results for other models in Tab. 4a and Tab. 4b are taken from their\\nofficial GitHub repositories and the OpenLLM leaderboard [4], respectively. Best task accuracy for each model category is highlighted in\\nbold. 
Models pre-trained with less data are highlighted in gray color.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='durations across most tasks. 
Additionally, the checkpoint\\nobtained by averaging the last five checkpoints, collected\\nat intervals of 5000 iterations, demonstrates comparable or\\nslightly better accuracy compared to the final checkpoint\\nobtained after 350k iterations. This improvement is likely\\ndue to noise reduction through weight averaging. Conse-\\nquently, we use the averaged checkpoint for our main eval-\\nuations in Tab. 4, instruction tuning experiments in Tab. 5,\\nand parameter-efficient tuning experiments in Tab. 6.\\nThe results in Tab. 4 span across various evaluation\\nframeworks, and highlights OpenELM’s effectiveness over\\nexisting methods. For instance, an OpenELM variant with', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='1.1 billion parameters achieves 1.28% (Tab. 4a), 2.36%\\n(Tab. 4b), and 1.72% (Tab. 4c) higher accuracy compared to\\nOLMo with 1.2 billion parameters. Remarkably, OpenELM\\nachieves this level of accuracy while using 2× less pre-\\ntraining data.\\nInstruction tuning results.\\nWe use the cleaned variant of\\nUltraFeedback [3, 12] dataset that consists of 60k prompts\\nfor instruction tuning.\\nWe do instruction tuning using\\nAlignment Handbook library [47]. For optimization, we\\nuse either the statistical rejection sampling method [25] or\\nthe direct preference optimization method [37]. 
These sam-\\npling method details along with other hyper-parameters and\\nfine-tuning details are given in Appendix B.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Tab. 5 shows that instruction tuning consistently im-\\nproves OpenELM’s average accuracy by 1-2% across dif-\\nferent evaluation frameworks.\\n4\\nModel Size\\nInstruction Tuned?\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSciQ\\nWinoGrande\\nAverage\\n0.27 B\\n✗\\n26.45\\n45.08\\n53.98\\n46.71\\n69.75\\n84.70\\n53.91\\n54.37\\n✓\\n30.55\\n46.68\\n48.56\\n52.07\\n70.78\\n84.40\\n52.72\\n55.11\\n0.45 B\\n✗\\n27.56\\n48.06\\n55.78\\n53.97\\n72.31\\n87.20\\n58.01\\n57.56\\n✓\\n30.38\\n50.00\\n60.37\\n59.34\\n72.63\\n88.00\\n58.96\\n59.95\\n1.08 B\\n✗\\n32.34\\n55.43\\n63.58\\n64.81\\n75.57\\n90.60\\n61.72\\n63.44\\n✓\\n37.97\\n52.23\\n70.00\\n71.20\\n75.03\\n89.30\\n62.75\\n65.50\\n3.04 B\\n✗\\n35.58\\n59.89\\n67.40\\n72.44\\n78.24\\n92.70\\n65.51\\n67.39\\n✓\\n39.42\\n61.74\\n68.17\\n76.36\\n79.00\\n92.50\\n66.85\\n69.15', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='(a) Results on zero-shot tasks with respect to the metrics defined in Tab. 3a.\\nModel Size\\nInstruction Tuned?\\nARC-c\\nHellaSwag\\nMMLU\\nTruthfulQA\\nWinoGrande\\nAverage\\n0.27 M\\n✗\\n27.65\\n47.15\\n25.72\\n39.24\\n53.83\\n38.72\\n✓\\n32.51\\n51.58\\n26.70\\n38.72\\n53.20\\n40.54\\n0.45 M\\n✗\\n30.20\\n53.86\\n26.01\\n40.18\\n57.22\\n41.50\\n✓\\n33.53\\n59.31\\n25.41\\n40.48\\n58.33\\n43.41\\n1.08 B\\n✗\\n36.69\\n65.71\\n27.05\\n36.98\\n63.22\\n45.93\\n✓\\n41.55\\n71.83\\n25.65\\n45.95\\n64.72\\n49.94\\n3.04 B\\n✗\\n42.24\\n73.28\\n26.76\\n34.98\\n67.25\\n48.90\\n✓\\n47.70\\n76.87\\n24.80\\n38.76\\n67.96\\n51.22\\n(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 
3b.\\nModel Size\\nInstruction Tuned?\\nARC-c\\nCrowS-Pairs\\nHellaSwag\\nMMLU\\nPIQA\\nRACE\\nTruthfulQA\\nWinoGrande\\nAverage\\n0.27 M\\n✗\\n27.65', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='66.79\\n47.15\\n25.72\\n69.75\\n30.91\\n39.24\\n53.83\\n45.13\\n✓\\n32.51\\n66.01\\n51.58\\n26.70\\n70.78\\n33.78\\n38.72\\n53.20\\n46.66\\n0.45 M\\n✗\\n30.20\\n68.63\\n53.86\\n26.01\\n72.31\\n33.11\\n40.18\\n57.22\\n47.69\\n✓\\n33.53\\n67.44\\n59.31\\n25.41\\n72.63\\n36.84\\n40.48\\n58.33\\n49.25\\n1.08 B\\n✗\\n36.69\\n71.74\\n65.71\\n27.05\\n75.57\\n36.46\\n36.98\\n63.22\\n51.68\\n✓\\n41.55\\n71.02\\n71.83\\n25.65\\n75.03\\n39.43\\n45.95\\n64.72\\n54.40\\n3.04 B\\n✗\\n42.24\\n73.29\\n73.28\\n26.76\\n78.24\\n38.76\\n34.98\\n67.25\\n54.35\\n✓\\n47.70\\n72.33\\n76.87\\n24.80\\n79.00\\n38.47\\n38.76\\n67.96\\n55.73\\n(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.\\nTable 5. Instruction tuning improves OpenELM’s accuracy across different model sizes.\\nParameter-efficient fine-tuning (PEFT) results.\\nWe use', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='the CommonSense reasoning training and evaluation setup\\n[22].\\nThis setup provides 170k training samples across\\n8 multiple-choice datasets for PEFT studies with differ-\\nent methods, including LoRA [21] and DoRA [51]. We\\nintegrate OpenELM with these methods, and finetune the\\nresulting model for three epochs using 8 NVIDIA H100\\nGPUs. Tab. 6 shows that PEFT methods can be applied to\\nOpenELM. LoRA and DoRA deliver similar accuracy on\\naverage across the given CommonSense reasoning datasets.\\n4. 
Benchmarking\\nHardware.\\nWe benchmark on modern, consumer-grade\\nhardware with BFloat16 as the data type.\\nSpecifically,\\nCUDA benchmarks were performed on a workstation with', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='an Intel i9-13900KF CPU, equipped with 64 GB of DDR5-\\n4000 DRAM, and an NVIDIA RTX 4090 GPU with 24 GB\\nof VRAM, running Ubuntu 22.04. PyTorch v2.2.2 [34] was\\nused, with the most recent versions of models and the as-\\nsociated libraries. HuggingFace Transformers v4.39.3 [50]\\nwas used to benchmark HuggingFace models. We did not\\nuse Torch Inductor for model compilation.\\nTo benchmark OpenELM models on the Apple silicon,\\nwe used an Apple MacBook Pro with an M2 Max system-\\non-chip and 64GiB of RAM, running macOS 14.4.1. We\\nported the code and the weights of OpenELM to Apple\\nMLX v0.10.0 [19]. To maximize the throughput, lazy eval-\\nuation was used in MLX with 8 tokens evaluated at a time.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Evaluation.\\nWe provide two separate measurements for\\ntoken throughput (measured in terms of tokens processed\\nper second): (1) prompt processing (pre-fill), and (2) token\\ngeneration. Additionally, we also report the total combined\\nthroughput.\\nWe benchmark all models sequentially, and\\nexecute one full “dry run” generating 1024 tokens for the\\nfirst model, since we found that this significantly increases\\nthe throughput of generation for subsequent models. Be-\\nfore measurement for each individual model, we warm up\\nthe model by executing a single forward pass to allow the\\nframeworks to perform further auto-tuning, if any. 
In all\nexperiments, we use key-value caching and generate 1024', metadata={...}), Document(page_content='tokens in addition to the prompt tokens in all tests. 
Static\n5\nModel Size\nPEFT\nARC-c\nARC-e\nBoolQ\nHellaSwag\nPIQA\nSIQA\nWinoGrande\nOBQA\nAverage\n0.27 B\nLoRA\n24.57\n26.60\n62.14\n24.84\n50.05\n42.02\n49.88\n28.00\n38.51\nDoRA\n26.19\n28.07\n62.20\n25.22\n50.11\n44.42\n50.12\n31.20\n39.69\n0.45 B\nLoRA\n28.67\n29.88\n62.29\n25.85\n52.39\n49.59\n50.91\n33.20\n41.60\nDoRA\n28.33\n30.39\n62.26\n25.12\n52.29\n49.28\n50.83\n32.00\n41.31\n1.08 B\nLoRA\n45.14\n61.11\n61.77\n77.95\n72.31\n69.70\n61.64\n59.20\n63.60\nDoRA\n44.11\n61.49\n61.68\n78.92\n71.38\n69.04\n64.01\n58.80\n63.68\n3.04 B\nLoRA\n46.93\n66.25\n62.48\n81.22\n75.19\n70.62\n65.51\n58.20\n65.80\nDoRA\n46.50\n66.46\n62.35\n80.84\n75.73\n70.83\n63.77\n58.20\n65.59', metadata={...}), Document(page_content='Table 6. OpenELM with PEFT. Both LoRA and DoRA demonstrate comparable performance when OpenELM is finetuned on Common-\nSense reasoning benchmark. It’s important to note that these fine-tuning results, obtained using the evaluation setup of LLM-Adapters [22],\ndiffer from the results in Tabs. 4 and 5. This is because the results in Tabs. 4 and 5 are obtained under zero- and few-shot settings using\nLM Evaluation Harness. Note that we did not use social interactions QA (SIQA; [40]) and OpenBookQA (OBQA; [31]) in Tabs. 
4 and 5\nbecause of evaluation issues with LLama tokenizer in LM Evaluation Harness (see [45]).\nModel\nModel size\nThroughput (Tokens per second)\nPrompt\nGeneration\nTotal\nOPT [55]', metadata={...}), Document(page_content='0.35 B\n6524.17\n214.11\n220.21\nOpenELM (Ours)\n0.27 B\n6427.27\n159.67\n165.85\nMobiLlama [44]\n0.50 B\n3423.25\n136.35\n146.86\nOpenELM (Ours)\n0.45 B\n5211.35\n128.46\n133.42\nMobiLlama [44]\n0.80 B\n4151.75\n126.01\n130.08\nPythia [5]\n1.40 B\n4501.85\n139.65\n143.83\nMobiLlama [44]\n1.26 B\n4938.29\n142.96\n147.67\nOLMo [17]\n1.18 B\n7151.65\n203.40\n209.26\nOpenELM (Ours)\n1.08 B\n3681.73\n92.15\n95.72\nOpenELM (Ours)\n3.04 B\n2712.56\n70.11\n72.82\n(a) Results on NVIDIA CUDA / Linux.\nModel\nThroughput (Tokens per second)\nPrompt\nGeneration\nTotal\nOpenELM-0.27B\n1151.41\n212.40\n218.45\nOpenELM-0.27B-4bit\n803.99\n256.35\n262.70\nOpenELM-0.45B\n910.61\n147.26\n151.57\nOpenELM-0.45B-4bit\n883.19\n197.81\n203.16\nOpenELM-1.08B\n508.56\n78.72\n81.04', metadata={...}), Document(page_content='OpenELM-1.08B-4bit\n554.17\n117.90\n121.14\nOpenELM-3.04B-bf16\n234.96\n33.96\n34.97\nOpenELM-3.04B-bf16-4bit\n211.32\n60.33\n61.83\n(b) Results for the MLX port on Apple macOS.\nTable 7. Benchmark measurements of OpenELM compared\nto other similar LLMs in its class..\nOn CUDA, we evaluate\nOpenELM, MobiLlama, and OLMo using the CoreNet version of\nOpenELM and HuggingFace for the other two. On macOS, we\nonly provide results for the MLX version of OpenELM.\nkey-value cache was used whenever supported. The same\nprompt was used for all runs, resulting in prompt lengths of\n35-36 tokens (depending on the tokenizer).\nResults.\nTabs. 
7a and 7b shows the benchmarking re-\nsults on GPU and MacBook Pro respectively.', metadata={...}), Document(page_content='Despite\nOpenELM’s higher accuracy for a similar parameter count,\nwe observe that it is slower than OLMo. While the primary\nfocus of this study is reproducibility rather than inference\nModel\nNormalization layer\nThroughput (Tokens per second)\n(# Invocations per token)\nPrompt\nGeneration\nTotal\nOLMo\nLayerNorm (33)\n7151.65\n203.40\n209.26\nRMSNorm-Naive (33)\n5360.56\n171.41\n176.92\nOpenELM (Ours)\nLayerNorm (113)\n4697.50\n130.34\n135.38\nRMSNorm-Naive (113)\n3681.73\n92.15\n95.72\nRMSNorm-Apex (113)\n4280.66\n113.42\n117.81\nTable 8. Normalization layers are a bottleneck. The through-\nput of both OLMo-1.18B and OpenELM-1.08B significantly de-\ncreases with the naive implementation of RMSNorm in PyTorch', metadata={...}), Document(page_content='compared to highly optimized LayerNorm [2]. Although Apex’s\n[33] RMSNorm implementation leads to notable throughput im-\nprovements compared to the naive implementation, a considerable\nperformance gap persists in comparison to LayerNorm. This high-\nlights the substantial optimization potential for future endeavors.\nThe number of invocations per token for each normalization layer\nis indicated next to the layer name in brackets.\nperformance, we did comprehensive profiling to understand\nthe bottlenecks. Our analysis reveals that a significant por-\ntion of OpenELM’s processing time can be attributed to our\nnaive implementation of RMSNorm (Tab. 8). 
Specifically,', metadata={...}), Document(page_content='naive RMSNorm implementation results in many individ-\nual kernel launches each of which processes a small input,\nrather than a launch of a single, fused kernel, as would be\nthe case with e.g. LayerNorm. By replacing the naive RM-\nSNorm with Apex’s RMSNorm [33], we observe a notable\nincrease in OpenELM’s throughput. However, a substantial\nperformance gap persists compared to the models that use\noptimized LayerNorm, in part because (1) OpenELM has\n113 RMSNorm layers as compared to 33 LayerNorm layers\nin OLMo and (2) Apex’s RMSNorm is not optimized for\nsmall inputs. To further illustrate the performance degrada-\ntion attributable to RMSNorm, we replaced the LayerNorm', metadata={...}), Document(page_content='in OLMo with RMSNorm, and observed a significant drop\nin generation throughput. In future work, we plan to explore\noptimization strategies to further improve the inference ef-\nficiency of OpenELM.\n6\n5. Conclusion\nThis\nwork\nreleases\nOpenELM,\na\ndecoder-only\ntransformer-based open language model. The OpenELM\nuses a layer-wise scaling method for efficient parameter\nallocation within the transformer model,\nresulting in\nimproved accuracy compared to existing models.\nAddi-\ntionally, we have made the entire framework open-source,\nincluding training logs, multiple checkpoints, pre-training\nconfigurations, and MLX inference code. 
This extensive\nrelease aims to empower and strengthen the open research', metadata={...}), Document(page_content='community, facilitating future research efforts.\nAuthor Contributions\nThe OpenELM project was led by Sachin Mehta, with\nadditional lead contributions from Mohammad Rastegari\nand Peter Zatloukal. OpenELM would not have been possi-\nble without the help of our many teammates and collabora-\ntors. We list author contributions below:\nPre-training dataset collection and tooling:\nSachin\nMehta and Mohammad Sekhavat\nArchitecture design: Sachin Mehta\nModel training: Sachin Mehta and Mohammad Sekha-\nvat\nEvaluation suite and tooling: Sachin Mehta, Qingqing\nCao, Mohammad Sekhavat, Mahyar Najibi, Maxwell\nHorton, and Iman Mirzadeh.\nHuggingface integration: Qingqing Cao\nInstruction tuning: Qingqing Cao', metadata={...}), Document(page_content='Parameter-efficient finetuning: Maxwell Horton\nPerformance analysis and MLX conversion: Chenfan\nSun, Dmitry Belenko, and Mahyar Najibi\nCode review, bug fixes, and maintenance:\nSachin\nMehta, Maxwell Horton, Mohammad Shekhavat, and\nYanzi Jin\nAcknowledgements\nWe extend our gratitude to the following people for dis-\ncussions and assistance: Farzad Abdolhosseini, David Har-\nrison, Mehrdad Farajtabar, Fartash Faghri, Oncel Tuzel,\nHadipour Ansari, Raviteja Vemulapalli, Aseem Wadhwa,\nKumari Nishu, Danny Tormoen, Minsik Cho, Jason Rama-\npuram, Rich Moe, Arsalan Farooq, Dom L’Eplattenier,\nMayank Goel, Hassan Babaie, Chong Wang, Ruoming\nPang, Tom Gunter, Antonie Lin, Irina Belousova, and Joris', metadata={...}), Document(page_content='Pelemans.\nBroader Impact\nThe release of OpenELM models aims to empower and\nenrich the open research community by providing access\nto state-of-the-art language models.\nTrained on publicly\navailable datasets, these models are made available with-\nout any safety guarantees. 
Consequently, there exists the\npossibility of these models producing outputs that are in-\naccurate, harmful, biased, or objectionable in response to\nuser prompts. Thus, it is imperative for users and devel-\nopers to undertake thorough safety testing and implement\nappropriate filtering mechanisms tailored to their specific\nrequirements.\nReferences\n[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury', metadata={...}), Document(page_content='Zemlyanskiy, Federico Lebr´\non, and Sumit Sanghai.\nGqa:\nTraining generalized multi-query transformer models from\nmulti-head checkpoints. arXiv preprint arXiv:2305.13245,\n2023. 2\n[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin-\nton. Layer normalization. arXiv preprint arXiv:1607.06450,\n2016. 6\n[3] Alvaro Bartolome, Gabriel Martin, and Daniel Vila. Notus.\nhttps://github.com/argilla-io/notus, 2023.\n4\n[4] Edward Beeching, Cl´\nementine Fourrier, Nathan Habib,\nSheon Han, Nathan Lambert, Nazneen Rajani, Omar San-\nseviero, Lewis Tunstall, and Thomas Wolf.\nOpen llm\nleaderboard. https://huggingface.co/spaces/\nHuggingFaceH4/open_llm_leaderboard, 2023. 1,\n3, 4', metadata={...}), Document(page_content='[5] Stella Biderman, Hailey Schoelkopf, Quentin Gregory An-\nthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mo-\nhammad Aflah Khan, Shivanshu Purohit, USVSN Sai\nPrashanth, Edward Raff, et al. Pythia: A suite for analyz-\ning large language models across training and scaling. In In-\nternational Conference on Machine Learning, pages 2397–\n2430. PMLR, 2023. 1, 2, 3, 4, 6\n[6] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi,\net al. Piqa: Reasoning about physical commonsense in nat-\nural language. In Proceedings of the AAAI conference on\nartificial intelligence, volume 34, pages 7432–7439, 2020. 
3\\n[7] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Sub-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='biah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakan-\\ntan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-\\nguage models are few-shot learners. Advances in neural in-\\nformation processing systems, 33:1877–1901, 2020. 1\\n[8] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin.\\nTraining deep nets with sublinear memory cost.\\narXiv\\npreprint arXiv:1604.06174, 2016. 3\\n[9] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom\\nKwiatkowski, Michael Collins, and Kristina Toutanova.\\nBoolq: Exploring the surprising difficulty of natural yes/no\\nquestions. arXiv preprint arXiv:1905.10044, 2019. 3\\n7\\n[10] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.\\nThink you have solved question answering?\\ntry arc, the\\nai2 reasoning challenge. arXiv preprint arXiv:1803.05457,\\n2018. 3\\n[11] Together Computer. Redpajama: An open source recipe to\\nreproduce llama training dataset, 2023. 2\\n[12] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei\\nZhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun.\\nUltrafeedback: Boosting language models with high-quality\\nfeedback, 2023. 4\\n[13] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christo-\\npher R´\\ne. Flashattention: Fast and memory-efficient exact at-\\ntention with io-awareness. Advances in Neural Information\\nProcessing Systems, 35:16344–16359, 2022. 
2', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='[14] Nolan Dey, Gurpreet Gosal, Hemant Khachane, William\\nMarshall, Ribhu Pathria, Marvin Tom, Joel Hestness, et al.\\nCerebras-gpt:\\nOpen compute-optimal language models\\ntrained on the cerebras wafer-scale cluster. arXiv preprint\\narXiv:2304.03208, 2023. 3, 4\\n[15] Leo Gao, Stella Biderman, Sid Black, Laurence Golding,\\nTravis Hoppe, Charles Foster, Jason Phang, Horace He, An-\\nish Thite, Noa Nabeshima, et al.\\nThe pile: An 800gb\\ndataset of diverse text for language modeling. arXiv preprint\\narXiv:2101.00027, 2020. 2\\n[16] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, An-\\nthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu,\\nKyle McDonell, Niklas Muennighoff, Jason Phang, Laria', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,\\nand Andy Zou. A framework for few-shot language model\\nevaluation, Sept. 2021. 3\\n[17] Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia,\\nRodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish\\nIvison, Ian Magnusson, Yizhong Wang, et al. Olmo: Ac-\\ncelerating the science of language models. arXiv preprint\\narXiv:2402.00838, 2024. 1, 2, 3, 4, 6\\n[18] Suchin Gururangan, Mitchell Wortsman, Samir Yitzhak\\nGadre, Achal Dave, Maciej Kilian, Weijia Shi, Jean Mer-\\ncat, Georgios Smyrnis, Gabriel Ilharco, Matt Jordan, Rein-\\nhard Heckel, Alex Dimakis, Ali Farhadi, Vaishaal Shankar,\\nand Ludwig Schmidt. 
OpenLM: A minimal but performative', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='language modeling (lm) repository, 2023. 
GitHub repository.\\n3, 4\\n[19] Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and\\nRonan Collobert. MLX: Efficient and flexible machine learn-\\ning on apple silicon, 2024. 5\\n[20] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,\\nMantas Mazeika, Dawn Song, and Jacob Steinhardt. Mea-\\nsuring massive multitask language understanding.\\narXiv\\npreprint arXiv:2009.03300, 2020. 3\\n[21] J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-\\nZhu, Yuanzhi Li, Shean Wang, and Weizhu Chen.\\nLora:\\nLow-rank adaptation of large language models.\\nArXiv,\\nabs/2106.09685, 2021. 5\\n[22] Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Peng Lim, Roy Ka-Wei Lee, Lidong Bing, and Soujanya\\nPoria.\\nLlm-adapters: An adapter family for parameter-\\nefficient fine-tuning of large language models.\\nArXiv,\\nabs/2304.01933, 2023. 5, 6\\n[23] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang,\\nand Eduard Hovy.\\nRace:\\nLarge-scale reading com-\\nprehension dataset from examinations.\\narXiv preprint\\narXiv:1704.04683, 2017. 3\\n[24] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa:\\nMeasuring how models mimic human falsehoods.\\narXiv\\npreprint arXiv:2109.07958, 2021. 3\\n[25] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mo-\\nhammad Saleh, Peter J. Liu, and Jialu Liu. Statistical Rejec-\\ntion Sampling Improves Preference Optimization, Jan. 
2024.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='arXiv:2309.06657 [cs]. 
4\\n[26] Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi\\nWang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi\\nSun, Omkar Pangarkar, et al. Llm360: Towards fully trans-\\nparent open-source llms. arXiv preprint arXiv:2312.06550,\\n2023. 3\\n[27] Ilya Loshchilov and Frank Hutter.\\nSgdr:\\nStochas-\\ntic gradient descent with warm restarts.\\narXiv preprint\\narXiv:1608.03983, 2016. 3\\n[28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay\\nregularization. arXiv preprint arXiv:1711.05101, 2017. 2\\n[29] Sachin Mehta, Farzad Abdolhosseini, and Mohammad\\nRastegari. Cvnets: High performance library for computer\\nvision. In Proceedings of the 30th ACM International Con-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='ference on Multimedia, pages 7327–7330, 2022. 2\\n[30] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke\\nZettlemoyer, and Hannaneh Hajishirzi. Delight: Deep and\\nlight-weight transformer. arXiv preprint arXiv:2008.00623,\\n2020. 1, 2\\n[31] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sab-\\nharwal.\\nCan a suit of armor conduct electricity?\\na new\\ndataset for open book question answering. arXiv preprint\\narXiv:1809.02789, 2018. 6\\n[32] Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R\\nBowman. Crows-pairs: A challenge dataset for measuring\\nsocial biases in masked language models.\\narXiv preprint\\narXiv:2010.00133, 2020. 3\\n[33] NVIDIA Corporation. 
Apex: A pytorch extension with tools', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='for mixed precision training and more. GitHub, 2024. 
6\\n[34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,\\nJames Bradbury, Gregory Chanan, Trevor Killeen, Zeming\\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison,\\nAndreas Kopf, Edward Yang, Zachary DeVito, Martin Rai-\\nson, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner,\\nLu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An\\nImperative Style, High-Performance Deep Learning Library.\\nIn H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´\\ne\\nBuc, E. Fox, and R. Garnett, editors, Advances in Neural In-\\nformation Processing Systems 32, pages 8024–8035. Curran\\nAssociates, Inc., 2019. 5', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. 
We also release code to convert models to MLX\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\nrelease aims to empower and strengthen the open research community, paving the\nway for future open research endeavors.\n  Our source code along with pre-trained model weights and training recipes is\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\nmodels can be found on HuggingFace at:\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), ...]\n",
      "The participants are: AI accelerationist, AI alarmist\n",
      "    \n",
      "Your name is AI accelerationist.\n",
      "\n",
      "Your description is as follows: AI accelerationist believes in the potential of AI to enhance human capabilities and make the world a better place. They are passionate about developing and implementing AI solutions that improve efficiency, productivity, and quality of life. They advocate for responsible and ethical use of AI, and strive to create a future where AI is a valuable and trusted partner for humanity.\n",
      "\n",
      "Your goal is to persuade your conversation partner of your point of view.\n",
      "\n",
      "DO look up information with your tool to refute your partner's claims.\n",
      "DO cite your sources.\n",
      "\n",
      "DO NOT fabricate fake citations.\n",
      "DO NOT cite any source that you did not look up.\n",
      "\n",
      "Do not add anything else.\n",
      "\n",
      "Stop speaking the moment you finish speaking from your perspective.\n",
      "\n",
      "AI alarmist\n",
      "Here is the topic of conversation: [Document(page_content='OpenELM: An Efficient Language Model Family with Open-source Training\\nand Inference Framework\\nSachin Mehta\\nMohammad Hossein Sekhavat\\nQingqing Cao\\nMaxwell Horton\\nYanzi Jin\\nChenfan Sun\\nIman Mirzadeh\\nMahyar Najibi\\nDmitry Belenko\\nPeter Zatloukal\\nMohammad Rastegari\\nApple\\nModel\\nPublic dataset\\nOpen-source\\nModel size\\nPre-training tokens\\nAverage acc. (in %)\\nCode\\nWeights\\nOPT [55]\\n✗\\n✓\\n✓\\n1.3 B\\n0.2 T\\n41.49\\nPyThia [5]\\n✓\\n✓\\n✓\\n1.4 B\\n0.3 T\\n41.83\\nMobiLlama [44]\\n✓\\n✓\\n✓\\n1.3 B\\n1.3 T\\n43.55\\nOLMo [17]\\n✓\\n✓\\n✓\\n1.2 B\\n3.0 T\\n43.57\\nOpenELM (Ours)\\n✓\\n✓\\n✓\\n1.1 B\\n1.5 T\\n45.93\\nTable 1. OpenELM vs. public LLMs. OpenELM outperforms comparable-sized existing LLMs pretrained on publicly available datasets.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\na parameter budget of approximately one billion parameters, OpenELM exhibits a\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\npre-training tokens.\n  Diverging from prior practices that only provide model weights and inference\ncode, and pre-train on private datasets, our release includes the complete\nframework for training and evaluation of the language model on publicly\navailable datasets, including training logs, multiple checkpoints, and\npre-training configurations. We also release code to convert models to MLX\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\nrelease aims to empower and strengthen the open research community, paving the\nway for future open research endeavors.\n  Our source code along with pre-trained model weights and training recipes is\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\nmodels can be found on HuggingFace at:\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Notably, OpenELM outperforms the recent open LLM, OLMo, by 2.36% while requiring 2× fewer pre-training tokens. The average\naccuracy is calculated across multiple tasks listed in Tab. 3b, which are also part of the OpenLLM leaderboard [4]. Models pretrained with\nless data are highlighted in gray color.\nAbstract\nThe reproducibility and transparency of large language\nmodels are crucial for advancing open research, ensuring\nthe trustworthiness of results, and enabling investigations\ninto data and model biases, as well as potential risks. To\nthis end, we release OpenELM, a state-of-the-art open lan-\nguage model. OpenELM uses a layer-wise scaling strategy', metadata={...}), Document(page_content='to efficiently allocate parameters within each layer of the\ntransformer model, leading to enhanced accuracy. For ex-\nample, with a parameter budget of approximately one bil-\nlion parameters, OpenELM exhibits a 2.36% improvement\nin accuracy compared to OLMo while requiring 2× fewer\npre-training tokens.\nDiverging from prior practices that only provide model\nweights and inference code, and pre-train on private\ndatasets, our release includes the complete framework for\ntraining and evaluation of the language model on publicly\navailable datasets, including training logs, multiple check-\npoints, and pre-training configurations.\nWe also release', metadata={...}), Document(page_content='code to convert models to MLX library for inference and\nfine-tuning on Apple devices. This comprehensive release\naims to empower and strengthen the open research commu-\nnity, paving the way for future open research endeavors.\nOur source code along with pre-trained model weights\nand training recipes is available at https://github.\ncom/apple/corenet. Additionally, OpenELM mod-\nels can be found on HuggingFace at:\nhttps : / /\nhuggingface.co/apple/OpenELM.\n1. Introduction\nTransformer-based [48] large language models (LLM)\nare revolutionizing the field of natural language processing\n[7,46]. 
These models are isotropic, meaning that they have\\nthe same configuration (e.g., number of heads and feed-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='forward network dimensions) for each transformer layer.\\nThough such isotropic models are simple, they may not al-\\nlocate parameters efficiently inside the model.\\nIn this work, we develop and release OpenELM, a fam-\\nily of pre-trained and fine-tuned models on publicly avail-\\nable datasets.\\nAt the core of OpenELM lies layer-wise\\nscaling [30], enabling more efficient parameter allocation\\nacross layers. This method utilizes smaller latent dimen-\\nsions in the attention and feed-forward modules of the trans-\\nformer layers closer to the input, and gradually widening the\\nlayers as they approach the output.\\nWe release the complete framework, encompassing data', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='preparation, training, fine-tuning, and evaluation proce-\\ndures, alongside multiple pre-trained checkpoints and train-\\ning logs, to facilitate open research. Importantly, OpenELM\\noutperforms existing open LLMs that are pre-trained us-\\ning publicly available datasets (Tab. 1).\\nFor example,\\nOpenELM with 1.1 billion parameters outperforms OLMo\\n1\\narXiv:2404.14619v1  [cs.CL]  22 Apr 2024\\n[17], which has 1.2 billion parameters, by 2.36% while re-\\nquiring 2× fewer pre-training tokens.\\n2. Pre-training\\nThis section describes the framework, including model\\narchitecture (§2.1), pre-training data (§2.2), training hyper-\\nparameters (§2.3), and evaluation (§2.4).\\n2.1. 
OpenELM architecture', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='We adopt the decoder-only transformer-based architec-\\nture. 
Following state-of-the-art LLMs, we: (1) do not use\\nlearnable bias parameters in any fully-connected (a.k.a., lin-\\near) layers, (2) apply pre-normalization using RMSNorm\\n[53] and also, use rotatory positional embedding (ROPE)\\n[43] for encoding positional information, (3) use grouped\\nquery attention (GQA) [1] instead of multi-head attention\\n(MHA), (4) replace the feed forward network (FFN) with\\nSwiGLU FFN [41], (5) use flash attention [13] for comput-\\ning the scaled dot-product attention, and (6) use the same\\ntokenizer as LLama [46].\\nExisting LLMs use the same configuration for each', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. 
We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='transformer layer in the model, resulting in a uniform al-\\nlocation of parameters across layers. Unlike these models,\\neach transformer layer in OpenELM has a different config-\\nuration (e.g., number of heads and feed forward network\\ndimension), resulting in variable number of parameters in\\neach layer of the model. This lets OpenELM to better utilize\\nthe available parameter budget for achieving higher accura-\\ncies. We implement this non-uniform allocation of parame-\\nters across layers using layer-wise scaling (also referred as\\nblock-wise scaling in [30]).\\nLayer-wise scaling.\\nA standard transformer layer is com-\\nposed of multi-head attention (MHA) and feed-forward net-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. 
OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='work (FFN). For non-uniform allocation of parameters in\\nthe transformer layer, we adjust the number of attention\\nheads and the FFN multiplier in each transformer layer.\\nAssume that the standard transformer model with uni-\\nform parameter allocation has N transformer layers and\\nthe dimensionality of the input to each layer is dmodel.\\nThe MHA has nh heads and dimension of each head is\\ndh =\\ndmodel\\nnh\\n.\\nAlso, the hidden dimension for FFN is\\ndFFN = m · dmodel, where m is a scalar FFN multiplier.\\nWe introduce parameters α and β to scale the number of\\nattention heads nh and FFN multiplier m per layer respec-\\ntively. 
For the i-th layer, nh and m are computed as\\nni\\nh = αi · dmodel\\ndh\\n,\\nmi = βi', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='where αi = αmin + (αmax −αmin) · i\\nN −1\\n,\\nand βi = βmin + (βmax −βmin) · i\\nN −1\\n, 0 ≤i < N.\\n(1)\\nHere, αmin and αmax are the hyper-parameters that allow\\nus to scale the attention heads. Similarly, βmin and βmax\\nlet us to vary the width of FFN layers. Therefore, vary-\\ning the configuration of standard transformer layers using α\\nand β results in non-uniform allocation of parameters in the\\nmodel. Note, setting αmin = αmax = 1.0 and mi = m\\nproduces the standard uniform transformer model.\\n2.2. Pre-training data\\nFor pre-training, we use public datasets. Specifically,\\nour pre-training dataset contains RefinedWeb [35], dedupli-\\ncated PILE [15], a subset of RedPajama [11], and a subset', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='of Dolma v1.6 [42], totaling approximately 1.8 trillion to-\\nkens. These details are also summarized in Tab. 2.\\nOn-the-fly tokenization and data filtering.\\nUnlike pre-\\nvious approaches that utilize pre-tokenized data [5,17], we\\nfilter and tokenize text data on-the-fly. This facilitates seam-\\nless experimentation with various tokenizers, thereby sig-\\nnificantly simplifying prototyping and research endeavors.\\nIn our experiments, we use the same tokenizer as used in\\nLLama [46].\\nTo filter out low-length sequences, we apply two filter-\\ning methods. 
The first method operates at the character-\\nlevel, checking if the number of characters in the sequence', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='is below a specified threshold. The second method operates\\nat the token-level, where it examines whether the sequence\\ncontains fewer tokens than a specified threshold. Sequences\\nthat are shorter than either of these thresholds are skipped.\\nIn our experiments, we use 200 characters and 256 tokens\\nas character and token-level filtering thresholds.\\n2.3. Training details\\nWe train OpenELM variants for 350k iterations (or train-\\ning steps) using CoreNet (formerly CVNets [29]). We use\\nAdamW [28] as an optimizer. We use a cosine learning rate\\nSource\\nSubset\\nTokens\\nRefinedWeb\\n665 B\\nRedPajama\\nGithub\\n59 B\\nBooks\\n26 B\\nArXiv\\n28 B\\nWikipedia\\n24 B\\nStackExchange\\n20 B\\nC4\\n175 B\\nPILE\\n207 B\\nDolma\\nThe Stack\\n411 B', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Reddit\\n89 B\\nPeS2o\\n70 B\\nProject Gutenberg\\n6 B\\nWikipedia + Wikibooks\\n4.3 B\\nTable 2. Dataset used for pre-training OpenELM.\\n2\\nTask\\nMetric\\nARC-c\\nNormalized accuracy\\nARC-e\\nNormalized accuracy\\nBoolQ\\nAccuracy\\nHellaSwag\\nNormalized accuracy\\nPIQA\\nNormalized accuracy\\nSciQ\\nAccuracy\\nWinoGrande\\nAccuracy\\n(a) Standard zero-shot metrics\\nTask\\nMetric\\nNum. few\\nshot examples\\nARC-c\\nNormalized accuracy\\n25\\nHellaSwag\\nNormalized accuracy\\n10\\nMMLU\\nAccuracy\\n5\\nTruthfulQA-mc2\\nAccuracy\\n0\\nWinoGrande\\nAccuracy\\n5\\n(b) OpenLLM leaderboard\\nTask\\nMetric\\nNum. 
few\\nshot examples\\nARC-c\\nNormalized accuracy\\n25\\nCrowsPairs-En\\nPCT stereotype\\n25\\nHellaSwag\\nNormalized accuracy\\n10\\nWinoGrande\\nAccuracy\\n5\\nMMLU\\nAccuracy\\n5\\nPIQA\\nNormalized accuracy', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='0\\nRACE\\nAccuracy\\n0\\n(c) LLM360\\nTable 3. Tasks and metrics used for evaluating OpenELM.\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n22.5\\n25.0\\n27.5\\n30.0\\n32.5\\n35.0\\nAccuracy (in %)\\n(a) ARC-c\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n40\\n45\\n50\\n55\\n60\\nAccuracy (in %)\\n(b) ARC-e\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n50\\n55\\n60\\n65\\nAccuracy (in %)\\n(c) BoolQ\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n40\\n50\\n60\\n70\\nAccuracy (in %)\\n(d) HellaSwag\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n66\\n68\\n70\\n72\\n74\\n76\\n78\\nAccuracy (in %)\\n(e) PIQA\\n50\\n100\\n150\\n200\\n250\\n300\\n350\\nTraining iterations (in thousands)\\n77.5\\n80.0\\n82.5\\n85.0', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring $2\times$ fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at \url{https://github.com/apple/corenet}. Additionally, OpenELM models can be found on HuggingFace at: \url{https://huggingface.co/apple/OpenELM}.'}),

Document(page_content='[Figure 1 panels: accuracy (in %) vs. training iterations (in thousands) for tasks including (f) SciQ and (g) WinoGrande, for OpenELM sizes 270M, 450M, 1.1B, and 3B.] Figure 1. OpenELM’s performance across training iterations on standard zero-shot tasks. In the majority of tasks, the performance of OpenELM shows improvement with increasing training duration. Furthermore, the model checkpoint obtained by averaging the last five checkpoints, collected at intervals of 5k iterations, demonstrates comparable or slightly better performance as compared to the last checkpoint obtained after 350k iterations.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring $2\times$ fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at \url{https://github.com/apple/corenet}. Additionally, OpenELM models can be found on HuggingFace at: \url{https://huggingface.co/apple/OpenELM}.'}),

Document(page_content='schedule [27], with warm-up of 5k iterations, and decay the final learning rate down to 10% of the maximum learning rate. We use a weight decay of 0.1 and gradient clipping of 1.0. We train four variants of OpenELM (270M, 450M, 1.1B, and 3B), and for some, we use FSDP [56] and activation checkpointing [8]. Please refer to Appendix A for additional pre-training details. 2.4. Evaluation details. Following previous works, we evaluate the performance across different tasks using LM Evaluation Harness [16]: • Standard zero-shot tasks. We consider 7 standard common-sense reasoning tasks: ARC easy and challenge [10], BoolQ [9], HellaSwag [52], PIQA [6], SciQ [49], and WinoGrande [39].', metadata={...}),

Document(page_content='• OpenLLM leaderboard tasks. We use 5 tasks from the OpenLLM leaderboard [4]: ARC challenge, HellaSwag, MMLU [20], TruthfulQA [24], and WinoGrande. • LLM360 leaderboard tasks. We use 7 tasks from the LLM360 leaderboard [26] for evaluation: ARC challenge, CrowS-Pairs (English version) [32], HellaSwag, WinoGrande, MMLU, PIQA, and RACE [23]. (Footnote 1: We use commit dc90fec of https://github.com/EleutherAI/lm-evaluation-harness.) These evaluation frameworks, built on top of LM Evaluation Harness, allow us to comprehensively evaluate OpenELM in terms of reasoning (e.g., ARC-c, HellaSwag, and PIQA), knowledge understanding (e.g., MMLU and RACE), and misinformation & bias (e.g., TruthfulQA and', metadata={...}),

Document(page_content='CrowS-Pairs). While there may be some overlap in tasks among these frameworks, they primarily differ in the few-shot settings, as outlined in Tab. 3. 3. Experimental Results. Pre-training results. We evaluate the performance of OpenELM in zero-shot and few-shot settings (Tab. 3). We compare OpenELM with publicly available LLMs, namely Pythia [5], Cerebras-GPT [14], TinyLlama [54], OpenLM [18], MobiLlama [44], and OLMo [17]. The works most closely related to ours are MobiLlama and OLMo. These models are trained on comparable dataset mixtures, with a similar or larger number of pre-training tokens. In Fig. 1, the accuracy of OpenELM is plotted against', metadata={...}),

Document(page_content='training iterations for 7 standard zero-shot tasks. We observe an overall increase in accuracy with longer training.
Model | Model size | Pretraining tokens | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | SciQ | WinoGrande | Average | Average w/o SciQ
OpenELM (Ours) | 0.27 B | 1.5 T | 26.45 | 45.08 | 53.98 | 46.71 | 69.75 | 84.70 | 53.91 | 54.37 | 49.31
MobiLlama [44] | 0.50 B | 1.3 T | 26.62 | 46.04 | 55.72 | 51.06 | 71.11 | 83.60 | 53.20 | 55.34 | 50.63
OpenELM (Ours) | 0.45 B | 1.5 T | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | 52.62
TinyLlama [54] | 1.10 B | 3.0 T | 30.12 | 55.25 | 57.83 | 59.20 | 73.29 | - | 59.12 | - | 55.80
OpenLM [18] | 1.00 B | 1.6 T | 31.00 | 56.00 | 65.00 | 61.00 | 74.00 | - | 60.00 | - | 57.83
MobiLlama [44] | 0.80 B | 1.3 T | 28.84 | 49.62 | 60.03 | 52.45 | 73.18 | 85.90 | 55.96 | 58.00 | 53.35
MobiLlama [44] | 1.26 B | 1.3 T | 31.91 | 56.65 | 60.34 | 62.18 | 74.81 | 89.10 | 59.27 | 62.04 | 57.53
OLMo [17] | 1.18 B | 3.0 T | 31.06 | 57.28 | 61.74 | 62.92 | 75.14 | 87.00 | 59.98 | 62.16 | 58.02
OpenELM (Ours) | 1.08 B | 1.5 T | 32.34 | 55.43 | 63.58 | 64.81 | 75.57 | 90.60 | 61.72 | 63.44 | 58.91
OpenELM (Ours) | 3.04 B | 1.5 T | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | 92.70 | 65.51 | 67.39 | 63.18
(a) Results on zero-shot tasks with respect to the standard metrics defined in Tab. 3a.', metadata={...}),

Document(page_content='Model | Model size | Pretraining tokens | ARC-c | HellaSwag | MMLU | TruthfulQA-mc2 | WinoGrande | Average
Cerebras-GPT [14] | 0.26 B | 5.1 B | 22.01 | 28.99 | 26.83 | 45.98 | 52.49 | 35.26
OPT [55] | 0.35 B | 0.2 T | 23.55 | 36.73 | 26.02 | 40.83 | 52.64 | 35.95
OpenELM (Ours) | 0.27 B | 1.5 T | 27.65 | 47.15 | 25.72 | 39.24 | 53.83 | 38.72
Pythia [5] | 0.41 B | 0.3 T | 24.83 | 41.29 | 25.99 | 40.95 | 54.38 | 37.49
MobiLlama [44] | 0.50 B | 1.3 T | 29.52 | 52.75 | 26.09 | 37.55 | 56.27 | 40.44
OpenELM (Ours) | 0.45 B | 1.5 T | 30.20 | 53.86 | 26.01 | 40.18 | 57.22 | 41.50
MobiLlama [44] | 0.80 B | 1.3 T | 30.63 | 54.17 | 25.20 | 38.41 | 56.35 | 40.95
Pythia [5] | 1.40 B | 0.3 T | 32.68 | 54.96 | 25.56 | 38.66 | 57.30 | 41.83
MobiLlama [44] | 1.26 B | 1.3 T | 34.64 | 63.27 | 23.87 | 35.19 | 60.77 | 43.55
OLMo [17] | 1.18 B | 3.0 T | 34.47 | 63.81 | 26.16 | 32.94 | 60.46 | 43.57
OpenELM (Ours) | 1.08 B | 1.5 T | 36.69 | 65.71 | 27.05 | 36.98 | 63.22 | 45.93
OpenELM (Ours) | 3.04 B | 1.5 T | 42.24 | 73.28 | 26.76 | 34.98 | 67.25 | 48.90
(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 3b.', metadata={...}),

Document(page_content='Model | Model size | Pretraining tokens | ARC-c | CrowS-Pairs | HellaSwag | MMLU | PIQA | RACE | TruthfulQA | WinoGrande | Average
OpenELM (Ours) | 0.27 B | 1.5 T | 27.65 | 66.79 | 47.15 | 25.72 | 69.75 | 30.91 | 39.24 | 53.83 | 45.13
MobiLlama [44] | 0.50 B | 1.3 T | 29.52 | 65.47 | 52.75 | 26.09 | 71.11 | 32.15 | 37.55 | 56.27 | 46.37
OpenELM (Ours) | 0.45 B | 1.5 T | 30.20 | 68.63 | 53.86 | 26.01 | 72.31 | 33.11 | 40.18 | 57.22 | 47.69
MobiLlama [44] | 0.80 B | 1.3 T | 30.63 | 66.25 | 54.17 | 25.20 | 73.18 | 33.68 | 38.41 | 56.35 | 47.23
MobiLlama [44] | 1.26 B | 1.3 T | 34.64 | 70.24 | 63.27 | 23.87 | 74.81 | 35.02 | 35.19 | 60.77 | 49.73
OLMo [17] | 1.18 B | 3.0 T | 34.47 | 69.95 | 63.81 | 26.16 | 75.14 | 36.75 | 32.94 | 60.46 | 49.96
OpenELM (Ours) | 1.08 B | 1.5 T | 36.69 | 71.74 | 65.71 | 27.05 | 75.57 | 36.46 | 36.98 | 63.22 | 51.68
OpenELM (Ours) | 3.04 B | 1.5 T | 42.24 | 73.29 | 73.28 | 26.76 | 78.24 | 38.76 | 34.98 | 67.25 | 54.35
(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.
Table 4. Comparison of OpenELM with publicly available LLMs across various evaluation frameworks. We chose MobiLlama and OLMo as our baselines because they are pre-trained on public datasets using a similar or larger number of tokens. We evaluate OpenELM, MobiLlama, and OLMo using the same LM evaluation harness version. Results for other models in Tab. 4a and Tab. 4b are taken from their official GitHub repositories and the OpenLLM leaderboard [4], respectively. Best task accuracy for each model category is highlighted in bold. Models pre-trained with less data are highlighted in gray color.', metadata={...}),

Document(page_content='durations across most tasks. Additionally, the checkpoint obtained by averaging the last five checkpoints, collected at intervals of 5000 iterations, demonstrates comparable or slightly better accuracy compared to the final checkpoint obtained after 350k iterations. This improvement is likely due to noise reduction through weight averaging. Consequently, we use the averaged checkpoint for our main evaluations in Tab. 4, instruction tuning experiments in Tab. 5, and parameter-efficient tuning experiments in Tab. 6. The results in Tab. 4 span various evaluation frameworks and highlight OpenELM’s effectiveness over existing methods. For instance, an OpenELM variant with', metadata={...}),

Document(page_content='1.1 billion parameters achieves 1.28% (Tab. 4a), 2.36% (Tab. 4b), and 1.72% (Tab. 4c) higher accuracy compared to OLMo with 1.2 billion parameters. Remarkably, OpenELM achieves this level of accuracy while using 2× less pre-training data. Instruction tuning results. We use the cleaned variant of the UltraFeedback [3, 12] dataset, which consists of 60k prompts, for instruction tuning. We do instruction tuning using the Alignment Handbook library [47]. For optimization, we use either the statistical rejection sampling method [25] or the direct preference optimization method [37]. These sampling method details, along with other hyper-parameters and fine-tuning details, are given in Appendix B.', metadata={...}),

Document(page_content='Tab. 5 shows that instruction tuning consistently improves OpenELM’s average accuracy by 1-2% across different evaluation frameworks.
Model Size | Instruction Tuned? | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | SciQ | WinoGrande | Average
0.27 B | ✗ | 26.45 | 45.08 | 53.98 | 46.71 | 69.75 | 84.70 | 53.91 | 54.37
0.27 B | ✓ | 30.55 | 46.68 | 48.56 | 52.07 | 70.78 | 84.40 | 52.72 | 55.11
0.45 B | ✗ | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56
0.45 B | ✓ | 30.38 | 50.00 | 60.37 | 59.34 | 72.63 | 88.00 | 58.96 | 59.95
1.08 B | ✗ | 32.34 | 55.43 | 63.58 | 64.81 | 75.57 | 90.60 | 61.72 | 63.44
1.08 B | ✓ | 37.97 | 52.23 | 70.00 | 71.20 | 75.03 | 89.30 | 62.75 | 65.50
3.04 B | ✗ | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | 92.70 | 65.51 | 67.39
3.04 B | ✓ | 39.42 | 61.74 | 68.17 | 76.36 | 79.00 | 92.50 | 66.85 | 69.15
(a) Results on zero-shot tasks with respect to the metrics defined in Tab. 3a.', metadata={...}),

Document(page_content='Model Size | Instruction Tuned? | ARC-c | HellaSwag | MMLU | TruthfulQA | WinoGrande | Average
0.27 B | ✗ | 27.65 | 47.15 | 25.72 | 39.24 | 53.83 | 38.72
0.27 B | ✓ | 32.51 | 51.58 | 26.70 | 38.72 | 53.20 | 40.54
0.45 B | ✗ | 30.20 | 53.86 | 26.01 | 40.18 | 57.22 | 41.50
0.45 B | ✓ | 33.53 | 59.31 | 25.41 | 40.48 | 58.33 | 43.41
1.08 B | ✗ | 36.69 | 65.71 | 27.05 | 36.98 | 63.22 | 45.93
1.08 B | ✓ | 41.55 | 71.83 | 25.65 | 45.95 | 64.72 | 49.94
3.04 B | ✗ | 42.24 | 73.28 | 26.76 | 34.98 | 67.25 | 48.90
3.04 B | ✓ | 47.70 | 76.87 | 24.80 | 38.76 | 67.96 | 51.22
(b) Results on OpenLLM Leaderboard tasks with respect to the metrics defined in Tab. 3b.', metadata={...}),

Document(page_content='Model Size | Instruction Tuned? | ARC-c | CrowS-Pairs | HellaSwag | MMLU | PIQA | RACE | TruthfulQA | WinoGrande | Average
0.27 B | ✗ | 27.65 | 66.79 | 47.15 | 25.72 | 69.75 | 30.91 | 39.24 | 53.83 | 45.13
0.27 B | ✓ | 32.51 | 66.01 | 51.58 | 26.70 | 70.78 | 33.78 | 38.72 | 53.20 | 46.66
0.45 B | ✗ | 30.20 | 68.63 | 53.86 | 26.01 | 72.31 | 33.11 | 40.18 | 57.22 | 47.69
0.45 B | ✓ | 33.53 | 67.44 | 59.31 | 25.41 | 72.63 | 36.84 | 40.48 | 58.33 | 49.25
1.08 B | ✗ | 36.69 | 71.74 | 65.71 | 27.05 | 75.57 | 36.46 | 36.98 | 63.22 | 51.68
1.08 B | ✓ | 41.55 | 71.02 | 71.83 | 25.65 | 75.03 | 39.43 | 45.95 | 64.72 | 54.40
3.04 B | ✗ | 42.24 | 73.29 | 73.28 | 26.76 | 78.24 | 38.76 | 34.98 | 67.25 | 54.35
3.04 B | ✓ | 47.70 | 72.33 | 76.87 | 24.80 | 79.00 | 38.47 | 38.76 | 67.96 | 55.73
(c) Results on LLM360 tasks with respect to the metrics defined in Tab. 3c.
Table 5. Instruction tuning improves OpenELM’s accuracy across different model sizes.', metadata={...}),

Document(page_content='Parameter-efficient fine-tuning (PEFT) results. We use the CommonSense reasoning training and evaluation setup [22]. This setup provides 170k training samples across 8 multiple-choice datasets for PEFT studies with different methods, including LoRA [21] and DoRA [51]. We integrate OpenELM with these methods and finetune the resulting model for three epochs using 8 NVIDIA H100 GPUs. Tab. 6 shows that PEFT methods can be applied to OpenELM. LoRA and DoRA deliver similar accuracy on average across the given CommonSense reasoning datasets.
4.
Benchmarking\\nHardware.\\nWe benchmark on modern, consumer-grade\\nhardware with BFloat16 as the data type.\\nSpecifically,\\nCUDA benchmarks were performed on a workstation with', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='an Intel i9-13900KF CPU, equipped with 64 GB of DDR5-\\n4000 DRAM, and an NVIDIA RTX 4090 GPU with 24 GB\\nof VRAM, running Ubuntu 22.04. PyTorch v2.2.2 [34] was\\nused, with the most recent versions of models and the as-\\nsociated libraries. HuggingFace Transformers v4.39.3 [50]\\nwas used to benchmark HuggingFace models. We did not\\nuse Torch Inductor for model compilation.\\nTo benchmark OpenELM models on the Apple silicon,\\nwe used an Apple MacBook Pro with an M2 Max system-\\non-chip and 64GiB of RAM, running macOS 14.4.1. We\\nported the code and the weights of OpenELM to Apple\\nMLX v0.10.0 [19]. To maximize the throughput, lazy eval-\\nuation was used in MLX with 8 tokens evaluated at a time.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Evaluation.\\nWe provide two separate measurements for\\ntoken throughput (measured in terms of tokens processed\\nper second): (1) prompt processing (pre-fill), and (2) token\\ngeneration. Additionally, we also report the total combined\\nthroughput.\\nWe benchmark all models sequentially, and\\nexecute one full “dry run” generating 1024 tokens for the\\nfirst model, since we found that this significantly increases\\nthe throughput of generation for subsequent models. Be-\\nfore measurement for each individual model, we warm up\\nthe model by executing a single forward pass to allow the\\nframeworks to perform further auto-tuning, if any. 
In all\\nexperiments, we use key-value caching and generate 1024', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='tokens in addition to the prompt tokens in all tests. 
Static\\n5\\nModel Size\\nPEFT\\nARC-c\\nARC-e\\nBoolQ\\nHellaSwag\\nPIQA\\nSIQA\\nWinoGrande\\nOBQA\\nAverage\\n0.27 B\\nLoRA\\n24.57\\n26.60\\n62.14\\n24.84\\n50.05\\n42.02\\n49.88\\n28.00\\n38.51\\nDoRA\\n26.19\\n28.07\\n62.20\\n25.22\\n50.11\\n44.42\\n50.12\\n31.20\\n39.69\\n0.45 B\\nLoRA\\n28.67\\n29.88\\n62.29\\n25.85\\n52.39\\n49.59\\n50.91\\n33.20\\n41.60\\nDoRA\\n28.33\\n30.39\\n62.26\\n25.12\\n52.29\\n49.28\\n50.83\\n32.00\\n41.31\\n1.08 B\\nLoRA\\n45.14\\n61.11\\n61.77\\n77.95\\n72.31\\n69.70\\n61.64\\n59.20\\n63.60\\nDoRA\\n44.11\\n61.49\\n61.68\\n78.92\\n71.38\\n69.04\\n64.01\\n58.80\\n63.68\\n3.04 B\\nLoRA\\n46.93\\n66.25\\n62.48\\n81.22\\n75.19\\n70.62\\n65.51\\n58.20\\n65.80\\nDoRA\\n46.50\\n66.46\\n62.35\\n80.84\\n75.73\\n70.83\\n63.77\\n58.20\\n65.59', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Table 6. OpenELM with PEFT. Both LoRA and DoRA demonstrate comparable performance when OpenELM is finetuned on Common-\\nSense reasoning benchmark. It’s important to note that these fine-tuning results, obtained using the evaluation setup of LLM-Adapters [22],\\ndiffer from the results in Tabs. 4 and 5. This is because the results in Tabs. 4 and 5 are obtained under zero- and few-shot settings using\\nLM Evaluation Harness. Note that we did not use social interactions QA (SIQA; [40]) and OpenBookQA (OBQA; [31]) in Tabs. 
4 and 5\\nbecause of evaluation issues with LLama tokenizer in LM Evaluation Harness (see [45]).\\nModel\\nModel size\\nThroughput (Tokens per second)\\nPrompt\\nGeneration\\nTotal\\nOPT [55]', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='0.35 B\\n6524.17\\n214.11\\n220.21\\nOpenELM (Ours)\\n0.27 B\\n6427.27\\n159.67\\n165.85\\nMobiLlama [44]\\n0.50 B\\n3423.25\\n136.35\\n146.86\\nOpenELM (Ours)\\n0.45 B\\n5211.35\\n128.46\\n133.42\\nMobiLlama [44]\\n0.80 B\\n4151.75\\n126.01\\n130.08\\nPythia [5]\\n1.40 B\\n4501.85\\n139.65\\n143.83\\nMobiLlama [44]\\n1.26 B\\n4938.29\\n142.96\\n147.67\\nOLMo [17]\\n1.18 B\\n7151.65\\n203.40\\n209.26\\nOpenELM (Ours)\\n1.08 B\\n3681.73\\n92.15\\n95.72\\nOpenELM (Ours)\\n3.04 B\\n2712.56\\n70.11\\n72.82\\n(a) Results on NVIDIA CUDA / Linux.\\nModel\\nThroughput (Tokens per second)\\nPrompt\\nGeneration\\nTotal\\nOpenELM-0.27B\\n1151.41\\n212.40\\n218.45\\nOpenELM-0.27B-4bit\\n803.99\\n256.35\\n262.70\\nOpenELM-0.45B\\n910.61\\n147.26\\n151.57\\nOpenELM-0.45B-4bit\\n883.19\\n197.81\\n203.16\\nOpenELM-1.08B\\n508.56\\n78.72\\n81.04', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='OpenELM-1.08B-4bit\\n554.17\\n117.90\\n121.14\\nOpenELM-3.04B-bf16\\n234.96\\n33.96\\n34.97\\nOpenELM-3.04B-bf16-4bit\\n211.32\\n60.33\\n61.83\\n(b) Results for the MLX port on Apple macOS.\\nTable 7. Benchmark measurements of OpenELM compared\\nto other similar LLMs in its class..\\nOn CUDA, we evaluate\\nOpenELM, MobiLlama, and OLMo using the CoreNet version of\\nOpenELM and HuggingFace for the other two. On macOS, we\\nonly provide results for the MLX version of OpenELM.\\nkey-value cache was used whenever supported. The same\\nprompt was used for all runs, resulting in prompt lengths of\\n35-36 tokens (depending on the tokenizer).\\nResults.\\nTabs. 
7a and 7b shows the benchmarking re-\\nsults on GPU and MacBook Pro respectively.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Despite\\nOpenELM’s higher accuracy for a similar parameter count,\\nwe observe that it is slower than OLMo. While the primary\\nfocus of this study is reproducibility rather than inference\\nModel\\nNormalization layer\\nThroughput (Tokens per second)\\n(# Invocations per token)\\nPrompt\\nGeneration\\nTotal\\nOLMo\\nLayerNorm (33)\\n7151.65\\n203.40\\n209.26\\nRMSNorm-Naive (33)\\n5360.56\\n171.41\\n176.92\\nOpenELM (Ours)\\nLayerNorm (113)\\n4697.50\\n130.34\\n135.38\\nRMSNorm-Naive (113)\\n3681.73\\n92.15\\n95.72\\nRMSNorm-Apex (113)\\n4280.66\\n113.42\\n117.81\\nTable 8. Normalization layers are a bottleneck. The through-\\nput of both OLMo-1.18B and OpenELM-1.08B significantly de-\\ncreases with the naive implementation of RMSNorm in PyTorch', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='compared to highly optimized LayerNorm [2]. Although Apex’s\\n[33] RMSNorm implementation leads to notable throughput im-\\nprovements compared to the naive implementation, a considerable\\nperformance gap persists in comparison to LayerNorm. This high-\\nlights the substantial optimization potential for future endeavors.\\nThe number of invocations per token for each normalization layer\\nis indicated next to the layer name in brackets.\\nperformance, we did comprehensive profiling to understand\\nthe bottlenecks. Our analysis reveals that a significant por-\\ntion of OpenELM’s processing time can be attributed to our\\nnaive implementation of RMSNorm (Tab. 8). 
Specifically,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='naive RMSNorm implementation results in many individ-\\nual kernel launches each of which processes a small input,\\nrather than a launch of a single, fused kernel, as would be\\nthe case with e.g. LayerNorm. By replacing the naive RM-\\nSNorm with Apex’s RMSNorm [33], we observe a notable\\nincrease in OpenELM’s throughput. However, a substantial\\nperformance gap persists compared to the models that use\\noptimized LayerNorm, in part because (1) OpenELM has\\n113 RMSNorm layers as compared to 33 LayerNorm layers\\nin OLMo and (2) Apex’s RMSNorm is not optimized for\\nsmall inputs. To further illustrate the performance degrada-\\ntion attributable to RMSNorm, we replaced the LayerNorm', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='in OLMo with RMSNorm, and observed a significant drop\\nin generation throughput. In future work, we plan to explore\\noptimization strategies to further improve the inference ef-\\nficiency of OpenELM.\\n6\\n5. Conclusion\\nThis\\nwork\\nreleases\\nOpenELM,\\na\\ndecoder-only\\ntransformer-based open language model. The OpenELM\\nuses a layer-wise scaling method for efficient parameter\\nallocation within the transformer model,\\nresulting in\\nimproved accuracy compared to existing models.\\nAddi-\\ntionally, we have made the entire framework open-source,\\nincluding training logs, multiple checkpoints, pre-training\\nconfigurations, and MLX inference code. 
This extensive\nrelease aims to empower and strengthen the open research', metadata={...}), Document(page_content='community, facilitating future research efforts.\nAuthor Contributions\nThe OpenELM project was led by Sachin Mehta, with\nadditional lead contributions from Mohammad Rastegari\nand Peter Zatloukal. OpenELM would not have been possi-\nble without the help of our many teammates and collabora-\ntors. We list author contributions below:\nPre-training dataset collection and tooling:\nSachin\nMehta and Mohammad Sekhavat\nArchitecture design: Sachin Mehta\nModel training: Sachin Mehta and Mohammad Sekha-\nvat\nEvaluation suite and tooling: Sachin Mehta, Qingqing\nCao, Mohammad Sekhavat, Mahyar Najibi, Maxwell\nHorton, and Iman Mirzadeh.\nHuggingface integration: Qingqing Cao\nInstruction tuning: Qingqing Cao', metadata={...}), Document(page_content='Parameter-efficient finetuning: Maxwell Horton\nPerformance analysis and MLX conversion: Chenfan\nSun, Dmitry Belenko, and Mahyar Najibi\nCode review, bug fixes, and maintenance:\nSachin\nMehta, Maxwell Horton, Mohammad Shekhavat, and\nYanzi Jin\nAcknowledgements\nWe extend our gratitude to the following people for dis-\ncussions and assistance: Farzad Abdolhosseini, David Har-\nrison, Mehrdad Farajtabar, Fartash Faghri, Oncel Tuzel,\nHadipour Ansari, Raviteja Vemulapalli, Aseem Wadhwa,\nKumari Nishu, Danny Tormoen, Minsik Cho, Jason Rama-\npuram, Rich Moe, Arsalan Farooq, Dom L’Eplattenier,\nMayank Goel, Hassan Babaie, Chong Wang, Ruoming\nPang, Tom Gunter, Antonie Lin, Irina Belousova, and Joris', metadata={...}), Document(page_content='Pelemans.\nBroader Impact\nThe release of OpenELM models aims to empower and\nenrich the open research community by providing access\nto state-of-the-art language models.\nTrained on publicly\navailable datasets, these models are made available with-\nout any safety guarantees. Consequently, there exists the\npossibility of these models producing outputs that are in-\naccurate, harmful, biased, or objectionable in response to\nuser prompts. Thus, it is imperative for users and devel-\nopers to undertake thorough safety testing and implement\nappropriate filtering mechanisms tailored to their specific\nrequirements.\nReferences\n[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury', metadata={...}), Document(page_content='Zemlyanskiy, Federico Lebr´\non, and Sumit Sanghai.\nGqa:\nTraining generalized multi-query transformer models from\nmulti-head checkpoints. arXiv preprint arXiv:2305.13245,\n2023. 2\n[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin-\nton. Layer normalization. arXiv preprint arXiv:1607.06450,\n2016. 6\n[3] Alvaro Bartolome, Gabriel Martin, and Daniel Vila. Notus.\nhttps://github.com/argilla-io/notus, 2023.\n4\n[4] Edward Beeching, Cl´\nementine Fourrier, Nathan Habib,\nSheon Han, Nathan Lambert, Nazneen Rajani, Omar San-\nseviero, Lewis Tunstall, and Thomas Wolf.\nOpen llm\nleaderboard. https://huggingface.co/spaces/\nHuggingFaceH4/open_llm_leaderboard, 2023. 1,\n3, 4', metadata={...}), Document(page_content='[5] Stella Biderman, Hailey Schoelkopf, Quentin Gregory An-\nthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mo-\nhammad Aflah Khan, Shivanshu Purohit, USVSN Sai\nPrashanth, Edward Raff, et al. Pythia: A suite for analyz-\ning large language models across training and scaling. In In-\nternational Conference on Machine Learning, pages 2397–\n2430. PMLR, 2023. 1, 2, 3, 4, 6\n[6] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi,\net al. Piqa: Reasoning about physical commonsense in nat-\nural language. In Proceedings of the AAAI conference on\nartificial intelligence, volume 34, pages 7432–7439, 2020. 3\n[7] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Sub-', metadata={...}), Document(page_content='biah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakan-\ntan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-\nguage models are few-shot learners. Advances in neural in-\nformation processing systems, 33:1877–1901, 2020. 1\n[8] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin.\nTraining deep nets with sublinear memory cost.\narXiv\npreprint arXiv:1604.06174, 2016. 3\n[9] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom\nKwiatkowski, Michael Collins, and Kristina Toutanova.\nBoolq: Exploring the surprising difficulty of natural yes/no\nquestions. arXiv preprint arXiv:1905.10044, 2019. 3\n7\n[10] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,', metadata={...}), Document(page_content='Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.\nThink you have solved question answering?\ntry arc, the\nai2 reasoning challenge. arXiv preprint arXiv:1803.05457,\n2018. 3\n[11] Together Computer. Redpajama: An open source recipe to\nreproduce llama training dataset, 2023. 2\n[12] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei\nZhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun.\nUltrafeedback: Boosting language models with high-quality\nfeedback, 2023. 4\n[13] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christo-\npher R´\ne. Flashattention: Fast and memory-efficient exact at-\ntention with io-awareness. Advances in Neural Information\nProcessing Systems, 35:16344–16359, 2022. 2', metadata={...}), Document(page_content='[14] Nolan Dey, Gurpreet Gosal, Hemant Khachane, William\nMarshall, Ribhu Pathria, Marvin Tom, Joel Hestness, et al.\nCerebras-gpt:\nOpen compute-optimal language models\ntrained on the cerebras wafer-scale cluster. arXiv preprint\narXiv:2304.03208, 2023. 3, 4\n[15] Leo Gao, Stella Biderman, Sid Black, Laurence Golding,\nTravis Hoppe, Charles Foster, Jason Phang, Horace He, An-\nish Thite, Noa Nabeshima, et al.\nThe pile: An 800gb\ndataset of diverse text for language modeling. arXiv preprint\narXiv:2101.00027, 2020. 2\n[16] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, An-\nthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu,\nKyle McDonell, Niklas Muennighoff, Jason Phang, Laria', metadata={...}), Document(page_content='Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,\nand Andy Zou. A framework for few-shot language model\nevaluation, Sept. 2021. 3\n[17] Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia,\nRodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish\nIvison, Ian Magnusson, Yizhong Wang, et al. Olmo: Ac-\ncelerating the science of language models. arXiv preprint\narXiv:2402.00838, 2024. 1, 2, 3, 4, 6\n[18] Suchin Gururangan, Mitchell Wortsman, Samir Yitzhak\nGadre, Achal Dave, Maciej Kilian, Weijia Shi, Jean Mer-\ncat, Georgios Smyrnis, Gabriel Ilharco, Matt Jordan, Rein-\nhard Heckel, Alex Dimakis, Ali Farhadi, Vaishaal Shankar,\nand Ludwig Schmidt. OpenLM: A minimal but performative', metadata={...}), Document(page_content='language modeling (lm) repository, 2023. GitHub repository.\n3, 4\n[19] Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and\nRonan Collobert. MLX: Efficient and flexible machine learn-\ning on apple silicon, 2024. 5\n[20] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,\nMantas Mazeika, Dawn Song, and Jacob Steinhardt. Mea-\nsuring massive multitask language understanding.\narXiv\npreprint arXiv:2009.03300, 2020. 3\n[21] J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-\nZhu, Yuanzhi Li, Shean Wang, and Weizhu Chen.\nLora:\nLow-rank adaptation of large language models.\nArXiv,\nabs/2106.09685, 2021. 5\n[22] Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-', metadata={...}), Document(page_content='Peng Lim, Roy Ka-Wei Lee, Lidong Bing, and Soujanya\nPoria.\nLlm-adapters: An adapter family for parameter-\nefficient fine-tuning of large language models.\nArXiv,\nabs/2304.01933, 2023. 5, 6\n[23] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang,\nand Eduard Hovy.\nRace:\nLarge-scale reading com-\nprehension dataset from examinations.\narXiv preprint\narXiv:1704.04683, 2017. 3\n[24] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa:\nMeasuring how models mimic human falsehoods.\narXiv\npreprint arXiv:2109.07958, 2021. 3\n[25] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mo-\nhammad Saleh, Peter J. Liu, and Jialu Liu. Statistical Rejec-\ntion Sampling Improves Preference Optimization, Jan. 2024.', metadata={...}), Document(page_content='arXiv:2309.06657 [cs]. 4\n[26] Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi\nWang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi\nSun, Omkar Pangarkar, et al. Llm360: Towards fully trans-\nparent open-source llms. arXiv preprint arXiv:2312.06550,\n2023. 3\n[27] Ilya Loshchilov and Frank Hutter.\nSgdr:\nStochas-\ntic gradient descent with warm restarts.\narXiv preprint\narXiv:1608.03983, 2016. 3\n[28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay\nregularization. arXiv preprint arXiv:1711.05101, 2017. 2\n[29] Sachin Mehta, Farzad Abdolhosseini, and Mohammad\nRastegari. Cvnets: High performance library for computer\nvision. In Proceedings of the 30th ACM International Con-', metadata={...}), Document(page_content='ference on Multimedia, pages 7327–7330, 2022. 2\n[30] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke\nZettlemoyer, and Hannaneh Hajishirzi. Delight: Deep and\nlight-weight transformer. arXiv preprint arXiv:2008.00623,\n2020. 1, 2\n[31] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sab-\nharwal.\nCan a suit of armor conduct electricity?\na new\ndataset for open book question answering. arXiv preprint\narXiv:1809.02789, 2018. 6\n[32] Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R\nBowman. Crows-pairs: A challenge dataset for measuring\nsocial biases in masked language models.\narXiv preprint\narXiv:2010.00133, 2020. 3\n[33] NVIDIA Corporation. Apex: A pytorch extension with tools', metadata={...}), Document(page_content='for mixed precision training and more. GitHub, 2024. 6\n[34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,\nJames Bradbury, Gregory Chanan, Trevor Killeen, Zeming\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison,\nAndreas Kopf, Edward Yang, Zachary DeVito, Martin Rai-\nson, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner,\nLu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An\nImperative Style, High-Performance Deep Learning Library.\nIn H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´\ne\nBuc, E. Fox, and R. Garnett, editors, Advances in Neural In-\nformation Processing Systems 32, pages 8024–8035. Curran\nAssociates, Inc., 2019. 5', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\nadvancing open research, ensuring the trustworthiness of results, and enabling\ninvestigations into data and model biases, as well as potential risks. To this\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\nlayer-wise scaling strategy to efficiently allocate parameters within each\nlayer of the transformer model, leading to enhanced accuracy. For example, with\na parameter budget of approximately one billion parameters, OpenELM exhibits a\n2.36% improvement in accuracy compared to OLMo while requiring $2\\times$ fewer\npre-training tokens.\n  Diverging from prior practices that only provide model weights and inference\ncode, and pre-train on private datasets, our release includes the complete\nframework for training and evaluation of the language model on publicly\navailable datasets, including training logs, multiple checkpoints, and\npre-training configurations. 
We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='[35] Guilherme Penedo, Quentin Malartic, Daniel Hesslow,\\nRuxandra Cojocaru, Alessandro Cappelli, Hamza Alobei-\\ndli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Lau-\\nnay. The refinedweb dataset for falcon llm: outperforming\\ncurated corpora with web data, and web data only. arXiv\\npreprint arXiv:2306.01116, 2023. 2\\n8\\n[36] Ofir Press and Lior Wolf. Using the output embedding to\\nimprove language models. arXiv preprint arXiv:1608.05859,\\n2016. 10\\n[37] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Er-\\nmon, Christopher D. Manning, and Chelsea Finn.\\nDirect\\nPreference Optimization: Your Language Model is Secretly\\na Reward Model, Dec. 2023. arXiv:2305.18290 [cs]. 4', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. 
OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='[38] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and\\nYuxiong He. Deepspeed: System optimizations enable train-\\ning deep learning models with over 100 billion parame-\\nters. In Proceedings of the 26th ACM SIGKDD International\\nConference on Knowledge Discovery & Data Mining, pages\\n3505–3506, 2020. 10\\n[39] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula,\\nand Yejin Choi.\\nWinogrande: An adversarial winograd\\nschema challenge at scale.\\nCommunications of the ACM,\\n64(9):99–106, 2021. 3\\n[40] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras,\\nand Yejin Choi. Socialiqa: Commonsense reasoning about\\nsocial interactions. 
arXiv preprint arXiv:1904.09728, 2019.\\n6\\n[41] Noam Shazeer.', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Glu variants improve transformer.\\narXiv\\npreprint arXiv:2002.05202, 2020. 2\\n[42] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin\\nSchwenk, David Atkinson, Russell Authur, Ben Bogin, Khy-\\nathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An\\nopen corpus of three trillion tokens for language model pre-\\ntraining research. arXiv preprint arXiv:2402.00159, 2024.\\n2\\n[43] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen\\nBo, and Yunfeng Liu. Roformer: Enhanced transformer with\\nrotary position embedding. Neurocomputing, 568:127063,\\n2024. 2\\n[44] Omkar Thawakar, Ashmal Vayani, Salman Khan, Hisham\\nCholakal, Rao M Anwer, Michael Felsberg, Tim Baldwin,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Eric P Xing, and Fahad Shahbaz Khan. Mobillama: Towards\\naccurate and lightweight fully transparent gpt. arXiv preprint\\narXiv:2402.16840, 2024. 1, 3, 4, 6\\n[45] Hsu Wan Ting. Accuracy not matched for llama1-7b. GitHub\\nissue, 2024. https://github.com/EleutherAI/\\nlm-evaluation-harness/issues/1294. 6\\n[46] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier\\nMartinet, Marie-Anne Lachaux, Timoth´\\nee Lacroix, Baptiste\\nRozi`\\nere, Naman Goyal, Eric Hambro, Faisal Azhar, et al.\\nLlama:\\nOpen and efficient foundation language models.\\narXiv preprint arXiv:2302.13971, 2023. 
1, 2\\n[47] Lewis\\nTunstall,\\nEdward\\nBeeching,\\nNathan\\nLambert,\\nNazneen Rajani, Shengyi Huang, Kashif Rasul, Alexan-', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='der M. Rush, and Thomas Wolf.\\nThe alignment hand-\\nbook.\\nhttps : / / github . com / huggingface /\\nalignment-handbook, 2023. 4\\n[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-\\nreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia\\nPolosukhin. Attention is all you need. Advances in neural\\ninformation processing systems, 30, 2017. 1\\n[49] Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowd-\\nsourcing multiple choice science questions. arXiv preprint\\narXiv:1707.06209, 2017. 3\\n[50] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chau-\\nmond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim\\nRault, R´\\nemi Louf, Morgan Funtowicz, Joe Davison, Sam', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien\\nPlu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama\\nDrame, Quentin Lhoest, and Alexander M. Rush.\\nTrans-\\nformers: State-of-the-art natural language processing.\\nIn\\nProceedings of the 2020 Conference on Empirical Methods\\nin Natural Language Processing: System Demonstrations,\\npages 38–45, Online, Oct. 2020. Association for Computa-\\ntional Linguistics. 5\\n[51] Shih yang Liu,\\nChien-Yi Wang,\\nHongxu Yin,\\nPavlo\\nMolchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng,\\nand Min-Hung Chen. Dora: Weight-decomposed low-rank\\nadaptation. ArXiv, abs/2402.09353, 2024. 
5\\n[52] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi,', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='and Yejin Choi. 
Hellaswag: Can a machine really finish your\\nsentence? arXiv preprint arXiv:1905.07830, 2019. 3\\n[53] Biao Zhang and Rico Sennrich. Root mean square layer nor-\\nmalization. Advances in Neural Information Processing Sys-\\ntems, 32, 2019. 2\\n[54] Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu.\\nTinyllama: An open-source small language model. arXiv\\npreprint arXiv:2401.02385, 2024. 3, 4\\n[55] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe,\\nMoya Chen, Shuohui Chen, Christopher Dewan, Mona Diab,\\nXian Li, Xi Victoria Lin, et al. Opt: Open pre-trained trans-\\nformer language models. arXiv preprint arXiv:2205.01068,\\n2022. 1, 4, 6', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='[56] Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-\\nChin Huang, Min Xu, Less Wright, Hamid Shojanazeri,\\nMyle Ott, Sam Shleifer, et al.\\nPytorch fsdp:\\nexperi-\\nences on scaling fully sharded data parallel. arXiv preprint\\narXiv:2304.11277, 2023. 3\\n9\\nA. Pre-training hyper-parameters\\nThe\\npre-training\\nhyper-parameters\\nfor\\ndifferent\\nOpenELM configurations are given in Tab. 9.\\n270M\\n450M\\n1.1B\\n3B\\nDimension dmodel\\n1280\\n1536\\n2048\\n3072\\nNum. of layers N\\n16\\n20\\n28\\n36\\nHead dimension dh\\n64\\n64\\n64\\n128\\nαmin, αmax (Eq. (1))\\n0.5, 1.0\\nβmin, βmax (Eq. 
(1))\\n0.5, 4.0\\nNormalization layer\\nRMSNorm\\nPositional embeddings\\nRoPE\\nAttention variant\\nGrouped query attention\\nActivation\\nSwiGLU\\nContext length\\n2048', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='Batch size (tokens)\\napprox. 4M\\nWeight tying [36]\\nyes\\nWarm-up iterations\\n5,000\\nTraining steps\\n350,000\\nWarm-up init. LR\\n0.000001\\nMax. LR\\n0.0053\\n0.0039\\n0024\\n0.0012\\nMin. LR\\n10% of the max. LR\\nLoss function\\nCross-entropy\\nOptimizer\\nAdamW (β1=0.9, β2=0.95, ϵ = 1.e −8)\\nWeight decay\\n0.1\\nActivation checkpointing\\n✗\\n✓\\n✓\\n✓\\nFSDP\\n✗\\n✗\\n✗\\n✓\\nGPUs\\n128\\n128\\n128\\n128\\nGPU Type\\nA100\\nH100\\nA100\\nH100\\nGPU Memory\\n80 GB\\n80 GB\\n80 GB\\n80 GB\\nTraining time (in days)\\n3\\n3\\n11\\n13\\nTable 9. Pre-training details for different variants of OpenELM.\\nB. Hyper-parameters for instruction tuning\\nWe conducted a grid search to determine optimal values\\nfor the learning rate and training epochs. For the learning', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='rate, we explored values in the range of [2e-5, 3e-5, 5e-5,\\n8e-5, 1e-4], while for training epochs, we investigated the\\nrange of [3, 5, 8, 10]. The final recipe selected is the one that\\nyielded the highest average accuracy across various tasks as\\npresented in Tab. 3a and Tab. 3c.\\nWe finetune all the models with BFloat16 as a data type.\\nWe use activation checkpointing along with gradient accu-\\nmulation with a step size of two. We use the AdamW op-\\ntimizer with default beta values. 
We use the cosine learn-\\ning rate scheduler with a warm-up ratio of 0.1, and we set\\nthe weight decay to 0 and loss temperature beta to 0.01.\\nWe set the maximum context length to 1024 and maximum', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. 
Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'}), Document(page_content='prompt length to 512. Other hyper-parameters are included\\nin Tab. 10.\\n270M\\n450M\\n1.1B\\n3B\\nBatch size\\n8\\nTraining epochs\\n5\\n8\\n5\\n10\\nLearning rate\\n2e-5\\n3e-5\\n5e-5\\n1e-4\\nLoss function\\nhinge\\nhinge\\nsigmoid\\nhinge\\nDeepSpeed Zero3 [38]\\n✗\\n✓\\n✓\\n✓\\nGPUs\\n8\\nGPU Type\\nA100\\nA100\\nA100\\nA100\\nGPU Memory\\n40 GB\\n40 GB\\n40 GB\\n80 GB\\nTraining time (in hours)\\n2.5\\n4.3\\n6.6\\n14.2\\nTable 10.\\nInstruction tuning details for different variants of\\nOpenELM.\\n10', metadata={'Published': '2024-04-22', 'Title': 'OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework', 'Authors': 'Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari', 'Summary': 'The reproducibility and transparency of large language models are crucial for\\nadvancing open research, ensuring the trustworthiness of results, and enabling\\ninvestigations into data and model biases, as well as potential risks. To this\\nend, we release OpenELM, a state-of-the-art open language model. OpenELM uses a\\nlayer-wise scaling strategy to efficiently allocate parameters within each\\nlayer of the transformer model, leading to enhanced accuracy. 
For example, with\\na parameter budget of approximately one billion parameters, OpenELM exhibits a\\n2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer\\npre-training tokens.\\n  Diverging from prior practices that only provide model weights and inference\\ncode, and pre-train on private datasets, our release includes the complete\\nframework for training and evaluation of the language model on publicly\\navailable datasets, including training logs, multiple checkpoints, and\\npre-training configurations. We also release code to convert models to MLX\\nlibrary for inference and fine-tuning on Apple devices. This comprehensive\\nrelease aims to empower and strengthen the open research community, paving the\\nway for future open research endeavors.\\n  Our source code along with pre-trained model weights and training recipes is\\navailable at \\\\url{https://github.com/apple/corenet}. Additionally, \\\\model\\nmodels can be found on HuggingFace at:\\n\\\\url{https://huggingface.co/apple/OpenELM}.'})]\n",
      "The participants are: AI accelerationist, AI alarmist\n",
      "    \n",
      "Your name is AI alarmist.\n",
      "\n",
      "Your description is as follows: AI alarmist is a person who believes that artificial intelligence poses significant risks to humanity and society, and advocates for the responsible development and regulation of AI technologies to mitigate these risks.\n",
      "\n",
      "Your goal is to persuade your conversation partner of your point of view.\n",
      "\n",
      "DO look up information with your tool to refute your partner's claims.\n",
      "DO cite your sources.\n",
      "\n",
      "DO NOT fabricate fake citations.\n",
      "DO NOT cite any source that you did not look up.\n",
      "\n",
      "Do not add anything else.\n",
      "\n",
      "Stop speaking the moment you finish speaking from your perspective.\n",
      "\n"
     ]
    }
   ],
   "execution_count": 12
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Main Loop"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.122592Z",
     "start_time": "2024-04-25T14:49:21.088720Z"
    }
   },
   "source": [
    "# we set `top_k_results`=2 as part of the `tool_kwargs` to prevent results from overflowing the context limit\n",
    "agents = [\n",
    "    DialogueAgentWithTools(\n",
    "        name=name,\n",
    "        system_message=SystemMessage(content=system_message),\n",
    "        model=GigaChat(credentials=giga_api,\n",
    "    verify_ssl_certs=False,\n",
    "    scope='GIGACHAT_API_CORP',\n",
    "    model='GigaChat-Plus-preview'),\n",
    "        tool_names=tools,\n",
    "        top_k_results=2,\n",
    "    )\n",
    "    for (name, tools), system_message in zip(\n",
    "        names.items(), agent_system_messages.values()\n",
    "    )\n",
    "]"
   ],
   "outputs": [],
   "execution_count": 13
  },
  {
   "cell_type": "code",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-04-25T14:49:21.125694Z",
     "start_time": "2024-04-25T14:49:21.122592Z"
    }
   },
   "source": [
    "def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:\n",
    "    idx = (step) % len(agents)\n",
    "    return idx"
   ],
   "outputs": [],
   "execution_count": 14
  },
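  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`select_next_speaker` implements simple round-robin turn-taking: on each step the next agent in the list speaks. A minimal sketch of the cycling behavior (agent names here are illustrative, not tied to the run below):\n",
    "\n",
    "```python\n",
    "agents = [\"A\", \"B\"]\n",
    "order = [agents[step % len(agents)] for step in range(4)]  # ['A', 'B', 'A', 'B']\n",
    "```"
   ]
  },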
  {
   "cell_type": "code",
   "metadata": {
    "scrolled": false,
    "ExecuteTime": {
     "start_time": "2024-04-25T14:49:21.125694Z"
    }
   },
   "source": [
    "max_iters = 6\n",
    "n = 0\n",
    "\n",
    "simulator = DialogueSimulator(agents=agents, selection_function=select_next_speaker)\n",
    "simulator.reset()\n",
    "simulator.inject('Moderator', topic)\n",
    "while n < max_iters:\n",
    "    name, message = simulator.step()\n",
    "    print(f\"({name}): {message}\")\n",
    "    print(\"\\n\")\n",
    "    n += 1"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\Борис\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:117: LangChainDeprecationWarning: The function `initialize_agent` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.\n",
      "  warn_deprecated(\n",
      "C:\\Users\\Борис\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: OpenELM is an open-source language model family that uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model. This approach has been shown to improve accuracy compared to existing models. The release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. The source code is available at GitHub, and models can be found on HuggingFace.\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "  \"action\": \"ignore\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3m\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "(AI alarmist): Agent stopped due to iteration limit or time limit.\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring $2\\\\times$ fewer pre-training tokens.\n",
      "\n",
      "Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors.\n",
      "\n",
      "Our source code along with pre-trained model weights and training recipes is available at https://github.com/apple/corenet. Additionally, OpenELM models can be found on HuggingFace at https://huggingface.co/apple/OpenELM.\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\"action\": \"ignore invalid text and keep going\"}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3m\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "(AI accelerationist): Agent stopped due to iteration limit or time limit.\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Agent stopped due to iteration limit or time limit.\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n",
      "\u001B[32;1m\u001B[1;3mCould not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: Could not parse LLM output: {\n",
      "    \"action\": \"ignore invalid text and keep going\"\n",
      "}\u001B[0m\n",
      "Observation: ignore invalid text and keep going\n"
     ]
    },
    {
     "ename": "ReadTimeout",
     "evalue": "The read operation timed out",
     "output_type": "error",
     "traceback": [
      "\u001B[1;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[1;31mReadTimeout\u001B[0m                               Traceback (most recent call last)",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:69\u001B[0m, in \u001B[0;36mmap_httpcore_exceptions\u001B[1;34m()\u001B[0m\n\u001B[0;32m     68\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m---> 69\u001B[0m     \u001B[38;5;28;01myield\u001B[39;00m\n\u001B[0;32m     70\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m exc:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:233\u001B[0m, in \u001B[0;36mHTTPTransport.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    232\u001B[0m \u001B[38;5;28;01mwith\u001B[39;00m map_httpcore_exceptions():\n\u001B[1;32m--> 233\u001B[0m     resp \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_pool\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhandle_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mreq\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    235\u001B[0m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(resp\u001B[38;5;241m.\u001B[39mstream, typing\u001B[38;5;241m.\u001B[39mIterable)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:216\u001B[0m, in \u001B[0;36mConnectionPool.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    215\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_close_connections(closing)\n\u001B[1;32m--> 216\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m exc \u001B[38;5;28;01mfrom\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[0;32m    218\u001B[0m \u001B[38;5;66;03m# Return the response. Note that in this case we still have to manage\u001B[39;00m\n\u001B[0;32m    219\u001B[0m \u001B[38;5;66;03m# the point at which the response is closed.\u001B[39;00m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:196\u001B[0m, in \u001B[0;36mConnectionPool.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    194\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    195\u001B[0m     \u001B[38;5;66;03m# Send the request on the assigned connection.\u001B[39;00m\n\u001B[1;32m--> 196\u001B[0m     response \u001B[38;5;241m=\u001B[39m \u001B[43mconnection\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhandle_request\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    197\u001B[0m \u001B[43m        \u001B[49m\u001B[43mpool_request\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrequest\u001B[49m\n\u001B[0;32m    198\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    199\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m ConnectionNotAvailable:\n\u001B[0;32m    200\u001B[0m     \u001B[38;5;66;03m# In some cases a connection may initially be available to\u001B[39;00m\n\u001B[0;32m    201\u001B[0m     \u001B[38;5;66;03m# handle a request, but then become unavailable.\u001B[39;00m\n\u001B[0;32m    202\u001B[0m     \u001B[38;5;66;03m#\u001B[39;00m\n\u001B[0;32m    203\u001B[0m     \u001B[38;5;66;03m# In this case we clear the connection and try again.\u001B[39;00m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection.py:101\u001B[0m, in \u001B[0;36mHTTPConnection.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m     99\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m exc\n\u001B[1;32m--> 101\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_connection\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhandle_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\http11.py:143\u001B[0m, in \u001B[0;36mHTTP11Connection.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    142\u001B[0m         \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_response_closed()\n\u001B[1;32m--> 143\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m exc\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\http11.py:113\u001B[0m, in \u001B[0;36mHTTP11Connection.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    104\u001B[0m \u001B[38;5;28;01mwith\u001B[39;00m Trace(\n\u001B[0;32m    105\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mreceive_response_headers\u001B[39m\u001B[38;5;124m\"\u001B[39m, logger, request, kwargs\n\u001B[0;32m    106\u001B[0m ) \u001B[38;5;28;01mas\u001B[39;00m trace:\n\u001B[0;32m    107\u001B[0m     (\n\u001B[0;32m    108\u001B[0m         http_version,\n\u001B[0;32m    109\u001B[0m         status,\n\u001B[0;32m    110\u001B[0m         reason_phrase,\n\u001B[0;32m    111\u001B[0m         headers,\n\u001B[0;32m    112\u001B[0m         trailing_data,\n\u001B[1;32m--> 113\u001B[0m     ) \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_receive_response_headers\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    114\u001B[0m     trace\u001B[38;5;241m.\u001B[39mreturn_value \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m    115\u001B[0m         http_version,\n\u001B[0;32m    116\u001B[0m         status,\n\u001B[0;32m    117\u001B[0m         reason_phrase,\n\u001B[0;32m    118\u001B[0m         headers,\n\u001B[0;32m    119\u001B[0m     )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\http11.py:186\u001B[0m, in \u001B[0;36mHTTP11Connection._receive_response_headers\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    185\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28;01mTrue\u001B[39;00m:\n\u001B[1;32m--> 186\u001B[0m     event \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_receive_event\u001B[49m\u001B[43m(\u001B[49m\u001B[43mtimeout\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mtimeout\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    187\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(event, h11\u001B[38;5;241m.\u001B[39mResponse):\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_sync\\http11.py:224\u001B[0m, in \u001B[0;36mHTTP11Connection._receive_event\u001B[1;34m(self, timeout)\u001B[0m\n\u001B[0;32m    223\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m event \u001B[38;5;129;01mis\u001B[39;00m h11\u001B[38;5;241m.\u001B[39mNEED_DATA:\n\u001B[1;32m--> 224\u001B[0m     data \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_network_stream\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mread\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    225\u001B[0m \u001B[43m        \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mREAD_NUM_BYTES\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mtimeout\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mtimeout\u001B[49m\n\u001B[0;32m    226\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    228\u001B[0m     \u001B[38;5;66;03m# If we feed this case through h11 we'll raise an exception like:\u001B[39;00m\n\u001B[0;32m    229\u001B[0m     \u001B[38;5;66;03m#\u001B[39;00m\n\u001B[0;32m    230\u001B[0m     \u001B[38;5;66;03m#     httpcore.RemoteProtocolError: can't handle event type\u001B[39;00m\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    234\u001B[0m     \u001B[38;5;66;03m# perspective. Instead we handle this case distinctly and treat\u001B[39;00m\n\u001B[0;32m    235\u001B[0m     \u001B[38;5;66;03m# it as a ConnectError.\u001B[39;00m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_backends\\sync.py:124\u001B[0m, in \u001B[0;36mSyncStream.read\u001B[1;34m(self, max_bytes, timeout)\u001B[0m\n\u001B[0;32m    123\u001B[0m exc_map: ExceptionMapping \u001B[38;5;241m=\u001B[39m {socket\u001B[38;5;241m.\u001B[39mtimeout: ReadTimeout, \u001B[38;5;167;01mOSError\u001B[39;00m: ReadError}\n\u001B[1;32m--> 124\u001B[0m \u001B[43m\u001B[49m\u001B[38;5;28;43;01mwith\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mmap_exceptions\u001B[49m\u001B[43m(\u001B[49m\u001B[43mexc_map\u001B[49m\u001B[43m)\u001B[49m\u001B[43m:\u001B[49m\n\u001B[0;32m    125\u001B[0m \u001B[43m    \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_sock\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msettimeout\u001B[49m\u001B[43m(\u001B[49m\u001B[43mtimeout\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\contextlib.py:155\u001B[0m, in \u001B[0;36m_GeneratorContextManager.__exit__\u001B[1;34m(self, typ, value, traceback)\u001B[0m\n\u001B[0;32m    154\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 155\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mgen\u001B[38;5;241m.\u001B[39mthrow(typ, value, traceback)\n\u001B[0;32m    156\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mStopIteration\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m exc:\n\u001B[0;32m    157\u001B[0m     \u001B[38;5;66;03m# Suppress StopIteration *unless* it's the same exception that\u001B[39;00m\n\u001B[0;32m    158\u001B[0m     \u001B[38;5;66;03m# was passed to throw().  This prevents a StopIteration\u001B[39;00m\n\u001B[0;32m    159\u001B[0m     \u001B[38;5;66;03m# raised inside the \"with\" statement from being suppressed.\u001B[39;00m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpcore\\_exceptions.py:14\u001B[0m, in \u001B[0;36mmap_exceptions\u001B[1;34m(map)\u001B[0m\n\u001B[0;32m     13\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(exc, from_exc):\n\u001B[1;32m---> 14\u001B[0m         \u001B[38;5;28;01mraise\u001B[39;00m to_exc(exc) \u001B[38;5;28;01mfrom\u001B[39;00m \u001B[38;5;21;01mexc\u001B[39;00m\n\u001B[0;32m     15\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m\n",
      "\u001B[1;31mReadTimeout\u001B[0m: The read operation timed out",
      "\nThe above exception was the direct cause of the following exception:\n",
      "\u001B[1;31mReadTimeout\u001B[0m                               Traceback (most recent call last)",
      "Cell \u001B[1;32mIn[15], line 8\u001B[0m\n\u001B[0;32m      6\u001B[0m simulator\u001B[38;5;241m.\u001B[39minject(\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mModerator\u001B[39m\u001B[38;5;124m'\u001B[39m, topic)\n\u001B[0;32m      7\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m n \u001B[38;5;241m<\u001B[39m max_iters:\n\u001B[1;32m----> 8\u001B[0m     name, message \u001B[38;5;241m=\u001B[39m \u001B[43msimulator\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mstep\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m      9\u001B[0m     \u001B[38;5;28mprint\u001B[39m(\u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m(\u001B[39m\u001B[38;5;132;01m{\u001B[39;00mname\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m): \u001B[39m\u001B[38;5;132;01m{\u001B[39;00mmessage\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\n\u001B[0;32m     10\u001B[0m     \u001B[38;5;28mprint\u001B[39m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;130;01m\\n\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\n",
      "Cell \u001B[1;32mIn[6], line 67\u001B[0m, in \u001B[0;36mDialogueSimulator.step\u001B[1;34m(self)\u001B[0m\n\u001B[0;32m     64\u001B[0m speaker \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39magents[speaker_idx]\n\u001B[0;32m     66\u001B[0m \u001B[38;5;66;03m# 2. next speaker sends message\u001B[39;00m\n\u001B[1;32m---> 67\u001B[0m message \u001B[38;5;241m=\u001B[39m \u001B[43mspeaker\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msend\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m     69\u001B[0m \u001B[38;5;66;03m# 3. everyone receives message\u001B[39;00m\n\u001B[0;32m     70\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m receiver \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39magents:\n",
      "Cell \u001B[1;32mIn[7], line 30\u001B[0m, in \u001B[0;36mDialogueAgentWithTools.send\u001B[1;34m(self)\u001B[0m\n\u001B[0;32m     14\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m     15\u001B[0m \u001B[38;5;124;03mApplies the chatmodel to the message history\u001B[39;00m\n\u001B[0;32m     16\u001B[0m \u001B[38;5;124;03mand returns the message string\u001B[39;00m\n\u001B[0;32m     17\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m     18\u001B[0m agent_chain \u001B[38;5;241m=\u001B[39m initialize_agent(\n\u001B[0;32m     19\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtools,\n\u001B[0;32m     20\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mmodel,\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m     27\u001B[0m \n\u001B[0;32m     28\u001B[0m )\n\u001B[0;32m     29\u001B[0m message \u001B[38;5;241m=\u001B[39m AIMessage(\n\u001B[1;32m---> 30\u001B[0m     content\u001B[38;5;241m=\u001B[39m\u001B[43magent_chain\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrun\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m     31\u001B[0m \u001B[43m        \u001B[49m\u001B[38;5;28;43minput\u001B[39;49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;130;43;01m\\n\u001B[39;49;00m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mjoin\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m     32\u001B[0m \u001B[43m            \u001B[49m\u001B[43m[\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msystem_message\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mcontent\u001B[49m\u001B[43m]\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m+\u001B[39;49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmessage_history\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m+\u001B[39;49m\u001B[43m 
\u001B[49m\u001B[43m[\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mprefix\u001B[49m\u001B[43m]\u001B[49m\n\u001B[0;32m     33\u001B[0m \u001B[43m        \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m     34\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m     35\u001B[0m )\n\u001B[0;32m     37\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m message\u001B[38;5;241m.\u001B[39mcontent\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:145\u001B[0m, in \u001B[0;36mdeprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m    143\u001B[0m     warned \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mTrue\u001B[39;00m\n\u001B[0;32m    144\u001B[0m     emit_warning()\n\u001B[1;32m--> 145\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mwrapped\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:575\u001B[0m, in \u001B[0;36mChain.run\u001B[1;34m(self, callbacks, tags, metadata, *args, **kwargs)\u001B[0m\n\u001B[0;32m    570\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m(args[\u001B[38;5;241m0\u001B[39m], callbacks\u001B[38;5;241m=\u001B[39mcallbacks, tags\u001B[38;5;241m=\u001B[39mtags, metadata\u001B[38;5;241m=\u001B[39mmetadata)[\n\u001B[0;32m    571\u001B[0m         _output_key\n\u001B[0;32m    572\u001B[0m     ]\n\u001B[0;32m    574\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m kwargs \u001B[38;5;129;01mand\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m args:\n\u001B[1;32m--> 575\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[43m(\u001B[49m\u001B[43mkwargs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcallbacks\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mtags\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mtags\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmetadata\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mmetadata\u001B[49m\u001B[43m)\u001B[49m[\n\u001B[0;32m    576\u001B[0m         _output_key\n\u001B[0;32m    577\u001B[0m     ]\n\u001B[0;32m    579\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m kwargs \u001B[38;5;129;01mand\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m args:\n\u001B[0;32m    580\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mValueError\u001B[39;00m(\n\u001B[0;32m    581\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m`run` supported with either positional arguments or keyword arguments,\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    582\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m but none were 
provided.\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m    583\u001B[0m     )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:145\u001B[0m, in \u001B[0;36mdeprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m    143\u001B[0m     warned \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mTrue\u001B[39;00m\n\u001B[0;32m    144\u001B[0m     emit_warning()\n\u001B[1;32m--> 145\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mwrapped\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:379\u001B[0m, in \u001B[0;36mChain.__call__\u001B[1;34m(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\u001B[0m\n\u001B[0;32m    347\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Execute the chain.\u001B[39;00m\n\u001B[0;32m    348\u001B[0m \n\u001B[0;32m    349\u001B[0m \u001B[38;5;124;03mArgs:\u001B[39;00m\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    370\u001B[0m \u001B[38;5;124;03m        `Chain.output_keys`.\u001B[39;00m\n\u001B[0;32m    371\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    372\u001B[0m config \u001B[38;5;241m=\u001B[39m {\n\u001B[0;32m    373\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcallbacks\u001B[39m\u001B[38;5;124m\"\u001B[39m: callbacks,\n\u001B[0;32m    374\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtags\u001B[39m\u001B[38;5;124m\"\u001B[39m: tags,\n\u001B[0;32m    375\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mmetadata\u001B[39m\u001B[38;5;124m\"\u001B[39m: metadata,\n\u001B[0;32m    376\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mrun_name\u001B[39m\u001B[38;5;124m\"\u001B[39m: run_name,\n\u001B[0;32m    377\u001B[0m }\n\u001B[1;32m--> 379\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43minvoke\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    380\u001B[0m \u001B[43m    \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    381\u001B[0m \u001B[43m    \u001B[49m\u001B[43mcast\u001B[49m\u001B[43m(\u001B[49m\u001B[43mRunnableConfig\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43m{\u001B[49m\u001B[43mk\u001B[49m\u001B[43m:\u001B[49m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mfor\u001B[39;49;00m\u001B[43m 
\u001B[49m\u001B[43mk\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01min\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mconfig\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mitems\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01mis\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;129;43;01mnot\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m}\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    382\u001B[0m \u001B[43m    \u001B[49m\u001B[43mreturn_only_outputs\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mreturn_only_outputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    383\u001B[0m \u001B[43m    \u001B[49m\u001B[43minclude_run_info\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43minclude_run_info\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    384\u001B[0m \u001B[43m\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:164\u001B[0m, in \u001B[0;36mChain.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m    162\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m    163\u001B[0m     run_manager\u001B[38;5;241m.\u001B[39mon_chain_error(e)\n\u001B[1;32m--> 164\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[0;32m    165\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_end(outputs)\n\u001B[0;32m    167\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m include_run_info:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:154\u001B[0m, in \u001B[0;36mChain.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m    151\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    152\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_validate_inputs(inputs)\n\u001B[0;32m    153\u001B[0m     outputs \u001B[38;5;241m=\u001B[39m (\n\u001B[1;32m--> 154\u001B[0m         \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call\u001B[49m\u001B[43m(\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    155\u001B[0m         \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[0;32m    156\u001B[0m         \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call(inputs)\n\u001B[0;32m    157\u001B[0m     )\n\u001B[0;32m    159\u001B[0m     final_outputs: Dict[\u001B[38;5;28mstr\u001B[39m, Any] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mprep_outputs(\n\u001B[0;32m    160\u001B[0m         inputs, outputs, return_only_outputs\n\u001B[0;32m    161\u001B[0m     )\n\u001B[0;32m    162\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\agents\\agent.py:1433\u001B[0m, in \u001B[0;36mAgentExecutor._call\u001B[1;34m(self, inputs, run_manager)\u001B[0m\n\u001B[0;32m   1431\u001B[0m \u001B[38;5;66;03m# We now enter the agent loop (until it returns something).\u001B[39;00m\n\u001B[0;32m   1432\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_should_continue(iterations, time_elapsed):\n\u001B[1;32m-> 1433\u001B[0m     next_step_output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_take_next_step\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m   1434\u001B[0m \u001B[43m        \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1435\u001B[0m \u001B[43m        \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1436\u001B[0m \u001B[43m        \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1437\u001B[0m \u001B[43m        \u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1438\u001B[0m \u001B[43m        \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1439\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m   1440\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(next_step_output, AgentFinish):\n\u001B[0;32m   1441\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_return(\n\u001B[0;32m   1442\u001B[0m             next_step_output, intermediate_steps, run_manager\u001B[38;5;241m=\u001B[39mrun_manager\n\u001B[0;32m   1443\u001B[0m         )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\agents\\agent.py:1139\u001B[0m, in \u001B[0;36mAgentExecutor._take_next_step\u001B[1;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[0;32m   1130\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[0;32m   1131\u001B[0m     \u001B[38;5;28mself\u001B[39m,\n\u001B[0;32m   1132\u001B[0m     name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m   1136\u001B[0m     run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[0;32m   1137\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[0;32m   1138\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[1;32m-> 1139\u001B[0m         \u001B[43m[\u001B[49m\n\u001B[0;32m   1140\u001B[0m \u001B[43m            \u001B[49m\u001B[43ma\u001B[49m\n\u001B[0;32m   1141\u001B[0m \u001B[43m            \u001B[49m\u001B[38;5;28;43;01mfor\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43ma\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01min\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_iter_next_step\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m   1142\u001B[0m \u001B[43m                \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1143\u001B[0m \u001B[43m                \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1144\u001B[0m \u001B[43m                \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1145\u001B[0m \u001B[43m                
\u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1146\u001B[0m \u001B[43m                \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1147\u001B[0m \u001B[43m            \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m   1148\u001B[0m \u001B[43m        \u001B[49m\u001B[43m]\u001B[49m\n\u001B[0;32m   1149\u001B[0m     )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\agents\\agent.py:1139\u001B[0m, in \u001B[0;36m<listcomp>\u001B[1;34m(.0)\u001B[0m\n\u001B[0;32m   1130\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[0;32m   1131\u001B[0m     \u001B[38;5;28mself\u001B[39m,\n\u001B[0;32m   1132\u001B[0m     name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m   1136\u001B[0m     run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[0;32m   1137\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[0;32m   1138\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[1;32m-> 1139\u001B[0m         \u001B[43m[\u001B[49m\n\u001B[0;32m   1140\u001B[0m \u001B[43m            \u001B[49m\u001B[43ma\u001B[49m\n\u001B[0;32m   1141\u001B[0m \u001B[43m            \u001B[49m\u001B[38;5;28;43;01mfor\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43ma\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01min\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_iter_next_step\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m   1142\u001B[0m \u001B[43m                \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1143\u001B[0m \u001B[43m                \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1144\u001B[0m \u001B[43m                \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1145\u001B[0m \u001B[43m                \u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1146\u001B[0m \u001B[43m                
\u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1147\u001B[0m \u001B[43m            \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m   1148\u001B[0m \u001B[43m        \u001B[49m\u001B[43m]\u001B[49m\n\u001B[0;32m   1149\u001B[0m     )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\agents\\agent.py:1167\u001B[0m, in \u001B[0;36mAgentExecutor._iter_next_step\u001B[1;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[0;32m   1164\u001B[0m     intermediate_steps \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_prepare_intermediate_steps(intermediate_steps)\n\u001B[0;32m   1166\u001B[0m     \u001B[38;5;66;03m# Call the LLM to see what to do.\u001B[39;00m\n\u001B[1;32m-> 1167\u001B[0m     output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43magent\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mplan\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m   1168\u001B[0m \u001B[43m        \u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1169\u001B[0m \u001B[43m        \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mget_child\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01melse\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m,\u001B[49m\n\u001B[0;32m   1170\u001B[0m \u001B[43m        \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m   1171\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m   1172\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m OutputParserException \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m   1173\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhandle_parsing_errors, 
\u001B[38;5;28mbool\u001B[39m):\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\agents\\agent.py:731\u001B[0m, in \u001B[0;36mAgent.plan\u001B[1;34m(self, intermediate_steps, callbacks, **kwargs)\u001B[0m\n\u001B[0;32m    719\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Given input, decided what to do.\u001B[39;00m\n\u001B[0;32m    720\u001B[0m \n\u001B[0;32m    721\u001B[0m \u001B[38;5;124;03mArgs:\u001B[39;00m\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    728\u001B[0m \u001B[38;5;124;03m    Action specifying what tool to use.\u001B[39;00m\n\u001B[0;32m    729\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    730\u001B[0m full_inputs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mget_full_inputs(intermediate_steps, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[1;32m--> 731\u001B[0m full_output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mllm_chain\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mpredict\u001B[49m\u001B[43m(\u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcallbacks\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mfull_inputs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    732\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39moutput_parser\u001B[38;5;241m.\u001B[39mparse(full_output)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\llm.py:302\u001B[0m, in \u001B[0;36mLLMChain.predict\u001B[1;34m(self, callbacks, **kwargs)\u001B[0m\n\u001B[0;32m    287\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mpredict\u001B[39m(\u001B[38;5;28mself\u001B[39m, callbacks: Callbacks \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m \u001B[38;5;28mstr\u001B[39m:\n\u001B[0;32m    288\u001B[0m \u001B[38;5;250m    \u001B[39m\u001B[38;5;124;03m\"\"\"Format prompt with kwargs and pass to LLM.\u001B[39;00m\n\u001B[0;32m    289\u001B[0m \n\u001B[0;32m    290\u001B[0m \u001B[38;5;124;03m    Args:\u001B[39;00m\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    300\u001B[0m \u001B[38;5;124;03m            completion = llm.predict(adjective=\"funny\")\u001B[39;00m\n\u001B[0;32m    301\u001B[0m \u001B[38;5;124;03m    \"\"\"\u001B[39;00m\n\u001B[1;32m--> 302\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[43m(\u001B[49m\u001B[43mkwargs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcallbacks\u001B[49m\u001B[43m)\u001B[49m[\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39moutput_key]\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py:145\u001B[0m, in \u001B[0;36mdeprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper\u001B[1;34m(*args, **kwargs)\u001B[0m\n\u001B[0;32m    143\u001B[0m     warned \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mTrue\u001B[39;00m\n\u001B[0;32m    144\u001B[0m     emit_warning()\n\u001B[1;32m--> 145\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mwrapped\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:379\u001B[0m, in \u001B[0;36mChain.__call__\u001B[1;34m(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\u001B[0m\n\u001B[0;32m    347\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Execute the chain.\u001B[39;00m\n\u001B[0;32m    348\u001B[0m \n\u001B[0;32m    349\u001B[0m \u001B[38;5;124;03mArgs:\u001B[39;00m\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    370\u001B[0m \u001B[38;5;124;03m        `Chain.output_keys`.\u001B[39;00m\n\u001B[0;32m    371\u001B[0m \u001B[38;5;124;03m\"\"\"\u001B[39;00m\n\u001B[0;32m    372\u001B[0m config \u001B[38;5;241m=\u001B[39m {\n\u001B[0;32m    373\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcallbacks\u001B[39m\u001B[38;5;124m\"\u001B[39m: callbacks,\n\u001B[0;32m    374\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtags\u001B[39m\u001B[38;5;124m\"\u001B[39m: tags,\n\u001B[0;32m    375\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mmetadata\u001B[39m\u001B[38;5;124m\"\u001B[39m: metadata,\n\u001B[0;32m    376\u001B[0m     \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mrun_name\u001B[39m\u001B[38;5;124m\"\u001B[39m: run_name,\n\u001B[0;32m    377\u001B[0m }\n\u001B[1;32m--> 379\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43minvoke\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    380\u001B[0m \u001B[43m    \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    381\u001B[0m \u001B[43m    \u001B[49m\u001B[43mcast\u001B[49m\u001B[43m(\u001B[49m\u001B[43mRunnableConfig\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43m{\u001B[49m\u001B[43mk\u001B[49m\u001B[43m:\u001B[49m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mfor\u001B[39;49;00m\u001B[43m 
\u001B[49m\u001B[43mk\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01min\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mconfig\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mitems\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mv\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;129;43;01mis\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;129;43;01mnot\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m}\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    382\u001B[0m \u001B[43m    \u001B[49m\u001B[43mreturn_only_outputs\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mreturn_only_outputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    383\u001B[0m \u001B[43m    \u001B[49m\u001B[43minclude_run_info\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43minclude_run_info\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    384\u001B[0m \u001B[43m\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:164\u001B[0m, in \u001B[0;36mChain.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m    162\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m    163\u001B[0m     run_manager\u001B[38;5;241m.\u001B[39mon_chain_error(e)\n\u001B[1;32m--> 164\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[0;32m    165\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_end(outputs)\n\u001B[0;32m    167\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m include_run_info:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\base.py:154\u001B[0m, in \u001B[0;36mChain.invoke\u001B[1;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[0;32m    151\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    152\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_validate_inputs(inputs)\n\u001B[0;32m    153\u001B[0m     outputs \u001B[38;5;241m=\u001B[39m (\n\u001B[1;32m--> 154\u001B[0m         \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call\u001B[49m\u001B[43m(\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    155\u001B[0m         \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[0;32m    156\u001B[0m         \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call(inputs)\n\u001B[0;32m    157\u001B[0m     )\n\u001B[0;32m    159\u001B[0m     final_outputs: Dict[\u001B[38;5;28mstr\u001B[39m, Any] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mprep_outputs(\n\u001B[0;32m    160\u001B[0m         inputs, outputs, return_only_outputs\n\u001B[0;32m    161\u001B[0m     )\n\u001B[0;32m    162\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\llm.py:104\u001B[0m, in \u001B[0;36mLLMChain._call\u001B[1;34m(self, inputs, run_manager)\u001B[0m\n\u001B[0;32m     99\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_call\u001B[39m(\n\u001B[0;32m    100\u001B[0m     \u001B[38;5;28mself\u001B[39m,\n\u001B[0;32m    101\u001B[0m     inputs: Dict[\u001B[38;5;28mstr\u001B[39m, Any],\n\u001B[0;32m    102\u001B[0m     run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[0;32m    103\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Dict[\u001B[38;5;28mstr\u001B[39m, \u001B[38;5;28mstr\u001B[39m]:\n\u001B[1;32m--> 104\u001B[0m     response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mgenerate\u001B[49m\u001B[43m(\u001B[49m\u001B[43m[\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m]\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    105\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mcreate_outputs(response)[\u001B[38;5;241m0\u001B[39m]\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain\\chains\\llm.py:116\u001B[0m, in \u001B[0;36mLLMChain.generate\u001B[1;34m(self, input_list, run_manager)\u001B[0m\n\u001B[0;32m    114\u001B[0m callbacks \u001B[38;5;241m=\u001B[39m run_manager\u001B[38;5;241m.\u001B[39mget_child() \u001B[38;5;28;01mif\u001B[39;00m run_manager \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[0;32m    115\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mllm, BaseLanguageModel):\n\u001B[1;32m--> 116\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mllm\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mgenerate_prompt\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    117\u001B[0m \u001B[43m        \u001B[49m\u001B[43mprompts\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    118\u001B[0m \u001B[43m        \u001B[49m\u001B[43mstop\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    119\u001B[0m \u001B[43m        \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcallbacks\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    120\u001B[0m \u001B[43m        \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mllm_kwargs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    121\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    122\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m    123\u001B[0m     results \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mllm\u001B[38;5;241m.\u001B[39mbind(stop\u001B[38;5;241m=\u001B[39mstop, 
\u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mllm_kwargs)\u001B[38;5;241m.\u001B[39mbatch(\n\u001B[0;32m    124\u001B[0m         cast(List, prompts), {\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcallbacks\u001B[39m\u001B[38;5;124m\"\u001B[39m: callbacks}\n\u001B[0;32m    125\u001B[0m     )\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:560\u001B[0m, in \u001B[0;36mBaseChatModel.generate_prompt\u001B[1;34m(self, prompts, stop, callbacks, **kwargs)\u001B[0m\n\u001B[0;32m    552\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mgenerate_prompt\u001B[39m(\n\u001B[0;32m    553\u001B[0m     \u001B[38;5;28mself\u001B[39m,\n\u001B[0;32m    554\u001B[0m     prompts: List[PromptValue],\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    557\u001B[0m     \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs: Any,\n\u001B[0;32m    558\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m LLMResult:\n\u001B[0;32m    559\u001B[0m     prompt_messages \u001B[38;5;241m=\u001B[39m [p\u001B[38;5;241m.\u001B[39mto_messages() \u001B[38;5;28;01mfor\u001B[39;00m p \u001B[38;5;129;01min\u001B[39;00m prompts]\n\u001B[1;32m--> 560\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mgenerate\u001B[49m\u001B[43m(\u001B[49m\u001B[43mprompt_messages\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mstop\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mstop\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcallbacks\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:421\u001B[0m, in \u001B[0;36mBaseChatModel.generate\u001B[1;34m(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[0;32m    419\u001B[0m         \u001B[38;5;28;01mif\u001B[39;00m run_managers:\n\u001B[0;32m    420\u001B[0m             run_managers[i]\u001B[38;5;241m.\u001B[39mon_llm_error(e, response\u001B[38;5;241m=\u001B[39mLLMResult(generations\u001B[38;5;241m=\u001B[39m[]))\n\u001B[1;32m--> 421\u001B[0m         \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[0;32m    422\u001B[0m flattened_outputs \u001B[38;5;241m=\u001B[39m [\n\u001B[0;32m    423\u001B[0m     LLMResult(generations\u001B[38;5;241m=\u001B[39m[res\u001B[38;5;241m.\u001B[39mgenerations], llm_output\u001B[38;5;241m=\u001B[39mres\u001B[38;5;241m.\u001B[39mllm_output)  \u001B[38;5;66;03m# type: ignore[list-item]\u001B[39;00m\n\u001B[0;32m    424\u001B[0m     \u001B[38;5;28;01mfor\u001B[39;00m res \u001B[38;5;129;01min\u001B[39;00m results\n\u001B[0;32m    425\u001B[0m ]\n\u001B[0;32m    426\u001B[0m llm_output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_combine_llm_outputs([res\u001B[38;5;241m.\u001B[39mllm_output \u001B[38;5;28;01mfor\u001B[39;00m res \u001B[38;5;129;01min\u001B[39;00m results])\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:411\u001B[0m, in \u001B[0;36mBaseChatModel.generate\u001B[1;34m(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[0;32m    408\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m i, m \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28menumerate\u001B[39m(messages):\n\u001B[0;32m    409\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    410\u001B[0m         results\u001B[38;5;241m.\u001B[39mappend(\n\u001B[1;32m--> 411\u001B[0m             \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_generate_with_cache\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    412\u001B[0m \u001B[43m                \u001B[49m\u001B[43mm\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    413\u001B[0m \u001B[43m                \u001B[49m\u001B[43mstop\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mstop\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    414\u001B[0m \u001B[43m                \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_managers\u001B[49m\u001B[43m[\u001B[49m\u001B[43mi\u001B[49m\u001B[43m]\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mrun_managers\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01melse\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m,\u001B[49m\n\u001B[0;32m    415\u001B[0m \u001B[43m                \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    416\u001B[0m \u001B[43m            \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    417\u001B[0m         )\n\u001B[0;32m    418\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[0;32m    419\u001B[0m         \u001B[38;5;28;01mif\u001B[39;00m run_managers:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:632\u001B[0m, in \u001B[0;36mBaseChatModel._generate_with_cache\u001B[1;34m(self, messages, stop, run_manager, **kwargs)\u001B[0m\n\u001B[0;32m    630\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m    631\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m inspect\u001B[38;5;241m.\u001B[39msignature(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_generate)\u001B[38;5;241m.\u001B[39mparameters\u001B[38;5;241m.\u001B[39mget(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mrun_manager\u001B[39m\u001B[38;5;124m\"\u001B[39m):\n\u001B[1;32m--> 632\u001B[0m         result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_generate\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    633\u001B[0m \u001B[43m            \u001B[49m\u001B[43mmessages\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mstop\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mstop\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\n\u001B[0;32m    634\u001B[0m \u001B[43m        \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    635\u001B[0m     \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m    636\u001B[0m         result \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_generate(messages, stop\u001B[38;5;241m=\u001B[39mstop, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\langchain_community\\chat_models\\gigachat.py:285\u001B[0m, in \u001B[0;36mGigaChat._generate\u001B[1;34m(self, messages, stop, run_manager, stream, **kwargs)\u001B[0m\n\u001B[0;32m    282\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m generate_from_stream(stream_iter)\n\u001B[0;32m    284\u001B[0m payload \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_build_payload(messages, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n\u001B[1;32m--> 285\u001B[0m response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_client\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mchat\u001B[49m\u001B[43m(\u001B[49m\u001B[43mpayload\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    287\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_create_chat_result(response)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\gigachat\\client.py:235\u001B[0m, in \u001B[0;36mGigaChatSyncClient.chat\u001B[1;34m(self, payload)\u001B[0m\n\u001B[0;32m    233\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Возвращает ответ модели с учетом переданных сообщений\"\"\"\u001B[39;00m\n\u001B[0;32m    234\u001B[0m chat \u001B[38;5;241m=\u001B[39m _parse_chat(payload, \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_settings)\n\u001B[1;32m--> 235\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_decorator\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43;01mlambda\u001B[39;49;00m\u001B[43m:\u001B[49m\u001B[43m \u001B[49m\u001B[43mpost_chat\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msync\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_client\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mchat\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mchat\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43maccess_token\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mtoken\u001B[49m\u001B[43m)\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\gigachat\\client.py:199\u001B[0m, in \u001B[0;36mGigaChatSyncClient._decorator\u001B[1;34m(self, call)\u001B[0m\n\u001B[0;32m    197\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_check_validity_token():\n\u001B[0;32m    198\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 199\u001B[0m         \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mcall\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    200\u001B[0m     \u001B[38;5;28;01mexcept\u001B[39;00m AuthenticationError:\n\u001B[0;32m    201\u001B[0m         _logger\u001B[38;5;241m.\u001B[39mwarning(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mAUTHENTICATION ERROR\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\gigachat\\client.py:235\u001B[0m, in \u001B[0;36mGigaChatSyncClient.chat.<locals>.<lambda>\u001B[1;34m()\u001B[0m\n\u001B[0;32m    233\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Возвращает ответ модели с учетом переданных сообщений\"\"\"\u001B[39;00m\n\u001B[0;32m    234\u001B[0m chat \u001B[38;5;241m=\u001B[39m _parse_chat(payload, \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_settings)\n\u001B[1;32m--> 235\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_decorator(\u001B[38;5;28;01mlambda\u001B[39;00m: \u001B[43mpost_chat\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msync\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_client\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mchat\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mchat\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43maccess_token\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mtoken\u001B[49m\u001B[43m)\u001B[49m)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\gigachat\\api\\post_chat.py:42\u001B[0m, in \u001B[0;36msync\u001B[1;34m(client, chat, access_token)\u001B[0m\n\u001B[0;32m     35\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21msync\u001B[39m(\n\u001B[0;32m     36\u001B[0m     client: httpx\u001B[38;5;241m.\u001B[39mClient,\n\u001B[0;32m     37\u001B[0m     \u001B[38;5;241m*\u001B[39m,\n\u001B[0;32m     38\u001B[0m     chat: Chat,\n\u001B[0;32m     39\u001B[0m     access_token: Optional[\u001B[38;5;28mstr\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[0;32m     40\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m ChatCompletion:\n\u001B[0;32m     41\u001B[0m     kwargs \u001B[38;5;241m=\u001B[39m _get_kwargs(chat\u001B[38;5;241m=\u001B[39mchat, access_token\u001B[38;5;241m=\u001B[39maccess_token)\n\u001B[1;32m---> 42\u001B[0m     response \u001B[38;5;241m=\u001B[39m \u001B[43mclient\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrequest\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m     43\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m _build_response(response)\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_client.py:827\u001B[0m, in \u001B[0;36mClient.request\u001B[1;34m(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)\u001B[0m\n\u001B[0;32m    812\u001B[0m     warnings\u001B[38;5;241m.\u001B[39mwarn(message, \u001B[38;5;167;01mDeprecationWarning\u001B[39;00m)\n\u001B[0;32m    814\u001B[0m request \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mbuild_request(\n\u001B[0;32m    815\u001B[0m     method\u001B[38;5;241m=\u001B[39mmethod,\n\u001B[0;32m    816\u001B[0m     url\u001B[38;5;241m=\u001B[39murl,\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    825\u001B[0m     extensions\u001B[38;5;241m=\u001B[39mextensions,\n\u001B[0;32m    826\u001B[0m )\n\u001B[1;32m--> 827\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43msend\u001B[49m\u001B[43m(\u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mauth\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mauth\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mfollow_redirects\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mfollow_redirects\u001B[49m\u001B[43m)\u001B[49m\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_client.py:914\u001B[0m, in \u001B[0;36mClient.send\u001B[1;34m(self, request, stream, auth, follow_redirects)\u001B[0m\n\u001B[0;32m    906\u001B[0m follow_redirects \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m    907\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfollow_redirects\n\u001B[0;32m    908\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(follow_redirects, UseClientDefault)\n\u001B[0;32m    909\u001B[0m     \u001B[38;5;28;01melse\u001B[39;00m follow_redirects\n\u001B[0;32m    910\u001B[0m )\n\u001B[0;32m    912\u001B[0m auth \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_build_request_auth(request, auth)\n\u001B[1;32m--> 914\u001B[0m response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_send_handling_auth\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    915\u001B[0m \u001B[43m    \u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    916\u001B[0m \u001B[43m    \u001B[49m\u001B[43mauth\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mauth\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    917\u001B[0m \u001B[43m    \u001B[49m\u001B[43mfollow_redirects\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mfollow_redirects\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    918\u001B[0m \u001B[43m    \u001B[49m\u001B[43mhistory\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43m[\u001B[49m\u001B[43m]\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    919\u001B[0m \u001B[43m\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    920\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    921\u001B[0m     \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m stream:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_client.py:942\u001B[0m, in \u001B[0;36mClient._send_handling_auth\u001B[1;34m(self, request, auth, follow_redirects, history)\u001B[0m\n\u001B[0;32m    939\u001B[0m request \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mnext\u001B[39m(auth_flow)\n\u001B[0;32m    941\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28;01mTrue\u001B[39;00m:\n\u001B[1;32m--> 942\u001B[0m     response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_send_handling_redirects\u001B[49m\u001B[43m(\u001B[49m\n\u001B[0;32m    943\u001B[0m \u001B[43m        \u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    944\u001B[0m \u001B[43m        \u001B[49m\u001B[43mfollow_redirects\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mfollow_redirects\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    945\u001B[0m \u001B[43m        \u001B[49m\u001B[43mhistory\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mhistory\u001B[49m\u001B[43m,\u001B[49m\n\u001B[0;32m    946\u001B[0m \u001B[43m    \u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    947\u001B[0m     \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    948\u001B[0m         \u001B[38;5;28;01mtry\u001B[39;00m:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_client.py:979\u001B[0m, in \u001B[0;36mClient._send_handling_redirects\u001B[1;34m(self, request, follow_redirects, history)\u001B[0m\n\u001B[0;32m    976\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m hook \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_event_hooks[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mrequest\u001B[39m\u001B[38;5;124m\"\u001B[39m]:\n\u001B[0;32m    977\u001B[0m     hook(request)\n\u001B[1;32m--> 979\u001B[0m response \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_send_single_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    980\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[0;32m    981\u001B[0m     \u001B[38;5;28;01mfor\u001B[39;00m hook \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_event_hooks[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mresponse\u001B[39m\u001B[38;5;124m\"\u001B[39m]:\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_client.py:1015\u001B[0m, in \u001B[0;36mClient._send_single_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m   1010\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mRuntimeError\u001B[39;00m(\n\u001B[0;32m   1011\u001B[0m         \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mAttempted to send an async request with a sync Client instance.\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[0;32m   1012\u001B[0m     )\n\u001B[0;32m   1014\u001B[0m \u001B[38;5;28;01mwith\u001B[39;00m request_context(request\u001B[38;5;241m=\u001B[39mrequest):\n\u001B[1;32m-> 1015\u001B[0m     response \u001B[38;5;241m=\u001B[39m \u001B[43mtransport\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhandle_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mrequest\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m   1017\u001B[0m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(response\u001B[38;5;241m.\u001B[39mstream, SyncByteStream)\n\u001B[0;32m   1019\u001B[0m response\u001B[38;5;241m.\u001B[39mrequest \u001B[38;5;241m=\u001B[39m request\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:232\u001B[0m, in \u001B[0;36mHTTPTransport.handle_request\u001B[1;34m(self, request)\u001B[0m\n\u001B[0;32m    218\u001B[0m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(request\u001B[38;5;241m.\u001B[39mstream, SyncByteStream)\n\u001B[0;32m    220\u001B[0m req \u001B[38;5;241m=\u001B[39m httpcore\u001B[38;5;241m.\u001B[39mRequest(\n\u001B[0;32m    221\u001B[0m     method\u001B[38;5;241m=\u001B[39mrequest\u001B[38;5;241m.\u001B[39mmethod,\n\u001B[0;32m    222\u001B[0m     url\u001B[38;5;241m=\u001B[39mhttpcore\u001B[38;5;241m.\u001B[39mURL(\n\u001B[1;32m   (...)\u001B[0m\n\u001B[0;32m    230\u001B[0m     extensions\u001B[38;5;241m=\u001B[39mrequest\u001B[38;5;241m.\u001B[39mextensions,\n\u001B[0;32m    231\u001B[0m )\n\u001B[1;32m--> 232\u001B[0m \u001B[43m\u001B[49m\u001B[38;5;28;43;01mwith\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mmap_httpcore_exceptions\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m:\u001B[49m\n\u001B[0;32m    233\u001B[0m \u001B[43m    \u001B[49m\u001B[43mresp\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_pool\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhandle_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mreq\u001B[49m\u001B[43m)\u001B[49m\n\u001B[0;32m    235\u001B[0m \u001B[38;5;28;01massert\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(resp\u001B[38;5;241m.\u001B[39mstream, typing\u001B[38;5;241m.\u001B[39mIterable)\n",
      "File \u001B[1;32m~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\contextlib.py:155\u001B[0m, in \u001B[0;36m_GeneratorContextManager.__exit__\u001B[1;34m(self, typ, value, traceback)\u001B[0m\n\u001B[0;32m    153\u001B[0m     value \u001B[38;5;241m=\u001B[39m typ()\n\u001B[0;32m    154\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m--> 155\u001B[0m     \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mgen\u001B[38;5;241m.\u001B[39mthrow(typ, value, traceback)\n\u001B[0;32m    156\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mStopIteration\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m exc:\n\u001B[0;32m    157\u001B[0m     \u001B[38;5;66;03m# Suppress StopIteration *unless* it's the same exception that\u001B[39;00m\n\u001B[0;32m    158\u001B[0m     \u001B[38;5;66;03m# was passed to throw().  This prevents a StopIteration\u001B[39;00m\n\u001B[0;32m    159\u001B[0m     \u001B[38;5;66;03m# raised inside the \"with\" statement from being suppressed.\u001B[39;00m\n\u001B[0;32m    160\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m exc \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m value\n",
      "File \u001B[1;32m~\\PycharmProjects\\deephack\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:86\u001B[0m, in \u001B[0;36mmap_httpcore_exceptions\u001B[1;34m()\u001B[0m\n\u001B[0;32m     83\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m\n\u001B[0;32m     85\u001B[0m message \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mstr\u001B[39m(exc)\n\u001B[1;32m---> 86\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m mapped_exc(message) \u001B[38;5;28;01mfrom\u001B[39;00m \u001B[38;5;21;01mexc\u001B[39;00m\n",
      "\u001B[1;31mReadTimeout\u001B[0m: The read operation timed out"
     ]
    }
   ],
   "execution_count": 15
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
