{
 "cells": [
  {
   "cell_type": "raw",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "prompt_task = \"\"\"You will process an input text and classify it according to predefined categories, extracting keyword phrases. The task is divided into two steps:\n",
    "\n",
    "**Step 1:**\n",
    "- Determine whether the input text belongs to one of the following categories:\n",
    "    - If the text describes the \"base model,\" classify it as `base_model_raw_paragraph`.\n",
    "    - If the text describes the \"training approach,\" classify it as `approach_raw_paragraph`.\n",
    "    - If the text describes the \"task of the model,\" classify it as `task_raw_paragraph`.\n",
    "    - If the text describes the \"architecture family,\" classify it as `architectureFamily_raw_paragraph`.\n",
    "    - If the text describes the \"specific architecture,\" classify it as `modelArchitecture_raw_paragraph`.\n",
    "    - If the text describes the \"datasets used by the model,\" classify it as `datasets_raw_paragraph`.\n",
    "    - If the text describes the \"target users or use cases,\" classify it as `users_raw_paragraph`.\n",
    "    - If none of these categories apply, output `other` and stop after this step.\n",
    "\n",
    "**Step 2:**\n",
    "- Based on the classification result from Step 1, extract keyword phrases related to the assigned category:\n",
    "    - If classified as `base_model_raw_paragraph`, extract keyword phrases related to the base model.\n",
    "    - If classified as `approach_raw_paragraph`, extract keyword phrases related to the training approach.\n",
    "    - If classified as `task_raw_paragraph`, extract keyword phrases related to the task.\n",
    "    - If classified as `architectureFamily_raw_paragraph`, extract keyword phrases related to the architecture family.\n",
    "    - If classified as `modelArchitecture_raw_paragraph`, extract keyword phrases related to the specific architecture.\n",
    "    - If classified as `datasets_raw_paragraph`, extract keyword phrases related to the dataset.\n",
    "    - If classified as `users_raw_paragraph`, extract keyword phrases related to the target users or use cases.\n",
    "\n",
    "**Output:**\n",
    "- If classification is successful, output `{\"label\":\"category label\",\"keyword\":\"extracted keyword phrases\"}`.\n",
    "- If classification fails, output `{\"label\":\"other\",\"keyword\":null}`.\n",
    "\n",
    "Remember: output only the final answer, a string in JSON format, with no additional characters.\n",
    "\n",
    "**Input:**\\n\"\"\"\n",
    "prompt_task"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "prompt_task = \"\"\"You will process an input text, classify it according to predefined categories, and give the answer:\n",
    "\n",
    "- Determine whether the input text belongs to one of the following categories:\n",
    "    - If you know the answer to \"base model\" from this text, classify it as `base_model_raw_paragraph` and give the answer.\n",
    "    - If you know the answer to \"training approach\" from this text, classify it as `approach_raw_paragraph` and give the answer.\n",
    "    - If you know the answer to \"task of the model\" from this text, classify it as `task_raw_paragraph` and give the answer.\n",
    "    - If you know the answer to \"architecture of the model\" from this text, classify it as `modelArchitecture_raw_paragraph` and give the answer.\n",
    "    - If you know the answer to \"datasets used by the model\" from this text, classify it as `datasets_raw_paragraph` and give the answer.\n",
    "    - If you know the answer to \"target users or use cases\" from this text, classify it as `users_raw_paragraph` and give the answer.\n",
    "    - If none of these categories apply, output `other` with a `null` answer.\n",
    "\n",
    "**Output:**\n",
    "- If classification is successful, output `{\"label\":\"category label\",\"answer\":\"the answer of category from the input text\"}`.\n",
    "- If classification fails, output `{\"label\":\"other\",\"answer\":null}`.\n",
    "\n",
    "Remember: output only the final answer, a string in JSON format, with no additional characters.\n",
    "\n",
    "**Input:**\\n\"\"\"\n",
    "prompt_task"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "prompt_task = \"\"\"You will process an input text and classify it according to predefined categories, providing the corresponding answer. Here are the classification criteria:\n",
    "\n",
    "- If the text provides information about the \"base model\", classify it as `base_model_raw_paragraph` and provide the answer about the \"base model\".\n",
    "- If the text provides information about the \"training approach\", classify it as `approach_raw_paragraph` and provide the answer about the \"training approach\".\n",
    "- If the text provides information about the \"task of the model\", classify it as `task_raw_paragraph` and provide the answer about the \"task of the model\".\n",
    "- If the text provides information about the \"architecture of the model\", classify it as `modelArchitecture_raw_paragraph` and provide the answer about the \"architecture of the model\".\n",
    "- If the text provides information about the \"datasets used by the model\", classify it as `datasets_raw_paragraph` and provide the answer about the \"datasets used by the model\".\n",
    "- If the text provides information about the \"target users or use cases\", classify it as `users_raw_paragraph` and provide the answer about the \"target users or use cases\".\n",
    "- If none of these categories apply, output `{\"label\":\"other\",\"answer\":null}`.\n",
    "\n",
    "Output Format:\n",
    "- On successful classification, output the following string in JSON format: `{\"label\":\"category label\",\"answer\":\"the answer derived from the input text\"}`.\n",
    "- On failure to classify, output: `{\"label\":\"other\",\"answer\":null}`.\n",
    "Please ensure that your output contains only the final answer in the specified JSON format, without any additional text or characters.\n",
    "\n",
    "Input:\\n\"\"\"\n",
    "prompt_task"
   ]
  },
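  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The prompts above ask for a bare JSON string, but in the runs below the model occasionally wraps the object in prose or a code fence. A minimal sketch of a tolerant parser, assuming flat objects with a \"label\" key; the helper name `parse_label_answer` is hypothetical, and the imported `try_parse_json_object` may already serve this purpose:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import re\n",
    "\n",
    "def parse_label_answer(reply: str) -> dict:\n",
    "    \"\"\"Pull the first {...} object out of a model reply; fall back to the failure label.\"\"\"\n",
    "    match = re.search(r\"\\{.*?\\}\", reply, re.DOTALL)\n",
    "    if match:\n",
    "        try:\n",
    "            obj = json.loads(match.group(0))\n",
    "            if \"label\" in obj:\n",
    "                return obj\n",
    "        except json.JSONDecodeError:\n",
    "            pass\n",
    "    return {\"label\": \"other\", \"answer\": None}"
   ]
  },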
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'You will process an input text and classify it according to predefined categories, providing the corresponding answer. Here are the classification criteria:\\n\\n- If the text provides information about the \"base model\", classify it as `base_model_raw_paragraph` and provide the answer about the \"base model\" with keywords.\\n- If the text provides information about the \"training approach\", classify it as `approach_raw_paragraph` and provide the answer about the \"training approach\" with keywords.\\n- If the text provides information about the \"task of the model\", classify it as `task_raw_paragraph` and provide the answer about the \"task of the model\" with keywords.\\n- If the text provides information about the \"architecture of the model\", classify it as `modelArchitecture_raw_paragraph` and provide the answer about the \"architecture of the model\" with keywords.\\n- If the text provides information about the \"datasets used by the model\", classify it as `datasets_raw_paragraph` and provide the answer about the \"datasets used by the model\" with keywords.\\n- If the text provides information about the \"target users or use cases\", classify it as `users_raw_paragraph` and provide the answer about the \"target users or use cases\" with keywords.\\n- If none of these categories apply, output `{\"label\":\"other\",\"answer\":null}`.\\n\\noutput Format:\\n- On successful classification, output the following string in JSON format: `{\"label\":\"category label\",\"answer\":\"the answer derived from the input text\"}`.\\n- On failure to classify, output: `{\"label\":\"other\",\"answer\":null}`.\\nPlease ensure that your output contains only the final answer in the specified JSON format, without any additional text or characters.\\n\\nInput:\\n'"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt_task = \"\"\"You will process an input text and classify it according to predefined categories, providing the corresponding answer. Here are the classification criteria:\n",
    "\n",
    "- If the text provides information about the \"base model\", classify it as `base_model_raw_paragraph` and provide the answer about the \"base model\" with keywords.\n",
    "- If the text provides information about the \"training approach\", classify it as `approach_raw_paragraph` and provide the answer about the \"training approach\" with keywords.\n",
    "- If the text provides information about the \"task of the model\", classify it as `task_raw_paragraph` and provide the answer about the \"task of the model\" with keywords.\n",
    "- If the text provides information about the \"architecture of the model\", classify it as `modelArchitecture_raw_paragraph` and provide the answer about the \"architecture of the model\" with keywords.\n",
    "- If the text provides information about the \"datasets used by the model\", classify it as `datasets_raw_paragraph` and provide the answer about the \"datasets used by the model\" with keywords.\n",
    "- If the text provides information about the \"target users or use cases\", classify it as `users_raw_paragraph` and provide the answer about the \"target users or use cases\" with keywords.\n",
    "- If none of these categories apply, output `{\"label\":\"other\",\"answer\":null}`.\n",
    "\n",
    "Output Format:\n",
    "- On successful classification, output the following string in JSON format: `{\"label\":\"category label\",\"answer\":\"the answer derived from the input text\"}`.\n",
    "- On failure to classify, output: `{\"label\":\"other\",\"answer\":null}`.\n",
    "Please ensure that your output contains only the final answer in the specified JSON format, without any additional text or characters.\n",
    "\n",
    "Input:\\n\"\"\"\n",
    "prompt_task"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_task = \"\"\"The data you need to generate is shown below; you need these fields and only these fields. The specific instruction for each field is given inside that field.\n",
    "```json\n",
    "{\n",
    "  \"base_model\": \"{Which model was this model fine-tuned from, according to the document? If fine-tuning is not explicitly mentioned, fill in this model's own name}\",\n",
    "  \"base_model_raw_paragraph\": \"{If base_model was found, fill in the original paragraph it was based on; if not found, leave this field empty}\",\n",
    "  \"datasets\": \"{Which datasets were used for training? Fill in the dataset names; if no explicit statement is found, fill in 'unknown'}\",\n",
    "  \"datasets_raw_paragraph\": \"{If datasets were found, fill in the corresponding original paragraph; if not found, leave this field empty}\",\n",
    "  \"description\": \"{Copy the 'model introduction' passage from the original text verbatim; if not found, leave this field empty}\"\n",
    "}\n",
    "```\n",
    "You are very rigorous: you extract only information that is strictly in the markdown file, leave a field empty when there is no relevant information, and never rewrite anything on your own. Return the data strictly in JSON format so that other programs can process it more easily. Tell me when you are ready.\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_template = \"\"\"\n",
    "You are a data annotator. Faithfully extract data from the text given below.\n",
    "\n",
    "【\n",
    "{content}\n",
    "】\n",
    "\n",
    "The data you need to generate is as follows; only these fields are needed, and the specific instruction for each field is given inside that field.\n",
    "```json\n",
    "{demo}\n",
    "```\n",
    "\n",
    "You are very rigorous: you extract information only from the given text, fill in `null` when there is no relevant information, never rewrite anything on your own, and return the data strictly in JSON format.\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from trafilatura import fetch_url, extract\n",
    "import pandas as pd\n",
    "from utils import parse_xml, try_parse_json_object, get_text_table\n",
    "from tqdm import tqdm\n",
    "import json\n",
    "import re\n",
    "import logging\n",
    "import logging.config\n",
    "import os\n",
    "\n",
    "logger = logging.getLogger(__name__)\n",
    "from zhipuai import ZhipuAI\n",
    "client = ZhipuAI(api_key=os.environ[\"ZHIPUAI_API_KEY\"])  # read the key from the environment rather than hardcoding it\n",
    "new_lines = \"=\" * 10  # separator string"
   ]
  },
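  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the client in place, one classification call per paragraph can be sketched as follows. This is a sketch, not the actual loop used below: the model name \"glm-4\" and the helper name `classify_paragraph` are assumptions, and `prompt_task` is the prompt defined above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def classify_paragraph(paragraph: str) -> str:\n",
    "    \"\"\"Send one model-card paragraph through prompt_task and return the raw reply text.\"\"\"\n",
    "    response = client.chat.completions.create(\n",
    "        model=\"glm-4\",  # assumed model name; substitute the model actually used\n",
    "        messages=[{\"role\": \"user\", \"content\": prompt_task + paragraph}],\n",
    "    )\n",
    "    return response.choices[0].message.content"
   ]
  },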
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5000"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "with open('id_list_merged.json', 'r') as f:\n",
    "    model_id_list = json.load(f)\n",
    "len(model_id_list)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  0%|          | 0/30 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"15.5B parameter models, Multi Query Attention, context window of 8192 tokens\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"technical assistant\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"starcoder, AutoModelForCausalLM, GPU/CPU usage\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Fill-in-the-middle, special tokens, prefix/middle/suffix, input/output\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"pretraining dataset, filtered for permissive licenses, search index for pretraining data\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"80+ programming languages, English predominant in source code\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"GPT-2 model with multi-query attention and Fill-in-the-Middle objective\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"GPUs: 512 Tesla A100, Training time: 24 days, Training FLOPS: 8.46E+22\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Neural networks: PyTorch, BP16 if applicable: apex\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  3%|▎         | 1/30 [00:28<13:56, 28.84s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Evaluation results, pass@1, HumanEval, MBPP, DS-1000, MultiPL-HumanEval, performance metrics\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"original CompVis Stable Diffusion codebase\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"text-to-image generation\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"research purposes, safe deployment of models, probing limitations and biases, generation of artworks, design, artistic processes, educational or creative tools, research on generative models\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate content, not trained to be factual, out-of-scope for abilities\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"model limitations, photorealism, legible text, compositionality, faces and people generation, language limitation, autoencoding part, memorization\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"LAION-2B(en), insufficient representation of non-English languages and cultures\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"LAION-2B (en), subsets thereof, laion-high-resolution, laion-improved-aesthetics\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"classifier-free guidance scales, 50 PLMS sampling steps\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  7%|▋         | 2/30 [00:50<11:29, 24.63s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"DALL-E Mini\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Llama 3.1, auto-regressive language model, optimized transformer architecture, Grouped-Query Attention (GQA), supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF)\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"commercial and research use, assistant-like chat, natural language generation tasks, synthetic data generation, distillation\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Meta-Llama-3.1-8B-Instruct, transformers, original llama codebase\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Transformers pipeline, Meta-Llama-3.1-8B-Instruct, torch_dtype: torch.bfloat16, device_map: auto\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"tool use with transformers, LLaMA-3.1 supports multiple tool use formats, chat templates, respond to weather queries\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"llama, Meta-Llama-3.1-8B-Instruct\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"custom training libraries, Meta's custom built GPU cluster, production infrastructure, pretraining, fine-tuning, annotation, evaluation, 39.3M GPU hours, H100-80GB hardware, training time, power consumption, power usage efficiency\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"~15 trillion tokens, publicly available sources, instruction datasets, 25M synthetically generated examples, pretraining data cutoff December 2023\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Llama 3.1\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"Llama models, responsibly deployed, variety of use cases, Community Stories webpage, helpful models, model safety, generic use cases, standard set of harms, developers, tailor safety, own policy, necessary safeguards\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"multi-faceted approach, data collection, human-generated data, synthetic data, LLM-based classifiers, high-quality prompts and responses, enhancing data quality control, model refusals, refusal tone, safety data strategy, tone guidelines\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"assess risks related to proliferation of chemical and biological weapons, Child Safety risks, cyber attack enablement, attack automation, social engineering uplift for cyber attackers\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 10%|█         | 3/30 [01:40<16:17, 36.21s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"meta-llama/Llama-3.1-8B\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Image-to-Video generation\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"latent diffusion model, 25 frames, resolution 576x1024, finetuned, f8-decoder, temporal consistency, standard frame-wise decoder\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"non-commercial, research, commercial, generative models, safe deployment, biases, artistic processes, educational tools, creative tools\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"out-of-scope, not trained to be factual, true representations, violating Acceptable Use Policy\"}\n",
      "The input text does not fit any of the predefined categories such as information about the \"base model\", \"training approach\", \"task of the model\", \"architecture of the model\", \"datasets used by the model\", or \"target users or use cases\". Therefore, the output is:\n",
      "\n",
      "```json\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"non-commercial, commercial usage\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Stability-AI, generative-models\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 13%|█▎        | 4/30 [01:59<12:42, 29.33s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"image-to-video models that generate short videos/animations closely following the given input image\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"MistralTokenizer, ChatCompletionRequest\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Transformer architecture\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Mistral-7B-v0.2, 32k context window, Rope-theta = 1e6, No Sliding-Window Attention\"}\n",
      ".decode(\"utf-8\")\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Mistral 7B Instruct model, easily fine-tuned, compelling performance\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 17%|█▋        | 5/30 [02:15<10:18, 24.76s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"40B parameters causal decoder-only model\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Falcon-40B, best open-source model, outperforms LLaMA, StableLM, RedPajama, MPT, OpenLLM Leaderboard, raw pretrained model, further finetuned, Falcon-40B-Instruct\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Causal decoder-only\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"foundation for further specialization, finetuning, specific usecases, summarization, text generation, chatbot\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"production use, assessment of risks, mitigation, irresponsible use, harmful use\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"Falcon-40B, finetuning, specific set of tasks, guardrails, appropriate precautions, production use\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"tiiuae/falcon-40b\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"Falcon-40B trained on 1,000B tokens of RefinedWeb, enhanced with curated corpora, inspired by The Pile\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"Falcon-40B, 3D parallelism strategy, TP=8, PP=4, DP=12, ZeRO\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"Training started in December 2022, took two months\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Falcon-40B, decoder-only model, rotary positionnal embeddings, multiquery Attention, FlashAttention, parallel attention/MLP, two layer norms\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"AWS SageMaker, 384 A100 40GB GPUs, P4d instances\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"custom distributed training codebase, Gigatron, 3D parallelism, ZeRO, high-performance Triton kernels, FlashAttention\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"The RefinedWeb dataset for Falcon LLM\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 20%|██        | 6/30 [02:48<11:01, 27.55s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"latent adversarial diffusion distillation, FLUX.1\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"Developers and creatives looking to build on top of FLUX.1\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 23%|██▎       | 7/30 [02:58<08:18, 21.68s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"latent text-to-image diffusion model, fine-tuning\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"entertainment purposes, generative art assistant\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 27%|██▋       | 8/30 [03:07<06:28, 17.66s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"sentence-transformers model, 384 dimensional vector space\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Using sentence-transformers for embeddings generation\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"transformer model, pooling-operation, contextualized word embeddings\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"train sentence embedding models, self-supervised contrastive learning objective, predict sentence pairs\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"sentence and short paragraph encoder, information retrieval, clustering, sentence similarity tasks\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"pretrained nreimers/MiniLM-L6-H384-uncased\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"fine-tuning, contrastive objective, cosine similarity, cross entropy loss\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"training on TPU v3-8, 100k steps, batch size 1024, learning rate warm up 500, sequence length 128 tokens, AdamW optimizer, learning rate 2e-5\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 30%|███       | 9/30 [03:28<06:30, 18.61s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"multiple datasets, S2ORC, WikiAnswers, PAQ, Stack Exchange, MS MARCO, GOOAQ, Yahoo Answers, Code Search, COCO, SPECTER, SearchQA, Eli5, Flickr 30k, SNLI, MultiNLI, Sentence Compression, Wikihow, Altlex, Quora Question Triplets, Simple Wikipedia, Natural Questions (NQ), SQuAD2.0, TriviaQA\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"GPT-2, pretrained model on English language using a causal language modeling (CLM) objective\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"GPT-2, transformers model, pretrained, English data, self-supervised fashion, no human labeling, publicly available data\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"text generation, fine-tune, downstream task, model hub\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"text generation, get the features of a given text\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"unfiltered content from the internet, training data not released as a dataset\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"WebText, Reddit, karma, 40GB, not publicly released\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"tokenized using a byte-level version of Byte Pair Encoding (BPE), vocabulary size of 50,257, inputs are sequences of 1024 consecutive tokens\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"zero-shot\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 33%|███▎      | 10/30 [03:46<06:07, 18.36s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"fast generative text-to-image model, synthesize photorealistic images, single network evaluation\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"distilled version of SDXL 1.0, Adversarial Diffusion Distillation (ADD), score distillation, adversarial loss\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"non-commercial, commercial, research, generative models, real-time applications, safe deployment, biases, educational, creative tools\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"SDXL-Turbo, text-to-image, image-to-image, pipeline, AutoPipelineForText2Image, AutoPipelineForImage2Image\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate content, not trained to be factual, true representations of people or events\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"non-commercial and commercial usage\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 37%|███▋      | 11/30 [04:11<06:27, 20.39s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Grok-1 open-weights model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 40%|████      | 12/30 [04:13<04:28, 14.94s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Llama 2, 7 billion to 70 billion parameters, fine-tuned, optimized for dialogue, HF Mirror Transformers format\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Llama 2, auto-regressive language model, optimized transformer architecture, supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF), Grouped-Query Attention (GQA), inference scalability\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"commercial and research use, assistant-like chat, natural language generation tasks\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"pretrained on 2 trillion tokens, publicly available sources, fine-tuning data includes publicly available instruction datasets, one million new human-annotated examples, no Meta user data, pretraining data cutoff September 2022, tuning data up to July 2023\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 43%|████▎     | 13/30 [04:39<05:10, 18.25s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"ChatGLM2-6B, base model, GLM, pre-training, human preference alignment training, performance improvement\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 47%|████▋     | 14/30 [04:50<04:18, 16.15s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"pythia-12b, pythia-6.9b, pythia-2.8b\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"dolly-v2-12b, 12 billion parameter causal language model, derived from EleutherAI's Pythia-12b, fine-tuned on ~15K record instruction corpus\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"databricks/dolly-v2-12b, transformers library, accelerate library, InstructionTextGenerationPipeline\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"dolly-v2-12b, not state-of-the-art, not designed to perform competitively, modern model architectures, larger pretraining corpuses\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"training corpuses, The Pile, GPT-J's pre-training corpus, public internet, databricks-dolly-15k, natural language instructions, Databricks employees, Wikipedia, biases, typos, factual errors\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"dolly-v2-12b, dolly-v1-6b\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 50%|█████     | 15/30 [05:10<04:20, 17.34s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "umpire system uses a Transformer-based architecture with self-attention mechanisms to predict the next move in a chess game. It has been trained on a large corpus of chess games from grandmasters. The model is intended for use by both amateur and professional chess players to improve their game strategy and decision-making skills.\n",
      "\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Transformer-based architecture with self-attention mechanisms\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"SDXL-Lightning, lightning-fast text-to-image generation, 1024px images, Progressive Adversarial Diffusion Distillation, UNet, LoRA\"}\n",
      "of the base model, with a sequence length of 128 and a batch size of 8. The base model has 12 layers and utilizes a Transformer architecture with a self-attention mechanism. The model is trained on the English language dataset with 1 billion words and a variety of texts including web pages, books, and news articles.\n",
      "\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"base model, 12 layers, Transformer architecture, self-attention mechanism\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"UNet checkpoint, LoRA checkpoint\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"UNet2DConditionModel, 2-Step, 4-Step, 8-Step UNet\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"non-SDXL base models\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"1-step UNet, UNet2DConditionModel\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"ComfyUI, checkpoint, inference steps, Euler sampler, sgm_uniform scheduler\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"1-step model, quality, 2-step model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 53%|█████▎    | 16/30 [05:34<04:29, 19.27s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Stable Diffusion v2, stable-diffusion-2-base, 512-base-ema.ckpt\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate and modify images based on text prompts\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Stable Diffusion 2, Diffusers library\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"research purposes, safe deployment of models, probing limitations and biases, generation of artworks, use in design, applications in educational or creative tools, research on generative models\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate content, not factual or true representations\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to 'A red cube on top of a blue sphere', limitations with faces, people, non-English language captions, autoencoding part is lossy\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"LAION-2B(en), insufficient representation of non-English languages and cultures\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"LAION-5B and subsets, filtered using LAION's NSFW detector\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"classifier-free guidance scales, 50 steps DDIM sampling steps, evaluated at 512x512 resolution\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 57%|█████▋    | 17/30 [05:57<04:24, 20.36s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"BERT base model (uncased), pretrained model on English language, MLM objective\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"BERT, transformers model, pretrained, self-supervised fashion, Masked language modeling (MLM), Next sentence prediction (NSP), English language representation\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"BERT, base and large variations, uncased and cased, Chinese and multilingual versions, whole word masking models\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"masked language modeling, next sentence prediction, sequence classification, token classification, question answering\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"bert-base-uncased\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"bert-base-uncased\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"BookCorpus, English Wikipedia\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"texts are lowercased and tokenized using WordPiece, vocabulary size, consecutive sentences, random sentence, combined length of less than 512 tokens, masking procedure, 15% of tokens are masked, replaced by [MASK], replaced by random token, left as is\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"training on 4 cloud TPUs, batch size of 256, sequence length, optimizer Adam, learning rate 1e-4, weight decay, warmup, linear decay\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"downstream tasks, Glue test results\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 60%|██████    | 18/30 [06:22<04:20, 21.68s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Voice generation model, clone voices, different languages, quick 6-second audio clip\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Supports 17 languages, Voice cloning with just a 6-second audio clip, Emotion and style transfer, Cross-language voice cloning, Multi-lingual speech generation, 24khz sampling rate\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Architectural improvements for speaker conditioning, enables the use of multiple speaker references and interpolation between speakers, stability improvements, better prosody and audio quality\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"17 languages supported: English, Spanish, French, German, Italian, Portuguese, Polish, Turkish, Russian, Dutch, Czech, Arabic, Chinese, Japanese, Hungarian, Korean, Hindi\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 63%|██████▎   | 19/30 [06:37<03:38, 19.87s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"tts_models/multilingual/multi-dataset/xtts_v2, Xtts, config, load_checkpoint, synthesize\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Llama 2, pretrained model, 7B parameters\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"auto-regressive language model, optimized transformer architecture, supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF), Grouped-Query Attention (GQA)\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"commercial and research use, assistant-like chat, natural language generation tasks\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"pretrained on 2 trillion tokens, fine-tuning data includes publicly available instruction datasets, one million new human-annotated examples, no Meta user data, pretraining data cutoff September 2022, tuning data up to July 2023\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Llama 1, Llama 2\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 70%|███████   | 21/30 [07:02<02:27, 16.36s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"SDXL base model, ensemble of experts, pipeline for latent diffusion, standalone module\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate and modify images based on text prompts\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"SDXL base model, significantly better performance, previous variants\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"StableDiffusionXLImg2ImgPipeline, torch.compile, unet\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"research purposes, generation of artworks, design, artistic processes, educational tools, creative tools, research on generative models, safe deployment, probing limitations, biases of generative models\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"generate content, not trained to be factual or true representations\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"model struggles with tasks involving compositionality, rendering images corresponding to descriptions\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 73%|███████▎  | 22/30 [07:15<02:05, 15.66s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"trained on 768x768px images, aspect ratios, photorealistic gens\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"dreamlike-photoreal-2.0\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Stable Diffusion model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 77%|███████▋  | 23/30 [07:25<01:38, 14.09s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"non-quantized version of C4AI Command R+\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"C4AI Command R+, 104B billion parameter model, Retrieval Augmented Generation (RAG), tool use, multilingual, 10 languages\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"auto-regressive language model, optimized transformer architecture, supervised fine-tuning (SFT), preference training\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"grounded summarization, Retrieval Augmented Generation (RAG), predicting relevant documents, predicting cited documents, generating answers with grounding spans\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```json\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Single-Step Tool Use Capabilities, Function Calling, Tool Selection, Response Generation\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Rendering Single-Step Tool Use Prompts\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```json\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```\n",
      "```json\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"Multi-step tool use, building agents that can plan and execute a sequence of actions using multiple tools\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"highly performant 104 billion parameter model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 80%|████████  | 24/30 [07:59<01:57, 19.51s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Reflection Llama-3.1 70B, open-source LLM\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Llama 3.1 70B, special tokens, Reflection Llama-3.1 70B, internal thoughts and reasoning, final answer separation\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"System Prompt, world-class AI system, capable of complex reasoning and reflection\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"dataset, report, Reflection 405B model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 83%|████████▎ | 25/30 [08:10<01:25, 17.19s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"meta-llama/Llama-3.1-70B\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Mixtral-8x7B, Large Language Model, Sparse Mixture of Experts, Llama 2 70B\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"model_id: mistralai/Mixtral-8x7B-v0.1\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"AutoModelForCausalLM, use_flash_attention_2\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"pretrained base model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 87%|████████▋ | 26/30 [08:24<01:05, 16.26s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"pre-trained model for automatic speech recognition (ASR) and speech translation, Whisper, generalise to many datasets and domains without fine-tuning\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Transformer based encoder-decoder model, sequence-to-sequence model\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"transcription or translation, speech recognition or speech translation\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"transcribe, automatically predicts the output language (English), context tokens are 'unforced'\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"French to French transcription\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"translate, forces, Whisper model, perform, speech translation\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"WhisperProcessor, WhisperForConditionalGeneration, openai/whisper-large-v2\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"LibriSpeech test-clean\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"audio transcription, chunking algorithm, Transformers pipeline method\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"Fine-tuning, Whisper model, generalise, languages, tasks, predictive capabilities, 🤗 Transformers, 5 hours of labelled data\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"AI researchers, ASR solution for developers, English speech recognition\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"680,000 hours of audio, corresponding transcripts, 65% English, 18% non-English with English transcripts, 17% non-English with corresponding transcript, 98 different languages\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"sequence-to-sequence architecture, prone to generating repetitive texts\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"accessibility tools, real-time transcription, speech recognition, translation, economic implications, surveillance technologies, automatic transcription, specific individuals recognition\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 90%|█████████ | 27/30 [08:56<01:02, 20.71s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Phi-3-Mini-128K-Instruct, 3.8 billion-parameter, lightweight, state-of-the-art open model\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"commercial and research use, English, Memory/compute constrained environments, Latency bound scenarios, Strong reasoning, Accelerate research on language and multimodal models, Building block for generative AI powered features\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Tokenizer, Phi-3 Mini-128K-Instruct, vocabulary size, tokens\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Phi-3 Mini-128K-Instruct, 3.8B parameters, dense decoder-only Transformer model\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"Publicly available documents, high-quality educational data, code, synthetic textbook-like data, chat format supervised data\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"multi-GPUs supervised fine-tuning (SFT), TRL and Accelerate modules\"}\n",
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"model's reasoning ability, common sense reasoning, logical reasoning, long document/meeting summarization, long document QA\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"Phi-3 Mini-128K-Instruct model, flash attention, NVIDIA A100, NVIDIA A6000, NVIDIA H100, ONNX models\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 93%|█████████▎| 28/30 [09:23<00:45, 22.63s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"mistralai/Mistral-7B-v0.1\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Zephyr-7B-β, highest ranked 7B chat model, MT-Bench, AlpacaEval benchmarks, performance compared to larger open models, proprietary models\"}\n",
      "{\"label\":\"users_raw_paragraph\",\"answer\":\"chat, demo, capabilities\"}\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"Zephyr-7B-β, mistralai/Mistral-7B-v0.1, unknown corpus size and composition\"}\n",
      "{\"label\":\"datasets_raw_paragraph\",\"answer\":\"evaluation set\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"hyperparameters, learning_rate, train_batch_size, eval_batch_size, seed, distributed_type, num_devices, total_train_batch_size, total_eval_batch_size, optimizer, Adam, betas, epsilon, lr_scheduler_type, lr_scheduler_warmup_ratio, num_epochs\"}\n",
      "{\"label\":\"approach_raw_paragraph\",\"answer\":\"DPO training metrics\"}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "{\"label\":\"other\",\"answer\":null}\n",
      "```json\n",
      "{\"label\":\"base_model_raw_paragraph\",\"answer\":\"mistralai/Mistral-7B-v0.1\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      " 97%|█████████▋| 29/30 [09:48<00:23, 23.32s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"task_raw_paragraph\",\"answer\":\"normalized accuracy on AI2 Reasoning Challenge, HellaSwag, Drop, TruthfulQA, GSM8k, MMLU, Winogrande, win rate on AlpacaEval, score on MT-Bench\"}\n",
      "{\"label\":\"modelArchitecture_raw_paragraph\",\"answer\":\"anime-style model\"}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 30/30 [09:54<00:00, 19.81s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\"label\":\"other\",\"answer\":null}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "# Classify each model-card section with the LLM, collecting raw paragraphs and keyword phrases per label\n",
    "data_label = {}\n",
    "system_prompt = prompt_task\n",
    "stard_idx, end_idx = 20, 50\n",
    "# stard_idx, end_idx = 100, 150\n",
    "for model_id in tqdm(model_id_list[stard_idx:end_idx]):\n",
    "    parsed_result, _ = get_text_table(model_id, new_lines=new_lines)\n",
    "\n",
    "    data_label[model_id] = {}\n",
    "    for title, text in parsed_result.items():\n",
    "        user_prompt = text\n",
    "        response = client.chat.completions.create(\n",
    "            model=\"glm-4-0520\",\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": system_prompt},\n",
    "                {\"role\": \"user\", \"content\": user_prompt},\n",
    "            ],\n",
    "        )\n",
    "        answer = response.choices[0].message.content\n",
    "        print(answer)\n",
    "\n",
    "        json_text, json_object = try_parse_json_object(answer)\n",
    "        if json_object is None:\n",
    "            continue  # skip responses that could not be parsed as JSON\n",
    "        answer_dict = json_object\n",
    "        label = answer_dict[\"label\"]\n",
    "        keywords = answer_dict[\"answer\"]\n",
    "        if label != \"other\":\n",
    "            if label not in data_label[model_id]:\n",
    "                data_label[model_id][label] = []\n",
    "            data_label[model_id][label].append(user_prompt)\n",
    "            new_label = label.replace(\"_raw_paragraph\", \"\")\n",
    "            if new_label not in data_label[model_id]:\n",
    "                data_label[model_id][new_label] = []\n",
    "            data_label[model_id][new_label].append(keywords)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 30/30 [00:00<00:00, 59662.93it/s]\n"
     ]
    }
   ],
   "source": [
    "# Join each label's list of paragraphs/keyword strings into one newline-separated string\n",
    "for model_id in tqdm(model_id_list[stard_idx:end_idx]):\n",
    "    for k, v in data_label[model_id].items():\n",
    "        if v is not None:\n",
    "            v = new_lines.join(v)\n",
    "            data_label[model_id][k] = v"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>modelArchitecture_raw_paragraph</th>\n",
       "      <th>modelArchitecture</th>\n",
       "      <th>task_raw_paragraph</th>\n",
       "      <th>task</th>\n",
       "      <th>datasets_raw_paragraph</th>\n",
       "      <th>datasets</th>\n",
       "      <th>base_model_raw_paragraph</th>\n",
       "      <th>base_model</th>\n",
       "      <th>users_raw_paragraph</th>\n",
       "      <th>users</th>\n",
       "      <th>approach_raw_paragraph</th>\n",
       "      <th>approach</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>bigcode/starcoder</th>\n",
       "      <td>Model Summary==========The StarCoder models ar...</td>\n",
       "      <td>15.5B parameter models, Multi Query Attention,...</td>\n",
       "      <td>Intended use==========The model was trained on...</td>\n",
       "      <td>technical assistant==========Fill-in-the-middl...</td>\n",
       "      <td>Attribution &amp; Other Requirements==========The ...</td>\n",
       "      <td>pretraining dataset, filtered for permissive l...</td>\n",
       "      <td>Hardware==========GPUs: 512 Tesla A100 \\n     ...</td>\n",
       "      <td>GPUs: 512 Tesla A100, Training time: 24 days, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>CompVis/stable-diffusion-v-1-4-original</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Details==========Developed by: Robin Rom...</td>\n",
       "      <td>text-to-image generation==========generate con...</td>\n",
       "      <td>Bias==========While the capabilities of image ...</td>\n",
       "      <td>LAION-2B(en), insufficient representation of n...</td>\n",
       "      <td>Download the weights==========These weights ar...</td>\n",
       "      <td>original CompVis Stable Diffusion codebase====...</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>research purposes, safe deployment of models, ...</td>\n",
       "      <td>Evaluation Results==========Evaluations with d...</td>\n",
       "      <td>classifier-free guidance scales, 50 PLMS sampl...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>meta-llama/Llama-3.1-8B-Instruct</th>\n",
       "      <td>Model Information==========The Meta Llama 3.1 ...</td>\n",
       "      <td>Llama 3.1, auto-regressive language model, opt...</td>\n",
       "      <td>Tool use with transformers==========LLaMA-3.1 ...</td>\n",
       "      <td>tool use with transformers, LLaMA-3.1 supports...</td>\n",
       "      <td>Training Data==========Overview: Llama 3.1 was...</td>\n",
       "      <td>~15 trillion tokens, publicly available source...</td>\n",
       "      <td>Use with  llama==========Please, follow the in...</td>\n",
       "      <td>llama, Meta-Llama-3.1-8B-Instruct==========Lla...</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>Hardware and Software==========Training Factor...</td>\n",
       "      <td>custom training libraries, Meta's custom built...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>stabilityai/stable-video-diffusion-img2vid-xt</th>\n",
       "      <td>Model Description==========(SVD) Image-to-Vide...</td>\n",
       "      <td>latent diffusion model, 25 frames, resolution ...</td>\n",
       "      <td>Stable Video Diffusion Image-to-Video Model Ca...</td>\n",
       "      <td>Image-to-Video generation==========out-of-scop...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>How to Get Started with the Model==========Che...</td>\n",
       "      <td>Stability-AI, generative-models</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>non-commercial, research, commercial, generati...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>mistralai/Mistral-7B-Instruct-v0.2</th>\n",
       "      <td>Encode and Decode with  mistral_common========...</td>\n",
       "      <td>MistralTokenizer, ChatCompletionRequest=======...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Limitations==========The Mistral 7B Instruct m...</td>\n",
       "      <td>Mistral 7B Instruct model, easily fine-tuned, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>tiiuae/falcon-40b</th>\n",
       "      <td>🚀 Falcon-40B==========Falcon-40B is a 40B para...</td>\n",
       "      <td>40B parameters causal decoder-only model======...</td>\n",
       "      <td>Direct Use==========Research on large language...</td>\n",
       "      <td>foundation for further specialization, finetun...</td>\n",
       "      <td>Bias, Risks, and Limitations==========Falcon-4...</td>\n",
       "      <td>English, German, Spanish, French, Italian, Por...</td>\n",
       "      <td>Why use Falcon-40B?==========It is the best op...</td>\n",
       "      <td>Falcon-40B, best open-source model, outperform...</td>\n",
       "      <td>Out-of-Scope Use==========Production use witho...</td>\n",
       "      <td>production use, assessment of risks, mitigatio...</td>\n",
       "      <td>Training Procedure==========Falcon-40B was tra...</td>\n",
       "      <td>Falcon-40B, 3D parallelism strategy, TP=8, PP=...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>black-forest-labs/FLUX.1-schnell</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Usage==========We provide a reference implemen...</td>\n",
       "      <td>Developers and creatives looking to build on t...</td>\n",
       "      <td>Key Features==========Cutting-edge output qual...</td>\n",
       "      <td>latent adversarial diffusion distillation, FLUX.1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>hakurei/waifu-diffusion</th>\n",
       "      <td>waifu-diffusion v1.4 - Diffusion for Weebs====...</td>\n",
       "      <td>latent text-to-image diffusion model, fine-tuning</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Downstream Uses==========This model can be use...</td>\n",
       "      <td>entertainment purposes, generative art assistant</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>sentence-transformers/all-MiniLM-L6-v2</th>\n",
       "      <td>all-MiniLM-L6-v2==========This is a  sentence-...</td>\n",
       "      <td>sentence-transformers model, 384 dimensional v...</td>\n",
       "      <td>Usage (Sentence-Transformers)==========Using t...</td>\n",
       "      <td>Using sentence-transformers for embeddings gen...</td>\n",
       "      <td>Training data==========We use the concatenatio...</td>\n",
       "      <td>multiple datasets, S2ORC, WikiAnswers, PAQ, St...</td>\n",
       "      <td>Pre-training==========We use the pretrained   ...</td>\n",
       "      <td>pretrained nreimers/MiniLM-L6-H384-uncased</td>\n",
       "      <td>Intended uses==========Our model is intended t...</td>\n",
       "      <td>sentence and short paragraph encoder, informat...</td>\n",
       "      <td>Fine-tuning==========We fine-tune the model us...</td>\n",
       "      <td>fine-tuning, contrastive objective, cosine sim...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>openai-community/gpt2</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>How to use==========You can use this model dir...</td>\n",
       "      <td>text generation, get the features of a given text</td>\n",
       "      <td>Limitations and bias==========The training dat...</td>\n",
       "      <td>unfiltered content from the internet, training...</td>\n",
       "      <td>GPT-2==========Test the whole generation capab...</td>\n",
       "      <td>GPT-2, pretrained model on English language us...</td>\n",
       "      <td>Intended uses &amp; limitations==========You can u...</td>\n",
       "      <td>text generation, fine-tune, downstream task, m...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>stabilityai/sdxl-turbo</th>\n",
       "      <td>SDXL-Turbo Model Card==========SDXL-Turbo is a...</td>\n",
       "      <td>fast generative text-to-image model, synthesiz...</td>\n",
       "      <td>Out-of-Scope Use==========The model was not tr...</td>\n",
       "      <td>generate content, not trained to be factual, t...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>non-commercial, commercial, research, generati...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>xai-org/grok-1</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Grok-1==========This repository contains the w...</td>\n",
       "      <td>Grok-1 open-weights model</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>meta-llama/Llama-2-70b-chat-hf</th>\n",
       "      <td>Llama 2==========Llama 2 is a collection of pr...</td>\n",
       "      <td>Llama 2, 7 billion to 70 billion parameters, f...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Training Data==========Overview Llama 2 was pr...</td>\n",
       "      <td>pretrained on 2 trillion tokens, publicly avai...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>THUDM/chatglm2-6b</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>介绍==========ChatGLM2-6B 是开源中英双语对话模型  ChatGLM-6...</td>\n",
       "      <td>ChatGLM2-6B, base model, GLM, pre-training, hu...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>databricks/dolly-v2-12b</th>\n",
       "      <td>Happy Hacking!==========Downloads last month 4...</td>\n",
       "      <td>Transformer-based architecture with self-atten...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Dataset Limitations==========Like all language...</td>\n",
       "      <td>training corpuses, The Pile, GPT-J's pre-train...</td>\n",
       "      <td>Summary==========Databricks'  dolly-v2-12b , a...</td>\n",
       "      <td>pythia-12b, pythia-6.9b, pythia-2.8b==========...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>ByteDance/SDXL-Lightning</th>\n",
       "      <td>SDXL-Lightning==========SDXL-Lightning is a li...</td>\n",
       "      <td>SDXL-Lightning, lightning-fast text-to-image g...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Demos==========Generate with all configuration...</td>\n",
       "      <td>base model, 12 layers, Transformer architectur...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>ComfyUI Usage==========Please always use the c...</td>\n",
       "      <td>ComfyUI, checkpoint, inference steps, Euler sa...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>stabilityai/stable-diffusion-2</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Details==========Developed by: Robin Rom...</td>\n",
       "      <td>generate and modify images based on text promp...</td>\n",
       "      <td>Bias==========While the capabilities of image ...</td>\n",
       "      <td>LAION-2B(en), insufficient representation of n...</td>\n",
       "      <td>Stable Diffusion v2 Model Card==========This m...</td>\n",
       "      <td>Stable Diffusion v2, stable-diffusion-2-base, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Evaluation Results==========Evaluations with d...</td>\n",
       "      <td>classifier-free guidance scales, 50 steps DDIM...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>google-bert/bert-base-uncased</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Intended uses &amp; limitations==========You can u...</td>\n",
       "      <td>masked language modeling, next sentence predic...</td>\n",
       "      <td>Training data==========The BERT model was pret...</td>\n",
       "      <td>BookCorpus, English Wikipedia</td>\n",
       "      <td>BERT base model (uncased)==========Pretrained ...</td>\n",
       "      <td>BERT base model (uncased), pretrained model on...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Preprocessing==========The texts are lowercase...</td>\n",
       "      <td>texts are lowercased and tokenized using WordP...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>coqui/XTTS-v2</th>\n",
       "      <td>Updates over XTTS-v1==========2 new languages;...</td>\n",
       "      <td>Architectural improvements for speaker conditi...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>ⓍTTS==========ⓍTTS is a Voice generation model...</td>\n",
       "      <td>Voice generation model, clone voices, differen...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>lllyasviel/sd_control_collection</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>meta-llama/Llama-2-7b-hf</th>\n",
       "      <td>Model Details==========Note: Use of this model...</td>\n",
       "      <td>auto-regressive language model, optimized tran...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Training Data==========Overview Llama 2 was pr...</td>\n",
       "      <td>pretrained on 2 trillion tokens, fine-tuning d...</td>\n",
       "      <td>Llama 2==========Llama 2 is a collection of pr...</td>\n",
       "      <td>Llama 2, pretrained model, 7B parameters======...</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>stabilityai/stable-diffusion-xl-refiner-1.0</th>\n",
       "      <td>🧨 Diffusers==========Make sure to upgrade diff...</td>\n",
       "      <td>StableDiffusionXLImg2ImgPipeline, torch.compil...</td>\n",
       "      <td>Model Description==========Developed by: Stabi...</td>\n",
       "      <td>generate and modify images based on text promp...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model==========SDXL  consists of an  ensemble ...</td>\n",
       "      <td>SDXL base model, ensemble of experts, pipeline...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>dreamlike-art/dreamlike-photoreal-2.0</th>\n",
       "      <td>If you want to use dreamlike models on your we...</td>\n",
       "      <td>trained on 768x768px images, aspect ratios, ph...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>CKPT==========Download dreamlike-photoreal-2.0...</td>\n",
       "      <td>dreamlike-photoreal-2.0==========Stable Diffus...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>CohereForAI/c4ai-command-r-plus</th>\n",
       "      <td>Model Details==========Input: Models input tex...</td>\n",
       "      <td>auto-regressive language model, optimized tran...</td>\n",
       "      <td>Grounded Generation and RAG Capabilities:=====...</td>\n",
       "      <td>grounded summarization, Retrieval Augmented Ge...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Card for C4AI Command R+==========🚨 This...</td>\n",
       "      <td>non-quantized version of C4AI Command R+======...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>mattshumer/Reflection-Llama-3.1-70B</th>\n",
       "      <td>Benchmarks==========Trained from Llama 3.1 70B...</td>\n",
       "      <td>Llama 3.1 70B, special tokens, Reflection Llam...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Dataset / Report==========Both the dataset and...</td>\n",
       "      <td>dataset, report, Reflection 405B model</td>\n",
       "      <td>Reflection Llama-3.1 70B==========| IMPORTANT ...</td>\n",
       "      <td>Reflection Llama-3.1 70B, open-source LLM=====...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>mistralai/Mixtral-8x7B-v0.1</th>\n",
       "      <td>Click to expand==========+ import torch\\nfrom ...</td>\n",
       "      <td>AutoModelForCausalLM, use_flash_attention_2</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Card for Mixtral-8x7B==========The Mixtr...</td>\n",
       "      <td>Mixtral-8x7B, Large Language Model, Sparse Mix...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>openai/whisper-large-v2</th>\n",
       "      <td>Model details==========Whisper is a Transforme...</td>\n",
       "      <td>Transformer based encoder-decoder model, seque...</td>\n",
       "      <td>Usage==========To transcribe audio samples, th...</td>\n",
       "      <td>transcription or translation, speech recogniti...</td>\n",
       "      <td>Evaluation==========This code snippet shows ho...</td>\n",
       "      <td>LibriSpeech test-clean==========680,000 hours ...</td>\n",
       "      <td>Whisper==========Whisper is a pre-trained mode...</td>\n",
       "      <td>pre-trained model for automatic speech recogni...</td>\n",
       "      <td>Evaluated Use==========The primary intended us...</td>\n",
       "      <td>AI researchers, ASR solution for developers, E...</td>\n",
       "      <td>Fine-Tuning==========The pre-trained Whisper m...</td>\n",
       "      <td>Fine-tuning, Whisper model, generalise, langua...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>microsoft/Phi-3-mini-128k-instruct</th>\n",
       "      <td>Model==========Architecture: Phi-3 Mini-128K-I...</td>\n",
       "      <td>Phi-3 Mini-128K-Instruct, 3.8B parameters, den...</td>\n",
       "      <td>Benchmarks==========We report the results unde...</td>\n",
       "      <td>model's reasoning ability, common sense reason...</td>\n",
       "      <td>Datasets==========Our training data includes a...</td>\n",
       "      <td>Publicly available documents, high-quality edu...</td>\n",
       "      <td>Model Summary==========The Phi-3-Mini-128K-Ins...</td>\n",
       "      <td>Phi-3-Mini-128K-Instruct, 3.8 billion-paramete...</td>\n",
       "      <td>Intended Uses==========Primary use cases======...</td>\n",
       "      <td>commercial and research use, English, Memory/c...</td>\n",
       "      <td>Fine-tuning==========A basic example of multi-...</td>\n",
       "      <td>multi-GPUs supervised fine-tuning (SFT), TRL a...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>HuggingFaceH4/zephyr-7b-beta</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Evaluation results==========normalized accurac...</td>\n",
       "      <td>normalized accuracy on AI2 Reasoning Challenge...</td>\n",
       "      <td>Training and evaluation data==========During D...</td>\n",
       "      <td>evaluation set</td>\n",
       "      <td>Model Card for Zephyr 7B β==========Zephyr is ...</td>\n",
       "      <td>mistralai/Mistral-7B-v0.1==========7B paramete...</td>\n",
       "      <td>Intended uses &amp; limitations==========The model...</td>\n",
       "      <td>chat, demo, capabilities</td>\n",
       "      <td>Training hyperparameters==========The followin...</td>\n",
       "      <td>hyperparameters, learning_rate, train_batch_si...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>gsdf/Counterfeit-V2.5</th>\n",
       "      <td>Update==========V2.5 has been updated for ease...</td>\n",
       "      <td>anime-style model</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                                 modelArchitecture_raw_paragraph  \\\n",
       "bigcode/starcoder                              Model Summary==========The StarCoder models ar...   \n",
       "CompVis/stable-diffusion-v-1-4-original                                                      NaN   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Model Information==========The Meta Llama 3.1 ...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  Model Description==========(SVD) Image-to-Vide...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2             Encode and Decode with  mistral_common========...   \n",
       "tiiuae/falcon-40b                              🚀 Falcon-40B==========Falcon-40B is a 40B para...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                        waifu-diffusion v1.4 - Diffusion for Weebs====...   \n",
       "sentence-transformers/all-MiniLM-L6-v2         all-MiniLM-L6-v2==========This is a  sentence-...   \n",
       "openai-community/gpt2                                                                        NaN   \n",
       "stabilityai/sdxl-turbo                         SDXL-Turbo Model Card==========SDXL-Turbo is a...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 Llama 2==========Llama 2 is a collection of pr...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                        Happy Hacking!==========Downloads last month 4...   \n",
       "ByteDance/SDXL-Lightning                       SDXL-Lightning==========SDXL-Lightning is a li...   \n",
       "stabilityai/stable-diffusion-2                                                               NaN   \n",
       "google-bert/bert-base-uncased                                                                NaN   \n",
       "coqui/XTTS-v2                                  Updates over XTTS-v1==========2 new languages;...   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       Model Details==========Note: Use of this model...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    🧨 Diffusers==========Make sure to upgrade diff...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0          If you want to use dreamlike models on your we...   \n",
       "CohereForAI/c4ai-command-r-plus                Model Details==========Input: Models input tex...   \n",
       "mattshumer/Reflection-Llama-3.1-70B            Benchmarks==========Trained from Llama 3.1 70B...   \n",
       "mistralai/Mixtral-8x7B-v0.1                    Click to expand==========+ import torch\\nfrom ...   \n",
       "openai/whisper-large-v2                        Model details==========Whisper is a Transforme...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Model==========Architecture: Phi-3 Mini-128K-I...   \n",
       "HuggingFaceH4/zephyr-7b-beta                                                                 NaN   \n",
       "gsdf/Counterfeit-V2.5                          Update==========V2.5 has been updated for ease...   \n",
       "\n",
       "                                                                               modelArchitecture  \\\n",
       "bigcode/starcoder                              15.5B parameter models, Multi Query Attention,...   \n",
       "CompVis/stable-diffusion-v-1-4-original                                                      NaN   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Llama 3.1, auto-regressive language model, opt...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  latent diffusion model, 25 frames, resolution ...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2             MistralTokenizer, ChatCompletionRequest=======...   \n",
       "tiiuae/falcon-40b                              40B parameters causal decoder-only model======...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                        latent text-to-image diffusion model, fine-tuning   \n",
       "sentence-transformers/all-MiniLM-L6-v2         sentence-transformers model, 384 dimensional v...   \n",
       "openai-community/gpt2                                                                        NaN   \n",
       "stabilityai/sdxl-turbo                         fast generative text-to-image model, synthesiz...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 Llama 2, 7 billion to 70 billion parameters, f...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                        Transformer-based architecture with self-atten...   \n",
       "ByteDance/SDXL-Lightning                       SDXL-Lightning, lightning-fast text-to-image g...   \n",
       "stabilityai/stable-diffusion-2                                                               NaN   \n",
       "google-bert/bert-base-uncased                                                                NaN   \n",
       "coqui/XTTS-v2                                  Architectural improvements for speaker conditi...   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       auto-regressive language model, optimized tran...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    StableDiffusionXLImg2ImgPipeline, torch.compil...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0          trained on 768x768px images, aspect ratios, ph...   \n",
       "CohereForAI/c4ai-command-r-plus                auto-regressive language model, optimized tran...   \n",
       "mattshumer/Reflection-Llama-3.1-70B            Llama 3.1 70B, special tokens, Reflection Llam...   \n",
       "mistralai/Mixtral-8x7B-v0.1                          AutoModelForCausalLM, use_flash_attention_2   \n",
       "openai/whisper-large-v2                        Transformer based encoder-decoder model, seque...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Phi-3 Mini-128K-Instruct, 3.8B parameters, den...   \n",
       "HuggingFaceH4/zephyr-7b-beta                                                                 NaN   \n",
       "gsdf/Counterfeit-V2.5                                                          anime-style model   \n",
       "\n",
       "                                                                              task_raw_paragraph  \\\n",
       "bigcode/starcoder                              Intended use==========The model was trained on...   \n",
       "CompVis/stable-diffusion-v-1-4-original        Model Details==========Developed by: Robin Rom...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Tool use with transformers==========LLaMA-3.1 ...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  Stable Video Diffusion Image-to-Video Model Ca...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              Direct Use==========Research on large language...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Usage (Sentence-Transformers)==========Using t...   \n",
       "openai-community/gpt2                          How to use==========You can use this model dir...   \n",
       "stabilityai/sdxl-turbo                         Out-of-Scope Use==========The model was not tr...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                                                                      NaN   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                 Model Details==========Developed by: Robin Rom...   \n",
       "google-bert/bert-base-uncased                  Intended uses & limitations==========You can u...   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                                                                     NaN   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    Model Description==========Developed by: Stabi...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                Grounded Generation and RAG Capabilities:=====...   \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        Usage==========To transcribe audio samples, th...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Benchmarks==========We report the results unde...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   Evaluation results==========normalized accurac...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                                            task  \\\n",
       "bigcode/starcoder                              technical assistant==========Fill-in-the-middl...   \n",
       "CompVis/stable-diffusion-v-1-4-original        text-to-image generation==========generate con...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               tool use with transformers, LLaMA-3.1 supports...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  Image-to-Video generation==========out-of-scop...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              foundation for further specialization, finetun...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Using sentence-transformers for embeddings gen...   \n",
       "openai-community/gpt2                          text generation, get the features of a given text   \n",
       "stabilityai/sdxl-turbo                         generate content, not trained to be factual, t...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                                                                      NaN   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                 generate and modify images based on text promp...   \n",
       "google-bert/bert-base-uncased                  masked language modeling, next sentence predic...   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                                                                     NaN   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    generate and modify images based on text promp...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                grounded summarization, Retrieval Augmented Ge...   \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        transcription or translation, speech recogniti...   \n",
       "microsoft/Phi-3-mini-128k-instruct             model's reasoning ability, common sense reason...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   normalized accuracy on AI2 Reasoning Challenge...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                          datasets_raw_paragraph  \\\n",
       "bigcode/starcoder                              Attribution & Other Requirements==========The ...   \n",
       "CompVis/stable-diffusion-v-1-4-original        Bias==========While the capabilities of image ...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Training Data==========Overview: Llama 3.1 was...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt                                                NaN   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              Bias, Risks, and Limitations==========Falcon-4...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Training data==========We use the concatenatio...   \n",
       "openai-community/gpt2                          Limitations and bias==========The training dat...   \n",
       "stabilityai/sdxl-turbo                                                                       NaN   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 Training Data==========Overview Llama 2 was pr...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                        Dataset Limitations==========Like all language...   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                 Bias==========While the capabilities of image ...   \n",
       "google-bert/bert-base-uncased                  Training data==========The BERT model was pret...   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       Training Data==========Overview Llama 2 was pr...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN   \n",
       "mattshumer/Reflection-Llama-3.1-70B            Dataset / Report==========Both the dataset and...   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        Evaluation==========This code snippet shows ho...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Datasets==========Our training data includes a...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   Training and evaluation data==========During D...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                                        datasets  \\\n",
       "bigcode/starcoder                              pretraining dataset, filtered for permissive l...   \n",
       "CompVis/stable-diffusion-v-1-4-original        LAION-2B(en), insufficient representation of n...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               ~15 trillion tokens, publicly available source...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt                                                NaN   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              English, German, Spanish, French, Italian, Por...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         multiple datasets, S2ORC, WikiAnswers, PAQ, St...   \n",
       "openai-community/gpt2                          unfiltered content from the internet, training...   \n",
       "stabilityai/sdxl-turbo                                                                       NaN   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 pretrained on 2 trillion tokens, publicly avai...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                        training corpuses, The Pile, GPT-J's pre-train...   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                 LAION-2B(en), insufficient representation of n...   \n",
       "google-bert/bert-base-uncased                                      BookCorpus, English Wikipedia   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       pretrained on 2 trillion tokens, fine-tuning d...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN   \n",
       "mattshumer/Reflection-Llama-3.1-70B                       dataset, report, Reflection 405B model   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        LibriSpeech test-clean==========680,000 hours ...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Publicly available documents, high-quality edu...   \n",
       "HuggingFaceH4/zephyr-7b-beta                                                      evaluation set   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                        base_model_raw_paragraph  \\\n",
       "bigcode/starcoder                              Hardware==========GPUs: 512 Tesla A100 \\n     ...   \n",
       "CompVis/stable-diffusion-v-1-4-original        Download the weights==========These weights ar...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Use with  llama==========Please, follow the in...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  How to Get Started with the Model==========Che...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2             Limitations==========The Mistral 7B Instruct m...   \n",
       "tiiuae/falcon-40b                              Why use Falcon-40B?==========It is the best op...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Pre-training==========We use the pretrained   ...   \n",
       "openai-community/gpt2                          GPT-2==========Test the whole generation capab...   \n",
       "stabilityai/sdxl-turbo                                                                       NaN   \n",
       "xai-org/grok-1                                 Grok-1==========This repository contains the w...   \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN   \n",
       "THUDM/chatglm2-6b                              介绍==========ChatGLM2-6B 是开源中英双语对话模型  ChatGLM-6...   \n",
       "databricks/dolly-v2-12b                        Summary==========Databricks'  dolly-v2-12b , a...   \n",
       "ByteDance/SDXL-Lightning                       Demos==========Generate with all configuration...   \n",
       "stabilityai/stable-diffusion-2                 Stable Diffusion v2 Model Card==========This m...   \n",
       "google-bert/bert-base-uncased                  BERT base model (uncased)==========Pretrained ...   \n",
       "coqui/XTTS-v2                                  ⓍTTS==========ⓍTTS is a Voice generation model...   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       Llama 2==========Llama 2 is a collection of pr...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    Model==========SDXL  consists of an  ensemble ...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0          CKPT==========Download dreamlike-photoreal-2.0...   \n",
       "CohereForAI/c4ai-command-r-plus                Model Card for C4AI Command R+==========🚨 This...   \n",
       "mattshumer/Reflection-Llama-3.1-70B            Reflection Llama-3.1 70B==========| IMPORTANT ...   \n",
       "mistralai/Mixtral-8x7B-v0.1                    Model Card for Mixtral-8x7B==========The Mixtr...   \n",
       "openai/whisper-large-v2                        Whisper==========Whisper is a pre-trained mode...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Model Summary==========The Phi-3-Mini-128K-Ins...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   Model Card for Zephyr 7B β==========Zephyr is ...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                                      base_model  \\\n",
       "bigcode/starcoder                              GPUs: 512 Tesla A100, Training time: 24 days, ...   \n",
       "CompVis/stable-diffusion-v-1-4-original        original CompVis Stable Diffusion codebase====...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               llama, Meta-Llama-3.1-8B-Instruct==========Lla...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt                    Stability-AI, generative-models   \n",
       "mistralai/Mistral-7B-Instruct-v0.2             Mistral 7B Instruct model, easily fine-tuned, ...   \n",
       "tiiuae/falcon-40b                              Falcon-40B, best open-source model, outperform...   \n",
       "black-forest-labs/FLUX.1-schnell                                                             NaN   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2                pretrained nreimers/MiniLM-L6-H384-uncased   \n",
       "openai-community/gpt2                          GPT-2, pretrained model on English language us...   \n",
       "stabilityai/sdxl-turbo                                                                       NaN   \n",
       "xai-org/grok-1                                                         Grok-1 open-weights model   \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN   \n",
       "THUDM/chatglm2-6b                              ChatGLM2-6B, base model, GLM, pre-training, hu...   \n",
       "databricks/dolly-v2-12b                        pythia-12b, pythia-6.9b, pythia-2.8b==========...   \n",
       "ByteDance/SDXL-Lightning                       base model, 12 layers, Transformer architectur...   \n",
       "stabilityai/stable-diffusion-2                 Stable Diffusion v2, stable-diffusion-2-base, ...   \n",
       "google-bert/bert-base-uncased                  BERT base model (uncased), pretrained model on...   \n",
       "coqui/XTTS-v2                                  Voice generation model, clone voices, differen...   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       Llama 2, pretrained model, 7B parameters======...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0    SDXL base model, ensemble of experts, pipeline...   \n",
       "dreamlike-art/dreamlike-photoreal-2.0          dreamlike-photoreal-2.0==========Stable Diffus...   \n",
       "CohereForAI/c4ai-command-r-plus                non-quantized version of C4AI Command R+======...   \n",
       "mattshumer/Reflection-Llama-3.1-70B            Reflection Llama-3.1 70B, open-source LLM=====...   \n",
       "mistralai/Mixtral-8x7B-v0.1                    Mixtral-8x7B, Large Language Model, Sparse Mix...   \n",
       "openai/whisper-large-v2                        pre-trained model for automatic speech recogni...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Phi-3-Mini-128K-Instruct, 3.8 billion-paramete...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   mistralai/Mistral-7B-v0.1==========7B paramete...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                             users_raw_paragraph  \\\n",
       "bigcode/starcoder                                                                            NaN   \n",
       "CompVis/stable-diffusion-v-1-4-original        Direct Use==========The model is intended for ...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Intended Use==========Intended Use Cases Llama...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  Direct Use==========The model is intended for ...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              Out-of-Scope Use==========Production use witho...   \n",
       "black-forest-labs/FLUX.1-schnell               Usage==========We provide a reference implemen...   \n",
       "hakurei/waifu-diffusion                        Downstream Uses==========This model can be use...   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Intended uses==========Our model is intended t...   \n",
       "openai-community/gpt2                          Intended uses & limitations==========You can u...   \n",
       "stabilityai/sdxl-turbo                         Direct Use==========The model is intended for ...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 Intended Use==========Intended Use Cases Llama...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                                                                      NaN   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                                                               NaN   \n",
       "google-bert/bert-base-uncased                                                                NaN   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       Intended Use==========Intended Use Cases Llama...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN   \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        Evaluated Use==========The primary intended us...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Intended Uses==========Primary use cases======...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   Intended uses & limitations==========The model...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                                           users  \\\n",
       "bigcode/starcoder                                                                            NaN   \n",
       "CompVis/stable-diffusion-v-1-4-original        research purposes, safe deployment of models, ...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               commercial and research use, assistant-like ch...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt  non-commercial, research, commercial, generati...   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              production use, assessment of risks, mitigatio...   \n",
       "black-forest-labs/FLUX.1-schnell               Developers and creatives looking to build on t...   \n",
       "hakurei/waifu-diffusion                         entertainment purposes, generative art assistant   \n",
       "sentence-transformers/all-MiniLM-L6-v2         sentence and short paragraph encoder, informat...   \n",
       "openai-community/gpt2                          text generation, fine-tune, downstream task, m...   \n",
       "stabilityai/sdxl-turbo                         non-commercial, commercial, research, generati...   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                 commercial and research use, assistant-like ch...   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                                                                      NaN   \n",
       "ByteDance/SDXL-Lightning                                                                     NaN   \n",
       "stabilityai/stable-diffusion-2                                                               NaN   \n",
       "google-bert/bert-base-uncased                                                                NaN   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                       commercial and research use, assistant-like ch...   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN   \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        AI researchers, ASR solution for developers, E...   \n",
       "microsoft/Phi-3-mini-128k-instruct             commercial and research use, English, Memory/c...   \n",
       "HuggingFaceH4/zephyr-7b-beta                                            chat, demo, capabilities   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                          approach_raw_paragraph  \\\n",
       "bigcode/starcoder                                                                            NaN   \n",
       "CompVis/stable-diffusion-v-1-4-original        Evaluation Results==========Evaluations with d...   \n",
       "meta-llama/Llama-3.1-8B-Instruct               Hardware and Software==========Training Factor...   \n",
       "stabilityai/stable-video-diffusion-img2vid-xt                                                NaN   \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN   \n",
       "tiiuae/falcon-40b                              Training Procedure==========Falcon-40B was tra...   \n",
       "black-forest-labs/FLUX.1-schnell               Key Features==========Cutting-edge output qual...   \n",
       "hakurei/waifu-diffusion                                                                      NaN   \n",
       "sentence-transformers/all-MiniLM-L6-v2         Fine-tuning==========We fine-tune the model us...   \n",
       "openai-community/gpt2                                                                        NaN   \n",
       "stabilityai/sdxl-turbo                                                                       NaN   \n",
       "xai-org/grok-1                                                                               NaN   \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN   \n",
       "THUDM/chatglm2-6b                                                                            NaN   \n",
       "databricks/dolly-v2-12b                                                                      NaN   \n",
       "ByteDance/SDXL-Lightning                       ComfyUI Usage==========Please always use the c...   \n",
       "stabilityai/stable-diffusion-2                 Evaluation Results==========Evaluations with d...   \n",
       "google-bert/bert-base-uncased                  Preprocessing==========The texts are lowercase...   \n",
       "coqui/XTTS-v2                                                                                NaN   \n",
       "lllyasviel/sd_control_collection                                                             NaN   \n",
       "meta-llama/Llama-2-7b-hf                                                                     NaN   \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN   \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN   \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN   \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN   \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN   \n",
       "openai/whisper-large-v2                        Fine-Tuning==========The pre-trained Whisper m...   \n",
       "microsoft/Phi-3-mini-128k-instruct             Fine-tuning==========A basic example of multi-...   \n",
       "HuggingFaceH4/zephyr-7b-beta                   Training hyperparameters==========The followin...   \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN   \n",
       "\n",
       "                                                                                        approach  \n",
       "bigcode/starcoder                                                                            NaN  \n",
       "CompVis/stable-diffusion-v-1-4-original        classifier-free guidance scales, 50 PLMS sampl...  \n",
       "meta-llama/Llama-3.1-8B-Instruct               custom training libraries, Meta's custom built...  \n",
       "stabilityai/stable-video-diffusion-img2vid-xt                                                NaN  \n",
       "mistralai/Mistral-7B-Instruct-v0.2                                                           NaN  \n",
       "tiiuae/falcon-40b                              Falcon-40B, 3D parallelism strategy, TP=8, PP=...  \n",
       "black-forest-labs/FLUX.1-schnell               latent adversarial diffusion distillation, FLUX.1  \n",
       "hakurei/waifu-diffusion                                                                      NaN  \n",
       "sentence-transformers/all-MiniLM-L6-v2         fine-tuning, contrastive objective, cosine sim...  \n",
       "openai-community/gpt2                                                                        NaN  \n",
       "stabilityai/sdxl-turbo                                                                       NaN  \n",
       "xai-org/grok-1                                                                               NaN  \n",
       "meta-llama/Llama-2-70b-chat-hf                                                               NaN  \n",
       "THUDM/chatglm2-6b                                                                            NaN  \n",
       "databricks/dolly-v2-12b                                                                      NaN  \n",
       "ByteDance/SDXL-Lightning                       ComfyUI, checkpoint, inference steps, Euler sa...  \n",
       "stabilityai/stable-diffusion-2                 classifier-free guidance scales, 50 steps DDIM...  \n",
       "google-bert/bert-base-uncased                  texts are lowercased and tokenized using WordP...  \n",
       "coqui/XTTS-v2                                                                                NaN  \n",
       "lllyasviel/sd_control_collection                                                             NaN  \n",
       "meta-llama/Llama-2-7b-hf                                                                     NaN  \n",
       "stabilityai/stable-diffusion-xl-refiner-1.0                                                  NaN  \n",
       "dreamlike-art/dreamlike-photoreal-2.0                                                        NaN  \n",
       "CohereForAI/c4ai-command-r-plus                                                              NaN  \n",
       "mattshumer/Reflection-Llama-3.1-70B                                                          NaN  \n",
       "mistralai/Mixtral-8x7B-v0.1                                                                  NaN  \n",
       "openai/whisper-large-v2                        Fine-tuning, Whisper model, generalise, langua...  \n",
       "microsoft/Phi-3-mini-128k-instruct             multi-GPUs supervised fine-tuning (SFT), TRL a...  \n",
       "HuggingFaceH4/zephyr-7b-beta                   hyperparameters, learning_rate, train_batch_si...  \n",
       "gsdf/Counterfeit-V2.5                                                                        NaN  "
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Build a DataFrame from the labeling results and transpose it,\n",
     "# so each model repo id becomes a row and each extracted field a column\n",
     "data_label_df = pd.DataFrame(data_label)\n",
     "data_label_df = data_label_df.transpose()\n",
     "data_label_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Export the labeled slice to Excel; stard_idx and end_idx are set in earlier cells.\n",
     "# Note: index=False drops the model repo ids (the DataFrame index) from the file.\n",
     "data_label_df.to_excel(f'./data_label/data_label_{stard_idx}_{end_idx}.xlsx', index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Add fields  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>modelArchitecture_raw_paragraph</th>\n",
       "      <th>modelArchitecture</th>\n",
       "      <th>task_raw_paragraph</th>\n",
       "      <th>task</th>\n",
       "      <th>datasets_raw_paragraph</th>\n",
       "      <th>datasets</th>\n",
       "      <th>base_model_raw_paragraph</th>\n",
       "      <th>base_model</th>\n",
       "      <th>users_raw_paragraph</th>\n",
       "      <th>users</th>\n",
       "      <th>...</th>\n",
       "      <th>model_homepage_source_url</th>\n",
       "      <th>model_code_url</th>\n",
       "      <th>model_code_url_raw_paragraph</th>\n",
       "      <th>model_code_source_url</th>\n",
       "      <th>model_other_url</th>\n",
       "      <th>model_card_complete</th>\n",
       "      <th>open_access</th>\n",
       "      <th>properties</th>\n",
       "      <th>performanceMetrics</th>\n",
       "      <th>known_info</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Model Summary==========The StarCoder models ar...</td>\n",
       "      <td>15.5B parameter models, Multi Query Attention,...</td>\n",
       "      <td>Intended use==========The model was trained on...</td>\n",
       "      <td>technical assistant==========Fill-in-the-middl...</td>\n",
       "      <td>Attribution &amp; Other Requirements==========The ...</td>\n",
       "      <td>pretraining dataset, filtered for permissive l...</td>\n",
       "      <td>Hardware==========GPUs: 512 Tesla A100 \\n     ...</td>\n",
       "      <td>GPUs: 512 Tesla A100, Training time: 24 days, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Details==========Developed by: Robin Rom...</td>\n",
       "      <td>text-to-image generation==========generate con...</td>\n",
       "      <td>Bias==========While the capabilities of image ...</td>\n",
       "      <td>LAION-2B(en), insufficient representation of n...</td>\n",
       "      <td>Download the weights==========These weights ar...</td>\n",
       "      <td>original CompVis Stable Diffusion codebase====...</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>research purposes, safe deployment of models, ...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Model Information==========The Meta Llama 3.1 ...</td>\n",
       "      <td>Llama 3.1, auto-regressive language model, opt...</td>\n",
       "      <td>Tool use with transformers==========LLaMA-3.1 ...</td>\n",
       "      <td>tool use with transformers, LLaMA-3.1 supports...</td>\n",
       "      <td>Training Data==========Overview: Llama 3.1 was...</td>\n",
       "      <td>~15 trillion tokens, publicly available source...</td>\n",
       "      <td>Use with  llama==========Please, follow the in...</td>\n",
       "      <td>llama, Meta-Llama-3.1-8B-Instruct==========Lla...</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Model Description==========(SVD) Image-to-Vide...</td>\n",
       "      <td>latent diffusion model, 25 frames, resolution ...</td>\n",
       "      <td>Stable Video Diffusion Image-to-Video Model Ca...</td>\n",
       "      <td>Image-to-Video generation==========out-of-scop...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>How to Get Started with the Model==========Che...</td>\n",
       "      <td>Stability-AI, generative-models</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>non-commercial, research, commercial, generati...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Encode and Decode with  mistral_common========...</td>\n",
       "      <td>MistralTokenizer, ChatCompletionRequest=======...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Limitations==========The Mistral 7B Instruct m...</td>\n",
       "      <td>Mistral 7B Instruct model, easily fine-tuned, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>🚀 Falcon-40B==========Falcon-40B is a 40B para...</td>\n",
       "      <td>40B parameters causal decoder-only model======...</td>\n",
       "      <td>Direct Use==========Research on large language...</td>\n",
       "      <td>foundation for further specialization, finetun...</td>\n",
       "      <td>Bias, Risks, and Limitations==========Falcon-4...</td>\n",
       "      <td>English, German, Spanish, French, Italian, Por...</td>\n",
       "      <td>Why use Falcon-40B?==========It is the best op...</td>\n",
       "      <td>Falcon-40B, best open-source model, outperform...</td>\n",
       "      <td>Out-of-Scope Use==========Production use witho...</td>\n",
       "      <td>production use, assessment of risks, mitigatio...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Usage==========We provide a reference implemen...</td>\n",
       "      <td>Developers and creatives looking to build on t...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>waifu-diffusion v1.4 - Diffusion for Weebs====...</td>\n",
       "      <td>latent text-to-image diffusion model, fine-tuning</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Downstream Uses==========This model can be use...</td>\n",
       "      <td>entertainment purposes, generative art assistant</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>all-MiniLM-L6-v2==========This is a  sentence-...</td>\n",
       "      <td>sentence-transformers model, 384 dimensional v...</td>\n",
       "      <td>Usage (Sentence-Transformers)==========Using t...</td>\n",
       "      <td>Using sentence-transformers for embeddings gen...</td>\n",
       "      <td>Training data==========We use the concatenatio...</td>\n",
       "      <td>multiple datasets, S2ORC, WikiAnswers, PAQ, St...</td>\n",
       "      <td>Pre-training==========We use the pretrained   ...</td>\n",
       "      <td>pretrained nreimers/MiniLM-L6-H384-uncased</td>\n",
       "      <td>Intended uses==========Our model is intended t...</td>\n",
       "      <td>sentence and short paragraph encoder, informat...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>How to use==========You can use this model dir...</td>\n",
       "      <td>text generation, get the features of a given text</td>\n",
       "      <td>Limitations and bias==========The training dat...</td>\n",
       "      <td>unfiltered content from the internet, training...</td>\n",
       "      <td>GPT-2==========Test the whole generation capab...</td>\n",
       "      <td>GPT-2, pretrained model on English language us...</td>\n",
       "      <td>Intended uses &amp; limitations==========You can u...</td>\n",
       "      <td>text generation, fine-tune, downstream task, m...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>SDXL-Turbo Model Card==========SDXL-Turbo is a...</td>\n",
       "      <td>fast generative text-to-image model, synthesiz...</td>\n",
       "      <td>Out-of-Scope Use==========The model was not tr...</td>\n",
       "      <td>generate content, not trained to be factual, t...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Direct Use==========The model is intended for ...</td>\n",
       "      <td>non-commercial, commercial, research, generati...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Grok-1==========This repository contains the w...</td>\n",
       "      <td>Grok-1 open-weights model</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>Llama 2==========Llama 2 is a collection of pr...</td>\n",
       "      <td>Llama 2, 7 billion to 70 billion parameters, f...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Training Data==========Overview Llama 2 was pr...</td>\n",
       "      <td>pretrained on 2 trillion tokens, publicly avai...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>介绍==========ChatGLM2-6B 是开源中英双语对话模型  ChatGLM-6...</td>\n",
       "      <td>ChatGLM2-6B, base model, GLM, pre-training, hu...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>14</th>\n",
       "      <td>Happy Hacking!==========Downloads last month 4...</td>\n",
       "      <td>Transformer-based architecture with self-atten...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Dataset Limitations==========Like all language...</td>\n",
       "      <td>training corpuses, The Pile, GPT-J's pre-train...</td>\n",
       "      <td>Summary==========Databricks'  dolly-v2-12b , a...</td>\n",
       "      <td>pythia-12b, pythia-6.9b, pythia-2.8b==========...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15</th>\n",
       "      <td>SDXL-Lightning==========SDXL-Lightning is a li...</td>\n",
       "      <td>SDXL-Lightning, lightning-fast text-to-image g...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Demos==========Generate with all configuration...</td>\n",
       "      <td>base model, 12 layers, Transformer architectur...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>16</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Details==========Developed by: Robin Rom...</td>\n",
       "      <td>generate and modify images based on text promp...</td>\n",
       "      <td>Bias==========While the capabilities of image ...</td>\n",
       "      <td>LAION-2B(en), insufficient representation of n...</td>\n",
       "      <td>Stable Diffusion v2 Model Card==========This m...</td>\n",
       "      <td>Stable Diffusion v2, stable-diffusion-2-base, ...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>17</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Intended uses &amp; limitations==========You can u...</td>\n",
       "      <td>masked language modeling, next sentence predic...</td>\n",
       "      <td>Training data==========The BERT model was pret...</td>\n",
       "      <td>BookCorpus, English Wikipedia</td>\n",
       "      <td>BERT base model (uncased)==========Pretrained ...</td>\n",
       "      <td>BERT base model (uncased), pretrained model on...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>18</th>\n",
       "      <td>Updates over XTTS-v1==========2 new languages;...</td>\n",
       "      <td>Architectural improvements for speaker conditi...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>ⓍTTS==========ⓍTTS is a Voice generation model...</td>\n",
       "      <td>Voice generation model, clone voices, differen...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>19</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>20</th>\n",
       "      <td>Model Details==========Note: Use of this model...</td>\n",
       "      <td>auto-regressive language model, optimized tran...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Training Data==========Overview Llama 2 was pr...</td>\n",
       "      <td>pretrained on 2 trillion tokens, fine-tuning d...</td>\n",
       "      <td>Llama 2==========Llama 2 is a collection of pr...</td>\n",
       "      <td>Llama 2, pretrained model, 7B parameters======...</td>\n",
       "      <td>Intended Use==========Intended Use Cases Llama...</td>\n",
       "      <td>commercial and research use, assistant-like ch...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>21</th>\n",
       "      <td>🧨 Diffusers==========Make sure to upgrade diff...</td>\n",
       "      <td>StableDiffusionXLImg2ImgPipeline, torch.compil...</td>\n",
       "      <td>Model Description==========Developed by: Stabi...</td>\n",
       "      <td>generate and modify images based on text promp...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model==========SDXL  consists of an  ensemble ...</td>\n",
       "      <td>SDXL base model, ensemble of experts, pipeline...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>22</th>\n",
       "      <td>If you want to use dreamlike models on your we...</td>\n",
       "      <td>trained on 768x768px images, aspect ratios, ph...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>CKPT==========Download dreamlike-photoreal-2.0...</td>\n",
       "      <td>dreamlike-photoreal-2.0==========Stable Diffus...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>23</th>\n",
       "      <td>Model Details==========Input: Models input tex...</td>\n",
       "      <td>auto-regressive language model, optimized tran...</td>\n",
       "      <td>Grounded Generation and RAG Capabilities:=====...</td>\n",
       "      <td>grounded summarization, Retrieval Augmented Ge...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Card for C4AI Command R+==========🚨 This...</td>\n",
       "      <td>non-quantized version of C4AI Command R+======...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>24</th>\n",
       "      <td>Benchmarks==========Trained from Llama 3.1 70B...</td>\n",
       "      <td>Llama 3.1 70B, special tokens, Reflection Llam...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Dataset / Report==========Both the dataset and...</td>\n",
       "      <td>dataset, report, Reflection 405B model</td>\n",
       "      <td>Reflection Llama-3.1 70B==========| IMPORTANT ...</td>\n",
       "      <td>Reflection Llama-3.1 70B, open-source LLM=====...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>Click to expand==========+ import torch\\nfrom ...</td>\n",
       "      <td>AutoModelForCausalLM, use_flash_attention_2</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Model Card for Mixtral-8x7B==========The Mixtr...</td>\n",
       "      <td>Mixtral-8x7B, Large Language Model, Sparse Mix...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>Model details==========Whisper is a Transforme...</td>\n",
       "      <td>Transformer based encoder-decoder model, seque...</td>\n",
       "      <td>Usage==========To transcribe audio samples, th...</td>\n",
       "      <td>transcription or translation, speech recogniti...</td>\n",
       "      <td>Evaluation==========This code snippet shows ho...</td>\n",
       "      <td>LibriSpeech test-clean==========680,000 hours ...</td>\n",
       "      <td>Whisper==========Whisper is a pre-trained mode...</td>\n",
       "      <td>pre-trained model for automatic speech recogni...</td>\n",
       "      <td>Evaluated Use==========The primary intended us...</td>\n",
       "      <td>AI researchers, ASR solution for developers, E...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>27</th>\n",
       "      <td>Model==========Architecture: Phi-3 Mini-128K-I...</td>\n",
       "      <td>Phi-3 Mini-128K-Instruct, 3.8B parameters, den...</td>\n",
       "      <td>Benchmarks==========We report the results unde...</td>\n",
       "      <td>model's reasoning ability, common sense reason...</td>\n",
       "      <td>Datasets==========Our training data includes a...</td>\n",
       "      <td>Publicly available documents, high-quality edu...</td>\n",
       "      <td>Model Summary==========The Phi-3-Mini-128K-Ins...</td>\n",
       "      <td>Phi-3-Mini-128K-Instruct, 3.8 billion-paramete...</td>\n",
       "      <td>Intended Uses==========Primary use cases======...</td>\n",
       "      <td>commercial and research use, English, Memory/c...</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>28</th>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>Evaluation results==========normalized accurac...</td>\n",
       "      <td>normalized accuracy on AI2 Reasoning Challenge...</td>\n",
       "      <td>Training and evaluation data==========During D...</td>\n",
       "      <td>evaluation set</td>\n",
       "      <td>Model Card for Zephyr 7B β==========Zephyr is ...</td>\n",
       "      <td>mistralai/Mistral-7B-v0.1==========7B paramete...</td>\n",
       "      <td>Intended uses &amp; limitations==========The model...</td>\n",
       "      <td>chat, demo, capabilities</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>Update==========V2.5 has been updated for ease...</td>\n",
       "      <td>anime-style model</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>...</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "      <td>None</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>30 rows × 27 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "                      modelArchitecture_raw_paragraph  \\\n",
       "0   Model Summary==========The StarCoder models ar...   \n",
       "1                                                 NaN   \n",
       "2   Model Information==========The Meta Llama 3.1 ...   \n",
       "3   Model Description==========(SVD) Image-to-Vide...   \n",
       "4   Encode and Decode with  mistral_common========...   \n",
       "5   🚀 Falcon-40B==========Falcon-40B is a 40B para...   \n",
       "6                                                 NaN   \n",
       "7   waifu-diffusion v1.4 - Diffusion for Weebs====...   \n",
       "8   all-MiniLM-L6-v2==========This is a  sentence-...   \n",
       "9                                                 NaN   \n",
       "10  SDXL-Turbo Model Card==========SDXL-Turbo is a...   \n",
       "11                                                NaN   \n",
       "12  Llama 2==========Llama 2 is a collection of pr...   \n",
       "13                                                NaN   \n",
       "14  Happy Hacking!==========Downloads last month 4...   \n",
       "15  SDXL-Lightning==========SDXL-Lightning is a li...   \n",
       "16                                                NaN   \n",
       "17                                                NaN   \n",
       "18  Updates over XTTS-v1==========2 new languages;...   \n",
       "19                                                NaN   \n",
       "20  Model Details==========Note: Use of this model...   \n",
       "21  🧨 Diffusers==========Make sure to upgrade diff...   \n",
       "22  If you want to use dreamlike models on your we...   \n",
       "23  Model Details==========Input: Models input tex...   \n",
       "24  Benchmarks==========Trained from Llama 3.1 70B...   \n",
       "25  Click to expand==========+ import torch\\nfrom ...   \n",
       "26  Model details==========Whisper is a Transforme...   \n",
       "27  Model==========Architecture: Phi-3 Mini-128K-I...   \n",
       "28                                                NaN   \n",
       "29  Update==========V2.5 has been updated for ease...   \n",
       "\n",
       "                                    modelArchitecture  \\\n",
       "0   15.5B parameter models, Multi Query Attention,...   \n",
       "1                                                 NaN   \n",
       "2   Llama 3.1, auto-regressive language model, opt...   \n",
       "3   latent diffusion model, 25 frames, resolution ...   \n",
       "4   MistralTokenizer, ChatCompletionRequest=======...   \n",
       "5   40B parameters causal decoder-only model======...   \n",
       "6                                                 NaN   \n",
       "7   latent text-to-image diffusion model, fine-tuning   \n",
       "8   sentence-transformers model, 384 dimensional v...   \n",
       "9                                                 NaN   \n",
       "10  fast generative text-to-image model, synthesiz...   \n",
       "11                                                NaN   \n",
       "12  Llama 2, 7 billion to 70 billion parameters, f...   \n",
       "13                                                NaN   \n",
       "14  Transformer-based architecture with self-atten...   \n",
       "15  SDXL-Lightning, lightning-fast text-to-image g...   \n",
       "16                                                NaN   \n",
       "17                                                NaN   \n",
       "18  Architectural improvements for speaker conditi...   \n",
       "19                                                NaN   \n",
       "20  auto-regressive language model, optimized tran...   \n",
       "21  StableDiffusionXLImg2ImgPipeline, torch.compil...   \n",
       "22  trained on 768x768px images, aspect ratios, ph...   \n",
       "23  auto-regressive language model, optimized tran...   \n",
       "24  Llama 3.1 70B, special tokens, Reflection Llam...   \n",
       "25        AutoModelForCausalLM, use_flash_attention_2   \n",
       "26  Transformer based encoder-decoder model, seque...   \n",
       "27  Phi-3 Mini-128K-Instruct, 3.8B parameters, den...   \n",
       "28                                                NaN   \n",
       "29                                  anime-style model   \n",
       "\n",
       "                                   task_raw_paragraph  \\\n",
       "0   Intended use==========The model was trained on...   \n",
       "1   Model Details==========Developed by: Robin Rom...   \n",
       "2   Tool use with transformers==========LLaMA-3.1 ...   \n",
       "3   Stable Video Diffusion Image-to-Video Model Ca...   \n",
       "4                                                 NaN   \n",
       "5   Direct Use==========Research on large language...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8   Usage (Sentence-Transformers)==========Using t...   \n",
       "9   How to use==========You can use this model dir...   \n",
       "10  Out-of-Scope Use==========The model was not tr...   \n",
       "11                                                NaN   \n",
       "12                                                NaN   \n",
       "13                                                NaN   \n",
       "14                                                NaN   \n",
       "15                                                NaN   \n",
       "16  Model Details==========Developed by: Robin Rom...   \n",
       "17  Intended uses & limitations==========You can u...   \n",
       "18                                                NaN   \n",
       "19                                                NaN   \n",
       "20                                                NaN   \n",
       "21  Model Description==========Developed by: Stabi...   \n",
       "22                                                NaN   \n",
       "23  Grounded Generation and RAG Capabilities:=====...   \n",
       "24                                                NaN   \n",
       "25                                                NaN   \n",
       "26  Usage==========To transcribe audio samples, th...   \n",
       "27  Benchmarks==========We report the results unde...   \n",
       "28  Evaluation results==========normalized accurac...   \n",
       "29                                                NaN   \n",
       "\n",
       "                                                 task  \\\n",
       "0   technical assistant==========Fill-in-the-middl...   \n",
       "1   text-to-image generation==========generate con...   \n",
       "2   tool use with transformers, LLaMA-3.1 supports...   \n",
       "3   Image-to-Video generation==========out-of-scop...   \n",
       "4                                                 NaN   \n",
       "5   foundation for further specialization, finetun...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8   Using sentence-transformers for embeddings gen...   \n",
       "9   text generation, get the features of a given text   \n",
       "10  generate content, not trained to be factual, t...   \n",
       "11                                                NaN   \n",
       "12                                                NaN   \n",
       "13                                                NaN   \n",
       "14                                                NaN   \n",
       "15                                                NaN   \n",
       "16  generate and modify images based on text promp...   \n",
       "17  masked language modeling, next sentence predic...   \n",
       "18                                                NaN   \n",
       "19                                                NaN   \n",
       "20                                                NaN   \n",
       "21  generate and modify images based on text promp...   \n",
       "22                                                NaN   \n",
       "23  grounded summarization, Retrieval Augmented Ge...   \n",
       "24                                                NaN   \n",
       "25                                                NaN   \n",
       "26  transcription or translation, speech recogniti...   \n",
       "27  model's reasoning ability, common sense reason...   \n",
       "28  normalized accuracy on AI2 Reasoning Challenge...   \n",
       "29                                                NaN   \n",
       "\n",
       "                               datasets_raw_paragraph  \\\n",
       "0   Attribution & Other Requirements==========The ...   \n",
       "1   Bias==========While the capabilities of image ...   \n",
       "2   Training Data==========Overview: Llama 3.1 was...   \n",
       "3                                                 NaN   \n",
       "4                                                 NaN   \n",
       "5   Bias, Risks, and Limitations==========Falcon-4...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8   Training data==========We use the concatenatio...   \n",
       "9   Limitations and bias==========The training dat...   \n",
       "10                                                NaN   \n",
       "11                                                NaN   \n",
       "12  Training Data==========Overview Llama 2 was pr...   \n",
       "13                                                NaN   \n",
       "14  Dataset Limitations==========Like all language...   \n",
       "15                                                NaN   \n",
       "16  Bias==========While the capabilities of image ...   \n",
       "17  Training data==========The BERT model was pret...   \n",
       "18                                                NaN   \n",
       "19                                                NaN   \n",
       "20  Training Data==========Overview Llama 2 was pr...   \n",
       "21                                                NaN   \n",
       "22                                                NaN   \n",
       "23                                                NaN   \n",
       "24  Dataset / Report==========Both the dataset and...   \n",
       "25                                                NaN   \n",
       "26  Evaluation==========This code snippet shows ho...   \n",
       "27  Datasets==========Our training data includes a...   \n",
       "28  Training and evaluation data==========During D...   \n",
       "29                                                NaN   \n",
       "\n",
       "                                             datasets  \\\n",
       "0   pretraining dataset, filtered for permissive l...   \n",
       "1   LAION-2B(en), insufficient representation of n...   \n",
       "2   ~15 trillion tokens, publicly available source...   \n",
       "3                                                 NaN   \n",
       "4                                                 NaN   \n",
       "5   English, German, Spanish, French, Italian, Por...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8   multiple datasets, S2ORC, WikiAnswers, PAQ, St...   \n",
       "9   unfiltered content from the internet, training...   \n",
       "10                                                NaN   \n",
       "11                                                NaN   \n",
       "12  pretrained on 2 trillion tokens, publicly avai...   \n",
       "13                                                NaN   \n",
       "14  training corpuses, The Pile, GPT-J's pre-train...   \n",
       "15                                                NaN   \n",
       "16  LAION-2B(en), insufficient representation of n...   \n",
       "17                      BookCorpus, English Wikipedia   \n",
       "18                                                NaN   \n",
       "19                                                NaN   \n",
       "20  pretrained on 2 trillion tokens, fine-tuning d...   \n",
       "21                                                NaN   \n",
       "22                                                NaN   \n",
       "23                                                NaN   \n",
       "24             dataset, report, Reflection 405B model   \n",
       "25                                                NaN   \n",
       "26  LibriSpeech test-clean==========680,000 hours ...   \n",
       "27  Publicly available documents, high-quality edu...   \n",
       "28                                     evaluation set   \n",
       "29                                                NaN   \n",
       "\n",
       "                             base_model_raw_paragraph  \\\n",
       "0   Hardware==========GPUs: 512 Tesla A100 \\n     ...   \n",
       "1   Download the weights==========These weights ar...   \n",
       "2   Use with  llama==========Please, follow the in...   \n",
       "3   How to Get Started with the Model==========Che...   \n",
       "4   Limitations==========The Mistral 7B Instruct m...   \n",
       "5   Why use Falcon-40B?==========It is the best op...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8   Pre-training==========We use the pretrained   ...   \n",
       "9   GPT-2==========Test the whole generation capab...   \n",
       "10                                                NaN   \n",
       "11  Grok-1==========This repository contains the w...   \n",
       "12                                                NaN   \n",
       "13  介绍==========ChatGLM2-6B 是开源中英双语对话模型  ChatGLM-6...   \n",
       "14  Summary==========Databricks'  dolly-v2-12b , a...   \n",
       "15  Demos==========Generate with all configuration...   \n",
       "16  Stable Diffusion v2 Model Card==========This m...   \n",
       "17  BERT base model (uncased)==========Pretrained ...   \n",
       "18  ⓍTTS==========ⓍTTS is a Voice generation model...   \n",
       "19                                                NaN   \n",
       "20  Llama 2==========Llama 2 is a collection of pr...   \n",
       "21  Model==========SDXL  consists of an  ensemble ...   \n",
       "22  CKPT==========Download dreamlike-photoreal-2.0...   \n",
       "23  Model Card for C4AI Command R+==========🚨 This...   \n",
       "24  Reflection Llama-3.1 70B==========| IMPORTANT ...   \n",
       "25  Model Card for Mixtral-8x7B==========The Mixtr...   \n",
       "26  Whisper==========Whisper is a pre-trained mode...   \n",
       "27  Model Summary==========The Phi-3-Mini-128K-Ins...   \n",
       "28  Model Card for Zephyr 7B β==========Zephyr is ...   \n",
       "29                                                NaN   \n",
       "\n",
       "                                           base_model  \\\n",
       "0   GPUs: 512 Tesla A100, Training time: 24 days, ...   \n",
       "1   original CompVis Stable Diffusion codebase====...   \n",
       "2   llama, Meta-Llama-3.1-8B-Instruct==========Lla...   \n",
       "3                     Stability-AI, generative-models   \n",
       "4   Mistral 7B Instruct model, easily fine-tuned, ...   \n",
       "5   Falcon-40B, best open-source model, outperform...   \n",
       "6                                                 NaN   \n",
       "7                                                 NaN   \n",
       "8          pretrained nreimers/MiniLM-L6-H384-uncased   \n",
       "9   GPT-2, pretrained model on English language us...   \n",
       "10                                                NaN   \n",
       "11                          Grok-1 open-weights model   \n",
       "12                                                NaN   \n",
       "13  ChatGLM2-6B, base model, GLM, pre-training, hu...   \n",
       "14  pythia-12b, pythia-6.9b, pythia-2.8b==========...   \n",
       "15  base model, 12 layers, Transformer architectur...   \n",
       "16  Stable Diffusion v2, stable-diffusion-2-base, ...   \n",
       "17  BERT base model (uncased), pretrained model on...   \n",
       "18  Voice generation model, clone voices, differen...   \n",
       "19                                                NaN   \n",
       "20  Llama 2, pretrained model, 7B parameters======...   \n",
       "21  SDXL base model, ensemble of experts, pipeline...   \n",
       "22  dreamlike-photoreal-2.0==========Stable Diffus...   \n",
       "23  non-quantized version of C4AI Command R+======...   \n",
       "24  Reflection Llama-3.1 70B, open-source LLM=====...   \n",
       "25  Mixtral-8x7B, Large Language Model, Sparse Mix...   \n",
       "26  pre-trained model for automatic speech recogni...   \n",
       "27  Phi-3-Mini-128K-Instruct, 3.8 billion-paramete...   \n",
       "28  mistralai/Mistral-7B-v0.1==========7B paramete...   \n",
       "29                                                NaN   \n",
       "\n",
       "                                  users_raw_paragraph  \\\n",
       "0                                                 NaN   \n",
       "1   Direct Use==========The model is intended for ...   \n",
       "2   Intended Use==========Intended Use Cases Llama...   \n",
       "3   Direct Use==========The model is intended for ...   \n",
       "4                                                 NaN   \n",
       "5   Out-of-Scope Use==========Production use witho...   \n",
       "6   Usage==========We provide a reference implemen...   \n",
       "7   Downstream Uses==========This model can be use...   \n",
       "8   Intended uses==========Our model is intended t...   \n",
       "9   Intended uses & limitations==========You can u...   \n",
       "10  Direct Use==========The model is intended for ...   \n",
       "11                                                NaN   \n",
       "12  Intended Use==========Intended Use Cases Llama...   \n",
       "13                                                NaN   \n",
       "14                                                NaN   \n",
       "15                                                NaN   \n",
       "16                                                NaN   \n",
       "17                                                NaN   \n",
       "18                                                NaN   \n",
       "19                                                NaN   \n",
       "20  Intended Use==========Intended Use Cases Llama...   \n",
       "21                                                NaN   \n",
       "22                                                NaN   \n",
       "23                                                NaN   \n",
       "24                                                NaN   \n",
       "25                                                NaN   \n",
       "26  Evaluated Use==========The primary intended us...   \n",
       "27  Intended Uses==========Primary use cases======...   \n",
       "28  Intended uses & limitations==========The model...   \n",
       "29                                                NaN   \n",
       "\n",
       "                                                users  ...  \\\n",
       "0                                                 NaN  ...   \n",
       "1   research purposes, safe deployment of models, ...  ...   \n",
       "2   commercial and research use, assistant-like ch...  ...   \n",
       "3   non-commercial, research, commercial, generati...  ...   \n",
       "4                                                 NaN  ...   \n",
       "5   production use, assessment of risks, mitigatio...  ...   \n",
       "6   Developers and creatives looking to build on t...  ...   \n",
       "7    entertainment purposes, generative art assistant  ...   \n",
       "8   sentence and short paragraph encoder, informat...  ...   \n",
       "9   text generation, fine-tune, downstream task, m...  ...   \n",
       "10  non-commercial, commercial, research, generati...  ...   \n",
       "11                                                NaN  ...   \n",
       "12  commercial and research use, assistant-like ch...  ...   \n",
       "13                                                NaN  ...   \n",
       "14                                                NaN  ...   \n",
       "15                                                NaN  ...   \n",
       "16                                                NaN  ...   \n",
       "17                                                NaN  ...   \n",
       "18                                                NaN  ...   \n",
       "19                                                NaN  ...   \n",
       "20  commercial and research use, assistant-like ch...  ...   \n",
       "21                                                NaN  ...   \n",
       "22                                                NaN  ...   \n",
       "23                                                NaN  ...   \n",
       "24                                                NaN  ...   \n",
       "25                                                NaN  ...   \n",
       "26  AI researchers, ASR solution for developers, E...  ...   \n",
       "27  commercial and research use, English, Memory/c...  ...   \n",
       "28                           chat, demo, capabilities  ...   \n",
       "29                                                NaN  ...   \n",
       "\n",
       "   model_homepage_source_url model_code_url model_code_url_raw_paragraph  \\\n",
       "0                       None           None                         None   \n",
       "1                       None           None                         None   \n",
       "2                       None           None                         None   \n",
       "3                       None           None                         None   \n",
       "4                       None           None                         None   \n",
       "5                       None           None                         None   \n",
       "6                       None           None                         None   \n",
       "7                       None           None                         None   \n",
       "8                       None           None                         None   \n",
       "9                       None           None                         None   \n",
       "10                      None           None                         None   \n",
       "11                      None           None                         None   \n",
       "12                      None           None                         None   \n",
       "13                      None           None                         None   \n",
       "14                      None           None                         None   \n",
       "15                      None           None                         None   \n",
       "16                      None           None                         None   \n",
       "17                      None           None                         None   \n",
       "18                      None           None                         None   \n",
       "19                      None           None                         None   \n",
       "20                      None           None                         None   \n",
       "21                      None           None                         None   \n",
       "22                      None           None                         None   \n",
       "23                      None           None                         None   \n",
       "24                      None           None                         None   \n",
       "25                      None           None                         None   \n",
       "26                      None           None                         None   \n",
       "27                      None           None                         None   \n",
       "28                      None           None                         None   \n",
       "29                      None           None                         None   \n",
       "\n",
       "   model_code_source_url model_other_url model_card_complete open_access  \\\n",
       "0                   None            None                None        None   \n",
       "1                   None            None                None        None   \n",
       "2                   None            None                None        None   \n",
       "3                   None            None                None        None   \n",
       "4                   None            None                None        None   \n",
       "5                   None            None                None        None   \n",
       "6                   None            None                None        None   \n",
       "7                   None            None                None        None   \n",
       "8                   None            None                None        None   \n",
       "9                   None            None                None        None   \n",
       "10                  None            None                None        None   \n",
       "11                  None            None                None        None   \n",
       "12                  None            None                None        None   \n",
       "13                  None            None                None        None   \n",
       "14                  None            None                None        None   \n",
       "15                  None            None                None        None   \n",
       "16                  None            None                None        None   \n",
       "17                  None            None                None        None   \n",
       "18                  None            None                None        None   \n",
       "19                  None            None                None        None   \n",
       "20                  None            None                None        None   \n",
       "21                  None            None                None        None   \n",
       "22                  None            None                None        None   \n",
       "23                  None            None                None        None   \n",
       "24                  None            None                None        None   \n",
       "25                  None            None                None        None   \n",
       "26                  None            None                None        None   \n",
       "27                  None            None                None        None   \n",
       "28                  None            None                None        None   \n",
       "29                  None            None                None        None   \n",
       "\n",
       "   properties performanceMetrics known_info  \n",
       "0        None               None       None  \n",
       "1        None               None       None  \n",
       "2        None               None       None  \n",
       "3        None               None       None  \n",
       "4        None               None       None  \n",
       "5        None               None       None  \n",
       "6        None               None       None  \n",
       "7        None               None       None  \n",
       "8        None               None       None  \n",
       "9        None               None       None  \n",
       "10       None               None       None  \n",
       "11       None               None       None  \n",
       "12       None               None       None  \n",
       "13       None               None       None  \n",
       "14       None               None       None  \n",
       "15       None               None       None  \n",
       "16       None               None       None  \n",
       "17       None               None       None  \n",
       "18       None               None       None  \n",
       "19       None               None       None  \n",
       "20       None               None       None  \n",
       "21       None               None       None  \n",
       "22       None               None       None  \n",
       "23       None               None       None  \n",
       "24       None               None       None  \n",
       "25       None               None       None  \n",
       "26       None               None       None  \n",
       "27       None               None       None  \n",
       "28       None               None       None  \n",
       "29       None               None       None  \n",
       "\n",
       "[30 rows x 27 columns]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# import pandas as pd\n",
    "# from tqdm import tqdm\n",
    "# stard_idx, end_idx = 0, 10\n",
    "# data_label = pd.read_csv(f'./data_label/data_label_{stard_idx}_{end_idx}.csv')\n",
    "data_label = pd.read_excel(f'./data_label/data_label_{stard_idx}_{end_idx}.xlsx')\n",
    "label_list = ['number', 'model_name', 'model_description', 'model_homepage', 'model_homepage_raw_paragraph', 'model_homepage_source_url', 'model_code_url', 'model_code_url_raw_paragraph', 'model_code_source_url', 'model_other_url', 'model_card_complete', 'open_access', 'properties', 'performanceMetrics', 'known_info']\n",
    "for label in label_list:\n",
    "    data_label[label] = None\n",
    "data_label"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "30it [00:00, 88.70it/s]\n"
     ]
    }
   ],
   "source": [
    "for idx, model_id in tqdm(enumerate(model_id_list[stard_idx:end_idx])):\n",
    "    number = stard_idx + idx + 1\n",
    "    model_url = f'https://huggingface.co/{model_id}'\n",
    "    model_name = model_id.split('/')[1]\n",
    "    model_homepage, model_homepage_raw_paragraph, model_homepage_source_url = None, None, None\n",
    "    model_code_url, model_code_url_raw_paragraph, model_code_source_url = None, None, None\n",
     "    # Process the URLs collected from the model card\n",
    "    url_pd = pd.read_excel(f'./data/{model_id}/url_content.xlsx')\n",
     "    # Take the first GitHub URL as the code repository link\n",
    "    for j in range(len(url_pd)):\n",
    "        if \"github\" in url_pd[\"url\"][j]:\n",
    "            model_code_url = url_pd[\"url\"][j]\n",
    "            model_code_url_raw_paragraph = url_pd[\"raw_paragraph\"][j]\n",
    "            model_code_source_url = model_url\n",
    "            break\n",
     "    # Otherwise treat the first non-GitHub URL with a common TLD as the homepage\n",
    "    for j in range(len(url_pd)):\n",
     "        if (any(tld in url_pd[\"url\"][j] for tld in (\".com\", \".cn\", \".net\", \".org\", \".edu\"))\n",
     "                and \"github\" not in url_pd[\"url\"][j]):\n",
    "            model_homepage = url_pd[\"url\"][j]\n",
    "            model_homepage_raw_paragraph = url_pd[\"raw_paragraph\"][j]\n",
    "            model_homepage_source_url = model_url\n",
    "            break\n",
    "    model_description = data_label['model_description'][idx]\n",
     "    # TODO: assume only Chinese models (e.g. Qwen, ChatGLM) have extra links such as ModelScope\n",
    "    model_other_url = None\n",
     "    # If a GitHub URL exists, consider the model card complete\n",
     "    model_card_complete = 1 if model_code_url else 0\n",
     "    # TODO: assume open_access=0 for Llama models, 1 for all others\n",
     "    open_access = 0 if \"llama\" in model_name.lower() else 1\n",
     "    properties = None  # evaluation tables? parts that are not needed should be removed\n",
     "    # TODO: table headers\n",
    "    performanceMetrics = None\n",
    "    known_info = None\n",
    "    \n",
     "    # Write the collected fields back with .loc to avoid chained-assignment issues\n",
     "    data_label.loc[idx, 'number'] = number\n",
     "    data_label.loc[idx, 'model_name'] = model_name\n",
     "    data_label.loc[idx, 'model_description'] = model_description\n",
     "    data_label.loc[idx, 'model_homepage'] = model_homepage\n",
     "    data_label.loc[idx, 'model_homepage_raw_paragraph'] = model_homepage_raw_paragraph\n",
     "    data_label.loc[idx, 'model_homepage_source_url'] = model_homepage_source_url\n",
     "    data_label.loc[idx, 'model_code_url'] = model_code_url\n",
     "    data_label.loc[idx, 'model_code_url_raw_paragraph'] = model_code_url_raw_paragraph\n",
     "    data_label.loc[idx, 'model_code_source_url'] = model_code_source_url\n",
     "    data_label.loc[idx, 'model_other_url'] = model_other_url\n",
     "    data_label.loc[idx, 'model_card_complete'] = model_card_complete\n",
     "    data_label.loc[idx, 'open_access'] = open_access\n",
     "    data_label.loc[idx, 'properties'] = properties\n",
     "    data_label.loc[idx, 'performanceMetrics'] = performanceMetrics\n",
     "    data_label.loc[idx, 'known_info'] = known_info"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_label.to_excel(f'./data_label/data_label_{stard_idx}_{end_idx}.xlsx', index=False)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "lccc",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
