{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Nous-Hermes-2-Yi-34B-GGUF in Kaggle free GPU with llama.cpp"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Checkout my [Twitter(@rohanpaul_ai)](https://twitter.com/rohanpaul_ai) for daily LLM bits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install --upgrade trl peft accelerate bitsandbytes datasets -q\n",
    "\n",
    "!pip3 install huggingface-hub -q\n",
    "\n",
    "!huggingface-cli download \\\n",
    "TheBloke/Nous-Hermes-2-Yi-34B-GGUF nous-hermes-2-yi-34b.Q2_K.gguf \\\n",
    "--local-dir . --local-dir-use-symlinks False\n",
    "\n",
    "!git clone https://github.com/ggerganov/llama.cpp.git\n",
    "\n",
     "%cd llama.cpp\n",
     "\n",
     "# Build with cuBLAS so -ngl can actually offload layers to the GPU\n",
     "!make LLAMA_CUBLAS=1 -j2\n",
    "\n",
    "!git clone https://github.com/ggerganov/llama.cpp.git\n",
    "\n",
    "cd ./llama.cpp\n",
    "\n",
    "!make\n",
    "\n",
    "\n",
    "!./main -ngl 35 \\\n",
    "-m /kaggle/working/nous-hermes-2-yi-34b.Q2_K.gguf \\\n",
    "--color -c 4096 \\\n",
    "--temp 0.7 \\\n",
    "--repeat_penalty 1.1 \\\n",
    "-n -1 \\\n",
    "-p \"system\\n{\\\"You are a friendly AI\\\"}\\nuser\\n{\\\"Tell me if you know python coding\\\"}\\nassistant\"\n"
   ]
  },
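  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an alternative to the `main` CLI above, the same GGUF file can be driven from Python with the `llama-cpp-python` bindings (`pip install llama-cpp-python`). This is a minimal sketch, not part of the original walkthrough; the model path assumes the download cell above has already been run:\n",
    "\n",
    "```python\n",
    "from llama_cpp import Llama\n",
    "\n",
    "# n_gpu_layers mirrors -ngl 35; n_ctx mirrors -c 4096\n",
    "llm = Llama(\n",
    "    model_path=\"/kaggle/working/nous-hermes-2-yi-34b.Q2_K.gguf\",\n",
    "    n_gpu_layers=35,\n",
    "    n_ctx=4096,\n",
    ")\n",
    "\n",
    "# Same ChatML prompt format the CLI command uses\n",
    "prompt = (\n",
    "    \"<|im_start|>system\\nYou are a friendly AI.<|im_end|>\\n\"\n",
    "    \"<|im_start|>user\\nTell me if you know Python coding.<|im_end|>\\n\"\n",
    "    \"<|im_start|>assistant\\n\"\n",
    ")\n",
    "out = llm(prompt, max_tokens=256, temperature=0.7,\n",
    "          repeat_penalty=1.1, stop=[\"<|im_end|>\"])\n",
    "print(out[\"choices\"][0][\"text\"])\n",
    "```"
   ]
  },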
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Run Nous-Hermes-2-Yi-34B-GGUF on Kaggle's free GPU with llama.cpp.\n",
     "\n",
     "You can also run it in Colab by changing the path to the downloaded model file."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
