{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe4b4186",
   "metadata": {},
   "source": [
    "# Building a \"Thinking\" LLM from Scratch: A Detailed End-to-End Guide\n",
    "\n",
    "Welcome! This notebook provides a comprehensive, step-by-step guide to building a small Large Language Model (LLM) capable of exhibiting a \"thinking\" process before delivering its final answer. We will implement the core components, including a simple tokenizer, the model architecture, data loaders, and training loops, all self-contained within this notebook.\n",
    "\n",
    "**Stages Covered:**\n",
    "1.  **Setup & Introduction**: Essential imports, configurations, and helper functions.\n",
    "2.  **Tokenizer Training**: Creating and training a simple Byte Pair Encoding (BPE) tokenizer.\n",
    "3.  **Data Preparation & Dataset Classes**: Creating sample datasets and PyTorch `Dataset` classes.\n",
    "4.  **Model Architecture**: Implementing Transformer blocks, RoPE, RMSNorm, and the overall LLM structure.\n",
    "5.  **Pretraining**: Training the model on raw text to learn basic language patterns.\n",
    "6.  **Supervised Fine-Tuning (SFT)**: Aligning the model to follow instructions and chat.\n",
    "7.  **Reasoning Training**: Fine-tuning the SFT model to generate explicit thought processes (`<think>...</think>`) before its final answer (`<answer>...</answer>`).\n",
    "8.  **Inference**: Using our trained \"thinking\" LLM.\n",
    "\n",
    "This guide is designed for learners who want a deep dive into the LLM building process. We'll focus on understanding each step, the underlying theory, and the corresponding code."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ebefe5e1",
   "metadata": {},
   "source": [
    "## Part 0: Setup and Introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9109e3f",
   "metadata": {},
   "source": [
    "### 0.1 Import Necessary Libraries\n",
    "We start by importing the Python libraries required for numerical operations, deep learning, file handling, and tokenization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "094c7540",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "c:\\Users\\faree\\Desktop\\minimind_test\\.venv-mimind-thinking\\lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "PyTorch version: 2.7.0+cpu\n",
      "CUDA available: False\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import json\n",
    "import math\n",
    "import time\n",
    "import random\n",
    "import warnings\n",
    "from typing import Optional, Tuple, List, Union, Iterator\n",
    "import numpy as np\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "from torch import optim\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "from contextlib import nullcontext # For mixed precision\n",
    "\n",
    "# From Hugging Face libraries\n",
    "from transformers import AutoTokenizer, PretrainedConfig, PreTrainedModel\n",
    "from transformers.modeling_outputs import CausalLMOutputWithPast, BaseModelOutputWithPast\n",
    "from transformers.activations import ACT2FN\n",
    "from tokenizers import Tokenizer as HFTokenizer # renamed to avoid clashing with transformers' tokenizer classes\n",
    "from tokenizers import models as hf_models\n",
    "from tokenizers import trainers as hf_trainers\n",
    "from tokenizers import pre_tokenizers as hf_pre_tokenizers\n",
    "from tokenizers import decoders as hf_decoders\n",
    "\n",
    "warnings.filterwarnings('ignore')\n",
    "print(f\"PyTorch version: {torch.__version__}\")\n",
    "print(f\"CUDA available: {torch.cuda.is_available()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bdb8bd56",
   "metadata": {},
   "source": [
    "### 0.2 Helper Functions\n",
    "These utility functions will assist in logging, learning rate scheduling, and providing summaries of our model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "d24f961c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def logger(content):\n",
    "    print(f\"[{time.strftime('%Y-%m-%d %H:%M:%S')}] {content}\")\n",
    "\n",
    "def get_lr(current_step, total_steps, initial_lr, min_lr_ratio=0.1, warmup_ratio=0.01):\n",
    "    \"\"\"Cosine decay learning rate scheduler with linear warmup.\"\"\"\n",
    "    warmup_steps = int(warmup_ratio * total_steps)\n",
    "    min_lr = initial_lr * min_lr_ratio\n",
    "    if warmup_steps > 0 and current_step < warmup_steps:\n",
    "        return initial_lr * (current_step / warmup_steps)\n",
    "    elif current_step > total_steps:\n",
    "        return min_lr\n",
    "    else:\n",
    "        decay_steps = total_steps - warmup_steps\n",
    "        progress = (current_step - warmup_steps) / max(1, decay_steps)\n",
    "        coeff = 0.5 * (1.0 + math.cos(math.pi * progress))\n",
    "        return min_lr + coeff * (initial_lr - min_lr)\n",
    "\n",
    "def print_model_summary(model, model_name=\"Model\"):\n",
    "    logger(f\"--- {model_name} Summary ---\")\n",
    "    if hasattr(model, 'config'):\n",
    "      logger(f\"Configuration: {model.config}\")\n",
    "    total_params = sum(p.numel() for p in model.parameters())\n",
    "    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
    "    logger(f\"Total parameters: {total_params / 1e6:.3f} M ({total_params})\")\n",
    "    logger(f\"Trainable parameters: {trainable_params / 1e6:.3f} M ({trainable_params})\")\n",
    "    logger(\"-------------------------\")"
   ]
  },
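  {
   "cell_type": "markdown",
   "id": "b7e1a2f0",
   "metadata": {},
   "source": [
    "As a quick sanity check, the next cell evaluates the schedule at a few milestones: the learning rate ramps linearly to the peak over the warmup steps, then follows a cosine down to the `min_lr` floor. (This is an illustrative standalone copy of the logic in `get_lr`, assuming `initial_lr=3e-4`, `total_steps=1000`, and the default ratios.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7e1a2f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standalone copy of the warmup + cosine-decay schedule from get_lr above,\n",
    "# so this cell runs on its own. Prints the LR at a few milestone steps.\n",
    "import math\n",
    "\n",
    "def cosine_lr(step, total_steps=1000, initial_lr=3e-4, min_lr_ratio=0.1, warmup_ratio=0.01):\n",
    "    warmup_steps = int(warmup_ratio * total_steps)\n",
    "    min_lr = initial_lr * min_lr_ratio\n",
    "    if warmup_steps > 0 and step < warmup_steps:\n",
    "        return initial_lr * step / warmup_steps  # linear warmup\n",
    "    if step > total_steps:\n",
    "        return min_lr\n",
    "    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)\n",
    "    return min_lr + 0.5 * (1.0 + math.cos(math.pi * progress)) * (initial_lr - min_lr)\n",
    "\n",
    "for step in (0, 10, 500, 1000):\n",
    "    print(f\"step {step:4d}: lr = {cosine_lr(step):.2e}\")"
   ]
  },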
  {
   "cell_type": "markdown",
   "id": "ab9ebd21",
   "metadata": {},
   "source": [
    "### 0.3 Sample Data and Global Configurations\n",
    "We first create small sample datasets for each training stage (pretraining, SFT, and reasoning), then define global settings such as the computation device (CPU/GPU), random seeds for reproducibility, and key hyperparameters. Note that the configuration used here (24 layers, hidden size 1024) yields a model of several hundred million parameters; shrink these values if you want the notebook to run quickly on a CPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "2c788052",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 16:10:20] Sample pretraining data created at: ./dataset_notebook_scratch\\pretrain_data.jsonl\n",
      "[2025-05-14 16:10:20] Sample SFT data created at: ./dataset_notebook_scratch\\sft_data.jsonl\n",
      "[2025-05-14 16:10:20] Sample reasoning data created at: ./dataset_notebook_scratch\\reasoning_data.jsonl\n"
     ]
    }
   ],
   "source": [
    "NOTEBOOK_DATA_DIR = \"./dataset_notebook_scratch\"\n",
    "os.makedirs(NOTEBOOK_DATA_DIR, exist_ok=True)\n",
    "pretrain_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"pretrain_data.jsonl\")\n",
    "reasoning_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"reasoning_data.jsonl\")\n",
    "sft_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"sft_data.jsonl\")\n",
    "\n",
    "\n",
    "# --- Pretraining Data ---\n",
    "sample_pretrain_data = [\n",
    "    {\"text\": \"The sun shines brightly in the clear blue sky.\"},\n",
    "    {\"text\": \"Cats love to chase mice and play with yarn balls.\"},\n",
    "    {\"text\": \"Reading books expands your knowledge and vocabulary.\"},\n",
    "    {\"text\": \"Artificial intelligence is a rapidly evolving field of study.\"},\n",
    "    {\"text\": \"To bake a cake, you need flour, sugar, eggs, and butter.\"},\n",
    "    {\"text\": \"Large language models are trained on vast amounts of text data.\"},\n",
    "    {\"text\": \"The quick brown fox jumps over the lazy dog.\"}\n",
    "]\n",
    "with open(pretrain_file_path, 'w', encoding='utf-8') as f:\n",
    "    for item in sample_pretrain_data:\n",
    "        f.write(json.dumps(item) + '\\n')\n",
    "logger(f\"Sample pretraining data created at: {pretrain_file_path}\")\n",
    "\n",
    "# --- SFT Data ---\n",
    "sample_sft_data = [\n",
    "    {\"conversations\": [\n",
    "        {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"I am doing well, thank you! How can I help you today?\"}\n",
    "    ]},\n",
    "    {\"conversations\": [\n",
    "        {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"The capital of France is Paris.\"}\n",
    "    ]},\n",
    "    {\"conversations\": [\n",
    "        {\"role\": \"user\", \"content\": \"Explain gravity in simple terms.\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"Gravity is the force that pulls objects towards each other. It's why things fall down to the ground!\"}\n",
    "    ]}\n",
    "]\n",
    "with open(sft_file_path, 'w', encoding='utf-8') as f:\n",
    "    for item in sample_sft_data:\n",
    "        f.write(json.dumps(item) + '\\n')\n",
    "logger(f\"Sample SFT data created at: {sft_file_path}\")\n",
    "\n",
    "# --- Reasoning Data ---\n",
    "sample_reasoning_data = [\n",
    "    {\"conversations\": [\n",
    "        {\"role\": \"user\", \"content\": \"If I have 3 apples and eat 1, how many are left?\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"<think>The user starts with 3 apples. The user eats 1 apple. This means 1 apple is subtracted from the initial amount. So, 3 - 1 = 2.</think><answer>You have 2 apples left.</answer>\"}\n",
    "    ]},\n",
    "    {\"conversations\": [\n",
    "        {\"role\": \"user\", \"content\": \"What are the primary colors?\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"<think>The user is asking about primary colors. These are colors that cannot be made by mixing other colors. The standard set of primary colors in additive color models (like light) are Red, Green, and Blue (RGB). For subtractive models (like paint), they are often considered Red, Yellow, Blue (RYB) or Cyan, Magenta, Yellow (CMY).</think><answer>The primary colors are typically considered to be red, yellow, and blue. These are colors that can be mixed to create a range of other colors but cannot be created by mixing other colors themselves.</answer>\"}\n",
    "    ]}\n",
    "]\n",
    "with open(reasoning_file_path, 'w', encoding='utf-8') as f:\n",
    "    for item in sample_reasoning_data:\n",
    "        f.write(json.dumps(item) + '\\n')\n",
    "logger(f\"Sample reasoning data created at: {reasoning_file_path}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "90304fa4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Using device: cpu\n",
      "[LOG]: Using PyTorch dtype: torch.float32 (derived from DTYPE_STR: float32)\n",
      "[LOG]: Output directory: ./out_notebook_scratch\n",
      "[LOG]: Data directory: ./dataset_notebook_scratch\n",
      "[LOG]: Trained tokenizer will be saved to/loaded from: ./out_notebook_scratch\\demo_tokenizer.json\n"
     ]
    }
   ],
   "source": [
    "# --- Device & Seeds ---\n",
    "DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "BASE_SEED = 42\n",
    "DTYPE_STR = \"bfloat16\" if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else \"float16\"\n",
    "PTDTYPE = {'float32': torch.float32, 'bfloat16': torch.bfloat16, 'float16': torch.float16}[DTYPE_STR if DEVICE.type == 'cuda' else 'float32']\n",
    "\n",
    "torch.manual_seed(BASE_SEED)\n",
    "if torch.cuda.is_available():\n",
    "    torch.cuda.manual_seed_all(BASE_SEED)\n",
    "random.seed(BASE_SEED)\n",
    "np.random.seed(BASE_SEED)\n",
    "\n",
    "# --- Tokenizer Configuration ---\n",
    "DEMO_VOCAB_SIZE = 32000  # Increased vocab for larger model\n",
    "SPECIAL_TOKENS_LIST = [\"<|endoftext|>\", \"<|im_start|>\", \"<|im_end|>\", \"<pad>\"]\n",
    "\n",
    "# --- Model Configuration (Larger Model) ---\n",
    "DEMO_HIDDEN_SIZE = 1024\n",
    "DEMO_NUM_LAYERS = 24\n",
    "DEMO_NUM_ATTENTION_HEADS = 16\n",
    "DEMO_NUM_KV_HEADS = 16\n",
    "DEMO_MAX_SEQ_LEN = 1024\n",
    "DEMO_INTERMEDIATE_SIZE = int(DEMO_HIDDEN_SIZE * 8 / 3)\n",
    "DEMO_INTERMEDIATE_SIZE = 32 * ((DEMO_INTERMEDIATE_SIZE + 32 - 1) // 32)\n",
    "\n",
    "# --- Training Hyperparameters (Larger Model) ---\n",
    "DEMO_PRETRAIN_EPOCHS = 10\n",
    "DEMO_SFT_EPOCHS = 10\n",
    "DEMO_REASONING_EPOCHS = 10\n",
    "DEMO_BATCH_SIZE = 16\n",
    "DEMO_PRETRAIN_LR = 3e-4\n",
    "DEMO_SFT_LR = 1e-4\n",
    "DEMO_REASONING_LR = 5e-5\n",
    "\n",
    "# --- Directories (re-declared so this cell can also run standalone) ---\n",
    "NOTEBOOK_OUT_DIR = \"./out_notebook_scratch\"\n",
    "NOTEBOOK_DATA_DIR = \"./dataset_notebook_scratch\"\n",
    "NOTEBOOK_TOKENIZER_PATH = os.path.join(NOTEBOOK_OUT_DIR, \"demo_tokenizer.json\")\n",
    "os.makedirs(NOTEBOOK_OUT_DIR, exist_ok=True)\n",
    "os.makedirs(NOTEBOOK_DATA_DIR, exist_ok=True)\n",
    "pretrain_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"pretrain_data.jsonl\")\n",
    "reasoning_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"reasoning_data.jsonl\")\n",
    "sft_file_path = os.path.join(NOTEBOOK_DATA_DIR, \"sft_data.jsonl\")\n",
    "\n",
    "# Note: this overrides the timestamped logger defined earlier with a simpler\n",
    "# '[LOG]:' prefix used for the remainder of the notebook.\n",
    "def logger(msg):\n",
    "    print(f\"[LOG]: {msg}\")\n",
    "\n",
    "logger(f\"Using device: {DEVICE}\")\n",
    "logger(f\"Using PyTorch dtype: {PTDTYPE} (derived from DTYPE_STR: {DTYPE_STR if DEVICE.type == 'cuda' else 'float32'})\")\n",
    "logger(f\"Output directory: {NOTEBOOK_OUT_DIR}\")\n",
    "logger(f\"Data directory: {NOTEBOOK_DATA_DIR}\")\n",
    "logger(f\"Trained tokenizer will be saved to/loaded from: {NOTEBOOK_TOKENIZER_PATH}\")"
   ]
  },
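  {
   "cell_type": "markdown",
   "id": "c2d3e4f0",
   "metadata": {},
   "source": [
    "Before moving on, it helps to estimate the model size this configuration implies. A back-of-envelope transformer parameter count (an approximation: norms and biases ignored, token embeddings assumed tied with the LM head, and a three-matrix SwiGLU-style MLP assumed, consistent with the `8/3` intermediate sizing above) is the embedding table plus the per-layer attention and MLP projections:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2d3e4f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rough parameter estimate for the configuration above. Approximation only:\n",
    "# norms/biases ignored, embeddings assumed tied with the LM head.\n",
    "V, H, L, I = 32000, 1024, 24, 2752  # vocab, hidden, layers, intermediate (rounded to x32)\n",
    "embed = V * H                        # token embedding table\n",
    "attn_per_layer = 4 * H * H           # q, k, v, o projections (16 KV heads = no GQA saving)\n",
    "mlp_per_layer = 3 * H * I            # gate, up, down projections\n",
    "total = embed + L * (attn_per_layer + mlp_per_layer)\n",
    "print(f\"~{total / 1e6:.1f} M parameters\")  # -> ~336.3 M parameters"
   ]
  },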
  {
   "cell_type": "markdown",
   "id": "21d42964",
   "metadata": {},
   "source": [
    "## Part 1: Tokenizer Training\n",
    "\n",
    "**Theory:**\n",
    "Tokenization is the first step in processing text for an LLM. It involves breaking down raw text into smaller units called tokens, which are then mapped to numerical IDs. These IDs are what the model actually ingests.\n",
    "\n",
    "**Why Tokenize?**\n",
    "1.  **Fixed Vocabulary:** Neural networks work with fixed-size input vectors. Tokenization maps an arbitrarily large set of words/subwords to a fixed-size vocabulary.\n",
    "2.  **Handling OOV (Out-of-Vocabulary) Words:** Subword tokenization algorithms (like BPE) can represent rare or new words by breaking them into known subword units, reducing the OOV problem.\n",
    "3.  **Efficiency:** Representing text as sequences of integers is more computationally efficient than raw strings.\n",
    "\n",
    "**Byte Pair Encoding (BPE):**\n",
    "BPE is a popular subword tokenization algorithm. It works as follows:\n",
    "1.  **Initialization:** Start with a vocabulary consisting of all individual characters present in the training corpus.\n",
    "2.  **Iteration:** Repeatedly count all adjacent pairs of symbols (tokens) in the corpus and merge the most frequent pair into a new single symbol (token). This new symbol is added to the vocabulary.\n",
    "3.  **Termination:** Continue iterating until the vocabulary reaches a predefined size or no more merges improve compression significantly.\n",
    "\n",
    "We will use the `tokenizers` library from Hugging Face to train a simple BPE tokenizer on a small sample text."
   ]
  },
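  {
   "cell_type": "markdown",
   "id": "d4e5f6a0",
   "metadata": {},
   "source": [
    "To make the merge loop concrete, the next cell sketches a single BPE training iteration on a toy three-word corpus: count adjacent symbol pairs, then merge the most frequent pair everywhere it occurs. (The actual training below uses the Hugging Face `tokenizers` implementation; this is purely illustrative.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One BPE merge step on a toy corpus: find the most frequent adjacent symbol\n",
    "# pair, then fuse it into a single new symbol wherever it appears.\n",
    "from collections import Counter\n",
    "\n",
    "def most_frequent_pair(words):\n",
    "    pairs = Counter()\n",
    "    for w in words:\n",
    "        for a, b in zip(w, w[1:]):\n",
    "            pairs[(a, b)] += 1\n",
    "    return pairs.most_common(1)[0][0]\n",
    "\n",
    "def merge_pair(words, pair):\n",
    "    merged = []\n",
    "    for w in words:\n",
    "        out, i = [], 0\n",
    "        while i < len(w):\n",
    "            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:\n",
    "                out.append(w[i] + w[i + 1])  # fused symbol joins the vocabulary\n",
    "                i += 2\n",
    "            else:\n",
    "                out.append(w[i])\n",
    "                i += 1\n",
    "        merged.append(out)\n",
    "    return merged\n",
    "\n",
    "corpus = [list(\"low\"), list(\"lower\"), list(\"lowest\")]\n",
    "pair = most_frequent_pair(corpus)\n",
    "print(\"most frequent pair:\", pair)\n",
    "print(\"after merge:\", merge_pair(corpus, pair))"
   ]
  },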
  {
   "cell_type": "markdown",
   "id": "a0a9b7c9",
   "metadata": {},
   "source": [
    "### 1.1 Prepare Sample Corpus for Tokenizer Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "a7c9e911",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Tokenizer training corpus saved to: ./dataset_notebook_scratch\\tokenizer_corpus.txt\n"
     ]
    }
   ],
   "source": [
    "tokenizer_corpus = [\n",
    "    \"Hello world, this is a demonstration of building a thinking LLM.\",\n",
    "    \"Language models learn from text data.\",\n",
    "    \"Tokenization is a crucial first step.\",\n",
    "    \"We will train a BPE tokenizer.\",\n",
    "    \"Think before you answer.\",\n",
    "    \"The answer is forty-two.\",\n",
    "    \"<think>Let's consider the options.</think><answer>Option A seems best.</answer>\",\n",
    "    \"<|im_start|>user\\nWhat's up?<|im_end|>\\n<|im_start|>assistant\\nNot much!<|im_end|>\"\n",
    "]\n",
    "\n",
    "# Save to a temporary file for the tokenizer trainer\n",
    "tokenizer_corpus_file = os.path.join(NOTEBOOK_DATA_DIR, \"tokenizer_corpus.txt\")\n",
    "with open(tokenizer_corpus_file, 'w', encoding='utf-8') as f:\n",
    "    for line in tokenizer_corpus:\n",
    "        f.write(line + \"\\n\")\n",
    "\n",
    "logger(f\"Tokenizer training corpus saved to: {tokenizer_corpus_file}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d43944b",
   "metadata": {},
   "source": [
    "### 1.2 Train the BPE Tokenizer\n",
    "We use the `tokenizers` library to initialize a BPE model, attach a byte-level pre-tokenizer (which splits on whitespace and punctuation via a regex and guarantees every byte is representable, so nothing is ever out-of-vocabulary), specify a trainer, and then train on our corpus."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b30f4f90",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Starting tokenizer training with vocab_size=32000...\n",
      "[LOG]: Tokenizer training complete. Vocab size: 435\n",
      "[LOG]: Tokenizer saved to ./out_notebook_scratch\\demo_tokenizer.json\n",
      "[LOG]: Trained Tokenizer Vocab (first 10 and special tokens):\n",
      "[LOG]:   'ain': 328\n",
      "[LOG]:   '}': 96\n",
      "[LOG]:   'D': 39\n",
      "[LOG]:   'Ġfirst': 428\n",
      "[LOG]:   'Wh': 324\n",
      "[LOG]:   'Ê': 138\n",
      "[LOG]:   'BP': 313\n",
      "[LOG]:   'Í': 141\n",
      "[LOG]:   'Ġf': 281\n",
      "[LOG]:   'ê': 170\n",
      "[LOG]:   '<|endoftext|>': 0\n",
      "[LOG]:   '<pad>': 3\n",
      "[LOG]:   '<|im_start|>': 1\n",
      "[LOG]:   '<|im_end|>': 2\n",
      "[LOG]: Original: Hello <|im_start|> world <think>思考中</think><answer>答案</answer> <|im_end|>\n",
      "[LOG]: Encoded IDs: [432, 224, 1, 401, 224, 31, 296, 33, 166, 226, 255, 168, 226, 229, 164, 120, 259, 31, 18, 296, 311, 287, 33, 167, 259, 246, 166, 98, 234, 31, 18, 287, 33, 224, 2]\n",
      "[LOG]: Encoded Tokens: ['Hello', 'Ġ', '<|im_start|>', 'Ġworld', 'Ġ', '<', 'think', '>', 'æ', 'Ģ', 'Ŀ', 'è', 'Ģ', 'ĥ', 'ä', '¸', 'Ń', '<', '/', 'think', '><', 'answer', '>', 'ç', 'Ń', 'Ķ', 'æ', '¡', 'Ī', '<', '/', 'answer', '>', 'Ġ', '<|im_end|>']\n",
      "[LOG]: Decoded: Hello  world <think>思考中</think><answer>答案</answer> \n"
     ]
    }
   ],
   "source": [
    "def train_demo_tokenizer(corpus_files: List[str], vocab_size: int, save_path: str, special_tokens: List[str]):\n",
    "    logger(f\"Starting tokenizer training with vocab_size={vocab_size}...\")\n",
    "    \n",
    "    # Initialize a BPE model\n",
    "    tokenizer_bpe = HFTokenizer(hf_models.BPE(unk_token=\"<unk>\")) # Add unk_token for BPE model\n",
    "    \n",
    "    # Pre-tokenizer: splits text into words, then processes at byte-level for OOV robustness.\n",
    "    # ByteLevel(add_prefix_space=False) is common for models like GPT-2/LLaMA.\n",
    "    tokenizer_bpe.pre_tokenizer = hf_pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=True)\n",
    "    \n",
    "    # Decoder: Reconstructs text from tokens, handling byte-level tokens correctly.\n",
    "    tokenizer_bpe.decoder = hf_decoders.ByteLevel()\n",
    "\n",
    "    # Trainer: BpeTrainer with specified vocab size and special tokens.\n",
    "    # The initial_alphabet from ByteLevel ensures all single bytes are potential tokens.\n",
    "    trainer = hf_trainers.BpeTrainer(\n",
    "        vocab_size=vocab_size,\n",
    "        special_tokens=special_tokens,\n",
    "        show_progress=True,\n",
    "        initial_alphabet=hf_pre_tokenizers.ByteLevel.alphabet()\n",
    "    )\n",
    "    \n",
    "    # Train the tokenizer\n",
    "    if isinstance(corpus_files, str): # If a single file path string\n",
    "        corpus_files = [corpus_files]\n",
    "    \n",
    "    tokenizer_bpe.train(corpus_files, trainer=trainer)\n",
    "    logger(f\"Tokenizer training complete. Vocab size: {tokenizer_bpe.get_vocab_size()}\")\n",
    "\n",
    "    # Save the tokenizer as a single JSON file. It can be reloaded with\n",
    "    # HFTokenizer.from_file() or wrapped in PreTrainedTokenizerFast(tokenizer_file=...).\n",
    "    tokenizer_bpe.save(save_path)\n",
    "    logger(f\"Tokenizer saved to {save_path}\")\n",
    "    return tokenizer_bpe\n",
    "\n",
    "# Train and save our demo tokenizer\n",
    "trained_hf_tokenizer = train_demo_tokenizer(\n",
    "    corpus_files=[tokenizer_corpus_file],\n",
    "    vocab_size=DEMO_VOCAB_SIZE,\n",
    "    save_path=NOTEBOOK_TOKENIZER_PATH,\n",
    "    special_tokens=SPECIAL_TOKENS_LIST\n",
    ")\n",
    "\n",
    "# Verify special tokens are present and correctly mapped\n",
    "logger(\"Trained Tokenizer Vocab (first 10 and special tokens):\")\n",
    "vocab = trained_hf_tokenizer.get_vocab()\n",
    "for i, (token, token_id) in enumerate(vocab.items()):\n",
    "    if i < 10 or token in SPECIAL_TOKENS_LIST:\n",
    "        logger(f\"  '{token}': {token_id}\")\n",
    "\n",
    "# Test encoding and decoding with the trained tokenizer object\n",
    "test_sentence = \"Hello <|im_start|> world <think>思考中</think><answer>答案</answer> <|im_end|>\"\n",
    "encoded = trained_hf_tokenizer.encode(test_sentence)\n",
    "logger(f\"Original: {test_sentence}\")\n",
    "logger(f\"Encoded IDs: {encoded.ids}\")\n",
    "logger(f\"Encoded Tokens: {encoded.tokens}\")\n",
    "decoded = trained_hf_tokenizer.decode(encoded.ids)\n",
    "logger(f\"Decoded: {decoded}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7131542",
   "metadata": {},
   "source": [
    "### 1.3 Load Trained Tokenizer into the `transformers` Ecosystem\n",
    "To use our newly trained tokenizer with standard Hugging Face features (especially `apply_chat_template`), we wrap it in a `transformers` tokenizer object. Note that `AutoTokenizer.from_pretrained` expects a directory (or Hub repo id) containing files like `tokenizer.json` and `tokenizer_config.json`; passing the path of a bare `.json` file fails. `PreTrainedTokenizerFast(tokenizer_file=...)` loads the single JSON file produced by `HFTokenizer.save()` directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "6dc1f7ae",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Error loading trained tokenizer with AutoTokenizer: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: './out_notebook_scratch\\demo_tokenizer.json'.\n",
      "[LOG]: Falling back to using the HFTokenizer object directly (chat_template might not work as expected).\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    # AutoTokenizer.from_pretrained expects a directory or Hub repo id, not a bare\n",
    "    # .json file path, so we wrap the file saved by HFTokenizer.save() in a\n",
    "    # PreTrainedTokenizerFast instead.\n",
    "    from transformers import PreTrainedTokenizerFast\n",
    "    tokenizer = PreTrainedTokenizerFast(tokenizer_file=NOTEBOOK_TOKENIZER_PATH)\n",
    "    logger(f\"Successfully loaded trained tokenizer from {NOTEBOOK_TOKENIZER_PATH}\")\n",
    "    \n",
    "    # Ensure PAD token is set. It's good practice for models.\n",
    "    # If your special tokens include a pad token, map it.\n",
    "    # Otherwise, often EOS is used as PAD for generation.\n",
    "    if \"<pad>\" in SPECIAL_TOKENS_LIST:\n",
    "        tokenizer.pad_token = \"<pad>\"\n",
    "    elif tokenizer.eos_token:\n",
    "        tokenizer.pad_token = tokenizer.eos_token\n",
    "    else: # Fallback if no EOS and no <pad> was in special tokens\n",
    "        # This case should be avoided by ensuring <pad> or <|endoftext|> is a special token\n",
    "        tokenizer.add_special_tokens({'pad_token': '<pad>'})\n",
    "        logger(\"Added '<pad>' as pad_token as it was missing.\")\n",
    "\n",
    "    # Assign other special tokens if they were part of SPECIAL_TOKENS_LIST during training\n",
    "    # AutoTokenizer usually infers these from the saved tokenizer file if they were added correctly.\n",
    "    # For explicit control or if issues arise:\n",
    "    if \"<|endoftext|>\" in SPECIAL_TOKENS_LIST:\n",
    "        tokenizer.eos_token = \"<|endoftext|>\"\n",
    "        tokenizer.bos_token = \"<|endoftext|>\" # Often models use same for BOS/EOS or a dedicated BOS\n",
    "    \n",
    "    # If your model specifically needs a different BOS, set it.\n",
    "    # For many models, prepending tokenizer.bos_token manually to input is common.\n",
    "    # tokenizer.bos_token = \"<|im_start|>\" # If you want <|im_start|> to be the automatic BOS\n",
    "\n",
    "    logger(f\"Final Tokenizer - Vocab Size: {tokenizer.vocab_size}\")\n",
    "    logger(f\"Final Tokenizer - PAD token: '{tokenizer.pad_token}', ID: {tokenizer.pad_token_id}\")\n",
    "    logger(f\"Final Tokenizer - EOS token: '{tokenizer.eos_token}', ID: {tokenizer.eos_token_id}\")\n",
    "    logger(f\"Final Tokenizer - BOS token: '{tokenizer.bos_token}', ID: {tokenizer.bos_token_id}\")\n",
    "    logger(f\"Final Tokenizer - UNK token: '{tokenizer.unk_token}', ID: {tokenizer.unk_token_id}\")\n",
    "    \n",
    "    # Use len(tokenizer), which counts added special tokens;\n",
    "    # tokenizer.vocab_size reports only the base vocabulary.\n",
    "    DEMO_VOCAB_SIZE_FINAL = len(tokenizer)\n",
    "    logger(f\"Effective Vocab Size for Model: {DEMO_VOCAB_SIZE_FINAL}\")\n",
    "\n",
    "    # Define a simple chat template for SFT and Reasoning stage\n",
    "    # This is a simplified version of ChatML\n",
    "    chat_template_str = (\n",
    "        \"{% for message in messages %}\"\n",
    "        \"{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}\"\n",
    "        \"{% endfor %}\"\n",
    "        \"{% if add_generation_prompt %}\"\n",
    "        \"{{ '<|im_start|>assistant\\n' }}\"\n",
    "        \"{% endif %}\"\n",
    "    )\n",
    "    tokenizer.chat_template = chat_template_str\n",
    "    logger(f\"Chat template set for tokenizer.\")\n",
    "    test_chat = [{\"role\":\"user\", \"content\":\"Hi\"}]\n",
    "    logger(f\"Test chat template output: {tokenizer.apply_chat_template(test_chat, tokenize=False, add_generation_prompt=True)}\")\n",
    "\n",
    "except Exception as e:\n",
    "    logger(f\"Error loading trained tokenizer with AutoTokenizer: {e}\")\n",
    "    logger(\"Falling back to using the HFTokenizer object directly (chat_template might not work as expected).\")\n",
    "    tokenizer = trained_hf_tokenizer # Fallback, but AutoTokenizer is preferred for full features\n",
    "    DEMO_VOCAB_SIZE_FINAL = tokenizer.get_vocab_size()\n",
    "    # Manually set special-token attributes (both ids and strings) so downstream\n",
    "    # code expecting the transformers-style interface keeps working.\n",
    "    tokenizer.pad_token_id = tokenizer.token_to_id(\"<pad>\") if tokenizer.token_to_id(\"<pad>\") is not None else tokenizer.token_to_id(\"<|endoftext|>\")\n",
    "    tokenizer.eos_token_id = tokenizer.token_to_id(\"<|endoftext|>\")\n",
    "    tokenizer.bos_token_id = tokenizer.token_to_id(\"<|im_start|>\")  # or <|endoftext|>, depending on convention\n",
    "    tokenizer.pad_token = \"<pad>\" if tokenizer.token_to_id(\"<pad>\") is not None else \"<|endoftext|>\"\n",
    "    tokenizer.eos_token = \"<|endoftext|>\"\n",
    "    tokenizer.bos_token = \"<|im_start|>\"\n"
   ]
  },
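  {
   "cell_type": "markdown",
   "id": "e5f6a7b0",
   "metadata": {},
   "source": [
    "To see exactly what this ChatML-style template produces without going through the `transformers` wrapper, we can render the same Jinja template directly. (A sketch assuming the `jinja2` package, which `apply_chat_template` uses under the hood; the raw-tokenizer fallback path above may not support `apply_chat_template` at all.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Render the ChatML-style chat template with jinja2 directly, showing the exact\n",
    "# prompt string produced for a one-turn conversation with a generation prompt.\n",
    "from jinja2 import Template\n",
    "\n",
    "chat_template_str = (\n",
    "    \"{% for message in messages %}\"\n",
    "    \"{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}\"\n",
    "    \"{% endfor %}\"\n",
    "    \"{% if add_generation_prompt %}\"\n",
    "    \"{{ '<|im_start|>assistant\\n' }}\"\n",
    "    \"{% endif %}\"\n",
    ")\n",
    "rendered = Template(chat_template_str).render(\n",
    "    messages=[{\"role\": \"user\", \"content\": \"Hi\"}],\n",
    "    add_generation_prompt=True,\n",
    ")\n",
    "print(repr(rendered))"
   ]
  },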
  {
   "cell_type": "markdown",
   "id": "3088e891",
   "metadata": {},
   "source": [
    "**What we've done:**\n",
    "We have successfully trained a BPE tokenizer on our sample corpus and saved it. We then loaded this trained tokenizer using `AutoTokenizer` (the Hugging Face standard way) to ensure compatibility with features like chat templating. We also verified its vocabulary and special token mappings. This tokenizer will now be used for all subsequent data processing and model interactions."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c41b596e",
   "metadata": {},
   "source": [
    "### 1.4 Self-Contained Dataset Classes\n",
    "Now we define PyTorch `Dataset` classes. These will take file paths to our `.jsonl` data and the *trained tokenizer* to prepare data in the format our model expects."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f99b90fe",
   "metadata": {},
   "source": [
    "#### 1.4.1 `DemoCorpusDataset` for Pretraining"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "f79f753d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Testing DemoCorpusDataset...\n",
      "[LOG]: Loading pretraining data from: ./dataset_notebook_scratch\\pretrain_data.jsonl\n",
      "[LOG]: Loaded 7 samples for pretraining.\n",
      "[LOG]: Error loading trained tokenizer with AutoTokenizer: 'tokenizers.Tokenizer' object has no attribute 'bos_token'\n",
      "[LOG]: Falling back to using the HFTokenizer object directly (chat_template might not work as expected).\n",
      "[LOG]: Fallback Tokenizer - Vocab Size: 435\n",
      "[LOG]: Fallback Tokenizer - PAD token: '<pad>', ID: 3\n",
      "[LOG]: Fallback Tokenizer - EOS token: '<|endoftext|>', ID: 0\n",
      "[LOG]: Fallback Tokenizer - BOS token: '<|im_start|>', ID: 1\n"
     ]
    }
   ],
   "source": [
    "class DemoCorpusDataset(Dataset):\n",
    "    \"\"\"Dataset for pretraining. Loads text, tokenizes, and prepares X, Y pairs.\"\"\"\n",
    "    def __init__(self, file_path: str, tokenizer, max_length: int):\n",
    "        self.tokenizer = tokenizer\n",
    "        self.max_length = max_length\n",
    "        self.samples = []\n",
    "        logger(f\"Loading pretraining data from: {file_path}\")\n",
    "        with open(file_path, 'r', encoding='utf-8') as f:\n",
    "            for line in f:\n",
    "                self.samples.append(json.loads(line.strip())['text'])\n",
    "        logger(f\"Loaded {len(self.samples)} samples for pretraining.\")\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.samples)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        text = self.samples[idx]\n",
    "        \n",
    "        # For pretraining, we typically add BOS and EOS if not implicitly handled by tokenizer during training.\n",
    "        # Here, we ensure BOS is prepended. EOS will be part of the sequence if it fits.\n",
    "        full_text_with_bos = self.tokenizer.bos_token + text \n",
    "\n",
    "        encoding = self.tokenizer(\n",
    "            full_text_with_bos,\n",
    "            max_length=self.max_length,\n",
    "            padding=\"max_length\",\n",
    "            truncation=True,\n",
    "            return_tensors='pt'\n",
    "        )\n",
    "        input_ids = encoding.input_ids.squeeze(0) # (max_length)\n",
    "\n",
    "        # Create loss mask: 1 for non-pad tokens, 0 for pad tokens.\n",
    "        # Loss is calculated on Y (shifted input_ids), so the mask should align with Y.\n",
    "        effective_loss_mask = (input_ids != self.tokenizer.pad_token_id).long()\n",
    "        \n",
    "        X = input_ids[:-1]  # (max_length - 1)\n",
    "        Y = input_ids[1:]   # (max_length - 1)\n",
    "        mask_for_loss_calculation = effective_loss_mask[1:] # Align with Y\n",
    "        \n",
    "        return X, Y, mask_for_loss_calculation\n",
    "\n",
    "logger(\"Testing DemoCorpusDataset...\")\n",
    "try:\n",
    "    test_pretrain_ds = DemoCorpusDataset(pretrain_file_path, tokenizer, DEMO_MAX_SEQ_LEN)\n",
    "    X_pt_sample, Y_pt_sample, mask_pt_sample = test_pretrain_ds[0]\n",
    "    logger(f\"Sample X (Pretrain): {X_pt_sample.shape}, {X_pt_sample[:10]}...\")\n",
    "    logger(f\"Sample Y (Pretrain): {Y_pt_sample.shape}, {Y_pt_sample[:10]}...\")\n",
    "    logger(f\"Sample Mask (Pretrain): {mask_pt_sample.shape}, {mask_pt_sample[:10]}...\")\n",
    "    logger(f\"Decoded X with BOS: {tokenizer.decode(torch.cat([torch.tensor([tokenizer.bos_token_id]), X_pt_sample[:torch.sum(mask_pt_sample)]]))}\")\n",
    "    logger(f\"Decoded Y: {tokenizer.decode(Y_pt_sample[:torch.sum(mask_pt_sample)])}\")\n",
    "except Exception as e:\n",
    "    logger(f\"Error testing DemoCorpusDataset (the tokenizer is likely not AutoTokenizer-compatible): {e}\")\n",
    "    logger(\"Falling back to wrapping the raw HFTokenizer in a callable adapter (chat_template might not work as expected).\")\n",
    "    \n",
    "    # Create a wrapper class that makes the tokenizer callable\n",
    "    class CallableTokenizerWrapper:\n",
    "        def __init__(self, base_tokenizer):\n",
    "            self.tokenizer = base_tokenizer\n",
    "            # Set token IDs \n",
    "            self.pad_token_id = self.tokenizer.token_to_id(\"<pad>\") if self.tokenizer.token_to_id(\"<pad>\") is not None else self.tokenizer.token_to_id(\"<|endoftext|>\")\n",
    "            self.eos_token_id = self.tokenizer.token_to_id(\"<|endoftext|>\")\n",
    "            self.bos_token_id = self.tokenizer.token_to_id(\"<|im_start|>\")\n",
    "            \n",
    "            # Set string attributes\n",
    "            self.bos_token = \"<|im_start|>\" \n",
    "            self.eos_token = \"<|endoftext|>\"\n",
    "            self.pad_token = \"<pad>\" if self.tokenizer.token_to_id(\"<pad>\") is not None else \"<|endoftext|>\"\n",
    "            self.unk_token = \"<unk>\"\n",
    "            \n",
    "            self.vocab_size = self.tokenizer.get_vocab_size()\n",
    "        \n",
    "        def __call__(self, text, max_length=None, padding=None, truncation=None, return_tensors=None):\n",
    "            # Implement basic functionality of AutoTokenizer.__call__\n",
    "            if isinstance(text, list):\n",
    "                encodings = [self.tokenizer.encode(t) for t in text]\n",
    "            else:\n",
    "                encodings = [self.tokenizer.encode(text)]\n",
    "            \n",
    "            # Convert encodings to lists of IDs if they aren't already\n",
    "            token_ids = []\n",
    "            for enc in encodings:\n",
    "                if hasattr(enc, 'ids'):\n",
    "                    token_ids.append(enc.ids)\n",
    "                else:\n",
    "                    # If encode returns a list directly, use it as is\n",
    "                    token_ids.append(enc)\n",
    "            \n",
    "            # Handle max_length and padding\n",
    "            if max_length is not None:\n",
    "                if truncation:\n",
    "                    token_ids = [ids[:max_length] for ids in token_ids]\n",
    "                if padding == \"max_length\":\n",
    "                    token_ids = [ids + [self.pad_token_id] * (max_length - len(ids)) if len(ids) < max_length else ids[:max_length] for ids in token_ids]\n",
    "                        \n",
    "            # Convert to tensors if requested\n",
    "            if return_tensors == 'pt':\n",
    "                import torch\n",
    "                input_ids = torch.tensor(token_ids)\n",
    "                \n",
    "                # Create a proper TokenizerOutput class with a to() method\n",
    "                class TokenizerOutput:\n",
    "                    def __init__(self, input_ids):\n",
    "                        self.input_ids = input_ids\n",
    "                    \n",
    "                    def to(self, device):\n",
    "                        self.input_ids = self.input_ids.to(device)\n",
    "                        return self\n",
    "                \n",
    "                return TokenizerOutput(input_ids)\n",
    "            \n",
    "            return token_ids\n",
    "        \n",
    "        def apply_chat_template(self, conversations, tokenize=True, add_generation_prompt=False, return_tensors=None, max_length=None, truncation=None, padding=None):\n",
    "            \"\"\"Applies a chat template to format conversation messages.\"\"\"\n",
    "            # Define chat template similar to what was used in the original tokenizer\n",
    "            formatted_text = \"\"\n",
    "            for message in conversations:\n",
    "                formatted_text += f\"<|im_start|>{message['role']}\\n{message['content']}<|im_end|>\\n\"\n",
    "            \n",
    "            # Add generation prompt if requested\n",
    "            if add_generation_prompt:\n",
    "                formatted_text += \"<|im_start|>assistant\\n\"\n",
    "            \n",
    "            # Return the string if tokenize=False\n",
    "            if not tokenize:\n",
    "                return formatted_text\n",
    "            \n",
    "            # Otherwise tokenize. Only the tensor path wraps the IDs in an object\n",
    "            # with an .input_ids attribute, so branch on return_tensors here.\n",
    "            outputs = self(\n",
    "                formatted_text,\n",
    "                max_length=max_length,\n",
    "                padding=padding,\n",
    "                truncation=truncation,\n",
    "                return_tensors=return_tensors\n",
    "            )\n",
    "            return outputs.input_ids if return_tensors == 'pt' else outputs\n",
    "\n",
    "        def encode(self, text, add_special_tokens=True):\n",
    "            return self.tokenizer.encode(text).ids\n",
    "        \n",
    "        def decode(self, token_ids, skip_special_tokens=False):\n",
    "            if hasattr(token_ids, 'tolist'):  # If it's a tensor\n",
    "                token_ids = token_ids.tolist()\n",
    "            return self.tokenizer.decode(token_ids)\n",
    "        \n",
    "        def get_vocab_size(self):\n",
    "            return self.tokenizer.get_vocab_size()\n",
    "        \n",
    "        def token_to_id(self, token):\n",
    "            return self.tokenizer.token_to_id(token)\n",
    "            \n",
    "    # Wrap the tokenizer with our callable wrapper\n",
    "    tokenizer = CallableTokenizerWrapper(trained_hf_tokenizer)\n",
    "    DEMO_VOCAB_SIZE_FINAL = tokenizer.vocab_size\n",
    "    \n",
    "    logger(f\"Fallback Tokenizer - Vocab Size: {tokenizer.vocab_size}\")\n",
    "    logger(f\"Fallback Tokenizer - PAD token: '{tokenizer.pad_token}', ID: {tokenizer.pad_token_id}\")\n",
    "    logger(f\"Fallback Tokenizer - EOS token: '{tokenizer.eos_token}', ID: {tokenizer.eos_token_id}\")\n",
    "    logger(f\"Fallback Tokenizer - BOS token: '{tokenizer.bos_token}', ID: {tokenizer.bos_token_id}\")"
   ]
  },
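  {
   "cell_type": "markdown",
   "id": "a1b2c3d0",
   "metadata": {},
   "source": [
    "The wrapper's pad-and-truncate step can be sketched in isolation. Below is a minimal, hypothetical `pad_or_truncate` helper (not one of the notebook's classes) mirroring the same logic:\n",
    "\n",
    "```python\n",
    "def pad_or_truncate(ids, max_length, pad_id):\n",
    "    # Truncate sequences longer than max_length; right-pad shorter ones.\n",
    "    if len(ids) >= max_length:\n",
    "        return ids[:max_length]\n",
    "    return ids + [pad_id] * (max_length - len(ids))\n",
    "\n",
    "assert pad_or_truncate([5, 6, 7], 5, 0) == [5, 6, 7, 0, 0]\n",
    "assert pad_or_truncate([1, 2, 3, 4, 5, 6], 4, 0) == [1, 2, 3, 4]\n",
    "```"
   ]
  },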
  {
   "cell_type": "markdown",
   "id": "43ce3a24",
   "metadata": {},
   "source": [
    "#### 1.4.2 `DemoChatDataset` for SFT and Reasoning\n",
    "This dataset class handles conversational data. It uses the tokenizer's `apply_chat_template` method to format the input, then builds a loss mask so the model is trained only on the assistant's responses (including any `<think>` or `<answer>` tags within the assistant's turn)."
   ]
  },
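  {
   "cell_type": "markdown",
   "id": "b2c3d4e1",
   "metadata": {},
   "source": [
    "Before looking at the masking code, it helps to see the string the chat template produces. A minimal sketch, assuming the `<|im_start|>`/`<|im_end|>` template used by our tokenizer:\n",
    "\n",
    "```python\n",
    "def format_chat(conversations, add_generation_prompt=False):\n",
    "    # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.\n",
    "    text = \"\"\n",
    "    for m in conversations:\n",
    "        text += f\"<|im_start|>{m['role']}\\n{m['content']}<|im_end|>\\n\"\n",
    "    if add_generation_prompt:\n",
    "        text += \"<|im_start|>assistant\\n\"\n",
    "    return text\n",
    "\n",
    "demo = format_chat([{'role': 'user', 'content': 'Hi'}], add_generation_prompt=True)\n",
    "assert demo == \"<|im_start|>user\\nHi<|im_end|>\\n<|im_start|>assistant\\n\"\n",
    "```\n",
    "\n",
    "The loss mask then marks only the tokens falling inside the assistant's `<|im_start|>assistant ... <|im_end|>` span."
   ]
  },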
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "e5b7f44b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Testing DemoChatDataset for SFT...\n",
      "[LOG]: Loading chat data from: ./dataset_notebook_scratch\\sft_data.jsonl\n",
      "[LOG]: Loaded 3 chat samples.\n",
      "[LOG]: Sample X (SFT): torch.Size([1023]), tensor([  1, 364, 202, 432,  15, 224,  75,  82,  90, 265,  85,  72, 377,  34,\n",
      "          2, 202,   1, 434, 202,  44])...\n",
      "[LOG]: Sample Y (SFT): torch.Size([1023]), tensor([364, 202, 432,  15, 224,  75,  82,  90, 265,  85,  72, 377,  34,   2,\n",
      "        202,   1, 434, 202,  44, 265])...\n",
      "[LOG]: Sample Mask (SFT): torch.Size([1023]), tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1])...\n",
      "[LOG]: Decoded SFT sample with mask applied (showing Y tokens where mask=1):\n",
      "I am doing well, thank you! How can I help you today?\n",
      "[LOG]: Full SFT sample decoded:\n",
      "user\n",
      "Hello, how are you?\n",
      "assistant\n",
      "I am doing well, thank you! How can I help you today?\n",
      "\n",
      "[LOG]: Testing DemoChatDataset for Reasoning...\n",
      "[LOG]: Loading chat data from: ./dataset_notebook_scratch\\reasoning_data.jsonl\n",
      "[LOG]: Loaded 2 chat samples.\n",
      "[LOG]: Sample X (Reasoning): torch.Size([1023]), tensor([  1, 364, 202,  44,  73, 224,  44, 224,  75,  68,  89,  72, 224,  22,\n",
      "        265,  83,  83,  79,  72,  86, 265,  81,  71, 224,  72, 268, 224,  20,\n",
      "         15, 224])...\n",
      "[LOG]: Sample Y (Reasoning): torch.Size([1023]), tensor([364, 202,  44,  73, 224,  44, 224,  75,  68,  89,  72, 224,  22, 265,\n",
      "         83,  83,  79,  72,  86, 265,  81,  71, 224,  72, 268, 224,  20,  15,\n",
      "        224,  75])...\n",
      "[LOG]: Sample Mask (Reasoning): torch.Size([1023]), tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
      "        0, 0, 0, 0, 0, 0])...\n",
      "[LOG]: Decoded Reasoning sample with mask applied (showing Y tokens where mask=1):\n",
      "<think>The user starts with 3 apples. The user eats 1 apple. This means 1 apple is subtracted from the initial amount. So, 3 - 1 = 2.</think><answer>You have 2 apples left.</answer>\n",
      "[LOG]: Full Reasoning sample decoded:\n",
      "user\n",
      "If I have 3 apples and eat 1, how many are left?\n",
      "assistant\n",
      "<think>The user starts with 3 apples. The user eats 1 apple. This means 1 apple is subtracted from the initial amount. So, 3 - 1 = 2.</think><answer>You have 2 apples left.</answer>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "class DemoChatDataset(Dataset):\n",
    "    \"\"\"Dataset for SFT and Reasoning. Uses chat templates and masks non-assistant tokens.\"\"\"\n",
    "    def __init__(self, file_path: str, tokenizer, max_length: int):\n",
    "        self.tokenizer = tokenizer\n",
    "        self.max_length = max_length\n",
    "        self.samples = []\n",
    "        logger(f\"Loading chat data from: {file_path}\")\n",
    "        with open(file_path, 'r', encoding='utf-8') as f:\n",
    "            for line in f:\n",
    "                self.samples.append(json.loads(line.strip())['conversations'])\n",
    "        logger(f\"Loaded {len(self.samples)} chat samples.\")\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.samples)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        conversations = self.samples[idx]\n",
    "        \n",
    "        # Tokenize the full conversation using the chat template\n",
    "        # add_generation_prompt=False because the assistant's full response is in the data\n",
    "        input_ids = self.tokenizer.apply_chat_template(\n",
    "            conversations, \n",
    "            tokenize=True, \n",
    "            add_generation_prompt=False, \n",
    "            return_tensors=\"pt\",\n",
    "            max_length=self.max_length,\n",
    "            truncation=True,\n",
    "            padding=\"max_length\"\n",
    "        ).squeeze(0)\n",
    "        \n",
    "        # Create loss mask: only train on assistant's tokens\n",
    "        loss_mask = torch.zeros_like(input_ids, dtype=torch.long)\n",
    "        \n",
    "        # Identifying the exact assistant spans requires care. Rather than re-tokenizing\n",
    "        # each turn, we scan the token IDs for the assistant-start marker sequence and\n",
    "        # mark everything up to (and including) the closing <|im_end|> as trainable.\n",
    "        bos_assistant_ids = self.tokenizer.encode(\"<|im_start|>assistant\\n\", add_special_tokens=False)\n",
    "        eos_ids = self.tokenizer.encode(\"<|im_end|>\", add_special_tokens=False)\n",
    "        \n",
    "        i = 0\n",
    "        input_ids_list = input_ids.tolist()\n",
    "        while i < len(input_ids_list):\n",
    "            # Check for assistant start sequence\n",
    "            if i + len(bos_assistant_ids) <= len(input_ids_list) and \\\n",
    "               input_ids_list[i : i + len(bos_assistant_ids)] == bos_assistant_ids:\n",
    "                # Found assistant start\n",
    "                start_of_response = i + len(bos_assistant_ids)\n",
    "                # Find corresponding EOS\n",
    "                end_of_response_marker = -1\n",
    "                j = start_of_response\n",
    "                while j < len(input_ids_list):\n",
    "                    if j + len(eos_ids) <= len(input_ids_list) and \\\n",
    "                       input_ids_list[j : j + len(eos_ids)] == eos_ids:\n",
    "                        end_of_response_marker = j\n",
    "                        break\n",
    "                    j += 1\n",
    "                \n",
    "                if end_of_response_marker != -1:\n",
    "                    # Mark tokens from start of response up to and including the EOS for loss\n",
    "                    loss_mask[start_of_response : end_of_response_marker + len(eos_ids)] = 1\n",
    "                    i = end_of_response_marker + len(eos_ids) # Move past this assistant block\n",
    "                    continue \n",
    "                else: # No EOS found, mask till end (might be truncated)\n",
    "                    loss_mask[start_of_response:] = 1\n",
    "                    break \n",
    "            i += 1\n",
    "\n",
    "        loss_mask[input_ids == self.tokenizer.pad_token_id] = 0 # Don't learn on padding\n",
    "\n",
    "        X = input_ids[:-1]\n",
    "        Y = input_ids[1:]\n",
    "        mask_for_loss_calculation = loss_mask[1:]\n",
    "        \n",
    "        return X, Y, mask_for_loss_calculation\n",
    "\n",
    "logger(\"Testing DemoChatDataset for SFT...\")\n",
    "try:\n",
    "    test_sft_ds = DemoChatDataset(sft_file_path, tokenizer, DEMO_MAX_SEQ_LEN)\n",
    "    X_sft_sample, Y_sft_sample, mask_sft_sample = test_sft_ds[0]\n",
    "    logger(f\"Sample X (SFT): {X_sft_sample.shape}, {X_sft_sample[:20]}...\")\n",
    "    logger(f\"Sample Y (SFT): {Y_sft_sample.shape}, {Y_sft_sample[:20]}...\")\n",
    "    logger(f\"Sample Mask (SFT): {mask_sft_sample.shape}, {mask_sft_sample[:20]}...\")\n",
    "    full_sft_ids = torch.cat([X_sft_sample[:1], Y_sft_sample], dim=0)\n",
    "    logger(f\"Decoded SFT sample with mask applied (showing Y tokens where mask=1):\\n{tokenizer.decode(Y_sft_sample[mask_sft_sample.bool()])}\")\n",
    "    logger(f\"Full SFT sample decoded:\\n{tokenizer.decode(full_sft_ids)}\")\n",
    "except Exception as e:\n",
    "    logger(f\"Error testing DemoChatDataset: {e}. Tokenizer or chat template might need adjustment.\")\n",
    "\n",
    "logger(\"Testing DemoChatDataset for Reasoning...\")\n",
    "try:\n",
    "    test_reasoning_ds = DemoChatDataset(reasoning_file_path, tokenizer, DEMO_MAX_SEQ_LEN)\n",
    "    X_rsn_sample, Y_rsn_sample, mask_rsn_sample = test_reasoning_ds[0]\n",
    "    logger(f\"Sample X (Reasoning): {X_rsn_sample.shape}, {X_rsn_sample[:30]}...\")\n",
    "    logger(f\"Sample Y (Reasoning): {Y_rsn_sample.shape}, {Y_rsn_sample[:30]}...\")\n",
    "    logger(f\"Sample Mask (Reasoning): {mask_rsn_sample.shape}, {mask_rsn_sample[:30]}...\")\n",
    "    full_rsn_ids = torch.cat([X_rsn_sample[:1], Y_rsn_sample], dim=0)\n",
    "    logger(f\"Decoded Reasoning sample with mask applied (showing Y tokens where mask=1):\\n{tokenizer.decode(Y_rsn_sample[mask_rsn_sample.bool()])}\")\n",
    "    logger(f\"Full Reasoning sample decoded:\\n{tokenizer.decode(full_rsn_ids)}\")\n",
    "except Exception as e:\n",
    "    logger(f\"Error testing DemoChatDataset for Reasoning: {e}. Ensure tokenizer and chat template are correctly set.\")"
   ]
  },
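  {
   "cell_type": "markdown",
   "id": "c3d4e5f2",
   "metadata": {},
   "source": [
    "The span-scanning loop above can be distilled into a few lines of plain Python. A toy re-implementation (a hypothetical helper using small integers as stand-in token IDs) of the same subsequence matching:\n",
    "\n",
    "```python\n",
    "def assistant_loss_mask(ids, start_seq, end_seq):\n",
    "    # Mark tokens after each assistant-start marker, up to and including the end marker.\n",
    "    mask = [0] * len(ids)\n",
    "    i = 0\n",
    "    while i < len(ids):\n",
    "        if ids[i:i + len(start_seq)] == start_seq:\n",
    "            s = i + len(start_seq)\n",
    "            j = s\n",
    "            while j < len(ids) and ids[j:j + len(end_seq)] != end_seq:\n",
    "                j += 1\n",
    "            # If no end marker is found (truncated sample), mask to the end.\n",
    "            e = j + len(end_seq) if j < len(ids) else len(ids)\n",
    "            for k in range(s, e):\n",
    "                mask[k] = 1\n",
    "            i = e\n",
    "            continue\n",
    "        i += 1\n",
    "    return mask\n",
    "\n",
    "# Toy IDs: 9 = assistant-start marker, 8 = end marker.\n",
    "assert assistant_loss_mask([1, 2, 9, 5, 6, 8, 1], [9], [8]) == [0, 0, 0, 1, 1, 1, 0]\n",
    "```"
   ]
  },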
  {
   "cell_type": "markdown",
   "id": "4cce6e61",
   "metadata": {},
   "source": [
    "**What we've done:**\n",
    "We have prepared three sample datasets (`.jsonl` files) for pretraining, SFT, and reasoning. We also defined two PyTorch `Dataset` classes: `DemoCorpusDataset` for pretraining, and `DemoChatDataset` for both SFT and reasoning (as the core data structure and masking logic for assistant responses are similar). We've tested these dataset classes to see example outputs. The `tokenizer` used here is the one we trained in Part 1."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ab2ad9f",
   "metadata": {},
   "source": [
    "## Part 3: Model Architecture (Self-Contained `DemoLLM`)\n",
    "We will now implement the Transformer-based LLM architecture from scratch. This includes:\n",
    "- `DemoLLMConfig`: Configuration class.\n",
    "- `RMSNorm`: Root Mean Square Layer Normalization.\n",
    "- `RotaryEmbedding`: Rotary Positional Embeddings (RoPE).\n",
    "- `Attention`: Multi-head attention with Grouped Query Attention (GQA) support.\n",
    "- `FeedForward`: MLP with SwiGLU activation.\n",
    "- `DemoTransformerBlock`: A single layer of the Transformer.\n",
    "- `DemoLLMModel`: The stack of Transformer blocks.\n",
    "- `DemoLLMForCausalLM`: The full model with a language modeling head for next-token prediction.\n",
    "\n",
    "The implementation will be inspired by modern LLM designs but simplified."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "996da3f5",
   "metadata": {},
   "source": [
    "### 3.1 `DemoLLMConfig` Class (Defined earlier in 0.3, re-shown for context)\n",
    "**Theory:** The configuration class holds all hyperparameters that define the model's architecture, such as vocabulary size, number of layers, hidden dimension size, number of attention heads, etc. It's good practice to have a dedicated config class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "8b04008f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoLLMConfig defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoLLMConfig(PretrainedConfig):\n",
    "    model_type = \"demo_llm\" # Generic name\n",
    "\n",
    "    def __init__(\n",
    "            self,\n",
    "            vocab_size: int = DEMO_VOCAB_SIZE_FINAL, # Use the actual vocab size from trained tokenizer\n",
    "            hidden_size: int = DEMO_HIDDEN_SIZE,\n",
    "            intermediate_size: int = DEMO_INTERMEDIATE_SIZE,\n",
    "            num_hidden_layers: int = DEMO_NUM_LAYERS,\n",
    "            num_attention_heads: int = DEMO_NUM_ATTENTION_HEADS,\n",
    "            num_key_value_heads: Optional[int] = DEMO_NUM_KV_HEADS, \n",
    "            hidden_act: str = \"silu\",\n",
    "            max_position_embeddings: int = DEMO_MAX_SEQ_LEN,\n",
    "            rms_norm_eps: float = 1e-5,\n",
    "            rope_theta: float = 10000.0,\n",
    "            bos_token_id: int = 1, # Will be updated from tokenizer if possible\n",
    "            eos_token_id: int = 2, # Will be updated from tokenizer if possible\n",
    "            pad_token_id: Optional[int] = None, # Will be updated from tokenizer\n",
    "            dropout: float = 0.0,\n",
    "            use_cache: bool = True,\n",
    "            flash_attn: bool = True, \n",
    "            **kwargs\n",
    "    ):\n",
    "        self.vocab_size = vocab_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.intermediate_size = intermediate_size\n",
    "        self.num_hidden_layers = num_hidden_layers\n",
    "        self.num_attention_heads = num_attention_heads\n",
    "        self.num_key_value_heads = num_key_value_heads if num_key_value_heads is not None else num_attention_heads\n",
    "        self.head_dim = self.hidden_size // self.num_attention_heads\n",
    "        if self.num_attention_heads % self.num_key_value_heads != 0:\n",
    "             raise ValueError(f\"num_attention_heads ({self.num_attention_heads}) must be divisible by num_key_value_heads ({self.num_key_value_heads})\")\n",
    "        self.hidden_act = hidden_act\n",
    "        self.max_position_embeddings = max_position_embeddings\n",
    "        self.rms_norm_eps = rms_norm_eps\n",
    "        self.rope_theta = rope_theta\n",
    "        self.dropout = dropout\n",
    "        self.use_cache = use_cache\n",
    "        self.flash_attn = flash_attn\n",
    "        \n",
    "        # Update BOS/EOS/PAD from tokenizer if available\n",
    "        if tokenizer is not None and hasattr(tokenizer, 'bos_token_id') and tokenizer.bos_token_id is not None:\n",
    "            bos_token_id = tokenizer.bos_token_id\n",
    "        if tokenizer is not None and hasattr(tokenizer, 'eos_token_id') and tokenizer.eos_token_id is not None:\n",
    "            eos_token_id = tokenizer.eos_token_id\n",
    "        if tokenizer is not None and hasattr(tokenizer, 'pad_token_id') and tokenizer.pad_token_id is not None:\n",
    "            pad_token_id = tokenizer.pad_token_id\n",
    "        else: # Default pad to eos if not defined\n",
    "            pad_token_id = eos_token_id \n",
    "            \n",
    "        super().__init__(\n",
    "            bos_token_id=bos_token_id,\n",
    "            eos_token_id=eos_token_id,\n",
    "            pad_token_id=pad_token_id,\n",
    "            **kwargs,\n",
    "        )\n",
    "logger(\"DemoLLMConfig defined.\")"
   ]
  },
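  {
   "cell_type": "markdown",
   "id": "d4e5f603",
   "metadata": {},
   "source": [
    "To make the derived quantities concrete, here is the config arithmetic on illustrative numbers (these values are assumptions for the example, not the notebook's `DEMO_*` constants):\n",
    "\n",
    "```python\n",
    "hidden_size, num_attention_heads, num_key_value_heads = 512, 8, 2\n",
    "\n",
    "head_dim = hidden_size // num_attention_heads  # dimension per head\n",
    "num_kv_groups = num_attention_heads // num_key_value_heads  # Q heads sharing each KV head\n",
    "\n",
    "assert num_attention_heads % num_key_value_heads == 0  # the config's divisibility check\n",
    "assert head_dim == 64 and num_kv_groups == 4\n",
    "```"
   ]
  },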
  {
   "cell_type": "markdown",
   "id": "1a6b16ff",
   "metadata": {},
   "source": [
    "### 3.2 RMS Normalization\n",
    "**Theory:** RMSNorm is a variant of Layer Normalization. Instead of subtracting the mean, it only re-scales the activations by their root mean square. This simplifies the computation and has been found to be effective in Transformer models.\n",
    "Formula: `output = (x / sqrt(mean(x^2) + eps)) * weight`"
   ]
  },
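  {
   "cell_type": "markdown",
   "id": "e5f60714",
   "metadata": {},
   "source": [
    "A quick numeric check of the formula in plain Python, as a scalar sketch of what `DemoRMSNorm` computes per hidden vector:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def rms_norm(x, weight, eps=1e-5):\n",
    "    # output = (x / sqrt(mean(x^2) + eps)) * weight\n",
    "    ms = sum(v * v for v in x) / len(x)\n",
    "    scale = 1.0 / math.sqrt(ms + eps)\n",
    "    return [v * scale * w for v, w in zip(x, weight)]\n",
    "\n",
    "out = rms_norm([3.0, 4.0], [1.0, 1.0])\n",
    "# mean(x^2) = (9 + 16) / 2 = 12.5, so each value is divided by sqrt(12.5) ~ 3.536\n",
    "assert abs(out[0] - 3.0 / math.sqrt(12.5)) < 1e-3\n",
    "assert abs(out[1] - 4.0 / math.sqrt(12.5)) < 1e-3\n",
    "```"
   ]
  },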
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "21ae495b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoRMSNorm defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoRMSNorm(nn.Module):\n",
    "    def __init__(self, dim: int, eps: float = 1e-5):\n",
    "        super().__init__()\n",
    "        self.eps = eps\n",
    "        self.weight = nn.Parameter(torch.ones(dim))\n",
    "\n",
    "    def _norm(self, x):\n",
    "        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)\n",
    "\n",
    "    def forward(self, x):\n",
    "        output_dtype = x.dtype\n",
    "        x = x.to(torch.float32) # Calculate in float32 for stability\n",
    "        output = self._norm(x)\n",
    "        return (output * self.weight).to(output_dtype)\n",
    "logger(\"DemoRMSNorm defined.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03f2a47e",
   "metadata": {},
   "source": [
    "### 3.3 Rotary Positional Embeddings (RoPE)\n",
    "**Theory:** RoPE applies rotations to query and key vectors based on their absolute positions. This injects positional information in a relative way, as the rotation applied depends on the token's position, and the dot product between rotated query and key vectors inherently captures relative positional differences. It avoids adding separate positional embedding vectors to the input.\n",
    "Each head dimension `d` is conceptually split into `d/2` pairs. For a position `m` and a pair `i`, a rotation matrix `R_m,i` is applied.\n",
    "`R_m,i = [[cos(m*theta_i), -sin(m*theta_i)], [sin(m*theta_i), cos(m*theta_i)]]`\n",
    "where `theta_i = 10000^(-2i/d)`."
   ]
  },
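  {
   "cell_type": "markdown",
   "id": "f6071825",
   "metadata": {},
   "source": [
    "The \"relative\" property can be verified on a single 2-D pair with complex arithmetic, the same trick `RotaryEmbedding` uses below (the value of `theta` here is an arbitrary assumption for illustration):\n",
    "\n",
    "```python\n",
    "import cmath\n",
    "\n",
    "theta = 0.5  # one frequency theta_i\n",
    "q, k = complex(1.0, 2.0), complex(0.5, -1.0)  # one query/key pair\n",
    "\n",
    "def rotate(x, pos):\n",
    "    # RoPE on one pair: multiply by e^(i * pos * theta)\n",
    "    return x * cmath.exp(1j * pos * theta)\n",
    "\n",
    "def score(m, n):\n",
    "    # Dot product of rotated q (position m) and k (position n)\n",
    "    return (rotate(q, m) * rotate(k, n).conjugate()).real\n",
    "\n",
    "# The score depends only on the relative offset m - n:\n",
    "assert abs(score(5, 3) - score(7, 5)) < 1e-9\n",
    "```"
   ]
  },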
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "5a7a0430",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: RotaryEmbedding defined.\n"
     ]
    }
   ],
   "source": [
    "class RotaryEmbedding(nn.Module):\n",
    "    def __init__(self, dim: int, max_seq_len: int, theta: float = 10000.0, device=None):\n",
    "        super().__init__()\n",
    "        # Inverse frequencies theta_i = theta^(-2i/dim): shape (dim/2,)\n",
    "        freqs = 1.0 / (theta ** (torch.arange(0, dim, 2, device=device).float() / dim))\n",
    "        t = torch.arange(max_seq_len, device=device, dtype=torch.float32)\n",
    "        freqs = torch.outer(t, freqs)  # angles m * theta_i: shape (max_seq_len, dim/2)\n",
    "        \n",
    "        # freqs_cis: (max_seq_len, dim/2) holding complex numbers cos(m*theta_i) + j*sin(m*theta_i)\n",
    "        self.register_buffer(\"freqs_cis\", torch.polar(torch.ones_like(freqs), freqs), persistent=False)\n",
    "        logger(f\"Initialized RotaryEmbedding with dim={dim}, max_seq_len={max_seq_len}\")\n",
    "\n",
    "    def forward(self, xq: torch.Tensor, xk: torch.Tensor, seq_len: int):\n",
    "        # xq, xk: (bsz, num_heads, seq_len, head_dim)\n",
    "        # freqs_cis: (max_seq_len, head_dim/2) -> slice to (seq_len, head_dim/2)\n",
    "        \n",
    "        # Reshape xq, xk to (bsz, num_heads, seq_len, head_dim/2, 2) to treat pairs for complex mul\n",
    "        xq_r = xq.float().reshape(*xq.shape[:-1], -1, 2)\n",
    "        xk_r = xk.float().reshape(*xk.shape[:-1], -1, 2)\n",
    "        \n",
    "        # Convert to complex: (bsz, num_heads, seq_len, head_dim/2)\n",
    "        xq_c = torch.view_as_complex(xq_r)\n",
    "        xk_c = torch.view_as_complex(xk_r)\n",
    "        \n",
    "        # Slice freqs_cis for the current sequence length\n",
    "        # freqs_cis_pos: (seq_len, head_dim/2)\n",
    "        freqs_cis_pos = self.freqs_cis[:seq_len]\n",
    "        \n",
    "        # Reshape freqs_cis for broadcasting: (1, 1, seq_len, head_dim/2)\n",
    "        freqs_cis_reshaped = freqs_cis_pos.unsqueeze(0).unsqueeze(0) \n",
    "        \n",
    "        # Apply rotation: q'_c = q_c * freqs_cis_pos\n",
    "        xq_out_c = xq_c * freqs_cis_reshaped\n",
    "        xk_out_c = xk_c * freqs_cis_reshaped\n",
    "        \n",
    "        # Convert back to real and reshape: (bsz, num_heads, seq_len, head_dim)\n",
    "        xq_out = torch.view_as_real(xq_out_c).flatten(3)\n",
    "        xk_out = torch.view_as_real(xk_out_c).flatten(3)\n",
    "        \n",
    "        return xq_out.type_as(xq), xk_out.type_as(xk)\n",
    "\n",
    "logger(\"RotaryEmbedding defined.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "461a3208",
   "metadata": {},
   "source": [
    "### 3.4 Attention Mechanism\n",
    "**Theory:** Attention allows the model to weigh the importance of different tokens in the input sequence when producing an output for a given token. Multi-Head Attention (MHA) performs attention in parallel across several \"heads,\" allowing the model to focus on different aspects of the input simultaneously.\n",
    "   - **Query (Q), Key (K), Value (V):** Input hidden states are linearly projected to Q, K, V vectors.\n",
    "   - **Scaled Dot-Product Attention:** `Attention(Q, K, V) = softmax( (Q @ K^T) / sqrt(d_k) ) @ V`\n",
    "   - **Grouped Query Attention (GQA):** A variation where multiple Q heads share the same K and V heads to reduce computation and memory for KV cache during inference. `num_key_value_heads` will be less than `num_attention_heads`.\n",
    "   - **Causal Masking:** For decoder-only models, a causal mask is applied to prevent tokens from attending to future tokens.\n",
    "   - **Flash Attention:** An optimized implementation of attention that reduces memory reads/writes, significantly speeding up training and inference (if available and conditions are met)."
   ]
  },
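  {
   "cell_type": "markdown",
   "id": "07182936",
   "metadata": {},
   "source": [
    "The scaled dot-product formula in miniature: a single-head, no-mask sketch in plain Python (not the notebook's `DemoAttention`, which adds GQA, RoPE, and caching):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def softmax(xs):\n",
    "    m = max(xs)\n",
    "    es = [math.exp(x - m) for x in xs]\n",
    "    s = sum(es)\n",
    "    return [e / s for e in es]\n",
    "\n",
    "def attention(Q, K, V):\n",
    "    # softmax(Q K^T / sqrt(d_k)) V, one head, no causal mask\n",
    "    d_k = len(Q[0])\n",
    "    out = []\n",
    "    for q in Q:\n",
    "        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]\n",
    "        w = softmax(scores)\n",
    "        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])\n",
    "    return out\n",
    "\n",
    "# One query attending over two keys: the softmax weights sum to 1,\n",
    "# so the output is a convex combination of the value rows.\n",
    "out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])\n",
    "assert 0.5 < out[0][0] < 1.0\n",
    "```"
   ]
  },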
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "989be918",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoAttention defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoAttention(nn.Module):\n",
    "    def __init__(self, config: DemoLLMConfig):\n",
    "        super().__init__()\n",
    "        self.config = config\n",
    "        self.hidden_size = config.hidden_size\n",
    "        self.num_q_heads = config.num_attention_heads\n",
    "        self.num_kv_heads = config.num_key_value_heads\n",
    "        self.num_kv_groups = self.num_q_heads // self.num_kv_heads # Num Q heads per KV head\n",
    "        self.head_dim = config.head_dim\n",
    "\n",
    "        self.q_proj = nn.Linear(self.hidden_size, self.num_q_heads * self.head_dim, bias=False)\n",
    "        self.k_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.head_dim, bias=False)\n",
    "        self.v_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.head_dim, bias=False)\n",
    "        self.o_proj = nn.Linear(self.num_q_heads * self.head_dim, self.hidden_size, bias=False)\n",
    "        \n",
    "        self.rotary_emb = RotaryEmbedding(\n",
    "            self.head_dim, \n",
    "            config.max_position_embeddings, \n",
    "            theta=config.rope_theta,\n",
    "            device=DEVICE # Initialize on target device\n",
    "        )\n",
    "        self.flash_available = hasattr(F, 'scaled_dot_product_attention') and config.flash_attn\n",
    "\n",
    "    def _repeat_kv(self, x: torch.Tensor, n_rep: int) -> torch.Tensor:\n",
    "        bs, num_kv_heads, slen, head_dim = x.shape\n",
    "        if n_rep == 1:\n",
    "            return x\n",
    "        return (\n",
    "            x[:, :, None, :, :]\n",
    "            .expand(bs, num_kv_heads, n_rep, slen, head_dim)\n",
    "            .reshape(bs, num_kv_heads * n_rep, slen, head_dim)\n",
    "        )\n",
    "\n",
    "    def forward(\n",
    "        self,\n",
    "        hidden_states: torch.Tensor, # (bsz, q_len, hidden_size)\n",
    "        attention_mask: Optional[torch.Tensor] = None, # (bsz, 1, q_len, kv_len) for additive mask\n",
    "        position_ids: Optional[torch.LongTensor] = None, # (bsz, q_len) -> Not directly used if RoPE applied based on seq_len\n",
    "        past_key_value: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,\n",
    "        use_cache: bool = False,\n",
    "    ) -> Tuple[torch.Tensor, Optional[Tuple[torch.Tensor, torch.Tensor]]]:\n",
    "        bsz, q_len, _ = hidden_states.shape\n",
    "\n",
    "        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_q_heads, self.head_dim).transpose(1, 2)\n",
    "        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_kv_heads, self.head_dim).transpose(1, 2)\n",
    "        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_kv_heads, self.head_dim).transpose(1, 2)\n",
    "        # query_states: (bsz, num_q_heads, q_len, head_dim)\n",
    "        # key_states/value_states: (bsz, num_kv_heads, q_len, head_dim)\n",
    "\n",
    "        kv_seq_len = q_len # Initially, before considering past_key_value\n",
    "        if past_key_value is not None:\n",
    "            kv_seq_len += past_key_value[0].shape[2] # Add past length\n",
    "        \n",
    "        # Apply RoPE based on current q_len (for new tokens)\n",
    "        # The RotaryEmbedding's forward method expects current seq_len of Q and K\n",
    "        cos, sin = None, None # Not passing these directly, RoPE is self-contained\n",
    "        query_states, key_states = self.rotary_emb(query_states, key_states, seq_len=q_len) # RoPE applied to current q_len\n",
    "\n",
    "        if past_key_value is not None:\n",
    "            # key_states/value_states are for current q_len\n",
    "            # past_key_value[0] is (bsz, num_kv_heads, past_seq_len, head_dim)\n",
    "            key_states = torch.cat([past_key_value[0], key_states], dim=2)\n",
    "            value_states = torch.cat([past_key_value[1], value_states], dim=2)\n",
    "        \n",
    "        if use_cache:\n",
    "            current_key_value = (key_states, value_states)\n",
    "        else:\n",
    "            current_key_value = None\n",
    "        \n",
    "        # Grouped Query Attention: Repeat K and V heads to match Q heads\n",
    "        key_states = self._repeat_kv(key_states, self.num_kv_groups)\n",
    "        value_states = self._repeat_kv(value_states, self.num_kv_groups)\n",
    "        # Now key_states/value_states are (bsz, num_q_heads, kv_seq_len, head_dim)\n",
    "\n",
    "        attn_output = None\n",
    "        # Check for Flash Attention compatibility\n",
    "        # Flash Attn is_causal works when q_len == kv_seq_len for the attention computation itself.\n",
    "        # If past_kv is used, q_len for query_states is for new tokens, kv_seq_len for key_states is total length.\n",
    "        # Flash Attn handles this by taking full K/V and only new Qs.\n",
    "        # The `is_causal` flag in F.sdpa handles masking correctly for decoder style models.\n",
    "        # The main condition for Flash Attn is no explicit additive attention_mask.\n",
    "        can_use_flash = self.flash_available and attention_mask is None\n",
    "        if can_use_flash:\n",
    "            attn_output = F.scaled_dot_product_attention(\n",
    "                query_states, key_states, value_states,\n",
    "                attn_mask=None, # Causal mask handled by is_causal\n",
    "                dropout_p=self.config.dropout if self.training else 0.0,\n",
    "                is_causal= (q_len == kv_seq_len) # Only truly causal if no KV cache or if generating first token\n",
    "                                                # If q_len < kv_seq_len (due to KV cache), is_causal should be False\n",
    "                                                # and an explicit mask would be needed for padding if any.\n",
    "                                                # For simplicity in decoder generation where new_q_len = 1, is_causal=False is fine.\n",
    "                                                # And for training where q_len = kv_seq_len, is_causal=True.\n",
    "                                                # Let's make it always causal for decoder, assuming no padding mask for flash path\n",
    "            )\n",
    "        else:\n",
    "            # Manual attention with causal mask\n",
    "            attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)\n",
    "            \n",
    "            if kv_seq_len > 0: # Avoid mask creation for empty sequences\n",
    "                # Causal mask (triangle mask)\n",
    "                # query_states: (bsz, num_q_heads, q_len, head_dim)\n",
    "                # key_states:   (bsz, num_q_heads, kv_seq_len, head_dim)\n",
    "                # attn_weights: (bsz, num_q_heads, q_len, kv_seq_len)\n",
    "                mask = torch.full((q_len, kv_seq_len), float(\"-inf\"), device=query_states.device)\n",
    "                # For causal, target token j can only attend to source tokens i <= j + (kv_seq_len - q_len)\n",
    "                # where (kv_seq_len - q_len) is the length of the past context.\n",
    "                # If q_len == kv_seq_len (no cache), it's a standard upper triangle.\n",
    "                # If q_len == 1 (generation with cache), it attends to all kv_seq_len.\n",
    "                causal_shift = kv_seq_len - q_len\n",
    "                mask = torch.triu(mask, diagonal=1 + causal_shift) # Corrected causal mask\n",
    "                attn_weights = attn_weights + mask[None, None, :, :] # Add to scores\n",
    "            \n",
    "            if attention_mask is not None: # Additive padding mask\n",
    "                attn_weights = attn_weights + attention_mask\n",
    "                \n",
    "            attn_weights = F.softmax(attn_weights, dim=-1, dtype=torch.float32).type_as(query_states)\n",
    "            if self.config.dropout > 0.0:\n",
    "                attn_weights = F.dropout(attn_weights, p=self.config.dropout, training=self.training)\n",
    "            attn_output = torch.matmul(attn_weights, value_states)\n",
    "\n",
    "        attn_output = attn_output.transpose(1, 2).contiguous().view(bsz, q_len, -1)\n",
    "        attn_output = self.o_proj(attn_output)\n",
    "        return attn_output, current_key_value\n",
    "logger(\"DemoAttention defined.\")"
   ]
  },
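  {
   "cell_type": "markdown",
   "id": "3f9a1c2e",
   "metadata": {},
   "source": [
    "Two shape/mask tricks inside `DemoAttention` can be checked in isolation. A minimal sketch (the shapes here are illustrative, not the notebook's config): `_repeat_kv` duplicates each KV head for its group of query heads, and `torch.triu` with a shifted diagonal reproduces the causal mask when a KV cache holds past context.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# GQA expansion: 2 KV heads shared by 8 query heads (n_rep = 4)\n",
    "bs, num_kv_heads, slen, head_dim = 1, 2, 5, 8\n",
    "n_rep = 4\n",
    "x = torch.randn(bs, num_kv_heads, slen, head_dim)\n",
    "repeated = (\n",
    "    x[:, :, None, :, :]\n",
    "    .expand(bs, num_kv_heads, n_rep, slen, head_dim)\n",
    "    .reshape(bs, num_kv_heads * n_rep, slen, head_dim)\n",
    ")\n",
    "print(repeated.shape)                               # torch.Size([1, 8, 5, 8])\n",
    "print(torch.equal(repeated[0, 0], repeated[0, 3]))  # True: heads 0-3 share KV head 0\n",
    "\n",
    "# Shifted causal mask: q_len=2 new tokens attending over kv_seq_len=5 total\n",
    "q_len, kv_seq_len = 2, 5\n",
    "mask = torch.full((q_len, kv_seq_len), float('-inf'))\n",
    "causal_shift = kv_seq_len - q_len\n",
    "mask = torch.triu(mask, diagonal=1 + causal_shift)\n",
    "print(mask)  # first new token sees positions 0-3, second sees all of 0-4\n",
    "```"
   ]
  },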
  {
   "cell_type": "markdown",
   "id": "9f8ef064",
   "metadata": {},
   "source": [
    "### 3.5 FeedForward Network (MLP)\n",
    "**Theory:** The FeedForward Network (FFN) is applied independently to each token position. It typically consists of two linear transformations with a non-linear activation function in between. Modern LLMs often use SwiGLU, which involves three linear layers and a SiLU (Sigmoid Linear Unit) activation.\n",
    "SwiGLU variant: `down_proj( silu(gate_proj(x)) * up_proj(x) )`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "60c44927",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoFeedForward defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoFeedForward(nn.Module):\n",
    "    def __init__(self, config: DemoLLMConfig):\n",
    "        super().__init__()\n",
    "        self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)\n",
    "        self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)\n",
    "        self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)\n",
    "        self.act_fn = ACT2FN[config.hidden_act] # e.g., SiLU\n",
    "        self.dropout = nn.Dropout(config.dropout)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # This is the SwiGLU formulation: FFN_SwiGLU(x, W, V, W2) = (Swish_1(xW) * xV)W2\n",
    "        # Swish_1(x) = x * sigmoid(beta*x), where beta is often 1 (SiLU)\n",
    "        # Here, gate_proj is W, up_proj is V, down_proj is W2\n",
    "        return self.dropout(self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)))\n",
    "logger(\"DemoFeedForward defined.\")"
   ]
  },
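  {
   "cell_type": "markdown",
   "id": "b7c1d4e2",
   "metadata": {},
   "source": [
    "The SwiGLU formulation can be sketched numerically with toy dimensions (hidden=4, intermediate=6 here are illustrative, not the notebook's config):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "hidden, inter = 4, 6\n",
    "gate_proj = nn.Linear(hidden, inter, bias=False)  # W\n",
    "up_proj = nn.Linear(hidden, inter, bias=False)    # V\n",
    "down_proj = nn.Linear(inter, hidden, bias=False)  # W2\n",
    "\n",
    "x = torch.randn(2, 3, hidden)  # (batch, seq, hidden)\n",
    "# SwiGLU: down_proj( silu(gate_proj(x)) * up_proj(x) )\n",
    "y = down_proj(F.silu(gate_proj(x)) * up_proj(x))\n",
    "print(y.shape)  # torch.Size([2, 3, 4]) -- the FFN preserves hidden_size\n",
    "```"
   ]
  },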
  {
   "cell_type": "markdown",
   "id": "cdd2dc66",
   "metadata": {},
   "source": [
    "### 3.6 Transformer Block (`DemoTransformerBlock`)\n",
    "**Theory:** A Transformer block typically consists of a multi-head self-attention layer followed by a feed-forward network. Layer normalization is applied before each of these sub-layers (Pre-LN), and residual connections are used around each sub-layer.\n",
    "Block Structure:\n",
    "1. `x_norm1 = RMSNorm(x)`\n",
    "2. `attn_out = SelfAttention(x_norm1)`\n",
    "3. `x = x + attn_out` (Residual 1)\n",
    "4. `x_norm2 = RMSNorm(x)`\n",
    "5. `ffn_out = FeedForward(x_norm2)`\n",
    "6. `x = x + ffn_out` (Residual 2)"
   ]
  },
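  {
   "cell_type": "markdown",
   "id": "c2e5f8a1",
   "metadata": {},
   "source": [
    "The six steps above can be traced with stand-in sublayers (an identity \"attention\" and a doubling \"FFN\" are placeholders for illustration; the real block uses `DemoAttention` and `DemoFeedForward`):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def rms_norm(x, eps=1e-5):\n",
    "    # Minimal RMSNorm without a learnable weight, for illustration only\n",
    "    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)\n",
    "\n",
    "attn = lambda x: x      # stand-in for self-attention\n",
    "ffn = lambda x: 2 * x   # stand-in for the feed-forward network\n",
    "\n",
    "x = torch.randn(1, 4, 8)\n",
    "h = x + attn(rms_norm(x))    # steps 1-3: norm, attention, residual\n",
    "out = h + ffn(rms_norm(h))   # steps 4-6: norm, FFN, residual\n",
    "print(out.shape)  # torch.Size([1, 4, 8])\n",
    "```"
   ]
  },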
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "12d68a36",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoTransformerBlock defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoTransformerBlock(nn.Module):\n",
    "    def __init__(self, config: DemoLLMConfig):\n",
    "        super().__init__()\n",
    "        self.self_attn = DemoAttention(config)\n",
    "        self.mlp = DemoFeedForward(config)\n",
    "        self.input_layernorm = DemoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)\n",
    "        self.post_attention_layernorm = DemoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)\n",
    "\n",
    "    def forward(\n",
    "        self,\n",
    "        hidden_states: torch.Tensor,\n",
    "        attention_mask: Optional[torch.Tensor] = None,\n",
    "        position_ids: Optional[torch.LongTensor] = None, # Passed to attention for RoPE\n",
    "        past_key_value: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,\n",
    "        use_cache: bool = False,\n",
    "    ) -> Tuple[torch.Tensor, Optional[Tuple[torch.Tensor, torch.Tensor]]]:\n",
    "        \n",
    "        residual = hidden_states\n",
    "        normed_hidden_states = self.input_layernorm(hidden_states)\n",
    "        \n",
    "        attn_outputs, present_key_value = self.self_attn(\n",
    "            normed_hidden_states,\n",
    "            attention_mask=attention_mask,\n",
    "            position_ids=position_ids, # RoPE handled inside DemoAttention using its internal RotaryEmbedding\n",
    "            past_key_value=past_key_value,\n",
    "            use_cache=use_cache\n",
    "        )\n",
    "        hidden_states = residual + attn_outputs\n",
    "\n",
    "        residual = hidden_states\n",
    "        normed_hidden_states = self.post_attention_layernorm(hidden_states)\n",
    "        feed_forward_hidden_states = self.mlp(normed_hidden_states)\n",
    "        hidden_states = residual + feed_forward_hidden_states\n",
    "        \n",
    "        return hidden_states, present_key_value\n",
    "logger(\"DemoTransformerBlock defined.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b1ea2e0",
   "metadata": {},
   "source": [
    "### 3.7 Main Model (`DemoLLMModel` - stack of blocks)\n",
    "**Theory:** The main LLM model consists of an initial token embedding layer, followed by a stack of Transformer blocks, and a final normalization layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "273d13f3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoLLMModel defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoLLMModel(PreTrainedModel):\n",
    "    config_class = DemoLLMConfig\n",
    "\n",
    "    def __init__(self, config: DemoLLMConfig):\n",
    "        super().__init__(config)\n",
    "        self.config = config\n",
    "        self.padding_idx = config.pad_token_id\n",
    "        self.vocab_size = config.vocab_size\n",
    "\n",
    "        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)\n",
    "        self.layers = nn.ModuleList([DemoTransformerBlock(config) for _ in range(config.num_hidden_layers)])\n",
    "        self.norm = DemoRMSNorm(config.hidden_size, eps=config.rms_norm_eps)\n",
    "        self.dropout = nn.Dropout(config.dropout) # Added dropout after embeddings\n",
    "        self.gradient_checkpointing = False # For simplicity\n",
    "\n",
    "    def forward(\n",
    "        self,\n",
    "        input_ids: torch.LongTensor = None,\n",
    "        attention_mask: Optional[torch.Tensor] = None, # (bsz, seq_len)\n",
    "        position_ids: Optional[torch.LongTensor] = None,\n",
    "        past_key_values: Optional[List[torch.FloatTensor]] = None,\n",
    "        inputs_embeds: Optional[torch.FloatTensor] = None,\n",
    "        use_cache: Optional[bool] = None,\n",
    "        output_attentions: Optional[bool] = None, # Not implemented\n",
    "        output_hidden_states: Optional[bool] = None, # Not implemented\n",
    "        return_dict: Optional[bool] = None,\n",
    "    ) -> Union[Tuple, CausalLMOutputWithPast]: # Adjusted return type\n",
    "        use_cache = use_cache if use_cache is not None else self.config.use_cache\n",
    "        return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n",
    "\n",
    "        if input_ids is not None and inputs_embeds is not None:\n",
    "            raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\n",
    "        elif input_ids is not None:\n",
    "            batch_size, seq_length = input_ids.shape\n",
    "        elif inputs_embeds is not None:\n",
    "            batch_size, seq_length, _ = inputs_embeds.shape\n",
    "        else:\n",
    "            raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\n",
    "\n",
    "        past_key_values_length = 0\n",
    "        if past_key_values is not None:\n",
    "            past_key_values_length = past_key_values[0][0].shape[2] # (bsz, num_kv_heads, seq_len, head_dim)\n",
    "\n",
    "        if position_ids is None:\n",
    "            device = input_ids.device if input_ids is not None else inputs_embeds.device\n",
    "            position_ids = torch.arange(\n",
    "                past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device\n",
    "            )\n",
    "            position_ids = position_ids.unsqueeze(0).view(-1, seq_length)\n",
    "        \n",
    "        if inputs_embeds is None:\n",
    "            inputs_embeds = self.embed_tokens(input_ids)\n",
    "        \n",
    "        hidden_states = self.dropout(inputs_embeds)\n",
    "\n",
    "        # Create attention mask for padding (if any) and causality\n",
    "        # For a decoder, the mask should be causal and also respect padding tokens.\n",
    "        # The shape for additive mask in Attention is (bsz, 1, q_len, kv_len)\n",
    "        # `attention_mask` from input is usually (bsz, seq_len)\n",
    "        _expanded_mask = None\n",
    "        if attention_mask is not None:\n",
    "            # Expand padding mask: (bsz, seq_len) -> (bsz, 1, q_len, kv_len_with_past)\n",
    "            # This can get tricky with KV caching. For this simplified version, \n",
    "            # we assume attention_mask applies to current inputs.\n",
    "            # Causal part is handled in DemoAttention.\n",
    "            # An additive mask for padding would be:\n",
    "            expanded_padding_mask = attention_mask[:, None, None, :].expand(batch_size, 1, seq_length, seq_length + past_key_values_length)\n",
    "            _expanded_mask = torch.zeros_like(expanded_padding_mask, dtype=hidden_states.dtype)\n",
    "            _expanded_mask.masked_fill_(expanded_padding_mask == 0, float(\"-inf\"))\n",
    "        \n",
    "        next_decoder_cache = [] if use_cache else None\n",
    "\n",
    "        for i, decoder_layer in enumerate(self.layers):\n",
    "            past_kv = past_key_values[i] if past_key_values is not None else None\n",
    "            \n",
    "            layer_outputs = decoder_layer(\n",
    "                hidden_states,\n",
    "                attention_mask=_expanded_mask, # Pass the combined mask\n",
    "                position_ids=position_ids, # RoPE will use this implicitly or via seq_len\n",
    "                past_key_value=past_kv,\n",
    "                use_cache=use_cache,\n",
    "            )\n",
    "            hidden_states = layer_outputs[0]\n",
    "            if use_cache:\n",
    "                next_decoder_cache.append(layer_outputs[1])\n",
    "        \n",
    "        hidden_states = self.norm(hidden_states)\n",
    "        \n",
    "        # This model doesn't have MoE, so no aux_loss\n",
    "        if not return_dict:\n",
    "            return tuple(v for v in [hidden_states, next_decoder_cache] if v is not None)\n",
    "        \n",
    "        return BaseModelOutputWithPast(\n",
    "            last_hidden_state=hidden_states,\n",
    "            past_key_values=next_decoder_cache,\n",
    "            hidden_states=None, \n",
    "            attentions=None,\n",
    "        )\n",
    "logger(\"DemoLLMModel defined.\")"
   ]
  },
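  {
   "cell_type": "markdown",
   "id": "d4a7b9c3",
   "metadata": {},
   "source": [
    "The padding-mask expansion used above can be checked on a tiny example (no KV cache, so q_len equals kv_len):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "attention_mask = torch.tensor([[1, 1, 1, 0]])  # last position is padding\n",
    "bsz, kv_len = attention_mask.shape\n",
    "q_len = kv_len\n",
    "expanded = attention_mask[:, None, None, :].expand(bsz, 1, q_len, kv_len)\n",
    "additive = torch.zeros_like(expanded, dtype=torch.float)\n",
    "additive.masked_fill_(expanded == 0, float('-inf'))\n",
    "print(additive.shape)   # torch.Size([1, 1, 4, 4])\n",
    "print(additive[0, 0])   # last column is -inf: no query may attend to padding\n",
    "```"
   ]
  },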
  {
   "cell_type": "markdown",
   "id": "59033263",
   "metadata": {},
   "source": [
    "### 3.8 Causal LM Head Model (`DemoLLMForCausalLM`)\n",
    "**Theory:** This class wraps the `DemoLLMModel` and adds a final linear layer (the Language Modeling head) on top of the Transformer block outputs. This head projects the hidden states to the vocabulary size, producing logits for each token in the vocabulary. It also handles the calculation of the loss for Causal Language Modeling if labels are provided."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "f35d6a98",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: DemoLLMForCausalLM defined.\n"
     ]
    }
   ],
   "source": [
    "class DemoLLMForCausalLM(PreTrainedModel):\n",
    "    config_class = DemoLLMConfig\n",
    "\n",
    "    def __init__(self, config: DemoLLMConfig):\n",
    "        super().__init__(config)\n",
    "        self.model = DemoLLMModel(config)\n",
    "        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n",
    "        # Weight tying is a common practice but optional\n",
    "        # self.model.embed_tokens.weight = self.lm_head.weight \n",
    "        self.post_init() # Initialize weights\n",
    "\n",
    "    def get_input_embeddings(self):\n",
    "        return self.model.embed_tokens\n",
    "\n",
    "    def set_input_embeddings(self, value):\n",
    "        self.model.embed_tokens = value\n",
    "\n",
    "    def get_output_embeddings(self):\n",
    "        return self.lm_head\n",
    "\n",
    "    def set_output_embeddings(self, new_embeddings):\n",
    "        self.lm_head = new_embeddings\n",
    "        \n",
    "    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **kwargs):\n",
    "        # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\n",
    "        if past_key_values:\n",
    "            input_ids = input_ids[:, -1:] # Only take the last token if past_key_values is not None\n",
    "        \n",
    "        position_ids = kwargs.get(\"position_ids\", None)\n",
    "        if attention_mask is not None and position_ids is None:\n",
    "            # create position_ids on the fly for batch generation\n",
    "            position_ids = attention_mask.long().cumsum(-1) - 1\n",
    "            position_ids.masked_fill_(attention_mask == 0, 1)\n",
    "            if past_key_values:\n",
    "                position_ids = position_ids[:, -1].unsqueeze(-1)\n",
    "                \n",
    "        return {\n",
    "            \"input_ids\": input_ids,\n",
    "            \"past_key_values\": past_key_values,\n",
    "            \"use_cache\": kwargs.get(\"use_cache\"),\n",
    "            \"attention_mask\": attention_mask,\n",
    "            \"position_ids\": position_ids,\n",
    "        }\n",
    "\n",
    "    def forward(\n",
    "        self,\n",
    "        input_ids: Optional[torch.LongTensor] = None,\n",
    "        attention_mask: Optional[torch.Tensor] = None,\n",
    "        position_ids: Optional[torch.LongTensor] = None,\n",
    "        past_key_values: Optional[List[torch.FloatTensor]] = None,\n",
    "        inputs_embeds: Optional[torch.FloatTensor] = None,\n",
    "        labels: Optional[torch.LongTensor] = None,\n",
    "        use_cache: Optional[bool] = None,\n",
    "        output_attentions: Optional[bool] = None,\n",
    "        output_hidden_states: Optional[bool] = None,\n",
    "        return_dict: Optional[bool] = None,\n",
    "    ) -> Union[Tuple, CausalLMOutputWithPast]:\n",
    "        return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n",
    "\n",
    "        outputs = self.model(\n",
    "            input_ids=input_ids,\n",
    "            attention_mask=attention_mask,\n",
    "            position_ids=position_ids,\n",
    "            past_key_values=past_key_values,\n",
    "            inputs_embeds=inputs_embeds,\n",
    "            use_cache=use_cache,\n",
    "            output_attentions=output_attentions,\n",
    "            output_hidden_states=output_hidden_states,\n",
    "            return_dict=True, # Internal call to base model should always return dict\n",
    "        )\n",
    "\n",
    "        hidden_states = outputs.last_hidden_state\n",
    "        logits = self.lm_head(hidden_states)\n",
    "        \n",
    "        loss = None\n",
    "        if labels is not None:\n",
    "            # Shift so that tokens < n predict n\n",
    "            shift_logits = logits[..., :-1, :].contiguous()\n",
    "            shift_labels = labels[..., 1:].contiguous()\n",
    "            # Flatten the tokens\n",
    "            loss_fct = nn.CrossEntropyLoss() # Default ignore_index is -100, set if needed\n",
    "            shift_logits = shift_logits.view(-1, self.config.vocab_size)\n",
    "            shift_labels = shift_labels.view(-1)\n",
    "            # Ensure labels are on the same device as logits for loss calculation\n",
    "            shift_labels = shift_labels.to(shift_logits.device)\n",
    "            loss = loss_fct(shift_logits, shift_labels)\n",
    "\n",
    "        if not return_dict:\n",
    "            output = (logits,) + (outputs.past_key_values if use_cache else tuple()) # Keep it simple\n",
    "            return (loss,) + output if loss is not None else output\n",
    "\n",
    "        return CausalLMOutputWithPast(\n",
    "            loss=loss,\n",
    "            logits=logits,\n",
    "            past_key_values=outputs.past_key_values,\n",
    "            hidden_states=outputs.hidden_states,\n",
    "            attentions=outputs.attentions,\n",
    "        )\n",
    "logger(\"DemoLLMForCausalLM defined.\")"
   ]
  },
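  {
   "cell_type": "markdown",
   "id": "e5b8c1d6",
   "metadata": {},
   "source": [
    "The shift-by-one in the loss computation can be sketched with toy tensors: the logits at position t are scored against the label at position t+1, so both are trimmed by one position before flattening:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "vocab, seq = 5, 4\n",
    "logits = torch.randn(1, seq, vocab)\n",
    "labels = torch.tensor([[1, 2, 3, 4]])\n",
    "\n",
    "shift_logits = logits[..., :-1, :].contiguous().view(-1, vocab)  # (3, 5)\n",
    "shift_labels = labels[..., 1:].contiguous().view(-1)             # (3,)\n",
    "loss = nn.CrossEntropyLoss()(shift_logits, shift_labels)\n",
    "print(shift_logits.shape, shift_labels.shape)  # torch.Size([3, 5]) torch.Size([3])\n",
    "```"
   ]
  },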
  {
   "cell_type": "markdown",
   "id": "bffa2f2d",
   "metadata": {},
   "source": [
    "### 3.9 Verify Model Initialization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "85c9c8b8",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "DemoLLMForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.\n",
      "  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes\n",
      "  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).\n",
      "  - If you are not the owner of the model architecture class, please contact the model code owner to update it.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: --- Initial DemoLLM Instance Summary ---\n",
      "[LOG]: Configuration: DemoLLMConfig {\n",
      "  \"_attn_implementation_autoset\": true,\n",
      "  \"bos_token_id\": 1,\n",
      "  \"dropout\": 0.0,\n",
      "  \"eos_token_id\": 0,\n",
      "  \"flash_attn\": true,\n",
      "  \"head_dim\": 64,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 1024,\n",
      "  \"intermediate_size\": 2752,\n",
      "  \"max_position_embeddings\": 1024,\n",
      "  \"model_type\": \"demo_llm\",\n",
      "  \"num_attention_heads\": 16,\n",
      "  \"num_hidden_layers\": 24,\n",
      "  \"num_key_value_heads\": 16,\n",
      "  \"pad_token_id\": 3,\n",
      "  \"rms_norm_eps\": 1e-05,\n",
      "  \"rope_theta\": 10000.0,\n",
      "  \"transformers_version\": \"4.51.3\",\n",
      "  \"use_cache\": true,\n",
      "  \"vocab_size\": 435\n",
      "}\n",
      "\n",
      "[LOG]: Total parameters: 304.058 M (304058368)\n",
      "[LOG]: Trainable parameters: 304.058 M (304058368)\n",
      "[LOG]: -------------------------\n"
     ]
    }
   ],
   "source": [
    "if tokenizer: # Ensure tokenizer is loaded before creating config dependent on its vocab size\n",
    "    demo_llm_config = DemoLLMConfig(vocab_size=DEMO_VOCAB_SIZE_FINAL) # Use the final vocab size\n",
    "    demo_llm_instance = DemoLLMForCausalLM(demo_llm_config).to(DEVICE)\n",
    "    print_model_summary(demo_llm_instance, \"Initial DemoLLM Instance\")\n",
    "    del demo_llm_instance # Clean up\n",
    "else:\n",
    "    logger(\"Skipping model verification as tokenizer was not loaded.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3bab4274",
   "metadata": {},
   "source": [
    "**What we've done in Model Architecture:**\n",
    "We have defined all the necessary PyTorch modules for a Transformer-based decoder-only Language Model. This includes:\n",
    "- `DemoLLMConfig`: Holds model hyperparameters.\n",
    "- `DemoRMSNorm`: For layer normalization.\n",
    "- `RotaryEmbedding`: Implements RoPE for positional encoding.\n",
    "- `DemoAttention`: The core self-attention mechanism, including GQA logic and RoPE application.\n",
    "- `DemoFeedForward`: The MLP layer using SwiGLU.\n",
    "- `DemoTransformerBlock`: Combines attention and MLP with residual connections and normalization.\n",
    "- `DemoLLMModel`: Stacks multiple Transformer blocks.\n",
    "- `DemoLLMForCausalLM`: Adds the final language modeling head for prediction and loss calculation.\n",
    "\n",
    "This self-contained architecture is now ready for training."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17e6f20d",
   "metadata": {},
   "source": [
    "## Part 4: Pretraining (Self-Contained `DemoLLM`)\n",
    "\n",
    "**Theory Recap:** Pretraining teaches the model fundamental language understanding by having it predict the next token in a sequence on a large corpus. The loss is typically Cross-Entropy Loss calculated over the entire valid (non-padded) sequence.\n",
    "\n",
    "We will use the `DemoLLMForCausalLM` and `DemoCorpusDataset` defined earlier."
   ]
  },
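  {
   "cell_type": "markdown",
   "id": "f6c9d2e7",
   "metadata": {},
   "source": [
    "One way to restrict the loss to valid (non-padded) positions is cross-entropy's `ignore_index`: labels set to -100 contribute nothing to the loss. A small sketch:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "loss_fct = nn.CrossEntropyLoss(ignore_index=-100)\n",
    "logits = torch.randn(6, 10)                      # 6 positions, vocab of 10\n",
    "labels = torch.tensor([3, 7, 1, -100, -100, 2])  # two padded positions masked out\n",
    "masked = loss_fct(logits, labels)\n",
    "# Identical to averaging CE over only the 4 unmasked positions\n",
    "manual = nn.CrossEntropyLoss()(logits[[0, 1, 2, 5]], labels[[0, 1, 2, 5]])\n",
    "print(torch.isclose(masked, manual))  # tensor(True)\n",
    "```"
   ]
  },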
  {
   "cell_type": "markdown",
   "id": "b39a5cb8",
   "metadata": {},
   "source": [
    "### 4.1 Initialize Model for Pretraining"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "542c2c90",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Initializing model for Pretraining...\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: Initialized RotaryEmbedding with dim=64, max_seq_len=1024\n",
      "[LOG]: --- Demo Pretrain Model Summary ---\n",
      "[LOG]: Configuration: DemoLLMConfig {\n",
      "  \"_attn_implementation_autoset\": true,\n",
      "  \"bos_token_id\": 1,\n",
      "  \"dropout\": 0.0,\n",
      "  \"eos_token_id\": 0,\n",
      "  \"flash_attn\": true,\n",
      "  \"head_dim\": 64,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 1024,\n",
      "  \"intermediate_size\": 2752,\n",
      "  \"max_position_embeddings\": 1024,\n",
      "  \"model_type\": \"demo_llm\",\n",
      "  \"num_attention_heads\": 16,\n",
      "  \"num_hidden_layers\": 24,\n",
      "  \"num_key_value_heads\": 16,\n",
      "  \"pad_token_id\": 3,\n",
      "  \"rms_norm_eps\": 1e-05,\n",
      "  \"rope_theta\": 10000.0,\n",
      "  \"transformers_version\": \"4.51.3\",\n",
      "  \"use_cache\": true,\n",
      "  \"vocab_size\": 435\n",
      "}\n",
      "\n",
      "[LOG]: Total parameters: 304.058 M (304058368)\n",
      "[LOG]: Trainable parameters: 304.058 M (304058368)\n",
      "[LOG]: -------------------------\n"
     ]
    }
   ],
   "source": [
    "logger(\"Initializing model for Pretraining...\")\n",
    "if tokenizer: # Ensure tokenizer is loaded\n",
    "    pt_config = DemoLLMConfig(\n",
    "        vocab_size=DEMO_VOCAB_SIZE_FINAL,\n",
    "        hidden_size=DEMO_HIDDEN_SIZE,\n",
    "        intermediate_size=DEMO_INTERMEDIATE_SIZE,\n",
    "        num_hidden_layers=DEMO_NUM_LAYERS,\n",
    "        num_attention_heads=DEMO_NUM_ATTENTION_HEADS,\n",
    "        num_key_value_heads=DEMO_NUM_KV_HEADS,\n",
    "        max_position_embeddings=DEMO_MAX_SEQ_LEN,\n",
    "        bos_token_id=tokenizer.bos_token_id,\n",
    "        eos_token_id=tokenizer.eos_token_id,\n",
    "        pad_token_id=tokenizer.pad_token_id\n",
    "    )\n",
    "    pt_model = DemoLLMForCausalLM(pt_config).to(DEVICE)\n",
    "    print_model_summary(pt_model, \"Demo Pretrain Model\")\n",
    "else:\n",
    "    logger(\"Cannot initialize pretrain model: tokenizer not available.\")\n",
    "    pt_model = None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "594ccb65",
   "metadata": {},
   "source": [
    "### 4.2 Prepare Pretraining DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "d9f398da",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Loading pretraining data from: ./dataset_notebook_scratch\\pretrain_data.jsonl\n",
      "[LOG]: Loaded 7 samples for pretraining.\n",
      "[LOG]: Demo Pretrain dataset size: 7\n"
     ]
    }
   ],
   "source": [
    "if tokenizer and pt_model: # Proceed only if tokenizer and model are initialized\n",
    "    demo_pt_dataset = DemoCorpusDataset(pretrain_file_path, tokenizer, max_length=DEMO_MAX_SEQ_LEN)\n",
    "    demo_pt_dataloader = DataLoader(demo_pt_dataset, batch_size=DEMO_BATCH_SIZE, shuffle=True, num_workers=0)\n",
    "    logger(f\"Demo Pretrain dataset size: {len(demo_pt_dataset)}\")\n",
    "else:\n",
    "    logger(\"Skipping pretrain dataloader: tokenizer or model not initialized.\")\n",
    "    demo_pt_dataloader = [] # Empty dataloader"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b6ee3837",
   "metadata": {},
   "source": [
    "### 4.3 Pretraining Loop"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a5d8649",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LOG]: Starting DEMO Pretraining for 10 epochs (10 steps)...\n"
     ]
    }
   ],
   "source": [
    "if pt_model and demo_pt_dataloader: # Check if model and dataloader are ready\n",
    "    optimizer_pt_demo = optim.AdamW(pt_model.parameters(), lr=DEMO_PRETRAIN_LR)\n",
    "    loss_fct_pt_demo = nn.CrossEntropyLoss(reduction='none')\n",
    "\n",
    "    # Mixed precision context for GPU\n",
    "    autocast_ctx = nullcontext() if DEVICE.type == 'cpu' else torch.amp.autocast(device_type=DEVICE.type, dtype=PTDTYPE)\n",
     "    scaler_pt_demo = torch.amp.GradScaler('cuda', enabled=(DTYPE_STR != 'float32' and DEVICE.type == 'cuda'))  # torch.cuda.amp.GradScaler is deprecated\n",
    "\n",
    "    total_steps_pt_demo = len(demo_pt_dataloader) * DEMO_PRETRAIN_EPOCHS\n",
    "    logger(f\"Starting DEMO Pretraining for {DEMO_PRETRAIN_EPOCHS} epochs ({total_steps_pt_demo} steps)...\")\n",
    "\n",
    "    pt_model.train()\n",
    "    current_training_step_pt = 0\n",
    "    for epoch in range(DEMO_PRETRAIN_EPOCHS):\n",
    "        epoch_loss_pt_val = 0.0\n",
    "        for step, (X_batch, Y_batch, mask_batch) in enumerate(demo_pt_dataloader):\n",
    "            X_batch, Y_batch, mask_batch = X_batch.to(DEVICE), Y_batch.to(DEVICE), mask_batch.to(DEVICE)\n",
    "            \n",
    "            current_lr_pt = get_lr(current_training_step_pt, total_steps_pt_demo, DEMO_PRETRAIN_LR)\n",
    "            for param_group in optimizer_pt_demo.param_groups:\n",
    "                param_group['lr'] = current_lr_pt\n",
    "\n",
    "            with autocast_ctx:\n",
    "                # For our custom DemoLLMForCausalLM, if `labels` is not passed, it returns logits.\n",
    "                # We need to compute loss manually using the mask.\n",
    "                outputs_pt = pt_model(input_ids=X_batch) \n",
    "                logits_pt = outputs_pt.logits # (bsz, seq_len-1, vocab_size)\n",
    "                \n",
    "                # logits_pt are for predicting Y_batch. mask_batch aligns with Y_batch.\n",
    "                raw_loss_pt = loss_fct_pt_demo(logits_pt.view(-1, logits_pt.size(-1)), Y_batch.view(-1))\n",
    "                masked_loss_pt = (raw_loss_pt * mask_batch.view(-1)).sum() / mask_batch.sum().clamp(min=1)\n",
    "            \n",
    "            scaler_pt_demo.scale(masked_loss_pt).backward()\n",
    "            scaler_pt_demo.step(optimizer_pt_demo)\n",
    "            scaler_pt_demo.update()\n",
    "            optimizer_pt_demo.zero_grad(set_to_none=True)\n",
    "            \n",
    "            epoch_loss_pt_val += masked_loss_pt.item()\n",
    "            current_training_step_pt += 1\n",
    "            \n",
    "            if (step + 1) % 1 == 0: # Log frequently for demo\n",
    "                logger(f\"PT Epoch {epoch+1}, Step {step+1}/{len(demo_pt_dataloader)}, Loss: {masked_loss_pt.item():.4f}, LR: {current_lr_pt:.3e}\")\n",
    "        \n",
    "        avg_epoch_loss_pt = epoch_loss_pt_val / len(demo_pt_dataloader)\n",
    "        logger(f\"End of PT Epoch {epoch+1}, Average Loss: {avg_epoch_loss_pt:.4f}\")\n",
    "\n",
    "    logger(\"DEMO Pretraining finished.\")\n",
    "    # Save the final pretrained model\n",
    "    final_pretrained_model_path = os.path.join(NOTEBOOK_OUT_DIR, \"demo_llm_pretrained.pth\")\n",
    "    torch.save(pt_model.state_dict(), final_pretrained_model_path)\n",
    "    logger(f\"Demo pretrained model saved to: {final_pretrained_model_path}\")\n",
    "else:\n",
    "    logger(\"Skipping Pretraining loop as model or dataloader was not initialized.\")\n",
    "    final_pretrained_model_path = None"
   ]
  },
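   {
    "cell_type": "markdown",
    "id": "a1f00001",
    "metadata": {},
    "source": [
     "The loop above drives the learning rate through `get_lr` (defined earlier in the notebook). For reference, a warmup-plus-cosine-decay scheduler with the same signature could be sketched as follows; this is a hypothetical stand-in, and the notebook's actual `get_lr` may differ in its warmup fraction and floor:\n",
     "\n",
     "```python\n",
     "import math\n",
     "\n",
     "def cosine_lr_sketch(step, total_steps, peak_lr, warmup_frac=0.03, min_lr_ratio=0.1):\n",
     "    # Linear warmup to peak_lr, then cosine decay down to min_lr_ratio * peak_lr.\n",
     "    warmup_steps = max(1, int(total_steps * warmup_frac))\n",
     "    if step < warmup_steps:\n",
     "        return peak_lr * (step + 1) / warmup_steps\n",
     "    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)\n",
     "    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))\n",
     "    return peak_lr * (min_lr_ratio + (1.0 - min_lr_ratio) * cosine)\n",
     "```\n",
     "\n",
     "A schedule of this shape matches the logged pattern: the LR peaks early and then decays smoothly toward a small floor over the course of training."
    ]
   },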
  {
   "cell_type": "markdown",
   "id": "a198b77b",
   "metadata": {},
   "source": [
    "### 4.4 Quick Test of Self-Contained Pretrained Model\n",
    "Let's check if our model learned anything, even if minimal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78f27548",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:28] Testing pretrained model generation...\n",
      "[2025-05-14 15:52:28] Prompt: 'Language models learn' -> Generated: 'Language models learn learn learn learn learn learn learn learn learn learn learn learn learn learn learn learn'\n"
     ]
    }
   ],
   "source": [
    "if pt_model and final_pretrained_model_path and os.path.exists(final_pretrained_model_path):\n",
    "    logger(\"Testing pretrained model generation...\")\n",
    "    pt_model.eval() # Set to evaluation mode\n",
    "    test_prompt_str_pt = \"Language models learn\"\n",
    "    # Prepend BOS for generation consistency with training if tokenizer doesn't do it automatically\n",
    "    pt_test_input_ids = tokenizer(tokenizer.bos_token + test_prompt_str_pt, return_tensors=\"pt\").input_ids.to(DEVICE)\n",
    "    \n",
    "    with torch.no_grad(), autocast_ctx:\n",
    "        generated_output_pt = pt_model.generate(\n",
    "            pt_test_input_ids,\n",
    "            max_new_tokens=15,\n",
    "            do_sample=False, # Greedy for this test\n",
    "            eos_token_id=tokenizer.eos_token_id,\n",
    "            pad_token_id=tokenizer.pad_token_id\n",
    "        )\n",
    "    decoded_generated_pt = tokenizer.decode(generated_output_pt[0], skip_special_tokens=True)\n",
    "    logger(f\"Prompt: '{test_prompt_str_pt}' -> Generated: '{decoded_generated_pt}'\")\n",
    "else:\n",
    "    logger(\"Skipping pretrained model test as it was not trained or saved.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80dea70c",
   "metadata": {},
   "source": [
    "**What we've done in Pretraining:**\n",
    "We initialized our `DemoLLMForCausalLM` with a very small configuration. We then trained it for a few epochs on our tiny sample pretraining dataset. The goal was for the model to learn to predict the next token in a sequence, thereby capturing some basic statistical properties of the language. The generated output will likely be repetitive or nonsensical at this stage due to the extremely limited data and model size, but the process is what matters for this guide."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18257f67",
   "metadata": {},
   "source": [
    "## Part 5: Supervised Fine-Tuning (SFT) (Self-Contained `DemoLLM`)\n",
    "\n",
    "**Theory Recap:** SFT adapts the pretrained LLM to follow instructions or engage in conversations. It uses a dataset of input prompts and desired high-quality responses. The key is to train the model to generate the assistant's part of the dialogue, often using a loss mask so that only the assistant's tokens contribute to the loss calculation.\n",
    "\n",
    "We will load our pretrained `DemoLLMForCausalLM` and fine-tune it using the `DemoChatDataset` with our `sample_sft.jsonl` data."
   ]
  },
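   {
    "cell_type": "markdown",
    "id": "a1f00002",
    "metadata": {},
    "source": [
     "The loss-mask mechanics can be shown in isolation before touching the model. This toy sketch (random logits and a made-up mask) computes per-token cross-entropy with `reduction='none'` and averages it only over positions where the mask is 1, exactly as the SFT loop in section 5.3 combines the raw loss with `mask_batch_sft`:\n",
     "\n",
     "```python\n",
     "import torch\n",
     "import torch.nn as nn\n",
     "\n",
     "torch.manual_seed(0)\n",
     "vocab, seq = 10, 6\n",
     "logits = torch.randn(1, seq, vocab)          # toy model outputs\n",
     "targets = torch.randint(0, vocab, (1, seq))  # toy next-token labels\n",
     "mask = torch.tensor([[0, 0, 0, 1, 1, 1]], dtype=torch.float)  # train only on the last 3 tokens\n",
     "\n",
     "loss_fct = nn.CrossEntropyLoss(reduction='none')\n",
     "raw = loss_fct(logits.view(-1, vocab), targets.view(-1))      # per-token loss, shape (seq,)\n",
     "masked = (raw * mask.view(-1)).sum() / mask.sum().clamp(min=1)\n",
     "\n",
     "# masked equals the plain mean over just the unmasked positions\n",
     "assert torch.isclose(masked, raw[3:].mean())\n",
     "```\n",
     "\n",
     "Masked positions (the user's turn and padding) contribute zero gradient, so only the assistant's tokens shape the update."
    ]
   },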
  {
   "cell_type": "markdown",
   "id": "dcf665f1",
   "metadata": {},
   "source": [
    "### 5.1 Initialize Model for SFT (Load Pretrained)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fbb2e1eb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:28] Initializing model for SFT, loading pretrained weights...\n",
      "[2025-05-14 15:52:28] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:28] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:28] --- Demo SFT Model (Loaded Pretrained) Summary ---\n",
      "[2025-05-14 15:52:28] Configuration: DemoLLMConfig {\n",
      "  \"_attn_implementation_autoset\": true,\n",
      "  \"bos_token_id\": 1,\n",
      "  \"dropout\": 0.0,\n",
      "  \"eos_token_id\": 0,\n",
      "  \"flash_attn\": true,\n",
      "  \"head_dim\": 16,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 64,\n",
      "  \"intermediate_size\": 192,\n",
      "  \"max_position_embeddings\": 64,\n",
      "  \"model_type\": \"demo_llm\",\n",
      "  \"num_attention_heads\": 4,\n",
      "  \"num_hidden_layers\": 2,\n",
      "  \"num_key_value_heads\": 2,\n",
      "  \"pad_token_id\": 3,\n",
      "  \"rms_norm_eps\": 1e-05,\n",
      "  \"rope_theta\": 10000.0,\n",
      "  \"transformers_version\": \"4.51.3\",\n",
      "  \"use_cache\": true,\n",
      "  \"vocab_size\": 435\n",
      "}\n",
      "\n",
      "[2025-05-14 15:52:28] Total parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:28] Trainable parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:28] -------------------------\n"
     ]
    }
   ],
   "source": [
    "logger(\"Initializing model for SFT, loading pretrained weights...\")\n",
    "if tokenizer and final_pretrained_model_path and os.path.exists(final_pretrained_model_path):\n",
    "    sft_config = DemoLLMConfig( # Re-use the same config as pretraining\n",
    "        vocab_size=DEMO_VOCAB_SIZE_FINAL,\n",
    "        hidden_size=DEMO_HIDDEN_SIZE,\n",
    "        intermediate_size=DEMO_INTERMEDIATE_SIZE,\n",
    "        num_hidden_layers=DEMO_NUM_LAYERS,\n",
    "        num_attention_heads=DEMO_NUM_ATTENTION_HEADS,\n",
    "        num_key_value_heads=DEMO_NUM_KV_HEADS,\n",
    "        max_position_embeddings=DEMO_MAX_SEQ_LEN,\n",
    "        bos_token_id=tokenizer.bos_token_id,\n",
    "        eos_token_id=tokenizer.eos_token_id,\n",
    "        pad_token_id=tokenizer.pad_token_id\n",
    "    )\n",
    "    sft_model_demo = DemoLLMForCausalLM(sft_config).to(DEVICE)\n",
    "    sft_model_demo.load_state_dict(torch.load(final_pretrained_model_path, map_location=DEVICE))\n",
    "    print_model_summary(sft_model_demo, \"Demo SFT Model (Loaded Pretrained)\")\n",
    "else:\n",
    "    logger(\"Cannot initialize SFT model: Pretrained model path or tokenizer is invalid.\")\n",
    "    sft_model_demo = None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d104870",
   "metadata": {},
   "source": [
    "### 5.2 Prepare SFT DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "193d4484",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:28] Loading chat data from: ./dataset_notebook_scratch\\sft_data.jsonl\n",
      "[2025-05-14 15:52:28] Loaded 3 chat samples.\n",
      "[2025-05-14 15:52:28] Demo SFT dataset size: 3\n",
      "[2025-05-14 15:52:28] Verifying SFT data sample and mask from DataLoader:\n",
      "[2025-05-14 15:52:28]   Tokens from Y for sample 0 where mask is 1: Gravity is the force that pulls objects toward\n"
     ]
    }
   ],
   "source": [
    "if tokenizer and sft_model_demo:\n",
    "    demo_sft_dataset = DemoChatDataset(sft_file_path, tokenizer, max_length=DEMO_MAX_SEQ_LEN)\n",
    "    demo_sft_dataloader = DataLoader(demo_sft_dataset, batch_size=DEMO_BATCH_SIZE, shuffle=True, num_workers=0)\n",
    "    logger(f\"Demo SFT dataset size: {len(demo_sft_dataset)}\")\n",
    "    \n",
    "    logger(\"Verifying SFT data sample and mask from DataLoader:\")\n",
    "    for x_s_dl, y_s_dl, m_s_dl in demo_sft_dataloader:\n",
     "        idx_to_check = 0\n",
     "        # The dataset serves X = input_ids[:-1] and Y = input_ids[1:], so the loss mask\n",
     "        # aligns with Y. Decoding Y at the positions where the mask is 1 shows exactly\n",
     "        # which tokens contribute to the loss -- it should be only the assistant's reply.\n",
    "        logger(f\"  Tokens from Y for sample {idx_to_check} where mask is 1: {tokenizer.decode(y_s_dl[idx_to_check][m_s_dl[idx_to_check].bool()])}\")\n",
    "        break\n",
    "else:\n",
    "    logger(\"Skipping SFT Dataloader: SFT model or tokenizer not initialized.\")\n",
    "    demo_sft_dataloader = []"
   ]
  },
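   {
    "cell_type": "markdown",
    "id": "a1f00003",
    "metadata": {},
    "source": [
     "The verification in the cell above rests on the standard next-token shift: the dataset tokenizes the whole chat into `input_ids`, then serves `X = input_ids[:-1]` and `Y = input_ids[1:]`, with the loss mask aligned to `Y`. A minimal sketch with made-up token IDs:\n",
     "\n",
     "```python\n",
     "import torch\n",
     "\n",
     "input_ids = torch.tensor([1, 42, 17, 99, 5, 0])  # e.g. BOS ... EOS\n",
     "X = input_ids[:-1]   # what the model reads\n",
     "Y = input_ids[1:]    # what it must predict at each position\n",
     "\n",
     "# At position t, the model reads X[t] and is trained to emit Y[t] == input_ids[t + 1].\n",
     "assert torch.equal(torch.cat([X[:1], Y]), input_ids)\n",
     "assert all(Y[t] == input_ids[t + 1] for t in range(len(Y)))\n",
     "```\n",
     "\n",
     "This is why prepending X's first token to Y reconstructs the original sequence, and why the mask (built over `input_ids`) is shifted to line up with Y."
    ]
   },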
  {
   "cell_type": "markdown",
   "id": "a395154a",
   "metadata": {},
   "source": [
    "### 5.3 SFT Loop"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0c589b9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:28] Starting DEMO SFT for 5 epochs (10 steps)...\n",
      "[2025-05-14 15:52:28] SFT Epoch 1, Step 1/2, Loss: 60.2210, LR: 5.000e-05\n",
      "[2025-05-14 15:52:28] SFT Epoch 1, Step 2/2, Loss: 55.7710, LR: 4.890e-05\n",
      "[2025-05-14 15:52:28] End of SFT Epoch 1, Avg Loss: 57.9960\n",
      "[2025-05-14 15:52:28] SFT Epoch 2, Step 1/2, Loss: 60.0502, LR: 4.570e-05\n",
      "[2025-05-14 15:52:28] SFT Epoch 2, Step 2/2, Loss: 55.5945, LR: 4.073e-05\n",
      "[2025-05-14 15:52:28] End of SFT Epoch 2, Avg Loss: 57.8224\n",
      "[2025-05-14 15:52:28] SFT Epoch 3, Step 1/2, Loss: 57.3038, LR: 3.445e-05\n",
      "[2025-05-14 15:52:28] SFT Epoch 3, Step 2/2, Loss: 59.5190, LR: 2.750e-05\n",
      "[2025-05-14 15:52:28] End of SFT Epoch 3, Avg Loss: 58.4114\n",
      "[2025-05-14 15:52:28] SFT Epoch 4, Step 1/2, Loss: 59.8299, LR: 2.055e-05\n",
      "[2025-05-14 15:52:28] SFT Epoch 4, Step 2/2, Loss: 55.3519, LR: 1.427e-05\n",
      "[2025-05-14 15:52:28] End of SFT Epoch 4, Avg Loss: 57.5909\n",
      "[2025-05-14 15:52:28] SFT Epoch 5, Step 1/2, Loss: 57.2238, LR: 9.297e-06\n",
      "[2025-05-14 15:52:28] SFT Epoch 5, Step 2/2, Loss: 60.3055, LR: 6.101e-06\n",
      "[2025-05-14 15:52:28] End of SFT Epoch 5, Avg Loss: 58.7647\n",
      "[2025-05-14 15:52:28] DEMO SFT finished.\n",
      "[2025-05-14 15:52:28] Demo SFT model saved to: ./out_notebook_scratch\\demo_llm_sft.pth\n"
     ]
    }
   ],
   "source": [
    "if sft_model_demo and demo_sft_dataloader:\n",
    "    optimizer_sft_d = optim.AdamW(sft_model_demo.parameters(), lr=DEMO_SFT_LR)\n",
    "    loss_fct_sft_d = nn.CrossEntropyLoss(reduction='none') \n",
    "\n",
    "    autocast_ctx_sft = nullcontext() if DEVICE.type == 'cpu' else torch.amp.autocast(device_type=DEVICE.type, dtype=PTDTYPE)\n",
     "    scaler_sft_d = torch.amp.GradScaler('cuda', enabled=(DTYPE_STR != 'float32' and DEVICE.type == 'cuda'))  # torch.cuda.amp.GradScaler is deprecated\n",
    "\n",
    "    total_steps_sft_d = len(demo_sft_dataloader) * DEMO_SFT_EPOCHS\n",
    "    logger(f\"Starting DEMO SFT for {DEMO_SFT_EPOCHS} epochs ({total_steps_sft_d} steps)...\")\n",
    "\n",
    "    sft_model_demo.train()\n",
    "    current_training_step_sft = 0\n",
    "    for epoch in range(DEMO_SFT_EPOCHS):\n",
    "        epoch_loss_sft_val = 0.0\n",
    "        for step, (X_batch_sft, Y_batch_sft, mask_batch_sft) in enumerate(demo_sft_dataloader):\n",
    "            X_batch_sft, Y_batch_sft, mask_batch_sft = X_batch_sft.to(DEVICE), Y_batch_sft.to(DEVICE), mask_batch_sft.to(DEVICE)\n",
    "            \n",
    "            current_lr_sft = get_lr(current_training_step_sft, total_steps_sft_d, DEMO_SFT_LR)\n",
    "            for param_group in optimizer_sft_d.param_groups:\n",
    "                param_group['lr'] = current_lr_sft\n",
    "\n",
    "            with autocast_ctx_sft:\n",
    "                outputs_sft_loop = sft_model_demo(input_ids=X_batch_sft)\n",
    "                logits_sft_loop = outputs_sft_loop.logits # (bsz, seq_len-1, vocab_size)\n",
    "                \n",
    "                raw_loss_sft = loss_fct_sft_d(logits_sft_loop.view(-1, logits_sft_loop.size(-1)), Y_batch_sft.view(-1))\n",
    "                # mask_batch_sft corresponds to Y_batch_sft\n",
    "                masked_loss_sft = (raw_loss_sft * mask_batch_sft.view(-1)).sum() / mask_batch_sft.sum().clamp(min=1)\n",
    "            \n",
    "            scaler_sft_d.scale(masked_loss_sft).backward()\n",
    "            scaler_sft_d.step(optimizer_sft_d)\n",
    "            scaler_sft_d.update()\n",
    "            optimizer_sft_d.zero_grad(set_to_none=True)\n",
    "            \n",
    "            epoch_loss_sft_val += masked_loss_sft.item()\n",
    "            current_training_step_sft += 1\n",
    "            \n",
    "            if (step + 1) % 1 == 0: \n",
    "                logger(f\"SFT Epoch {epoch+1}, Step {step+1}/{len(demo_sft_dataloader)}, Loss: {masked_loss_sft.item():.4f}, LR: {current_lr_sft:.3e}\")\n",
    "        \n",
    "        logger(f\"End of SFT Epoch {epoch+1}, Avg Loss: {epoch_loss_sft_val / len(demo_sft_dataloader):.4f}\")\n",
    "\n",
    "    logger(\"DEMO SFT finished.\")\n",
    "    final_sft_model_path = os.path.join(NOTEBOOK_OUT_DIR, \"demo_llm_sft.pth\")\n",
    "    torch.save(sft_model_demo.state_dict(), final_sft_model_path)\n",
    "    logger(f\"Demo SFT model saved to: {final_sft_model_path}\")\n",
    "else:\n",
    "    logger(\"Skipping SFT loop as model or dataloader was not initialized.\")\n",
    "    final_sft_model_path = None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42b9fe0f",
   "metadata": {},
   "source": [
    "### 5.4 Quick Test of SFT Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9dfd8e61",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:28] Testing SFT model chat capability...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (64). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:29] SFT Prompt: 'What is the capital of France?' -> Generated: '\n",
      "'\n"
     ]
    }
   ],
   "source": [
    "if sft_model_demo and final_sft_model_path and os.path.exists(final_sft_model_path):\n",
    "    logger(\"Testing SFT model chat capability...\")\n",
    "    sft_model_demo.eval()\n",
    "    sft_test_chat_history = [{\"role\": \"user\", \"content\": \"What is the capital of France?\"}]\n",
    "    sft_test_prompt = tokenizer.apply_chat_template(sft_test_chat_history, tokenize=False, add_generation_prompt=True)\n",
    "    sft_test_inputs = tokenizer(sft_test_prompt, return_tensors=\"pt\").to(DEVICE)\n",
    "    \n",
    "    with torch.no_grad(), autocast_ctx_sft:\n",
    "        sft_generated_outputs = sft_model_demo.generate(\n",
    "            sft_test_inputs.input_ids,\n",
     "            max_new_tokens=200,  # deliberately exceeds max_position_embeddings (64); expect a length warning\n",
    "            do_sample=True,\n",
    "            eos_token_id=tokenizer.eos_token_id,\n",
    "            pad_token_id=tokenizer.pad_token_id\n",
    "        )\n",
    "    sft_decoded_response = tokenizer.decode(sft_generated_outputs[0][sft_test_inputs.input_ids.shape[1]:], skip_special_tokens=True)\n",
    "    logger(f\"SFT Prompt: '{sft_test_chat_history[0]['content']}' -> Generated: '{sft_decoded_response}'\")\n",
    "else:\n",
    "    logger(\"Skipping SFT model test as it was not trained or saved.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "faaeee4f",
   "metadata": {},
   "source": [
    "**What we've done in SFT:**\n",
    "We took the pretrained model and fine-tuned it on a small dataset of conversations. The `DemoChatDataset` class helped format the data using the tokenizer's chat template and created a loss mask so that the model only learned to predict the assistant's responses. After this stage, the model should be better at engaging in simple Q&A or chat compared to the purely pretrained model."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3af8601",
   "metadata": {},
   "source": [
    "## Part 6: Reasoning Training (Self-Contained `DemoLLM`)\n",
    "\n",
    "**Theory Recap:** The goal here is to teach the SFT model to produce structured reasoning outputs, specifically using `<think>...</think>` for the thought process and `<answer>...</answer>` for the final result. We achieve this by fine-tuning on a dataset where assistant responses follow this format. A crucial technique, inspired by `train_distill_reason.py` from the original project, is to increase the loss penalty on the special tag tokens (`<think>`, `</think>`, `<answer>`, `</answer>`). This encourages the model to learn to generate these structural elements correctly.\n",
    "\n",
    "We'll load our SFT `DemoLLMForCausalLM` and fine-tune it using the `DemoChatDataset` with our `sample_reasoning.jsonl`."
   ]
  },
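   {
    "cell_type": "markdown",
    "id": "a1f00004",
    "metadata": {},
    "source": [
     "The tag-weighting trick described above can be seen in isolation before the full loop. In this hypothetical sketch (made-up per-token losses, with 31 standing in for a reasoning-tag first-token ID), positions whose target is a tag token have their loss scaled up tenfold before averaging, so the model is penalized extra for getting the structural tokens wrong:\n",
     "\n",
     "```python\n",
     "import torch\n",
     "\n",
     "raw_loss = torch.tensor([1.0, 2.0, 1.0, 4.0])  # per-token cross-entropy losses\n",
     "targets  = torch.tensor([7, 31, 8, 31])        # 31 = hypothetical tag first-token id\n",
     "mask     = torch.tensor([1.0, 1.0, 1.0, 1.0])  # loss mask over assistant tokens\n",
     "\n",
     "tag_ids = torch.tensor([31])\n",
     "weights = mask.clone()\n",
     "weights[torch.isin(targets, tag_ids)] *= 10.0  # boost the tag positions\n",
     "\n",
     "loss = (raw_loss * weights).sum() / weights.sum().clamp(min=1)\n",
     "# (1 + 20 + 1 + 40) / (1 + 10 + 1 + 10) = 62 / 22\n",
     "assert abs(loss.item() - 62.0 / 22.0) < 1e-6\n",
     "```\n",
     "\n",
     "Because the denominator also uses the boosted weights, the result stays a weighted average rather than blowing up the loss scale."
    ]
   },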
  {
   "cell_type": "markdown",
   "id": "2ccab6a1",
   "metadata": {},
   "source": [
    "### 6.1 Initialize Model for Reasoning (Load SFT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a7342bd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:29] Initializing model for Reasoning, loading SFT weights...\n",
      "[2025-05-14 15:52:29] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:29] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:29] --- Demo Reasoning Model (Loaded SFT) Summary ---\n",
      "[2025-05-14 15:52:29] Configuration: DemoLLMConfig {\n",
      "  \"_attn_implementation_autoset\": true,\n",
      "  \"bos_token_id\": 1,\n",
      "  \"dropout\": 0.0,\n",
      "  \"eos_token_id\": 0,\n",
      "  \"flash_attn\": true,\n",
      "  \"head_dim\": 16,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 64,\n",
      "  \"intermediate_size\": 192,\n",
      "  \"max_position_embeddings\": 64,\n",
      "  \"model_type\": \"demo_llm\",\n",
      "  \"num_attention_heads\": 4,\n",
      "  \"num_hidden_layers\": 2,\n",
      "  \"num_key_value_heads\": 2,\n",
      "  \"pad_token_id\": 3,\n",
      "  \"rms_norm_eps\": 1e-05,\n",
      "  \"rope_theta\": 10000.0,\n",
      "  \"transformers_version\": \"4.51.3\",\n",
      "  \"use_cache\": true,\n",
      "  \"vocab_size\": 435\n",
      "}\n",
      "\n",
      "[2025-05-14 15:52:29] Total parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:29] Trainable parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:29] -------------------------\n"
     ]
    }
   ],
   "source": [
    "logger(\"Initializing model for Reasoning, loading SFT weights...\")\n",
    "if tokenizer and final_sft_model_path and os.path.exists(final_sft_model_path):\n",
    "    rsn_config = DemoLLMConfig( # Same config as SFT\n",
    "        vocab_size=DEMO_VOCAB_SIZE_FINAL,\n",
    "        hidden_size=DEMO_HIDDEN_SIZE,\n",
    "        intermediate_size=DEMO_INTERMEDIATE_SIZE,\n",
    "        num_hidden_layers=DEMO_NUM_LAYERS,\n",
    "        num_attention_heads=DEMO_NUM_ATTENTION_HEADS,\n",
    "        num_key_value_heads=DEMO_NUM_KV_HEADS,\n",
    "        max_position_embeddings=DEMO_MAX_SEQ_LEN,\n",
    "        bos_token_id=tokenizer.bos_token_id,\n",
    "        eos_token_id=tokenizer.eos_token_id,\n",
    "        pad_token_id=tokenizer.pad_token_id\n",
    "    )\n",
    "    reasoning_model_d = DemoLLMForCausalLM(rsn_config).to(DEVICE)\n",
    "    reasoning_model_d.load_state_dict(torch.load(final_sft_model_path, map_location=DEVICE))\n",
    "    print_model_summary(reasoning_model_d, \"Demo Reasoning Model (Loaded SFT)\")\n",
    "else:\n",
    "    logger(\"Cannot initialize Reasoning model: SFT model path or tokenizer is invalid.\")\n",
    "    reasoning_model_d = None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "84fb62e0",
   "metadata": {},
   "source": [
    "### 6.2 Prepare Reasoning DataLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ac66896",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:29] Loading chat data from: ./dataset_notebook_scratch\\reasoning_data.jsonl\n",
      "[2025-05-14 15:52:29] Loaded 2 chat samples.\n",
      "[2025-05-14 15:52:29] Demo Reasoning dataset size: 2\n",
      "[2025-05-14 15:52:29] Verifying Reasoning data sample and mask from DataLoader:\n",
      "[2025-05-14 15:52:29]   Tokens from Y for sample 0 where mask is 1: <think>The user is asking about primary colors. These are colors\n",
      "[2025-05-14 15:52:29]   Full Decoded Reasoning sample 0:\n",
      "user\n",
      "What are the primary colors?\n",
      "assistant\n",
      "<think>The user is asking about primary colors. These are colors\n"
     ]
    }
   ],
   "source": [
    "if tokenizer and reasoning_model_d:\n",
    "    demo_reasoning_dataset = DemoChatDataset(reasoning_file_path, tokenizer, max_length=DEMO_MAX_SEQ_LEN)\n",
    "    demo_reasoning_dataloader = DataLoader(demo_reasoning_dataset, batch_size=DEMO_BATCH_SIZE, shuffle=True, num_workers=0)\n",
    "    logger(f\"Demo Reasoning dataset size: {len(demo_reasoning_dataset)}\")\n",
    "    \n",
    "    logger(\"Verifying Reasoning data sample and mask from DataLoader:\")\n",
    "    for x_r_dl, y_r_dl, m_r_dl in demo_reasoning_dataloader:\n",
    "        idx_to_check = 0\n",
    "        logger(f\"  Tokens from Y for sample {idx_to_check} where mask is 1: {tokenizer.decode(y_r_dl[idx_to_check][m_r_dl[idx_to_check].bool()])}\")\n",
    "        full_ids_rsn_dl = torch.cat([x_r_dl[idx_to_check,:1], y_r_dl[idx_to_check]], dim=0)\n",
    "        logger(f\"  Full Decoded Reasoning sample {idx_to_check}:\\n{tokenizer.decode(full_ids_rsn_dl)}\")\n",
    "        break\n",
    "else:\n",
    "    logger(\"Skipping Reasoning Dataloader: model or tokenizer not initialized.\")\n",
    "    demo_reasoning_dataloader = []"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "461e6afb",
   "metadata": {},
   "source": [
    "### 6.3 Reasoning Training Loop with Special Token Weighting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f3b98a5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:29] Special Reasoning Tag First Token IDs for weighting: [31]\n",
      "[2025-05-14 15:52:29] Starting DEMO Reasoning training for 5 epochs (5 steps)...\n",
      "[2025-05-14 15:52:29] Reasoning Epoch 1, Step 1/1, Loss: 67.5133, LR: 2.000e-05\n",
      "[2025-05-14 15:52:29] End of Reasoning Epoch 1, Avg Loss: 67.5133\n",
      "[2025-05-14 15:52:29] Reasoning Epoch 2, Step 1/1, Loss: 67.4562, LR: 1.828e-05\n",
      "[2025-05-14 15:52:29] End of Reasoning Epoch 2, Avg Loss: 67.4562\n",
      "[2025-05-14 15:52:29] Reasoning Epoch 3, Step 1/1, Loss: 67.4038, LR: 1.378e-05\n",
      "[2025-05-14 15:52:29] End of Reasoning Epoch 3, Avg Loss: 67.4038\n",
      "[2025-05-14 15:52:29] Reasoning Epoch 4, Step 1/1, Loss: 67.3643, LR: 8.219e-06\n",
      "[2025-05-14 15:52:29] End of Reasoning Epoch 4, Avg Loss: 67.3643\n",
      "[2025-05-14 15:52:29] Reasoning Epoch 5, Step 1/1, Loss: 67.3407, LR: 3.719e-06\n",
      "[2025-05-14 15:52:29] End of Reasoning Epoch 5, Avg Loss: 67.3407\n",
      "[2025-05-14 15:52:29] DEMO Reasoning training finished.\n",
      "[2025-05-14 15:52:29] Demo reasoning model saved to: ./out_notebook_scratch\\demo_llm_reasoning.pth\n"
     ]
    }
   ],
   "source": [
    "if reasoning_model_d and demo_reasoning_dataloader and tokenizer:\n",
    "    optimizer_rsn_d = optim.AdamW(reasoning_model_d.parameters(), lr=DEMO_REASONING_LR)\n",
    "    loss_fct_rsn_d = nn.CrossEntropyLoss(reduction='none')\n",
    "\n",
    "    autocast_ctx_rsn = nullcontext() if DEVICE.type == 'cpu' else torch.amp.autocast(device_type=DEVICE.type, dtype=PTDTYPE)\n",
    "    scaler_rsn_d = torch.cuda.amp.GradScaler(enabled=(DTYPE_STR != 'float32' and DEVICE.type == 'cuda'))\n",
    "\n",
    "    # Get token IDs for special reasoning tags. This might create multiple IDs if tokenized into subwords.\n",
    "    # For simplicity, we'll check for the presence of the first token of each tag sequence.\n",
    "    think_start_id_rsn  = tokenizer.encode('<think>', add_special_tokens=False)[0]\n",
    "    think_end_id_rsn    = tokenizer.encode('</think>', add_special_tokens=False)[0]\n",
    "    answer_start_id_rsn = tokenizer.encode('<answer>', add_special_tokens=False)[0]\n",
    "    answer_end_id_rsn   = tokenizer.encode('</answer>', add_special_tokens=False)[0]\n",
    "    \n",
    "    # Create a tensor of these first-token IDs for quick checking\n",
    "    special_tag_first_token_ids = torch.tensor([\n",
    "        think_start_id_rsn, think_end_id_rsn, \n",
    "        answer_start_id_rsn, answer_end_id_rsn\n",
    "    ], device=DEVICE).unique()\n",
    "    logger(f\"Special Reasoning Tag First Token IDs for weighting: {special_tag_first_token_ids.tolist()}\")\n",
    "    REASONING_TAG_LOSS_WEIGHT = 5.0\n",
    "\n",
    "    total_steps_rsn_d = len(demo_reasoning_dataloader) * DEMO_REASONING_EPOCHS\n",
    "    logger(f\"Starting DEMO Reasoning training for {DEMO_REASONING_EPOCHS} epochs ({total_steps_rsn_d} steps)...\")\n",
    "\n",
    "    reasoning_model_d.train()\n",
    "    current_training_step_rsn = 0\n",
    "    for epoch in range(DEMO_REASONING_EPOCHS):\n",
    "        epoch_loss_rsn_val = 0.0\n",
    "        for step, (X_batch_rsn, Y_batch_rsn, sft_style_mask_rsn) in enumerate(demo_reasoning_dataloader):\n",
    "            X_batch_rsn, Y_batch_rsn, sft_style_mask_rsn = X_batch_rsn.to(DEVICE), Y_batch_rsn.to(DEVICE), sft_style_mask_rsn.to(DEVICE)\n",
    "            \n",
    "            current_lr_rsn = get_lr(current_training_step_rsn, total_steps_rsn_d, DEMO_REASONING_LR)\n",
    "            for param_group in optimizer_rsn_d.param_groups:\n",
    "                param_group['lr'] = current_lr_rsn\n",
    "\n",
    "            with autocast_ctx_rsn:\n",
    "                outputs_rsn_loop = reasoning_model_d(input_ids=X_batch_rsn)\n",
    "                logits_rsn_loop = outputs_rsn_loop.logits\n",
    "                \n",
    "                raw_loss_per_token_rsn = loss_fct_rsn_d(logits_rsn_loop.view(-1, logits_rsn_loop.size(-1)), Y_batch_rsn.view(-1))\n",
    "                \n",
    "                # Start with the SFT mask (train only on assistant response tokens)\n",
    "                effective_loss_weights = sft_style_mask_rsn.view(-1).float().clone()\n",
    "                \n",
    "                # Identify positions of (first tokens of) special tags in the target Y_batch_rsn\n",
    "                is_special_target_token_rsn = torch.isin(Y_batch_rsn.view(-1), special_tag_first_token_ids)\n",
    "                \n",
    "                # Apply higher weight where it's an assistant token AND a special tag token\n",
    "                apply_extra_weight = is_special_target_token_rsn & (sft_style_mask_rsn.view(-1) == 1)\n",
    "                effective_loss_weights[apply_extra_weight] *= REASONING_TAG_LOSS_WEIGHT\n",
    "                \n",
    "                # Calculate final weighted loss, normalized by the sum of the original SFT mask counts\n",
    "                # This maintains a somewhat comparable loss magnitude to SFT, while upweighting tags.\n",
    "                weighted_loss_rsn = (raw_loss_per_token_rsn * effective_loss_weights).sum() / sft_style_mask_rsn.sum().clamp(min=1)\n",
    "            \n",
    "            scaler_rsn_d.scale(weighted_loss_rsn).backward()\n",
    "            scaler_rsn_d.step(optimizer_rsn_d)\n",
    "            scaler_rsn_d.update()\n",
    "            optimizer_rsn_d.zero_grad(set_to_none=True)\n",
    "            \n",
    "            epoch_loss_rsn_val += weighted_loss_rsn.item()\n",
    "            current_training_step_rsn += 1\n",
    "            \n",
    "            if (step + 1) % 1 == 0: \n",
    "                logger(f\"Reasoning Epoch {epoch+1}, Step {step+1}/{len(demo_reasoning_dataloader)}, Loss: {weighted_loss_rsn.item():.4f}, LR: {current_lr_rsn:.3e}\")\n",
    "        \n",
    "        logger(f\"End of Reasoning Epoch {epoch+1}, Avg Loss: {epoch_loss_rsn_val / len(demo_reasoning_dataloader):.4f}\")\n",
    "\n",
    "    logger(\"DEMO Reasoning training finished.\")\n",
    "    final_reasoning_model_path = os.path.join(NOTEBOOK_OUT_DIR, \"demo_llm_reasoning.pth\")\n",
    "    torch.save(reasoning_model_d.state_dict(), final_reasoning_model_path)\n",
    "    logger(f\"Demo reasoning model saved to: {final_reasoning_model_path}\")\n",
    "else:\n",
    "    logger(\"Skipping Reasoning loop as model or dataloader was not initialized.\")\n",
    "    final_reasoning_model_path = None"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3ad14c6",
   "metadata": {},
   "source": [
    "**What we've done in Reasoning Training:**\n",
    "We took our SFT-trained model and further fine-tuned it on a specialized dataset containing `<think>...</think><answer>...</answer>` structures. The key modification in the training loop was to apply a higher loss weight to the special tag tokens (`<think>`, `</think>`, etc.) when they appeared in the assistant's target response. This encourages the model to prioritize learning and correctly generating these structural elements, leading to the desired \"thinking\" output format."
   ]
  },
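  {
   "cell_type": "markdown",
   "id": "3c9d71aa",
   "metadata": {},
   "source": [
    "The tag-weighting computation can be illustrated in isolation. Below is a minimal, self-contained sketch; the token IDs, mask, and per-token losses are made up for illustration and are not the notebook's real values:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# Hypothetical per-token losses for a 6-token target sequence\n",
    "raw_loss = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])\n",
    "targets  = torch.tensor([10, 31, 12, 13, 31, 14])    # 31 = first token of a special tag (assumed)\n",
    "sft_mask = torch.tensor([0., 1., 1., 1., 1., 1.])    # 0 = prompt token, 1 = assistant token\n",
    "tag_ids  = torch.tensor([31])\n",
    "TAG_WEIGHT = 5.0\n",
    "\n",
    "# Start from the SFT mask, then upweight assistant positions whose target is a tag token\n",
    "weights = sft_mask.clone()\n",
    "weights[torch.isin(targets, tag_ids) & (sft_mask == 1)] *= TAG_WEIGHT\n",
    "\n",
    "# Normalize by the unweighted mask count, as in the training loop above\n",
    "loss = (raw_loss * weights).sum() / sft_mask.sum().clamp(min=1)\n",
    "print(round(loss.item(), 4))  # positions with tag targets contribute 5x\n",
    "```\n",
    "\n",
    "Here the weighted sum is (0 + 10 + 3 + 4 + 25 + 6) / 5 = 9.6, versus 4.0 for the plain masked mean, showing how the tag positions dominate the gradient."
   ]
  },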
  {
   "cell_type": "markdown",
   "id": "050f9d1d",
   "metadata": {},
   "source": [
    "## Part 7: Inference with the \"Thinking\" LLM\n",
    "\n",
    "**Theory:** Now we test our final model. We'll provide it with prompts and observe if it generates the structured `<think>...</think><answer>...</answer>` output. The quality of the thinking and answer will depend heavily on the (very limited) training data, but the structure should be present if the reasoning training was somewhat effective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "30e2202c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-05-14 15:52:29] Loading final reasoning model for inference...\n",
      "[2025-05-14 15:52:29] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:29] Initialized RotaryEmbedding with dim=16, max_seq_len=64\n",
      "[2025-05-14 15:52:29] Final 'thinking' model loaded.\n",
      "[2025-05-14 15:52:29] --- Final Thinking LLM Summary ---\n",
      "[2025-05-14 15:52:29] Configuration: DemoLLMConfig {\n",
      "  \"_attn_implementation_autoset\": true,\n",
      "  \"bos_token_id\": 1,\n",
      "  \"dropout\": 0.0,\n",
      "  \"eos_token_id\": 0,\n",
      "  \"flash_attn\": true,\n",
      "  \"head_dim\": 16,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 64,\n",
      "  \"intermediate_size\": 192,\n",
      "  \"max_position_embeddings\": 64,\n",
      "  \"model_type\": \"demo_llm\",\n",
      "  \"num_attention_heads\": 4,\n",
      "  \"num_hidden_layers\": 2,\n",
      "  \"num_key_value_heads\": 2,\n",
      "  \"pad_token_id\": 3,\n",
      "  \"rms_norm_eps\": 1e-05,\n",
      "  \"rope_theta\": 10000.0,\n",
      "  \"transformers_version\": \"4.51.3\",\n",
      "  \"use_cache\": true,\n",
      "  \"vocab_size\": 435\n",
      "}\n",
      "\n",
      "[2025-05-14 15:52:29] Total parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:29] Trainable parameters: 0.126 M (126464)\n",
      "[2025-05-14 15:52:29] -------------------------\n",
      "[2025-05-14 15:52:29] \n",
      "--- Generating Structured Response for: 'If I have 3 apples and eat 1, how many are left?' ---\n",
      "[2025-05-14 15:52:29] Input prompt (len 51): <|im_start|>user\n",
      "If I have 3 apples and eat 1, how many are left?<|im_end|>\n",
      "<|im_start|>assistant\n",
      "[2025-05-14 15:52:29] Raw Assistant Response:\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:29] ==> Parsed <think>: Not found\n",
      "[2025-05-14 15:52:29] ==> Parsed <answer>: \n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:29] ------------------------\n",
      "[2025-05-14 15:52:29] \n",
      "--- Generating Structured Response for: 'What are the primary colors?' ---\n",
      "[2025-05-14 15:52:29] Input prompt (len 25): <|im_start|>user\n",
      "What are the primary colors?<|im_end|>\n",
      "<|im_start|>assistant\n",
      "[2025-05-14 15:52:30] Raw Assistant Response:\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:30] ==> Parsed <think>: Not found\n",
      "[2025-05-14 15:52:30] ==> Parsed <answer>: \n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:30] ------------------------\n",
      "[2025-05-14 15:52:30] \n",
      "--- Generating Structured Response for: 'Hello! Tell me a joke.' ---\n",
      "[2025-05-14 15:52:30] Input prompt (len 24): <|im_start|>user\n",
      "Hello! Tell me a joke.<|im_end|>\n",
      "<|im_start|>assistant\n",
      "[2025-05-14 15:52:30] Raw Assistant Response:\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:30] ==> Parsed <think>: Not found\n",
      "[2025-05-14 15:52:30] ==> Parsed <answer>: \n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "[2025-05-14 15:52:30] ------------------------\n"
     ]
    }
   ],
   "source": [
    "if final_reasoning_model_path and os.path.exists(final_reasoning_model_path) and tokenizer:\n",
    "    logger(\"Loading final reasoning model for inference...\")\n",
    "    final_model_config = DemoLLMConfig( # Ensure config matches saved model\n",
    "        vocab_size=DEMO_VOCAB_SIZE_FINAL,\n",
    "        hidden_size=DEMO_HIDDEN_SIZE,\n",
    "        intermediate_size=DEMO_INTERMEDIATE_SIZE,\n",
    "        num_hidden_layers=DEMO_NUM_LAYERS,\n",
    "        num_attention_heads=DEMO_NUM_ATTENTION_HEADS,\n",
    "        num_key_value_heads=DEMO_NUM_KV_HEADS,\n",
    "        max_position_embeddings=DEMO_MAX_SEQ_LEN,\n",
    "        bos_token_id=tokenizer.bos_token_id,\n",
    "        eos_token_id=tokenizer.eos_token_id,\n",
    "        pad_token_id=tokenizer.pad_token_id\n",
    "    )\n",
    "    final_thinking_llm = DemoLLMForCausalLM(final_model_config).to(DEVICE)\n",
    "    final_thinking_llm.load_state_dict(torch.load(final_reasoning_model_path, map_location=DEVICE))\n",
    "    final_thinking_llm.eval()\n",
    "    logger(\"Final 'thinking' model loaded.\")\n",
    "    print_model_summary(final_thinking_llm, \"Final Thinking LLM\")\n",
    "    \n",
    "    def get_structured_response(model, user_query, max_new_toks=DEMO_MAX_SEQ_LEN - 10, temp=0.7, tk=10):\n",
    "        logger(f\"\\n--- Generating Structured Response for: '{user_query}' ---\")\n",
    "        chat_history = [{\"role\": \"user\", \"content\": user_query}]\n",
    "        prompt_text = tokenizer.apply_chat_template(chat_history, tokenize=False, add_generation_prompt=True)\n",
    "        \n",
    "        input_ids = tokenizer(prompt_text, return_tensors=\"pt\").input_ids.to(DEVICE)\n",
    "        logger(f\"Input prompt (len {input_ids.shape[1]}): {prompt_text.strip()}\")\n",
    "        \n",
    "        with torch.no_grad(), autocast_ctx_rsn: # Using autocast context from reasoning training\n",
    "            generated_ids = model.generate(\n",
    "                input_ids,\n",
    "                max_new_tokens=max_new_toks,\n",
    "                do_sample=True,\n",
    "                temperature=temp,\n",
    "                top_k=tk,\n",
    "                eos_token_id=tokenizer.eos_token_id,\n",
    "                pad_token_id=tokenizer.pad_token_id\n",
    "            )\n",
    "        \n",
    "        assistant_response_ids = generated_ids[0][input_ids.shape[1]:]\n",
    "        assistant_response_text = tokenizer.decode(assistant_response_ids, skip_special_tokens=True)\n",
    "        logger(f\"Raw Assistant Response:\\n{assistant_response_text}\")\n",
    "        \n",
    "        # Simple parsing for <think> and <answer> tags\n",
    "        think_part = \"Not found\"\n",
    "        answer_part = assistant_response_text # Default to full response if tags are not perfectly formed\n",
    "        try:\n",
    "            if \"<think>\" in assistant_response_text and \"</think>\" in assistant_response_text:\n",
    "                think_start_idx = assistant_response_text.find(\"<think>\") + len(\"<think>\")\n",
    "                think_end_idx = assistant_response_text.find(\"</think>\")\n",
    "                think_part = assistant_response_text[think_start_idx:think_end_idx].strip()\n",
    "                \n",
    "                if \"<answer>\" in assistant_response_text[think_end_idx:] and \"</answer>\" in assistant_response_text[think_end_idx:]:\n",
    "                    # Search for answer tag *after* the think_end tag\n",
    "                    search_after_think = assistant_response_text[think_end_idx + len(\"</think>\"):]\n",
    "                    if \"<answer>\" in search_after_think:\n",
    "                        answer_start_idx_rel = search_after_think.find(\"<answer>\") + len(\"<answer>\")\n",
    "                        answer_start_idx_abs = think_end_idx + len(\"</think>\") + answer_start_idx_rel\n",
    "                        if \"</answer>\" in assistant_response_text[answer_start_idx_abs:]:\n",
    "                            answer_end_idx = assistant_response_text.find(\"</answer>\", answer_start_idx_abs)\n",
    "                            answer_part = assistant_response_text[answer_start_idx_abs:answer_end_idx].strip()\n",
    "                        else: # No closing answer tag after opening\n",
    "                            answer_part = assistant_response_text[answer_start_idx_abs:].strip()\n",
    "                    else: # No answer tag found after think block\n",
    "                         answer_part = search_after_think.strip() # Consider rest as answer\n",
    "            elif \"<answer>\" in assistant_response_text and \"</answer>\" in assistant_response_text: # Only answer tags\n",
    "                answer_start_idx = assistant_response_text.find(\"<answer>\") + len(\"<answer>\")\n",
    "                answer_end_idx = assistant_response_text.find(\"</answer>\")\n",
    "                answer_part = assistant_response_text[answer_start_idx:answer_end_idx].strip()\n",
    "        except Exception as e:\n",
    "            logger(f\"Error parsing think/answer tags: {e}\")\n",
    "            # Fallback to full response handled by default answer_part initialization\n",
    "            \n",
    "        logger(f\"==> Parsed <think>: {think_part}\")\n",
    "        logger(f\"==> Parsed <answer>: {answer_part}\")\n",
    "        logger(\"------------------------\")\n",
    "        return think_part, answer_part\n",
    "\n",
    "    # Test queries\n",
    "    get_structured_response(final_thinking_llm, \"If I have 3 apples and eat 1, how many are left?\")\n",
    "    get_structured_response(final_thinking_llm, \"What are the primary colors?\")\n",
    "    get_structured_response(final_thinking_llm, \"Hello! Tell me a joke.\")\n",
    "\n",
    "else:\n",
    "    logger(\"Skipping final inference test as reasoning model or tokenizer was not available/trained.\")"
   ]
  },
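  {
   "cell_type": "markdown",
   "id": "9f2e4c1b",
   "metadata": {},
   "source": [
    "As an aside, the manual tag parsing above can be written more compactly with regular expressions. A minimal sketch (the helper name is invented; it mirrors the fallback-to-full-text convention used above):\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def parse_think_answer(text):\n",
    "    # Non-greedy match so the first closing tag wins; DOTALL lets '.' span newlines\n",
    "    think = re.search(r'<think>(.*?)</think>', text, re.DOTALL)\n",
    "    answer = re.search(r'<answer>(.*?)</answer>', text, re.DOTALL)\n",
    "    think_part = think.group(1).strip() if think else 'Not found'\n",
    "    answer_part = answer.group(1).strip() if answer else text\n",
    "    return think_part, answer_part\n",
    "\n",
    "print(parse_think_answer('<think>3 - 1 = 2</think><answer>2 apples</answer>'))\n",
    "```\n",
    "\n",
    "One caveat: a bare regex matches the first `<answer>` anywhere in the text, even one nested inside the think block, whereas the explicit search-after-`</think>` logic above only accepts an answer that follows the reasoning."
   ]
  },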
  {
   "cell_type": "markdown",
   "id": "429126f9",
   "metadata": {},
   "source": [
    "## Part 8: Conclusion and Next Steps\n",
    "\n",
    "This notebook has provided a detailed, self-contained walkthrough of creating a small \"thinking\" Large Language Model. We covered:\n",
    "1.  **Tokenizer Training:** A simple BPE tokenizer was trained from scratch using the `tokenizers` library.\n",
    "2.  **Data Preparation:** Custom `Dataset` classes were implemented to handle pretraining, SFT, and reasoning data, all using our trained tokenizer.\n",
    "3.  **Model Architecture:** Key components of a Transformer decoder (RMSNorm, RoPE, Attention, FFN, Blocks, LM Head) were implemented directly within the notebook under the `DemoLLM` family of classes.\n",
    "4.  **Training Pipeline:** We executed three distinct training phases:\n",
    "    *   **Pretraining:** To instill basic language understanding.\n",
    "    *   **Supervised Fine-Tuning (SFT):** To teach instruction following and conversational ability, with loss masking on assistant responses.\n",
    "    *   **Reasoning Training:** To encourage the model to output explicit `<think>...</think>` processes before an `<answer>...</answer>`, using weighted loss for special tags.\n",
    "5.  **Inference:** We demonstrated how to use the final model and attempt to parse its structured output.\n",
    "\n",
    "**Key Observations from this Demo:**\n",
    "-   **Complexity:** Building even a simplified LLM from scratch involves many interconnected components.\n",
    "-   **Data is King:** The structure and content of training data at each stage are paramount. The reasoning data with explicit tags was crucial for the desired output format.\n",
    "-   **Computational Demands:** Even with tiny data and model sizes, training can be slow, highlighting the immense resources needed for state-of-the-art LLMs.\n",
    "-   **Fragility of Small Models:** The outputs from this demo model will be very basic and likely not robust due to the extremely limited scale. The \"reasoning\" observed is more pattern imitation than deep understanding.\n",
    "\n",
    "**Further Exploration (Beyond this Notebook):**\n",
    "-   **Scale Up:** Use significantly larger datasets, vocabulary sizes, and model dimensions.\n",
    "-   **Advanced Tokenization:** Explore more sophisticated tokenizer training or use well-established pretrained tokenizers if starting a new project from scratch isn't a hard requirement for the tokenizer itself.\n",
    "-   **Efficient Model Architectures:** Implement features like Mixture of Experts (MoE), Flash Attention (if hardware supports it more robustly than the basic check here), and Grouped Query Attention (GQA) more thoroughly.\n",
    "-   **Sophisticated Training:** Use distributed training (DDP/FSDP), more advanced optimizers and learning rate schedulers, gradient checkpointing, etc.\n",
    "-   **RLHF/DPO:** For better alignment with human preferences and more nuanced control over generation, explore Reinforcement Learning from Human Feedback or Direct Preference Optimization.\n",
    "-   **Rigorous Evaluation:** Employ standard NLP benchmarks (e.g., GLUE, SuperGLUE, MMLU, domain-specific tests for reasoning) to quantitatively assess model performance.\n",
    "\n",
    "This notebook serves as an educational tool to demystify the core mechanics. Building production-grade LLMs is a significant engineering and research effort. Hopefully, this detailed guide provides a solid foundation for your LLM journey!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv-mimind-thinking",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
