{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "21506780",
   "metadata": {},
   "source": [
    "# \"THE PRICE IS RIGHT\" Capstone Project\n",
    "\n",
    "This week - build a model that predicts how much something costs from a description, based on a scrape of Amazon data\n",
    "\n",
    "# Order of play\n",
    "\n",
    "DAY 1: Data Curation  \n",
    "DAY 2: Data Pre-processing  \n",
    "DAY 3: Evaluation, Baselines, Traditional ML  \n",
    "DAY 4: Deep Learning and LLMs  \n",
    "DAY 5: Fine-tuning a Frontier Model  \n",
    "\n",
    "## DAY 3: Evaluation, Baselines, Traditional ML\n",
    "\n",
    "Today we'll write some simple models to predict the price of a product from its description.\n",
    "\n",
    "We'll establish a consistent approach for evaluating model performance.\n",
    "\n",
    "And we'll test some baseline models built with traditional machine learning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b6d3a19",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from pricer.evaluator import evaluate\n",
    "from pricer.items import Item"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c7b913a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set to True to use the smaller \"lite\" dataset for faster experiments\n",
    "LITE_MODE = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "96ba4aff",
   "metadata": {},
   "outputs": [],
   "source": [
    "username = \"ed-donner\"\n",
    "dataset = f\"{username}/items_lite\" if LITE_MODE else f\"{username}/items_full\"\n",
    "\n",
    "train, val, test = Item.from_hub(dataset)\n",
    "\n",
    "print(f\"Loaded {len(train):,} training items, {len(val):,} validation items, {len(test):,} test items\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "31ca5736",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The simplest possible baseline: ignore the item entirely and guess a random price\n",
    "\n",
    "def random_pricer(item):\n",
    "    return random.randrange(1, 1000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8cf38807",
   "metadata": {},
   "outputs": [],
   "source": [
    "random.seed(42)\n",
    "evaluate(random_pricer, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23b1faa3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# That was fun!\n",
    "# We can do better - here's another rather trivial model\n",
    "\n",
    "training_prices = [item.price for item in train]\n",
    "training_average = sum(training_prices) / len(training_prices)\n",
    "print(f\"Average training price: ${training_average:.2f}\")\n",
    "\n",
    "def constant_pricer(item):\n",
    "    return training_average"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "15ea0e6f",
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate(constant_pricer, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0968094a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hand-engineered features: the product's weight (with a flag for unknown weights)\n",
    "# and the length of its description\n",
    "\n",
    "def get_features(item):\n",
    "    return {\n",
    "        \"weight\": item.weight,\n",
    "        \"weight_unknown\": 1 if item.weight == 0 else 0,\n",
    "        \"text_length\": len(item.summary)\n",
    "    }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b16468e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def list_to_dataframe(items):\n",
    "    features = [get_features(item) for item in items]\n",
    "    df = pd.DataFrame(features)\n",
    "    df['price'] = [item.price for item in items]\n",
    "    return df\n",
    "\n",
    "train_df = list_to_dataframe(train)\n",
    "test_df = list_to_dataframe(test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17376f8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Traditional Linear Regression!\n",
    "\n",
    "np.random.seed(42)\n",
    "\n",
    "# Separate features and target\n",
    "feature_columns = ['weight', 'weight_unknown', 'text_length']\n",
    "\n",
    "X_train = train_df[feature_columns]\n",
    "y_train = train_df['price']\n",
    "X_test = test_df[feature_columns]\n",
    "y_test = test_df['price']\n",
    "\n",
    "# Train a Linear Regression\n",
    "model = LinearRegression()\n",
    "model.fit(X_train, y_train)\n",
    "\n",
    "for feature, coef in zip(feature_columns, model.coef_):\n",
    "    print(f\"{feature}: {coef}\")\n",
    "print(f\"Intercept: {model.intercept_}\")\n",
    "\n",
    "# Predict the test set and evaluate\n",
    "y_pred = model.predict(X_test)\n",
    "mse = mean_squared_error(y_test, y_pred)\n",
    "r2 = r2_score(y_test, y_pred)\n",
    "\n",
    "print(f\"Mean Squared Error: {mse}\")\n",
    "print(f\"R-squared Score: {r2}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e2e1099",
   "metadata": {},
   "outputs": [],
   "source": [
    "def linear_regression_pricer(item):\n",
    "    features = get_features(item)\n",
    "    features_df = pd.DataFrame([features])\n",
    "    return model.predict(features_df)[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fdc8e467",
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate(linear_regression_pricer, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e51bad8",
   "metadata": {},
   "outputs": [],
   "source": [
    "prices = np.array([float(item.price) for item in train])\n",
    "documents = [item.summary for item in train]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c06f6cd3",
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(42)\n",
    "vectorizer = CountVectorizer(max_features=2000, stop_words='english')\n",
    "X = vectorizer.fit_transform(documents)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "03259ec5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Here is a sample of the 2,000 most common words that it picked, not including \"stop words\":\n",
    "\n",
    "selected_words = vectorizer.get_feature_names_out()\n",
    "print(f\"Number of selected words: {len(selected_words)}\")\n",
    "print(\"A sample of the selected words:\", selected_words[1000:1020])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6071315",
   "metadata": {},
   "outputs": [],
   "source": [
    "regressor = LinearRegression()\n",
    "regressor.fit(X, prices)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2fd40cee",
   "metadata": {},
   "outputs": [],
   "source": [
    "def natural_language_linear_regression_pricer(item):\n",
    "    x = vectorizer.transform([item.summary])\n",
    "    return max(regressor.predict(x)[0], 0)  # clamp negative predictions - prices can't be negative"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "92da701e",
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate(natural_language_linear_regression_pricer, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2c6dd70f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Random Forest is slow to train on a large sparse matrix, so we fit on a subset\n",
    "\n",
    "subset = 15_000\n",
    "rf_model = RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=4)\n",
    "rf_model.fit(X[:subset], prices[:subset])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f299b117",
   "metadata": {},
   "source": [
    "## Random Forest model\n",
    "\n",
    "The Random Forest is a type of \"**ensemble**\" algorithm, meaning that it combines many smaller algorithms to make better predictions.\n",
    "\n",
    "It uses a very simple kind of machine learning algorithm called a **decision tree**. A decision tree makes predictions by examining the values of features in the input, like a flow chart of IF statements. Decision trees are quick and simple, but they tend to overfit.\n",
    "\n",
    "In our case, the \"features\" are the elements of the vector - in other words, the number of times each word appears in the product description.\n",
    "\n",
    "So you can think of it something like this:\n",
    "\n",
    "**Decision Tree**  \n",
    "\\- IF the word \"TV\" appears more than 3 times THEN  \n",
    "-- IF the word \"LED\" appears more than 2 times THEN  \n",
    "--- IF the word \"HD\" appears at least once THEN  \n",
    "---- Price = $500\n",
    "\n",
    "\n",
    "With Random Forest, multiple decision trees are created. Each one is trained with a different random subset of the data, and a different random subset of the features. You can see above that we specify 100 trees, which is the default.\n",
    "\n",
    "Then the Random Forest model simply takes the average of all its trees to produce the final result."
   ]
  },
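  {
   "cell_type": "markdown",
   "id": "a7f2d4c1",
   "metadata": {},
   "source": [
    "As a sketch of the decision-tree idea above, here is a single shallow tree trained on the same bag-of-words features. The `max_depth=3` setting is an arbitrary choice so the printed tree stays readable - this cell is purely illustrative and not part of the main flow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8e3c5d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: one shallow decision tree, so we can inspect its IF/THEN structure\n",
    "from sklearn.tree import DecisionTreeRegressor, export_text\n",
    "\n",
    "tree = DecisionTreeRegressor(max_depth=3, random_state=42)\n",
    "tree.fit(X[:subset], prices[:subset])\n",
    "\n",
    "# Print the learned splits, labelled with the actual words from the vectorizer\n",
    "print(export_text(tree, feature_names=list(vectorizer.get_feature_names_out())))"
   ]
  },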
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "744980c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "def random_forest(item):\n",
    "    x = vectorizer.transform([item.summary])\n",
    "    return max(0, rf_model.predict(x)[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "36f002f8",
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate(random_forest, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "50733fdd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# This is how to save the model if you want to, particularly if you run this on a larger dataset\n",
    "\n",
    "# import joblib\n",
    "# joblib.dump(rf_model, \"random_forest.joblib\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b286bb03",
   "metadata": {},
   "source": [
    "## Introducing XGBoost\n",
    "\n",
    "Like Random Forest, XGBoost is also an ensemble model that combines multiple decision trees.\n",
    "\n",
    "But unlike Random Forest, XGBoost builds one tree after another, with each new tree trained to correct the errors of the trees before it - a technique known as **gradient boosting**.\n",
    "\n",
    "It's much faster than Random Forest, so we can run it on the full dataset, and it typically generalizes better.\n",
    "\n",
    "**If this import doesn't work, please skip this! It's not required. On a Mac, you might need to do `brew install libomp` in the terminal.**"
   ]
  },
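  {
   "cell_type": "markdown",
   "id": "c9f4e6a3",
   "metadata": {},
   "source": [
    "To make \"each tree corrects the errors of the previous trees\" concrete, here is a hand-rolled two-round boosting sketch using shallow scikit-learn trees. This is only an illustration of the idea (with an arbitrary depth and the same 0.1 learning rate as the XGBoost cell), not how the library is implemented internally:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0a5f7b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: boosting by hand for squared error - each new tree is fit to\n",
    "# the residuals (the errors) of the ensemble built so far\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "\n",
    "Xs, ys = X[:subset], prices[:subset]\n",
    "pred = np.full(subset, ys.mean())  # round 0: just predict the average price\n",
    "for round_number in range(1, 3):\n",
    "    residuals = ys - pred\n",
    "    t = DecisionTreeRegressor(max_depth=3, random_state=42).fit(Xs, residuals)\n",
    "    pred += 0.1 * t.predict(Xs)  # small learning rate, as in the XGBoost cell\n",
    "    print(f\"Round {round_number} mean abs error: ${np.abs(ys - pred).mean():.2f}\")"
   ]
  },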
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39d87530",
   "metadata": {},
   "outputs": [],
   "source": [
    "import xgboost as xgb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3491c74a",
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(42)\n",
    "\n",
    "xgb_model = xgb.XGBRegressor(n_estimators=1000, random_state=42, n_jobs=4, learning_rate=0.1)\n",
    "xgb_model.fit(X, prices)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "20f9d714",
   "metadata": {},
   "outputs": [],
   "source": [
    "def xg_boost(item):\n",
    "    x = vectorizer.transform([item.summary])\n",
    "    return max(0, xgb_model.predict(x)[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8dd132b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate(xg_boost, test)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c472069",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#181;\">Business applications</h2>\n",
    "            <span style=\"color:#181;\">Traditional ML isn't just a history lesson; it's still heavily used in industry today, particularly for tasks with clearly identifiable features. It's worth spending time exploring these algorithms and experimenting here. See if you can beat my numbers with traditional ML! I ran the Random Forest on the entire 800,000-item training dataset; it took about 15 hours and achieved an error of $56.40. Traditional ML can do well - try it for yourself.</span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
