{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# What is Machine Learnings?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Machine Learning (ML) is a subset of artificial intelligence (AI) that focuses on the development of algorithms and statistical models that enable computer systems to perform tasks without explicit programming. The fundamental idea behind machine learning is to enable computers to learn from data and improve their performance over time.\n",
    "\n",
    "In traditional programming, a human programmer writes explicit instructions for a computer to perform a specific task. However, in machine learning, the approach is different. Instead of providing explicit instructions, the system is trained on data, allowing it to learn patterns, make predictions, and improve its performance through experience.\n",
    "\n",
    "Here are key concepts and components of machine learning:\n",
    "\n",
    "1. **Data:** Machine learning algorithms rely on data to learn patterns and make predictions. The quality and quantity of the data significantly impact the performance of the model.\n",
    "\n",
    "2. **Features:** Features are the individual measurable properties or characteristics of the data. In a dataset, each row represents an observation, and each column represents a feature.\n",
    "\n",
    "3. **Labels/Targets:** In supervised learning, the algorithm is trained on a labeled dataset, where the desired output (or target) is provided along with the input data. The algorithm learns to map inputs to outputs.\n",
    "\n",
    "4. **Training:** During the training phase, the machine learning model is exposed to a dataset to learn the underlying patterns. The algorithm adjusts its internal parameters to minimize the difference between its predictions and the actual outcomes.\n",
    "\n",
    "5. **Testing/Evaluation:** After training, the model is evaluated on a separate dataset to assess its performance and generalization to new, unseen data. This step helps ensure that the model has not simply memorized the training data but can make accurate predictions on new data.\n",
    "\n",
    "6. **Types of Machine Learning:**\n",
    "    - **Supervised Learning:** The algorithm is trained on a labeled dataset, and it learns to make predictions or classify new data based on the patterns learned during training.\n",
    "    \n",
    "    - **Unsupervised Learning:** The algorithm is given unlabeled data and must find patterns or relationships within the data without explicit guidance.\n",
    "    \n",
    "    - **Reinforcement Learning:** The algorithm learns by interacting with an environment and receiving feedback in the form of rewards or penalties. It aims to learn the optimal strategy to maximize cumulative rewards.\n",
    "\n",
    "7. **Common Algorithms:**\n",
    "    - **Linear Regression:** Predicts a continuous output based on input features.\n",
    "    \n",
    "    - **Decision Trees:** Tree-like models that make decisions based on input features.\n",
    "    \n",
    "    - **Neural Networks:** Complex models inspired by the structure of the human brain, particularly effective for tasks like image recognition and natural language processing.\n",
    "    \n",
    "    - **Support Vector Machines (SVM):** Classifies data by finding the hyperplane that best separates different classes.\n",
    "\n",
    "8. **Applications:** Machine learning is applied in various domains, including image and speech recognition, natural language processing, recommendation systems, healthcare diagnostics, financial fraud detection, autonomous vehicles, and many more.\n",
    "\n",
    "Machine learning is a dynamic and evolving field, with ongoing research and development continually expanding its applications and capabilities."
   ]
  },
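  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The supervised workflow sketched above (data, features, labels, training, testing) can be written in a few lines of scikit-learn. This is only an illustrative sketch: the bundled Iris dataset and `LogisticRegression` are arbitrary choices, and any labeled dataset and classifier would do.\n",
    "\n",
    "```python\n",
    "# Minimal supervised-learning workflow: train on labeled data, evaluate on held-out data.\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "X, y = load_iris(return_X_y=True)  # features X, labels y\n",
    "\n",
    "# Hold out a test set so evaluation reflects generalization, not memorization.\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "model = LogisticRegression(max_iter=1000)\n",
    "model.fit(X_train, y_train)     # training: adjust parameters to fit the data\n",
    "y_pred = model.predict(X_test)  # prediction on unseen data\n",
    "\n",
    "print(f\"Test accuracy: {accuracy_score(y_test, y_pred):.2f}\")\n",
    "```"
   ]
  },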
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Learning Algorithms?\n",
    "A machine learning algorithm is an algorithm that is able to learn from data. But\n",
    "what do we mean by learning? Mitchell ( 1997) provides the definition “A computer\n",
    "program is said to learn from experience E with respect to some class of tasks T\n",
    "and performance measure P , if its performance at tasks in T , as measured by P ,\n",
    "improves with experience E.” One can imagine a very wide variety of experiences\n",
    "E, tasks T , and performance measures P , and we do not make any attempt in this\n",
    "book to provide a formal definition of what may be used for each of these entities.\n",
    "Instead, the following sections provide intuitive descriptions and examples of the\n",
    "different kinds of tasks, performance measures and experiences that can be used\n",
    "to construct machine learning algorithms."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The concept of learning algorithms can be explained in terms of the Task (T), the Performance Measure (P), and the Experience (E). This framework is often referred to as the \"Task-Performance-Experience\" framework.\n",
    "\n",
    "1. **Task (T):**\n",
    "   - **Definition:** The task (T) represents what the learning system is trying to accomplish or the problem it is designed to solve.\n",
    "   - **Example:** In the context of image recognition, the task could be to correctly classify images of digits as numbers from 0 to 9.\n",
    "\n",
    "2. **Performance Measure (P):**\n",
    "   - **Definition:** The performance measure (P) is a metric that quantifies how well the learning system is accomplishing the task. It is a measure of the system's success or failure in achieving its objectives.\n",
    "   - **Example:** For an image recognition task, the performance measure could be the accuracy of the model in correctly classifying images.\n",
    "\n",
    "3. **Experience (E):**\n",
    "   - **Definition:** The experience (E) refers to the data or information that the learning system uses to learn and improve its performance on the task. It is the input the system receives to adapt and make better predictions or decisions.\n",
    "   - **Example:** In supervised learning, the experience would be a dataset containing labeled examples of images, where each image is associated with the correct digit label.\n",
    "\n",
    "Now, let's bring these concepts together:\n",
    "\n",
    "- **Learning Algorithm:**\n",
    "  - A learning algorithm is a computational procedure that takes the experience (E) as input and produces a hypothesis (H) as output.\n",
    "  - The hypothesis (H) is the learned model or function that maps inputs to outputs, attempting to capture the underlying patterns in the data to perform the task.\n",
    "\n",
    "- **Learning Process:**\n",
    "  - The learning process involves the learning algorithm using the provided experience (E) to produce a hypothesis (H) that minimizes the discrepancy between its predictions and the actual outcomes.\n",
    "  - The learning algorithm refines its internal parameters based on the feedback from the performance measure (P) to improve its ability to perform the task (T).\n",
    "\n",
    "- **Iterative Nature:**\n",
    "  - Learning is often an iterative process where the algorithm receives additional experience, refines its hypothesis, and adjusts its parameters to improve performance over time.\n",
    "\n",
    "In summary, the learning algorithm takes in experience (E) to perform a specific task (T) and is evaluated based on a performance measure (P). The goal is to continually improve the hypothesis (H) or model to enhance its ability to successfully accomplish the task. This framework provides a systematic way to understand and evaluate the learning process in various machine learning applications."
   ]
  },
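  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Mitchell's three ingredients can be made concrete in a small sketch: here the task T is classifying synthetic points, the performance measure P is accuracy on a held-out set, and the experience E is a growing number of labeled training examples. The synthetic dataset and `SGDClassifier` are illustrative assumptions, not part of the definition.\n",
    "\n",
    "```python\n",
    "# T/P/E sketch: watch performance P (accuracy) as experience E (training examples) grows.\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.linear_model import SGDClassifier\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "X, y = make_classification(n_samples=2000, n_features=20, random_state=0)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)\n",
    "\n",
    "scores = []\n",
    "for n in (20, 200, 1000):  # increasing amounts of experience E\n",
    "    clf = SGDClassifier(random_state=0).fit(X_train[:n], y_train[:n])  # learn a hypothesis H\n",
    "    p = accuracy_score(y_test, clf.predict(X_test))  # performance measure P on task T\n",
    "    scores.append(p)\n",
    "    print(f\"E = {n:4d} examples -> P = {p:.3f}\")\n",
    "```\n",
    "\n",
    "Accuracy will typically (though not necessarily monotonically) improve as the model sees more experience."
   ]
  },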
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Task, T\n",
    "Machine learning allows us to tackle tasks that are too difficult to solve with\n",
    "fixed programs written and designed by human beings. From a scientific and\n",
    "philosophical point of view, machine learning is interesting because developing our\n",
    "understanding of machine learning entails developing our understanding of the\n",
    "principles that underlie intelligence.\n",
    "\n",
    "Machine learning can address a wide range of tasks, and these tasks are broadly categorized into different types based on the nature of the problem and the desired output. Here are some common types of machine learning tasks:\n",
    "\n",
    "1. **Supervised Learning:**\n",
    "   - **Classification:** In classification, the algorithm is trained on a labeled dataset where each example belongs to a certain category or class. The goal is to learn a mapping from inputs to predefined output classes.\n",
    "     - Example: Spam detection, image classification, sentiment analysis.\n",
    "   - **Regression:** In regression, the algorithm predicts a continuous output based on input features. The goal is to learn a mapping from inputs to a continuous numeric value.\n",
    "     - Example: Predicting house prices, stock prices, temperature.\n",
    "\n",
    "2. **Unsupervised Learning:**\n",
    "   - **Clustering:** Clustering involves grouping similar data points together based on some similarity measure. The algorithm identifies patterns or relationships within the data without predefined categories.\n",
    "     - Example: Customer segmentation, document clustering.\n",
    "   - **Dimensionality Reduction:** Dimensionality reduction techniques aim to reduce the number of input features while preserving relevant information. This helps in visualizing and processing high-dimensional data.\n",
    "     - Example: Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE).\n",
    "\n",
    "3. **Reinforcement Learning:**\n",
    "   - Reinforcement learning involves an agent that interacts with an environment and learns to make decisions by receiving feedback in the form of rewards or penalties. The goal is to find an optimal strategy to maximize cumulative rewards over time.\n",
    "     - Example: Game playing (e.g., AlphaGo), robotic control, autonomous vehicles.\n",
    "\n",
    "4. **Semi-Supervised Learning:**\n",
    "   - Semi-supervised learning combines elements of both supervised and unsupervised learning. The algorithm is trained on a dataset with both labeled and unlabeled examples. This is particularly useful when obtaining labeled data is expensive or time-consuming.\n",
    "     - Example: Image recognition with limited labeled data.\n",
    "\n",
    "5. **Self-Supervised Learning:**\n",
    "   - Self-supervised learning is a type of unsupervised learning where the algorithm generates its own labels from the input data. This can involve predicting missing parts of the input or solving other auxiliary tasks.\n",
    "     - Example: Word embeddings, image completion.\n",
    "\n",
    "6. **Transfer Learning:**\n",
    "   - Transfer learning involves training a model on one task and then applying the learned knowledge to a different, but related, task. This is especially useful when labeled data is scarce for the target task.\n",
    "     - Example: Pre-training a model on a large image dataset and fine-tuning it for a specific image classification task.\n",
    "\n",
    "7. **Association Rule Learning:**\n",
    "   - Association rule learning discovers interesting relationships or associations among variables in large datasets. It identifies rules that describe how certain events tend to occur together.\n",
    "     - Example: Market basket analysis in retail, recommendation systems.\n",
    "\n",
    "8. **Generative Models:**\n",
    "   - Generative models create new samples that resemble the training data. These models can be used for tasks like image generation, text-to-image synthesis, and data augmentation.\n",
    "     - Example: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs).\n",
    "\n",
    "9. **Sequence-to-Sequence Learning:**\n",
    "   - Sequence-to-sequence learning involves mapping input sequences to output sequences, making it suitable for tasks where the input and output have a sequential or temporal relationship.\n",
    "     - Example: Machine translation, speech recognition, text summarization.\n",
    "\n",
    "10. **Time Series Forecasting:**\n",
    "    - Time series forecasting focuses on predicting future values based on past observations. It involves understanding and modeling temporal dependencies in the data.\n",
    "      - Example: Stock price prediction, weather forecasting, energy consumption prediction.\n",
    "\n",
    "11. **Anomaly Detection:**\n",
    "    - Anomaly detection aims to identify instances that deviate significantly from the norm in a dataset. It is valuable for detecting unusual patterns or outliers.\n",
    "      - Example: Fraud detection, network intrusion detection, equipment failure prediction.\n",
    "\n",
    "12. **Multi-Label Classification:**\n",
    "    - In multi-label classification, each instance is assigned multiple labels simultaneously. This is different from traditional classification where each instance is assigned to a single category.\n",
    "      - Example: Document categorization with multiple topics, image tagging.\n",
    "\n",
    "13. **Multi-Task Learning:**\n",
    "    - Multi-task learning involves training a model to perform multiple related tasks simultaneously. The goal is to leverage shared information across tasks to improve overall performance.\n",
    "      - Example: Simultaneous learning of part-of-speech tagging and named entity recognition in natural language processing.\n",
    "\n",
    "14. **Causal Inference:**\n",
    "    - Causal inference aims to understand cause-and-effect relationships in data. It involves determining how changes in one variable affect another.\n",
    "      - Example: Understanding the impact of a marketing campaign on sales, determining the effectiveness of a medical treatment.\n",
    "\n",
    "15. **Fairness and Bias Mitigation:**\n",
    "    - Fairness and bias mitigation in machine learning involve developing models that are fair and unbiased across different demographic groups. This is crucial to ensure ethical and equitable applications.\n",
    "      - Example: Ensuring fairness in hiring algorithms, mitigating bias in credit scoring.\n",
    "\n",
    "These task categories showcase the versatility of machine learning in addressing diverse challenges across various domains. Depending on the specific problem at hand, practitioners choose the most suitable task type and learning approach to achieve effective and ethical solutions. Machine learning continues to evolve, and researchers are exploring innovative ways to tackle new types of tasks and improve the robustness and interpretability of models."
   ]
  },
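  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of the unsupervised entries in this list, the sketch below clusters unlabeled synthetic points with k-means; the `make_blobs` data and the choice of three clusters are assumptions made purely for the example.\n",
    "\n",
    "```python\n",
    "# Unsupervised-learning sketch: k-means groups unlabeled points into clusters.\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.datasets import make_blobs\n",
    "\n",
    "# Generate synthetic points; the true labels are discarded (unsupervised setting).\n",
    "X, _ = make_blobs(n_samples=300, centers=[[0, 0], [6, 6], [0, 6]], cluster_std=1.0, random_state=42)\n",
    "\n",
    "kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)\n",
    "print(\"Clusters found:\", len(set(kmeans.labels_)))\n",
    "print(\"Centroids:\\n\", kmeans.cluster_centers_.round(1))\n",
    "```"
   ]
  },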
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How We Measure Performance Of Machine Learning Algorithm?\n",
    "\n",
    "Measuring the performance of machine learning algorithms is crucial to understanding how well they are solving a particular task. The choice of performance metrics depends on the type of task (classification, regression, clustering, etc.) and the specific goals of the application. Here are some common performance metrics for different types of machine learning tasks:\n",
    "\n",
    "### 1. **Classification Metrics:**\n",
    "   - **Accuracy:** The proportion of correctly classified instances out of the total instances. It is suitable when the classes are balanced.\n",
    "   - **Precision:** The ratio of true positive predictions to the total predicted positives. It is a measure of the accuracy of positive predictions.\n",
    "   - **Recall (Sensitivity or True Positive Rate):** The ratio of true positive predictions to the total actual positives. It is a measure of how well the model identifies positive instances.\n",
    "   - **F1 Score:** The harmonic mean of precision and recall. It provides a balance between precision and recall.\n",
    "   - **Area Under the Receiver Operating Characteristic (ROC) Curve (AUC-ROC):** A metric that evaluates the ability of a binary classifier to discriminate between positive and negative instances across different probability thresholds.\n",
    "\n",
    "### 2. **Regression Metrics:**\n",
    "   - **Mean Absolute Error (MAE):** The average absolute difference between the predicted and actual values. It is less sensitive to outliers.\n",
    "   - **Mean Squared Error (MSE):** The average of the squared differences between predicted and actual values. It gives more weight to large errors.\n",
    "   - **Root Mean Squared Error (RMSE):** The square root of MSE. It is in the same unit as the target variable, making it easier to interpret.\n",
    "   - **R-squared (Coefficient of Determination):** A measure of how well the model explains the variance in the target variable. It ranges from 0 to 1, with higher values indicating a better fit.\n",
    "\n",
    "### 3. **Clustering Metrics:**\n",
    "   - **Silhouette Score:** Measures how similar an object is to its own cluster (cohesion) compared to other clusters (separation).\n",
    "   - **Davies-Bouldin Index:** Measures the compactness and separation between clusters. Lower values indicate better clustering.\n",
    "   - **Adjusted Rand Index (ARI):** Measures the similarity between true and predicted cluster assignments, adjusted for chance.\n",
    "   - **Normalized Mutual Information (NMI):** Measures the mutual information between true and predicted cluster assignments, normalized by entropy.\n",
    "\n",
    "### 4. **Anomaly Detection Metrics:**\n",
    "   - **Precision at a Given Recall Level:** Measures the precision of the model at a specific recall level. It is essential when dealing with imbalanced datasets.\n",
    "   - **Area Under the Precision-Recall Curve (AUC-PR):** Evaluates the precision-recall trade-off across different probability thresholds.\n",
    "\n",
    "### 5. **Ranking Metrics (Information Retrieval):**\n",
    "   - **Precision at K:** Measures the precision of the top K retrieved items.\n",
    "   - **Recall at K:** Measures the recall of the top K retrieved items.\n",
    "   - **Mean Average Precision (MAP):** Calculates the average precision across different recall levels.\n",
    "\n",
    "### 6. **Multi-Class Classification Metrics:**\n",
    "   - Metrics such as micro/macro/weighted average precision, recall, and F1 score for multi-class classification tasks.\n",
    "\n",
    "### 7. **Fairness Metrics:**\n",
    "   - **Disparate Impact:** Measures the ratio of the predicted positive rate for the protected group to that of the unprotected group.\n",
    "   - **Equalized Odds:** Measures the balance of false positive and false negative rates across different groups.\n",
    "\n",
    "### 8. **Time Series Forecasting Metrics:**\n",
    "   - Metrics specific to time series data, such as Mean Absolute Percentage Error (MAPE), Root Mean Squared Logarithmic Error (RMSLE), and others.\n",
    "\n",
    "### General Considerations:\n",
    "   - **Cross-Validation:** Perform cross-validation to ensure that the model's performance is consistent across different subsets of the data.\n",
    "   - **Confusion Matrix:** Provides a detailed breakdown of true positives, true negatives, false positives, and false negatives.\n",
    "\n",
    "The choice of the most appropriate metric depends on the nature of the task and the specific requirements of the application. It's common to use a combination of metrics to gain a comprehensive understanding of a model's performance. Additionally, domain-specific considerations may influence the choice of evaluation metrics."
   ]
  },
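  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Classification and regression metrics are computed in the code cells later in this notebook; clustering metrics can be computed the same way. A minimal sketch, using synthetic blobs as an illustrative assumption:\n",
    "\n",
    "```python\n",
    "# Clustering-metric sketch: silhouette score (internal, needs no labels) and\n",
    "# adjusted Rand index (external, compares against ground-truth labels).\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.datasets import make_blobs\n",
    "from sklearn.metrics import adjusted_rand_score, silhouette_score\n",
    "\n",
    "X, y_true = make_blobs(n_samples=300, centers=[[0, 0], [6, 6], [0, 6]], random_state=0)\n",
    "labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)\n",
    "\n",
    "print(f\"Silhouette score:    {silhouette_score(X, labels):.3f}\")\n",
    "print(f\"Adjusted Rand index: {adjusted_rand_score(y_true, labels):.3f}\")\n",
    "```"
   ]
  },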
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performence Mesurement Of Classification Metrics\n",
    "\n",
    " **Performance Measurement of Classification Metrics with Mathematical Concepts**\n",
    "\n",
    "Classification metrics are used to evaluate the performance of a classification model. They measure how well the model can distinguish between different classes of data. There are many different classification metrics, each with its own strengths and weaknesses.\n",
    "\n",
    "**Mathematical Concepts**\n",
    "\n",
    "The following mathematical concepts are useful for understanding classification metrics:\n",
    "\n",
    "* **True Positives (TP)**: The number of instances that the model correctly predicted as positive.\n",
    "* **False Positives (FP)**: The number of instances that the model incorrectly predicted as positive.\n",
    "* **False Negatives (FN)**: The number of instances that the model incorrectly predicted as negative.\n",
    "* **True Negatives (TN)**: The number of instances that the model correctly predicted as negative.\n",
    "\n",
    "**Accuracy**\n",
    "\n",
    "Accuracy is the most common classification metric. It is calculated as the fraction of all predictions that are correct.\n",
    "\n",
    "```\n",
    "Accuracy = (TP + TN) / (TP + FP + FN + TN)\n",
    "```\n",
    "\n",
    "**Precision**\n",
    "\n",
    "Precision measures the fraction of predicted positives that are actually positive.\n",
    "\n",
    "```\n",
    "Precision = TP / (TP + FP)\n",
    "```\n",
    "\n",
    "**Recall**\n",
    "\n",
    "Recall measures the fraction of actual positives that are correctly predicted.\n",
    "\n",
    "```\n",
    "Recall = TP / (TP + FN)\n",
    "```\n",
    "\n",
    "**F1 Score**\n",
    "\n",
    "The F1 score is a harmonic mean of precision and recall. It is a useful metric for evaluating classification models when both precision and recall are important.\n",
    "\n",
    "```\n",
    "F1 Score = 2 * (Precision * Recall) / (Precision + Recall)\n",
    "```\n",
    "\n",
    "**ROC Curve and AUC**\n",
    "\n",
    "The receiver operating characteristic (ROC) curve is a graphical representation of the performance of a classification model. It plots the model's true positive rate (TPR) against its false positive rate (FPR) at different thresholds. The AUC (area under the ROC curve) is a single number that summarizes the overall performance of a classification model.\n",
    "\n",
    "```\n",
    "TPR = TP / (TP + FN)\n",
    "FPR = FP / (FP + TN)\n",
    "AUC = ∫ (ROC curve)\n",
    "```\n",
    "\n",
    "**Choosing the Right Classification Metric**\n",
    "\n",
    "The best classification metric to use depends on the specific problem at hand. If both precision and recall are important, then the F1 score is a good choice. If the cost of false positives is high, then precision is a good choice. If the cost of false negatives is high, then recall is a good choice.\n",
    "\n",
    "**Example**\n",
    "\n",
    "Suppose we are building a classification model to predict whether or not a customer will churn. We have a training set of 1000 customers, half of whom churned and half of whom did not churn. We train our model on the training set and then evaluate its performance on a held-out test set of 500 customers.\n",
    "\n",
    "The following table shows the confusion matrix for our model on the test set:\n",
    "\n",
    "| Predicted | Actual |\n",
    "|---|---|---|\n",
    "| Churn | Churn | 250 |\n",
    "| Churn | No Churn | 50 |\n",
    "| No Churn | Churn | 100 |\n",
    "| No Churn | No Churn | 100 | \n",
    "\n",
    "From the confusion matrix, we can calculate the following classification metrics:\n",
    "\n",
    "* Accuracy = (250 + 100) / (500) = 70%\n",
    "* Precision = 250 / (250 + 50) = 83.3%\n",
    "* Recall = 250 / (250 + 100) = 71.4%\n",
    "* F1 Score = 2 * (0.833 * 0.714) / (0.833 + 0.714) = 76.4%\n",
    "\n",
    "In this example, the accuracy of our model is 70%, which means that it correctly predicted whether or not a customer would churn 70% of the time. The precision of our model is 83.3%, which means that 83.3% of the customers that our model predicted would churn actually did churn. The recall of our model is 71.4%, which means that our model correctly predicted 71.4% of the customers that actually churned. The F1 score of our model is 76.4%, which is a good score overall.\n",
    "\n",
    "**Conclusion**\n",
    "\n",
    "Performance measurement of classification metrics is an important part of machine learning. By understanding the different classification metrics and how to calculate them, you can better evaluate the performance of your classification models."
   ]
  },
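  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The churn example can be verified directly from its four confusion-matrix counts:\n",
    "\n",
    "```python\n",
    "# Recomputing the churn example's metrics from the confusion-matrix counts.\n",
    "tp, fp, fn, tn = 250, 50, 100, 100\n",
    "\n",
    "accuracy = (tp + tn) / (tp + fp + fn + tn)\n",
    "precision = tp / (tp + fp)\n",
    "recall = tp / (tp + fn)\n",
    "f1 = 2 * precision * recall / (precision + recall)\n",
    "\n",
    "print(f\"Accuracy:  {accuracy:.1%}\")   # 70.0%\n",
    "print(f\"Precision: {precision:.1%}\")  # 83.3%\n",
    "print(f\"Recall:    {recall:.1%}\")     # 71.4%\n",
    "print(f\"F1 score:  {f1:.1%}\")         # 76.9%\n",
    "```"
   ]
  },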
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Confusion Matrix:\n",
      "[[3 2]\n",
      " [1 4]]\n",
      "True Negative (TN): 3, False Positive (FP): 2, False Negative (FN): 1, True Positive (TP): 4\n",
      "Accuracy: 0.7000\n",
      "Precision: 0.6667\n",
      "Recall: 0.8000\n",
      "F1 Score: 0.7273\n",
      "AUC-ROC: 0.7000\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score, confusion_matrix\n",
    "\n",
    "# Example data: true labels and predicted labels\n",
    "y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]\n",
    "y_pred = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]\n",
    "\n",
    "# Calculate confusion matrix\n",
    "cm = confusion_matrix(y_true, y_pred)\n",
    "tn, fp, fn, tp = cm.ravel()\n",
    "\n",
    "# Calculate classification metrics\n",
    "accuracy = accuracy_score(y_true, y_pred)\n",
    "precision = precision_score(y_true, y_pred)\n",
    "recall = recall_score(y_true, y_pred)\n",
    "f1 = f1_score(y_true, y_pred)\n",
    "roc_auc = roc_auc_score(y_true, y_pred)\n",
    "\n",
    "# Display the results\n",
    "print(f\"Confusion Matrix:\\n{cm}\")\n",
    "print(f\"True Negative (TN): {tn}, False Positive (FP): {fp}, False Negative (FN): {fn}, True Positive (TP): {tp}\")\n",
    "print(f\"Accuracy: {accuracy:.4f}\")\n",
    "print(f\"Precision: {precision:.4f}\")\n",
    "print(f\"Recall: {recall:.4f}\")\n",
    "print(f\"F1 Score: {f1:.4f}\")\n",
    "print(f\"AUC-ROC: {roc_auc:.4f}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performence Mesurement Of Regression Metrics\n",
    "\n",
    "**Performance Measurement of Regression Metrics with Mathematical Concept**\n",
    "\n",
    "**Regression metrics** are used to evaluate the performance of a regression model on a held-out test set. They measure the distance between the predicted values and the actual values of the target variable.\n",
    "\n",
    "**Mathematical Concepts**\n",
    "\n",
    "The following mathematical concepts are used to calculate the regression metrics:\n",
    "\n",
    "* **Squared Error:** The squared error is the difference between two values squared. It is calculated as follows:\n",
    "\n",
    "```\n",
    "squared_error = (predicted_value - actual_value) ** 2\n",
    "```\n",
    "\n",
    "* **Absolute Error:** The absolute error is the difference between two values without regard to sign. It is calculated as follows:\n",
    "\n",
    "```\n",
    "absolute_error = |predicted_value - actual_value|\n",
    "```\n",
    "\n",
    "* **Mean:** The mean is the average of a set of values. It is calculated as follows:\n",
    "\n",
    "```\n",
    "mean = sum(values) / len(values)\n",
    "```\n",
    "\n",
    "* **Variance:** The variance is a measure of how spread out a set of values is. It is calculated as follows:\n",
    "\n",
    "```\n",
    "variance = (sum((values - mean) ** 2) / (len(values) - 1))\n",
    "```\n",
    "\n",
    "**Calculation of Regression Metrics**\n",
    "\n",
    "The following equations show how to calculate the regression metrics:\n",
    "\n",
    "**Mean Squared Error (MSE)**\n",
    "\n",
    "```\n",
    "MSE = mean(squared_errors)\n",
    "```\n",
    "\n",
    "**Mean Absolute Error (MAE)**\n",
    "\n",
    "```\n",
    "MAE = mean(absolute_errors)\n",
    "```\n",
    "\n",
    "**Root Mean Squared Error (RMSE)**\n",
    "\n",
    "```\n",
    "RMSE = sqrt(MSE)\n",
    "```\n",
    "\n",
    "**R-squared (R²)**\n",
    "\n",
    "```\n",
    "R² = 1 - (variance_of_residuals / variance_of_target_variable)\n",
    "```\n",
    "\n",
    "**Variance of residuals** is the variance of the difference between the predicted values and the actual values of the target variable. It is calculated as follows:\n",
    "\n",
    "```\n",
    "variance_of_residuals = variance(predicted_values - actual_values)\n",
    "```\n",
    "\n",
    "**Variance of target variable** is the variance of the target variable itself. It is calculated as follows:\n",
    "\n",
    "```\n",
    "variance_of_target_variable = variance(target_variable)\n",
    "```\n",
    "\n",
    "**Interpretation of Regression Metrics**\n",
    "\n",
    "The interpretation of the regression metrics depends on the specific problem and the desired outcome. However, some general guidelines can be provided:\n",
    "\n",
    "* **MSE, MAE, and RMSE:** Lower values of these metrics indicate a better performing model.\n",
    "* **R²:** Higher values of R² indicate a better performing model. However, it is important to note that R² can be misleading if the model is overfitting the training data.\n",
    "\n",
    "**Example**\n",
    "\n",
    "Suppose we have a regression model that predicts house prices. We train the model on a set of training data and then evaluate its performance on a held-out test set. The following table shows the results:\n",
    "\n",
    "| Metric | Value |\n",
    "|---|---|\n",
    "| MSE | 1000000 |\n",
    "| MAE | 500000 |\n",
    "| RMSE | 1000 |\n",
    "| R² | 0.8 |\n",
    "\n",
    "The MSE and RMSE are both relatively low, indicating that the model is making good predictions on average. However, the MAE is relatively high, indicating that the model is making some large errors. This may be due to the presence of outliers in the data.\n",
    "\n",
    "The R² of 0.8 indicates that the model explains 80% of the variation in the house prices. This is a good R² score, but it is important to note that it can be misleading if the model is overfitting the training data.\n",
    "\n",
    "**Conclusion**\n",
    "\n",
    "Regression metrics are essential for evaluating the performance of regression models. By understanding the mathematical concepts behind these metrics, you can better interpret their results and choose the right metric for your specific problem."
   ]
  },
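  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before turning to scikit-learn's implementations, the definitions above can be computed directly with NumPy; R² is computed here in the standard sum-of-squares form, which agrees with scikit-learn's `r2_score`. The four toy values are chosen arbitrarily for illustration.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Arbitrary toy values for illustration.\n",
    "y_true = np.array([3.0, -0.5, 2.0, 7.0])\n",
    "y_pred = np.array([2.5, 0.0, 2.0, 8.0])\n",
    "\n",
    "errors = y_pred - y_true\n",
    "mae = np.mean(np.abs(errors))  # Mean Absolute Error\n",
    "mse = np.mean(errors ** 2)     # Mean Squared Error\n",
    "rmse = np.sqrt(mse)            # Root Mean Squared Error\n",
    "\n",
    "# R² = 1 - SS_res / SS_tot\n",
    "ss_res = np.sum(errors ** 2)\n",
    "ss_tot = np.sum((y_true - y_true.mean()) ** 2)\n",
    "r2 = 1 - ss_res / ss_tot\n",
    "\n",
    "print(f\"MAE: {mae:.4f}  MSE: {mse:.4f}  RMSE: {rmse:.4f}  R²: {r2:.4f}\")\n",
    "```"
   ]
  },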
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "### 1. Mean Absolute Error (MAE):"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "**MAE:** 0.5000"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "\n",
       "### 2. Mean Squared Error (MSE):"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "**MSE:** 0.3750"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "\n",
       "### 3. Root Mean Squared Error (RMSE):"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "**RMSE:** 0.6124"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "\n",
       "### 4. R-squared (R2):"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/markdown": [
       "**R-squared:** 0.9486"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
    "import numpy as np\n",
    "\n",
    "# Example data: true values and predicted values\n",
    "y_true = np.array([3, -0.5, 2, 7])\n",
    "y_pred = np.array([2.5, 0.0, 2, 8])\n",
    "\n",
    "# Calculate regression metrics\n",
    "mae = mean_absolute_error(y_true, y_pred)\n",
    "mse = mean_squared_error(y_true, y_pred)\n",
    "rmse = np.sqrt(mse)\n",
    "r2 = r2_score(y_true, y_pred)\n",
    "\n",
    "# Display the results using Markdown\n",
    "from IPython.display import display, Markdown\n",
    "\n",
    "display(Markdown(f\"### 1. Mean Absolute Error (MAE):\"))\n",
    "display(Markdown(f\"**MAE:** {mae:.4f}\"))\n",
    "display(Markdown(f\"\\n### 2. Mean Squared Error (MSE):\"))\n",
    "display(Markdown(f\"**MSE:** {mse:.4f}\"))\n",
    "display(Markdown(f\"\\n### 3. Root Mean Squared Error (RMSE):\"))\n",
    "display(Markdown(f\"**RMSE:** {rmse:.4f}\"))\n",
    "display(Markdown(f\"\\n### 4. R-squared (R2):\"))\n",
    "display(Markdown(f\"**R-squared:** {r2:.4f}\"))\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
