{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a81a83ec",
   "metadata": {},
   "source": [
    "\n",
    "# 贝尔曼方程的状态价值函数(State Value Function)\n",
    "\n",
    "贝尔曼方程是强化学习中的基础理论,用于描述状态价值函数V(s)。状态价值函数表示从状态s开始,遵循某个策略π所能获得的期望回报。\n",
    "\n",
    "贝尔曼方程的数学表达式:\n",
    "\n",
    "V_π(s) = E_π[R_{t+1} + γV_π(s_{t+1}) | s_t = s]\n",
    "\n",
    "其中:\n",
    "- V_π(s): 在策略π下,状态s的价值\n",
    "- R_{t+1}: 即时奖励\n",
    "- γ: 折扣因子(0≤γ≤1)\n",
    "- s_{t+1}: 下一个状态\n",
    "- E_π: 在策略π下的期望\n",
    "\n",
    "贝尔曼方程表明:\n",
    "1. 状态的价值等于即时奖励加上折扣后的下一状态价值的期望\n",
    "2. 体现了强化学习的递归性质\n",
    "3. 是动态规划方法的理论基础\n",
    "4. 可用于评估策略的好坏\n",
    "\n",
    "通过迭代求解贝尔曼方程,我们可以得到最优状态价值函数V*(s),进而找到最优策略。\n"
   ]
  },
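  {
   "cell_type": "markdown",
   "id": "f0a1b2c3",
   "metadata": {},
   "source": [
    "For a fixed policy the Bellman equation is a linear system, so the values can also be obtained in closed form as $V = (I - \\gamma P)^{-1} R$. The sketch below assumes the same 3-state chain ($P$, $R$, $\\gamma = 0.9$) used in the iterative example below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Same transition matrix, rewards, and discount factor as the iterative example\n",
    "P = np.array([\n",
    "    [0.7, 0.3, 0.0],\n",
    "    [0.3, 0.4, 0.3],\n",
    "    [0.0, 0.3, 0.7]\n",
    "])\n",
    "R = np.array([1, 0, 2])\n",
    "gamma = 0.9\n",
    "\n",
    "# Solve the linear system (I - gamma * P) V = R directly\n",
    "V_exact = np.linalg.solve(np.eye(3) - gamma * P, R)\n",
    "print(\"Closed-form state values:\", np.round(V_exact, 3))\n"
   ]
  },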
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "63fa2733",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "最终的状态价值函数:\n",
      "状态 0 的价值: 9.198\n",
      "状态 1 的价值: 8.901\n",
      "状态 2 的价值: 11.901\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# 定义状态转移概率矩阵 P\n",
    "P = np.array([\n",
    "    [0.7, 0.3, 0.0],  # 状态0的转移概率\n",
    "    [0.3, 0.4, 0.3],  # 状态1的转移概率\n",
    "    [0.0, 0.3, 0.7]   # 状态2的转移概率\n",
    "])\n",
    "\n",
    "# 定义奖励矩阵 R\n",
    "R = np.array([1, 0, 2])  # 每个状态对应的即时奖励\n",
    "\n",
    "# 定义折扣因子\n",
    "gamma = 0.9\n",
    "\n",
    "# 初始化价值函数\n",
    "V = np.zeros(3)\n",
    "\n",
    "# 使用贝尔曼方程迭代计算价值函数\n",
    "for i in range(100):\n",
    "    V_new = np.zeros(3)\n",
    "    for s in range(3):\n",
    "        # 贝尔曼方程: V(s) = R(s) + gamma * sum(P(s,s') * V(s'))\n",
    "        V_new[s] = R[s] + gamma * np.sum(P[s] * V)\n",
    "    \n",
    "    # 检查收敛\n",
    "    if np.all(np.abs(V_new - V) < 1e-6):\n",
    "        break\n",
    "        \n",
    "    V = V_new.copy()\n",
    "\n",
    "print(\"最终的状态价值函数:\")\n",
    "for s in range(3):\n",
    "    print(f\"状态 {s} 的价值: {V[s]:.3f}\")\n"
   ]
  },
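  {
   "cell_type": "markdown",
   "id": "b7c8d9e0",
   "metadata": {},
   "source": [
    "The cell above evaluates a fixed policy. To illustrate the optimal value function $V^*(s)$ mentioned earlier, the sketch below runs value iteration with the Bellman optimality backup $V(s) \\leftarrow \\max_a [R(s, a) + \\gamma \\sum_{s'} P(s' \\mid s, a) V(s')]$ on a hypothetical 2-action MDP over the same 3 states; the per-action transitions and rewards are made up for illustration.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e1f2a3b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical per-action transition tensor P[s, a, s'] and rewards R[s, a]\n",
    "P = np.array([\n",
    "    [[0.7, 0.3, 0.0], [0.1, 0.8, 0.1]],  # state 0, actions 0/1\n",
    "    [[0.3, 0.4, 0.3], [0.0, 0.2, 0.8]],  # state 1, actions 0/1\n",
    "    [[0.0, 0.3, 0.7], [0.5, 0.5, 0.0]]   # state 2, actions 0/1\n",
    "])\n",
    "R = np.array([[1.0, 0.0],\n",
    "              [0.0, 0.5],\n",
    "              [2.0, 1.0]])\n",
    "gamma = 0.9\n",
    "\n",
    "V = np.zeros(3)\n",
    "for _ in range(1000):\n",
    "    # Bellman optimality backup over all state-action pairs at once:\n",
    "    # P @ V sums over s', giving Q[s, a]\n",
    "    Q = R + gamma * (P @ V)\n",
    "    V_new = Q.max(axis=1)\n",
    "    if np.max(np.abs(V_new - V)) < 1e-6:\n",
    "        V = V_new\n",
    "        break\n",
    "    V = V_new\n",
    "\n",
    "policy = Q.argmax(axis=1)  # greedy policy with respect to V*\n",
    "print(\"Optimal state values:\", np.round(V, 3))\n",
    "print(\"Greedy policy:\", policy)\n"
   ]
  },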
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b88663bb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "更新后的Q表:\n",
      "[[0. 1.]\n",
      " [0. 0.]\n",
      " [0. 0.]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Q函数的贝尔曼方程实现\n",
    "def bellman_q_equation(Q, state, action, reward, next_state, next_action, gamma=0.9):\n",
    "    \"\"\"\n",
    "    Q函数的贝尔曼方程\n",
    "    Q(s,a) = R + gamma * Q(s',a')\n",
    "    \n",
    "    参数:\n",
    "    Q: Q值表\n",
    "    state: 当前状态\n",
    "    action: 当前动作\n",
    "    reward: 即时奖励\n",
    "    next_state: 下一个状态\n",
    "    next_action: 下一个动作\n",
    "    gamma: 折扣因子\n",
    "    \"\"\"\n",
    "    # 计算当前Q值\n",
    "    current_q = Q[state][action]\n",
    "    \n",
    "    # 计算下一状态的Q值\n",
    "    next_q = Q[next_state][next_action]\n",
    "    \n",
    "    # 应用贝尔曼方程更新Q值\n",
    "    new_q = reward + gamma * next_q\n",
    "    \n",
    "    # 更新Q表\n",
    "    Q[state][action] = new_q\n",
    "    \n",
    "    return Q\n",
    "\n",
    "# 示例使用\n",
    "if __name__ == \"__main__\":\n",
    "    # 创建一个简单的Q表 (3个状态，2个动作)\n",
    "    Q = np.zeros((3, 2))\n",
    "    \n",
    "    # 更新Q值示例\n",
    "    state = 0\n",
    "    action = 1\n",
    "    reward = 1\n",
    "    next_state = 1\n",
    "    next_action = 0\n",
    "    \n",
    "    # 应用贝尔曼方程\n",
    "    Q = bellman_q_equation(Q, state, action, reward, next_state, next_action)\n",
    "    print(\"更新后的Q表:\")\n",
    "    print(Q)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
