{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Transformer explained"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Transformer Net\n",
    "\n",
    "encoder -- BERT\n",
    "\n",
    "decoder -- GPT\n",
    "\n",
    "#### Encoder (feature extractor)\n",
    "\n",
    "Processes the input and produces features (vectors).\n",
    "\n",
    "#### Decoder (data generator)\n",
    "\n",
    "Takes input and generates data."
   ]
  },
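  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The encoder/decoder split above can be sketched in a toy way (a minimal NumPy illustration, not BERT or GPT: mean pooling stands in for the encoder stack, and a single random projection stands in for the decoder):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Toy encoder: compress a sequence of token embeddings into one feature\n",
    "# vector (a real encoder such as BERT uses stacked self-attention layers).\n",
    "tokens = rng.normal(size=(5, 8))   # 5 tokens, embedding dim 8\n",
    "features = tokens.mean(axis=0)     # the extracted feature vector\n",
    "\n",
    "# Toy decoder: generate token ids step by step from the features\n",
    "# (a real decoder such as GPT uses masked self-attention).\n",
    "W = rng.normal(size=(8, 10))       # projection to a 10-word vocabulary\n",
    "generated = []\n",
    "for _ in range(3):\n",
    "    logits = features @ W\n",
    "    probs = np.exp(logits - logits.max())\n",
    "    probs = probs / probs.sum()\n",
    "    generated.append(int(np.argmax(probs)))   # greedy next token\n",
    "\n",
    "print(features.shape, generated)\n",
    "```"
   ]
  },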
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Guiding ideas\n",
    "\n",
    "* Bayes' theorem\n",
    "* Maximum likelihood estimation"
   ]
  },
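  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A concrete instance of both ideas (a minimal NumPy sketch; the Gaussian model, prior parameters, and seed are illustrative assumptions): the MLE of a Gaussian mean is the sample mean, while a conjugate Bayesian prior pulls the estimate toward the prior mean.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "data = rng.normal(loc=3.0, scale=1.0, size=1000)\n",
    "\n",
    "# Maximum likelihood: for a Gaussian with known variance,\n",
    "# the mean that maximizes the likelihood is the sample mean.\n",
    "mle = data.mean()\n",
    "\n",
    "# Bayes: with a conjugate N(mu0, tau0_sq) prior on the mean,\n",
    "# the posterior mean is a precision-weighted average of prior and data.\n",
    "mu0, tau0_sq, sigma_sq = 0.0, 1.0, 1.0\n",
    "n = len(data)\n",
    "post_mean = (mu0 / tau0_sq + n * mle / sigma_sq) / (1 / tau0_sq + n / sigma_sq)\n",
    "\n",
    "print(mle, post_mean)\n",
    "```\n",
    "\n",
    "With 1000 samples the likelihood dominates, so the posterior mean sits very close to the MLE; with few samples the prior would pull it further toward mu0."
   ]
  },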
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Transformer block\n",
    "\n",
    "* Multi-head self-attention\n",
    "* Feed-forward network\n",
    "* Residual connection\n",
    "* Layer normalization\n",
    "* Activation function"
   ]
  },
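  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal NumPy sketch of how these pieces compose into one block (the dimensions, random weights, single head, and ReLU choice are illustrative assumptions; real blocks use learned projections, multiple heads, and dropout):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def layer_norm(x, eps=1e-5):\n",
    "    # normalize each token vector to zero mean and unit variance\n",
    "    mu = x.mean(axis=-1, keepdims=True)\n",
    "    var = x.var(axis=-1, keepdims=True)\n",
    "    return (x - mu) / np.sqrt(var + eps)\n",
    "\n",
    "def softmax(x):\n",
    "    e = np.exp(x - x.max(axis=-1, keepdims=True))\n",
    "    return e / e.sum(axis=-1, keepdims=True)\n",
    "\n",
    "def transformer_block(x):\n",
    "    # 1) self-attention (single head here, Q = K = V = x, no projections)\n",
    "    d = x.shape[-1]\n",
    "    attn = softmax(x @ x.T / np.sqrt(d)) @ x\n",
    "    # 2) residual connection + layer normalization\n",
    "    x = layer_norm(x + attn)\n",
    "    # 3) feed-forward network with ReLU activation, then residual + norm\n",
    "    rng = np.random.default_rng(0)\n",
    "    W1 = rng.normal(size=(d, 4 * d))\n",
    "    W2 = rng.normal(size=(4 * d, d))\n",
    "    ff = np.maximum(0, x @ W1) @ W2   # ReLU is the activation function\n",
    "    return layer_norm(x + ff)\n",
    "\n",
    "x = np.random.default_rng(1).normal(size=(5, 8))   # 5 tokens, dim 8\n",
    "out = transformer_block(x)\n",
    "print(out.shape)\n",
    "```"
   ]
  },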
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Attention mechanism\n",
    "\n",
    "Q is the query and K is the key: Q comes from the current input, while K comes from the sequence being attended to (in cross-attention, the encoder output).\n",
    "\n",
    "The dot product measures similarity: Q·K = cos θ || Q || || K ||\n",
    "\n",
    "cos θ = Q·K / (|| Q || || K ||)\n",
    "\n",
    "Normalization: softmax(Q·Kᵀ / √d_k), where the scores are divided by √d_k (the square root of the key dimension) before the softmax.\n",
    "\n",
    "Comparing each token with every other token this way yields a weight for each token.\n",
    "\n",
    "Taking the weighted sum of the values then gives the output: softmax(Q·Kᵀ / √d_k)·V, where V holds the values of the attended sequence.\n",
    "\n",
    "#### Self-attention\n",
    "\n",
    "Q = K = V: queries, keys, and values are all derived from the same input.\n",
    "\n",
    "#### Multi-head attention\n",
    "\n",
    "Multi-head attention runs several heads in parallel, each computing its own attention over a slice of the dimensions, and concatenates the results."
   ]
  }
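  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The formulas above can be checked numerically (a minimal NumPy sketch; the shapes, seed, and the two-head split are illustrative assumptions, and real multi-head attention adds learned projection matrices per head):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(x):\n",
    "    e = np.exp(x - x.max(axis=-1, keepdims=True))\n",
    "    return e / e.sum(axis=-1, keepdims=True)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "Q = rng.normal(size=(4, 8))   # 4 query tokens, dim 8\n",
    "K = rng.normal(size=(4, 8))\n",
    "V = rng.normal(size=(4, 8))\n",
    "\n",
    "# dot product vs cosine: q.k equals cos(theta) * ||q|| * ||k||\n",
    "q, k = Q[0], K[0]\n",
    "cos_theta = q @ k / (np.linalg.norm(q) * np.linalg.norm(k))\n",
    "\n",
    "# attention weights: each row is a softmax, so it sums to 1\n",
    "weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))\n",
    "out = weights @ V   # weighted sum of the values\n",
    "\n",
    "# self-attention would simply set Q = K = V to the same input;\n",
    "# multi-head attention attends per slice (head dim 4) and concatenates\n",
    "heads = [softmax(Q[:, i:i+4] @ K[:, i:i+4].T / 2.0) @ V[:, i:i+4] for i in (0, 4)]\n",
    "multi = np.concatenate(heads, axis=-1)\n",
    "\n",
    "print(cos_theta, weights.sum(axis=-1), out.shape, multi.shape)\n",
    "```"
   ]
  }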
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
