<!DOCTYPE html>
<html lang="en-us">
    <head><meta charset='utf-8'>
<meta name='viewport' content='width=device-width, initial-scale=1'><meta name='description' content='线性回归 线性模型 当我们的输入包含d个特征时，预测结果$\hat{y}$可表示为： $$ \hat{y} = w_{1}x_{1} &#43; &amp;hellip; &#43; w_{d}x_{d} &#43; b $$ $w$称为权重（weight），$b$称为偏置（bias）或偏移量（offset）。权重决定了每个特征对我们预测值的影响。偏置是指当所有特征都取值为0时，预测值应该为多少。
用向量 $\mathbf{x} \in \mathbb{R}^{d}$ 和 $\mathbf{w} \in \mathbb{R}^{d}$ 简洁表达模型： $$ \hat{\mathbf{y}} = \mathbf{w}^{\top} \mathbf{x} &#43; b $$ 向量 $\mathbf{x}$ 对应于单个数据样本的特征。用符号表示的矩阵 $\mathbf{X} \in \mathbb{R}^{n \times d}$ 可以很方便地引用我们整个数据集的 $n$ 个样本。其中，$\mathbf{X}$ 的每一行是一个样本，每一列是一种特征。 $$ \hat{\mathbf{y}} = \mathbf{X} \mathbf{w} &#43; b $$
def linreg(X, w, b): #@save &amp;#34;&amp;#34;&amp;#34;线性回归模型。&amp;#34;&amp;#34;&amp;#34; return tf.matmul(X, w) &#43; b 损失函数 损失函数能够量化目标的实际值与预测值之间的差距。通常我们会选择非负数作为损失，且数值越小表示损失越小，完美预测时的损失为0。常用的损失函数是平方误差函数。当样本 $i$ 的预测值为 $\hat{\mathbf{y}}{i}$ ，其相应的真实标签为 $y{i}$ 时，平方误差可以定义为以下公式： $$ l^{(i)}(w, b) = \frac{1}{2}(\hat{\mathbf{y}}{i} - y{i})^{2} $$'><title>机器学习篇章之线性回归</title>

<link rel='canonical' href='https://enrique518.gitee.io/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/'>

<link rel="stylesheet" href="/scss/style.min.css"><meta property='og:title' content='机器学习篇章之线性回归'>
<meta property='og:description' content='线性回归 线性模型 当我们的输入包含d个特征时，预测结果$\hat{y}$可表示为： $$ \hat{y} = w_{1}x_{1} &#43; &amp;hellip; &#43; w_{d}x_{d} &#43; b $$ $w$称为权重（weight），$b$称为偏置（bias）或偏移量（offset）。权重决定了每个特征对我们预测值的影响。偏置是指当所有特征都取值为0时，预测值应该为多少。
用向量 $\mathbf{x} \in \mathbb{R}^{d}$ 和 $\mathbf{w} \in \mathbb{R}^{d}$ 简洁表达模型： $$ \hat{\mathbf{y}} = \mathbf{w}^{\top} \mathbf{x} &#43; b $$ 向量 $\mathbf{x}$ 对应于单个数据样本的特征。用符号表示的矩阵 $\mathbf{X} \in \mathbb{R}^{n \times d}$ 可以很方便地引用我们整个数据集的 $n$ 个样本。其中，$\mathbf{X}$ 的每一行是一个样本，每一列是一种特征。 $$ \hat{\mathbf{y}} = \mathbf{X} \mathbf{w} &#43; b $$
def linreg(X, w, b): #@save &amp;#34;&amp;#34;&amp;#34;线性回归模型。&amp;#34;&amp;#34;&amp;#34; return tf.matmul(X, w) &#43; b 损失函数 损失函数能够量化目标的实际值与预测值之间的差距。通常我们会选择非负数作为损失，且数值越小表示损失越小，完美预测时的损失为0。常用的损失函数是平方误差函数。当样本 $i$ 的预测值为 $\hat{\mathbf{y}}{i}$ ，其相应的真实标签为 $y{i}$ 时，平方误差可以定义为以下公式： $$ l^{(i)}(w, b) = \frac{1}{2}(\hat{\mathbf{y}}{i} - y{i})^{2} $$'>
<meta property='og:url' content='https://enrique518.gitee.io/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/'>
<meta property='og:site_name' content='Enriqueliu'>
<meta property='og:type' content='article'><meta property='article:section' content='Post' /><meta property='article:published_time' content='2021-11-10T00:00:00&#43;00:00'/><meta property='article:modified_time' content='2021-11-10T00:00:00&#43;00:00'/>
<meta name="twitter:title" content="机器学习篇章之线性回归">
<meta name="twitter:description" content="线性回归 线性模型 当我们的输入包含d个特征时，预测结果$\hat{y}$可表示为： $$ \hat{y} = w_{1}x_{1} &#43; &amp;hellip; &#43; w_{d}x_{d} &#43; b $$ $w$称为权重（weight），$b$称为偏置（bias）或偏移量（offset）。权重决定了每个特征对我们预测值的影响。偏置是指当所有特征都取值为0时，预测值应该为多少。
用向量 $\mathbf{x} \in \mathbb{R}^{d}$ 和 $\mathbf{w} \in \mathbb{R}^{d}$ 简洁表达模型： $$ \hat{\mathbf{y}} = \mathbf{w}^{\top} \mathbf{x} &#43; b $$ 向量 $\mathbf{x}$ 对应于单个数据样本的特征。用符号表示的矩阵 $\mathbf{X} \in \mathbb{R}^{n \times d}$ 可以很方便地引用我们整个数据集的 $n$ 个样本。其中，$\mathbf{X}$ 的每一行是一个样本，每一列是一种特征。 $$ \hat{\mathbf{y}} = \mathbf{X} \mathbf{w} &#43; b $$
def linreg(X, w, b): #@save &amp;#34;&amp;#34;&amp;#34;线性回归模型。&amp;#34;&amp;#34;&amp;#34; return tf.matmul(X, w) &#43; b 损失函数 损失函数能够量化目标的实际值与预测值之间的差距。通常我们会选择非负数作为损失，且数值越小表示损失越小，完美预测时的损失为0。常用的损失函数是平方误差函数。当样本 $i$ 的预测值为 $\hat{\mathbf{y}}{i}$ ，其相应的真实标签为 $y{i}$ 时，平方误差可以定义为以下公式： $$ l^{(i)}(w, b) = \frac{1}{2}(\hat{\mathbf{y}}{i} - y{i})^{2} $$">
    </head>
    <body class="
    article-page has-toc
">
    <script>
        (function() {
            const colorSchemeKey = 'StackColorScheme';
            if(!localStorage.getItem(colorSchemeKey)){
                localStorage.setItem(colorSchemeKey, "auto");
            }
        })();
    </script><script>
    (function() {
        const colorSchemeKey = 'StackColorScheme';
        const colorSchemeItem = localStorage.getItem(colorSchemeKey);
        const supportDarkMode = window.matchMedia('(prefers-color-scheme: dark)').matches === true;

        if (colorSchemeItem == 'dark' || colorSchemeItem === 'auto' && supportDarkMode) {
            

            document.documentElement.dataset.scheme = 'dark';
        } else {
            document.documentElement.dataset.scheme = 'light';
        }
    })();
</script>
<div class="container main-container flex 
    
        extended
    
">
    
        <div id="article-toolbar">
            <a href="https://enrique518.gitee.io/" class="back-home">
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-chevron-left" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <polyline points="15 6 9 12 15 18" />
</svg>



                <span>Back</span>
            </a>
        </div>
    
<main class="main full-width">
    <article class="main-article">
    <header class="article-header">

    <div class="article-details">
    
    <header class="article-category">
        
            <a href="/categories/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/" >
                Machine Learning
            </a>
        
    </header>
    

    <h2 class="article-title">
        <a href="/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/">Machine Learning: Linear Regression</a>
    </h2>

    

    
    <footer class="article-time">
        
            <div>
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-calendar-time" width="56" height="56" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <path d="M11.795 21h-6.795a2 2 0 0 1 -2 -2v-12a2 2 0 0 1 2 -2h12a2 2 0 0 1 2 2v4" />
  <circle cx="18" cy="18" r="4" />
  <path d="M15 3v4" />
  <path d="M7 3v4" />
  <path d="M3 11h16" />
  <path d="M18 16.496v1.504l1 1" />
</svg>
                <time class="article-time--published">Nov 10, 2021</time>
            </div>
        

        
            <div>
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-clock" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <circle cx="12" cy="12" r="9" />
  <polyline points="12 7 12 12 15 15" />
</svg>



                <time class="article-time--reading">
                    Reading time: 2 minutes
                </time>
            </div>
        
    </footer>
    
</div>
</header>

    <section class="article-content">
    <h1 id="线性回归">Linear Regression</h1>
<h2 id="线性模型">The Linear Model</h2>
<p>When the input consists of <strong>d</strong> features, the prediction $\hat{y}$ can be expressed as:
$$
\hat{y} = w_{1}x_{1} + \ldots + w_{d}x_{d} + b
$$
$w$ is called the weight and $b$ the bias (or offset). The weights determine how much each feature influences the prediction; the bias is the value the prediction should take when every feature is 0.</p>
<p>Using the vectors $\mathbf{x} \in \mathbb{R}^{d}$ and $\mathbf{w} \in \mathbb{R}^{d}$, the model can be written compactly:
$$
\hat{y} = \mathbf{w}^{\top} \mathbf{x} + b
$$
The vector $\mathbf{x}$ holds the features of a single sample. The matrix $\mathbf{X} \in \mathbb{R}^{n \times d}$ conveniently refers to all $n$ samples in the dataset: each row of $\mathbf{X}$ is one sample, and each column is one feature.
$$
\hat{\mathbf{y}} = \mathbf{X} \mathbf{w} + b
$$</p>
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">linreg</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">):</span>  <span class="c1">#@save</span>
    <span class="s2">&#34;&#34;&#34;The linear regression model.&#34;&#34;&#34;</span>
    <span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">w</span><span class="p">)</span> <span class="o">+</span> <span class="n">b</span>
</code></pre></div><h2 id="损失函数">The Loss Function</h2>
<p>A loss function quantifies the gap between the target's true value and the predicted value. We usually choose a non-negative loss, where smaller values mean a smaller error and a perfect prediction has loss 0. A common choice is the squared error. When the prediction for sample $i$ is $\hat{y}^{(i)}$ and its true label is $y^{(i)}$, the squared error is defined as:
$$
l^{(i)}(\mathbf{w}, b) = \frac{1}{2}\left(\hat{y}^{(i)} - y^{(i)}\right)^{2}
$$</p>
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">squared_loss</span><span class="p">(</span><span class="n">y_hat</span><span class="p">,</span> <span class="n">y</span><span class="p">):</span>  <span class="c1">#@save</span>
    <span class="s2">&#34;&#34;&#34;Squared loss.&#34;&#34;&#34;</span>
    <span class="k">return</span> <span class="p">(</span><span class="n">y_hat</span> <span class="o">-</span> <span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">y_hat</span><span class="o">.</span><span class="n">shape</span><span class="p">))</span> <span class="o">**</span> <span class="mi">2</span> <span class="o">/</span> <span class="mi">2</span>
</code></pre></div><p>Because of the square in this loss, larger differences between the estimate $\hat{y}^{(i)}$ and the observation $y^{(i)}$ contribute disproportionately more to the loss. To measure the quality of a model on the whole dataset, we compute the mean loss over the $n$ training samples:
$$
L(\mathbf{w}, b) = \frac{1}{n} \sum^{n}_{i=1} l^{(i)}(\mathbf{w}, b)
$$</p>
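<p>As a quick numerical check of the mean loss, here is a small NumPy sketch (NumPy is used purely for illustration, since the article's code uses TensorFlow, and all values are hypothetical):</p>

```python
import numpy as np

def squared_loss_np(y_hat, y):
    # Per-sample squared error, matching l^{(i)} = (y_hat^{(i)} - y^{(i)})^2 / 2
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

y_hat = np.array([2.0, 0.5, 1.0])  # hypothetical predictions
y = np.array([1.5, 0.0, 1.0])      # hypothetical labels

per_sample = squared_loss_np(y_hat, y)  # the individual l^{(i)} values
L = per_sample.mean()                   # the dataset loss: mean over n samples
print(per_sample)  # [0.125 0.125 0.   ]
print(L)
```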
<h2 id="小批量随机梯度下降">Minibatch Stochastic Gradient Descent</h2>
<p>The simplest use of gradient descent computes the derivative (here also called the gradient) of the loss function, i.e. the mean loss over every sample in the dataset, with respect to the model parameters. Because this would require a full pass over the data for every update, in practice we draw a small random batch of samples whenever an update is needed; this is called minibatch stochastic gradient descent.</p>
<p>In each iteration we first sample a minibatch $\mathcal{B}$ consisting of a fixed number of training samples. We then compute the gradient of the minibatch's mean loss with respect to the model parameters. Finally, we multiply the gradient by a predetermined positive value $\eta$ (the learning rate) and subtract the result from the current parameter values.</p>
<p>$$
\mathbf{g} \leftarrow \partial_{(\mathbf{w},b)} \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} l(\mathbf{x}^{(i)}, y^{(i)}, \mathbf{w}, b)
$$</p>
<p>$$
(\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \eta \mathbf{g}
$$</p>
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">sgd</span><span class="p">(</span><span class="n">params</span><span class="p">,</span> <span class="n">grads</span><span class="p">,</span> <span class="n">lr</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">):</span>  <span class="c1">#@save</span>
    <span class="s2">&#34;&#34;&#34;Minibatch stochastic gradient descent.&#34;&#34;&#34;</span>
    <span class="k">for</span> <span class="n">param</span><span class="p">,</span> <span class="n">grad</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">params</span><span class="p">,</span> <span class="n">grads</span><span class="p">):</span>
        <span class="n">param</span><span class="o">.</span><span class="n">assign_sub</span><span class="p">(</span><span class="n">lr</span><span class="o">*</span><span class="n">grad</span><span class="o">/</span><span class="n">batch_size</span><span class="p">)</span>

<span class="n">lr</span> <span class="o">=</span> <span class="mf">0.03</span>
<span class="n">num_epochs</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">net</span> <span class="o">=</span> <span class="n">linreg</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">squared_loss</span>

<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_epochs</span><span class="p">):</span>
    <span class="k">for</span> <span class="n">X</span><span class="p">,</span> <span class="n">y</span> <span class="ow">in</span> <span class="n">data_iter</span><span class="p">(</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">features</span><span class="p">,</span> <span class="n">labels</span><span class="p">):</span>
        <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">g</span><span class="p">:</span>
            <span class="n">l</span> <span class="o">=</span> <span class="n">loss</span><span class="p">(</span><span class="n">net</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">),</span> <span class="n">y</span><span class="p">)</span>  <span class="c1"># minibatch loss of `X` and `y`</span>
        <span class="c1"># compute the gradient of l with respect to [`w`, `b`]</span>
        <span class="n">dw</span><span class="p">,</span> <span class="n">db</span> <span class="o">=</span> <span class="n">g</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">l</span><span class="p">,</span> <span class="p">[</span><span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">])</span>
        <span class="c1"># update the parameters using their gradients</span>
        <span class="n">sgd</span><span class="p">([</span><span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">],</span> <span class="p">[</span><span class="n">dw</span><span class="p">,</span> <span class="n">db</span><span class="p">],</span> <span class="n">lr</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">)</span>
    <span class="n">train_l</span> <span class="o">=</span> <span class="n">loss</span><span class="p">(</span><span class="n">net</span><span class="p">(</span><span class="n">features</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">b</span><span class="p">),</span> <span class="n">labels</span><span class="p">)</span>
    <span class="k">print</span><span class="p">(</span><span class="n">f</span><span class="s1">&#39;epoch {epoch + 1}, loss {float(tf.reduce_mean(train_l)):f}&#39;</span><span class="p">)</span>
</code></pre></div><p>Note that the gradient computed here is not divided by batch_size when it is computed; the division by batch_size happens inside the parameter update instead. The order of the computation changes, but the result does not.</p>
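<p>That equivalence can be checked numerically. In this hypothetical NumPy sketch, averaging the per-sample gradients first gives exactly the same update as summing them and dividing by batch_size inside the update step:</p>

```python
import numpy as np

# Hypothetical per-sample gradients for a batch of size 4
per_sample_grads = np.array([0.2, -0.4, 0.6, 0.8])
lr, batch_size = 0.1, 4
w = 1.0

# Average inside the loss, then take a plain gradient step:
w_mean_first = w - lr * per_sample_grads.mean()

# Sum inside the loss, divide by batch_size in the update (as `sgd` does):
w_divide_in_update = w - lr * per_sample_grads.sum() / batch_size

print(w_mean_first, w_divide_in_update)  # identical
```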
<h2 id="softmax回归">Softmax Regression</h2>
<p>Softmax regression is typically used for classification. It is a model with multiple outputs, one per class, and each output has its own affine function. Suppose we have 4 features and 3 possible output classes; then we need 12 scalars for the weights (the subscripted $w$) and 3 scalars for the biases (the subscripted $b$).</p>
<p>$$
\begin{aligned}
o_1 &amp;= x_1 w_{11} + x_2 w_{12} + x_3 w_{13} + x_4 w_{14} + b_1,\\
o_2 &amp;= x_1 w_{21} + x_2 w_{22} + x_3 w_{23} + x_4 w_{24} + b_2,\\
o_3 &amp;= x_1 w_{31} + x_2 w_{32} + x_3 w_{33} + x_4 w_{34} + b_3.
\end{aligned}
$$</p>
<p>Like linear regression, softmax regression is a single-layer neural network. Since each output $o_1$, $o_2$, and $o_3$ depends on all of the inputs $x_1$, $x_2$, $x_3$, and $x_4$, the output layer of softmax regression is also a fully connected layer.</p>
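<p>The three affine outputs above amount to a single matrix-vector product. A NumPy sketch with hypothetical weight and bias values (NumPy used here only for illustration):</p>

```python
import numpy as np

# Hypothetical numbers: 4 features, 3 classes, hence 12 weights and 3 biases
x = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[0.0, 0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6, 0.7],
              [0.8, 0.9, 1.0, 1.1]])
b = np.array([0.1, 0.2, 0.3])

# One matrix-vector product computes all three outputs:
# o_k = x_1 w_{k1} + x_2 w_{k2} + x_3 w_{k3} + x_4 w_{k4} + b_k
o = W @ x + b
print(o)  # [ 2.1  6.2 10.3]
```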
<p><figure 
	>
	<a href="http://d2l.ai/_images/softmaxreg.svg" >
		<img src="http://d2l.ai/_images/softmaxreg.svg"
			
			
			
			loading="lazy"
			alt="Softmax regression is a single-layer neural network.">
	</a>
	
	<figcaption>Softmax regression is a single-layer neural network.</figcaption>
	
</figure></p>
<h3 id="计算模型">Computing the Model</h3>
<p>The softmax consists of three steps: (1) exponentiate each term (using exp); (2) sum over each row (each sample in the minibatch is one row) to obtain each sample's normalization constant; (3) divide each row by its normalization constant, so that the entries of each row sum to 1.</p>
<p>$$
\mathrm{softmax}(\mathbf{X})_{ij} = \frac{\exp(\mathbf{X}_{ij})}{\sum_k \exp(\mathbf{X}_{ik})}.
$$</p>
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">softmax</span><span class="p">(</span><span class="n">X</span><span class="p">):</span>
    <span class="n">X_exp</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="n">X</span><span class="p">)</span>
    <span class="n">partition</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">X_exp</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">keepdims</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">X_exp</span> <span class="o">/</span> <span class="n">partition</span>  <span class="c1"># broadcasting is applied here</span>
</code></pre></div>
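<p>As a usage check, every row of the softmax output should sum to 1. The NumPy sketch below (illustrative only) additionally subtracts each row's maximum before exponentiating, a standard numerical-stability trick that is not in the implementation above; it leaves the result unchanged because softmax is shift-invariant, while a naive exp would overflow for large inputs:</p>

```python
import numpy as np

def softmax_np(X):
    # Subtract each row's max before exp: same result, no overflow
    X_shift = X - X.max(axis=1, keepdims=True)
    X_exp = np.exp(X_shift)
    return X_exp / X_exp.sum(axis=1, keepdims=True)

X = np.array([[1.0, 2.0, 3.0],
              [1000.0, 1000.0, 1000.0]])  # naive exp(1000.) would overflow
P = softmax_np(X)
print(P.sum(axis=1))  # every row sums to 1
```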
</section>


    <footer class="article-footer">
    

    
    <section class="article-copyright">
        <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-copyright" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <circle cx="12" cy="12" r="9" />
  <path d="M14.5 9a3.5 4 0 1 0 0 6" />
</svg>



        <span>Licensed under CC BY-NC-SA 4.0</span>
    </section>
    </footer>


    
        <link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/katex.min.css"integrity="sha384-RZU/ijkSsFbcmivfdRBQDtwuwVqK7GMOw6IMvKyeWL2K5UAlyp6WonmB8m7Jd0Hn"crossorigin="anonymous"
            ><script 
                src="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/katex.min.js"integrity="sha384-pK1WpvzWVBQiP0/GjnvRxV4mOb0oxFuyRxJlk6vVw146n3egcN5C925NCP7a7BY8"crossorigin="anonymous"
                defer="true"
                >
            </script><script 
                src="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/contrib/auto-render.min.js"integrity="sha384-vZTG03m&#43;2yp6N6BNi5iM4rW4oIwk5DfcNdFfxkk9ZWpDriOkXX8voJBFrAO7MpVl"crossorigin="anonymous"
                defer="true"
                >
            </script><script>
    window.addEventListener("DOMContentLoaded", () => {
        renderMathInElement(document.querySelector(`.article-content`), {
            delimiters: [
                { left: "$$", right: "$$", display: true },
                { left: "$", right: "$", display: false },
                { left: "\\(", right: "\\)", display: false },
                { left: "\\[", right: "\\]", display: true }
            ]
        });})
</script>
    
</article>

    <aside class="related-contents--wrapper">
    
    
        <h2 class="section-title">Related Articles</h2>
        <div class="related-contents">
            <div class="flex article-list--tile">
                
                    
<article class="">
    <a href="/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A003/">
        
        

        <div class="article-details">
            <h2 class="article-title">Formula Recognition Based on Deep Learning</h2>
        </div>
    </a>
</article>
                
                    
<article class="">
    <a href="/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A004/">
        
        

        <div class="article-details">
            <h2 class="article-title">Literature Review</h2>
        </div>
    </a>
</article>
                
                    
<article class="">
    <a href="/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A002/">
        
        

        <div class="article-details">
            <h2 class="article-title">Machine Learning: Kaggle Competition (House Price Prediction)</h2>
        </div>
    </a>
</article>
                
                    
<article class="">
    <a href="/p/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A001/">
        
        

        <div class="article-details">
            <h2 class="article-title">Machine Learning: The Perceptron</h2>
        </div>
    </a>
</article>
                
            </div>
        </div>
    
</aside>

     
     
        
    <div class="disqus-container">
    <div id="disqus_thread"></div>
<script type="application/javascript">
    var disqus_config = function () {
    
    
    
    };
    (function() {
        if (["localhost", "127.0.0.1"].indexOf(window.location.hostname) != -1) {
            document.getElementById('disqus_thread').innerHTML = 'Disqus comments not available by default when the website is previewed locally.';
            return;
        }
        var d = document, s = d.createElement('script'); s.async = true;
        s.src = '//' + "hugo-theme-stack" + '.disqus.com/embed.js';
        s.setAttribute('data-timestamp', +new Date());
        (d.head || d.body).appendChild(s);
    })();
</script>
<noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
<a href="https://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>
</div>

<style>
    .disqus-container {
        background-color: var(--card-background);
        border-radius: var(--card-border-radius);
        box-shadow: var(--shadow-l1);
        padding: var(--card-padding);
    }
</style>

<script>
    window.addEventListener('onColorSchemeChange', (e) => {
        if (DISQUS) {
            DISQUS.reset({
                reload: true
            });
        }
    })
</script>

    

    <footer class="site-footer">
    <section class="copyright">
        &copy; 
        
            2020 - 
        
        2022 Enriqueliu
    </section>
    
    <section class="powerby">
        Built with <a href="https://gohugo.io/" target="_blank" rel="noopener">Hugo</a> <br />
        Theme <b><a href="https://github.com/CaiJimmy/hugo-theme-stack" target="_blank" rel="noopener" data-version="3.2.0">Stack</a></b> designed by <a href="https://jimmycai.com" target="_blank" rel="noopener">Jimmy</a>
    </section>
</footer>


    
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">

    
    <div class="pswp__bg"></div>

    
    <div class="pswp__scroll-wrap">

        
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>

        
        <div class="pswp__ui pswp__ui--hidden">

            <div class="pswp__top-bar">

                

                <div class="pswp__counter"></div>

                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>

                <button class="pswp__button pswp__button--share" title="Share"></button>

                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>

                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>

                
                
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                        <div class="pswp__preloader__cut">
                            <div class="pswp__preloader__donut"></div>
                        </div>
                    </div>
                </div>
            </div>

            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div>
            </div>

            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>

            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>

            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>

        </div>

    </div>

</div><script 
                src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.js"integrity="sha256-ePwmChbbvXbsO02lbM3HoHbSHTHFAeChekF1xKJdleo="crossorigin="anonymous"
                defer="true"
                >
            </script><script 
                src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe-ui-default.min.js"integrity="sha256-UKkzOn/w1mBxRmLLGrSeyB4e1xbrp4xylgAWb3M42pU="crossorigin="anonymous"
                defer="true"
                >
            </script><link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/default-skin/default-skin.css"integrity="sha256-c0uckgykQ9v5k&#43;IqViZOZKc47Jn7KQil4/MP3ySA3F8="crossorigin="anonymous"
            ><link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.css"integrity="sha256-SBLU4vv6CA6lHsZ1XyTdhyjJxCjPif/TRkjnsyGAGnE="crossorigin="anonymous"
            >

            </main>
    
        <aside class="sidebar right-sidebar sticky">
            <section class="widget archives">
                <div class="widget-icon">
                    <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-hash" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <line x1="5" y1="9" x2="19" y2="9" />
  <line x1="5" y1="15" x2="19" y2="15" />
  <line x1="11" y1="4" x2="7" y2="20" />
  <line x1="17" y1="4" x2="13" y2="20" />
</svg>



                </div>
                <h2 class="widget-title section-title">Table of Contents</h2>
                
                <div class="widget--toc">
                    <nav id="TableOfContents">
  <ol>
    <li><a href="#线性模型">The Linear Model</a></li>
    <li><a href="#损失函数">The Loss Function</a></li>
    <li><a href="#小批量随机梯度下降">Minibatch Stochastic Gradient Descent</a></li>
    <li><a href="#softmax回归">Softmax Regression</a>
      <ol>
        <li><a href="#计算模型">Computing the Model</a></li>
      </ol>
    </li>
  </ol>
</nav>
                </div>
            </section>
        </aside>
    

        </div>
        <script 
                src="https://cdn.jsdelivr.net/npm/node-vibrant@3.1.5/dist/vibrant.min.js"integrity="sha256-5NovOZc4iwiAWTYIFiIM7DxKUXKWvpVEuMEPLzcm5/g="crossorigin="anonymous"
                defer="false"
                >
            </script><script type="text/javascript" src="/ts/main.js" defer></script>
<script>
    (function () {
        const customFont = document.createElement('link');
        customFont.href = "https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700&display=swap";

        customFont.type = "text/css";
        customFont.rel = "stylesheet";

        document.head.appendChild(customFont);
    }());
</script>

    </body>
</html>
