<!DOCTYPE html>
<html lang="zh-hans">

<head>
    
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="HandheldFriendly" content="True" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Cache-Control" content="no-transform" />
<meta http-equiv="Cache-Control" content="no-siteapp" />
<meta name="generator" content="Hugo 0.109.0">


<link rel="shortcut icon" href="https://cdn.jsdelivr.net/gh/dsrkafuu/dsr-cdn-main@1/images/favicons/dsrca.ico" />



<title>Softmax Regression - OffSummer</title>


<meta name="author" content="RQY" />


<meta name="description" content="A minimal Hugo theme with nice theme color." />


<meta name="keywords" content="mindspore, DL" />


<meta property="og:title" content="Softmax Regression" />
<meta name="twitter:title" content="Softmax Regression" />
<meta property="og:type" content="article" />
<meta property="og:url" content="/post/02-softmax/" /><meta property="og:description" content="Softmax
We all know that probabilities lie between 0 and 1, but in Section 1, with no restriction on its domain, a linear function takes values over an unbounded interval. Is there a function that can squash this range into 0 to 1? Let us first look at the Sigmoid function" />
<meta name="twitter:description" content="Softmax
We all know that probabilities lie between 0 and 1, but in Section 1, with no restriction on its domain, a linear function takes values over an unbounded interval. Is there a function that can squash this range into 0 to 1? Let us first look at the Sigmoid function" /><meta property="og:image" content="/img/og.png" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:image" content="/img/og.png" /><meta property="article:published_time" content="2022-10-15T23:09:00+08:00" /><meta property="article:modified_time" content="2022-10-15T23:09:00+08:00" />


<style>
    @media (prefers-color-scheme: dark) {
        body[data-theme='auto'] img {
            filter: brightness(60%);
        }
    }

    body[data-theme='dark'] img {
        filter: brightness(60%);
    }
</style>




<link rel="stylesheet" href="/assets/css/fuji.min.b4a21b5d3eb1d0a51297e31230a65fc25e387843e45ec3a2d9176cd8d163c216d99b9b13a618b28f537c3b559ec8a408183b0fbfad48daddb9befa7d3ef90eed.css" integrity="sha512-tKIbXT6x0KUSl&#43;MSMKZfwl44eEPkXsOi2Rds2NFjwhbZm5sTphiyj1N8O1WeyKQIGDsPv61I2t25vvp9PvkO7Q==" />








</head>

<body
  data-theme="auto"
  data-theme-auto='true'
  >
    <script data-cfasync="false">
  
  var fujiThemeData = localStorage.getItem('fuji_data-theme');
  
  if (!fujiThemeData) {
    localStorage.setItem('fuji_data-theme', 'auto');
  } else {
    
    if (fujiThemeData !== 'auto') {
      document.body.setAttribute('data-theme', fujiThemeData === 'dark' ? 'dark' : 'light');
    }
  }
</script>

    <header>
    <div class="container-lg clearfix">
        <div class="col-12 header">
            <a class="title-main" href="/">OffSummer</a>
            
            <span class="title-sub">Summer is going, but autumn does not come yet.</span>
            
        </div>
    </div>
</header>

    <main>
        <div class="container-lg clearfix">
            
            <div class="col-12 col-md-9 float-left content">
                
<article>
    
    <h2 class="post-item post-title">
        <a href="/post/02-softmax/">Softmax Regression</a>
    </h2>
    <div class="post-item post-meta">
        <span><i class="iconfont icon-today-sharp"></i>&nbsp;2022-10-15</span>

<span><i class="iconfont icon-file-tray-sharp"></i>&nbsp;2202 words</span>
<span><i class="iconfont icon-time-sharp"></i>&nbsp;5 min</span>
<span><i class="iconfont icon-pricetags-sharp"></i>&nbsp;<a href="/tags/mindspore">mindspore</a>&nbsp;<a href="/tags/dl">DL</a>&nbsp;</span>

    </div>
    
    <div class="post-content markdown-body">
        <h2 id="softmax">Softmax</h2>
<p>We all know that probabilities lie between 0 and 1, but as we saw in Section 1, with no restriction on its domain a linear function takes values over an unbounded interval. Is there a function that can squash this range into 0 to 1? Let us first look at the Sigmoid function:</p>
<p>$$
Sigmoid(x)=\frac1{1+e^{-x}}=\frac{e^x}{1+e^x}
$$</p>
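<p>As a quick numerical sanity check (a small sketch added here, not part of the original derivation), we can evaluate the function at a few points with NumPy and confirm that the outputs stay strictly between 0 and 1:</p>

```python
import numpy as np

def sigmoid(x):
    # 1 / (1 + e^{-x}), which equals e^x / (1 + e^x)
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

# even for fairly large |x| the output never leaves (0, 1)
print(sigmoid(np.array([-20.0, -1.0, 0.0, 1.0, 20.0])))
```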
<p>Clearly this function never exceeds 1 and never falls below 0, so we can interpret its output as a two-class probability. The Softmax function generalizes this to a multi-class probability function; its expression is shown below.</p>
<p>$$
Softmax(z_i) = \frac{e^{z_i}}{\sum_j^n e^{z_j}}
$$</p>
<p>We can compute the Softmax of a matrix with the code below. To avoid numerical overflow, we exploit a property of the Softmax function and subtract the maximum element from every entry of the matrix, which leaves the result unchanged. To avoid division by zero, we add a tiny constant to the sum in the denominator.</p>
<pre><code class="language-python">def softmax(x):
    # shift by the global max for numerical stability (Softmax is shift-invariant);
    # assign instead of using -= so the caller's array is not mutated in place
    x = x - np.max(x)
    x_exp = np.exp(x)
    # a small epsilon in the denominator guards against division by zero
    partition = np.sum(x_exp, axis=1, keepdims=True) + 1e-10
    return x_exp / partition
</code></pre>
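<p>A quick sketch (not from the original post, with the function restated so the snippet is self-contained) shows why the shift matters: with large inputs, naively exponentiating would overflow, while the shifted version stays finite and each row still sums to one:</p>

```python
import numpy as np

def softmax(x):
    # same scheme as above: shift by the max, then normalize each row
    x = x - np.max(x)
    x_exp = np.exp(x)
    partition = np.sum(x_exp, axis=1, keepdims=True) + 1e-10
    return x_exp / partition

z = np.array([[1000.0, 1001.0, 1002.0]])
p = softmax(z)
# np.exp(1002.0) alone would overflow to inf; p is finite and sums to ~1
```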
<h2 id="全连接网络">Fully Connected Network</h2>
<p>Building on Section 1, we transform the output of the linear network, letting</p>
<p>$$
y = f_{activation}(WX^T+B)
$$</p>
<p>which yields a fully connected network. Here $f_{activation}(X)$ is called the activation function; it can be either linear or nonlinear. Using a nonlinear activation turns the otherwise linear output into a nonlinear one, letting the network fit a wider range of functions.</p>
<p>If the activation function is chosen as ReLU, the network can be called a fully connected neural network; here we choose the Softmax function as the activation, so we only obtain an ordinary nonlinear network:</p>
<p>$$
Z=WX^T+B
$$</p>
<p>$$
Y=Softmax(Z)
$$</p>
<pre><code class="language-python">def softmax_net(x, w, b):
    # x: (batch, num_input), w: (num_input, num_output), b: (num_output, 1)
    # z has shape (num_output, batch)
    z = np.matmul(w.T, x.T) + b
    return softmax(z)
</code></pre>
<h2 id="交叉熵损失函数">Cross-Entropy Loss</h2>
<p>For multi-class problems we generally use the cross-entropy loss, mainly because it is well suited to optimizing functions that contain exponential terms. Its expression is</p>
<p>$$
loss(\hat y, y)=-\sum_{j=1}^ny_j\ln \hat y_j
$$</p>
<pre><code class="language-python">def cross_entropy(y_hat, y):
    # y_hat: (num_classes, batch) predicted probabilities; y: (1, batch) integer labels
    # pick the predicted probability of the true class for every sample
    o = y_hat[y[0], range(y_hat.shape[1])]
    # a small epsilon guards against log(0)
    return -np.log(o + 1e-5)
</code></pre>
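<p>As a usage sketch (the toy numbers below are made up for illustration): with <code>y_hat</code> of shape (num_classes, batch) and integer labels <code>y</code> of shape (1, batch), the fancy index picks exactly the predicted probability of the true class for each sample, so the one-hot sum reduces to $-\ln \hat y_i$ per sample:</p>

```python
import numpy as np

def cross_entropy(y_hat, y):
    o = y_hat[y[0], range(y_hat.shape[1])]
    return -np.log(o + 1e-5)

# columns are samples, rows are classes
y_hat = np.array([[0.7, 0.1],
                  [0.2, 0.8],
                  [0.1, 0.1]])
y = np.array([[0, 1]])          # true classes of the two samples
loss = cross_entropy(y_hat, y)  # picks 0.7 and 0.8: approximately [-ln 0.7, -ln 0.8]
```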
<h2 id="梯度下降">Gradient Descent</h2>
<p>Since we use the cross-entropy loss together with one-hot encoding, when $i$ is the correct class we have $y_i=1, y_j=0, i\neq j$, so only the correct term survives in the loss, which degenerates to $l(\hat y, y)=-\ln \hat y_i$. Taking the gradient with respect to $\hat y_i$ gives</p>
<p>$$
\frac{\partial loss}{\partial \hat y_i}=-\frac{1}{\hat y_i}
$$</p>
<p>Let the sum of the exponentials of all incorrect terms be $A=\sum_{j \neq i} e^{z_j}$. By the Softmax formula, $\hat y_i=\frac{e^{z_i}}{e^{z_i}+A}$. Differentiating, the gradient for the correct class is</p>
<p>$$
\frac{\partial \hat y_i}{\partial z_i}=\frac{e^{z_i}(e^{z_i}+A)-e^{z_i}e^{z_i}}{(e^{z_i}+A)^2}=\frac{e^{z_i}}{e^{z_i}+A}\frac{A}{e^{z_i}+A}=\hat y_i (1- \hat y_i)
$$</p>
<p>Next we differentiate with respect to an incorrect class $k$. Let the sum of the exponentials of all other terms, including the correct one, be $B=\sum_{j \neq k} e^{z_j}$. Proceeding as above, we obtain the gradient</p>
<p>$$
\frac{\partial \hat y_i}{\partial z_k}=-\frac{e^{z_i}}{B+e^{z_k}}\frac{e^{z_k}}{B+e^{z_k}}=-\hat y_i \hat y_k
$$</p>
<p>By the chain rule we obtain the gradient of the cross-entropy loss with respect to the Softmax inputs. The result has a neat pattern: subtract one from the Softmax output of the correct class and leave the others unchanged. We denote this by $\hat y'$.</p>
<p>$$
\frac{\partial loss}{\partial z_i}=\frac{\partial loss}{\partial \hat y_i}\frac{\partial \hat y_i}{\partial z_i}=\hat y_i-1
$$</p>
<p>$$
\frac{\partial loss}{\partial z_k}=\frac{\partial loss}{\partial \hat y_i}\frac{\partial \hat y_i}{\partial z_k}=\hat y_k
$$</p>
<p>Finally, the gradient of the cross-entropy loss with respect to the weights and bias follows easily:</p>
<p>$$
\frac{\partial z}{\partial w}=x, \frac{\partial z}{\partial b}=1
$$</p>
<p>$$
\frac{\partial loss}{\partial w}=\hat y' x, \frac{\partial loss}{\partial b}=\hat y'
$$</p>
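<p>The $\hat y'$ rule can be verified numerically. The sketch below (the helper names <code>softmax_vec</code> and <code>loss_fn</code> are introduced here for illustration) compares the analytic gradient against a central finite difference:</p>

```python
import numpy as np

def softmax_vec(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def loss_fn(z, i):
    # cross-entropy loss of a single sample whose true class is i
    return -np.log(softmax_vec(z)[i])

z = np.array([0.5, -1.0, 2.0])
i = 2                        # pretend class 2 is the correct one
analytic = softmax_vec(z)
analytic[i] -= 1.0           # subtract 1 at the true class, leave the rest

eps = 1e-6
numeric = np.zeros_like(z)
for k in range(z.size):
    d = np.zeros_like(z)
    d[k] = eps
    numeric[k] = (loss_fn(z + d, i) - loss_fn(z - d, i)) / (2 * eps)
# numeric and analytic agree up to finite-difference error
```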
<p>Following the derivation above, we use numpy to update the weights and bias with the gradient of the cross-entropy loss through the Softmax function.</p>
<pre><code class="language-python">def softmax_sgd(y_hat, x, y, w, b, lr, batch_size):
    # manually apply the gradient of cross entropy w.r.t. z:
    # subtract 1 at the true class of each sample (the y-hat-prime rule)
    y_hat[y[0], range(y_hat.shape[1])] -= 1

    # dloss/dw = y_hat' x, averaged over the batch
    grad_w = np.matmul(y_hat, x).squeeze(axis=0).T
    new_w = w - lr * grad_w / batch_size

    # dloss/db = y_hat', summed over the batch
    grad_b = np.sum(y_hat, axis=1, keepdims=True)
    new_b = b - lr * grad_b / batch_size

    return new_w, new_b
</code></pre>
<h2 id="评估精确度">Evaluating Accuracy</h2>
<p>We take the row with the largest Softmax output in each column as the predicted label, so we only need to compare it against the ground-truth labels to count the number of correct predictions.</p>
<pre><code class="language-python">def evaluate_accuracy(y_hat, y):
    # predicted label = row index of the max probability in each column
    right_item = np.sum(np.argmax(y_hat, axis=0) == y)
    return right_item
</code></pre>
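<p>A tiny usage sketch (toy numbers, not from the post, with the function restated so the snippet runs on its own): with three samples in the columns, the predicted labels are the argmax of each column, and we count how many match:</p>

```python
import numpy as np

def evaluate_accuracy(y_hat, y):
    # predicted label = row index of the max probability in each column
    return np.sum(np.argmax(y_hat, axis=0) == y)

y_hat = np.array([[0.7, 0.1, 0.2],
                  [0.2, 0.8, 0.3],
                  [0.1, 0.1, 0.5]])
y = np.array([0, 1, 1])
print(evaluate_accuracy(y_hat, y))  # predictions [0, 1, 2] vs labels [0, 1, 1]: 2 correct
```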
<h2 id="mnist数据集">The MNIST Dataset</h2>
<p>Now let us test on a real dataset.</p>
<p>mindvision is a toolkit for mindspore and can be installed with <code>pip install mindvision</code>.</p>
<p>MNIST is a dataset of handwritten digits. Using the <code>Mnist</code> class from <code>mindvision.classification.dataset</code>, the following two calls download it to a given location:</p>
<pre><code class="language-python">    Mnist(path='/shareData/mindspore-dataset/Mnist',
          split=&quot;train&quot;, download=True).download_dataset()
    Mnist(path='/shareData/mindspore-dataset/Mnist',
          split=&quot;test&quot;, download=True).download_dataset()
</code></pre>
<p>Then, following the approach from <a href="https://blog.csdn.net/justidle/article/details/103146658" target="_blank">使用NumPy读取MNIST数据</a> (reading MNIST data with NumPy), we use the code below (it lives in util.datasets) to read the data and convert it into numpy arrays:</p>
<pre><code class="language-python">def load_mnist(path, split='train', reshape=False):
    &quot;&quot;&quot;
    reference:
    https://blog.csdn.net/justidle/article/details/103146658
    &quot;&quot;&quot;
    labels_path = os.path.join(path, f'{split}-labels-idx1-ubyte')
    images_path = os.path.join(path, f'{split}-images-idx3-ubyte')
    with open(labels_path, 'rb') as lb_path:
        magic, n = struct.unpack('&gt;II', lb_path.read(8))
        labels = np.fromfile(lb_path, dtype=np.uint8)

    with open(images_path, 'rb') as img_path:
        magic, num, rows, cols = struct.unpack('&gt;IIII', img_path.read(16))
        if reshape:
            images = np.fromfile(img_path, dtype=np.uint8).reshape(len(labels), 28, 28, 1)
        else:
            images = np.fromfile(img_path, dtype=np.uint8).reshape(len(labels), 28 * 28)
    return images, labels
</code></pre>
<p>The data can then be loaded with:</p>
<pre><code class="language-python">    features_t, labels_t = load_mnist('/shareData/mindspore-dataset/Mnist/train')
    features_v, labels_v = load_mnist('/shareData/mindspore-dataset/Mnist/test', split='t10k')
</code></pre>
<p>To display the images, import the Image module from PIL (<code>from PIL import Image</code>) and use the following code:</p>
<pre><code class="language-python">    for _ in range(10):
        im = features_v[_, :].reshape(28, 28)
        im = Image.fromarray(im)
        im.show()
</code></pre>
<p>We normalize the image data, mapping it from the range 0 to 255 into the range 0 to 1.</p>
<pre><code class="language-python">    features_t = features_t.astype(np.float64) / 255
    features_v = features_v.astype(np.float64) / 255
</code></pre>
<h2 id="批处理">Batching</h2>
<p>The batching here is essentially the same as in Section 1; the only difference is in the indexing.</p>
<pre><code class="language-python">def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    np.random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        # np.mat yields a (1, b) index matrix, so each batch carries a leading axis
        batch_indices = np.mat(indices[i:min(i + batch_size, num_examples)])
        yield features[batch_indices, :], labels[batch_indices]

</code></pre>
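<p>A quick shape check (a sketch that replaces <code>np.mat</code> with an equivalent (1, b) index array, which produces the same shapes) shows why the training loop later calls <code>x.squeeze(axis=0)</code>: every batch carries a leading axis of size 1.</p>

```python
import numpy as np

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    np.random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        # a (1, b) index array, mirroring the shape np.mat produces in the post
        batch_indices = np.array(indices[i:min(i + batch_size, num_examples)])[None, :]
        yield features[batch_indices, :], labels[batch_indices]

features = np.random.rand(10, 4)
labels = np.arange(10)
x, y = next(data_iter(4, features, labels))
# x has shape (1, 4, 4) and y has shape (1, 4): note the leading batch axis
```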
<h2 id="处理流程">Training Procedure</h2>
<p>First, we fix the model's hyperparameters. (Mind the choice of learning rate: one that is too high prevents learning altogether. When I set it to 0.1, the accuracy barely changed while the loss kept climbing instead. Ugh.)</p>
<pre><code class="language-python">    batch_size = 256
    lr = 0.0001
    epochs = 10

    net = softmax_net
    loss = cross_entropy
</code></pre>
<p>Next, initialize the weight and bias matrices.</p>
<pre><code class="language-python">    w = np.random.normal(0, 0.01, (num_input, num_output))
    b = np.zeros((num_output, 1))
</code></pre>
<p>Finally, the training code, which is similar to Section 1. The only difference is that accuracy is accumulated inside the inner loop, to avoid wasting time recomputing the same values.</p>
<pre><code class="language-python">    for epoch in range(epochs):
        train_loss, right, total = 0, 0, 0
        for x, y in data_iter(batch_size, features_t, labels_t):
            y_hat_t = net(x.squeeze(axis=0), w, b)
            train_loss += loss(y_hat_t, y).sum()

            right += evaluate_accuracy(y_hat_t, y)
            total += y.shape[1]

            w, b = softmax_sgd(y_hat_t, x, y, w, b, lr, batch_size)

        train_acc = right / total
        y_hat_v = net(features_v, w, b)
        valid_acc = evaluate_accuracy(y_hat_v, labels_v) / len(labels_v)

        print(
            f'epoch [{epoch + 1}/{epochs}], loss is {float(train_loss / len(labels_t)):f}, train accuracy is {train_acc}, valid accuracy is {valid_acc}')
</code></pre>
<h2 id="运行结果">Results</h2>
<p>From the output we can see that the accuracy ends up at only about 65%, which is not very high.</p>
<pre><code class="language-shell">epoch [1/10], loss is 5.538449, train accuracy is 0.10166666666666667, valid accuracy is 0.133
epoch [2/10], loss is 5.513865, train accuracy is 0.18388333333333334, valid accuracy is 0.2307
epoch [3/10], loss is 5.489963, train accuracy is 0.2880333333333333, valid accuracy is 0.3373
epoch [4/10], loss is 5.466770, train accuracy is 0.38366666666666666, valid accuracy is 0.4284
epoch [5/10], loss is 5.444432, train accuracy is 0.46321666666666667, valid accuracy is 0.4989
epoch [6/10], loss is 5.422886, train accuracy is 0.5247, valid accuracy is 0.5511
epoch [7/10], loss is 5.402007, train accuracy is 0.5696333333333333, valid accuracy is 0.5903
epoch [8/10], loss is 5.381965, train accuracy is 0.6034666666666667, valid accuracy is 0.6209
epoch [9/10], loss is 5.362727, train accuracy is 0.62905, valid accuracy is 0.6408
epoch [10/10], loss is 5.344362, train accuracy is 0.6508333333333334, valid accuracy is 0.6578
</code></pre>
<p>(Updated October 15, 2022)</p>
    </div>
</article>




            </div>
            <aside class="col-12 col-md-3 float-left sidebar">
    
    <div class="sidebar-item sidebar-pages">
        <h3>Pages</h3>
        <ul>
            
            <li>
                <a href="/">Home</a>
            </li>
            
            <li>
                <a href="/archives/">Archives</a>
            </li>
            
            <li>
                <a href="/about/">About</a>
            </li>
            
            <li>
                <a href="/search/">Search</a>
            </li>
            
            <li>
                <a href="/index.xml">RSS</a>
            </li>
            
        </ul>
    </div>
    
    <div class="sidebar-item sidebar-links">
        <h3>Links</h3>
        <ul>
            
            <li>
                <a href="https://github.com/ruaqy" target="_blank"><span>GitHub</span></a>
            </li>
            
            <li>
                <a href="https://gitee.com/ruqy" target="_blank"><span>Gitee</span></a>
            </li>
            
            <li>
                <a href="https://space.bilibili.com/13382902" target="_blank"><span>Bilibili</span></a>
            </li>
            
        </ul>
    </div>
    
    <div class="sidebar-item sidebar-tags">
        <h3>Tags</h3>
        <div>
            
            <span>
                <a href="/tags/dl/">DL</a>
            </span>
            
            <span>
                <a href="/tags/make-up/">Make Up</a>
            </span>
            
            <span>
                <a href="/tags/matlab/">MATLAB</a>
            </span>
            
            <span>
                <a href="/tags/mindspore/">mindspore</a>
            </span>
            
            <span>
                <a href="/tags/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/">Machine Learning</a>
            </span>
            
        </div>
    </div>
    <div class="sidebar-item sidebar-toc">
        <h3>Contents</h3><nav id="TableOfContents">
  <ul>
    <li><a href="#softmax">Softmax</a></li>
    <li><a href="#全连接网络">Fully Connected Network</a></li>
    <li><a href="#交叉熵损失函数">Cross-Entropy Loss</a></li>
    <li><a href="#梯度下降">Gradient Descent</a></li>
    <li><a href="#评估精确度">Evaluating Accuracy</a></li>
    <li><a href="#mnist数据集">The MNIST Dataset</a></li>
    <li><a href="#批处理">Batching</a></li>
    <li><a href="#处理流程">Training Procedure</a></li>
    <li><a href="#运行结果">Results</a></li>
  </ul>
</nav></div>
</aside>

        </div>
        <div class="btn">
    <div class="btn-menu" id="btn-menu">
        <i class="iconfont icon-grid-sharp"></i>
    </div>
    <div class="btn-toggle-mode">
        <i class="iconfont icon-contrast-sharp"></i>
    </div>
    <div class="btn-scroll-top">
        <i class="iconfont icon-chevron-up-circle-sharp"></i>
    </div>
</div>
<aside class="sidebar-mobile" style="display: none;">
  <div class="sidebar-wrapper">
    
    <div class="sidebar-item sidebar-pages">
        <h3>Pages</h3>
        <ul>
            
            <li>
                <a href="/">Home</a>
            </li>
            
            <li>
                <a href="/archives/">Archives</a>
            </li>
            
            <li>
                <a href="/about/">About</a>
            </li>
            
            <li>
                <a href="/search/">Search</a>
            </li>
            
            <li>
                <a href="/index.xml">RSS</a>
            </li>
            
        </ul>
    </div>
    
    <div class="sidebar-item sidebar-links">
        <h3>Links</h3>
        <ul>
            
            <li>
                <a href="https://github.com/ruaqy" target="_blank"><span>GitHub</span></a>
            </li>
            
            <li>
                <a href="https://gitee.com/ruqy" target="_blank"><span>Gitee</span></a>
            </li>
            
            <li>
                <a href="https://space.bilibili.com/13382902" target="_blank"><span>Bilibili</span></a>
            </li>
            
        </ul>
    </div>
    
    <div class="sidebar-item sidebar-tags">
        <h3>Tags</h3>
        <div>
            
            <span>
                <a href="/tags/dl/">DL</a>
            </span>
            
            <span>
                <a href="/tags/make-up/">Make Up</a>
            </span>
            
            <span>
                <a href="/tags/matlab/">MATLAB</a>
            </span>
            
            <span>
                <a href="/tags/mindspore/">mindspore</a>
            </span>
            
            <span>
                <a href="/tags/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/">Machine Learning</a>
            </span>
            
        </div>
    </div>
    
    
    
    <div class="sidebar-item sidebar-toc">
        <h3>Contents</h3>
        <nav id="TableOfContents">
  <ul>
    <li><a href="#softmax">Softmax</a></li>
    <li><a href="#全连接网络">Fully Connected Network</a></li>
    <li><a href="#交叉熵损失函数">Cross-Entropy Loss</a></li>
    <li><a href="#梯度下降">Gradient Descent</a></li>
    <li><a href="#评估精确度">Evaluating Accuracy</a></li>
    <li><a href="#mnist数据集">The MNIST Dataset</a></li>
    <li><a href="#批处理">Batching</a></li>
    <li><a href="#处理流程">Training Procedure</a></li>
    <li><a href="#运行结果">Results</a></li>
  </ul>
</nav>
    </div>
    
    
  </div>
</aside>
    </main>

    <footer>
    <div class="container-lg clearfix">
        <div class="col-12 footer">
            
            <p>
                Unless otherwise noted, content on this site is licensed under <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/" target="_blank">CC BY-NC-SA 4.0</a>.
            </p>
            
            <span>&copy; 2023
                <a href="/">RQY</a>
                 | <a href="https://github.com/dsrkafuu/hugo-theme-fuji">Source code</a> 
                | Built with <a href="https://github.com/dsrkafuu/hugo-theme-fuji/"
                   target="_blank">Fuji-v2</a> &amp; <a href="https://gohugo.io/"
                                                     target="_blank">Hugo</a>
            </span>
        </div>
    </div>
</footer>

    
<script defer src="https://cdn.jsdelivr.net/npm/medium-zoom@1.0.6/dist/medium-zoom.min.js" integrity="sha512-N9IJRoc3LaP3NDoiGkcPa4gG94kapGpaA5Zq9/Dr04uf5TbLFU5q0o8AbRhLKUUlp8QFS2u7S+Yti0U7QtuZvQ==" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/lazysizes@5.3.2/lazysizes.min.js" integrity="sha512-q583ppKrCRc7N5O0n2nzUiJ+suUv7Et1JGels4bXOaMFQcamPk9HjdUknZuuFjBNs7tsMuadge5k9RzdmO+1GQ==" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/prismjs@1.27.0/components/prism-core.min.js" integrity="sha512-LCKPTo0gtJ74zCNMbWw04ltmujpzSR4oW+fgN+Y1YclhM5ZrHCZQAJE4quEodcI/G122sRhSGU2BsSRUZ2Gu3w==" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/prismjs@1.27.0/plugins/autoloader/prism-autoloader.min.js" integrity="sha512-GP4x8UWxWyh4BMbyJGOGneiTbkrWEF5izsVJByzVLodP8CuJH/n936+yQDMJJrOPUHLgyPbLiGw2rXmdvGdXHA==" crossorigin="anonymous"></script>



<script defer src="/assets/js/fuji.min.645f1123be695831f419ab54c1bcba327325895c740014006e57070d4f3e5d6b553e929c4b46f40ea707249e9c7f7c2a446d32a39ce7319f80a34525586a8e0f.js" integrity="sha512-ZF8RI75pWDH0GatUwby6MnMliVx0ABQAblcHDU8&#43;XWtVPpKcS0b0DqcHJJ6cf3wqRG0yo5znMZ&#43;Ao0UlWGqODw=="></script>

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.15.3/dist/katex.min.css" integrity="sha512-07YhC3P4/vS5HdgGuNAAeIxb5ee//efgRNo5AGdMtqFBUPYOdQG/sDK0Nl5qNq94kdEk/Pvu8pmN4GYUeucUkw==" crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/katex@0.15.3/dist/katex.min.js" integrity="sha512-aMDiFsrEV3KzAn9EHwyBRS7y1APjZWt/Z/73ukLN2Ca2KcGGzlOQFQSnfOdnEcehpwMaQ8edlDB/0cMX2GsHbg==" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/katex@0.15.3/dist/contrib/auto-render.min.js" integrity="sha512-ZA/RPrAo88DlwRnnoNVqKINnQNcWERzRK03PDaA4GIJiVZvGFIWQbdWCsUebMZfkWohnfngsDjXzU6PokO4jGw==" crossorigin="anonymous"></script>
<script>
  renderMathInElement(document.querySelector('div.content'), {
    delimiters: [
      { left: '$$', right: '$$', display: true },
      { left: '\\[', right: '\\]', display: true },
      { left: '$', right: '$', display: false },
      { left: '\\(', right: '\\)', display: false },
    ],
  });
</script>




</body>

</html>
