<!DOCTYPE html>
<html dir="ltr" lang="en">
<head>
<meta charset="utf-8"/>
<meta content="width=device-width, initial-scale=1.0" name="viewport"/>
<meta content="TensorFlow Notes (II): the neural-network optimization process, covering network complexity, exponentially decaying learning rate, activation functions, loss functions, underfitting and overfitting, regularization to reduce overfitting, and optimizers that update network parameters." name="description"/>
<meta content="#FFFFFF" name="theme-color"/>
<meta content="light dark" name="color-scheme"/><meta content="" property="og:title"/>
<meta content="TensorFlow Notes (II): the neural-network optimization process, covering network complexity, exponentially decaying learning rate, activation functions, loss functions, underfitting and overfitting, regularization to reduce overfitting, and optimizers that update network parameters." property="og:description"/>
<meta content="article" property="og:type"/>
<meta content="https://helloputong.gitee.io/notes/tensorflow/tensorflow-%E7%AC%94%E8%AE%B0%E4%BA%8C/" property="og:url"/><meta content="notes" property="article:section"/>
<title>TensorFlow Notes (II) | Hello! 噗通 🍀</title>
<link href="/manifest.json" rel="manifest"/>
<link href="/favicon.png" rel="icon" type="image/x-icon"/>
<link crossorigin="anonymous" href="/book.min.a82d7e77ceb134d151c4d7e381eeb30623fbd5a524d58c584d8716ecec0205bd.css" integrity="sha256-qC1+d86xNNFRxNfjge6zBiP71aUk1YxYTYcW7OwCBb0=" rel="stylesheet"/>
<script defer="" src="/flexsearch.min.js"></script>
<script crossorigin="anonymous" defer="" integrity="sha256-+pR/j4Voa/VXQmH38FekcfPx1IEWD5WAkNOJruKNmQk=" src="/zh.search.min.fa947f8f85686bf5574261f7f057a471f3f1d481160f958090d389aee28d9909.js"></script>
<script crossorigin="anonymous" defer="" integrity="sha256-b2+Q/LjrHEnsOJg45rgB0N4ZQwuOUWkC+NdcPIvZhzk=" src="/sw.min.6f6f90fcb8eb1c49ec389838e6b801d0de19430b8e516902f8d75c3c8bd98739.js"></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
-->
</head>
<body dir="ltr">
<input class="hidden toggle" id="menu-control" type="checkbox"/>
<input class="hidden toggle" id="toc-control" type="checkbox"/>
<main class="container flex">
<div class="book-page">
<header class="book-header">
<div class="flex align-center justify-between">
<label for="menu-control">
<img alt="Menu" class="book-icon" src="/svg/menu.svg"/>
</label>
<strong>TensorFlow Notes (II)</strong>
<label for="toc-control">
<img alt="Table of Contents" class="book-icon" src="/svg/toc.svg"/>
</label>
</div>
<aside class="hidden clearfix">
<nav id="TableOfContents">
<ul>
<li><a href="#tensorflow-笔记二">TensorFlow Notes (II)</a>
<ul>
<li><a href="#1-预备知识">1. Prerequisites</a></li>
<li><a href="#2-神经网络复杂度">2. Neural-Network Complexity</a></li>
<li><a href="#3-指数衰减学习率">3. Exponentially Decaying Learning Rate</a></li>
<li><a href="#4激活函数">4. Activation Functions</a>
<ul>
<li><a href="#41-sigmoid-函数">4.1 The Sigmoid Function</a></li>
<li><a href="#42-relu函数">4.2 The ReLU Function</a></li>
<li><a href="#42-总结">4.3 Summary</a></li>
</ul>
</li>
<li><a href="#5损失函数">5. Loss Functions</a>
<ul>
<li><a href="#51-均方误差损失函数">5.1 Mean Squared Error Loss</a></li>
<li><a href="#52-自定义损失函数">5.2 Custom Loss Functions</a></li>
<li><a href="#53-交叉熵损失函数cross-entropy">5.3 Cross-Entropy Loss</a></li>
</ul>
</li>
<li><a href="#6-欠拟合与过拟合">6. Underfitting and Overfitting</a>
<ul>
<li><a href="#61-解决方案">6.1 Remedies</a></li>
<li><a href="#62-正则化缓解过拟合">6.2 Regularization to Mitigate Overfitting</a></li>
</ul>
</li>
<li><a href="#7-优化器">7. Optimizers</a>
<ul>
<li><a href="#71-更新参数四步骤">7.1 Four Steps of a Parameter Update</a></li>
<li><a href="#72-sgd无momentum常用的梯度下降法">7.2 SGD (no momentum), the standard gradient-descent method</a></li>
<li><a href="#73-sgdm含momentum的sgd在-sgd-基础上增加了一阶动量">7.3 SGDM (SGD with momentum): adds first-order momentum to SGD</a></li>
<li><a href="#74-adagrad-在-sgd-基础上增加二阶动量">7.4 Adagrad: adds second-order momentum to SGD</a></li>
<li><a href="#75-rmspropsgd基础上增加二阶动量">7.5 RMSProp: adds second-order momentum to SGD</a></li>
<li><a href="#76-adam同时结合-sgdm-的一阶动量和-rmsprop-二阶动量">7.6 Adam: combines SGDM's first-order momentum and RMSProp's second-order momentum</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</aside>
</header>
<article class="markdown"><h1 id="tensorflow-笔记二">
  TensorFlow Notes (II)
  <a class="anchor" href="#tensorflow-%e7%ac%94%e8%ae%b0%e4%ba%8c">#</a>
</h1>
<p>This note covers the neural-network optimization process, mainly:</p>
<ol>
<li>Neural-network complexity</li>
<li>Exponentially decaying learning rate</li>
<li>Activation functions</li>
<li>Loss functions</li>
<li>Underfitting and overfitting</li>
<li>Regularization to reduce overfitting</li>
<li>Optimizers for updating network parameters</li>
</ol>
<h2 id="1-预备知识">
  1. Prerequisites
  <a class="anchor" href="#1-%e9%a2%84%e5%a4%87%e7%9f%a5%e8%af%86">#</a>
</h2>
<ul>
<li><code>tf.where(condition, a, b)</code>: element-wise selection; where the condition is true, take the element from <code>a</code>, otherwise take it from <code>b</code></li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>a <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>constant([<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">3</span>, <span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">1</span>])
</span></span><span style="display:flex;"><span>b <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>constant([<span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">3</span>, <span style="color:#ae81ff">4</span>, <span style="color:#ae81ff">5</span>])
</span></span><span style="display:flex;"><span>c <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>where(tf<span style="color:#f92672">.</span>greater(a, b), a, b) <span style="color:#75715e"># where a&gt;b, take a's element; otherwise take b's</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Output: c = tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)</span>
</span></span></code></pre></div><ul>
<li><code>np.random.RandomState.rand(shape)</code>: returns random numbers in [0, 1)</li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>rdm <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>random<span style="color:#f92672">.</span>RandomState(seed<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>)
</span></span><span style="display:flex;"><span>a <span style="color:#f92672">=</span> rdm<span style="color:#f92672">.</span>rand() <span style="color:#75715e"># returns a single random scalar</span>
</span></span><span style="display:flex;"><span>b <span style="color:#f92672">=</span> rdm<span style="color:#f92672">.</span>rand(<span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">3</span>) <span style="color:#75715e"># returns a 2-row, 3-column matrix of random numbers</span>
</span></span></code></pre></div><ul>
<li><code>np.vstack((array1, array2))</code>: stacks two arrays vertically</li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>a <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>array([<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">3</span>])
</span></span><span style="display:flex;"><span>b <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>array([<span style="color:#ae81ff">4</span>, <span style="color:#ae81ff">5</span>, <span style="color:#ae81ff">6</span>])
</span></span><span style="display:flex;"><span>c <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>vstack((a, b))
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Result:</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># [[1 2 3]</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#  [4 5 6]]</span>
</span></span></code></pre></div><ul>
<li><code>np.mgrid[]</code>, <code>.ravel()</code>, <code>np.c_[]</code></li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>np<span style="color:#f92672">.</span>mgrid[start:stop:step, start:stop:step, <span style="color:#f92672">...</span>] <span style="color:#75715e"># returns one array per slice; each array has as many dimensions as there are slices</span>
</span></span><span style="display:flex;"><span>x<span style="color:#f92672">.</span>ravel() <span style="color:#75715e"># flattens x into a one-dimensional array ("straightens out" the variable before the dot)</span>
</span></span><span style="display:flex;"><span>np<span style="color:#f92672">.</span>c_[] <span style="color:#75715e"># pairs up the returned grid points column by column</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Example:</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> numpy <span style="color:#66d9ef">as</span> np
</span></span><span style="display:flex;"><span>x, y <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>mgrid[<span style="color:#ae81ff">1</span>:<span style="color:#ae81ff">3</span>:<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">2</span>:<span style="color:#ae81ff">4</span>:<span style="color:#ae81ff">0.5</span>]
</span></span><span style="display:flex;"><span>grid <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>c_[x<span style="color:#f92672">.</span>ravel(), y<span style="color:#f92672">.</span>ravel()]
</span></span></code></pre></div><h2 id="2-神经网络复杂度">
  2. Neural-Network Complexity
  <a class="anchor" href="#2-%e7%a5%9e%e7%bb%8f%e7%bd%91%e7%bb%9c%e5%a4%8d%e6%9d%82%e5%ba%a6">#</a>
</h2>
<p>Space complexity:</p>
<ul>
<li>Number of layers = number of hidden layers + 1 output layer</li>
<li>Total parameters = all w + all b</li>
</ul>
<p>Time complexity:</p>
<ul>
<li>The number of multiply-accumulate operations</li>
</ul>
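<p>For example (a sketch; the layer sizes are hypothetical), a fully connected network with 4 input features, one hidden layer of 8 neurons, and 2 outputs:</p>
<pre><code class="language-python"># Space complexity: count parameters layer by layer
n_in, n_hidden, n_out = 4, 8, 2

w_params = n_in * n_hidden + n_hidden * n_out   # all w: 32 + 16 = 48
b_params = n_hidden + n_out                     # all b: 8 + 2 = 10
total_params = w_params + b_params              # 58

# Time complexity: multiply-accumulate operations of one forward pass
mult_adds = n_in * n_hidden + n_hidden * n_out  # 48
</code></pre>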
<h2 id="3-指数衰减学习率">
  3. Exponentially Decaying Learning Rate
  <a class="anchor" href="#3-%e6%8c%87%e6%95%b0%e8%a1%b0%e5%87%8f%e5%ad%a6%e4%b9%a0%e7%8e%87">#</a>
</h2>
<p>Start with a relatively large learning rate to reach a good solution quickly, then gradually decrease it so the model remains stable in the later stages of training.

<link href="/katex/katex.min.css" rel="stylesheet"/>
<script defer="" src="/katex/katex.min.js"></script>
<script defer="" onload="renderMathInElement(document.body);" src="/katex/auto-render.min.js"></script><span>
  \[\text{decayed learning rate} = \text{initial learning rate} \cdot \text{decay rate}^{\,\text{current epoch} / \text{decay steps}}\]
</span>
</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>EPOCHS <span style="color:#f92672">=</span> <span style="color:#ae81ff">40</span>
</span></span><span style="display:flex;"><span>LR_BASE <span style="color:#f92672">=</span> <span style="color:#ae81ff">0.2</span>
</span></span><span style="display:flex;"><span>LR_DECAY <span style="color:#f92672">=</span> <span style="color:#ae81ff">0.99</span>
</span></span><span style="display:flex;"><span>LR_STEP <span style="color:#f92672">=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>w <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>Variable(tf<span style="color:#f92672">.</span>constant(<span style="color:#ae81ff">5</span>, dtype<span style="color:#f92672">=</span>tf<span style="color:#f92672">.</span>float32)) <span style="color:#75715e"># trainable parameter (initial value assumed)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> epoch <span style="color:#f92672">in</span> range(EPOCHS):
</span></span><span style="display:flex;"><span>  lr <span style="color:#f92672">=</span> LR_BASE <span style="color:#f92672">*</span> LR_DECAY <span style="color:#f92672">**</span> (epoch <span style="color:#f92672">/</span> LR_STEP)
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">with</span> tf<span style="color:#f92672">.</span>GradientTape() <span style="color:#66d9ef">as</span> tape:
</span></span><span style="display:flex;"><span>    loss <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>square(w <span style="color:#f92672">+</span> <span style="color:#ae81ff">1</span>)
</span></span><span style="display:flex;"><span>  grads <span style="color:#f92672">=</span> tape<span style="color:#f92672">.</span>gradient(loss, w)
</span></span><span style="display:flex;"><span>  w<span style="color:#f92672">.</span>assign_sub(lr <span style="color:#f92672">*</span> grads)
</span></span></code></pre></div><h2 id="4激活函数">
  4. Activation Functions
  <a class="anchor" href="#4%e6%bf%80%e6%b4%bb%e5%87%bd%e6%95%b0">#</a>
</h2>
<h3 id="41-sigmoid-函数">
  4.1 The Sigmoid Function
  <a class="anchor" href="#41-sigmoid-%e5%87%bd%e6%95%b0">#</a>
</h3>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>tf<span style="color:#f92672">.</span>nn<span style="color:#f92672">.</span>sigmoid(x)
</span></span></code></pre></div><span>
  \[f(x) = \frac{1}{1+{e^{-x}}}\]
</span>
<p>Characteristics:</p>
<ol>
<li>Prone to vanishing gradients (its derivative is at most 0.25)</li>
<li>Outputs are not zero-centered, so convergence is slow</li>
<li>The exponential is costly to compute, so training takes longer</li>
</ol>
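<p>The vanishing-gradient point can be checked numerically; the sketch below uses plain NumPy rather than TensorFlow:</p>
<pre><code class="language-python">import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
grad = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of the sigmoid

# The derivative peaks at 0.25 (at x = 0) and is nearly 0 for large |x|,
# which is why deep sigmoid networks suffer from vanishing gradients.
</code></pre>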
<h3 id="42-relu函数">
  4.2 The ReLU Function
  <a class="anchor" href="#42-relu%e5%87%bd%e6%95%b0">#</a>
</h3>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>tf<span style="color:#f92672">.</span>nn<span style="color:#f92672">.</span>relu(x)
</span></span></code></pre></div><span>
  \[f(x) = \max(x, 0) = \begin{cases}0, &amp; x&lt;0 \\ x, &amp; x \ge0\end{cases}\]
</span>
<p>Advantages:</p>
<ol>
<li>Avoids vanishing gradients in the positive region</li>
<li>Only requires a comparison with 0, so it is fast to compute</li>
<li>Converges much faster than <code>sigmoid</code> and <code>tanh</code></li>
</ol>
<p>Disadvantages:</p>
<ol>
<li>Outputs are not zero-centered, so convergence is slow</li>
<li>Some neurons may never activate ("dead" ReLUs), so their parameters are never updated</li>
</ol>
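<p>The "dead ReLU" problem can also be seen numerically (a NumPy sketch):</p>
<pre><code class="language-python">import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
y = relu(x)
grad = np.sign(y)  # derivative of relu: 1 in the positive region, else 0

# Inputs that stay negative receive zero gradient, so a neuron whose input
# is always negative never updates its weights ("dead" ReLU).
</code></pre>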
<h3 id="42-总结">
  4.3 Summary
  <a class="anchor" href="#42-%e6%80%bb%e7%bb%93">#</a>
</h3>
<ul>
<li>
<p>Prefer the ReLU activation function</p>
</li>
<li>
<p>Use a relatively small learning rate</p>
</li>
<li>
<p>Standardize input features, i.e. make them follow a normal distribution with mean 0 and standard deviation 1</p>
</li>
<li>
<p>Center the initial parameters, i.e. draw them from a normal distribution with mean 0 and standard deviation <span>
  \(\sqrt\frac{2}{n_{\text{in}}}\)
</span>
</p>
<p>where <span>\(n_{\text{in}}\)</span> is the number of input features of the current layer.</p>
</li>
</ul>
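<p>The last two defaults can be sketched with NumPy (the layer sizes and seed below are made up for illustration):</p>
<pre><code class="language-python">import numpy as np

rng = np.random.default_rng(0)
n_in = 128  # number of input features of the current layer (hypothetical)

# Centered initial parameters: mean 0, stddev sqrt(2 / n_in)
w = rng.normal(loc=0.0, scale=np.sqrt(2.0 / n_in), size=(n_in, 64))

# Standardized input features: mean 0, stddev 1 per feature
x = rng.uniform(0.0, 10.0, size=(32, n_in))
x = (x - x.mean(axis=0)) / x.std(axis=0)
</code></pre>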
<h2 id="5损失函数">
  5. Loss Functions
  <a class="anchor" href="#5%e6%8d%9f%e5%a4%b1%e5%87%bd%e6%95%b0">#</a>
</h2>
<p>The loss function measures the gap between the prediction (y) and the known answer (y_).</p>
<p>The optimization goal is to minimize the loss. The main loss functions are:</p>
<ul>
<li>MSE (Mean Squared Error)</li>
<li>Custom losses</li>
<li>CE (Cross Entropy)</li>
</ul>
<h3 id="51-均方误差损失函数">
  5.1 Mean Squared Error Loss
  <a class="anchor" href="#51-%e5%9d%87%e6%96%b9%e8%af%af%e5%b7%ae%e6%8d%9f%e5%a4%b1%e5%87%bd%e6%95%b0">#</a>
</h3>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>loss_mse <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>reduce_mean(tf<span style="color:#f92672">.</span>square(y_<span style="color:#f92672">-</span>y))
</span></span></code></pre></div><p><span>
  \[MSE(y\_, y) = \frac{\sum^n_{i=1}(y-y\_)^2}{n}\]
</span>

Example:</p>
<blockquote>
<p>Predict daily yogurt sales y, where <span>
  \(x_1\)
</span>
 and <span>
  \(x_2\)
</span>
are factors that influence daily sales.</p>
</blockquote>
<blockquote>
<p>Before modeling, the data to collect are: daily <span>
  \(x_1\)
</span>
 , <span>
  \(x_2\)
</span>
and daily sales <span>
  \(y\_\)
</span>
 (the known answer; in the best case, production = sales). Here we fabricate a dataset <span>
  \(X\)
</span>
, <span>
  \(Y\_\)
</span>
  with  <span>
  \(y\_=x_1 + x_2\)
</span>
   plus noise in <span>
  \(-0.05 \sim +0.05\)
</span>
 , then fit a function that predicts sales.</p>
</blockquote>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">import</span> tensorflow <span style="color:#66d9ef">as</span> tf
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> numpy <span style="color:#66d9ef">as</span> np
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>SEED <span style="color:#f92672">=</span> <span style="color:#ae81ff">23455</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>rdm <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>random<span style="color:#f92672">.</span>RandomState(SEED)
</span></span><span style="display:flex;"><span>x <span style="color:#f92672">=</span> rdm<span style="color:#f92672">.</span>rand(<span style="color:#ae81ff">32</span>, <span style="color:#ae81ff">2</span>)
</span></span><span style="display:flex;"><span>y_ <span style="color:#f92672">=</span> [[x1 <span style="color:#f92672">+</span> x2 <span style="color:#f92672">+</span> (rdm<span style="color:#f92672">.</span>rand() <span style="color:#f92672">/</span> <span style="color:#ae81ff">10.0</span> <span style="color:#f92672">-</span> <span style="color:#ae81ff">0.05</span>)] <span style="color:#66d9ef">for</span> (x1, x2) <span style="color:#f92672">in</span> x]
</span></span><span style="display:flex;"><span>x <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>cast(x, dtype<span style="color:#f92672">=</span>tf<span style="color:#f92672">.</span>float32)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>w1 <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>Variable(tf<span style="color:#f92672">.</span>random<span style="color:#f92672">.</span>normal([<span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">1</span>], stddev<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>, seed<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>))
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>epoch <span style="color:#f92672">=</span> <span style="color:#ae81ff">15000</span>
</span></span><span style="display:flex;"><span>lr <span style="color:#f92672">=</span> <span style="color:#ae81ff">0.002</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> epoch <span style="color:#f92672">in</span> range(epoch):
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">with</span> tf<span style="color:#f92672">.</span>GradientTape() <span style="color:#66d9ef">as</span> tape:
</span></span><span style="display:flex;"><span>        y <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>matmul(x, w1)
</span></span><span style="display:flex;"><span>        loss_mse <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>reduce_mean(tf<span style="color:#f92672">.</span>square(y_ <span style="color:#f92672">-</span> y))
</span></span><span style="display:flex;"><span>    grads <span style="color:#f92672">=</span> tape<span style="color:#f92672">.</span>gradient(loss_mse, w1)
</span></span><span style="display:flex;"><span>    w1<span style="color:#f92672">.</span>assign_sub(lr <span style="color:#f92672">*</span> grads)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> epoch <span style="color:#f92672">%</span> <span style="color:#ae81ff">500</span> <span style="color:#f92672">==</span> <span style="color:#ae81ff">0</span>:
</span></span><span style="display:flex;"><span>        print(<span style="color:#e6db74">"After </span><span style="color:#e6db74">%d</span><span style="color:#e6db74"> training steps, w1 is "</span> <span style="color:#f92672">%</span> (epoch))
</span></span><span style="display:flex;"><span>        print(w1<span style="color:#f92672">.</span>numpy(), <span style="color:#e6db74">"</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">"</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>print(<span style="color:#e6db74">"Final w1 is: "</span>, w1<span style="color:#f92672">.</span>numpy())
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Output:</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#   ...</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#   Final w1 is:  [[1.0009792]</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#    [0.9977485]]</span>
</span></span></code></pre></div><h3 id="52-自定义损失函数">
  5.2 Custom Loss Functions
  <a class="anchor" href="#52-%e8%87%aa%e5%ae%9a%e4%b9%89%e6%8d%9f%e5%a4%b1%e5%87%bd%e6%95%b0">#</a>
</h3>
<p>As in the sales prediction of 5.1: over-predicting wastes cost, while under-predicting loses profit. If <span>
  \(\text{profit}\neq\text{cost}\)
</span>
, the loss produced by MSE cannot maximize profit.
<span>
  \[\text{Custom loss: } loss(y\_,y)=\Sigma{f(y\_, y)}\]
</span>

Optimizing the loss function of 5.1:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>loss_zdy <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>reduce_sum(tf<span style="color:#f92672">.</span>where(tf<span style="color:#f92672">.</span>greater(y, y_), COST <span style="color:#f92672">*</span> (y<span style="color:#f92672">-</span>y_), PROFIT <span style="color:#f92672">*</span> (y_<span style="color:#f92672">-</span>y)))
</span></span></code></pre></div><p><span>
  \[f(y\_,y)=\begin{cases}PROFIT*(y\_-y), &amp; y&lt;y\_ &amp; \text{under-prediction: profit lost}\\ COST*(y-y\_), &amp; y \ge y\_ &amp; \text{over-prediction: cost lost} \end{cases}\]
</span>

Example:</p>
<blockquote>
<p>Predict yogurt sales, where the cost (COST) is 1 yuan and the profit (PROFIT) is 99 yuan. Under-predicting loses 99 yuan of profit per unit, while over-predicting loses only 1 yuan of cost. Since under-prediction hurts more, the learned prediction function should lean toward predicting more.</p>
</blockquote>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">import</span> tensorflow <span style="color:#66d9ef">as</span> tf
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> numpy <span style="color:#66d9ef">as</span> np
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>SEED <span style="color:#f92672">=</span> <span style="color:#ae81ff">23455</span>
</span></span><span style="display:flex;"><span>COST <span style="color:#f92672">=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>PROFIT <span style="color:#f92672">=</span> <span style="color:#ae81ff">99</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>rdm <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>random<span style="color:#f92672">.</span>RandomState(SEED)
</span></span><span style="display:flex;"><span>x <span style="color:#f92672">=</span> rdm<span style="color:#f92672">.</span>rand(<span style="color:#ae81ff">32</span>, <span style="color:#ae81ff">2</span>)
</span></span><span style="display:flex;"><span>y_ <span style="color:#f92672">=</span> [[x1 <span style="color:#f92672">+</span> x2 <span style="color:#f92672">+</span> (rdm<span style="color:#f92672">.</span>rand() <span style="color:#f92672">/</span> <span style="color:#ae81ff">10.0</span> <span style="color:#f92672">-</span> <span style="color:#ae81ff">0.05</span>)] <span style="color:#66d9ef">for</span> (x1, x2) <span style="color:#f92672">in</span> x]
</span></span><span style="display:flex;"><span>x <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>cast(x, dtype<span style="color:#f92672">=</span>tf<span style="color:#f92672">.</span>float32)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>w1 <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>Variable(tf<span style="color:#f92672">.</span>random<span style="color:#f92672">.</span>normal([<span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">1</span>], stddev<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>, seed<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>))
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>epoch <span style="color:#f92672">=</span> <span style="color:#ae81ff">15000</span>
</span></span><span style="display:flex;"><span>lr <span style="color:#f92672">=</span> <span style="color:#ae81ff">0.002</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> epoch <span style="color:#f92672">in</span> range(epoch):
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">with</span> tf<span style="color:#f92672">.</span>GradientTape() <span style="color:#66d9ef">as</span> tape:
</span></span><span style="display:flex;"><span>        y <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>matmul(x, w1)
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># loss_mse = tf.reduce_mean(tf.square(y_ - y))</span>
</span></span><span style="display:flex;"><span>        loss <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>reduce_sum(tf<span style="color:#f92672">.</span>where(tf<span style="color:#f92672">.</span>greater(y, y_), (y<span style="color:#f92672">-</span>y_) <span style="color:#f92672">*</span> COST, (y_<span style="color:#f92672">-</span>y) <span style="color:#f92672">*</span> PROFIT))
</span></span><span style="display:flex;"><span>    grads <span style="color:#f92672">=</span> tape<span style="color:#f92672">.</span>gradient(loss, w1)
</span></span><span style="display:flex;"><span>    w1<span style="color:#f92672">.</span>assign_sub(lr <span style="color:#f92672">*</span> grads)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> epoch <span style="color:#f92672">%</span> <span style="color:#ae81ff">500</span> <span style="color:#f92672">==</span> <span style="color:#ae81ff">0</span>:
</span></span><span style="display:flex;"><span>        print(<span style="color:#e6db74">"After </span><span style="color:#e6db74">%d</span><span style="color:#e6db74"> training steps, w1 is "</span> <span style="color:#f92672">%</span> (epoch))
</span></span><span style="display:flex;"><span>        print(w1<span style="color:#f92672">.</span>numpy(), <span style="color:#e6db74">"</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">"</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>print(<span style="color:#e6db74">"Final w1 is: "</span>, w1<span style="color:#f92672">.</span>numpy())
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Output:</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#   Final w1 is:  [[1.1420637]</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#    [1.101678 ]]</span>
</span></span></code></pre></div><h3 id="53-交叉熵损失函数cross-entropy">
  5.3 Cross-Entropy Loss
  <a class="anchor" href="#53-%e4%ba%a4%e5%8f%89%e7%86%b5%e6%8d%9f%e5%a4%b1%e5%87%bd%e6%95%b0cross-entropy">#</a>
</h3>
<p>Cross-entropy loss: measures the distance between two probability distributions</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>tf<span style="color:#f92672">.</span>losses<span style="color:#f92672">.</span>categorical_crossentropy(y_, y)
</span></span></code></pre></div><p><span>
  \[H(y\_,y)=-\Sigma y\_*\ln y\]
</span>
<strong>Combining softmax with cross entropy</strong>: pass the outputs through the softmax function first, then compute the cross-entropy loss between y and y_.</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;" tabindex="0"><code class="language-python" data-lang="python"><span style="display:flex;"><span>tf<span style="color:#f92672">.</span>nn<span style="color:#f92672">.</span>softmax_cross_entropy_with_logits(y_, y)
</span></span><span style="display:flex;"><span>y_ <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>array([[<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">0</span>], [<span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">0</span>], [<span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">1</span>], [<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">0</span>,], [<span style="color:#ae81ff">0</span>, <span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">0</span>]])
</span></span><span style="display:flex;"><span>y <span style="color:#f92672">=</span> np<span style="color:#f92672">.</span>array([[<span style="color:#ae81ff">12</span>, <span style="color:#ae81ff">3</span>, <span style="color:#ae81ff">2</span>], [<span style="color:#ae81ff">3</span>, <span style="color:#ae81ff">10</span>, <span style="color:#ae81ff">1</span>], [<span style="color:#ae81ff">1</span>, <span style="color:#ae81ff">2</span>, <span style="color:#ae81ff">5</span>], [<span style="color:#ae81ff">4</span>, <span style="color:#ae81ff">6.5</span>, <span style="color:#ae81ff">1.2</span>], [<span style="color:#ae81ff">3</span>, <span style="color:#ae81ff">6</span>, <span style="color:#ae81ff">1</span>]])
</span></span><span style="display:flex;"><span>y_pro <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>nn<span style="color:#f92672">.</span>softmax(y)
</span></span><span style="display:flex;"><span>loss_ce1 <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>losses<span style="color:#f92672">.</span>categorical_crossentropy(y_, y_pro)
</span></span><span style="display:flex;"><span>loss_ce2 <span style="color:#f92672">=</span> tf<span style="color:#f92672">.</span>nn<span style="color:#f92672">.</span>softmax_cross_entropy_with_logits(y_, y)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># tf.nn.softmax_cross_entropy_with_logits(y_, y) applies softmax internally, so do not apply softmax to y again</span>
</span></span></code></pre></div><h2 id="6-欠拟合与过拟合">
  6. Underfitting and Overfitting
  <a class="anchor" href="#6-%e6%ac%a0%e6%8b%9f%e5%90%88%e4%b8%8e%e8%bf%87%e6%8b%9f%e5%90%88">#</a>
</h2>
<h3 id="61-解决方案">
  6.1 Remedies
  <a class="anchor" href="#61-%e8%a7%a3%e5%86%b3%e6%96%b9%e6%a1%88">#</a>
</h3>
<p>Remedies for underfitting:</p>
<ul>
<li>Add more input features</li>
<li>Increase the number of network parameters</li>
<li>Reduce the regularization coefficient</li>
</ul>
<p>Remedies for overfitting:</p>
<ul>
<li>Clean the data</li>
<li>Enlarge the training set</li>
<li>Apply regularization</li>
<li>Increase the regularization coefficient</li>
</ul>
<h3 id="62-正则化缓解过拟合">
  6.2 Regularization to Mitigate Overfitting
  <a class="anchor" href="#62-%e6%ad%a3%e5%88%99%e5%8c%96%e7%bc%93%e8%a7%a3%e8%bf%87%e6%8b%9f%e5%90%88">#</a>
</h3>
<p>Regularization introduces a model-complexity term into the loss function. By penalizing the weights W, it weakens the influence of noise in the training data (the bias b is usually not regularized).
<span>
  \[loss = loss(y, y\_) + REGULARIZER * loss(w)\]
</span>

L1 regularization:
<span>
  \[loss_{L1}(w) = \Sigma{|w_i|}\]
</span>

L2 regularization:
<span>
  \[loss_{L2}(w)=\Sigma{|w_i|^2}\]
</span>

Choosing between them:</p>
<ul>
<li>L1 regularization tends to drive many parameters to exactly 0. It reduces complexity by sparsifying the parameters, i.e. cutting down the number of nonzero parameters.</li>
<li>L2 regularization pushes parameters close to 0 without making them exactly 0, so it reduces complexity by shrinking the magnitude of the parameters.</li>
</ul>
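<p>The combined loss above can be sketched without a framework; the weight values and the REGULARIZER coefficient below are illustrative assumptions:</p>

```python
import numpy as np

REGULARIZER = 0.03  # illustrative regularization coefficient

def l2_regularized_loss(y_true, y_pred, weights):
    loss_data = np.mean((y_true - y_pred) ** 2)        # loss(y, y_): MSE data term
    loss_reg = sum(np.sum(w ** 2) for w in weights)    # loss_L2(w) = sum of w_i^2
    return loss_data + REGULARIZER * loss_reg

# Toy example: two small weight arrays (the bias b is not included)
weights = [np.array([1.0, -2.0]), np.array([0.5])]
y_true = np.array([1.0, 2.0])
y_pred = np.array([1.0, 1.0])
loss = l2_regularized_loss(y_true, y_pred, weights)  # 0.5 + 0.03 * 5.25 = 0.6575
```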
<h2 id="7-优化器">
  7. Optimizers
  <a class="anchor" href="#7-%e4%bc%98%e5%8c%96%e5%99%a8">#</a>
</h2>
<h3 id="71-更新参数四步骤">
  7.1 Four Steps of a Parameter Update
  <a class="anchor" href="#71-%e6%9b%b4%e6%96%b0%e5%8f%82%e6%95%b0%e5%9b%9b%e6%ad%a5%e9%aa%a4">#</a>
</h3>
<ol>
<li>Compute the gradient of the loss with respect to the current parameters at time <span>
  \(t\)
</span>
: <span>
  \(g_t=\nabla{loss}=\frac{\partial loss}{\partial w_t} \)
</span>
</li>
<li>Compute the first-order momentum <span>
  \(m_t\)
</span>
 and second-order momentum <span>
  \(V_t\)
</span>
 at time <span>
  \(t\)
</span>
</li>
<li>Compute the descent step at time <span>
  \(t\)
</span>
: <span>
  \(\eta_t = lr \cdot \frac{m_t}{\sqrt{V_t}}\)
</span>
</li>
<li>Compute the parameters for time t+1: <span>
  \(w_{t+1}=w_t - \eta_t = w_t - lr \cdot \frac{m_t}{\sqrt{V_t}}\)
</span>
</li>
</ol>
<p>where:</p>
<ul>
<li>first-order momentum: a function of the gradient</li>
<li>second-order momentum: a function of the squared gradient</li>
</ul>
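<p>The four steps can be condensed into one generic update. The hook names first_moment and second_moment are my own placeholders, which each optimizer below defines differently:</p>

```python
import numpy as np

def update(w, g, m, V, lr, first_moment, second_moment):
    m = first_moment(m, g)        # step 2: first-order momentum m_t
    V = second_moment(V, g)       # step 2: second-order momentum V_t
    eta = lr * m / np.sqrt(V)     # step 3: descent step eta_t
    return w - eta, m, V          # step 4: w_{t+1} = w_t - eta_t

# Plain SGD falls out by choosing m_t = g_t and V_t = 1:
w, m, V = update(0.0, -6.0, 0.0, 1.0, 0.1,
                 first_moment=lambda m, g: g,
                 second_moment=lambda V, g: 1.0)  # w becomes 0.6
```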
<h3 id="72-sgd无momentum常用的梯度下降法">
  7.2 SGD (without momentum), the common gradient descent method
  <a class="anchor" href="#72-sgd%e6%97%a0momentum%e5%b8%b8%e7%94%a8%e7%9a%84%e6%a2%af%e5%ba%a6%e4%b8%8b%e9%99%8d%e6%b3%95">#</a>
</h3>
<span>
  \[m_t = g_t \hspace{3cm} V_t = 1\]
</span>
<span>
  \[\eta_t = lr \cdot \frac {m_t}{\sqrt{V_t}} = lr \cdot g_t\]
</span>
<span>
  \[w_{t+1} = w_t - \eta_t = w_t - lr \cdot \frac {m_t}{\sqrt{V_t}} = w_t - lr \cdot g_t\]
</span>
<span>
  \[w_{t+1}=w_t-lr \cdot \frac{\partial loss}{\partial w_t}\]
</span>
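<p>A minimal sketch of the SGD update on an assumed toy quadratic loss (w - 3)^2:</p>

```python
import numpy as np

lr = 0.1
w = np.array([0.0])
for _ in range(100):
    g = 2 * (w - 3)   # gradient of the toy loss (w - 3)^2
    w -= lr * g       # w_{t+1} = w_t - lr * g_t
# w settles at the loss minimum, w = 3
```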
<h3 id="73-sgdm含momentum的sgd在-sgd-基础上增加了一阶动量">
  7.3 SGDM (SGD with momentum): adds first-order momentum to SGD
  <a class="anchor" href="#73-sgdm%e5%90%abmomentum%e7%9a%84sgd%e5%9c%a8-sgd-%e5%9f%ba%e7%a1%80%e4%b8%8a%e5%a2%9e%e5%8a%a0%e4%ba%86%e4%b8%80%e9%98%b6%e5%8a%a8%e9%87%8f">#</a>
</h3>
<span>
  \[m_t = \beta \cdot m_{t-1} + (1-\beta) \cdot g_t \hspace{3cm} V_t = 1\]
</span>
<span>
  \[\eta_t=lr \cdot \frac{m_t}{\sqrt{V_t}}=lr \cdot m_t=lr \cdot (\beta \cdot m_{t-1} + (1-\beta) \cdot g_t)\]
</span>
<span>
  \[w_{t+1} = w_t - \eta_t=w_t-lr \cdot (\beta \cdot m_{t-1} + (1-\beta) \cdot g_t)\]
</span>
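<p>The same kind of sketch with SGDM's first-order momentum, again on an assumed toy loss (w - 3)^2 with illustrative hyperparameters:</p>

```python
import numpy as np

lr, beta = 0.1, 0.9
w, m = np.array([0.0]), np.array([0.0])
for _ in range(300):
    g = 2 * (w - 3)                # gradient of the toy loss (w - 3)^2
    m = beta * m + (1 - beta) * g  # first-order momentum m_t
    w -= lr * m                    # eta_t = lr * m_t  (V_t = 1)
```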
<h3 id="74-adagrad-在-sgd-基础上增加二阶动量">
  7.4 Adagrad: adds second-order momentum to SGD
  <a class="anchor" href="#74-adagrad-%e5%9c%a8-sgd-%e5%9f%ba%e7%a1%80%e4%b8%8a%e5%a2%9e%e5%8a%a0%e4%ba%8c%e9%98%b6%e5%8a%a8%e9%87%8f">#</a>
</h3>
<span>
  \[m_t = g_t \hspace{3cm} V_t = \Sigma^t_{\tau=1} g^2_\tau\]
</span>
<span>
  \[\eta_t = lr \cdot \frac{m_t}{\sqrt{V_t}} = lr \cdot \frac{g_t}{\sqrt{\Sigma^t_{\tau=1} g^2_\tau}}\]
</span>
<span>
  \[w_{t+1} = w_t - \eta_t=w_t-lr \cdot \frac{g_t}{\sqrt{\Sigma^t_{\tau=1} g^2_\tau}}\]
</span>
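<p>Adagrad on an assumed toy loss (w - 3)^2; note that V only ever grows, so the effective step size monotonically shrinks:</p>

```python
import numpy as np

lr, eps = 0.5, 1e-7   # eps (an assumption) guards against division by zero
w, V = np.array([0.0]), np.array([0.0])
for _ in range(500):
    g = 2 * (w - 3)                    # gradient of the toy loss (w - 3)^2
    V += g ** 2                        # V_t: running sum of all squared gradients
    w -= lr * g / (np.sqrt(V) + eps)   # eta_t = lr * g_t / sqrt(V_t)
```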
<h3 id="75-rmspropsgd基础上增加二阶动量">
  7.5 RMSProp: adds second-order momentum to SGD
  <a class="anchor" href="#75-rmspropsgd%e5%9f%ba%e7%a1%80%e4%b8%8a%e5%a2%9e%e5%8a%a0%e4%ba%8c%e9%98%b6%e5%8a%a8%e9%87%8f">#</a>
</h3>
<span>
  \[m_t = g_t \hspace{3cm} V_t = \beta \cdot V_{t-1} + (1-\beta) \cdot g_t^2\]
</span>
<span>
  \[\eta_t = lr \cdot \frac{m_t}{\sqrt{V_t}} = lr \cdot \frac{g_t}{\sqrt{\beta \cdot V_{t-1} + (1-\beta) \cdot g_t^2}}\]
</span>
<span>
  \[w_{t+1} = w_t - \eta_t=w_t-lr \cdot \frac{g_t}{\sqrt{\beta \cdot V_{t-1} + (1-\beta) \cdot g_t^2}}\]
</span>
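<p>RMSProp on an assumed toy loss (w - 3)^2; unlike Adagrad, the moving average lets V forget old gradients:</p>

```python
import numpy as np

lr, beta, eps = 0.01, 0.9, 1e-7   # eps (an assumption) guards the division
w, V = np.array([0.0]), np.array([0.0])
for _ in range(1000):
    g = 2 * (w - 3)                      # gradient of the toy loss (w - 3)^2
    V = beta * V + (1 - beta) * g ** 2   # V_t: moving average of squared gradients
    w -= lr * g / (np.sqrt(V) + eps)     # eta_t = lr * g_t / sqrt(V_t)
```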
<h3 id="76-adam同时结合-sgdm-的一阶动量和-rmsprop-二阶动量">
  7.6 Adam: combines SGDM's first-order momentum with RMSProp's second-order momentum
  <a class="anchor" href="#76-adam%e5%90%8c%e6%97%b6%e7%bb%93%e5%90%88-sgdm-%e7%9a%84%e4%b8%80%e9%98%b6%e5%8a%a8%e9%87%8f%e5%92%8c-rmsprop-%e4%ba%8c%e9%98%b6%e5%8a%a8%e9%87%8f">#</a>
</h3>
<span>
  \[m_t = \beta_1 \cdot m_{t-1} + (1-\beta_1) \cdot g_t \hspace{1cm} \text{bias-corrected first moment: } \widehat{m_t}=\frac{m_t}{1-\beta_1^t}\]
</span>
<span>
  \[V_t = \beta_2 \cdot V_{t-1} + (1-\beta_2) \cdot g_t^2 \hspace{1cm} \text{bias-corrected second moment: } \widehat{V_t}=\frac{V_t}{1-\beta_2^t}\]
</span>
<span>
  \[\eta_t = lr \cdot \frac{\widehat{m_t}}{\sqrt{\widehat{V_t}}}=lr \cdot \frac{\frac{m_t}{1-\beta_1^t}}{\sqrt{\frac{V_t}{1-\beta_2^t}}}\]
</span>
<span>
  \[w_{t+1} = w_t - \eta_t = w_t - lr \cdot \frac{\frac{m_t}{1-\beta_1^t}}{\sqrt{\frac{V_t}{1-\beta_2^t}}}\]
</span>
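<p>Adam on an assumed toy loss (w - 3)^2, with both bias corrections applied each step (the hyperparameters are common defaults, assumed here):</p>

```python
import numpy as np

lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8   # eps is an assumed safeguard
w = np.array([0.0])
m, V = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 1001):
    g = 2 * (w - 3)                        # gradient of the toy loss (w - 3)^2
    m = beta1 * m + (1 - beta1) * g        # first-order momentum
    V = beta2 * V + (1 - beta2) * g ** 2   # second-order momentum
    m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
    V_hat = V / (1 - beta2 ** t)           # bias-corrected second moment
    w -= lr * m_hat / (np.sqrt(V_hat) + eps)
```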
</article>
<footer class="book-footer">
<div class="flex flex-wrap justify-between">
</div>
<script>(function(){function e(e){const t=window.getSelection(),n=document.createRange();n.selectNodeContents(e),t.removeAllRanges(),t.addRange(n)}document.querySelectorAll("pre code").forEach(t=>{t.addEventListener("click",function(){if(window.getSelection().toString())return;e(t.parentElement),navigator.clipboard&&navigator.clipboard.writeText(t.parentElement.textContent)})})})()</script>
</footer>
<div class="book-comments">
</div>
<label class="hidden book-menu-overlay" for="menu-control"></label>
</div>
<aside class="book-toc">
<div class="book-toc-content">
<nav id="TableOfContents">
<ul>
<li><a href="#tensorflow-笔记二">TensorFlow Notes (2)</a>
<ul>
<li><a href="#1-预备知识">1. Preliminaries</a></li>
<li><a href="#2-神经网络复杂度">2. Neural Network Complexity</a></li>
<li><a href="#3-指数衰减学习率">3. Exponentially Decaying Learning Rate</a></li>
<li><a href="#4激活函数">4. Activation Functions</a>
<ul>
<li><a href="#41-sigmoid-函数">4.1 The Sigmoid Function</a></li>
<li><a href="#42-relu函数">4.2 The ReLU Function</a></li>
<li><a href="#42-总结">4.3 Summary</a></li>
</ul>
</li>
<li><a href="#5损失函数">5. Loss Functions</a>
<ul>
<li><a href="#51-均方误差损失函数">5.1 Mean Squared Error Loss</a></li>
<li><a href="#52-自定义损失函数">5.2 Custom Loss Functions</a></li>
<li><a href="#53-交叉熵损失函数cross-entropy">5.3 Cross-Entropy Loss</a></li>
</ul>
</li>
<li><a href="#6-欠拟合与过拟合">6. Underfitting and Overfitting</a>
<ul>
<li><a href="#61-解决方案">6.1 Remedies</a></li>
<li><a href="#62-正则化缓解过拟合">6.2 Regularization to Mitigate Overfitting</a></li>
</ul>
</li>
<li><a href="#7-优化器">7. Optimizers</a>
<ul>
<li><a href="#71-更新参数四步骤">7.1 Four Steps of a Parameter Update</a></li>
<li><a href="#72-sgd无momentum常用的梯度下降法">7.2 SGD (without momentum), the common gradient descent method</a></li>
<li><a href="#73-sgdm含momentum的sgd在-sgd-基础上增加了一阶动量">7.3 SGDM (SGD with momentum): adds first-order momentum to SGD</a></li>
<li><a href="#74-adagrad-在-sgd-基础上增加二阶动量">7.4 Adagrad: adds second-order momentum to SGD</a></li>
<li><a href="#75-rmspropsgd基础上增加二阶动量">7.5 RMSProp: adds second-order momentum to SGD</a></li>
<li><a href="#76-adam同时结合-sgdm-的一阶动量和-rmsprop-二阶动量">7.6 Adam: combines SGDM's first-order momentum with RMSProp's second-order momentum</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</aside>
</main>
</body>
</html>
