<!DOCTYPE html>
<html>
<head><meta name="generator" content="Hexo 3.9.0">
  <meta charset="utf-8">
  
  <title>[Machine Learning] Hands-On Practice (sklearn) | MaxMa</title>
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
  
  <meta name="keywords" content="sklearn">
  
  
  
  
  <meta name="description" content="I. The four key questions of hands-on ML. Q1: List the feature-engineering operations you know and explain why they matter. [Answer]: Feature engineering means inspecting and transforming the data at hand to feed a learning algorithm; the quality of the constructed features sets the ceiling for what the algorithm can achieve, so it is crucial. It typically includes: dropping/imputing missing values, scaling (normalization/standardization), discretizing continuous values, polynomial features, LabelEncoder/One-Hot">
<meta property="og:type" content="article">
<meta property="og:title" content="[Machine Learning] Hands-On Practice (Sklearn)">
<meta property="og:url" content="https://anxiang1836.github.io/2019/10/20/ML_Usage_of_sklearn/index.html">
<meta property="og:site_name" content="MaxMa">
<meta property="og:description" content="I. The four key questions of hands-on ML. Q1: List the feature-engineering operations you know and explain why they matter. [Answer]: Feature engineering means inspecting and transforming the data at hand to feed a learning algorithm; the quality of the constructed features sets the ceiling for what the algorithm can achieve, so it is crucial. It typically includes: dropping/imputing missing values, scaling (normalization/standardization), discretizing continuous values, polynomial features, LabelEncoder/One-Hot">
<meta property="og:locale" content="zh-CN">
<meta property="og:updated_time" content="2019-10-20T11:32:05.514Z">
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="[Machine Learning] Hands-On Practice (Sklearn)">
<meta name="twitter:description" content="I. The four key questions of hands-on ML. Q1: List the feature-engineering operations you know and explain why they matter. [Answer]: Feature engineering means inspecting and transforming the data at hand to feed a learning algorithm; the quality of the constructed features sets the ceiling for what the algorithm can achieve, so it is crucial. It typically includes: dropping/imputing missing values, scaling (normalization/standardization), discretizing continuous values, polynomial features, LabelEncoder/One-Hot">
  
    <link rel="alternate" href="/atom.xml" title="MaxMa" type="application/atom+xml">
  

  

  <link rel="icon" href="/css/images/mylogo.jpg">
  <link rel="apple-touch-icon" href="/css/images/mylogo.jpg">
  
    <link href="//fonts.googleapis.com/css?family=Source+Code+Pro" rel="stylesheet" type="text/css">
  
  <link href="https://fonts.googleapis.com/css?family=Open+Sans|Montserrat:700" rel="stylesheet" type="text/css">
  <link href="https://fonts.googleapis.com/css?family=Roboto:400,300,300italic,400italic" rel="stylesheet" type="text/css">
  <link href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet">
  <style type="text/css">
    @font-face{font-family:futura-pt; src:url("/css/fonts/FuturaPTBold.otf") format("opentype");font-weight:500;font-style:normal;}
    @font-face{font-family:futura-pt-light; src:url("/css/fonts/FuturaPTBook.otf") format("opentype");font-weight:300;font-style:normal;}
    @font-face{font-family:futura-pt-italic; src:url("/css/fonts/FuturaPTBookOblique.otf") format("opentype");font-weight:400;font-style:italic;}

  </style>
  <link rel="stylesheet" href="/css/style.css">

  <script src="/js/jquery-3.1.1.min.js"></script>
  <script src="/js/bootstrap.js"></script>

  <!-- Bootstrap core CSS -->
  <link rel="stylesheet" href="/css/bootstrap.css">

  
    <link rel="stylesheet" href="/css/dialog.css">
  

  

  
    <link rel="stylesheet" href="/css/header-post.css">
  

  
  
  

</head>


  <body data-spy="scroll" data-target="#toc" data-offset="50">


  
  <div id="container">
    <div id="wrap">
      
        <header>

    <div id="allheader" class="navbar navbar-default navbar-static-top" role="navigation">
        <div class="navbar-inner">
          
          <div class="container"> 
            <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
              <span class="sr-only">Toggle navigation</span>
              <span class="icon-bar"></span>
              <span class="icon-bar"></span>
              <span class="icon-bar"></span>
            </button>

            
              <a class="brand" style="
                 margin-top: 0px;"  
                href="#" data-toggle="modal" data-target="#myModal" >
                  <img width="124px" height="124px" alt="MaxMa" src="/css/images/mylogo.jpg">
              </a>
            
            
            <div class="navbar-collapse collapse">
              <ul class="hnav navbar-nav">
                
                  <li> <a class="main-nav-link" href="/">Home</a> </li>
                
                  <li> <a class="main-nav-link" href="/archives">Archives</a> </li>
                
                  <li> <a class="main-nav-link" href="/categories">Categories</a> </li>
                
                  <li> <a class="main-nav-link" href="/tags">Tags</a> </li>
                
                  <li> <a class="main-nav-link" href="/about">About</a> </li>
                
                  <li><div id="search-form-wrap">

    <form class="search-form">
        <input type="text" class="ins-search-input search-form-input" placeholder="" />
        <button type="submit" class="search-form-submit"></button>
    </form>
    <div class="ins-search">
    <div class="ins-search-mask"></div>
    <div class="ins-search-container">
        <div class="ins-input-wrapper">
            <input type="text" class="ins-search-input" placeholder="Enter keywords..." />
            <span class="ins-close ins-selectable"><i class="fa fa-times-circle"></i></span>
        </div>
        <div class="ins-section-wrapper">
            <div class="ins-section-container"></div>
        </div>
    </div>
</div>
<script>
(function (window) {
    var INSIGHT_CONFIG = {
        TRANSLATION: {
            POSTS: 'Posts',
            PAGES: 'Pages',
            CATEGORIES: 'Categories',
            TAGS: 'Tags',
            UNTITLED: '(Untitled)',
        },
        ROOT_URL: '/',
        CONTENT_URL: '/content.json',
    };
    window.INSIGHT_CONFIG = INSIGHT_CONFIG;
})(window);
</script>
<script src="/js/insight.js"></script>

</div></li>
            </div>
          </div>
                
      </div>
    </div>

</header>



      
            
      <div id="content" class="outer">
        
          <section id="main" style="float:none;"><article id="post-ML_Usage_of_sklearn" style="width: 75%; float:left;" class="article article-type-post" itemscope itemprop="blogPost" >
  <div id="articleInner" class="article-inner">
    
    
      <header class="article-header">
        
  
    <h1 class="thumb article-title" itemprop="name">
      [Machine Learning] Hands-On Practice (Sklearn)
    </h1>
  

      </header>
    
    <div class="article-meta">
      
	<a href="/2019/10/20/ML_Usage_of_sklearn/" class="article-date">
	  <time datetime="2019-10-20T10:22:50.175Z" itemprop="datePublished">2019-10-20</time>
	</a>

      
    <a class="article-category-link" href="/categories/机器学习/">Machine Learning</a>

      
	<a class="article-views">
	<span id="busuanzi_container_page_pv">
		Views: <span id="busuanzi_value_page_pv"></span>
	</span>
	</a>

      

    </div>
    <div class="article-entry" itemprop="articleBody">
      
        <h2 id="一、ML的实操的灵魂4问"><a href="#一、ML的实操的灵魂4问" class="headerlink" title="一、ML的实操的灵魂4问"></a>I. Four key questions for hands-on ML</h2><h3 id="Q1：请写出你了解的机器学习特征工程操作，以及它的意义"><a href="#Q1：请写出你了解的机器学习特征工程操作，以及它的意义" class="headerlink" title="Q1：请写出你了解的机器学习特征工程操作，以及它的意义"></a>Q1: List the feature-engineering operations you know and explain why they matter</h3><p>[Answer]:</p>
<p>Feature engineering means inspecting and transforming the data at hand to provide inputs for a learning algorithm. The quality of the constructed features sets the ceiling for what the algorithm can achieve, so for any machine-learning method it is critically important. It typically includes:</p>
<ul>
<li>Dropping or imputing missing values</li>
<li>Scaling: normalization / standardization</li>
<li>Discretizing continuous values</li>
<li>Polynomial features</li>
<li>LabelEncoder / One-Hot encoding</li>
<li>Feature selection</li>
</ul>
<h3 id="Q2：请写出上述特征工程操作的sklearn或者pandas实现方式"><a href="#Q2：请写出上述特征工程操作的sklearn或者pandas实现方式" class="headerlink" title="Q2：请写出上述特征工程操作的sklearn或者pandas实现方式"></a>Q2: How are the above operations implemented in sklearn or pandas?</h3><p>[Answer]:</p>
<ul>
<li><p>Missing-value imputation</p>
<ol>
<li><p>df.fillna(##value to fill in##)</p>
</li>
<li><p>Or impute at a finer granularity via a groupby, e.g. df.groupby('##').agg('mean')</p>
</li>
</ol>
</li>
<li><p>Scaling:</p>
<ol>
<li><p>Standardization (StandardScaler)</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.preprocessing <span class="keyword">import</span> StandardScaler</span><br><span class="line">ss = StandardScaler()</span><br><span class="line">ss.fit_transform(data)</span><br></pre></td></tr></table></figure>
</li>
<li><p>Normalization (MinMaxScaler)</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.preprocessing <span class="keyword">import</span> MinMaxScaler	</span><br><span class="line">min_max = MinMaxScaler()</span><br><span class="line">min_max.fit_transform(data)</span><br></pre></td></tr></table></figure>
</li>
</ol>
</li>
</ul>
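<p>A minimal sketch of the imputation idea above (the DataFrame <code>df</code> and its columns are invented for illustration): a constant fill with <code>fillna</code>, then a finer group-wise fill where each missing value receives its own group's mean.</p>

```python
import pandas as pd

# Hypothetical toy data: one numeric column with a missing value
df = pd.DataFrame({"city": ["A", "A", "B", "B"],
                   "income": [100.0, None, 50.0, 70.0]})

# Simple constant fill (returns a new Series)
filled = df["income"].fillna(0)

# Group-wise mean fill: each NaN gets the mean of its own group
df["income"] = df.groupby("city")["income"].transform(lambda s: s.fillna(s.mean()))
```

The group-wise version is usually preferable when the fill value should depend on a categorical key rather than a single global constant.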
<ul>
<li><p>Discretizing continuous values</p>
<ol>
<li><p>Equal-width binning: pd.cut()</p>
</li>
<li><p>Equal-frequency binning: pd.qcut()</p>
</li>
</ol>
</li>
<li><p>Polynomial features</p>
  <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.preprocessing <span class="keyword">import</span> PolynomialFeatures</span><br><span class="line"><span class="comment"># interaction_only=True keeps only interaction terms (drops pure powers such as x**2)</span></span><br><span class="line">poly_feature = PolynomialFeatures(degree=<span class="number">3</span>,interaction_only=<span class="literal">True</span>)</span><br><span class="line">poly_feature.fit_transform(data)</span><br></pre></td></tr></table></figure>
</li>
</ul>
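<p>The two binning helpers above can be sketched as follows (the <code>ages</code> series is made up for illustration): <code>pd.cut</code> produces equal-width bins, while <code>pd.qcut</code> produces equal-frequency bins.</p>

```python
import pandas as pd

ages = pd.Series([18, 22, 25, 30, 45, 60])

# Equal-width: three bins of the same width over the value range
equal_width = pd.cut(ages, bins=3)

# Equal-frequency: three quantile-based bins with the same number of samples
equal_freq = pd.qcut(ages, q=3)
```

Equal-width bins can end up nearly empty when the distribution is skewed, which is why quantile binning is often the safer default for long-tailed features.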
<ul>
<li><p>LabelEncoder</p>
  <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.preprocessing <span class="keyword">import</span> LabelEncoder</span><br><span class="line">le = LabelEncoder()</span><br><span class="line">le.fit_transform(data)</span><br></pre></td></tr></table></figure>
</li>
</ul>
<ul>
<li><p>OneHotEncoder</p>
  <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.preprocessing <span class="keyword">import</span> OneHotEncoder</span><br></pre></td></tr></table></figure>
</li>
</ul>
<p>  PS: XGBoost does not handle high-dimensional sparse features well. On the original feature a tree can split wherever it wants, choosing whichever split point minimizes the objective; after one-hot encoding, every split on such a column is forced to be a 0/1 split, so one-hot encoding restricts the set of splits the tree can consider.</p>
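<p>For completeness, a small sketch of how <code>OneHotEncoder</code> is typically used (the <code>colors</code> array is invented for illustration; by default the encoder returns a sparse matrix, hence the <code>.toarray()</code>):</p>

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical categorical column with two levels
colors = np.array([["red"], ["green"], ["red"]])

enc = OneHotEncoder()                       # sparse output by default
onehot = enc.fit_transform(colors).toarray()  # densify for inspection
# enc.categories_ lists the levels in sorted order: ['green', 'red']
```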
<ul>
<li><p>Feature selection</p>
<ul>
<li><p>Embedded method</p>
<p>  Fit a model with L1 regularization, then use SelectFromModel to keep the features with non-zero coefficients.</p>
</li>
<li><p>Wrapper method</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.feature_selection <span class="keyword">import</span> RFE</span><br><span class="line"><span class="keyword">from</span> sklearn.ensemble <span class="keyword">import</span> RandomForestClassifier</span><br><span class="line"></span><br><span class="line">rfe = RFE(estimator=RandomForestClassifier(),n_features_to_select=<span class="number">3</span>)</span><br><span class="line">data_rfe = rfe.fit_transform(X,y)</span><br></pre></td></tr></table></figure>
</li>
</ul>
</li>
</ul>
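<p>The embedded method described above can be sketched like this (the iris dataset is used purely as a stand-in, and the particular penalty strength is an assumption):</p>

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# L1-regularized model; features whose coefficients shrink to zero are dropped
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
X_new = selector.fit_transform(X, y)
# selector.get_support() reports which of the original columns survived
```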
<h3 id="Q3：模型评估中的留一法，留出法，交叉验证分别是什么操作？"><a href="#Q3：模型评估中的留一法，留出法，交叉验证分别是什么操作？" class="headerlink" title="Q3：模型评估中的留一法，留出法，交叉验证分别是什么操作？"></a>Q3: In model evaluation, what are leave-one-out, hold-out, and cross-validation?</h3><p>[Answer]:</p>
<ul>
<li><p>Leave-one-out:</p>
<p>  Use a single sample as the test set and train on all the rest, repeating so that every sample is tested once.</p>
</li>
<li><p>Hold-out:</p>
<p>  Set aside a fixed fraction of the data as the test set and train on the remainder.</p>
</li>
<li><p>Cross-validation:</p>
<p>  Split the data into k folds; use one fold for testing and the remaining k-1 folds for training, repeating k times so that each fold serves as the test set once.</p>
</li>
</ul>
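<p>The three schemes map directly onto sklearn helpers; a small sketch on toy arrays (shapes chosen only for illustration):</p>

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Hold-out: set aside a fixed fraction for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Leave-one-out: each sample is the test set exactly once -> n splits
loo_splits = list(LeaveOneOut().split(X))

# k-fold CV: k train/test splits, each fold used for testing once
kf_splits = list(KFold(n_splits=5).split(X))
```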
<h3 id="Q4：如何理解模型的过拟合与欠拟合，以及如何解决？"><a href="#Q4：如何理解模型的过拟合与欠拟合，以及如何解决？" class="headerlink" title="Q4：如何理解模型的过拟合与欠拟合，以及如何解决？"></a>Q4: What are overfitting and underfitting, and how are they addressed?</h3><p>Underfitting:</p>
<ul>
<li>Understanding: both the training loss and the validation loss are large (high bias); the model has not fit the data well enough.</li>
<li>Fixes: increase model capacity (classical ML: add features or polynomial terms; deep learning: more neurons per layer, more layers), train for more iterations, etc.</li>
</ul>
<p>Overfitting:</p>
<ul>
<li>Understanding: the training loss is small but the validation loss is large (high variance); the model has over-learned the particulars of the training data.</li>
<li>Fixes:<ul>
<li>Get more data</li>
<li>Regularization</li>
<li>Reduce model capacity appropriately</li>
<li>Randomly subsample the data to add randomness (as random forests do)</li>
</ul>
</li>
</ul>
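<p>A small illustration of the regularization fix (synthetic data, an assumed setup, not part of the credit-card example below): an unregularized high-degree polynomial chases the training noise, while L2 regularization (Ridge) shrinks the coefficients toward a smoother fit.</p>

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = np.sort(rng.rand(30, 1), axis=0)
y = np.sin(2 * np.pi * X).ravel() + rng.randn(30) * 0.3  # noisy sine

# Degree-15 polynomial: plenty of capacity to overfit 30 points
plain = make_pipeline(PolynomialFeatures(15), LinearRegression()).fit(X, y)
ridge = make_pipeline(PolynomialFeatures(15), Ridge(alpha=1.0)).fit(X, y)
# plain fits the training set at least as tightly as ridge -- the regularized
# model trades a bit of training fit for smoother behavior between points
```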
<h2 id="二、实操应用——信用卡欺诈"><a href="#二、实操应用——信用卡欺诈" class="headerlink" title="二、实操应用——信用卡欺诈"></a>II. Hands-on application: credit-card fraud</h2><h3 id="前期数据导入-预览及处理"><a href="#前期数据导入-预览及处理" class="headerlink" title="前期数据导入,预览及处理"></a>Loading, previewing, and preprocessing the data</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># suppress warnings</span></span><br><span class="line"><span class="keyword">import</span> warnings</span><br><span class="line">warnings.filterwarnings(<span class="string">'ignore'</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># print each cell's run time automatically (ipython-autotime extension)</span></span><br><span class="line">%load_ext autotime</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line">pd.set_option(<span class="string">'display.max_columns'</span>, <span class="number">500</span>)</span><br><span class="line"><span class="keyword">import</span> zipfile</span><br><span class="line"><span class="keyword">with</span> zipfile.ZipFile(<span class="string">'KaggleCredit2.csv.zip'</span>, <span class="string">'r'</span>) <span class="keyword">as</span> z:</span><br><span class="line">    f = z.open(<span class="string">'KaggleCredit2.csv'</span>)</span><br><span class="line">    data = pd.read_csv(f, index_col=<span class="number">0</span>)</span><br><span class="line">data.head()</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">data.shape</span><br></pre></td></tr></table></figure>
<pre><code>(112915, 11)
</code></pre><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">data.isnull().sum(axis=<span class="number">0</span>)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">SeriousDlqin2yrs                           <span class="number">0</span></span><br><span class="line">RevolvingUtilizationOfUnsecuredLines       <span class="number">0</span></span><br><span class="line">age                                     <span class="number">4267</span></span><br><span class="line">NumberOfTime30<span class="number">-59</span>DaysPastDueNotWorse       <span class="number">0</span></span><br><span class="line">DebtRatio                                  <span class="number">0</span></span><br><span class="line">MonthlyIncome                              <span class="number">0</span></span><br><span class="line">NumberOfOpenCreditLinesAndLoans            <span class="number">0</span></span><br><span class="line">NumberOfTimes90DaysLate                    <span class="number">0</span></span><br><span class="line">NumberRealEstateLoansOrLines               <span class="number">0</span></span><br><span class="line">NumberOfTime60<span class="number">-89</span>DaysPastDueNotWorse       <span class="number">0</span></span><br><span class="line">NumberOfDependents                      <span class="number">4267</span></span><br><span class="line">dtype: int64</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># drop every row that contains a missing value</span></span><br><span class="line">data=data.dropna()</span><br><span class="line">y = data[<span class="string">'SeriousDlqin2yrs'</span>]</span><br><span class="line"><span class="comment"># drop the label column from the features</span></span><br><span class="line">X = data.drop(<span class="string">'SeriousDlqin2yrs'</span>, axis=<span class="number">1</span>)</span><br><span class="line">y.mean()</span><br></pre></td></tr></table></figure>
<pre><code>0.06742876076872101
</code></pre><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">X.info()</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">&lt;<span class="class"><span class="keyword">class</span> '<span class="title">pandas</span>.<span class="title">core</span>.<span class="title">frame</span>.<span class="title">DataFrame</span>'&gt;</span></span><br><span class="line"><span class="class"><span class="title">Int64Index</span>:</span> <span class="number">108648</span> entries, <span class="number">0</span> to <span class="number">112914</span></span><br><span class="line">Data columns (total <span class="number">10</span> columns):</span><br><span class="line">RevolvingUtilizationOfUnsecuredLines    <span class="number">108648</span> non-null float64</span><br><span class="line">age                                     <span class="number">108648</span> non-null float64</span><br><span class="line">NumberOfTime30<span class="number">-59</span>DaysPastDueNotWorse    <span class="number">108648</span> non-null float64</span><br><span class="line">DebtRatio                               <span class="number">108648</span> non-null float64</span><br><span class="line">MonthlyIncome                           <span class="number">108648</span> non-null float64</span><br><span class="line">NumberOfOpenCreditLinesAndLoans         <span class="number">108648</span> non-null float64</span><br><span class="line">NumberOfTimes90DaysLate                 <span class="number">108648</span> non-null float64</span><br><span 
class="line">NumberRealEstateLoansOrLines            <span class="number">108648</span> non-null float64</span><br><span class="line">NumberOfTime60<span class="number">-89</span>DaysPastDueNotWorse    <span class="number">108648</span> non-null float64</span><br><span class="line">NumberOfDependents                      <span class="number">108648</span> non-null float64</span><br><span class="line">dtypes: float64(<span class="number">10</span>)</span><br><span class="line">memory usage: <span class="number">9.1</span> MB</span><br></pre></td></tr></table></figure>
<h3 id="实操内容"><a href="#实操内容" class="headerlink" title="实操内容"></a>Hands-on tasks</h3><h4 id="Q1：数据切分（样本不均衡；stratify）"><a href="#Q1：数据切分（样本不均衡；stratify）" class="headerlink" title="Q1：数据切分（样本不均衡；stratify）"></a>Q1: Splitting the data (imbalanced classes; stratify)</h4><ul>
<li>Split the data into a training set and a test set<ul>
<li>tip: train_test_split</li>
</ul>
</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># your code here</span></span><br><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> train_test_split</span><br><span class="line"><span class="comment"># stratify=y keeps the label distribution the same in both splits</span></span><br><span class="line">X_train,X_test,y_train,y_test = train_test_split(X,y,stratify=y)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">data[<span class="string">'SeriousDlqin2yrs'</span>].unique()</span><br><span class="line"><span class="comment"># so this is a binary classification problem</span></span><br></pre></td></tr></table></figure>
<pre><code>array([1, 0])
</code></pre><h4 id="Q2：用LR建模分析特征的重要程度"><a href="#Q2：用LR建模分析特征的重要程度" class="headerlink" title="Q2：用LR建模分析特征的重要程度"></a>Q2: Analyze feature importance with an LR model</h4><ul>
<li>Fit a logistic regression model, print its coefficients, and analyze their importance. <ul>
<li>tip: feature_importance</li>
</ul>
</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.linear_model <span class="keyword">import</span> LogisticRegression</span><br><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> StratifiedKFold</span><br><span class="line"></span><br><span class="line">lr = LogisticRegression(solver=<span class="string">'lbfgs'</span>,random_state=<span class="number">1024</span>,max_iter=<span class="number">1000</span>)</span><br><span class="line"><span class="comment"># with the default max_iter=100, lbfgs stops before converging (ConvergenceWarning), so max_iter is raised to 1000</span></span><br><span class="line"></span><br><span class="line">lr.fit(X_train,y_train)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">LogisticRegression(C=<span class="number">1.0</span>, class_weight=<span class="literal">None</span>, dual=<span class="literal">False</span>, fit_intercept=<span class="literal">True</span>,</span><br><span class="line">                   intercept_scaling=<span class="number">1</span>, l1_ratio=<span class="literal">None</span>, max_iter=<span class="number">1000</span>,</span><br><span class="line">                   multi_class=<span class="string">'warn'</span>, n_jobs=<span class="literal">None</span>, penalty=<span class="string">'l2'</span>,</span><br><span class="line">                   random_state=<span class="number">1024</span>, solver=<span class="string">'lbfgs'</span>, tol=<span class="number">0.0001</span>, verbose=<span class="number">0</span>,</span><br><span class="line">                   warm_start=<span class="literal">False</span>)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="comment"># feature names as a column vector</span></span><br><span class="line">feature = np.array(X.columns).reshape((X.columns.size,<span class="number">1</span>))</span><br><span class="line"><span class="comment"># stack the names with coef_ into an (n_features, 2) array</span></span><br><span class="line">feature_importance = np.hstack((feature,lr.coef_.reshape((lr.coef_.size,<span class="number">1</span>))))</span><br><span class="line"><span class="comment"># convert the np.array to a DataFrame</span></span><br><span class="line">lr_weight = pd.DataFrame(feature_importance,columns=[<span class="string">'feature'</span>,<span class="string">'weight'</span>])</span><br><span class="line"><span class="comment"># sort by weight, descending</span></span><br><span class="line">lr_weight.sort_values(<span class="string">'weight'</span>,ascending=<span class="literal">False</span>)</span><br></pre></td></tr></table></figure>
<div class="table-container">
<table>
<thead>
<tr>
<th></th>
<th>Feature</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>NumberOfTimes90DaysLate</td>
<td>0.526997</td>
</tr>
<tr>
<td>2</td>
<td>NumberOfTime30-59DaysPastDueNotWorse</td>
<td>0.512845</td>
</tr>
<tr>
<td>3</td>
<td>DebtRatio</td>
<td>0.17756</td>
</tr>
<tr>
<td>9</td>
<td>NumberOfDependents</td>
<td>0.0736575</td>
</tr>
<tr>
<td>4</td>
<td>MonthlyIncome</td>
<td>-4.17822e-05</td>
</tr>
<tr>
<td>0</td>
<td>RevolvingUtilizationOfUnsecuredLines</td>
<td>-5.10621e-05</td>
</tr>
<tr>
<td>7</td>
<td>NumberRealEstateLoansOrLines</td>
<td>-0.000780827</td>
</tr>
<tr>
<td>5</td>
<td>NumberOfOpenCreditLinesAndLoans</td>
<td>-0.0178746</td>
</tr>
<tr>
<td>1</td>
<td>age</td>
<td>-0.0386368</td>
</tr>
<tr>
<td>8</td>
<td>NumberOfTime60-89DaysPastDueNotWorse</td>
<td>-1.00415</td>
</tr>
</tbody>
</table>
</div>
<p>[Answer]: Logistic regression is a generalized linear model, so a feature's importance corresponds to the magnitude (and sign) of its weight; the importances of this model's features are therefore as listed in the table above.</p>
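<p>One caveat worth sketching: raw LR coefficients are only comparable across features when the features share a common scale, so importances read off <code>coef_</code> are usually taken after standardization. A toy illustration (synthetic data, not the credit dataset):</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
# Three features on wildly different scales
X = rng.randn(200, 3) * np.array([1.0, 100.0, 0.01])
y = (X[:, 0] + X[:, 1] / 100.0 > 0).astype(int)  # labels from a linear rule

# Standardize first so the fitted weights are on a common footing
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_.ravel()
```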
<h4 id="Q3：使用不用的分类模型进行建模"><a href="#Q3：使用不用的分类模型进行建模" class="headerlink" title="Q3：使用不用的分类模型进行建模"></a>Q3: Build models with different classifiers</h4><ul>
<li>Classify with decision trees / SVM / KNN and other sklearn classifiers; try to understand the parameters and experiment with different settings.<ul>
<li>tip: look the parameters up in the sklearn API reference</li>
</ul>
</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.tree <span class="keyword">import</span> DecisionTreeClassifier</span><br><span class="line"><span class="keyword">from</span> sklearn.svm <span class="keyword">import</span> SVC</span><br><span class="line"><span class="keyword">from</span> sklearn.neighbors <span class="keyword">import</span> KNeighborsClassifier</span><br><span class="line"><span class="comment"># from hpsklearn import HyperoptEstimator</span></span><br><span class="line"><span class="keyword">from</span> sklearn.metrics <span class="keyword">import</span> roc_auc_score</span><br></pre></td></tr></table></figure>
<pre><code>time: 1.23 ms
</code></pre><ul>
<li>Decision tree</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># decision-tree model</span></span><br><span class="line">tree_clf = DecisionTreeClassifier(criterion=<span class="string">'entropy'</span>,</span><br><span class="line">                                  max_depth=<span class="literal">None</span>,</span><br><span class="line">                                  max_features=<span class="string">'sqrt'</span>,<span class="comment"># 'sqrt': max_features = sqrt(n_features)</span></span><br><span class="line">                                  splitter=<span class="string">'random'</span>,<span class="comment"># 'best': best split point; 'random': best of random candidate splits</span></span><br><span class="line">                                 )</span><br><span class="line">tree_clf.fit(X_train,y_train)</span><br><span class="line">y_true = y_test</span><br><span class="line">tree_pred = tree_clf.predict(X_test)</span><br><span class="line">tree_score = roc_auc_score(y_true,tree_pred)</span><br><span class="line">tree_score</span><br></pre></td></tr></table></figure>
<pre><code>0.59902550525207

time: 140 ms
</code></pre><ul>
<li>SVM</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">svm = SVC(kernel=<span class="string">'rbf'</span>, </span><br><span class="line">          <span class="comment"># kernel selects the kernel function: 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed'</span></span><br><span class="line">          C=<span class="number">10.0</span>,</span><br><span class="line">          <span class="comment"># C is the penalty (regularization) parameter</span></span><br><span class="line">          gamma=<span class="number">0.10</span>,</span><br><span class="line">          <span class="comment"># gamma is the kernel coefficient</span></span><br><span class="line">          random_state=<span class="number">0</span>)</span><br><span class="line"></span><br><span class="line">svm.fit(X_train, y_train)</span><br><span class="line">svm_pred = svm.predict(X_test)</span><br><span class="line">svm_score = roc_auc_score(y_true,svm_pred)</span><br><span class="line">svm_score</span><br></pre></td></tr></table></figure>
<pre><code>0.5138485620240555

time: 9min 15s
</code></pre><ul>
<li>KNN</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">knn = KNeighborsClassifier(n_neighbors=<span class="number">5</span>, </span><br><span class="line">                           <span class="comment"># n_neighbors sets the number of neighbors</span></span><br><span class="line">                           p=<span class="number">2</span>, </span><br><span class="line">                           metric=<span class="string">'minkowski'</span>,</span><br><span class="line">                          <span class="comment"># metric sets how distances are computed; see:</span></span><br><span class="line">                          <span class="comment"># https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html#sklearn.neighbors.DistanceMetric </span></span><br><span class="line">                           leaf_size=<span class="number">30</span></span><br><span class="line">                           <span class="comment"># leaf_size is passed to the underlying KDTree/BallTree</span></span><br><span class="line">                          )</span><br><span class="line">knn.fit(X_train, y_train)</span><br><span class="line">knn_pred = knn.predict(X_test)</span><br><span class="line">knn_score = roc_auc_score(y_true,knn_pred)</span><br><span class="line">knn_score</span><br></pre></td></tr></table></figure>
<pre><code>0.5064387703420583

time: 1.74 s
</code></pre><h4 id="Q4：网格搜索交叉验证、贝叶斯优化器调参"><a href="#Q4：网格搜索交叉验证、贝叶斯优化器调参" class="headerlink" title="Q4：网格搜索交叉验证、贝叶斯优化器调参"></a>Q4：网格搜索交叉验证、贝叶斯优化器调参</h4><ul>
<li>使用网格搜索交叉验证进行逻辑回归/随机森林/Xgboost/LightGBM调参<ul>
<li>tips:特别注意此处的正负样本不均衡，以及选择什么样的评估准则</li>
</ul>
</li>
</ul>
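<p>先直观感受一下“样本不均衡时为什么不能只看准确率”。下面是一个极小的示意（假设性的合成标签，不是本文的信用卡数据）：一个把所有样本都判为负类的退化分类器，准确率看起来很高，但 AUC 会暴露它毫无判别能力。</p>

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# 构造 95:5 的不均衡标签（假设性示例数据）
y_true = np.array([0] * 95 + [1] * 5)
# 一个“全部预测为负类”的退化分类器
y_pred = np.zeros(100)

acc = accuracy_score(y_true, y_pred)   # 0.95，看似不错
auc = roc_auc_score(y_true, y_pred)    # 0.5，相当于随机猜
print(acc, auc)
```

<p>这也是下面统一用 roc_auc_score 做评估准则的原因。</p>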
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span 
class="line">61</span><br><span class="line">62</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 正负样本不均衡，就没有办法直接用GridSearchCV了</span></span><br><span class="line"><span class="comment"># from sklearn.model_selection import GridSearchCV</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> StratifiedKFold,cross_val_score,ParameterGrid</span><br><span class="line"></span><br><span class="line"><span class="keyword">from</span> sklearn.linear_model <span class="keyword">import</span> LogisticRegression</span><br><span class="line"><span class="keyword">from</span> sklearn.ensemble <span class="keyword">import</span> RandomForestClassifier</span><br><span class="line"><span class="keyword">from</span> xgboost <span class="keyword">import</span> XGBClassifier</span><br><span class="line"></span><br><span class="line"><span class="comment"># 正负样本不均衡，用auc评估准则</span></span><br><span class="line"><span class="keyword">from</span> sklearn.metrics <span class="keyword">import</span> roc_auc_score,make_scorer</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> warnings</span><br><span class="line">warnings.filterwarnings(<span class="string">'ignore'</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 自定义切分k折的训练集、验证集、测试集</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">nest_cv</span><span class="params">(Classifier,X,y,outer_cv,inner_cv,param_grid)</span>:</span></span><br><span class="line">    outer_scores = []</span><br><span class="line">    outer_best_params = []</span><br><span class="line">    <span class="comment"># 首先，切出来测试集test_indexes</span></span><br><span class="line">    <span class="keyword">for</span> trival_indexes,test_indexes <span class="keyword">in</span> 
outer_cv.split(X,y):</span><br><span class="line">        </span><br><span class="line">        best_score = -np.inf</span><br><span class="line">        best_param = <span class="literal">None</span></span><br><span class="line">        </span><br><span class="line">        <span class="keyword">for</span> params <span class="keyword">in</span> param_grid:</span><br><span class="line">            inner_scores = []</span><br><span class="line">            <span class="comment"># 再从trival_indexes中切出来验证集</span></span><br><span class="line">            <span class="keyword">for</span> train_indexes,val_indexes <span class="keyword">in</span> inner_cv.split(X.iloc[trival_indexes],y.iloc[trival_indexes]):</span><br><span class="line">                </span><br><span class="line">                <span class="comment"># 选择分类器</span></span><br><span class="line">                clf = Classifier(**params)</span><br><span class="line">                </span><br><span class="line">                <span class="comment"># 训练</span></span><br><span class="line">                clf.fit(X.iloc[train_indexes],y.iloc[train_indexes])</span><br><span class="line">                </span><br><span class="line">                <span class="comment"># 选用auc指标来评估</span></span><br><span class="line">                y_true = y.iloc[val_indexes]</span><br><span class="line">                y_score = clf.predict(X.iloc[val_indexes])</span><br><span class="line">                score = roc_auc_score(y_true,y_score)</span><br><span class="line">                </span><br><span class="line">                <span class="comment"># 将当前折的score缓存到inner_scores中</span></span><br><span class="line">                inner_scores.append(score)</span><br><span class="line">            </span><br><span class="line">            mean_score = np.mean(inner_scores)</span><br><span class="line">            </span><br><span class="line">            <span class="keyword">if</span> mean_score &gt; 
best_score:</span><br><span class="line">                best_score = mean_score</span><br><span class="line">                best_param = params</span><br><span class="line">        </span><br><span class="line">        clf = Classifier(**best_param)</span><br><span class="line">        clf.fit(X.iloc[trival_indexes],y.iloc[trival_indexes])</span><br><span class="line">        </span><br><span class="line">        y_test_true = y.iloc[test_indexes]</span><br><span class="line">        y_test_score = clf.predict(X.iloc[test_indexes])</span><br><span class="line">        </span><br><span class="line">        test_score = roc_auc_score(y_test_true,y_test_score)</span><br><span class="line">        </span><br><span class="line">        outer_scores.append(test_score)</span><br><span class="line">        outer_best_params.append(best_param)</span><br><span class="line">        </span><br><span class="line">    <span class="keyword">return</span> outer_scores,outer_best_params</span><br></pre></td></tr></table></figure>
<pre><code>time: 159 ms
</code></pre><ul>
<li>逻辑回归</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> ParameterGrid</span><br><span class="line">param_grid_lr = &#123;<span class="string">'penalty'</span>: [<span class="string">'l1'</span>,<span class="string">'l2'</span>],</span><br><span class="line">              <span class="string">'C'</span>: [<span class="number">0.001</span>, <span class="number">0.01</span>, <span class="number">0.1</span>, <span class="number">1</span>, <span class="number">10</span>, <span class="number">100</span>]&#125;</span><br><span class="line"></span><br><span class="line">scores,params = nest_cv(LogisticRegression,X=X,y=y,</span><br><span class="line">                        outer_cv=StratifiedKFold(n_splits=<span class="number">5</span>),</span><br><span class="line">                        inner_cv=StratifiedKFold(n_splits=<span class="number">5</span>),</span><br><span class="line">                        param_grid=ParameterGrid(param_grid_lr))</span><br><span class="line"></span><br><span class="line"><span class="comment"># 最好的超参数 和 最高得分</span></span><br><span class="line"><span class="keyword">for</span> (s,p) <span class="keyword">in</span> zip(scores,params):</span><br><span class="line">    print(<span class="string">"交叉验证得分为：&#123;&#125; 参数为：&#123;&#125;"</span>.format(s,p))</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">交叉验证得分为：<span class="number">0.5215189496335896</span> 参数为：&#123;<span class="string">'C'</span>: <span class="number">0.1</span>, <span class="string">'penalty'</span>: <span class="string">'l2'</span>&#125;</span><br><span class="line">交叉验证得分为：<span class="number">0.5040914369249088</span> 参数为：&#123;<span class="string">'C'</span>: <span class="number">0.1</span>, <span class="string">'penalty'</span>: <span class="string">'l2'</span>&#125;</span><br><span class="line">交叉验证得分为：<span class="number">0.5116451576392979</span> 参数为：&#123;<span class="string">'C'</span>: <span class="number">100</span>, <span class="string">'penalty'</span>: <span class="string">'l1'</span>&#125;</span><br><span class="line">交叉验证得分为：<span class="number">0.5164233146358849</span> 参数为：&#123;<span class="string">'C'</span>: <span class="number">1</span>, <span class="string">'penalty'</span>: <span class="string">'l2'</span>&#125;</span><br><span class="line">交叉验证得分为：<span class="number">0.5195360490669915</span> 参数为：&#123;<span class="string">'C'</span>: <span class="number">1</span>, <span class="string">'penalty'</span>: <span class="string">'l2'</span>&#125;</span><br><span class="line">time: <span class="number">1</span>min <span class="number">49</span>s</span><br></pre></td></tr></table></figure>
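<p>这里传给 nest_cv 的 ParameterGrid 做的事情很简单：把参数候选字典展开成所有组合的字典列表，内层循环遍历的就是这些字典。一个最小示意：</p>

```python
from sklearn.model_selection import ParameterGrid

# 2 个 C 候选 × 2 种 penalty，展开成 4 个参数字典
grid = ParameterGrid({'C': [0.1, 1], 'penalty': ['l1', 'l2']})
combos = list(grid)
print(len(combos))                             # 4
print({'C': 0.1, 'penalty': 'l1'} in combos)   # True
```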
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 输出模型</span></span><br><span class="line">lr = LogisticRegression(C=<span class="number">0.1</span>,penalty=<span class="string">'l2'</span>)</span><br><span class="line">lr</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">LogisticRegression(C=<span class="number">0.1</span>, class_weight=<span class="literal">None</span>, dual=<span class="literal">False</span>, fit_intercept=<span class="literal">True</span>,</span><br><span class="line">                   intercept_scaling=<span class="number">1</span>, l1_ratio=<span class="literal">None</span>, max_iter=<span class="number">100</span>,</span><br><span class="line">                   multi_class=<span class="string">'warn'</span>, n_jobs=<span class="literal">None</span>, penalty=<span class="string">'l2'</span>,</span><br><span class="line">                   random_state=<span class="literal">None</span>, solver=<span class="string">'warn'</span>, tol=<span class="number">0.0001</span>, verbose=<span class="number">0</span>,</span><br><span class="line">                   warm_start=<span class="literal">False</span>)</span><br><span class="line"></span><br><span class="line">time: <span class="number">30.4</span> ms</span><br></pre></td></tr></table></figure>
<ul>
<li>Xgboost—贝叶斯优化器来调参</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> hyperopt <span class="keyword">import</span> fmin, tpe, hp, STATUS_OK, Trials</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">hpt_train_test</span><span class="params">(Classifier,X,y,params)</span>:</span></span><br><span class="line">    clf = Classifier(**params)</span><br><span class="line">    scores = []</span><br><span class="line">    <span class="keyword">for</span> train_indexes,val_indexes <span class="keyword">in</span> StratifiedKFold(n_splits=<span class="number">5</span>).split(X,y):</span><br><span class="line">        clf.fit(X.iloc[train_indexes],y.iloc[train_indexes])</span><br><span class="line">        y_true = y.iloc[val_indexes]</span><br><span class="line">        y_score = 
clf.predict(X.iloc[val_indexes])</span><br><span class="line">        s = roc_auc_score(y_true,y_score)</span><br><span class="line">        scores.append(s)</span><br><span class="line">    <span class="keyword">return</span> np.mean(scores)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">f</span><span class="params">(params)</span>:</span></span><br><span class="line">    roc = hpt_train_test(XGBClassifier,X_train,y_train,params)</span><br><span class="line">    <span class="comment"># 因为要fmin，所以取负值</span></span><br><span class="line">    <span class="keyword">return</span> &#123;<span class="string">'loss'</span>: -roc, <span class="string">'status'</span>: STATUS_OK&#125;</span><br><span class="line"></span><br><span class="line">space4xgb = &#123;</span><br><span class="line">    <span class="string">'eta'</span>: hp.uniform(<span class="string">'eta'</span>, <span class="number">0.01</span>,<span class="number">0.1</span>),</span><br><span class="line">    <span class="string">'gamma'</span>: hp.uniform(<span class="string">'gamma'</span>, <span class="number">0.05</span>,<span class="number">1</span>),</span><br><span class="line">    <span class="string">'max_depth'</span>: hp.choice(<span class="string">'max_depth'</span>, range(<span class="number">3</span>,<span class="number">26</span>)),</span><br><span class="line">    <span class="string">'min_child_weight'</span>: hp.choice(<span class="string">'min_child_weight'</span>, range(<span class="number">1</span>,<span class="number">8</span>,<span class="number">2</span>)),</span><br><span class="line">    <span class="string">'subsample'</span>: hp.uniform(<span class="string">'subsample'</span>, <span class="number">0.6</span>,<span class="number">1</span>),</span><br><span class="line">    <span class="string">'colsample_bytree'</span>:hp.uniform(<span class="string">'colsample_bytree'</span>,<span 
class="number">0.01</span>,<span class="number">1</span>),</span><br><span class="line">    <span class="string">'lambda'</span>:hp.uniform(<span class="string">'lambda'</span>,<span class="number">0.01</span>,<span class="number">1</span>)</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">trials = Trials()</span><br><span class="line">best = fmin(f, space4xgb, algo=tpe.suggest, max_evals=<span class="number">50</span>, trials=trials)</span><br><span class="line">print(<span class="string">'best:'</span>,best)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="number">100</span>%|██████████| <span class="number">50</span>/<span class="number">50</span> [<span class="number">54</span>:<span class="number">45</span>&lt;<span class="number">00</span>:<span class="number">00</span>, <span class="number">55.58</span>s/it, best loss: <span class="number">-0.5832977753344738</span>]</span><br><span class="line">best: &#123;<span class="string">'colsample_bytree'</span>: <span class="number">0.885302641042877</span>, <span class="string">'eta'</span>: <span class="number">0.03291623475879335</span>, <span class="string">'gamma'</span>: <span class="number">0.36202750013913637</span>, <span class="string">'lambda'</span>: <span class="number">0.5405265501553331</span>, <span class="string">'max_depth'</span>: <span class="number">6</span>, <span class="string">'min_child_weight'</span>: <span class="number">1</span>, <span class="string">'subsample'</span>: <span class="number">0.6032183583041595</span>&#125;</span><br><span class="line">time: <span class="number">54</span>min <span class="number">45</span>s</span><br></pre></td></tr></table></figure>
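<p>一个容易踩的坑：fmin 对 hp.choice 类型的超参返回的是候选列表的<strong>下标</strong>，而不是取值本身。上面 best 里的 'max_depth': 6 对应的是 range(3, 26) 中下标为 6 的元素，即 9；'min_child_weight': 1 对应 range(1, 8, 2) 中下标为 1 的元素，即 3。hyperopt 提供了 space_eval(space4xgb, best) 可以直接还原真实取值；下标换算本身很简单：</p>

```python
# hp.choice('max_depth', range(3, 26)) 在 fmin 的返回值里给出的是下标
max_depth_choices = list(range(3, 26))
actual_max_depth = max_depth_choices[6]     # 对应 best 里的 'max_depth': 6

min_child_choices = list(range(1, 8, 2))    # [1, 3, 5, 7]
actual_mcw = min_child_choices[1]           # 对应 best 里的 'min_child_weight': 1

print(actual_max_depth, actual_mcw)         # 9 3，而不是 6 1
```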
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">params = &#123;</span><br><span class="line">          <span class="string">'eta'</span>: <span class="number">0.03291623475879335</span>, </span><br><span class="line">          <span class="string">'gamma'</span>: <span class="number">0.36202750013913637</span>, </span><br><span class="line">          <span class="string">'max_depth'</span>: int(<span class="number">6</span>), </span><br><span class="line">          <span class="string">'min_child_weight'</span>: int(<span class="number">1</span>), </span><br><span class="line">          <span class="string">'subsample'</span>: <span class="number">0.6032183583041595</span>,</span><br><span class="line">          <span class="string">'lambda'</span>: <span class="number">0.5405265501553331</span>,</span><br><span class="line">          <span class="string">'colsample_bytree'</span>: <span class="number">0.885302641042877</span></span><br><span class="line">          &#125;</span><br><span class="line">xgb = XGBClassifier(**params)</span><br><span class="line">xgb.fit(X_train,y_train)</span><br><span class="line"></span><br><span class="line">xgb_pred = xgb.predict(X_test)</span><br><span class="line">xgb_score = roc_auc_score(y_true,xgb_pred)</span><br><span class="line">xgb_score</span><br></pre></td></tr></table></figure>
<pre><code>0.5906123234440753
</code></pre><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">xgb_score_train = roc_auc_score(y_train,xgb.predict(X_train))</span><br><span class="line">xgb_score_train</span><br></pre></td></tr></table></figure>
<pre><code>0.6144750742639435

time: 545 ms
</code></pre><p>这个其实也是有点过拟合了！</p>
<h4 id="Q5：混淆矩阵评估指标"><a href="#Q5：混淆矩阵评估指标" class="headerlink" title="Q5：混淆矩阵评估指标"></a>Q5：混淆矩阵评估指标</h4><ul>
<li>查看sklearn的官方说明，了解混淆矩阵等评估标准，并对此例进行评估。</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># your code here</span></span><br><span class="line"><span class="comment"># 这里我还是用逻辑回归的结果来评估吧！</span></span><br><span class="line"><span class="keyword">from</span> sklearn.metrics <span class="keyword">import</span> confusion_matrix,f1_score,fbeta_score</span><br><span class="line"></span><br><span class="line">lr.fit(X_train,y_train)</span><br><span class="line">y_true = y_test</span><br><span class="line">y_pred = lr.predict(X_test)</span><br></pre></td></tr></table></figure>
<pre><code>time: 2.13 s
</code></pre><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">conf = confusion_matrix(y_true,y_pred)</span><br><span class="line">f1 = f1_score(y_true,y_pred)</span><br><span class="line">f_beta = fbeta_score(y_true,y_pred,<span class="number">5</span>)</span><br><span class="line">print(<span class="string">"混淆矩阵为:\n"</span>,conf)</span><br><span class="line">print(<span class="string">"F_1分数为:\n"</span>,f1)</span><br><span class="line">print(<span class="string">"F_beta分数:\n"</span>,f_beta)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">混淆矩阵为:</span><br><span class="line"> [[<span class="number">25256</span>    <span class="number">74</span>]</span><br><span class="line"> [ <span class="number">1733</span>    <span class="number">99</span>]]</span><br><span class="line">F_1分数为:</span><br><span class="line"> <span class="number">0.09875311720698254</span></span><br><span class="line">F_beta分数:</span><br><span class="line"> <span class="number">0.0559893850738477</span></span><br><span class="line">time: <span class="number">71.4</span> ms</span><br></pre></td></tr></table></figure>
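<p>可以用上面打印的混淆矩阵手工验证这些指标的来源（正类为 1，即 fraud），F1 = 2TP / (2TP + FP + FN)。注意召回率极低，说明绝大多数 fraud 都被漏掉了：</p>

```python
# 取自上面打印的混淆矩阵：[[TN, FP], [FN, TP]]
tn, fp, fn, tp = 25256, 74, 1733, 99

precision = tp / (tp + fp)                          # 99 / 173  ≈ 0.572
recall    = tp / (tp + fn)                          # 99 / 1832 ≈ 0.054，召回极低
f1 = 2 * precision * recall / (precision + recall)
print(f1)                                           # ≈ 0.098753，与 f1_score 的输出一致
```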
<h4 id="Q6：自定义2分类中的判决边界（调整0-5）"><a href="#Q6：自定义2分类中的判决边界（调整0-5）" class="headerlink" title="Q6：自定义2分类中的判决边界（调整0.5）"></a>Q6：自定义2分类中的判决边界（调整0.5）</h4><ul>
<li>银行通常会有更严格的要求，因为fraud带来的后果通常比较严重，一般我们会调整模型的标准。<br>比如在各种分类模型当中，一般我们的概率判定边界为0.5，但是我们可以把阈值设定低一些，来提高模型的“敏感度”<br>试试看把阈值设定为0.3，再看看这个时候的混淆矩阵等评估指标。</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 获取到标签为第二个类别的预测概率</span></span><br><span class="line">y_pred_prob = lr.predict_proba(X_test)[:,<span class="number">1</span>]</span><br><span class="line"><span class="comment"># 这里判定阈值为大于0.3的表示为1</span></span><br><span class="line">y_pred_thres_3 = np.array(y_pred_prob &gt; <span class="number">0.3</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 重新计算评估指标</span></span><br><span class="line">conf_thres_3 = confusion_matrix(y_true,y_pred_thres_3)</span><br><span class="line"><span class="comment"># F1_score</span></span><br><span class="line">f1_thres_3 = f1_score(y_true,y_pred_thres_3)</span><br><span class="line"><span class="comment"># F-beta_score</span></span><br><span class="line">f_beta_thres_3 = fbeta_score(y_true,y_pred_thres_3,<span class="number">5</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># 输出计算的结果</span></span><br><span class="line">print(<span class="string">"混淆矩阵为:\n"</span>,conf_thres_3)</span><br><span class="line">print(<span class="string">"F_1分数为:\n"</span>,f1_thres_3)</span><br><span class="line">print(<span class="string">"F_beta分数:\n"</span>,f_beta_thres_3)</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">混淆矩阵为:</span><br><span class="line"> [[<span class="number">25104</span>   <span class="number">226</span>]</span><br><span class="line"> [ <span class="number">1592</span>   <span class="number">240</span>]]</span><br><span class="line">F_1分数为:</span><br><span class="line"> <span class="number">0.20887728459530025</span></span><br><span class="line">F_beta分数:</span><br><span class="line"> <span class="number">0.13487226040721045</span></span><br><span class="line">time: <span class="number">149</span> ms</span><br></pre></td></tr></table></figure>
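<p>除了手工试 0.5、0.3，sklearn 的 precision_recall_curve 可以一次性扫过所有候选阈值，便于系统地在 precision 和 recall 之间做取舍。下面用几条假设性数据演示用法，实际使用时把 y_prob_demo 换成 lr.predict_proba(X_test)[:, 1] 即可：</p>

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# 假设性的小例子：6 个样本的真实标签和预测概率
y_true_demo = np.array([0, 0, 1, 0, 1, 1])
y_prob_demo = np.array([0.1, 0.35, 0.4, 0.6, 0.7, 0.9])

precision, recall, thresholds = precision_recall_curve(y_true_demo, y_prob_demo)
# thresholds[i] 对应 (precision[i], recall[i])；阈值越低，recall 越高
for t, p, r in zip(thresholds, precision, recall):
    print("阈值 > %.2f: precision=%.2f, recall=%.2f" % (t, p, r))
```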
<h4 id="Q7：特征筛选并重建模型"><a href="#Q7：特征筛选并重建模型" class="headerlink" title="Q7：特征筛选并重建模型"></a>Q7：特征筛选并重建模型</h4><ul>
<li>尝试对不同特征的重要度进行排序，通过特征选择的方式，对特征进行筛选。并重新建模，观察此时的模型准确率等评估指标。</li>
</ul>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># your code here</span></span><br><span class="line"><span class="keyword">from</span> sklearn.feature_selection <span class="keyword">import</span> RFE</span><br><span class="line"></span><br><span class="line"><span class="keyword">from</span> sklearn.model_selection <span class="keyword">import</span> cross_val_predict</span><br><span class="line"></span><br><span class="line"><span class="comment"># 根据决策树的训练情况，查看特征的重要程度。</span></span><br><span class="line">tree_clf.fit(X,y)</span><br><span class="line">feature_imp = zip(X.columns,tree_clf.feature_importances_)</span><br><span class="line">feature_imp = pd.DataFrame(feature_imp,columns=[<span class="string">"features"</span>,<span class="string">"importance"</span>])</span><br><span class="line">feature_imp.sort_values(<span class="string">'importance'</span>,ascending=<span class="literal">False</span>)</span><br></pre></td></tr></table></figure>
<div class="table-container">
<table>
<thead>
<tr>
<th></th>
<th>Feature</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>RevolvingUtilizationOfUnsecuredLines</td>
<td>0.153979</td>
</tr>
<tr>
<td>3</td>
<td>DebtRatio</td>
<td>0.145688</td>
</tr>
<tr>
<td>4</td>
<td>MonthlyIncome</td>
<td>0.138199</td>
</tr>
<tr>
<td>1</td>
<td>age</td>
<td>0.133760</td>
</tr>
<tr>
<td>5</td>
<td>NumberOfOpenCreditLinesAndLoans</td>
<td>0.103903</td>
</tr>
<tr>
<td>2</td>
<td>NumberOfTime30-59DaysPastDueNotWorse</td>
<td>0.093550</td>
</tr>
<tr>
<td>6</td>
<td>NumberOfTimes90DaysLate</td>
<td>0.072241</td>
</tr>
<tr>
<td>8</td>
<td>NumberOfTime60-89DaysPastDueNotWorse</td>
<td>0.059271</td>
</tr>
<tr>
<td>9</td>
<td>NumberOfDependents</td>
<td>0.050715</td>
</tr>
<tr>
<td>7</td>
<td>NumberRealEstateLoansOrLines</td>
<td>0.048693</td>
</tr>
</tbody>
</table>
</div>
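<p>除了下面用到的 RFE（递归特征消除），也可以直接按上表这样的重要度做阈值筛选，即 sklearn 的 SelectFromModel。一个在合成数据上的最小示意（假设性数据，非原始信用数据）：</p>

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier

X_demo, y_demo = make_classification(n_samples=500, n_features=10, random_state=0)

# threshold="mean"：只保留重要度不低于平均值的特征
selector = SelectFromModel(DecisionTreeClassifier(random_state=0),
                           threshold="mean")
X_sel = selector.fit_transform(X_demo, y_demo)
print(X_demo.shape, "->", X_sel.shape)
```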
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 这里就选用决策树进行作业题的练习演示吧，其实随机森林在我的小macos上跑不动，就选个简单的模型吧！</span></span><br><span class="line">rfe = RFE(estimator=tree_clf, n_features_to_select=<span class="number">6</span>)</span><br><span class="line">X_rfe = rfe.fit_transform(X,y)</span><br><span class="line">X_rfe.shape</span><br><span class="line"></span><br><span class="line">X_train_rfe,X_test_rfe,y_train_rfe,y_test_rfe = train_test_split(X_rfe,y,stratify=y)</span><br><span class="line">tree_clf.fit(X_train_rfe,y_train_rfe)</span><br><span class="line"></span><br><span class="line">y_true = y_test_rfe</span><br><span class="line">tree_pred = tree_clf.predict(X_test_rfe)</span><br><span class="line">tree_score_test = roc_auc_score(y_true,tree_pred)</span><br><span class="line">tree_score_train = roc_auc_score(y_train_rfe,tree_clf.predict(X_train_rfe))</span><br><span class="line">tree_score_test,tree_score_train</span><br></pre></td></tr></table></figure>
<pre><code>(0.5713903569390101, 0.999818016378526)
</code></pre><p>贼尴尬，ROC-AUC 不升反降。看了一下训练指标（0.9998），额，是严重的过拟合了。</p>
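<p>训练集 AUC 接近 1.0 而测试集只有 0.57，是典型的“把训练集背下来了”。对决策树最直接的缓解手段是限制 max_depth、min_samples_leaf 这类复杂度参数。下面在合成数据（假设性数据，非原始信用数据）上对比一下不限深度和限制深度后的训练/测试差距：</p>

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 假设性的不均衡二分类数据
X_demo, y_demo = make_classification(n_samples=2000, n_features=10,
                                     weights=[0.93, 0.07], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo,
                                          stratify=y_demo, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

for name, clf in [("不限深度", deep), ("max_depth=4", shallow)]:
    auc_tr = roc_auc_score(y_tr, clf.predict_proba(X_tr)[:, 1])
    auc_te = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(name, "train=%.3f test=%.3f" % (auc_tr, auc_te))
```

<p>不限深度的树训练 AUC 接近 1.0，限制深度后训练分数下降、训练/测试差距收窄，这正是我们想要的正则化效果。</p>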

      
    </div>
    <footer class="article-footer">
      
      
      
        
	<div id="comment">
		<!-- 来必力City版安装代码 -->
		<div id="lv-container" data-id="city" data-uid="MTAyMC80NTk2OS8yMjQ4MA==">
		<script type="text/javascript">
		   (function(d, s) {
		       var j, e = d.getElementsByTagName(s)[0];

		       if (typeof LivereTower === 'function') { return; }

		       j = d.createElement(s);
		       j.src = 'https://cdn-city.livere.com/js/embed.dist.js';
		       j.async = true;

		       e.parentNode.insertBefore(j, e);
		   })(document, 'script');
		</script>
		<noscript>为正常使用来必力评论功能请激活JavaScript</noscript>
		</div>
		<!-- City版安装代码已完成 -->
	</div>



      
      
        
  <ul class="article-tag-list"><li class="article-tag-list-item"><a class="article-tag-list-link" href="/tags/sklearn/">sklearn</a></li></ul>

      

    </footer>
  </div>
  
    
<nav id="article-nav">
  
    <a href="/2019/10/21/ML_19_Ask_Anwser/" id="article-nav-newer" class="article-nav-link-wrap">
      <strong class="article-nav-caption">上一篇</strong>
      <div class="article-nav-title">
        
          【机器学习】灵魂19问
        
      </div>
    </a>
  
  
    <a href="/2019/10/06/NLP_WordCloud/" id="article-nav-older" class="article-nav-link-wrap">
      <strong class="article-nav-caption">下一篇</strong>
      <div class="article-nav-title">【NLP】WordCloud-词云</div>
    </a>
  
</nav>

  
</article>

<!-- Table of Contents -->

  <aside id="toc-sidebar">
    <div id="toc" class="toc-article">
    <strong class="toc-title">文章目录</strong>
    
        <ol class="nav"><li class="nav-item nav-level-2"><a class="nav-link" href="#一、ML的实操的灵魂4问"><span class="nav-number">1.</span> <span class="nav-text">一、ML的实操的灵魂4问</span></a><ol class="nav-child"><li class="nav-item nav-level-3"><a class="nav-link" href="#Q1：请写出你了解的机器学习特征工程操作，以及它的意义"><span class="nav-number">1.1.</span> <span class="nav-text">Q1：请写出你了解的机器学习特征工程操作，以及它的意义</span></a></li><li class="nav-item nav-level-3"><a class="nav-link" href="#Q2：请写出上述特征工程操作的sklearn或者pandas实现方式"><span class="nav-number">1.2.</span> <span class="nav-text">Q2：请写出上述特征工程操作的sklearn或者pandas实现方式</span></a></li><li class="nav-item nav-level-3"><a class="nav-link" href="#Q3：模型评估中的留一法，留出法，交叉验证分别是什么操作？"><span class="nav-number">1.3.</span> <span class="nav-text">Q3：模型评估中的留一法，留出法，交叉验证分别是什么操作？</span></a></li><li class="nav-item nav-level-3"><a class="nav-link" href="#Q4：如何理解模型的过拟合与欠拟合，以及如何解决？"><span class="nav-number">1.4.</span> <span class="nav-text">Q4：如何理解模型的过拟合与欠拟合，以及如何解决？</span></a></li></ol></li><li class="nav-item nav-level-2"><a class="nav-link" href="#二、实操应用——信用卡欺诈"><span class="nav-number">2.</span> <span class="nav-text">二、实操应用——信用卡欺诈</span></a><ol class="nav-child"><li class="nav-item nav-level-3"><a class="nav-link" href="#前期数据导入-预览及处理"><span class="nav-number">2.1.</span> <span class="nav-text">前期数据导入,预览及处理</span></a></li><li class="nav-item nav-level-3"><a class="nav-link" href="#实操内容"><span class="nav-number">2.2.</span> <span class="nav-text">实操内容</span></a><ol class="nav-child"><li class="nav-item nav-level-4"><a class="nav-link" href="#Q1：数据切分（样本不均衡；stratify）"><span class="nav-number">2.2.1.</span> <span class="nav-text">Q1：数据切分（样本不均衡；stratify）</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q2：用LR建模分析特征的重要程度"><span class="nav-number">2.2.2.</span> <span class="nav-text">Q2：用LR建模分析特征的重要程度</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q3：使用不用的分类模型进行建模"><span class="nav-number">2.2.3.</span> <span 
class="nav-text">Q3：使用不用的分类模型进行建模</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q4：网格搜索交叉验证、贝叶斯优化器调参"><span class="nav-number">2.2.4.</span> <span class="nav-text">Q4：网格搜索交叉验证、贝叶斯优化器调参</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q5：混淆矩阵评估指标"><span class="nav-number">2.2.5.</span> <span class="nav-text">Q5：混淆矩阵评估指标</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q6：自定义2分类中的判决边界（调整0-5）"><span class="nav-number">2.2.6.</span> <span class="nav-text">Q6：自定义2分类中的判决边界（调整0.5）</span></a></li><li class="nav-item nav-level-4"><a class="nav-link" href="#Q7：特征筛选并重建模型"><span class="nav-number">2.2.7.</span> <span class="nav-text">Q7：特征筛选并重建模型</span></a></li></ol></li></ol></li></ol>
    
    </div>
  </aside>

</section>
        
      </div>
      
      <footer id="footer">
  

  <div class="container">
      	<div class="row">
	      <p> Powered by <a href="http://hexo.io/" target="_blank">Hexo</a> and <a href="https://github.com/iTimeTraveler/hexo-theme-hiker" target="_blank">Hexo-theme-hiker</a> </p>
	      <p id="copyRightEn">Copyright &copy; 2013 - 2020 MaxMa All Rights Reserved.</p>
	      
	      
    		<p class="busuanzi_uv">
				访客数 : <span id="busuanzi_value_site_uv"></span> |  
				访问量 : <span id="busuanzi_value_site_pv"></span>
		    </p>
  		   
		</div>

		
  </div>
</footer>


<!-- min height -->

<script>
    var wrapdiv = document.getElementById("wrap");
    var contentdiv = document.getElementById("content");
    var allheader = document.getElementById("allheader");

    wrapdiv.style.minHeight = document.body.offsetHeight + "px";
    if (allheader != null) {
      contentdiv.style.minHeight = document.body.offsetHeight - allheader.offsetHeight - document.getElementById("footer").offsetHeight + "px";
    } else {
      contentdiv.style.minHeight = document.body.offsetHeight - document.getElementById("footer").offsetHeight + "px";
    }
</script>
    </div>
    <!-- <nav id="mobile-nav">
  
    <a href="/" class="mobile-nav-link">Home</a>
  
    <a href="/archives" class="mobile-nav-link">Archives</a>
  
    <a href="/categories" class="mobile-nav-link">Categories</a>
  
    <a href="/tags" class="mobile-nav-link">Tags</a>
  
    <a href="/about" class="mobile-nav-link">About</a>
  
</nav> -->
    

<!-- mathjax config similar to math.stackexchange -->

<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    tex2jax: {
      inlineMath: [ ['$','$'], ["\\(","\\)"] ],
      processEscapes: true
    }
  });
</script>

<script type="text/x-mathjax-config">
    MathJax.Hub.Config({
      tex2jax: {
        skipTags: ['script', 'noscript', 'style', 'textarea', 'pre', 'code']
      }
    });
</script>

<script type="text/x-mathjax-config">
    MathJax.Hub.Queue(function() {
        var all = MathJax.Hub.getAllJax(), i;
        for(i=0; i < all.length; i += 1) {
            all[i].SourceElement().parentNode.className += ' has-jax';
        }
    });
</script>

<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>


  <link rel="stylesheet" href="/fancybox/jquery.fancybox.css">
  <script src="/fancybox/jquery.fancybox.pack.js"></script>


<script src="/js/scripts.js"></script>




  <script src="/js/dialog.js"></script>








	<div style="display: none;">
    <script src="https://s95.cnzz.com/z_stat.php?id=1260716016&web_id=1260716016" language="JavaScript"></script>
  </div>



	<script async src="//busuanzi.ibruce.info/busuanzi/2.3/busuanzi.pure.mini.js">
	</script>






  </div>

  <div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true" style="display: none;">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <h2 class="modal-title" id="myModalLabel">设置</h2>
      </div>
      <hr style="margin-top:0px; margin-bottom:0px; width:80%; border-top: 3px solid #000;">
      <hr style="margin-top:2px; margin-bottom:0px; width:80%; border-top: 1px solid #000;">


      <div class="modal-body">
          <div style="margin:6px;">
            <a data-toggle="collapse" data-parent="#accordion" href="#collapseOne" onclick="javascript:setFontSize();" aria-expanded="true" aria-controls="collapseOne">
              正文字号大小
            </a>
          </div>
          <div id="collapseOne" class="panel-collapse collapse" role="tabpanel" aria-labelledby="headingOne">
          <div class="panel-body">
            您已调整页面字体大小
          </div>
        </div>
      


          <div style="margin:6px;">
            <a data-toggle="collapse" data-parent="#accordion" href="#collapseTwo" onclick="javascript:setBackground();" aria-expanded="true" aria-controls="collapseTwo">
              夜间护眼模式
            </a>
        </div>
          <div id="collapseTwo" class="panel-collapse collapse" role="tabpanel" aria-labelledby="headingTwo">
          <div class="panel-body">
            夜间模式已经开启，再次单击按钮即可关闭 
          </div>
        </div>

        <div>
            <a data-toggle="collapse" data-parent="#accordion" href="#collapseThree" aria-expanded="true" aria-controls="collapseThree">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;关 于&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</a>
        </div>
         <div id="collapseThree" class="panel-collapse collapse" role="tabpanel" aria-labelledby="headingThree">
          <div class="panel-body">
            MaxMa
          </div>
          <div class="panel-body">
            Copyright © 2020 MaxMa All Rights Reserved.
          </div>
        </div>
      </div>


      <hr style="margin-top:0px; margin-bottom:0px; width:80%; border-top: 1px solid #000;">
      <hr style="margin-top:2px; margin-bottom:0px; width:80%; border-top: 3px solid #000;">
      <div class="modal-footer">
        <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>
      </div>
    </div>
  </div>
</div>
  
  <a id="rocket" href="#top" class=""></a>
  <script type="text/javascript" src="/js/totop.js?v=1.0.0" async=""></script>
  
    <a id="menu-switch"><i class="fa fa-bars fa-lg"></i></a>
  
<script type="text/x-mathjax-config">
    MathJax.Hub.Config({
        tex2jax: {
            inlineMath: [ ["$","$"], ["\\(","\\)"] ],
            skipTags: ['script', 'noscript', 'style', 'textarea', 'pre', 'code'],
            processEscapes: true
        }
    });
    MathJax.Hub.Queue(function() {
        var all = MathJax.Hub.getAllJax();
        for (var i = 0; i < all.length; ++i)
            all[i].SourceElement().parentNode.className += ' has-jax';
    });
</script>
<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
</body>
</html>