<html>
 <head>
  <meta charset="UTF-8">
 </head>
 <body>
  <h3 data-lake-id="0b479686" id="0b479686"><span data-lake-id="u20d9535c" id="u20d9535c">Problem Discovery</span></h3>
  <p data-lake-id="u0d49e1f7" id="u0d49e1f7"><br></p>
  <p data-lake-id="u5aa4ddab" id="u5aa4ddab"><span data-lake-id="ucf8255fe" id="ucf8255fe">Recently we have been getting frequent database alerts about abnormal CPU spikes. Monitoring confirmed that the CPU did spike intermittently, often getting maxed out.</span></p>
  <p data-lake-id="u970d1e1e" id="u970d1e1e"><br></p>
  <p data-lake-id="u7f346947" id="u7f346947"><img src="https://ata2-img.oss-cn-zhangjiakou.aliyuncs.com/neweditor/a1786489-1f44-4c39-bea4-85ca25a45433.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_45%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="ud5ce12d7" id="ud5ce12d7"><br></p>
  <h3 data-lake-id="6e621f2f" id="6e621f2f"><span data-lake-id="ua92a70e1" id="ua92a70e1">Problem Investigation</span></h3>
  <p data-lake-id="uaddd8173" id="uaddd8173"><br></p>
  <p data-lake-id="u31a6ff7b" id="u31a6ff7b"><span data-lake-id="ue131b0d3" id="ue131b0d3">Looking further at the monitoring, we found that whenever the CPU spiked, a large number of SQL statements spent a long time waiting on locks, averaging around 1.5 seconds and often reaching 4-5 seconds during business peak hours:</span></p>
  <p data-lake-id="u57b3144a" id="u57b3144a"><br></p>
  <p data-lake-id="uf1fa0aa1" id="uf1fa0aa1"><img src="https://ata2-img.oss-cn-zhangjiakou.aliyuncs.com/neweditor/1737f072-f8e8-41d6-b209-79ef04365fd5.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_87%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u032e20e9" id="u032e20e9"><br></p>
  <p data-lake-id="ue73857bf" id="ue73857bf"><span data-lake-id="ud58d29ee" id="ud58d29ee">Drilling into the specific SQL, the culprits turned out to be update statements:</span></p>
  <p data-lake-id="uda4bb33e" id="uda4bb33e"><br></p>
  <p data-lake-id="u26629342" id="u26629342"><img src="https://ata2-img.oss-cn-zhangjiakou.aliyuncs.com/neweditor/5d61c70f-5dce-4717-bd57-6a14039bf708.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_95%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u03872c2c" id="u03872c2c"><br></p>
  <p data-lake-id="ua438626b" id="ua438626b"><span data-lake-id="ud99ec531" id="ud99ec531">The main SQL is shown below; the column in our update condition, number, has a unique index:</span></p>
  <p data-lake-id="uffc2ca16" id="uffc2ca16"><br></p>
  <pre lang="sql"><code>
SET gmt_modified = now(), business_type_enum = ?, product_type_enum = ?
WHERE number = ?
</code></pre>
  <p data-lake-id="u70943cd8" id="u70943cd8"><br></p>
  <p data-lake-id="u57b514e6" id="u57b514e6"><span data-lake-id="u327d8e4b" id="u327d8e4b">Analyzing this SQL statement and combining it with what we observed earlier, </span><strong><span data-lake-id="u457bbb35" id="u457bbb35">most of this statement's time is spent waiting on locks</span></strong><span data-lake-id="u51a569eb" id="u51a569eb">. The symptom is fairly clear now: most likely, the problem occurs when multiple threads try to update the same row at the same time.</span></p>
  <p data-lake-id="u7a410df6" id="u7a410df6"><br></p>
  <p data-lake-id="u27201d65" id="u27201d65"><span data-lake-id="udca0055d" id="udca0055d">This is because </span><strong><span data-lake-id="u75d5c084" id="u75d5c084">InnoDB automatically takes a row lock on update to prevent other threads from modifying the same row concurrently. If multiple threads try to update the same row at once, the threads that fail to acquire the lock must wait for the holder to release it before they can update that row.</span></strong></p>
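  <p><span>On MySQL 8.0 this kind of contention can be observed directly while it happens; a sketch of the usual diagnostic queries (these tables and the sys view ship with stock MySQL 8.0):</span></p>
  <pre lang="sql"><code>
-- Transactions currently open and what they are running
SELECT trx_id, trx_state, trx_started, trx_query
FROM information_schema.innodb_trx;

-- Which session is blocked by which (sys schema helper view)
SELECT waiting_pid, waiting_query, blocking_pid, blocking_query, wait_age
FROM sys.innodb_lock_waits;
</code></pre>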
  <p data-lake-id="ud12546ae" id="ud12546ae"><br></p>
  <p data-lake-id="u9cba2f17" id="u9cba2f17"><span data-lake-id="u94b12a70" id="u94b12a70">With this idea in mind, and considering our actual business scenario, we could basically confirm that the CPU spikes were caused by lock waits from concurrent modifications of the same record.</span></p>
  <p data-lake-id="u543f97e8" id="u543f97e8"><br></p>
  <p data-lake-id="u17adaea2" id="u17adaea2"><span data-lake-id="ubaaade26" id="ubaaade26">The CPU spikes all lined up with the runs of one of our case-merging jobs, whose logic is as follows:</span></p>
  <p data-lake-id="u26b4d9fd" id="u26b4d9fd"><br></p>
  <p data-lake-id="u2b060278" id="u2b060278"><span data-lake-id="u2a339637" id="u2a339637">Periodically scan all orders and user data flagged as fraud risks by the risk-control strategies (fraud_risk_order), merge that data by user, and after merging attach the multiple detail records to a single fraud audit order (fraud_audit_order) used for review.</span></p>
  <p data-lake-id="u19563806" id="u19563806"><br></p>
  <p data-lake-id="u82182ca7" id="u82182ca7"><span data-lake-id="ud7165f09" id="ud7165f09">Pseudocode:</span></p>
  <pre lang="java"><code>
for (fraud_risk_order : fraud_risk_orders) {
	update fraud_audit_order set xxx = 'xx' where fraud_audit_order_no = "the same order number"
}
</code></pre>
  <p data-lake-id="u1d4bd88a" id="u1d4bd88a"><br></p>
  <p data-lake-id="uf3a44568" id="uf3a44568"><span data-lake-id="u0b514e0d" id="u0b514e0d">This problem had not occurred before; it started showing up frequently right after I optimized the merging job's performance to run it distributed with grid tasks.</span></p>
  <p data-lake-id="u6c30f827" id="u6c30f827"><br></p>
  <p data-lake-id="u6a27129d" id="u6a27129d"><span data-lake-id="u5a8b64a4" id="u5a8b64a4">The job simply got faster at scanning the table, so when a single user has many fraud-risk orders under their name, the job now modifies the same audit order concurrently. This causes the contention.</span></p>
  <p data-lake-id="u0ac911bc" id="u0ac911bc"><br></p>
  <h3 data-lake-id="520f9e6c" id="520f9e6c"><span data-lake-id="u69281133" id="u69281133">Fixing the Problem</span></h3>
  <p data-lake-id="ue9e53d62" id="ue9e53d62"><br></p>
  <p data-lake-id="u3468f092" id="u3468f092"><span data-lake-id="u98cbc4ff" id="u98cbc4ff">With the problem located, the next step was to fix it.</span></p>
  <p data-lake-id="ue7c9c446" id="ue7c9c446"><br></p>
  <p data-lake-id="u24e34a58" id="u24e34a58"><span data-lake-id="u7c675f91" id="u7c675f91">Given our business, the fix is straightforward: instead of updating the audit order record by record, do a pre-merge first, then fold the result into one audit order with a single batch update.</span></p>
  <p data-lake-id="ufd6cbb8c" id="ufd6cbb8c"><br></p>
  <p data-lake-id="ue94c1e81" id="ue94c1e81"><span data-lake-id="ue686aa42" id="ue686aa42">The pre-merge is done mainly in SQL in the database, along these lines:</span></p>
  <p data-lake-id="ua22ccb0a" id="ua22ccb0a"><br></p>
  <pre lang="sql"><code>
select
        product_type_enum,
        subject_id,
        subject_id_enum,
        GROUP_CONCAT(distinct(submitter) SEPARATOR ',') as submitters,
        GROUP_CONCAT(distinct(number) SEPARATOR ',') as risk_order_numbers,
        GROUP_CONCAT(distinct(risk_level_enum) SEPARATOR ',') as risk_level_enums ,
        GROUP_CONCAT(distinct(risk_category) SEPARATOR ',') as category_codes

        from fraud_risk_order
        where 
            product_type_enum = "XXX"
            and risk_order_status_enum = 'DRAFT'
        group by subject_id_enum,subject_id
</code></pre>
  <p data-lake-id="u244a2a51" id="u244a2a51"><br></p>
  <p data-lake-id="u3ad0d0f0" id="u3ad0d0f0"><span data-lake-id="uca47f430" id="uca47f430">The SQL above aggregates the records to be merged by subject ID and subject type, and joins each field that needs to be combined, such as submitter, into one comma-separated string via the GROUP_CONCAT function.</span></p>
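  <p><span>As a concrete illustration with hypothetical data, two draft risk orders for the same subject collapse into a single grouped row:</span></p>
  <pre lang="sql"><code>
-- Hypothetical rows in fraud_risk_order:
--   subject_id = 1001, number = 'R001', submitter = 'alice', risk_level_enum = 'LOW'
--   subject_id = 1001, number = 'R002', submitter = 'bob',   risk_level_enum = 'HIGH'
--
-- Grouped result (one row per subject):
--   risk_order_numbers = 'R001,R002'
--   submitters         = 'alice,bob'
--   risk_level_enums   = 'LOW,HIGH'
</code></pre>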
  <p data-lake-id="u95a7d3df" id="u95a7d3df"><br></p>
  <p data-lake-id="ucb9d8b76" id="ucb9d8b76"><span data-lake-id="u864ddb98" id="u864ddb98">Then the merging logic in code does the following:</span></p>
  <p data-lake-id="u2cf510c4" id="u2cf510c4"><br></p>
  <pre lang="java"><code>
import java.util.Arrays;
import java.util.Comparator;
import java.util.stream.Stream;

public class AuditOrder {

    private RiskLevelEnum riskLevelEnum;

    public void merge(RiskOrderMergeInfo riskOrderMergeInfo) {
        // Split the comma-separated list produced by GROUP_CONCAT and
        // take the risk level with the highest weight.
        RiskLevelEnum currentRiskLevelEnum = Arrays.stream(riskOrderMergeInfo.getRiskLevelEnums().split(","))
                .map(RiskLevelEnum::valueOf)
                .max(Comparator.comparing(RiskLevelEnum::getWeights))
                .orElse(RiskLevelEnum.LOW);

        // Keep the higher of the audit order's current level and the incoming one.
        this.riskLevelEnum = Stream.of(this.riskLevelEnum, currentRiskLevelEnum)
                .max(Comparator.comparing(RiskLevelEnum::getWeights))
                .orElse(RiskLevelEnum.LOW);
    }

}
</code></pre>
  <p data-lake-id="uc6b63314" id="uc6b63314"><br></p>
  <p data-lake-id="u06361a36" id="u06361a36"><span data-lake-id="ua715f04c" id="ua715f04c">That is, the information returned by the SQL above is merged into the existing audit order, and after merging only a single update is needed.</span></p>
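  <p><span>That single update might look like the following sketch (the exact column list is an assumption; risk_level_enum here carries the max-weight level computed in AuditOrder.merge):</span></p>
  <pre lang="sql"><code>
UPDATE fraud_audit_order
SET gmt_modified = now(),
    risk_level_enum = ?
WHERE fraud_audit_order_no = ?
</code></pre>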
  <p data-lake-id="uc58aff54" id="uc58aff54"><br></p>
  <p data-lake-id="u247439e4" id="u247439e4"><span data-lake-id="uae0ad172" id="uae0ad172">Because the SQL above also returns the numbers (number) of the risk orders (fraud_risk_order) involved in this merge, I can advance the status of all of those records with one batch SQL as well.</span></p>
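  <p><span>A sketch of that batch status push, assuming a hypothetical 'MERGED' target status (the real enum value depends on the business):</span></p>
  <pre lang="sql"><code>
UPDATE fraud_risk_order
SET risk_order_status_enum = 'MERGED', gmt_modified = now()
WHERE number IN (?, ?, ?)  -- the numbers split out of risk_order_numbers
</code></pre>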
  <p data-lake-id="uf4caef59" id="uf4caef59"><br></p>
  <p data-lake-id="ub43133df" id="ub43133df"><span data-lake-id="ufbd2f855" id="ufbd2f855">After this optimization, not only did the CPU spikes go away, but the merging job also became much faster: a run that used to take 2 hours now finishes in under 10 minutes.</span></p>
  <p data-lake-id="u7ecaef82" id="u7ecaef82"><br></p>
  <p data-lake-id="ue2186fdd" id="ue2186fdd"><img src="https://ata2-img.oss-cn-zhangjiakou.aliyuncs.com/neweditor/910de110-f45d-412b-88df-179309bf24f3.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_46%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
 </body>
</html>