<html>
 <head>
  <meta charset="UTF-8">
 </head>
 <body>
  <h1 data-lake-id="EwFoW" id="EwFoW"><span data-lake-id="u3b10690a" id="u3b10690a">Typical Answer</span></h1>
  <p data-lake-id="u9e88d497" id="u9e88d497"><span data-lake-id="ub29fddc1" id="ub29fddc1"><br></span><span data-lake-id="ua7ca3c2d" id="ua7ca3c2d">Java's Stream API provides an efficient and easy-to-use way to process collections of data. Among its features, the parallel stream is a particularly powerful tool: it can significantly speed up data processing, especially on large data sets.</span></p>
  <p data-lake-id="ud000c680" id="ud000c680"><span data-lake-id="ueb7e6a6c" id="ueb7e6a6c">​</span><br></p>
  <pre lang="java"><code>
List&lt;String&gt; list = Arrays.asList("Apple", "Banana", "Cherry", "Date");

// Create a sequential stream
Stream&lt;String&gt; stream = list.stream();

// Create a parallel stream
Stream&lt;String&gt; parallelStream = list.parallelStream();
</code></pre>
  <p data-lake-id="u6fcae79d" id="u6fcae79d"><br></p>
  <p data-lake-id="u97260097" id="u97260097"><span data-lake-id="ud02cfb73" id="ud02cfb73">Calling the parallelStream method gives you a parallel stream; the stream's iteration and operations are then executed concurrently.</span></p>
  <p data-lake-id="uf30460ac" id="uf30460ac"><span data-lake-id="u5e4c0b86" id="u5e4c0b86">​</span><br></p>
  <p data-lake-id="u8ff6ff7d" id="u8ff6ff7d"><span data-lake-id="u1e19cde5" id="u1e19cde5">Under the hood, parallel streams use the Fork/Join framework introduced in Java 7. This framework was designed to help developers exploit the parallelism of multi-core processors. It works by splitting (forking) a large task into many small tasks that can run in parallel, and then merging (joining) the results of those small tasks into the final result.</span></p>
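  <p><span>The fork/join pattern described above can be sketched directly with a RecursiveTask. This is an illustrative example, not JDK source code: it sums an array by recursively splitting it below a threshold and joining the partial sums.</span></p>
  <pre lang="java"><code>
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.stream.LongStream;

public class ForkJoinSumExample {
    // A RecursiveTask splits (forks) itself into sub-tasks above a threshold,
    // then combines (joins) the partial results -- the same pattern parallel
    // streams rely on internally.
    static class SumTask extends RecursiveTask&lt;Long&gt; {
        private static final int THRESHOLD = 1_000;
        private final long[] numbers;
        private final int start, end;

        SumTask(long[] numbers, int start, int end) {
            this.numbers = numbers;
            this.start = start;
            this.end = end;
        }

        @Override
        protected Long compute() {
            if (end - start &lt;= THRESHOLD) {
                long sum = 0;
                for (int i = start; i &lt; end; i++) sum += numbers[i];
                return sum;
            }
            int mid = (start + end) / 2;
            SumTask left = new SumTask(numbers, start, mid);
            SumTask right = new SumTask(numbers, mid, end);
            left.fork();                        // run the left half asynchronously
            long rightResult = right.compute(); // compute the right half in this thread
            return left.join() + rightResult;   // join: merge the partial results
        }
    }

    public static void main(String[] args) {
        long[] numbers = LongStream.rangeClosed(1, 10_000).toArray();
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println(sum); // 50005000
    }
}
</code></pre>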
  <p data-lake-id="u4fd4909d" id="u4fd4909d"><span data-lake-id="uec3de9a0" id="uec3de9a0">​</span><br></p>
  <p data-lake-id="ue3fb7db4" id="ue3fb7db4"><br></p>
  <p data-lake-id="ud0bdf166" id="ud0bdf166"><span data-lake-id="ubad6333d" id="ubad6333d">Let's look at the concrete implementation. Take Stream's reduce method, a terminal operation that consumes the stream; its implementation lives in the ReferencePipeline class:</span></p>
  <p data-lake-id="ub40cea29" id="ub40cea29"><span data-lake-id="u8d4a5084" id="u8d4a5084">​</span><br></p>
  <pre lang="java"><code>
@Override
public final Optional&lt;P_OUT&gt; reduce(BinaryOperator&lt;P_OUT&gt; accumulator) {
    return evaluate(ReduceOps.makeRef(accumulator));
}

final &lt;R&gt; R evaluate(TerminalOp&lt;E_OUT, R&gt; terminalOp) {
  assert getOutputShape() == terminalOp.inputShape();
  if (linkedOrConsumed)
      throw new IllegalStateException(MSG_STREAM_LINKED);
  linkedOrConsumed = true;

  return isParallel()
         ? terminalOp.evaluateParallel(this, sourceSpliterator(terminalOp.getOpFlags()))
         : terminalOp.evaluateSequential(this, sourceSpliterator(terminalOp.getOpFlags()));
}
</code></pre>
  <p data-lake-id="u739f9b32" id="u739f9b32"><span data-lake-id="u049ea51e" id="u049ea51e"></span></p>
  <p data-lake-id="u1300490d" id="u1300490d"><span data-lake-id="u0d5f32fb" id="u0d5f32fb">As you can see, reduce calls an evaluate method, and inside that method there is a check for whether the stream is parallel: </span><code data-lake-id="u8ef16c80" id="u8ef16c80"><span data-lake-id="u95f8cf25" id="u95f8cf25">isParallel()</span></code><span data-lake-id="u47999fc5" id="u47999fc5">. If it is, what runs is </span><code data-lake-id="u52a31254" id="u52a31254"><span data-lake-id="u46491016" id="u46491016">terminalOp.evaluateParallel</span></code><span data-lake-id="u1f6b9da8" id="u1f6b9da8">. Let's look at a concrete implementation.</span></p>
  <p data-lake-id="u65075250" id="u65075250"><span data-lake-id="ua4fed350" id="ua4fed350">​</span><br></p>
  <p data-lake-id="u112dbea8" id="u112dbea8"><span data-lake-id="udf4dee88" id="udf4dee88">There are quite a few implementation classes:</span></p>
  <p data-lake-id="u916c6f06" id="u916c6f06"><img src="https://cdn.nlark.com/yuque/0/2024/png/5378072/1705734014734-6b8f5e10-c62e-4544-ae0c-4d26d5fb7162.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_57%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u335108b6" id="u335108b6"><br></p>
  <p data-lake-id="u2b8713ba" id="u2b8713ba"><span data-lake-id="u8d5bc2a4" id="u8d5bc2a4">Let's open one of them at random, for example MatchOp:</span></p>
  <p data-lake-id="ua0e25f1c" id="ua0e25f1c"><span data-lake-id="uf6c9aa7a" id="uf6c9aa7a">​</span><br></p>
  <pre lang="java"><code>
@Override
public &lt;S&gt; Boolean evaluateParallel(PipelineHelper&lt;T&gt; helper,
                                    Spliterator&lt;S&gt; spliterator) {
    // Approach for parallel implementation:
    // - Decompose as per usual
    // - run match on leaf chunks, call result "b"
    // - if b == matchKind.shortCircuitOn, complete early and return b
    // - else if we complete normally, return !shortCircuitOn

    return new MatchTask&lt;&gt;(this, helper, spliterator).invoke();
}
</code></pre>
  <p data-lake-id="u7aa087c1" id="u7aa087c1"><span data-lake-id="u4c160acb" id="u4c160acb"></span></p>
  <p data-lake-id="u8c971c38" id="u8c971c38"><span data-lake-id="u87145bf6" id="u87145bf6">It uses a MatchTask.</span></p>
  <p data-lake-id="uffdfeb44" id="uffdfeb44"><span data-lake-id="u3d28523f" id="u3d28523f">​</span><br></p>
  <p data-lake-id="u8510c1c9" id="u8510c1c9"><span data-lake-id="u1b20405a" id="u1b20405a">Now look at FindOp:</span></p>
  <p data-lake-id="uf303f888" id="uf303f888"><span data-lake-id="uc6f2890b" id="uc6f2890b">​</span><br></p>
  <pre lang="java"><code>
@Override
public &lt;P_IN&gt; O evaluateParallel(PipelineHelper&lt;T&gt; helper,
                                 Spliterator&lt;P_IN&gt; spliterator) {
    return new FindTask&lt;&gt;(this, helper, spliterator).invoke();
}
</code></pre>
  <p data-lake-id="ua7200838" id="ua7200838"><span data-lake-id="u94f7e0d6" id="u94f7e0d6"></span></p>
  <p data-lake-id="u5b841376" id="u5b841376"><span data-lake-id="u80bfcce5" id="u80bfcce5">This one uses a FindTask, and the remaining implementations use ReduceTask, ForEachTask, and so on.</span></p>
  <p data-lake-id="ub52fdcf3" id="ub52fdcf3"><span data-lake-id="u068e225d" id="u068e225d">​</span><br></p>
  <p data-lake-id="ubb27c7c1" id="ubb27c7c1"><span data-lake-id="u9f6d082d" id="u9f6d082d">In fact, all of these Task classes are subclasses of CountedCompleter, and CountedCompleter is itself a ForkJoinTask:</span></p>
  <p data-lake-id="u1f016017" id="u1f016017"><span data-lake-id="u46be9813" id="u46be9813">​</span><br></p>
  <pre lang="java"><code>
public abstract class CountedCompleter&lt;T&gt; extends ForkJoinTask&lt;T&gt; {
}
</code></pre>
  <p data-lake-id="u6fd92e80" id="u6fd92e80"><span data-lake-id="u610c6880" id="u610c6880">​</span><br></p>
  <p data-lake-id="u0bd610e9" id="u0bd610e9"><span data-lake-id="ufc228c3f" id="ufc228c3f">In other words, this is the familiar ForkJoinPool at work. If you are not familiar with ForkJoinPool, see the following article:</span></p>
  <p data-lake-id="u30f739c9" id="u30f739c9"><span data-lake-id="u4ba96d6e" id="u4ba96d6e">​</span><br></p>
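  <p><span>You can observe the common pool at work with a small sketch: record which threads run a parallel stream's elements. The exact thread names vary by machine, but they are typically a mix of "main" and "ForkJoinPool.commonPool-worker-N".</span></p>
  <pre lang="java"><code>
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

public class CommonPoolDemo {
    public static void main(String[] args) {
        Set&lt;String&gt; threadNames = ConcurrentHashMap.newKeySet();
        // Record every thread that executes part of the parallel stream.
        long sum = IntStream.rangeClosed(1, 100_000)
                .parallel()
                .peek(i -&gt; threadNames.add(Thread.currentThread().getName()))
                .asLongStream()
                .sum();
        System.out.println(sum); // 5000050000
        // Usually "main" plus ForkJoinPool.commonPool-worker threads:
        System.out.println(threadNames);
    }
}
</code></pre>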
  <h1 data-lake-id="HToKR" id="HToKR"><span data-lake-id="uc5c2e0cd" id="uc5c2e0cd">Extended Knowledge</span></h1>
  <h2 data-lake-id="gI20I" id="gI20I"><span data-lake-id="u842453fe" id="u842453fe">Is a parallel stream always faster?</span></h2>
  <p data-lake-id="u8f2a80ac" id="u8f2a80ac"><br></p>
  <p data-lake-id="u72b0be83" id="u72b0be83"><span data-lake-id="u2488061b" id="u2488061b">Not necessarily.</span></p>
  <p data-lake-id="u28778de0" id="u28778de0"><span data-lake-id="u94580125" id="u94580125">​</span><br></p>
  <p data-lake-id="ue5c49774" id="ue5c49774"><span data-lake-id="u40d65320" id="u40d65320">Its performance advantage depends on many factors, including the size of the data set, the number of CPU cores, the computational cost of each task, and the type of task.</span></p>
  <p data-lake-id="u3fc4ea71" id="u3fc4ea71"><span data-lake-id="u142ff50a" id="u142ff50a">​</span><br></p>
  <p data-lake-id="uab417086" id="uab417086"><span data-lake-id="uba7adef8" id="uba7adef8">Parallel streams tend to perform better on large data sets, because the data can be split into chunks and processed in parallel on different processor cores. For smaller data sets, a sequential stream may be more efficient, because allocating and managing threads for a parallel stream carries overhead of its own.</span></p>
  <p data-lake-id="ud301f08a" id="ud301f08a"><span data-lake-id="ufab4825c" id="ufab4825c">​</span><br></p>
  <blockquote data-lake-id="ue6bd43ef" id="ue6bd43ef">
   <p data-lake-id="u5b8f5feb" id="u5b8f5feb"><span data-lake-id="u21532ded" id="u21532ded"> What counts as a "large data set" varies by context and application, especially when deciding whether to use a parallel stream; there is no fixed threshold at which a data set becomes "large". As a rough personal rule of thumb, I would treat a collection with more than 1,000 elements as large.</span></p>
  </blockquote>
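  <p><span>A quick (non-rigorous) way to feel this overhead yourself is to time the same reduction sequentially and in parallel. This is only a sketch; absolute numbers vary wildly by machine and JVM warm-up, so for real measurements use a benchmark harness such as JMH.</span></p>
  <pre lang="java"><code>
import java.util.stream.LongStream;

public class SequentialVsParallelSketch {
    // Crude wall-clock timing in milliseconds; fine for a rough comparison only.
    static long timeMs(Runnable r) {
        long t0 = System.nanoTime();
        r.run();
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) {
        long n = 10_000_000L;
        long seqMs = timeMs(() -&gt;
                System.out.println(LongStream.rangeClosed(1, n).sum()));
        long parMs = timeMs(() -&gt;
                System.out.println(LongStream.rangeClosed(1, n).parallel().sum()));
        // Both print 50000005000000; the timings depend on your hardware.
        System.out.println("sequential ms=" + seqMs + ", parallel ms=" + parMs);
    }
}
</code></pre>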
  <p data-lake-id="u96784b42" id="u96784b42"><span data-lake-id="ufbb0676f" id="ufbb0676f">​</span><br></p>
  <p data-lake-id="u5b6cda39" id="u5b6cda39"><span data-lake-id="u7c8865eb" id="u7c8865eb">Someone has run benchmarks on this before; let's go straight to the results. For details, see: </span><a href="https://www.hollischuang.com/archives/3364" target="_blank" data-lake-id="ud1b956e8" id="ud1b956e8"><span data-lake-id="u6e04cb0c" id="u6e04cb0c">https://www.hollischuang.com/archives/3364</span></a><span data-lake-id="ud8aaa2e3" id="ud8aaa2e3"> </span></p>
  <p data-lake-id="u60832f2e" id="u60832f2e"><span data-lake-id="u85a1dee2" id="u85a1dee2">​</span><br></p>
  <p data-lake-id="u32880745" id="u32880745"><img src="https://cdn.nlark.com/yuque/0/2024/png/5378072/1705734848331-d3734d53-e306-4659-a79f-6b65000e8a77.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_33%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u2b437625" id="u2b437625"><br></p>
  <p data-lake-id="ud0c521a8" id="ud0c521a8"><img src="https://cdn.nlark.com/yuque/0/2024/png/5378072/1705734854045-7035d0b8-6dcb-47ea-a85a-3a210f79bb47.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_33%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="ub2a27fea" id="ub2a27fea"><img src="https://cdn.nlark.com/yuque/0/2024/png/5378072/1705734863297-c888c151-feb1-4eba-a4f2-b218a83a14dc.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_33%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u6236b5f6" id="u6236b5f6"><br></p>
  <p data-lake-id="ufbe6718a" id="ufbe6718a"><span data-lake-id="ud7c838a4" id="ud7c838a4">​</span><br></p>
  <p data-lake-id="u7bde2290" id="u7bde2290"><span data-lake-id="u8a119371" id="u8a119371">Based on these tests, we can draw the following conclusions:</span></p>
  <ol list="u9f1b154e">
   <li fid="u16daf642" data-lake-id="ud4ea035d" id="ud4ea035d"><span data-lake-id="u629296ab" id="u629296ab">For simple operations, such as a plain traversal, the sequential Stream API performs noticeably worse than explicit iteration, but the parallel Stream API can exploit multiple cores.</span></li>
   <li fid="u16daf642" data-lake-id="u21aa4ad2" id="u21aa4ad2"><span data-lake-id="ua5cfa9cb" id="ua5cfa9cb">For complex operations, the sequential Stream API can match a hand-written implementation, and when run in parallel the Stream API far outperforms the hand-written version.</span></li>
  </ol>
  <p data-lake-id="u0d47534b" id="u0d47534b"><br></p>
  <p data-lake-id="u7ee4bb33" id="u7ee4bb33"><span data-lake-id="u136fb02b" id="u136fb02b">So, if performance is the concern:</span></p>
  <ul list="ub3a17d8e">
   <li fid="u41d6dc22" data-lake-id="ub59071b1" id="ub59071b1"><span data-lake-id="ue571992a" id="ue571992a">For simple operations, prefer hand-written external iteration.</span></li>
   <li fid="u41d6dc22" data-lake-id="u7fb66a49" id="u7fb66a49"><span data-lake-id="u7802405e" id="u7802405e">For complex operations, prefer the Stream API.</span></li>
   <li fid="u41d6dc22" data-lake-id="u6085aae5" id="u6085aae5"><span data-lake-id="u66a4a2f0" id="u66a4a2f0">On multi-core machines, use the parallel Stream API to exploit the extra cores; on a single core, the parallel Stream API is not recommended.</span></li>
   <li fid="u41d6dc22" data-lake-id="uf9df5494" id="uf9df5494"><span data-lake-id="u9e5c07dd" id="u9e5c07dd">When the data set is small, especially when the collection holds only a few dozen elements or fewer, a parallel stream is not worth it.</span></li>
  </ul>
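  <p><span>The recommendations above can be folded into a small helper. This is a hypothetical utility, not a JDK API: it falls back to a sequential stream when the collection is small or only one core is available, using the rough 1,000-element threshold mentioned earlier.</span></p>
  <pre lang="java"><code>
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class AdaptiveStreamExample {
    // Hypothetical helper: only go parallel when it is likely to pay off.
    static &lt;T&gt; Stream&lt;T&gt; maybeParallel(List&lt;T&gt; data) {
        boolean useParallel = data.size() &gt; 1_000
                &amp;&amp; Runtime.getRuntime().availableProcessors() &gt; 1;
        return useParallel ? data.parallelStream() : data.stream();
    }

    public static void main(String[] args) {
        List&lt;Integer&gt; small = IntStream.rangeClosed(1, 10)
                .boxed().collect(Collectors.toList());
        List&lt;Integer&gt; large = IntStream.rangeClosed(1, 100_000)
                .boxed().collect(Collectors.toList());
        // A small collection stays sequential:
        System.out.println(maybeParallel(small).isParallel()); // false
        // The result is the same either way; only the execution mode differs:
        System.out.println(maybeParallel(large).mapToLong(Integer::longValue).sum());
    }
}
</code></pre>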
  <p data-lake-id="uf0f27bbd" id="uf0f27bbd"><span data-lake-id="u9d6b40cf" id="u9d6b40cf" class="lake-fontsize-12" style="color: rgb(55, 65, 81)">​</span><br></p>
  <h3 data-lake-id="P5zi2" id="P5zi2"><span data-lake-id="ue03e0bc9" id="ue03e0bc9" style="color: rgb(55, 65, 81)">Using a custom thread pool</span></h3>
  <p data-lake-id="u0d955338" id="u0d955338"><span data-lake-id="ue642ceaf" id="ue642ceaf" style="color: rgb(55, 65, 81)">​</span><br></p>
  <p data-lake-id="u68560336" id="u68560336"><span data-lake-id="u46d1af1a" id="u46d1af1a">By default, all parallel stream operations share a single common ForkJoinPool, whose thread count is usually the number of processor cores minus one. If needed, you can run the operations on a custom ForkJoinPool instead. A custom thread pool helps us:</span></p>
  <p data-lake-id="uf9610362" id="uf9610362"><span data-lake-id="u16cad4f2" id="u16cad4f2">​</span><br></p>
  <ol list="u299eb25e">
   <li fid="uba8638fb" data-lake-id="uc7000f78" id="uc7000f78" data-lake-index-type="true"><span data-lake-id="u02e918f6" id="u02e918f6">Avoid resource contention</span><span data-lake-id="u3d9d1484" id="u3d9d1484">: using the common </span><span data-lake-id="u4472651d" id="u4472651d">ForkJoinPool</span><span data-lake-id="u2838fcc6" id="u2838fcc6"> may mean competing for resources with other parallel tasks.</span></li>
   <li fid="uba8638fb" data-lake-id="ubb1ded03" id="ubb1ded03" data-lake-index-type="true"><span data-lake-id="u58d13062" id="u58d13062">Tune performance</span><span data-lake-id="u40331d66" id="u40331d66">: size the pool to the application's needs to optimize performance.</span></li>
   <li fid="uba8638fb" data-lake-id="u137cd45e" id="u137cd45e" data-lake-index-type="true"><span data-lake-id="u5970e0a9" id="u5970e0a9">Get better error handling and monitoring: a custom pool can provide more error-handling and monitoring hooks.</span></li>
  </ol>
  <p data-lake-id="uae2bce0f" id="uae2bce0f"><span data-lake-id="u5f39baa4" id="u5f39baa4" style="color: rgb(55, 65, 81)">​</span><br></p>
  <p data-lake-id="ufea97da6" id="ufea97da6"><span data-lake-id="uc94d357b" id="uc94d357b" style="color: rgb(55, 65, 81)">A custom pool can be set up as follows:</span></p>
  <p data-lake-id="ufda07f43" id="ufda07f43"><span data-lake-id="u197727da" id="u197727da" style="color: rgb(55, 65, 81)">​</span><br></p>
  <pre lang="java"><code>
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Stream;

public class CustomThreadPoolExample {
    public static void main(String[] args) {
        // Create a ForkJoinPool with a specific number of threads
        ForkJoinPool customThreadPool = new ForkJoinPool(4); 

        try {
            customThreadPool.submit(() -&gt; {
                // Run the parallel stream operation inside the custom thread pool
                Stream.of("Apple", "Banana", "Cherry", "Date")
                      .parallel()
                      .forEach(System.out::println);
            }).get(); // Wait for the operation to complete
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            customThreadPool.shutdown(); // Shut down the pool
        }
    }
}

</code></pre>
  <p data-lake-id="uaab50c9e" id="uaab50c9e"><br></p>
  <p data-lake-id="u86233329" id="u86233329"><span data-lake-id="u55c98689" id="u55c98689">Here we created a custom ForkJoinPool and used its submit method to run the parallel stream operation.</span></p>
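  <p><span>As an alternative to a separate pool, the common pool's size itself can be configured with the java.util.concurrent.ForkJoinPool.common.parallelism system property. It must be set before the common pool is first touched, so in practice it is usually passed on the command line as -Djava.util.concurrent.ForkJoinPool.common.parallelism=8; the sketch below sets it programmatically only because nothing has used the pool yet.</span></p>
  <pre lang="java"><code>
import java.util.concurrent.ForkJoinPool;

public class CommonPoolParallelismExample {
    public static void main(String[] args) {
        // Takes effect only if set before the common pool is initialized.
        System.setProperty(
            "java.util.concurrent.ForkJoinPool.common.parallelism", "8");
        // All parallel streams in this JVM will now share 8 worker threads.
        System.out.println(ForkJoinPool.commonPool().getParallelism());
    }
}
</code></pre>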
 </body>
</html>