  <html>
    <head>
      <meta charset="utf-8">
      <title>Spark Core</title>
      <style>
        #wrapper {width: 960px; margin: 0 auto;}
        /* Asciidoctor default stylesheet | MIT License | http://asciidoctor.org */
/* Uncomment @import statement below to use as custom stylesheet */
/*@import "https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic%7CNoto+Serif:400,400italic,700,700italic%7CDroid+Sans+Mono:400,700";*/
article,aside,details,figcaption,figure,footer,header,hgroup,main,nav,section,summary{display:block}
audio,canvas,video{display:inline-block}
audio:not([controls]){display:none;height:0}
script{display:none!important}
html{font-family:sans-serif;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}
a{background:transparent}
a:focus{outline:thin dotted}
a:active,a:hover{outline:0}
h1{font-size:2em;margin:.67em 0}
abbr[title]{border-bottom:1px dotted}
b,strong{font-weight:bold}
dfn{font-style:italic}
hr{-moz-box-sizing:content-box;box-sizing:content-box;height:0}
mark{background:#ff0;color:#000}
code,kbd,pre,samp{font-family:monospace;font-size:1em}
pre{white-space:pre-wrap}
q{quotes:"\201C" "\201D" "\2018" "\2019"}
small{font-size:80%}
sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}
sup{top:-.5em}
sub{bottom:-.25em}
img{border:0}
svg:not(:root){overflow:hidden}
figure{margin:0}
fieldset{border:1px solid silver;margin:0 2px;padding:.35em .625em .75em}
legend{border:0;padding:0}
button,input,select,textarea{font-family:inherit;font-size:100%;margin:0}
button,input{line-height:normal}
button,select{text-transform:none}
button,html input[type="button"],input[type="reset"],input[type="submit"]{-webkit-appearance:button;cursor:pointer}
button[disabled],html input[disabled]{cursor:default}
input[type="checkbox"],input[type="radio"]{box-sizing:border-box;padding:0}
button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}
textarea{overflow:auto;vertical-align:top}
table{border-collapse:collapse;border-spacing:0}
*,*::before,*::after{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}
html,body{font-size:100%}
body{background:#fff;color:rgba(0,0,0,.8);padding:0;margin:0;font-family:"Noto Serif","DejaVu Serif",serif;font-weight:400;font-style:normal;line-height:1;position:relative;cursor:auto;tab-size:4;-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased}
a:hover{cursor:pointer}
img,object,embed{max-width:100%;height:auto}
object,embed{height:100%}
img{-ms-interpolation-mode:bicubic}
.left{float:left!important}
.right{float:right!important}
.text-left{text-align:left!important}
.text-right{text-align:right!important}
.text-center{text-align:center!important}
.text-justify{text-align:justify!important}
.hide{display:none}
img,object,svg{display:inline-block;vertical-align:middle}
textarea{height:auto;min-height:50px}
select{width:100%}
.center{margin-left:auto;margin-right:auto}
.stretch{width:100%}
.subheader,.admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{line-height:1.45;color:#7a2518;font-weight:400;margin-top:0;margin-bottom:.25em}
div,dl,dt,dd,ul,ol,li,h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6,pre,form,p,blockquote,th,td{margin:0;padding:0;direction:ltr}
a{color:#2156a5;text-decoration:underline;line-height:inherit}
a:hover,a:focus{color:#1d4b8f}
a img{border:none}
p{font-family:inherit;font-weight:400;font-size:1em;line-height:1.6;margin-bottom:1.25em;text-rendering:optimizeLegibility}
p aside{font-size:.875em;line-height:1.35;font-style:italic}
h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{font-family:"Open Sans","DejaVu Sans",sans-serif;font-weight:300;font-style:normal;color:#ba3925;text-rendering:optimizeLegibility;margin-top:1em;margin-bottom:.5em;line-height:1.0125em}
h1 small,h2 small,h3 small,#toctitle small,.sidebarblock>.content>.title small,h4 small,h5 small,h6 small{font-size:60%;color:#e99b8f;line-height:0}
h1{font-size:2.125em}
h2{font-size:1.6875em}
h3,#toctitle,.sidebarblock>.content>.title{font-size:1.375em}
h4,h5{font-size:1.125em}
h6{font-size:1em}
hr{border:solid #dddddf;border-width:1px 0 0;clear:both;margin:1.25em 0 1.1875em;height:0}
em,i{font-style:italic;line-height:inherit}
strong,b{font-weight:bold;line-height:inherit}
small{font-size:60%;line-height:inherit}
code{font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;font-weight:400;color:rgba(0,0,0,.9)}
ul,ol,dl{font-size:1em;line-height:1.6;margin-bottom:1.25em;list-style-position:outside;font-family:inherit}
ul,ol{margin-left:1.5em}
ul li ul,ul li ol{margin-left:1.25em;margin-bottom:0;font-size:1em}
ul.square li ul,ul.circle li ul,ul.disc li ul{list-style:inherit}
ul.square{list-style-type:square}
ul.circle{list-style-type:circle}
ul.disc{list-style-type:disc}
ol li ul,ol li ol{margin-left:1.25em;margin-bottom:0}
dl dt{margin-bottom:.3125em;font-weight:bold}
dl dd{margin-bottom:1.25em}
abbr,acronym{text-transform:uppercase;font-size:90%;color:rgba(0,0,0,.8);border-bottom:1px dotted #ddd;cursor:help}
abbr{text-transform:none}
blockquote{margin:0 0 1.25em;padding:.5625em 1.25em 0 1.1875em;border-left:1px solid #ddd}
blockquote cite{display:block;font-size:.9375em;color:rgba(0,0,0,.6)}
blockquote cite::before{content:"\2014 \0020"}
blockquote cite a,blockquote cite a:visited{color:rgba(0,0,0,.6)}
blockquote,blockquote p{line-height:1.6;color:rgba(0,0,0,.85)}
@media screen and (min-width:768px){h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2}
h1{font-size:2.75em}
h2{font-size:2.3125em}
h3,#toctitle,.sidebarblock>.content>.title{font-size:1.6875em}
h4{font-size:1.4375em}}
table{background:#fff;margin-bottom:1.25em;border:solid 1px #dedede}
table thead,table tfoot{background:#f7f8f7}
table thead tr th,table thead tr td,table tfoot tr th,table tfoot tr td{padding:.5em .625em .625em;font-size:inherit;color:rgba(0,0,0,.8);text-align:left}
table tr th,table tr td{padding:.5625em .625em;font-size:inherit;color:rgba(0,0,0,.8)}
table tr.even,table tr.alt,table tr:nth-of-type(even){background:#f8f8f7}
table thead tr th,table tfoot tr th,table tbody tr td,table tr td,table tfoot tr td{display:table-cell;line-height:1.6}
h1,h2,h3,#toctitle,.sidebarblock>.content>.title,h4,h5,h6{line-height:1.2;word-spacing:-.05em}
h1 strong,h2 strong,h3 strong,#toctitle strong,.sidebarblock>.content>.title strong,h4 strong,h5 strong,h6 strong{font-weight:400}
.clearfix::before,.clearfix::after,.float-group::before,.float-group::after{content:" ";display:table}
.clearfix::after,.float-group::after{clear:both}
*:not(pre)>code{font-size:.9375em;font-style:normal!important;letter-spacing:0;padding:.1em .5ex;word-spacing:-.15em;background-color:#f7f7f8;-webkit-border-radius:4px;border-radius:4px;line-height:1.45;text-rendering:optimizeSpeed;word-wrap:break-word}
*:not(pre)>code.nobreak{word-wrap:normal}
*:not(pre)>code.nowrap{white-space:nowrap}
pre,pre>code{line-height:1.45;color:rgba(0,0,0,.9);font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;font-weight:400;text-rendering:optimizeSpeed}
em em{font-style:normal}
strong strong{font-weight:400}
.keyseq{color:rgba(51,51,51,.8)}
kbd{font-family:"Droid Sans Mono","DejaVu Sans Mono",monospace;display:inline-block;color:rgba(0,0,0,.8);font-size:.65em;line-height:1.45;background-color:#f7f7f7;border:1px solid #ccc;-webkit-border-radius:3px;border-radius:3px;-webkit-box-shadow:0 1px 0 rgba(0,0,0,.2),0 0 0 .1em white inset;box-shadow:0 1px 0 rgba(0,0,0,.2),0 0 0 .1em #fff inset;margin:0 .15em;padding:.2em .5em;vertical-align:middle;position:relative;top:-.1em;white-space:nowrap}
.keyseq kbd:first-child{margin-left:0}
.keyseq kbd:last-child{margin-right:0}
.menuseq,.menuref{color:#000}
.menuseq b:not(.caret),.menuref{font-weight:inherit}
.menuseq{word-spacing:-.02em}
.menuseq b.caret{font-size:1.25em;line-height:.8}
.menuseq i.caret{font-weight:bold;text-align:center;width:.45em}
b.button::before,b.button::after{position:relative;top:-1px;font-weight:400}
b.button::before{content:"[";padding:0 3px 0 2px}
b.button::after{content:"]";padding:0 2px 0 3px}
p a>code:hover{color:rgba(0,0,0,.9)}
#header,#content,#footnotes,#footer{width:100%;margin-left:auto;margin-right:auto;margin-top:0;margin-bottom:0;max-width:62.5em;*zoom:1;position:relative;padding-left:.9375em;padding-right:.9375em}
#header::before,#header::after,#content::before,#content::after,#footnotes::before,#footnotes::after,#footer::before,#footer::after{content:" ";display:table}
#header::after,#content::after,#footnotes::after,#footer::after{clear:both}
#content{margin-top:1.25em}
#content::before{content:none}
#header>h1:first-child{color:rgba(0,0,0,.85);margin-top:2.25rem;margin-bottom:0}
#header>h1:first-child+#toc{margin-top:8px;border-top:1px solid #dddddf}
#header>h1:only-child,body.toc2 #header>h1:nth-last-child(2){border-bottom:1px solid #dddddf;padding-bottom:8px}
#header .details{border-bottom:1px solid #dddddf;line-height:1.45;padding-top:.25em;padding-bottom:.25em;padding-left:.25em;color:rgba(0,0,0,.6);display:-ms-flexbox;display:-webkit-flex;display:flex;-ms-flex-flow:row wrap;-webkit-flex-flow:row wrap;flex-flow:row wrap}
#header .details span:first-child{margin-left:-.125em}
#header .details span.email a{color:rgba(0,0,0,.85)}
#header .details br{display:none}
#header .details br+span::before{content:"\00a0\2013\00a0"}
#header .details br+span.author::before{content:"\00a0\22c5\00a0";color:rgba(0,0,0,.85)}
#header .details br+span#revremark::before{content:"\00a0|\00a0"}
#header #revnumber{text-transform:capitalize}
#header #revnumber::after{content:"\00a0"}
#content>h1:first-child:not([class]){color:rgba(0,0,0,.85);border-bottom:1px solid #dddddf;padding-bottom:8px;margin-top:0;padding-top:1rem;margin-bottom:1.25rem}
#toc{border-bottom:1px solid #e7e7e9;padding-bottom:.5em}
#toc>ul{margin-left:.125em}
#toc ul.sectlevel0>li>a{font-style:italic}
#toc ul.sectlevel0 ul.sectlevel1{margin:.5em 0}
#toc ul{font-family:"Open Sans","DejaVu Sans",sans-serif;list-style-type:none}
#toc li{line-height:1.3334;margin-top:.3334em}
#toc a{text-decoration:none}
#toc a:active{text-decoration:underline}
#toctitle{color:#7a2518;font-size:1.2em}
@media screen and (min-width:768px){#toctitle{font-size:1.375em}
body.toc2{padding-left:15em;padding-right:0}
#toc.toc2{margin-top:0!important;background-color:#f8f8f7;position:fixed;width:15em;left:0;top:0;border-right:1px solid #e7e7e9;border-top-width:0!important;border-bottom-width:0!important;z-index:1000;padding:1.25em 1em;height:100%;overflow:auto}
#toc.toc2 #toctitle{margin-top:0;margin-bottom:.8rem;font-size:1.2em}
#toc.toc2>ul{font-size:.9em;margin-bottom:0}
#toc.toc2 ul ul{margin-left:0;padding-left:1em}
#toc.toc2 ul.sectlevel0 ul.sectlevel1{padding-left:0;margin-top:.5em;margin-bottom:.5em}
body.toc2.toc-right{padding-left:0;padding-right:15em}
body.toc2.toc-right #toc.toc2{border-right-width:0;border-left:1px solid #e7e7e9;left:auto;right:0}}
@media screen and (min-width:1280px){body.toc2{padding-left:20em;padding-right:0}
#toc.toc2{width:20em}
#toc.toc2 #toctitle{font-size:1.375em}
#toc.toc2>ul{font-size:.95em}
#toc.toc2 ul ul{padding-left:1.25em}
body.toc2.toc-right{padding-left:0;padding-right:20em}}
#content #toc{border-style:solid;border-width:1px;border-color:#e0e0dc;margin-bottom:1.25em;padding:1.25em;background:#f8f8f7;-webkit-border-radius:4px;border-radius:4px}
#content #toc>:first-child{margin-top:0}
#content #toc>:last-child{margin-bottom:0}
#footer{max-width:100%;background-color:rgba(0,0,0,.8);padding:1.25em}
#footer-text{color:rgba(255,255,255,.8);line-height:1.44}
#content{margin-bottom:.625em}
.sect1{padding-bottom:.625em}
@media screen and (min-width:768px){#content{margin-bottom:1.25em}
.sect1{padding-bottom:1.25em}}
.sect1:last-child{padding-bottom:0}
.sect1+.sect1{border-top:1px solid #e7e7e9}
#content h1>a.anchor,h2>a.anchor,h3>a.anchor,#toctitle>a.anchor,.sidebarblock>.content>.title>a.anchor,h4>a.anchor,h5>a.anchor,h6>a.anchor{position:absolute;z-index:1001;width:1.5ex;margin-left:-1.5ex;display:block;text-decoration:none!important;visibility:hidden;text-align:center;font-weight:400}
#content h1>a.anchor::before,h2>a.anchor::before,h3>a.anchor::before,#toctitle>a.anchor::before,.sidebarblock>.content>.title>a.anchor::before,h4>a.anchor::before,h5>a.anchor::before,h6>a.anchor::before{content:"\00A7";font-size:.85em;display:block;padding-top:.1em}
#content h1:hover>a.anchor,#content h1>a.anchor:hover,h2:hover>a.anchor,h2>a.anchor:hover,h3:hover>a.anchor,#toctitle:hover>a.anchor,.sidebarblock>.content>.title:hover>a.anchor,h3>a.anchor:hover,#toctitle>a.anchor:hover,.sidebarblock>.content>.title>a.anchor:hover,h4:hover>a.anchor,h4>a.anchor:hover,h5:hover>a.anchor,h5>a.anchor:hover,h6:hover>a.anchor,h6>a.anchor:hover{visibility:visible}
#content h1>a.link,h2>a.link,h3>a.link,#toctitle>a.link,.sidebarblock>.content>.title>a.link,h4>a.link,h5>a.link,h6>a.link{color:#ba3925;text-decoration:none}
#content h1>a.link:hover,h2>a.link:hover,h3>a.link:hover,#toctitle>a.link:hover,.sidebarblock>.content>.title>a.link:hover,h4>a.link:hover,h5>a.link:hover,h6>a.link:hover{color:#a53221}
.audioblock,.imageblock,.literalblock,.listingblock,.stemblock,.videoblock{margin-bottom:1.25em}
.admonitionblock td.content>.title,.audioblock>.title,.exampleblock>.title,.imageblock>.title,.listingblock>.title,.literalblock>.title,.stemblock>.title,.openblock>.title,.paragraph>.title,.quoteblock>.title,table.tableblock>.title,.verseblock>.title,.videoblock>.title,.dlist>.title,.olist>.title,.ulist>.title,.qlist>.title,.hdlist>.title{text-rendering:optimizeLegibility;text-align:left;font-family:"Noto Serif","DejaVu Serif",serif;font-size:1rem;font-style:italic}
table.tableblock.fit-content>caption.title{white-space:nowrap;width:0}
.paragraph.lead>p,#preamble>.sectionbody>[class="paragraph"]:first-of-type p{font-size:1.21875em;line-height:1.6;color:rgba(0,0,0,.85)}
table.tableblock #preamble>.sectionbody>[class="paragraph"]:first-of-type p{font-size:inherit}
.admonitionblock>table{border-collapse:separate;border:0;background:none;width:100%}
.admonitionblock>table td.icon{text-align:center;width:80px}
.admonitionblock>table td.icon img{max-width:none}
.admonitionblock>table td.icon .title{font-weight:bold;font-family:"Open Sans","DejaVu Sans",sans-serif;text-transform:uppercase}
.admonitionblock>table td.content{padding-left:1.125em;padding-right:1.25em;border-left:1px solid #dddddf;color:rgba(0,0,0,.6)}
.admonitionblock>table td.content>:last-child>:last-child{margin-bottom:0}
.exampleblock>.content{border-style:solid;border-width:1px;border-color:#e6e6e6;margin-bottom:1.25em;padding:1.25em;background:#fff;-webkit-border-radius:4px;border-radius:4px}
.exampleblock>.content>:first-child{margin-top:0}
.exampleblock>.content>:last-child{margin-bottom:0}
.sidebarblock{border-style:solid;border-width:1px;border-color:#e0e0dc;margin-bottom:1.25em;padding:1.25em;background:#f8f8f7;-webkit-border-radius:4px;border-radius:4px}
.sidebarblock>:first-child{margin-top:0}
.sidebarblock>:last-child{margin-bottom:0}
.sidebarblock>.content>.title{color:#7a2518;margin-top:0;text-align:center}
.exampleblock>.content>:last-child>:last-child,.exampleblock>.content .olist>ol>li:last-child>:last-child,.exampleblock>.content .ulist>ul>li:last-child>:last-child,.exampleblock>.content .qlist>ol>li:last-child>:last-child,.sidebarblock>.content>:last-child>:last-child,.sidebarblock>.content .olist>ol>li:last-child>:last-child,.sidebarblock>.content .ulist>ul>li:last-child>:last-child,.sidebarblock>.content .qlist>ol>li:last-child>:last-child{margin-bottom:0}
.literalblock pre,.listingblock pre:not(.highlight),.listingblock pre[class="highlight"],.listingblock pre[class^="highlight "],.listingblock pre.CodeRay,.listingblock pre.prettyprint{background:#f7f7f8}
.sidebarblock .literalblock pre,.sidebarblock .listingblock pre:not(.highlight),.sidebarblock .listingblock pre[class="highlight"],.sidebarblock .listingblock pre[class^="highlight "],.sidebarblock .listingblock pre.CodeRay,.sidebarblock .listingblock pre.prettyprint{background:#f2f1f1}
.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{-webkit-border-radius:4px;border-radius:4px;word-wrap:break-word;overflow-x:auto;padding:1em;font-size:.8125em}
@media screen and (min-width:768px){.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{font-size:.90625em}}
@media screen and (min-width:1280px){.literalblock pre,.literalblock pre[class],.listingblock pre,.listingblock pre[class]{font-size:1em}}
.literalblock pre.nowrap,.literalblock pre.nowrap pre,.listingblock pre.nowrap,.listingblock pre.nowrap pre{white-space:pre;word-wrap:normal}
.literalblock.output pre{color:#f7f7f8;background-color:rgba(0,0,0,.9)}
.listingblock pre.highlightjs{padding:0}
.listingblock pre.highlightjs>code{padding:1em;-webkit-border-radius:4px;border-radius:4px}
.listingblock pre.prettyprint{border-width:0}
.listingblock>.content{position:relative}
.listingblock code[data-lang]::before{display:none;content:attr(data-lang);position:absolute;font-size:.75em;top:.425rem;right:.5rem;line-height:1;text-transform:uppercase;color:#999}
.listingblock:hover code[data-lang]::before{display:block}
.listingblock.terminal pre .command::before{content:attr(data-prompt);padding-right:.5em;color:#999}
.listingblock.terminal pre .command:not([data-prompt])::before{content:"$"}
table.pyhltable{border-collapse:separate;border:0;margin-bottom:0;background:none}
table.pyhltable td{vertical-align:top;padding-top:0;padding-bottom:0;line-height:1.45}
table.pyhltable td.code{padding-left:.75em;padding-right:0}
pre.pygments .lineno,table.pyhltable td:not(.code){color:#999;padding-left:0;padding-right:.5em;border-right:1px solid #dddddf}
pre.pygments .lineno{display:inline-block;margin-right:.25em}
table.pyhltable .linenodiv{background:none!important;padding-right:0!important}
.quoteblock{margin:0 1em 1.25em 1.5em;display:table}
.quoteblock>.title{margin-left:-1.5em;margin-bottom:.75em}
.quoteblock blockquote,.quoteblock p{color:rgba(0,0,0,.85);font-size:1.15rem;line-height:1.75;word-spacing:.1em;letter-spacing:0;font-style:italic;text-align:justify}
.quoteblock blockquote{margin:0;padding:0;border:0}
.quoteblock blockquote::before{content:"\201c";float:left;font-size:2.75em;font-weight:bold;line-height:.6em;margin-left:-.6em;color:#7a2518;text-shadow:0 1px 2px rgba(0,0,0,.1)}
.quoteblock blockquote>.paragraph:last-child p{margin-bottom:0}
.quoteblock .attribution{margin-top:.75em;margin-right:.5ex;text-align:right}
.verseblock{margin:0 1em 1.25em}
.verseblock pre{font-family:"Open Sans","DejaVu Sans",sans;font-size:1.15rem;color:rgba(0,0,0,.85);font-weight:300;text-rendering:optimizeLegibility}
.verseblock pre strong{font-weight:400}
.verseblock .attribution{margin-top:1.25rem;margin-left:.5ex}
.quoteblock .attribution,.verseblock .attribution{font-size:.9375em;line-height:1.45;font-style:italic}
.quoteblock .attribution br,.verseblock .attribution br{display:none}
.quoteblock .attribution cite,.verseblock .attribution cite{display:block;letter-spacing:-.025em;color:rgba(0,0,0,.6)}
.quoteblock.abstract blockquote::before,.quoteblock.excerpt blockquote::before,.quoteblock .quoteblock blockquote::before{display:none}
.quoteblock.abstract blockquote,.quoteblock.abstract p,.quoteblock.excerpt blockquote,.quoteblock.excerpt p,.quoteblock .quoteblock blockquote,.quoteblock .quoteblock p{line-height:1.6;word-spacing:0}
.quoteblock.abstract{margin:0 1em 1.25em;display:block}
.quoteblock.abstract>.title{margin:0 0 .375em;font-size:1.15em;text-align:center}
.quoteblock.excerpt,.quoteblock .quoteblock{margin:0 0 1.25em;padding:0 0 .25em 1em;border-left:.25em solid #dddddf}
.quoteblock.excerpt blockquote,.quoteblock.excerpt p,.quoteblock .quoteblock blockquote,.quoteblock .quoteblock p{color:inherit;font-size:1.0625rem}
.quoteblock.excerpt .attribution,.quoteblock .quoteblock .attribution{color:inherit;text-align:left;margin-right:0}
table.tableblock{max-width:100%;border-collapse:separate}
p.tableblock:last-child{margin-bottom:0}
td.tableblock>.content{margin-bottom:-1.25em}
table.tableblock,th.tableblock,td.tableblock{border:0 solid #dedede}
table.grid-all>thead>tr>.tableblock,table.grid-all>tbody>tr>.tableblock{border-width:0 1px 1px 0}
table.grid-all>tfoot>tr>.tableblock{border-width:1px 1px 0 0}
table.grid-cols>*>tr>.tableblock{border-width:0 1px 0 0}
table.grid-rows>thead>tr>.tableblock,table.grid-rows>tbody>tr>.tableblock{border-width:0 0 1px}
table.grid-rows>tfoot>tr>.tableblock{border-width:1px 0 0}
table.grid-all>*>tr>.tableblock:last-child,table.grid-cols>*>tr>.tableblock:last-child{border-right-width:0}
table.grid-all>tbody>tr:last-child>.tableblock,table.grid-all>thead:last-child>tr>.tableblock,table.grid-rows>tbody>tr:last-child>.tableblock,table.grid-rows>thead:last-child>tr>.tableblock{border-bottom-width:0}
table.frame-all{border-width:1px}
table.frame-sides{border-width:0 1px}
table.frame-topbot,table.frame-ends{border-width:1px 0}
table.stripes-all tr,table.stripes-odd tr:nth-of-type(odd){background:#f8f8f7}
table.stripes-none tr,table.stripes-odd tr:nth-of-type(even){background:none}
th.halign-left,td.halign-left{text-align:left}
th.halign-right,td.halign-right{text-align:right}
th.halign-center,td.halign-center{text-align:center}
th.valign-top,td.valign-top{vertical-align:top}
th.valign-bottom,td.valign-bottom{vertical-align:bottom}
th.valign-middle,td.valign-middle{vertical-align:middle}
table thead th,table tfoot th{font-weight:bold}
tbody tr th{display:table-cell;line-height:1.6;background:#f7f8f7}
tbody tr th,tbody tr th p,tfoot tr th,tfoot tr th p{color:rgba(0,0,0,.8);font-weight:bold}
p.tableblock>code:only-child{background:none;padding:0}
p.tableblock{font-size:1em}
td>div.verse{white-space:pre}
ol{margin-left:1.75em}
ul li ol{margin-left:1.5em}
dl dd{margin-left:1.125em}
dl dd:last-child,dl dd:last-child>:last-child{margin-bottom:0}
ol>li p,ul>li p,ul dd,ol dd,.olist .olist,.ulist .ulist,.ulist .olist,.olist .ulist{margin-bottom:.625em}
ul.checklist,ul.none,ol.none,ul.no-bullet,ol.no-bullet,ol.unnumbered,ul.unstyled,ol.unstyled{list-style-type:none}
ul.no-bullet,ol.no-bullet,ol.unnumbered{margin-left:.625em}
ul.unstyled,ol.unstyled{margin-left:0}
ul.checklist{margin-left:.625em}
ul.checklist li>p:first-child>.fa-square-o:first-child,ul.checklist li>p:first-child>.fa-check-square-o:first-child{width:1.25em;font-size:.8em;position:relative;bottom:.125em}
ul.checklist li>p:first-child>input[type="checkbox"]:first-child{margin-right:.25em}
ul.inline{display:-ms-flexbox;display:-webkit-box;display:flex;-ms-flex-flow:row wrap;-webkit-flex-flow:row wrap;flex-flow:row wrap;list-style:none;margin:0 0 .625em -1.25em}
ul.inline>li{margin-left:1.25em}
.unstyled dl dt{font-weight:400;font-style:normal}
ol.arabic{list-style-type:decimal}
ol.decimal{list-style-type:decimal-leading-zero}
ol.loweralpha{list-style-type:lower-alpha}
ol.upperalpha{list-style-type:upper-alpha}
ol.lowerroman{list-style-type:lower-roman}
ol.upperroman{list-style-type:upper-roman}
ol.lowergreek{list-style-type:lower-greek}
.hdlist>table,.colist>table{border:0;background:none}
.hdlist>table>tbody>tr,.colist>table>tbody>tr{background:none}
td.hdlist1,td.hdlist2{vertical-align:top;padding:0 .625em}
td.hdlist1{font-weight:bold;padding-bottom:1.25em}
.literalblock+.colist,.listingblock+.colist{margin-top:-.5em}
.colist td:not([class]):first-child{padding:.4em .75em 0;line-height:1;vertical-align:top}
.colist td:not([class]):first-child img{max-width:none}
.colist td:not([class]):last-child{padding:.25em 0}
.thumb,.th{line-height:0;display:inline-block;border:solid 4px #fff;-webkit-box-shadow:0 0 0 1px #ddd;box-shadow:0 0 0 1px #ddd}
.imageblock.left{margin:.25em .625em 1.25em 0}
.imageblock.right{margin:.25em 0 1.25em .625em}
.imageblock>.title{margin-bottom:0}
.imageblock.thumb,.imageblock.th{border-width:6px}
.imageblock.thumb>.title,.imageblock.th>.title{padding:0 .125em}
.image.left,.image.right{margin-top:.25em;margin-bottom:.25em;display:inline-block;line-height:0}
.image.left{margin-right:.625em}
.image.right{margin-left:.625em}
a.image{text-decoration:none;display:inline-block}
a.image object{pointer-events:none}
sup.footnote,sup.footnoteref{font-size:.875em;position:static;vertical-align:super}
sup.footnote a,sup.footnoteref a{text-decoration:none}
sup.footnote a:active,sup.footnoteref a:active{text-decoration:underline}
#footnotes{padding-top:.75em;padding-bottom:.75em;margin-bottom:.625em}
#footnotes hr{width:20%;min-width:6.25em;margin:-.25em 0 .75em;border-width:1px 0 0}
#footnotes .footnote{padding:0 .375em 0 .225em;line-height:1.3334;font-size:.875em;margin-left:1.2em;margin-bottom:.2em}
#footnotes .footnote a:first-of-type{font-weight:bold;text-decoration:none;margin-left:-1.05em}
#footnotes .footnote:last-of-type{margin-bottom:0}
#content #footnotes{margin-top:-.625em;margin-bottom:0;padding:.75em 0}
.gist .file-data>table{border:0;background:#fff;width:100%;margin-bottom:0}
.gist .file-data>table td.line-data{width:99%}
div.unbreakable{page-break-inside:avoid}
.big{font-size:larger}
.small{font-size:smaller}
.underline{text-decoration:underline}
.overline{text-decoration:overline}
.line-through{text-decoration:line-through}
.aqua{color:#00bfbf}
.aqua-background{background-color:#00fafa}
.black{color:#000}
.black-background{background-color:#000}
.blue{color:#0000bf}
.blue-background{background-color:#0000fa}
.fuchsia{color:#bf00bf}
.fuchsia-background{background-color:#fa00fa}
.gray{color:#606060}
.gray-background{background-color:#7d7d7d}
.green{color:#006000}
.green-background{background-color:#007d00}
.lime{color:#00bf00}
.lime-background{background-color:#00fa00}
.maroon{color:#600000}
.maroon-background{background-color:#7d0000}
.navy{color:#000060}
.navy-background{background-color:#00007d}
.olive{color:#606000}
.olive-background{background-color:#7d7d00}
.purple{color:#600060}
.purple-background{background-color:#7d007d}
.red{color:#bf0000}
.red-background{background-color:#fa0000}
.silver{color:#909090}
.silver-background{background-color:#bcbcbc}
.teal{color:#006060}
.teal-background{background-color:#007d7d}
.white{color:#bfbfbf}
.white-background{background-color:#fafafa}
.yellow{color:#bfbf00}
.yellow-background{background-color:#fafa00}
span.icon>.fa{cursor:default}
a span.icon>.fa{cursor:inherit}
.admonitionblock td.icon [class^="fa icon-"]{font-size:2.5em;text-shadow:1px 1px 2px rgba(0,0,0,.5);cursor:default}
.admonitionblock td.icon .icon-note::before{content:"\f05a";color:#19407c}
.admonitionblock td.icon .icon-tip::before{content:"\f0eb";text-shadow:1px 1px 2px rgba(155,155,0,.8);color:#111}
.admonitionblock td.icon .icon-warning::before{content:"\f071";color:#bf6900}
.admonitionblock td.icon .icon-caution::before{content:"\f06d";color:#bf3400}
.admonitionblock td.icon .icon-important::before{content:"\f06a";color:#bf0000}
.conum[data-value]{display:inline-block;color:#fff!important;background-color:rgba(0,0,0,.8);-webkit-border-radius:100px;border-radius:100px;text-align:center;font-size:.75em;width:1.67em;height:1.67em;line-height:1.67em;font-family:"Open Sans","DejaVu Sans",sans-serif;font-style:normal;font-weight:bold}
.conum[data-value] *{color:#fff!important}
.conum[data-value]+b{display:none}
.conum[data-value]::after{content:attr(data-value)}
pre .conum[data-value]{position:relative;top:-.125em}
b.conum *{color:inherit!important}
.conum:not([data-value]):empty{display:none}
dt,th.tableblock,td.content,div.footnote{text-rendering:optimizeLegibility}
h1,h2,p,td.content,span.alt{letter-spacing:-.01em}
p strong,td.content strong,div.footnote strong{letter-spacing:-.005em}
p,blockquote,dt,td.content,span.alt{font-size:1.0625rem}
p{margin-bottom:1.25rem}
.sidebarblock p,.sidebarblock dt,.sidebarblock td.content,p.tableblock{font-size:1em}
.exampleblock>.content{background-color:#fffef7;border-color:#e0e0dc;-webkit-box-shadow:0 1px 4px #e0e0dc;box-shadow:0 1px 4px #e0e0dc}
.print-only{display:none!important}
@page{margin:1.25cm .75cm}
@media print{*{-webkit-box-shadow:none!important;box-shadow:none!important;text-shadow:none!important}
html{font-size:80%}
a{color:inherit!important;text-decoration:underline!important}
a.bare,a[href^="#"],a[href^="mailto:"]{text-decoration:none!important}
a[href^="http:"]:not(.bare)::after,a[href^="https:"]:not(.bare)::after{content:"(" attr(href) ")";display:inline-block;font-size:.875em;padding-left:.25em}
abbr[title]::after{content:" (" attr(title) ")"}
pre,blockquote,tr,img,object,svg{page-break-inside:avoid}
thead{display:table-header-group}
svg{max-width:100%}
p,blockquote,dt,td.content{font-size:1em;orphans:3;widows:3}
h2,h3,#toctitle,.sidebarblock>.content>.title{page-break-after:avoid}
#toc,.sidebarblock,.exampleblock>.content{background:none!important}
#toc{border-bottom:1px solid #dddddf!important;padding-bottom:0!important}
body.book #header{text-align:center}
body.book #header>h1:first-child{border:0!important;margin:2.5em 0 1em}
body.book #header .details{border:0!important;display:block;padding:0!important}
body.book #header .details span:first-child{margin-left:0!important}
body.book #header .details br{display:block}
body.book #header .details br+span::before{content:none!important}
body.book #toc{border:0!important;text-align:left!important;padding:0!important;margin:0!important}
body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-break-before:always}
.listingblock code[data-lang]::before{display:block}
#footer{padding:0 .9375em}
.hide-on-print{display:none!important}
.print-only{display:block!important}
.hide-for-print{display:none!important}
.show-for-print{display:inherit!important}}
@media print,amzn-kf8{#header>h1:first-child{margin-top:1.25rem}
.sect1{padding:0!important}
.sect1+.sect1{border:0}
#footer{background:none}
#footer-text{color:rgba(0,0,0,.6);font-size:.9em}}
@media amzn-kf8{#header,#content,#footnotes,#footer{padding:0}}

      </style>
      <link href='https://fonts.googleapis.com/css?family=Noto+Serif' rel='stylesheet' type='text/css'>
      <link href='https://fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,700italic,800,800italic' rel='stylesheet' type='text/css'>
      <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet">
      <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/styles/default.min.css">
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/highlight.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/asciidoc.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/yaml.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/dockerfile.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/makefile.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/go.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/rust.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/haskell.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/typescript.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/scss.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/less.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/handlebars.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/groovy.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/scala.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/bash.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.9.0/languages/ini.min.js"></script>
      <script>hljs.initHighlightingOnLoad();</script>
    </head>
    <body>
      <div id="wrapper">
        <div class="article">
          <h1 id="__asciidoctor-preview-0__">Spark Core</h1>
<div id="preamble">
<div class="sectionbody">
<div id="__asciidoctor-preview-1__" class="exampleblock">
<div class="title">Objectives for this stage</div>
<div class="content">
<div id="__asciidoctor-preview-2__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the characteristics and role of Spark</p>
</li>
<li>
<p>Be able to install Spark and build a Spark cluster</p>
</li>
<li>
<p>Understand RDD, Spark&#8217;s programming model, through an introductory example</p>
</li>
<li>
<p>Become familiar with common uses of RDDs</p>
</li>
</ol>
</div>
</div>
</div>
</div>
<div id="toc" class="toc">
<div id="toctitle">Table of Contents</div>
<ul class="sectlevel1">
<li><a href="#_1_spark_概述">1. Spark Overview</a>
<ul class="sectlevel2">
<li><a href="#_1_1_spark是什么">1.1. What Is Spark</a></li>
<li><a href="#_1_2_spark的特点优点">1.2. Features (Advantages) of Spark</a></li>
<li><a href="#_1_3_spark组件">1.3. Spark Components</a></li>
<li><a href="#_1_4_spark和hadoop的异同">1.4. Spark and Hadoop Compared</a></li>
</ul>
</li>
<li><a href="#_2_spark_集群搭建">2. Spark Cluster Setup</a>
<ul class="sectlevel2">
<li><a href="#_2_1_spark_集群结构">2.1. Spark Cluster Architecture</a></li>
<li><a href="#_2_2_spark_集群搭建">2.2. Spark Cluster Setup</a></li>
<li><a href="#_2_3_spark_集群高可用搭建">2.3. Spark Cluster High-Availability Setup</a></li>
<li><a href="#_2_4_第一个应用的运行">2.4. Running the First Application</a></li>
</ul>
</li>
<li><a href="#_3_spark_入门">3. Getting Started with Spark</a>
<ul class="sectlevel2">
<li><a href="#_3_1_spark_shell_的方式编写_wordcount">3.1. Writing WordCount in the Spark Shell</a></li>
<li><a href="#_3_2_读取_hdfs_上的文件">3.2. Reading Files from HDFS</a></li>
<li><a href="#_3_4_编写独立应用提交_spark_任务">3.4. Writing a Standalone Application to Submit a Spark Job</a></li>
</ul>
</li>
<li><a href="#_4_rdd_入门">4. RDD Basics</a>
<ul class="sectlevel2">
<li><a href="#_4_1_创建_rdd">4.1. Creating RDDs</a></li>
<li><a href="#_4_2_rdd_算子">4.2. RDD Operators</a></li>
</ul>
</li>
</div>
</div>
<div class="sect1">
<h2 id="_1_spark_概述">1. Spark Overview</h2>
<div class="sectionbody">
<div id="__asciidoctor-preview-7__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-8__" class="olist arabic">
<ol class="arabic">
<li>
<p>What Spark is</p>
</li>
<li>
<p>The features of Spark</p>
</li>
<li>
<p>The components of the Spark ecosystem</p>
</li>
</ol>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_1_1_spark是什么">1.1. What Is Spark</h3>
<div id="__asciidoctor-preview-12__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-13__" class="olist arabic">
<ol class="arabic">
<li>
<p>Learn the history of Spark and why it was created, to get an initial sense of what Spark is for</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-15__" class="dlist">
<dl>
<dt class="hdlist1">The history of Spark</dt>
<dd>
<div id="__asciidoctor-preview-18__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-19__" class="ulist">
<ul>
<li>
<p>Started in 2009 at the AMPLab of UC Berkeley</p>
</li>
<li>
<p>Open-sourced in 2010 under the BSD license</p>
</li>
<li>
<p>Donated to the Apache Software Foundation in 2013, with the license switched to Apache 2.0</p>
</li>
<li>
<p>In February 2014, Spark became an Apache top-level project</p>
</li>
<li>
<p>In November 2014, the team at Databricks, the company behind Spark, used Spark to set a new world record in data sorting</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">What Spark is</dt>
<dd>
<div id="__asciidoctor-preview-27__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-28__" class="paragraph">
<p>Apache Spark is a fast, general-purpose cluster computing system.
Whereas Hadoop MapReduce writes intermediate results to disk, Spark keeps them in memory, so it can compute on data before it is ever written to disk.</p>
</div>
<div id="__asciidoctor-preview-29__" class="paragraph">
<p>Spark is only a computing framework. Unlike Hadoop, it does not include a distributed file system or a complete scheduler, so to use Spark you need to pair it with an external file system and a more mature scheduling system</p>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Why Spark exists</dt>
<dd>
<div id="__asciidoctor-preview-32__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-33__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/Snipaste_2019-05-05_11-26-03.png" alt="Snipaste 2019 05 05 11 26 03">
</div>
</div>
<div id="__asciidoctor-preview-34__" class="paragraph">
<p>Before Spark, mature computing systems such as MapReduce already existed. They provided high-level APIs, ran computations on a cluster, and offered fault tolerance, thereby enabling distributed computing.</p>
</div>
<div id="__asciidoctor-preview-35__" class="paragraph">
<p>Although these frameworks provided rich abstractions for accessing a cluster&#8217;s computational resources, they lacked an abstraction for exploiting distributed memory. To reuse data between computations, they had to write intermediate results to a stable file system such as HDFS, which incurs data replication, disk I/O, and serialization, so they are very inefficient for workloads that reuse intermediate results across multiple computations.</p>
</div>
<div id="__asciidoctor-preview-36__" class="paragraph">
<p>Such workloads are very common, for example iterative computation, interactive data mining, and graph processing.</p>
</div>
<div id="__asciidoctor-preview-37__" class="paragraph">
<p>Recognizing this problem, the AMPLab proposed a new model called <code>RDDs</code>.</p>
</div>
<div id="__asciidoctor-preview-38__" class="paragraph">
<p><code>RDDs</code> are fault-tolerant, parallel data structures that let users explicitly keep intermediate result datasets in memory, and that allow the partitioning of a dataset to be controlled in order to optimize data placement.</p>
</div>
<div id="__asciidoctor-preview-39__" class="paragraph">
<p><code>RDDs</code> also provide a rich API for operating on datasets.</p>
</div>
<div id="__asciidoctor-preview-40__" class="paragraph">
<p>RDDs were later implemented and open-sourced by the AMPLab in a framework called Spark.</p>
</div>
</div>
</div>
</dd>
</dl>
</div>
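The in-memory reuse that motivated RDDs can be sketched in a few lines of plain Python. This is a conceptual analogy only, not Spark's API or implementation: the hypothetical `ToyRDD` class records how a dataset is computed (its lineage) and can cache the materialized result, so several downstream computations reuse one in-memory copy instead of re-reading the source.

```python
import time

# Toy stand-in for the idea behind RDDs: a dataset that knows how to
# (re)build itself and can optionally cache its result in memory.
# Illustration of the concept only -- not Spark's implementation.
class ToyRDD:
    def __init__(self, compute):
        self._compute = compute          # function that rebuilds the data
        self._cache = None               # in-memory cache, off by default

    def map(self, f):
        # derive a new dataset; it pulls from this one when collected
        return ToyRDD(lambda: [f(x) for x in self.collect()])

    def cache(self):
        self._cache = self._compute()    # materialize once, keep in memory
        return self

    def collect(self):
        return self._cache if self._cache is not None else self._compute()

def slow_load():
    time.sleep(0.05)                     # pretend this reads from disk/HDFS
    return list(range(10))

base = ToyRDD(slow_load).cache()         # loaded once, reused below
doubled = base.map(lambda x: x * 2)
tripled = base.map(lambda x: x * 3)
print(doubled.collect()[:3])             # [0, 2, 4]
print(tripled.collect()[:3])             # [0, 3, 6]
```

In real Spark, `rdd.cache()` (or `rdd.persist()`) plays the role of `cache()` here: without it, each downstream computation would re-read the source, which is exactly the MapReduce-style inefficiency described above.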
<div id="__asciidoctor-preview-41__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-42__" class="olist arabic">
<ol class="arabic">
<li>
<p>Spark is an Apache open-source framework</p>
</li>
<li>
<p>The company behind Spark is Databricks</p>
</li>
<li>
<p>Spark was created to solve the problem that earlier computing systems such as MapReduce could not keep intermediate results in memory</p>
</li>
<li>
<p>The core of Spark is RDDs; RDDs are not only a computation model but also a data structure</p>
</li>
</ol>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_1_2_spark的特点优点">1.2. Features (Advantages) of Spark</h3>
<div id="__asciidoctor-preview-47__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-48__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the features of Spark, and thus why you would use it</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-50__" class="dlist">
<dl>
<dt class="hdlist1">Fast</dt>
<dd>
<div id="__asciidoctor-preview-53__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-54__" class="ulist">
<ul>
<li>
<p>Running in memory, Spark can be up to 100x faster than Hadoop MapReduce</p>
</li>
<li>
<p>Running on disk, it is roughly 10x faster than Hadoop MapReduce</p>
</li>
<li>
<p>Spark implements a DAG execution engine over RDDs, which can cache data in memory for iterative processing</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Easy to use</dt>
<dd>
<div id="__asciidoctor-preview-60__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-61__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-python hljs" data-lang="python">df = spark.read.json("logs.json")
df.where("age &gt; 21") \
  .select("name.first") \
  .show()</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-62__" class="ulist">
<ul>
<li>
<p>Spark offers APIs in multiple languages, including Java, Scala, Python, R, and SQL.</p>
</li>
<li>
<p>Spark provides more than 80 high-level operators, making it very easy to build parallel programs</p>
</li>
<li>
<p>Spark can be queried interactively from shells based on Scala, Python, R, and SQL.</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">General</dt>
<dd>
<div id="__asciidoctor-preview-68__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-69__" class="ulist">
<ul>
<li>
<p>Spark provides a complete stack, including SQL execution, the imperative Dataset API, the MLlib machine learning library, the GraphX graph computing framework, and Spark Streaming for stream processing</p>
</li>
<li>
<p>Users can combine all of these tools in a single application, which was revolutionary</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Compatible</dt>
<dd>
<div id="__asciidoctor-preview-74__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-75__" class="ulist">
<ul>
<li>
<p>Spark can run on clusters managed by Hadoop YARN, Apache Mesos, Kubernetes, or Spark Standalone</p>
</li>
<li>
<p>Spark can access many data sources, including HBase, HDFS, Hive, and Cassandra</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-78__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-79__" class="ulist">
<ul>
<li>
<p>APIs in Java, Scala, Python, and R</p>
</li>
<li>
<p>Scales to more than 8,000 nodes</p>
</li>
<li>
<p>Can cache datasets in memory for interactive data analysis</p>
</li>
<li>
<p>Provides interactive shells that shorten the feedback loop of exploratory data analysis</p>
</li>
</ul>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_1_3_spark组件">1.3. Spark Components</h3>
<div id="__asciidoctor-preview-84__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-85__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand what Spark can do</p>
</li>
<li>
<p>Understand the learning path for Spark</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-88__" class="paragraph">
<p>The core functionality of Spark is RDDs, which live in the <code>spark-core</code> package, the most fundamental package in Spark.</p>
</div>
<div id="__asciidoctor-preview-89__" class="paragraph">
<p>On top of <code>spark-core</code>, Spark provides many tools adapted to different kinds of computation.</p>
</div>
<div id="__asciidoctor-preview-90__" class="dlist">
<dl>
<dt class="hdlist1">Spark Core and Resilient Distributed Datasets (RDDs)</dt>
<dd>
<div id="__asciidoctor-preview-93__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-94__" class="ulist">
<ul>
<li>
<p>Spark Core is the foundation of all of Spark; it provides distributed task scheduling and basic I/O functionality</p>
</li>
<li>
<p>Spark&#8217;s fundamental program abstraction is the Resilient Distributed Dataset (RDD), a fault-tolerant collection that can be operated on in parallel</p>
<div id="__asciidoctor-preview-97__" class="ulist">
<ul>
<li>
<p>RDDs can be created by referencing datasets in external storage systems (such as HDFS or HBase), or by transforming existing RDDs</p>
</li>
<li>
<p>The RDD abstraction has APIs for Java, Scala, Python, and other languages</p>
</li>
<li>
<p>RDDs reduce programming complexity; operating on RDDs feels much like operating on a local collection through Scala collections or Java 8 Streams</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Spark SQL</dt>
<dd>
<div id="__asciidoctor-preview-103__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-104__" class="ulist">
<ul>
<li>
<p>On top of <code>spark-core</code>, Spark SQL introduces data abstractions named Dataset and DataFrame</p>
</li>
<li>
<p>Spark SQL provides the ability to execute SQL over Datasets and DataFrames</p>
</li>
<li>
<p>Spark SQL provides a DSL, so Datasets and DataFrames can be manipulated from Scala, Java, Python, and other languages</p>
</li>
<li>
<p>It also supports executing SQL through JDBC/ODBC servers</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Spark Streaming</dt>
<dd>
<div id="__asciidoctor-preview-111__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-112__" class="ulist">
<ul>
<li>
<p>Spark Streaming leverages the fast scheduling capability of <code>spark-core</code> to run streaming analytics</p>
</li>
<li>
<p>It ingests data in small batches and runs RDD transformations over them</p>
</li>
<li>
<p>It makes it possible to use streaming analytics and batch analytics in the same program</p>
</li>
</ul>
</div>
</div>
</div>
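The micro-batch idea can be sketched in plain Python. This is a loose analogy, not the real DStream API: the stream is sliced into small batches, and ordinary batch code runs over each batch.

```python
from itertools import islice

# Plain-Python analogy of Spark Streaming's micro-batch model (not the
# real DStream API): slice an unbounded source into small batches and
# apply an ordinary batch transformation to each one.
def micro_batches(source, batch_size):
    it = iter(source)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

stream = range(10)                        # pretend this is a live stream
results = []
for batch in micro_batches(stream, 3):
    # the same code could run over a static dataset -- that is the point:
    # streaming and batch analytics share one programming model
    results.append([x * x for x in batch])

print(results)  # [[0, 1, 4], [9, 16, 25], [36, 49, 64], [81]]
```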
</dd>
<dt class="hdlist1">MLlib</dt>
<dd>
<div id="__asciidoctor-preview-118__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-119__" class="ulist">
<ul>
<li>
<p>MLlib is Spark&#8217;s distributed machine learning framework. Thanks to Spark&#8217;s distributed in-memory architecture, it is up to 10x faster than the disk-based Apache Mahout on Hadoop, and it also scales very well</p>
</li>
<li>
<p>MLlib implements many common machine learning and statistics algorithms, simplifying large-scale machine learning:</p>
</li>
<li>
<p>Summary statistics, correlations, stratified sampling, hypothesis testing, random data generation</p>
</li>
<li>
<p>Support vector machines, regression, linear regression, logistic regression, decision trees, naive Bayes</p>
</li>
<li>
<p>Collaborative filtering, ALS</p>
</li>
<li>
<p>K-means</p>
</li>
<li>
<p>SVD (singular value decomposition), PCA (principal component analysis)</p>
</li>
<li>
<p>TF-IDF, Word2Vec, StandardScaler</p>
</li>
<li>
<p>SGD (stochastic gradient descent), L-BFGS</p>
</li>
</ul>
</div>
</div>
</div>
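To illustrate the kind of iterative algorithm MLlib distributes, here is a minimal single-machine 1-D k-means in plain Python. It is an analogy only; MLlib's implementation is distributed and far more sophisticated.

```python
# Minimal single-machine 1-D k-means, shown only to illustrate the kind
# of iterative algorithm MLlib runs at scale; MLlib's own implementation
# is distributed and far more elaborate.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:                            # assignment step
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else c   # update step
                   for c, ps in clusters.items()]
    return sorted(centers)

points = [1.0, 1.1, 0.9, 9.0, 9.2, 8.8]
print(kmeans_1d(points, centers=[0.0, 10.0]))  # converges near [1.0, 9.0]
```

Each iteration re-reads the full dataset, which is exactly why caching the data in memory (as Spark does) beats re-reading it from disk every pass.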
</dd>
<dt class="hdlist1">GraphX</dt>
<dd>
<div id="__asciidoctor-preview-131__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-132__" class="paragraph">
<p>GraphX is a distributed graph computing framework. It provides a set of APIs for expressing graph computations, and it optimizes the execution of this abstraction</p>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-133__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-134__" class="ulist">
<ul>
<li>
<p>Spark provides components for batch processing (RDDs), structured queries (DataFrame), stream processing (Spark Streaming), machine learning (MLlib), and graph computing (GraphX)</p>
</li>
<li>
<p>All of these components are built on the common computing engine, RDDs, so the RDDs in <code>spark-core</code> are the foundation of all of Spark</p>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-137__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/site/20190506/WseAzPXovsHa.png" alt="WseAzPXovsHa">
</div>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_1_4_spark和hadoop的异同">1.4. Spark and Hadoop Compared</h3>
<table id="__asciidoctor-preview-138__" class="tableblock frame-all grid-all stretch">
<colgroup>
<col style="width: 33.3333%;">
<col style="width: 33.3333%;">
<col style="width: 33.3334%;">
</colgroup>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Hadoop</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Spark</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><strong>Type</strong></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Foundational platform including compute, storage, and scheduling</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Distributed computing tool</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><strong>Scenarios</strong></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Batch processing over large datasets</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Iterative computation, interactive computation, stream processing</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><strong>Latency</strong></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">High</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Low</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><strong>Ease of use</strong></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Fairly low-level API, poor fit for many algorithms</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Higher-level API, convenient to use</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><strong>Cost</strong></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Low hardware requirements, cheap</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Needs plenty of memory, relatively expensive</p></td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
<div class="sect1">
<h2 id="_2_spark_集群搭建">2. Spark Cluster Setup</h2>
<div class="sectionbody">
<div id="__asciidoctor-preview-139__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-140__" class="olist arabic">
<ol class="arabic">
<li>
<p>Starting from Spark&#8217;s cluster architecture, understand distributed environments and how Spark runs</p>
</li>
<li>
<p>Understand how to build a Spark cluster, including a highly available setup</p>
</li>
</ol>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_2_1_spark_集群结构">2.1. Spark Cluster Architecture</h3>
<div id="__asciidoctor-preview-143__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-144__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the basic concepts of distributed scheduling by following how an application runs</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-146__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">How does Spark run a program on a cluster?</div>
<div id="__asciidoctor-preview-147__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/site/20190506/Xr4bx4UiKJpH.png" alt="Xr4bx4UiKJpH">
</div>
</div>
<div id="__asciidoctor-preview-148__" class="paragraph">
<p>Spark itself has no cluster management tool, but managing a cluster of thousands of machines without one is hardly realistic, so Spark can delegate that work to an external cluster manager</p>
</div>
<div id="__asciidoctor-preview-149__" class="paragraph">
<p>The overall flow: the Spark Client submits a job, the cluster manager is asked to allocate resources, and the computation tasks are then distributed across the cluster to run</p>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-150__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/cf76d1086f4a7d7e21c96ceed8bdb271.png" alt="cf76d1086f4a7d7e21c96ceed8bdb271" width="600">
</div>
</div>
<div id="__asciidoctor-preview-151__" class="dlist">
<dl>
<dt class="hdlist1">Glossary</dt>
<dd>
<div id="__asciidoctor-preview-154__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-155__" class="ulist">
<ul>
<li>
<p><code>Driver</code></p>
<div id="__asciidoctor-preview-157__" class="paragraph">
<p>This process invokes the <code>main</code> method of the Spark program and starts the SparkContext</p>
</div>
</li>
<li>
<p><code>Cluster Manager</code></p>
<div id="__asciidoctor-preview-159__" class="paragraph">
<p>This process talks to the external cluster tool to acquire and release cluster resources</p>
</div>
</li>
<li>
<p><code>Worker</code></p>
<div id="__asciidoctor-preview-161__" class="paragraph">
<p>This daemon process is responsible for starting and managing Executors</p>
</div>
</li>
<li>
<p><code>Executor</code></p>
<div id="__asciidoctor-preview-163__" class="paragraph">
<p>This JVM process is responsible for running Spark Tasks</p>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-164__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/cf76d1086f4a7d7e21c96ceed8bdb271.png" alt="cf76d1086f4a7d7e21c96ceed8bdb271" width="600">
</div>
</div>
<div id="__asciidoctor-preview-165__" class="dlist">
<dl>
<dt class="hdlist1">Running a Spark program roughly goes through the following steps</dt>
<dd>
<div id="__asciidoctor-preview-168__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-169__" class="olist arabic">
<ol class="arabic">
<li>
<p>Start the Driver and create the SparkContext</p>
</li>
<li>
<p>The Client submits the program to the Driver, and the Driver <strong>asks the Cluster Manager for cluster resources</strong></p>
</li>
<li>
<p>Once resources are allocated, <strong>Executors are started on the Workers</strong></p>
</li>
<li>
<p>The Driver turns the program into Tasks and distributes them to the Executors for execution</p>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Question 1: Where can a Spark program run?</dt>
<dd>
<div id="__asciidoctor-preview-176__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-177__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div id="__asciidoctor-preview-178__" class="ulist">
<ul>
<li>
<p><strong>Cluster:</strong> a group of computers working together, usually appearing as a single machine, <strong>with the work it runs controlled and scheduled by software</strong></p>
</li>
<li>
<p><strong>Cluster manager:</strong> the software that schedules work onto the cluster</p>
</li>
<li>
<p><strong>Common cluster managers:</strong> Hadoop YARN, Apache Mesos, Kubernetes</p>
</li>
</ul>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-182__" class="paragraph">
<p>Spark can run tasks in two modes:</p>
</div>
<div id="__asciidoctor-preview-183__" class="ulist">
<ul>
<li>
<p><strong>Local:</strong> parallelism is simulated with threads to run the program on a single machine</p>
</li>
<li>
<p><strong>Cluster:</strong> a cluster manager is used to interact with different kinds of clusters and run the tasks on them</p>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-186__" class="paragraph">
<p>The cluster managers Spark can use are:</p>
</div>
<div id="__asciidoctor-preview-187__" class="ulist">
<ul>
<li>
<p>Spark Standalone</p>
</li>
<li>
<p>Hadoop Yarn</p>
</li>
<li>
<p>Apache Mesos</p>
</li>
<li>
<p>Kubernetes</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Question 2: When are the Driver and the Workers started?</dt>
<dd>
<div id="__asciidoctor-preview-194__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-195__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/cf76d1086f4a7d7e21c96ceed8bdb271.png" alt="cf76d1086f4a7d7e21c96ceed8bdb271" width="600">
</div>
</div>
<div id="__asciidoctor-preview-196__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/33c817e136edc008c3ef71cb6992e9a3.png" alt="33c817e136edc008c3ef71cb6992e9a3" width="800">
</div>
</div>
<div id="__asciidoctor-preview-197__" class="ulist">
<ul>
<li>
<p>A Standalone cluster has two roles: Master and Slave, where the Slaves are the Workers, so in a Standalone cluster a fixed number of Workers is created when the cluster starts</p>
</li>
<li>
<p>The Driver can be started in two modes: Client and Cluster. In Client mode, the Driver runs on the Client and is started when the Client starts. In Cluster mode, the Driver runs inside some Worker and is started when the application is submitted</p>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-200__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/92180f4b9061374cdf3169b4bd84090e.png" alt="92180f4b9061374cdf3169b4bd84090e" width="800">
</div>
</div>
<div id="__asciidoctor-preview-201__" class="ulist">
<ul>
<li>
<p>A YARN cluster likewise has Client and Cluster modes. Newer versions are gradually deprecating Client mode, so the figure above shows Cluster mode</p>
</li>
<li>
<p>To run a Spark program on YARN, the client first talks to the ResourceManager to start the ApplicationMaster, which runs the Driver. After the Driver sets up the basic environment, the ResourceManager provides containers in which the Executors run; the Executors register themselves back with the Driver and request Tasks to execute</p>
</li>
<li>
<p>This is covered in more detail in the later section on Spark task scheduling</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-205__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-206__" class="ulist">
<ul>
<li>
<p><code>Master</code> is the controller: it schedules, manages, and coordinates the Workers and keeps track of resource status</p>
</li>
<li>
<p><code>Slave</code> corresponds to a Worker node; it starts Executors to run Tasks and reports to the Master periodically</p>
</li>
<li>
<p><code>Driver</code> runs on the Client or on a Slave (Worker); by default it runs on a Slave (Worker)</p>
</li>
</ul>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_2_2_spark_集群搭建">2.2. Spark Cluster Setup</h3>
<div id="__asciidoctor-preview-210__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-211__" class="olist arabic">
<ol class="arabic">
<li>
<p>Get a general idea of how to set up a Spark Standalone cluster</p>
<div id="__asciidoctor-preview-213__" class="paragraph">
<p>The goal of this part is to build a cluster for testing and learning; real production clusters may be more complex</p>
</div>
</li>
</ol>
</div>
</div>
</div>
<table id="__asciidoctor-preview-214__" class="tableblock frame-all grid-all stretch">
<caption class="title">Table 1. Cluster components</caption>
<colgroup>
<col style="width: 33.3333%;">
<col style="width: 33.3333%;">
<col style="width: 33.3334%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Node01</th>
<th class="tableblock halign-left valign-top">Node02</th>
<th class="tableblock halign-left valign-top">Node03</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">Master</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Slave</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Slave</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">History Server</p></td>
<td class="tableblock halign-left valign-top"></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
<div id="__asciidoctor-preview-215__" class="dlist">
<dl>
<dt class="hdlist1">Step 1: Download and extract</dt>
<dd>
<div id="__asciidoctor-preview-218__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-219__" class="admonitionblock warning">
<table>
<tr>
<td class="icon">
<i class="fa icon-warning" title="Warning"></i>
</td>
<td class="content">
This step assumes that your Hadoop cluster already runs without problems, that the Linux firewall and SELinux are disabled, and that the clocks are synchronized. If not, see the Hadoop cluster setup section and take care of those three things first
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-220__" class="olist arabic">
<ol class="arabic">
<li>
<p>Download the Spark package, choosing the build that matches your Hadoop version (the course materials already include the Spark package; just upload it to the cluster Master and skip the steps below)</p>
<div id="__asciidoctor-preview-222__" class="paragraph">
<p><code><a href="https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz" class="bare">https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz</a></code></p>
</div>
<div id="__asciidoctor-preview-223__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Download Spark
cd /export/softwares
wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz</code></pre>
</div>
</div>
</li>
<li>
<p>Extract it and move it into <code>/export/servers</code></p>
<div id="__asciidoctor-preview-225__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Extract the Spark package
tar xzvf spark-2.2.0-bin-hadoop2.7.tgz

# Move the extracted directory (not the tarball) into place
mv spark-2.2.0-bin-hadoop2.7 /export/servers/spark</code></pre>
</div>
</div>
</li>
<li>
<p>Edit the configuration file <code>spark-env.sh</code> to set the runtime parameters</p>
<div id="__asciidoctor-preview-227__" class="ulist">
<ul>
<li>
<p>Enter the configuration directory and make a copy of the template file to edit</p>
<div id="__asciidoctor-preview-229__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh</code></pre>
</div>
</div>
</li>
<li>
<p>Append the following to the end of the configuration file</p>
<div id="__asciidoctor-preview-231__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Set JAVA_HOME
export JAVA_HOME=/export/servers/jdk1.8.0

# Set the Spark Master address
export SPARK_MASTER_HOST=node01
export SPARK_MASTER_PORT=7077</code></pre>
</div>
</div>
</li>
</ul>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2: Configure</dt>
<dd>
<div id="__asciidoctor-preview-234__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-235__" class="olist arabic">
<ol class="arabic">
<li>
<p>Edit the <code>slaves</code> configuration file to specify the locations of the worker nodes, so that <code>sbin/start-all.sh</code> can start every Worker in the cluster in one go</p>
<div id="__asciidoctor-preview-237__" class="ulist">
<ul>
<li>
<p>Enter the configuration directory and make a copy of the template file to edit</p>
<div id="__asciidoctor-preview-239__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark/conf
cp slaves.template slaves
vi slaves</code></pre>
</div>
</div>
</li>
<li>
<p>List the addresses of all worker nodes</p>
<div id="__asciidoctor-preview-241__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>node02
node03</code></pre>
</div>
</div>
</li>
</ul>
</div>
</li>
<li>
<p>Configure the <code>HistoryServer</code></p>
<div id="__asciidoctor-preview-243__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>By default, once a Spark program finishes, its web UI can no longer be viewed. The HistoryServer reads the log files and provides a service that lets us inspect a run even after the program has finished</p>
</li>
<li>
<p>Copy <code>spark-defaults.conf</code> so it can be edited</p>
<div id="__asciidoctor-preview-246__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark/conf
cp spark-defaults.conf.template spark-defaults.conf
vi spark-defaults.conf</code></pre>
</div>
</div>
</li>
<li>
<p>Append the following to the end of <code>spark-defaults.conf</code>; with this configuration, Spark writes its event logs to HDFS</p>
<div id="__asciidoctor-preview-248__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://node01:8020/spark_log
spark.eventLog.compress true</code></pre>
</div>
</div>
</li>
<li>
<p>Append the following to the <strong>end</strong> of <code>spark-env.sh</code> to set the HistoryServer startup options, so that on startup the HistoryServer reads the Spark logs written to HDFS</p>
<div id="__asciidoctor-preview-250__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Set the Spark History Server options
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=4000 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://node01:8020/spark_log"</code></pre>
</div>
</div>
</li>
<li>
<p>Create the log directory for Spark in HDFS</p>
<div id="__asciidoctor-preview-252__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>hdfs dfs -mkdir -p /spark_log</code></pre>
</div>
</div>
</li>
</ol>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 3: Distribute and run</dt>
<dd>
<div id="__asciidoctor-preview-255__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-256__" class="olist arabic">
<ol class="arabic">
<li>
<p>Distribute the Spark installation to the other machines in the cluster</p>
<div id="__asciidoctor-preview-258__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers
scp -r spark root@node02:$PWD
scp -r spark root@node03:$PWD</code></pre>
</div>
</div>
</li>
<li>
<p>Start the Spark Master and Slaves, as well as the HistoryServer</p>
<div id="__asciidoctor-preview-260__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark
sbin/start-all.sh
sbin/start-history-server.sh</code></pre>
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-261__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-262__" class="paragraph">
<p>Setting up a Spark cluster involves roughly the following steps</p>
</div>
<div id="__asciidoctor-preview-263__" class="olist arabic">
<ol class="arabic">
<li>
<p>Download and extract Spark</p>
</li>
<li>
<p>Configure the locations of all Spark worker nodes</p>
</li>
<li>
<p>Configure the Spark History Server, so the run history of Spark applications can be viewed at any time</p>
</li>
<li>
<p>Distribute the installation and start the Spark cluster</p>
</li>
</ol>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_2_3_spark_集群高可用搭建">2.3. Spark Cluster High-Availability Setup</h3>
<div id="__asciidoctor-preview-268__" class="exampleblock">
<div class="title">Objectives</div>
<div class="content">
<div id="__asciidoctor-preview-269__" class="olist arabic">
<ol class="arabic">
<li>
<p>Get a brief idea of how Zookeeper can make Spark Standalone highly available</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-271__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div id="__asciidoctor-preview-272__" class="paragraph">
<p>In a Spark Standalone cluster, when a Worker fails, the cluster recovers automatically by rescheduling the failed Tasks onto other Workers</p>
</div>
<div id="__asciidoctor-preview-273__" class="paragraph">
<p>The Master, however, is a single point of failure. To avoid this, Spark offers two high-availability mechanisms</p>
</div>
<div id="__asciidoctor-preview-274__" class="ulist">
<ul>
<li>
<p>Master failover backed by Zookeeper</p>
</li>
<li>
<p>Master failover backed by the file system</p>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-277__" class="paragraph">
<p>File-system-based failover is useful in so few scenarios that it is not covered here</p>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-278__" class="dlist">
<dl>
<dt class="hdlist1">Step 1: Stop the Spark cluster</dt>
<dd>
<div id="__asciidoctor-preview-281__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark
sbin/stop-all.sh</code></pre>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2: Edit the configuration file, adding Spark runtime options that point to Zookeeper</dt>
<dd>
<div id="__asciidoctor-preview-284__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-285__" class="olist arabic">
<ol class="arabic">
<li>
<p>Go to the directory containing <code>spark-env.sh</code> and open it in vi</p>
<div id="__asciidoctor-preview-287__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark/conf
vi spark-env.sh</code></pre>
</div>
</div>
</li>
<li>
<p>Edit <code>spark-env.sh</code>: add the Spark startup options and remove the SPARK_MASTER_HOST setting</p>
<div id="__asciidoctor-preview-289__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/db287fa523a39bd1a5e277c3ccd10a26.png" alt="db287fa523a39bd1a5e277c3ccd10a26">
</div>
</div>
<div id="__asciidoctor-preview-290__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Set JAVA_HOME
export JAVA_HOME=/export/servers/jdk1.8.0_141

# Spark Master address (disabled for high availability)
# export SPARK_MASTER_HOST=node01
export SPARK_MASTER_PORT=7077

# Set the Spark History Server options
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=4000 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://node01:8020/spark_log"

# Set the Spark runtime options for Zookeeper-based recovery
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark"</code></pre>
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 3: Distribute the configuration file to the whole cluster</dt>
<dd>
<div id="__asciidoctor-preview-293__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark/conf
scp spark-env.sh node02:$PWD
scp spark-env.sh node03:$PWD</code></pre>
</div>
</div>
</dd>
<dt class="hdlist1">Step 4: Start</dt>
<dd>
<div id="__asciidoctor-preview-296__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-297__" class="olist arabic">
<ol class="arabic">
<li>
<p>Start the whole cluster from <code>node01</code></p>
<div id="__asciidoctor-preview-299__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark
sbin/start-all.sh
sbin/start-history-server.sh</code></pre>
</div>
</div>
</li>
<li>
<p>Start an additional Master separately on <code>node02</code></p>
<div id="__asciidoctor-preview-301__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark
sbin/start-master.sh</code></pre>
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 5 Check the WebUI of the <code>node01</code> and <code>node02</code> Masters</dt>
<dd>
<div id="__asciidoctor-preview-304__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-305__" class="olist arabic">
<ol class="arabic">
<li>
<p>You will find that one Master is <code>ALIVE</code> (active) and the other is <code>STANDBY</code> (standby)</p>
<div id="__asciidoctor-preview-307__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/1e21fca197a3023f0d937178e746a745.png" alt="1e21fca197a3023f0d937178e746a745" width="800">
</div>
</div>
</li>
<li>
<p>If you shut down the active Master, the other one becomes <code>ALIVE</code>, but the failover can take around two minutes, so wait patiently</p>
<div id="__asciidoctor-preview-309__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code># Run the following commands on node01
cd /export/servers/spark/
sbin/stop-master.sh</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-310__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/4b227c658421d6f62a9ab0b1bcaa1988.png" alt="4b227c658421d6f62a9ab0b1bcaa1988" width="800">
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-311__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">Spark HA leader election</div>
<div id="__asciidoctor-preview-312__" class="paragraph">
<p>Spark HA leader election is performed through a ZooKeeper client library called Curator</p>
</div>
<div id="__asciidoctor-preview-313__" class="paragraph">
<p>ZooKeeper is a distributed, strongly consistent coordination service. One of its most basic guarantees is: if multiple nodes try to create the same ZNode at the same time, only one creation succeeds. This guarantee rests on ZooKeeper's ZAB protocol, which reaches consensus in a distributed environment.</p>
</div>
</td>
</tr>
</table>
</div>
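<div class="paragraph">
<p>The "only one creation succeeds" guarantee described above can be illustrated with a small local sketch. This is plain Java using <code>ConcurrentHashMap.putIfAbsent</code> as a stand-in for ZooKeeper's ZNode creation; it is not the real Curator or ZooKeeper API, and the class, method, and path names here are hypothetical.</p>
</div>

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LeaderElectionSketch {
    // Local stand-in for ZooKeeper's guarantee: when several nodes try to
    // create the same ZNode, exactly one creation succeeds.
    // putIfAbsent plays the role of "create the ZNode if it does not exist".
    private final ConcurrentMap<String, String> znodes = new ConcurrentHashMap<>();

    // Returns true only for the first caller that "creates" the election node.
    boolean tryBecomeLeader(String nodeName) {
        return znodes.putIfAbsent("/spark/leader_election", nodeName) == null;
    }

    String currentLeader() {
        return znodes.get("/spark/leader_election");
    }

    public static void main(String[] args) {
        LeaderElectionSketch zk = new LeaderElectionSketch();
        System.out.println(zk.tryBecomeLeader("node01")); // true  - becomes ALIVE
        System.out.println(zk.tryBecomeLeader("node02")); // false - stays STANDBY
        System.out.println(zk.currentLeader());           // node01
    }
}
```

<div class="paragraph">
<p>The real Curator implementation additionally watches the node and re-runs the election when the current leader's session expires, which is why failover takes a while.</p>
</div>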
<table id="__asciidoctor-preview-314__" class="tableblock frame-all grid-all stretch">
<caption class="title">Table 2. Appendix: Spark service ports</caption>
<colgroup>
<col style="width: 50%;">
<col style="width: 50%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Service</th>
<th class="tableblock halign-left valign-top">Port</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">Master WebUI</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">node01:8080</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">Worker WebUI</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">node01:8081</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">History Server</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">node01:4000</p></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_2_4_第一个应用的运行">2.4. Running the First Application</h3>
<div id="__asciidoctor-preview-315__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-316__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the execution flow of a Spark application by running an example application</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-318__" class="dlist">
<dl>
<dt class="hdlist1">Procedure</dt>
<dd>
<div id="__asciidoctor-preview-321__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-322__" class="dlist">
<dl>
<dt class="hdlist1">Step 1 Enter the Spark installation directory</dt>
<dd>
<div id="__asciidoctor-preview-325__" class="literalblock">
<div class="content">
<pre>cd /export/servers/spark/</pre>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2 Run the Spark example job</dt>
<dd>
<div id="__asciidoctor-preview-328__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://node01:7077,node02:7077,node03:7077 \
--executor-memory 1G \
--total-executor-cores 2 \
/export/servers/spark/examples/jars/spark-examples_2.11-2.2.3.jar \
100</code></pre>
</div>
</div>
</dd>
<dt class="hdlist1">Step 3 Check the result</dt>
<dd>
<div id="__asciidoctor-preview-331__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>Pi is roughly 3.141550671141551</code></pre>
</div>
</div>
</dd>
</dl>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-332__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div id="__asciidoctor-preview-333__" class="paragraph">
<p>The program just executed is one of Spark's bundled examples: a job written with Spark that estimates Pi using the Monte Carlo method</p>
</div>
<div id="__asciidoctor-preview-334__" class="dlist">
<dl>
<dt class="hdlist1">Overview of the Monte Carlo method</dt>
<dd>
<div id="__asciidoctor-preview-337__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-338__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/c0c058aa864df043d3618b18104dd642.png" alt="c0c058aa864df043d3618b18104dd642" width="650">
</div>
</div>
<div id="__asciidoctor-preview-339__" class="olist arabic">
<ol class="arabic">
<li>
<p>Inscribe a circle inside a square</p>
<div id="__asciidoctor-preview-341__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/b2685a183453b8e5464885b26ae42798.png" alt="b2685a183453b8e5464885b26ae42798">
</div>
</div>
</li>
<li>
<p>Throw n points uniformly at random into the square; the probability of a point landing inside the inscribed circle satisfies the ratio shown below</p>
<div id="__asciidoctor-preview-343__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/6cd3660c8719b01815fba25a96ec1a87.png" alt="6cd3660c8719b01815fba25a96ec1a87">
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-344__" class="paragraph">
<p>That is the rough theory behind the Monte Carlo method; using it, Pi can be estimated by iteratively throwing random points</p>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-345__" class="dlist">
<dl>
<dt class="hdlist1">Computation steps</dt>
<dd>
<div id="__asciidoctor-preview-348__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-349__" class="olist arabic">
<ol class="arabic">
<li>
<p>Keep generating random points, and judge whether each one falls inside the circle by checking whether its distance from the center exceeds the radius</p>
</li>
<li>
<p>Use
<span class="image"><img src="images/Spark01-cfb9a.png" alt="Spark01 cfb9a" width="22"></span>
to compute Pi</p>
</li>
<li>
<p>Iterate repeatedly</p>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
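<div class="paragraph">
<p>The computation steps above can be sketched in plain Java as a local, single-threaded estimate (the Spark example distributes the same logic across the cluster; the class and method names here are our own):</p>
</div>

```java
import java.util.Random;

public class MonteCarloPi {
    // Estimate Pi by throwing random points into the unit square and
    // counting how many land inside the inscribed quarter circle.
    static double estimate(long points, long seed) {
        Random rnd = new Random(seed);
        long inside = 0;
        for (long i = 0; i < points; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            // A point is inside the circle if its distance from the
            // center does not exceed the radius (here, radius = 1).
            if (x * x + y * y <= 1.0) inside++;
        }
        // inside / points approximates (Pi / 4), so Pi ~= 4 * ratio.
        return 4.0 * inside / points;
    }

    public static void main(String[] args) {
        System.out.println(MonteCarloPi.estimate(1_000_000, 42L));
    }
}
```

<div class="paragraph">
<p>With a million points, the estimate typically lands within a few hundredths of Pi, which matches the rough precision of the example job's output above.</p>
</div>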
<div id="__asciidoctor-preview-353__" class="qlist qanda">
<ol>
<li>
<p><em>Question 1: iterative computation</em></p>
<div id="__asciidoctor-preview-356__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-357__" class="paragraph">
<p>How would the above program be written with MapReduce? Would it involve a lot of writing to HDFS and then reading the data back? Would that hurt performance?</p>
</div>
<div id="__asciidoctor-preview-358__" class="paragraph">
<p>Why is Spark good at this kind of workload? Think about how this iterative-computation problem could be solved</p>
</div>
</div>
</div>
</li>
<li>
<p><em>Question 2: data scale</em></p>
<div id="__asciidoctor-preview-361__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-362__" class="paragraph">
<p>The computation just now only ran 100 iterations; if it iterated 10 billion times, which would be more suitable, running on a single machine or on a cluster?</p>
</div>
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</div>
<div class="sect1">
<h2 id="_3_spark_入门">3. Getting Started with Spark</h2>
<div class="sectionbody">
<div id="__asciidoctor-preview-363__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-364__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand Spark applications through small Spark examples</p>
</li>
<li>
<p>Understand the two common ways of writing Spark programs</p>
<div id="__asciidoctor-preview-367__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>spark-shell</p>
</li>
<li>
<p>spark-submit</p>
</li>
</ol>
</div>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-370__" class="dlist">
<dl>
<dt class="hdlist1">Spark officially provides two ways of writing and running code, both important, as follows</dt>
<dd>
<div id="__asciidoctor-preview-373__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-374__" class="ulist">
<ul>
<li>
<p><code>spark-shell</code><br>
The Spark shell is an interactive Scala-based interpreter provided by Spark. Like the interpreter that ships with Scala, the Spark shell lets you write and execute code directly in a shell<br>
This approach matters because typical data-analysis work is exploratory rather than one-shot: explore with the Spark shell first, and once the code stabilizes, submit it as a standalone application. That is a fairly common workflow</p>
</li>
<li>
<p><code>spark-submit</code><br>
spark-submit is a command used to submit applications written in Scala on top of the Spark framework; this submission method is commonly used to run jobs on a cluster</p>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div class="sect2">
<h3 id="_3_1_spark_shell_的方式编写_wordcount">3.1. Writing WordCount in the Spark Shell</h3>
<div id="__asciidoctor-preview-377__" class="exampleblock">
<div class="title">Overview</div>
<div class="content">
<div id="__asciidoctor-preview-378__" class="paragraph">
<p>In the early stages, all the work can be done in the Spark shell: it speeds up prototyping and iteration, letting you see the result of an idea quickly. But as the project grows, this approach makes the code hard to maintain, so you can write a standalone application instead. Typically, use the Spark shell during exploration, then write the final code as a standalone application, package it with Maven, and run it in production</p>
</div>
<div id="__asciidoctor-preview-379__" class="paragraph">
<p>Next, write a WordCount using the Spark shell</p>
</div>
</div>
</div>
<div id="__asciidoctor-preview-380__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">About the Spark shell</div>
<div id="__asciidoctor-preview-381__" class="ulist">
<ul>
<li>
<p>Starting the Spark shell<br>
After entering the Spark installation directory, run <code>spark-shell --master &lt;master-url&gt;</code> to submit Spark tasks</p>
</li>
<li>
<p>The Spark shell works by compiling each line of Scala code into classes that are ultimately handed to Spark for execution</p>
</li>
</ul>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-384__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">Setting the Master address</div>
<div id="__asciidoctor-preview-385__" class="paragraph">
<p>The Master address can be set in the following ways</p>
</div>
<table id="__asciidoctor-preview-386__" class="tableblock frame-all grid-all stretch">
<caption class="title">Table 3. master</caption>
<colgroup>
<col style="width: 50%;">
<col style="width: 50%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Address</th>
<th class="tableblock halign-left valign-top">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>local[N]</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Run locally using N worker threads</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>spark://host:port</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Run on Spark standalone; specifies the address of the Spark cluster's Master, default port 7077</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>mesos://host:port</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Run on Apache Mesos; specifies the Mesos address</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>yarn</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Run on YARN; the YARN address is specified via the environment variable <code>HADOOP_CONF_DIR</code></p></td>
</tr>
</tbody>
</table>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-387__" class="dlist">
<dl>
<dt class="hdlist1">Step 1 Prepare the file</dt>
<dd>
<div id="__asciidoctor-preview-390__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-391__" class="paragraph">
<p>Create the file <code>/export/data/wordcount.txt</code> on node01</p>
</div>
<div id="__asciidoctor-preview-392__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>hadoop spark flume
spark hadoop
flume hadoop</code></pre>
</div>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2 Start the Spark shell</dt>
<dd>
<div id="__asciidoctor-preview-395__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-396__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/servers/spark
bin/spark-shell --master local[2]</code></pre>
</div>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 3 Run the following code</dt>
<dd>
<div id="__asciidoctor-preview-399__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-400__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-java hljs" data-lang="java">scala&gt; val sourceRdd = sc.textFile("file:///export/data/wordcount.txt")
sourceRdd: org.apache.spark.rdd.RDD[String] = file:///export/data/wordcount.txt MapPartitionsRDD[1] at textFile at &lt;console&gt;:24

scala&gt; val flattenCountRdd = sourceRdd.flatMap(_.split(" ")).map((_, 1))
flattenCountRdd: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at &lt;console&gt;:26

scala&gt; val aggCountRdd = flattenCountRdd.reduceByKey(_ + _)
aggCountRdd: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at &lt;console&gt;:28

scala&gt; val result = aggCountRdd.collect
result: Array[(String, Int)] = Array((spark,2), (hadoop,3), (flume,2))</code></pre>
</div>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-401__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">sc</div>
<div id="__asciidoctor-preview-402__" class="paragraph">
<p>In the code above, the variable <code>sc</code> refers to the SparkContext, the context and entry point of a Spark program</p>
</div>
<div id="__asciidoctor-preview-403__" class="paragraph">
<p>Normally we would have to create it ourselves, but when using the Spark shell, the shell creates it for us and exposes it through the variable <code>sc</code></p>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-404__" class="dlist">
<dl>
<dt class="hdlist1">Execution flow</dt>
<dd>
<div id="__asciidoctor-preview-407__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-408__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/60a2714b057c19957908cfda93b8c321.png" alt="60a2714b057c19957908cfda93b8c321">
</div>
</div>
<div id="__asciidoctor-preview-409__" class="olist arabic">
<ol class="arabic">
<li>
<p><code>flatMap(_.split(" "))</code> splits each line into an array of words and flattens them into individual records</p>
</li>
<li>
<p><code>map((_, 1))</code> converts each word into a tuple</p>
</li>
<li>
<p><code>reduceByKey(_ + _)</code> counts the occurrences of each key</p>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
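<div class="paragraph">
<p>The three-step flow above can be mirrored with plain Java streams as a local sketch (not Spark code; the class and method names are our own). <code>flatMap</code> corresponds directly, while <code>groupingBy</code> with <code>counting</code> plays the combined role of <code>map((_, 1))</code> plus <code>reduceByKey(_ + _)</code>:</p>
</div>

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LocalWordCount {
    // Mirror the RDD pipeline with plain Java streams:
    //   flatMap(_.split(" "))  -> split each line into words and flatten
    //   map((_, 1)) + reduceByKey(_ + _) -> group by word and count
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split(" ")))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "hadoop spark flume", "spark hadoop", "flume hadoop");
        System.out.println(count(lines)); // e.g. {spark=2, flume=2, hadoop=3}
    }
}
```

<div class="paragraph">
<p>Running it on the same three input lines reproduces the shell result above: spark 2, hadoop 3, flume 2. The difference is that Spark partitions the data and runs these stages across the cluster.</p>
</div>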
<div id="__asciidoctor-preview-413__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-414__" class="olist arabic">
<ol class="arabic">
<li>
<p>The Spark shell lets you validate ideas quickly</p>
</li>
<li>
<p>Code written against Spark looks very much like Scala's functional-style calls</p>
</li>
</ol>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_3_2_读取_hdfs_上的文件">3.2. Reading Files from HDFS</h3>
<div id="__asciidoctor-preview-417__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-418__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the two ways Spark can access HDFS</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-420__" class="dlist">
<dl>
<dt class="hdlist1">Step 1 Upload the file to HDFS</dt>
<dd>
<div id="__asciidoctor-preview-423__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>cd /export/data
hdfs dfs -mkdir /dataset
hdfs dfs -put wordcount.txt /dataset/</code></pre>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2 Access HDFS from the Spark shell</dt>
<dd>
<div id="__asciidoctor-preview-426__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-java hljs" data-lang="java">val sourceRdd = sc.textFile("hdfs://node01:8020/dataset/wordcount.txt")
val flattenCountRdd = sourceRdd.flatMap(_.split(" ")).map((_, 1))
val aggCountRdd = flattenCountRdd.reduceByKey(_ + _)
val result = aggCountRdd.collect</code></pre>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-427__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">How can Spark access HDFS?</div>
<div id="__asciidoctor-preview-428__" class="paragraph">
<p>You can access HDFS directly by specifying the NameNode address, as in the code above: <code>sc.textFile("hdfs://node01:8020/dataset/wordcount.txt")</code></p>
</div>
<div id="__asciidoctor-preview-429__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/155c0a820881a7db91ea8d7cc53555d9.png" alt="155c0a820881a7db91ea8d7cc53555d9">
</div>
</div>
<div id="__asciidoctor-preview-430__" class="dlist">
<dl>
<dt class="hdlist1">Alternatively, configure the Hadoop path in Spark, then access HDFS directly by path</dt>
<dd>
<div id="__asciidoctor-preview-433__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-434__" class="dlist">
<dl>
<dt class="hdlist1">1. Add the Hadoop configuration path in <code>spark-env.sh</code></dt>
<dd>
<p><code>export HADOOP_CONF_DIR="/etc/hadoop/conf"</code></p>
</dd>
<dt class="hdlist1">2. After configuring, you can access files directly in the form <code>hdfs:///path</code></dt>
<dd>
<div id="__asciidoctor-preview-439__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/dd904b1653a52fe15d0bb7808d98b9af.png" alt="dd904b1653a52fe15d0bb7808d98b9af">
</div>
</div>
</dd>
<dt class="hdlist1">3. After configuring, you can also access files with a plain path</dt>
<dd>
<div id="__asciidoctor-preview-442__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/3eabed898ed57db55370c25fad555072.png" alt="3eabed898ed57db55370c25fad555072">
</div>
</div>
</dd>
</dl>
</div>
</div>
</div>
</dd>
</dl>
</div>
</td>
</tr>
</table>
</div>
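<div class="paragraph">
<p>The difference between the two path forms comes down to whether the URI carries the NameNode authority. Plain URI parsing (standard Java, no Spark involved) shows the structure; the claim in the comments that Spark fills in the missing host and port from <code>HADOOP_CONF_DIR</code> restates the note above rather than anything this sketch proves:</p>
</div>

```java
import java.net.URI;

public class HdfsUriDemo {
    public static void main(String[] args) {
        // Fully qualified form: the NameNode address is embedded in the URI.
        URI full = URI.create("hdfs://node01:8020/dataset/wordcount.txt");
        System.out.println(full.getScheme()); // hdfs
        System.out.println(full.getHost());   // node01
        System.out.println(full.getPort());   // 8020
        System.out.println(full.getPath());   // /dataset/wordcount.txt

        // Short form: the authority is empty, so host and port are absent;
        // with HADOOP_CONF_DIR set, Spark resolves them from the Hadoop config.
        URI bare = URI.create("hdfs:///dataset/wordcount.txt");
        System.out.println(bare.getHost());   // null
    }
}
```
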
</div>
<div class="sect2">
<h3 id="_3_4_编写独立应用提交_spark_任务">3.4. Writing a Standalone Application to Submit Spark Jobs</h3>
<div id="__asciidoctor-preview-443__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-444__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand how to write a standalone Spark application</p>
</li>
<li>
<p>Understand the code flow of WordCount</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-447__" class="dlist">
<dl>
<dt class="hdlist1">Step 1 Create the project</dt>
<dd>
<div id="__asciidoctor-preview-450__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-451__" class="olist arabic">
<ol class="arabic">
<li>
<p>Create an IDEA project</p>
<div id="__asciidoctor-preview-453__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p><span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/ee1391b4d7e1214b5b4155b6806a6794.png" alt="ee1391b4d7e1214b5b4155b6806a6794" width="150"></span> &#8594; <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/24f103c1662f69cbb0af4bfc8a54b071.png" alt="24f103c1662f69cbb0af4bfc8a54b071" width="160"></span> &#8594; <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/9affa530ce6f4de7da24efa30c5b4227.png" alt="9affa530ce6f4de7da24efa30c5b4227" width="70"></span></p>
</li>
<li>
<p><span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/4a8dac7fcd60c730512028265f27699f.png" alt="4a8dac7fcd60c730512028265f27699f" width="130"></span> &#8594; <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/17fd56ce77043ded7754dc08b72a1f63.png" alt="17fd56ce77043ded7754dc08b72a1f63" width="130"></span> &#8594; <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/412959e49ee5078f2e6d609d14e6307f.png" alt="412959e49ee5078f2e6d609d14e6307f" width="70"></span></p>
</li>
</ol>
</div>
</li>
<li>
<p>Add Scala support</p>
<div id="__asciidoctor-preview-457__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>Right-click the project directory <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/410a1fe6ae14ce614ee6e50f4e263e51.png" alt="410a1fe6ae14ce614ee6e50f4e263e51" width="150"></span></p>
</li>
<li>
<p>Choose Add Framework Support <span class="image"><img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/c0c839c6f01db04cc112bfd2af260961.png" alt="c0c839c6f01db04cc112bfd2af260961" width="200"></span></p>
</li>
<li>
<p>Select Scala to add framework support</p>
</li>
</ol>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 2 Write the Maven configuration file <code>pom.xml</code></dt>
<dd>
<div id="__asciidoctor-preview-463__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-464__" class="olist arabic">
<ol class="arabic">
<li>
<p>Add the file <code>pom.xml</code> under the project root</p>
</li>
<li>
<p>Add the following content</p>
<div id="__asciidoctor-preview-467__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-xml hljs" data-lang="xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"&gt;
    &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;

    &lt;groupId&gt;cn.itcast&lt;/groupId&gt;
    &lt;artifactId&gt;spark&lt;/artifactId&gt;
    &lt;version&gt;0.1.0&lt;/version&gt;

    &lt;properties&gt;
        &lt;scala.version&gt;2.11.8&lt;/scala.version&gt;
        &lt;spark.version&gt;2.2.0&lt;/spark.version&gt;
        &lt;slf4j.version&gt;1.7.16&lt;/slf4j.version&gt;
        &lt;log4j.version&gt;1.2.17&lt;/log4j.version&gt;
    &lt;/properties&gt;

    &lt;dependencies&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.scala-lang&lt;/groupId&gt;
            &lt;artifactId&gt;scala-library&lt;/artifactId&gt;
            &lt;version&gt;${scala.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
            &lt;artifactId&gt;spark-core_2.11&lt;/artifactId&gt;
            &lt;version&gt;${spark.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.apache.hadoop&lt;/groupId&gt;
            &lt;artifactId&gt;hadoop-client&lt;/artifactId&gt;
            &lt;version&gt;2.6.0&lt;/version&gt;
        &lt;/dependency&gt;

        &lt;dependency&gt;
            &lt;groupId&gt;org.slf4j&lt;/groupId&gt;
            &lt;artifactId&gt;jcl-over-slf4j&lt;/artifactId&gt;
            &lt;version&gt;${slf4j.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.slf4j&lt;/groupId&gt;
            &lt;artifactId&gt;slf4j-api&lt;/artifactId&gt;
            &lt;version&gt;${slf4j.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.slf4j&lt;/groupId&gt;
            &lt;artifactId&gt;slf4j-log4j12&lt;/artifactId&gt;
            &lt;version&gt;${slf4j.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;log4j&lt;/groupId&gt;
            &lt;artifactId&gt;log4j&lt;/artifactId&gt;
            &lt;version&gt;${log4j.version}&lt;/version&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;junit&lt;/groupId&gt;
            &lt;artifactId&gt;junit&lt;/artifactId&gt;
            &lt;version&gt;4.10&lt;/version&gt;
            &lt;scope&gt;provided&lt;/scope&gt;
        &lt;/dependency&gt;
    &lt;/dependencies&gt;

    &lt;build&gt;
        &lt;sourceDirectory&gt;src/main/scala&lt;/sourceDirectory&gt;
        &lt;testSourceDirectory&gt;src/test/scala&lt;/testSourceDirectory&gt;
        &lt;plugins&gt;

            &lt;plugin&gt;
                &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
                &lt;artifactId&gt;maven-compiler-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.0&lt;/version&gt;
                &lt;configuration&gt;
                    &lt;source&gt;1.8&lt;/source&gt;
                    &lt;target&gt;1.8&lt;/target&gt;
                    &lt;encoding&gt;UTF-8&lt;/encoding&gt;
                &lt;/configuration&gt;
            &lt;/plugin&gt;

            &lt;plugin&gt;
                &lt;groupId&gt;net.alchim31.maven&lt;/groupId&gt;
                &lt;artifactId&gt;scala-maven-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.2.0&lt;/version&gt;
                &lt;executions&gt;
                    &lt;execution&gt;
                        &lt;goals&gt;
                            &lt;goal&gt;compile&lt;/goal&gt;
                            &lt;goal&gt;testCompile&lt;/goal&gt;
                        &lt;/goals&gt;
                        &lt;configuration&gt;
                            &lt;args&gt;
                                &lt;arg&gt;-dependencyfile&lt;/arg&gt;
                                &lt;arg&gt;${project.build.directory}/.scala_dependencies&lt;/arg&gt;
                            &lt;/args&gt;
                        &lt;/configuration&gt;
                    &lt;/execution&gt;
                &lt;/executions&gt;
            &lt;/plugin&gt;

            &lt;plugin&gt;
                &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
                &lt;artifactId&gt;maven-shade-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.1.1&lt;/version&gt;
                &lt;executions&gt;
                    &lt;execution&gt;
                        &lt;phase&gt;package&lt;/phase&gt;
                        &lt;goals&gt;
                            &lt;goal&gt;shade&lt;/goal&gt;
                        &lt;/goals&gt;
                        &lt;configuration&gt;
                            &lt;filters&gt;
                                &lt;filter&gt;
                                    &lt;artifact&gt;*:*&lt;/artifact&gt;
                                    &lt;excludes&gt;
                                        &lt;exclude&gt;META-INF/*.SF&lt;/exclude&gt;
                                        &lt;exclude&gt;META-INF/*.DSA&lt;/exclude&gt;
                                        &lt;exclude&gt;META-INF/*.RSA&lt;/exclude&gt;
                                    &lt;/excludes&gt;
                                &lt;/filter&gt;
                            &lt;/filters&gt;
                            &lt;transformers&gt;
                                &lt;transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"&gt;
                                    &lt;mainClass&gt;&lt;/mainClass&gt;
                                &lt;/transformer&gt;
                            &lt;/transformers&gt;
                        &lt;/configuration&gt;
                    &lt;/execution&gt;
                &lt;/executions&gt;
            &lt;/plugin&gt;
        &lt;/plugins&gt;
    &lt;/build&gt;
&lt;/project&gt;</code></pre>
</div>
</div>
</li>
<li>
<p>Because <code>pom.xml</code> specifies the Scala source directories, create the directories <code>src/main/scala</code> and <code>src/test/scala</code></p>
</li>
<li>
<p>Create a Scala object <code>WordCounts</code></p>
</li>
</ol>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 3 Write the code</dt>
<dd>
<div id="__asciidoctor-preview-472__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-473__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-java hljs" data-lang="java">import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCounts {

  def main(args: Array[String]): Unit = {
    // 1. Create the Spark Context
    val conf = new SparkConf().setMaster("local[2]").setAppName("word_count")
    val sc: SparkContext = new SparkContext(conf)

    // 2. Read the file and count the words
    val source: RDD[String] = sc.textFile("hdfs://node01:8020/dataset/wordcount.txt", 2)
    val words: RDD[String] = source.flatMap { line =&gt; line.split(" ") }
    val wordsTuple: RDD[(String, Int)] = words.map { word =&gt; (word, 1) }
    val wordsCount: RDD[(String, Int)] = wordsTuple.reduceByKey { (x, y) =&gt; x + y }

    // 3. Print the result (collect returns an Array, so print each element)
    wordsCount.collect.foreach(println)
  }
}</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-474__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
Unlike in the Spark shell, a standalone application must create the SparkContext manually
</td>
</tr>
</table>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Step 4 Run</dt>
<dd>
<div id="__asciidoctor-preview-477__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-478__" class="paragraph">
<p>There are roughly two ways to run a standalone Spark application: debug it directly in IDEA, or submit it to a Spark cluster. Spark supports several cluster managers, and each has its own way of running jobs</p>
</div>
<div id="__asciidoctor-preview-479__" class="dlist">
<dl>
<dt class="hdlist1">Running the Spark program directly in IDEA</dt>
<dd>
<div id="__asciidoctor-preview-482__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-483__" class="olist arabic">
<ol class="arabic">
<li>
<p>Create the folder and file under the project root</p>
<div id="__asciidoctor-preview-485__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/f6ccfd3d52928baa0478100832a723b0.png" alt="f6ccfd3d52928baa0478100832a723b0" width="800">
</div>
</div>
</li>
<li>
<p>Change the path of the input file to <code>dataset/wordcount.txt</code></p>
<div id="__asciidoctor-preview-487__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/ad2eef5059c8fb5e819d9287c6c9cb25.png" alt="ad2eef5059c8fb5e819d9287c6c9cb25" width="800">
</div>
</div>
</li>
<li>
<p>Right-click and run the main method</p>
<div id="__asciidoctor-preview-489__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/37b5dcc51939c056608275f89a3d0fc1.png" alt="37b5dcc51939c056608275f89a3d0fc1" width="800">
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-490__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">The spark-submit command</div>
<div id="__asciidoctor-preview-491__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>spark-submit [options] &lt;app jar&gt; &lt;app options&gt;</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-492__" class="ulist">
<ul>
<li>
<p><code>app jar</code> the application Jar file</p>
</li>
<li>
<p><code>app options</code> arguments passed to the application's main method</p>
</li>
<li>
<p><code>options</code> submission options, which can include the following</p>
</li>
</ul>
</div>
<table id="__asciidoctor-preview-496__" class="tableblock frame-all grid-all stretch">
<caption class="title">Table 4. Available options</caption>
<colgroup>
<col style="width: 50%;">
<col style="width: 50%;">
</colgroup>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">Parameter</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Description</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--master &lt;url&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Same as the Spark shell's Master; can be a spark, yarn, mesos, or kubernetes URL</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--deploy-mode &lt;client or cluster&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Where the Driver runs: Client or Cluster, i.e. on the local machine or on the cluster (a Worker)</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--class &lt;class full name&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The class in the Jar that serves as the program entry point</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--jars &lt;dependencies path&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Location of dependency Jars</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--driver-memory &lt;memory size&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Memory for the Driver process, default 512M</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock"><code>--executor-memory &lt;memory size&gt;</code></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Memory per Executor, default 1G</p></td>
</tr>
</tbody>
</table>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-497__" class="dlist">
<dl>
<dt class="hdlist1">Submitting to a Spark Standalone cluster</dt>
<dd>
<div id="__asciidoctor-preview-500__" class="exampleblock">
<div class="content">
<div id="__asciidoctor-preview-501__" class="olist arabic">
<ol class="arabic">
<li>
<p>Package the application with Maven in IDEA</p>
<div id="__asciidoctor-preview-503__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/adf0a41da23b6c209edd4dc69d8688e6.png" alt="adf0a41da23b6c209edd4dc69d8688e6" width="200">
</div>
</div>
</li>
<li>
<p>Copy the packaged Jar and upload it to node01</p>
<div id="__asciidoctor-preview-505__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/103e4db41405dcf7ba740b4653b5c216.png" alt="103e4db41405dcf7ba740b4653b5c216" width="200">
</div>
</div>
</li>
<li>
<p>On node01, run the following command from the directory containing the Jar</p>
<div id="__asciidoctor-preview-507__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>spark-submit --master spark://node01:7077 \
--class cn.itcast.spark.WordCounts \
original-spark-0.1.0.jar</code></pre>
</div>
</div>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-508__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
<div class="title">How to run the spark-submit command from any directory?</div>
<div id="__asciidoctor-preview-509__" class="olist arabic">
<ol class="arabic">
<li>
<p>Add the following to <code>/etc/profile</code></p>
<div id="__asciidoctor-preview-511__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>export SPARK_BIN=/export/servers/spark/bin
export PATH=$PATH:$SPARK_BIN</code></pre>
</div>
</div>
</li>
<li>
<p>Source <code>/etc/profile</code> so the configuration takes effect</p>
<div id="__asciidoctor-preview-513__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code>source /etc/profile</code></pre>
</div>
</div>
</li>
</ol>
</div>
</td>
</tr>
</table>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-514__" class="exampleblock">
<div class="title">Summary: three different ways to run</div>
<div class="content">
<div id="__asciidoctor-preview-515__" class="dlist">
<dl>
<dt class="hdlist1">Spark shell</dt>
<dd>
<div id="__asciidoctor-preview-518__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-519__" class="ulist">
<ul>
<li>
<p>Purpose</p>
<div id="__asciidoctor-preview-521__" class="ulist">
<ul>
<li>
<p>Generally used in the exploration phase, to quickly discover patterns in the data through Spark shell</p>
</li>
<li>
<p>Once exploration is finished and the code is settled, the program goes to production as a standalone application</p>
</li>
</ul>
</div>
</li>
<li>
<p>Capabilities</p>
<div id="__asciidoctor-preview-525__" class="ulist">
<ul>
<li>
<p>Spark shell can run either against a cluster or in local thread mode</p>
</li>
<li>
<p>Spark shell is an interactive environment with the SparkContext and SparkSession objects already created and ready to use</p>
</li>
<li>
<p>Spark shell usually runs on a server in the cluster with the Spark client installed, so it can access HDFS freely</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Local run</dt>
<dd>
<div id="__asciidoctor-preview-531__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-532__" class="ulist">
<ul>
<li>
<p>Purpose</p>
<div id="__asciidoctor-preview-534__" class="ulist">
<ul>
<li>
<p>When writing a standalone application, submitting to the cluster for every run is inconvenient, and programs often need debugging, so running directly in IDEA is handier: no packaging or uploading required</p>
</li>
</ul>
</div>
</li>
<li>
<p>Capabilities</p>
<div id="__asciidoctor-preview-537__" class="ulist">
<ul>
<li>
<p>Local runs usually happen on the developer's machine rather than in the cluster, so cluster services such as HDFS are hard to use directly and require some local configuration; this mode is used relatively rarely</p>
</li>
<li>
<p>A SparkContext must be created manually</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Cluster run</dt>
<dd>
<div id="__asciidoctor-preview-542__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-543__" class="ulist">
<ul>
<li>
<p>Purpose</p>
<div id="__asciidoctor-preview-545__" class="ulist">
<ul>
<li>
<p>Most common in production: once the standalone application is written, it is packaged and uploaded to the cluster and run with <code>spark-submit</code>, making full use of cluster resources</p>
</li>
</ul>
</div>
</li>
<li>
<p>Capabilities</p>
<div id="__asciidoctor-preview-548__" class="ulist">
<ul>
<li>
<p>A program run in the cluster via <code>spark-submit</code> can likewise choose between local thread mode and cluster mode</p>
</li>
<li>
<p>Running in the cluster is fully featured: accessing HDFS and Hive is straightforward</p>
</li>
<li>
<p>A SparkContext must be created manually</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="sect1">
<h2 id="_4_rdd_入门">4. Introduction to RDDs</h2>
<div class="sectionbody">
<div id="__asciidoctor-preview-552__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-553__" class="paragraph">
<p>The WordCount example above demonstrated Spark's general programming model and how programs run; next we expand on Spark's programming model in more detail</p>
</div>
<div id="__asciidoctor-preview-554__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand the WordCount code</p>
<div id="__asciidoctor-preview-556__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>From the execution perspective: how data flows between the steps</p>
</li>
<li>
<p>From the conceptual perspective: how the operators cooperate</p>
</li>
</ol>
</div>
</li>
<li>
<p>Get a rough understanding of RDD, the programming model of Spark</p>
</li>
<li>
<p>Understand the individual RDD operators in Spark</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-561__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">object WordCounts {

  def main(args: Array[String]): Unit = {
    // 1. Create the SparkContext (an app name is required, or SparkContext creation fails)
    val conf = new SparkConf().setMaster("local[2]").setAppName("word_count")
    val sc: SparkContext = new SparkContext(conf)

    // 2. Read the file and compute word frequencies
    val source: RDD[String] = sc.textFile("hdfs://node01:8020/dataset/wordcount.txt", 2)
    val words: RDD[String] = source.flatMap { line =&gt; line.split(" ") }
    val wordsTuple: RDD[(String, Int)] = words.map { word =&gt; (word, 1) }
    val wordsCount: RDD[(String, Int)] = wordsTuple.reduceByKey { (x, y) =&gt; x + y }

    // 3. Inspect the result (collect returns an Array, so print it element by element)
    wordsCount.collect().foreach(println)
  }
}</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-562__" class="paragraph">
<p>The rough idea of this WordCount code is as follows:</p>
</div>
<div id="__asciidoctor-preview-563__" class="olist arabic">
<ol class="arabic">
<li>
<p>Use <code>sc.textFile()</code> to read the file from HDFS and produce an <code>RDD</code></p>
</li>
<li>
<p>Use the <code>flatMap</code> operator to split each line into words, turning every word into its own record</p>
</li>
<li>
<p>Use the <code>map</code> operator to convert each word into a tuple of the form <code>(word, 1)</code></p>
</li>
<li>
<p>Use <code>reduceByKey</code> to tally the frequency of each word</p>
</li>
</ol>
</div>
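<div class="paragraph">
<p>The four steps above can be sketched with plain Scala collections, no Spark required. This is only an illustration: <code>lines</code> is a hypothetical stand-in for the contents of <code>wordcount.txt</code>, and <code>groupBy</code> plus a local sum approximates what <code>reduceByKey</code> does across partitions:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">// Local sketch of the pipeline on plain collections (illustration only)
val lines = Seq("hello spark", "hello scala")
val words = lines.flatMap(line =&gt; line.split(" "))           // step 2: one line -&gt; many words
val wordsTuple = words.map(word =&gt; (word, 1))                // step 3: word -&gt; (word, 1)
val wordsCount = wordsTuple
  .groupBy { case (word, _) =&gt; word }                        // step 4: group by key ...
  .map { case (word, pairs) =&gt; (word, pairs.map(_._2).sum) } // ... then sum the 1s
// wordsCount == Map("hello" -&gt; 2, "spark" -&gt; 1, "scala" -&gt; 1)</code></pre>
</div>
</div>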
<div id="__asciidoctor-preview-568__" class="paragraph">
<p>The operators used are the following:</p>
</div>
<div id="__asciidoctor-preview-569__" class="ulist">
<ul>
<li>
<p><code>flatMap</code> is one-to-many</p>
</li>
<li>
<p><code>map</code> is one-to-one</p>
</li>
<li>
<p><code>reduceByKey</code> aggregates by key, similar to the shuffle in MapReduce</p>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-573__" class="paragraph">
<p>Expressed graphically, it looks like this:</p>
</div>
<div id="__asciidoctor-preview-574__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/2d5bc5474ac87123de26d9c5ca402dd4.png" alt="2d5bc5474ac87123de26d9c5ca402dd4">
</div>
</div>
<div id="__asciidoctor-preview-575__" class="exampleblock">
<div class="title">Summary, and the new questions it raises</div>
<div class="content">
<div id="__asciidoctor-preview-576__" class="paragraph">
<p>The above covered roughly two things:</p>
</div>
<div id="__asciidoctor-preview-577__" class="olist arabic">
<ol class="arabic">
<li>
<p>The code flow</p>
</li>
<li>
<p>The operators</p>
</li>
</ol>
</div>
<div id="__asciidoctor-preview-580__" class="paragraph">
<p>A few things in the code have not been explained yet:</p>
</div>
<div id="__asciidoctor-preview-581__" class="olist arabic">
<ol class="arabic">
<li>
<p>The variables <code>source</code>, <code>words</code>, and <code>wordsTuple</code> all have type <code>RDD[Type]</code>; what is an <code>RDD</code>?</p>
</li>
<li>
<p>Are there more operators?</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-584__" class="sidebarblock">
<div class="content">
<div class="title">What is an RDD</div>
<div id="__asciidoctor-preview-585__" class="imageblock text-center">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/fa029f454c7b6445fa72ea6df999f67e.png" alt="fa029f454c7b6445fa72ea6df999f67e">
</div>
</div>
<div id="__asciidoctor-preview-586__" class="dlist">
<dl>
<dt class="hdlist1">Definition</dt>
<dd>
<div id="__asciidoctor-preview-589__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-590__" class="paragraph">
<p>RDD, short for Resilient Distributed Datasets, is a fault-tolerant, parallel data structure that lets users explicitly keep data in memory or on disk and control how it is partitioned.</p>
</div>
<div id="__asciidoctor-preview-591__" class="paragraph">
<p>RDDs also provide a rich set of operations on this data. Transformations such as map, flatMap, and filter implement the Monad pattern and fit naturally with Scala's collection operations. Beyond those, RDDs offer more convenient operations such as join, groupBy, and reduceByKey to support common data computations.</p>
</div>
<div id="__asciidoctor-preview-592__" class="paragraph">
<p>Generally speaking, there are several common models for data processing: Iterative Algorithms, Relational Queries, MapReduce, and Stream Processing. Hadoop MapReduce adopts the MapReduce model, for example, while Storm adopts the Stream Processing model. RDD blends all four, which lets Spark be applied to a wide range of big-data scenarios.</p>
</div>
<div id="__asciidoctor-preview-593__" class="paragraph">
<p>As a data structure, an RDD is essentially a read-only, partitioned collection of records. An RDD can contain multiple partitions, each of which is a fragment of the dataset.</p>
</div>
<div id="__asciidoctor-preview-594__" class="paragraph">
<p>RDDs can depend on one another. If each partition of an RDD is used by at most one partition of a single child RDD, the dependency is called narrow; if it is depended on by multiple child partitions, it is called wide. Different operations produce different dependencies according to their characteristics: map, for example, produces a narrow dependency, while join produces a wide one.</p>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Characteristics</dt>
<dd>
<div id="__asciidoctor-preview-597__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-598__" class="olist arabic">
<ol class="arabic">
<li>
<p>RDD is a programming model</p>
<div id="__asciidoctor-preview-600__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>An RDD lets the user explicitly choose whether data lives in memory or on disk</p>
</li>
<li>
<p>RDDs are distributed, and the user can control their partitioning</p>
</li>
</ol>
</div>
</li>
<li>
<p>RDD is a programming model</p>
<div id="__asciidoctor-preview-604__" class="olist loweralpha">
<ol class="loweralpha" type="a">
<li>
<p>RDDs provide a rich set of operations</p>
</li>
<li>
<p>RDDs provide operators such as map, flatMap, and filter, implementing the Monad pattern</p>
</li>
<li>
<p>RDDs provide operators such as reduceByKey and groupByKey for working with key-value data</p>
</li>
<li>
<p>RDDs provide operators such as max, min, and mean for working with numeric data</p>
</li>
</ol>
</div>
</li>
<li>
<p>RDD is a hybrid programming model, supporting iterative computation, relational queries, MapReduce, and stream processing</p>
</li>
<li>
<p>RDDs are read-only</p>
</li>
<li>
<p>RDDs have dependencies on one another; depending on the operator applied, a dependency is either wide or narrow</p>
</li>
</ol>
</div>
</div>
</div>
</dd>
</dl>
</div>
</div>
</div>
<div id="__asciidoctor-preview-612__" class="sidebarblock">
<div class="content">
<div class="title">RDD partitions</div>
<div id="__asciidoctor-preview-613__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/f738dbe3df690bc0ba8f580a3e2d1112.png" alt="f738dbe3df690bc0ba8f580a3e2d1112">
</div>
</div>
<div id="__asciidoctor-preview-614__" class="dlist">
<dl>
<dt class="hdlist1">Structurally, the whole WordCount program can be depicted as in the figure above, in two major parts</dt>
<dt class="hdlist1">Storage</dt>
<dd>
<p>A file stored on HDFS is split into blocks; as shown above, this <code>wordcount.txt</code> is split into three</p>
</dd>
<dt class="hdlist1">Computation</dt>
<dd>
<p>Spark can read not only HDFS but many other data sources as well, and it can create an RDD from any of those datasets</p>
<div id="__asciidoctor-preview-620__" class="paragraph">
<p>In the figure above, for example, one RDD represents a file on HDFS; the file is stored in three blocks, so the RDD has three partitions when it is read, each partition corresponding to one HDFS block</p>
</div>
<div id="__asciidoctor-preview-621__" class="paragraph">
<p>During subsequent computation an RDD may change its partitioning or keep the three partitions; partitions have dependencies between them, e.g. partition one of RDD2 may depend on partition one of RDD1</p>
</div>
<div id="__asciidoctor-preview-622__" class="paragraph">
<p>RDDs are partitioned by design because the computation is distributed: each partition can run in a different thread, process, or even node, enabling parallel execution</p>
</div>
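<div class="paragraph">
<p>The idea that partitions map to units of parallel work can be sketched locally with plain Scala. This is an illustration only: the slicing into four "partitions" is hypothetical, and real Spark schedules partition tasks across executors rather than local futures:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

val data = (1 to 100).toVector
val partitions = data.grouped(25).toVector               // 4 "partitions"
val partials = partitions.map(part =&gt; Future(part.sum))  // one task per partition
val total = Await.result(Future.sequence(partials), Duration.Inf).sum
// total is computed partition by partition, then combined</code></pre>
</div>
</div>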
</dd>
</dl>
</div>
</div>
</div>
<div id="__asciidoctor-preview-623__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-624__" class="olist arabic">
<ol class="arabic">
<li>
<p>RDD stands for Resilient Distributed Dataset</p>
</li>
<li>
<p>A crucial premise of RDDs is that they run in a distributed environment and can therefore be partitioned</p>
</li>
</ol>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_4_1_创建_rdd">4.1. Creating RDDs</h3>
<div id="__asciidoctor-preview-627__" class="sidebarblock">
<div class="content">
<div class="title">Program entry point: SparkContext</div>
<div id="__asciidoctor-preview-628__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val conf = new SparkConf().setMaster("local[2]").setAppName("spark_context")
val sc: SparkContext = new SparkContext(conf)</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-629__" class="paragraph">
<p><code>SparkContext</code> is the entry component of spark-core and the entry point of a Spark program; it has existed since Spark 0.x and is a veteran API</p>
</div>
<div id="__asciidoctor-preview-630__" class="paragraph">
<p>If a Spark program is divided into front end and back end, the back end is the cluster that can run Spark programs, and the <code>Driver</code> is the front end. Within the <code>Driver</code>, <code>SparkContext</code> is the most important component: it is the first component the <code>Driver</code> creates when it starts, and it is the <code>Driver</code>'s core</p>
</div>
<div id="__asciidoctor-preview-631__" class="paragraph">
<p>Judging from the API it provides, the main jobs of <code>SparkContext</code> are connecting to the cluster and creating RDDs, accumulators, broadcast variables, and so on</p>
</div>
</div>
</div>
<div id="__asciidoctor-preview-632__" class="dlist">
<dl>
<dt class="hdlist1">In short, there are three ways to create an RDD</dt>
<dd>
<div id="__asciidoctor-preview-635__" class="ulist">
<ul>
<li>
<p>Directly from a local collection</p>
</li>
<li>
<p>By reading an external dataset</p>
</li>
<li>
<p>By deriving from another RDD</p>
</li>
</ul>
</div>
</dd>
<dt class="hdlist1">Creating an RDD directly from a local collection</dt>
<dd>
<div id="__asciidoctor-preview-641__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-642__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val conf = new SparkConf().setMaster("local[2]").setAppName("create_rdd")
val sc = new SparkContext(conf)

val list = List(1, 2, 3, 4, 5, 6)
val rddParallelize = sc.parallelize(list, 2)
val rddMake = sc.makeRDD(list, 2)</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-643__" class="paragraph">
<p>The <code>parallelize</code> and <code>makeRDD</code> APIs create an RDD from a local collection</p>
</div>
<div id="__asciidoctor-preview-644__" class="paragraph">
<p>The two are essentially the same: inside the <code>makeRDD</code> method, <code>parallelize</code> is ultimately what gets called</p>
</div>
<div id="__asciidoctor-preview-645__" class="paragraph">
<p>Because the data is not read from an external dataset, there is no external partitioning to inherit, so both methods take two parameters: the first is the local collection, the second is the number of partitions</p>
</div>
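<div class="paragraph">
<p>To see the role of the second parameter, the slicing of a local collection into partitions can be approximated like this. This is a rough sketch in plain Scala; Spark's actual slicing logic differs in details such as the special handling of Ranges:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">// Approximate emulation of slicing a collection into numSlices partitions
def slice[T](seq: Seq[T], numSlices: Int): Seq[Seq[T]] =
  (0 until numSlices).map { i =&gt;
    val start = (i * seq.length) / numSlices
    val end = ((i + 1) * seq.length) / numSlices
    seq.slice(start, end)
  }

slice(List(1, 2, 3, 4, 5, 6), 2)  // two partitions: List(1, 2, 3) and List(4, 5, 6)</code></pre>
</div>
</div>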
</div>
</div>
</dd>
<dt class="hdlist1">Creating an RDD by reading an external file</dt>
<dd>
<div id="__asciidoctor-preview-648__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-649__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val conf = new SparkConf().setMaster("local[2]").setAppName("read_file")
val sc = new SparkContext(conf)

val source: RDD[String] = sc.textFile("hdfs://node01:8020/dataset/wordcount.txt")</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-650__" class="ulist">
<ul>
<li>
<p>Access patterns</p>
<div id="__asciidoctor-preview-652__" class="ulist">
<ul>
<li>
<p>Directories are supported, e.g. <code>sc.textFile("hdfs:///dataset")</code></p>
</li>
<li>
<p>Compressed files are supported, e.g. <code>sc.textFile("hdfs:///dataset/words.gz")</code></p>
</li>
<li>
<p>Wildcards are supported, e.g. <code>sc.textFile("hdfs:///dataset/*.txt")</code></p>
</li>
</ul>
</div>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-656__" class="admonitionblock warning">
<table>
<tr>
<td class="icon">
<i class="fa icon-warning" title="Warning"></i>
</td>
<td class="content">
<div id="__asciidoctor-preview-657__" class="paragraph">
<p>If the Spark application runs on a cluster, its Worker tasks may execute on any node</p>
</div>
<div id="__asciidoctor-preview-658__" class="paragraph">
<p>So when accessing a local file with a <code>file://&#8230;</code> path, make sure the file exists at that path on every Worker, otherwise a file-not-found error may occur</p>
</div>
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-659__" class="ulist">
<ul>
<li>
<p><strong>Partitions</strong></p>
<div id="__asciidoctor-preview-661__" class="ulist">
<ul>
<li>
<p>By default, when reading a file from HDFS, each HDFS <code>block</code> corresponds to one RDD <code>partition</code>; the default <code>block</code> size is 128M</p>
</li>
<li>
<p>The second parameter can specify the number of partitions, e.g. <code>sc.textFile("hdfs://node01:8020/dataset/wordcount.txt", 20)</code></p>
</li>
<li>
<p>If a partition count is given via the second parameter, it must not be smaller than the number of <code>block</code>s</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
<div id="__asciidoctor-preview-665__" class="admonitionblock note">
<table>
<tr>
<td class="icon">
<i class="fa icon-note" title="Note"></i>
</td>
<td class="content">
As a rule of thumb, 2 to 4 partitions per CPU core is a reasonable value
</td>
</tr>
</table>
</div>
<div id="__asciidoctor-preview-666__" class="ulist">
<ul>
<li>
<p>Supported platforms</p>
<div id="__asciidoctor-preview-668__" class="ulist">
<ul>
<li>
<p>Nearly all Hadoop data formats are supported, as is access to HDFS</p>
</li>
<li>
<p>With third-party support, files on AWS and Alibaba Cloud can also be accessed; see the corresponding platform's API for details</p>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">Deriving a new RDD from another RDD</dt>
<dd>
<div id="__asciidoctor-preview-673__" class="openblock">
<div class="content">
<div id="__asciidoctor-preview-674__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val conf = new SparkConf().setMaster("local[2]").setAppName("derive_rdd")
val sc = new SparkContext(conf)

val source: RDD[String] = sc.textFile("hdfs://node01:8020/dataset/wordcount.txt", 20)
val words = source.flatMap { line =&gt; line.split(" ") }</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-675__" class="ulist">
<ul>
<li>
<p><code>source</code> is created by reading a file from HDFS</p>
</li>
<li>
<p><code>words</code> is a new RDD generated by calling the <code>flatMap</code> operator on <code>source</code></p>
</li>
</ul>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-678__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-679__" class="olist arabic">
<ol class="arabic">
<li>
<p>An RDD can be created in three ways: from a local collection, from an external dataset, or by deriving from another RDD</p>
</li>
</ol>
</div>
</div>
</div>
</div>
<div class="sect2">
<h3 id="_4_2_rdd_算子">4.2. RDD Operators</h3>
<div id="__asciidoctor-preview-681__" class="exampleblock">
<div class="title">Goals</div>
<div class="content">
<div id="__asciidoctor-preview-682__" class="olist arabic">
<ol class="arabic">
<li>
<p>Understand what each operator does</p>
</li>
<li>
<p>Use that understanding to work backwards through the WordCount program and the key points of Spark</p>
</li>
</ol>
</div>
</div>
</div>
<div id="__asciidoctor-preview-685__" class="dlist">
<dl>
<dt class="hdlist1">The Map operator</dt>
<dd>
<div id="__asciidoctor-preview-688__" class="sidebarblock">
<div class="content">
<div id="__asciidoctor-preview-689__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">sc.parallelize(Seq(1, 2, 3))
  .map( num =&gt; num * 10 )
  .collect()</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-690__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/c59d44296918b864a975ebbeb60d4c04.png" alt="c59d44296918b864a975ebbeb60d4c04" width="800">
</div>
</div>
<div id="__asciidoctor-preview-691__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/57c2f77284bfa8f99ade091fdd7e9f83.png" alt="57c2f77284bfa8f99ade091fdd7e9f83" width="800">
</div>
</div>
<div id="__asciidoctor-preview-692__" class="dlist">
<dl>
<dt class="hdlist1">Purpose</dt>
<dd>
<p>Transforms the records of an RDD one-to-one into another form</p>
</dd>
<dt class="hdlist1">Signature</dt>
<dd>
<p><code>def map[U: ClassTag](f: T &#8658; U): RDD[U]</code></p>
</dd>
<dt class="hdlist1">Parameter</dt>
<dd>
<p><code>f</code> &#8594; the Map operator is a <code>source RDD &#8594; new RDD</code> transformation; the function takes a record of the source RDD and returns the transformed record for the new RDD</p>
</dd>
<dt class="hdlist1">Note</dt>
<dd>
<p>Map is one-to-one: if the function is <code>String &#8594; Array[String]</code>, then every record in the new RDD is an array</p>
</dd>
</dl>
</div>
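<div class="paragraph">
<p>The note above can be seen directly on plain Scala collections (an illustration only, not RDD code): mapping with a <code>String =&gt; Array[String]</code> function yields one array per record, whereas <code>flatMap</code> would flatten them:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val lines = Seq("a b", "c d")
val mapped = lines.map(line =&gt; line.split(" "))    // one Array per record: Seq(Array("a", "b"), Array("c", "d"))
val flat = lines.flatMap(line =&gt; line.split(" "))  // flattened: Seq("a", "b", "c", "d")</code></pre>
</div>
</div>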
</div>
</div>
</dd>
<dt class="hdlist1">The FlatMap operator</dt>
<dd>
<div id="__asciidoctor-preview-703__" class="sidebarblock">
<div class="content">
<div id="__asciidoctor-preview-704__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">sc.parallelize(Seq("Hello lily", "Hello lucy", "Hello tim"))
  .flatMap( line =&gt; line.split(" ") )
  .collect()</code></pre>
</div>
</div>
<div id="__asciidoctor-preview-705__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/f6c4feba14bb71372aa0cb678067c6a8.png" alt="f6c4feba14bb71372aa0cb678067c6a8" width="800">
</div>
</div>
<div id="__asciidoctor-preview-706__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/ec39594f30ca4d59e2ef5cdc60387866.png" alt="ec39594f30ca4d59e2ef5cdc60387866" width="800">
</div>
</div>
<div id="__asciidoctor-preview-707__" class="dlist">
<dl>
<dt class="hdlist1">Purpose</dt>
<dd>
<p>The FlatMap operator is similar to Map, but FlatMap is one-to-many</p>
</dd>
<dt class="hdlist1">Signature</dt>
<dd>
<p><code>def flatMap[U: ClassTag](f: T &#8658; TraversableOnce[U]): RDD[U]</code></p>
</dd>
<dt class="hdlist1">Parameter</dt>
<dd>
<p><code>f</code> &#8594; takes a record of the source RDD and returns a collection of records for the new RDD; note that the returned collection is flattened, and its elements are placed into the new RDD individually</p>
</dd>
<dt class="hdlist1">Note</dt>
<dd>
<p>flatMap is really two operations, <code>map + flatten</code>: transform first, then flatten the resulting collections</p>
</dd>
</dl>
</div>
</div>
</div>
</dd>
<dt class="hdlist1">The ReduceByKey operator</dt>
<dd>
<div id="__asciidoctor-preview-718__" class="sidebarblock">
<div class="content">
<div id="__asciidoctor-preview-719__" class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">sc.parallelize(Seq(("a", 1), ("a", 1), ("b", 1)))
  .reduceByKey( (curr, agg) =&gt; curr + agg )
  .collect()</code></pre>
</div>
</div>
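<div class="paragraph">
<p>The semantics of the snippet above can be reproduced on a local collection by grouping by key and then reducing each group's values with the same <code>(curr, agg) =&gt; curr + agg</code> function. This is a sketch only; real <code>reduceByKey</code> also pre-aggregates on the map side before shuffling:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="highlightjs highlight"><code class="language-scala hljs" data-lang="scala">val pairs = Seq(("a", 1), ("a", 1), ("b", 1))
val reduced = pairs
  .groupBy { case (key, _) =&gt; key }             // group by key
  .map { case (key, group) =&gt;                   // reduce each group's values
    (key, group.map(_._2).reduce((curr, agg) =&gt; curr + agg))
  }
// reduced == Map("a" -&gt; 2, "b" -&gt; 1)</code></pre>
</div>
</div>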
<div id="__asciidoctor-preview-720__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/07678e1b4d6ba1dfaf2f5df89489def4.png" alt="07678e1b4d6ba1dfaf2f5df89489def4" width="800">
</div>
</div>
<div id="__asciidoctor-preview-721__" class="imageblock">
<div class="content">
<img src="https://doc-1256053707.cos.ap-beijing.myqcloud.com/a9b444d144d6996c83b33f6a48806a1a.png" alt="a9b444d144d6996c83b33f6a48806a1a" width="800">
</div>
</div>
<div id="__asciidoctor-preview-722__" class="dlist">
<dl>
<dt class="hdlist1">Purpose</dt>
<dd>
<p>First groups the records by key, then reduces each group's values to a single aggregate; this is very similar to Reduce in MapReduce</p>
</dd>
<dt class="hdlist1">Signature</dt>
<dd>
<p><code>def reduceByKey(func: (V, V) &#8658; V): RDD[(K, V)]</code></p>
</dd>
<dt class="hdlist1">Parameter</dt>
<dd>
<p><code>func</code> &#8594; the reducing function; it takes two parameters, the current value and the partial aggregate, and returns the aggregated result for that key</p>
</dd>
<dt class="hdlist1">Notes</dt>
<dd>
<div id="__asciidoctor-preview-731__" class="ulist">
<ul>
<li>
<p>ReduceByKey only works on key-value data, which in this context specifically means Tuple2</p>
</li>
<li>
<p>ReduceByKey is an operation that requires a shuffle</p>
</li>
<li>
<p>Compared with other shuffle operations, ReduceByKey is efficient because, much like MapReduce, it runs a Combiner on the map side, so less data goes through shuffle I/O</p>
</li>
</ul>
</div>
</dd>
</dl>
</div>
</div>
</div>
</dd>
</dl>
</div>
<div id="__asciidoctor-preview-735__" class="exampleblock">
<div class="title">Summary</div>
<div class="content">
<div id="__asciidoctor-preview-736__" class="olist arabic">
<ol class="arabic">
<li>
<p>The map and flatMap operators are both transformations; flatMap additionally flattens after transforming, so map is one-to-one and flatMap is one-to-many</p>
</li>
<li>
<p>reduceByKey is similar to Reduce in MapReduce</p>
</li>
</ol>
</div>
</div>
</div>
</div>
</div>
</div>
        </div>
      </div>
    </body>
  </html>
