<h3>Process Parameter Settings:</h3>
<p>
Parameters can be used as variables in process runs, most commonly as field names, table names, and filter conditions in data input and output. Parameters can be used in SQL, in output nodes, and in APIs. Click "New" to create a new process parameter. In SQL, reference a parameter as ${parameter name}. For example, given the parameter a = current month: select * from table_a where month = ${a}.
</p>
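<p>
The ${parameter name} syntax above is plain string substitution. As a minimal illustration (the platform's actual templating engine is not specified here), Python's standard <code>string.Template</code> happens to use the same ${name} notation:
</p>

```python
from string import Template

# Hypothetical sketch of ${parameter}-style substitution; string.Template
# shares the ${name} syntax used by the process parameters described above.
sql = Template("select * from table_a where month = ${a}")

# Substituting the parameter value renders the final SQL statement.
rendered = sql.substitute(a="'2024-05'")
print(rendered)  # select * from table_a where month = '2024-05'
```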
<p>
The nodes that support process parameters are: relational database input, Hive input, HBase input, IoTDB input, relational database output, Hive output, HBase output, data filtering, the input query analyzer, the process query analyzer, all extended programming nodes, and property generation.
</p>
<h3>Environmental Parameter Settings:</h3>
<p>
Configuration of the environment in which the process runs.
</p>
<ul>
<li>
<span class='param-label'>Managing CPU: </span>Number of CPU cores required by the process manager
</li>
<li>
<span class='param-label'>Managing Memory: </span>Amount of memory required by the process manager
</li>
<li>
<span class='param-label'>Number of Executors:</span> Number of executors to start for the current task
</li>
<li>
<span class='param-label'>Executing CPU: </span>Number of CPU cores required by each executor
</li>
<li>
<span class='param-label'>Executing Memory:</span> Amount of memory required by each executor
</li>
<li>
<span class='param-label'>Cache:</span> An important feature of Spark is that data sets can be cached in memory; the cache mode can also be specified manually.<br/>
MEMORY_ONLY: The RDD is stored as objects in the JVM. If there is not enough memory, some partitions are not cached and are recomputed when needed. This is the default mode.<br/>
MEMORY_AND_DISK: The RDD is stored in the JVM; partitions that do not fit in memory are spilled to disk and read from there when needed.<br/>
MEMORY_ONLY_SER: The RDD is serialized and stored in the JVM. This mode is similar to MEMORY_ONLY but more space-efficient, at the cost of extra CPU for serialization.<br/>
MEMORY_AND_DISK_SER: Similar to MEMORY_AND_DISK, except that objects are stored as serialized bytes.<br/>
DISK_ONLY: The RDD is stored on disk only.<br/>
*_2: Same as the corresponding mode above, but two replicas are kept in the cluster.<br/>
OFF_HEAP: RDD data are serialized and stored in Tachyon. Compared with MEMORY_ONLY_SER, OFF_HEAP reduces garbage-collection cost, making Spark executors smaller and lighter while letting them share memory; and because the data are stored in Tachyon, a node failure in the Spark cluster does not cause data loss. This mode is therefore attractive in large-memory or highly concurrent scenarios. Tachyon is not part of the Spark distribution and an appropriate version must be deployed separately; it manages data in blocks, which can be discarded according to certain algorithms and are not rebuilt.
</li>
<li>
<span class='param-label'>Execution mode:</span> Users can choose the execution mode of the process according to the amount of data: cluster mode, lightweight mode, or stand-alone mode.<br/>
Cluster mode: for large data volumes in a cloud deployment; it effectively improves processing efficiency on massive data. (Recommended for more than one million records.)<br/>
Lightweight mode: for small data volumes; execution resources are shared (allocated centrally by administrators), so process execution efficiency is relatively low. (Recommended for fewer than 10,000 records.)<br/>
Stand-alone mode: for small data volumes with dedicated execution resources (configured independently); process execution efficiency is relatively high. (Recommended for fewer than one million records.)
</li>
</ul>
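<p>
In cluster mode these environmental parameters correspond conceptually to standard Spark submission options. The sketch below shows the standard spark-submit flags; the exact flags this platform passes through, and the values shown, are assumptions for illustration:
</p>

```shell
# Sketch: mapping the environment parameters to standard spark-submit
# options (the mapping is an assumption; values are examples only).
spark-submit \
  --driver-cores 2 \
  --driver-memory 4g \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 8g \
  my_process.py
# --driver-cores / --driver-memory      -> Managing CPU / Managing Memory
# --num-executors                       -> Number of Executors
# --executor-cores / --executor-memory  -> Executing CPU / Executing Memory
```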
<h3>Resource usage monitoring:</h3>
<p>
Monitors the resource usage of the local machine/cluster and the current user's queues. Users can view the number of CPUs, memory usage, and remaining capacity at any time, which provides a basis for configuring a process's environment parameters.
</p>