Apache Spark requires a cluster manager and a distributed storage system. For cluster management, Spark supports standalone mode (a native Spark cluster, which you can launch manually or with the launch scripts provided by the install package; the daemons can also run on a single machine for testing), Hadoop YARN, Apache Mesos, and Kubernetes. For distributed storage, Spark can interface with a wide variety of systems, including Alluxio, Hadoop Distributed File System (HDFS), MapR File System (MapR-FS), Cassandra, OpenStack Swift, Amazon S3, Kudu, and the Lustre file system, or a custom solution can be implemented. Spark also supports a pseudo-distributed local mode, usually used only for development or testing, where distributed storage is not required and the local file system can be used instead; in this scenario, Spark runs on a single machine with one executor per CPU core.
What are the main components of Apache Spark based on the paragraph?
Based on the paragraph, the two main components of Apache Spark are a cluster manager and a distributed storage system. For cluster management, Spark supports a standalone cluster, Hadoop YARN, Apache Mesos, or Kubernetes. For storage, Spark supports local file systems for development and testing, as well as the Hadoop Distributed File System (HDFS), Cassandra, and cloud storage such as Amazon S3.
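The pseudo-distributed local mode described in the paragraph can be sketched with PySpark. This is a minimal sketch, assuming the pyspark package and a Java runtime are installed; the application name "local-mode-demo" is an arbitrary illustrative choice. The master URL "local[*]" tells Spark to run on a single machine with one worker thread per CPU core, so no cluster manager or distributed storage is needed.

```python
# Minimal sketch of Spark's pseudo-distributed local mode.
# Assumes pyspark and a Java runtime are installed; the app name is illustrative.
from pyspark.sql import SparkSession

# "local[*]" runs Spark on a single machine, one worker thread per CPU core,
# without any external cluster manager (standalone, YARN, Mesos, or Kubernetes).
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("local-mode-demo")
    .getOrCreate()
)

# In local mode the local file system serves as storage; HDFS is not required.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
df.show()

spark.stop()
```

To target a real cluster instead, only the master URL changes (for example `spark://host:7077` for standalone or `yarn` for Hadoop YARN); the application code stays the same, which is why local mode is convenient for development and testing.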