Install JDK 1.7 or later (Hadoop 2.7.0 does not support JDK 1.6, and Spark 1.5.0 onwards no longer supports JDK 1.6).
Install Scala 2.10.4.
Install Hadoop 2.x (at least HDFS).
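To verify the prerequisites are in place, the standard version checks can be run on every node (assuming java, scala, and hadoop are on the PATH):
java -version
scala -version
hadoop version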
spark-env.sh
export JAVA_HOME=
export SCALA_HOME=
export HADOOP_CONF_DIR=/opt/modules/hadoop-2.2.0/etc/hadoop   # must be set when running on YARN
export SPARK_MASTER_IP=server1
export SPARK_MASTER_PORT=8888
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_WORKER_CORES=
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=26g
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
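Note that SPARK_JAVA_OPTS has been deprecated since Spark 1.0; a sketch of the preferred per-role properties in spark-defaults.conf, carrying the same GC flags (values illustrative):
spark.driver.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps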
The slaves file lists the worker nodes, one per line:
xx.xx.xx.2
xx.xx.xx.3
xx.xx.xx.4
xx.xx.xx.5
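start-slaves.sh logs into each host listed above over SSH, so passwordless SSH from the master and an identical Spark installation path on every node are assumed. A minimal sketch for pushing the config out (the install path /opt/modules/spark is hypothetical):
for host in xx.xx.xx.2 xx.xx.xx.3 xx.xx.xx.4 xx.xx.xx.5; do
  scp conf/spark-env.sh conf/slaves $host:/opt/modules/spark/conf/
done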
When spark-submit runs, default properties are read from spark-defaults.conf.
spark-defaults.conf
spark.master=spark://hadoop-spark.dargon.org:7077
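Other defaults use the same key-value format; a few commonly set ones as an illustrative sketch (values are assumptions, not recommendations):
spark.executor.memory 4g
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled true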
Start the cluster:
start-master.sh
start-slaves.sh
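To confirm the daemons are up, jps should list a Master process on the master node and a Worker process on each slave; the master web UI is then reachable at the SPARK_MASTER_WEBUI_PORT configured above (here http://server1:8080):
jps   # expect Master on server1, Worker on each slave node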
The spark-shell command actually runs spark-submit under the hood.
spark-submit --help
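A minimal end-to-end submit against this standalone master, using the SparkPi example shipped with Spark (the master URL matches the SPARK_MASTER_PORT set above; the examples jar path varies by build, so the glob is an assumption):
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://server1:8888 \
  --deploy-mode client \
  lib/spark-examples-*.jar 100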

--deploy-mode controls where the driver program (the SparkContext) runs: client (locally) or cluster (on the cluster).
The default is client, so the SparkContext runs on the submitting machine; with cluster, the SparkContext runs on the cluster.
In Spark on YARN, cluster mode means the SparkContext runs inside the YARN ApplicationMaster.
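In the Spark 1.x spelling, the two modes look like this on the command line (application class and jar are placeholders):
spark-submit --master yarn-client --class ... app.jar    # driver (SparkContext) on the local machine
spark-submit --master yarn-cluster --class ... app.jar   # driver (SparkContext) in the YARN ApplicationMaster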
spark-shell quick-start link:
http://spark.apache.org/docs/latest/quick-start.html
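The first steps from that quick-start page, runnable directly in spark-shell (the README.md path is assumed to exist under the Spark home on the driver; sc is the SparkContext the shell creates):
val textFile = sc.textFile("README.md")
textFile.count()                                          // number of lines in the file
textFile.filter(line => line.contains("Spark")).count()  // lines containing "Spark"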