Setting Up a Spark Distributed Cluster


1. Download

Download spark-3.5.4-bin-without-hadoop.tgz from:
https://downloads.apache.org/spark/spark-3.5.4/
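
Optionally, verify the download. Apache publishes a .sha512 checksum file alongside each release; since the checksum-file format has varied between releases, the safest approach is to compute the hash and compare by eye:

sha512sum spark-3.5.4-bin-without-hadoop.tgz
# compare the printed hash with the contents of
# spark-3.5.4-bin-without-hadoop.tgz.sha512 from the same download page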

2. Install

Copy the installation package to the Linux VM localhost1 via a shared folder configured in the VM settings. The shared folder is mounted at /mnt/hgfs/. Copy the package from there to the target path /opt/software/.

Extract the archive:

cd /opt/software/
tar -zxvf spark-3.5.4-bin-without-hadoop.tgz -C /usr/local/applications/
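
If the target directory does not exist yet, create it before running the tar command above:

mkdir -p /usr/local/applications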

3. Configure environment variables

Configure this on all three Linux nodes:

vi /etc/profile


SPARK_HOME=/usr/local/applications/spark-3.5.4-bin-without-hadoop
PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

export SPARK_HOME PATH

Then reload the profile so the changes take effect in the current shell:

source /etc/profile
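
A quick sanity check that the variables took effect (spark-submit prints version information without needing a running cluster):

echo $SPARK_HOME
spark-submit --version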

4. Modify the Spark configuration

cd $SPARK_HOME/conf

workers

cp workers.template workers

vi workers

localhost1
localhost2
localhost3
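
Every hostname listed in workers must resolve on every node. A sketch of the /etc/hosts entries this assumes — the IP addresses below are placeholders, so substitute the real addresses of your machines:

cat >> /etc/hosts <<'EOF'
192.168.10.101 localhost1
192.168.10.102 localhost2
192.168.10.103 localhost3
EOF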

spark-defaults.conf

cp spark-defaults.conf.template spark-defaults.conf

vi spark-defaults.conf

spark.master                     spark://localhost1:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://localhost1:9000/spark-eventlog
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory              512m

Start HDFS, since the event log directory configured above lives in HDFS:

start-dfs.sh

Create the event log directory in HDFS:

hdfs dfs -mkdir /spark-eventlog
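
Confirm the directory exists:

hdfs dfs -ls /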

spark-env.sh

cp spark-env.sh.template spark-env.sh

vi spark-env.sh

export JAVA_HOME=/usr/local/java/jdk1.8.0_431
export HADOOP_HOME=/usr/local/applications/hadoop-3.3.6
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/usr/local/applications/hadoop-3.3.6/bin/hadoop classpath)
export SPARK_MASTER_HOST=localhost1
export SPARK_MASTER_PORT=7077
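
The "without-hadoop" build ships no Hadoop jars of its own, which is why SPARK_DIST_CLASSPATH must point at a valid Hadoop classpath. You can check what the command substitution above resolves to by running it directly:

/usr/local/applications/hadoop-3.3.6/bin/hadoop classpath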

5. Distribute Spark to the cluster

First, stop the firewall (on all nodes):

systemctl stop firewalld
systemctl disable firewalld
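
If you prefer not to disable firewalld entirely, a sketch of opening just the well-known Spark ports instead (executors and drivers also use ephemeral ports unless explicitly pinned, so on a lab cluster disabling the firewall is simpler):

firewall-cmd --permanent --add-port=7077/tcp    # master RPC
firewall-cmd --permanent --add-port=8080/tcp    # master web UI
firewall-cmd --permanent --add-port=18080/tcp   # history server
firewall-cmd --reload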

Copy Spark to localhost2 and localhost3:

cd /usr/local/applications
scp -r spark-3.5.4-bin-without-hadoop root@localhost2:/usr/local/applications/spark-3.5.4-bin-without-hadoop
scp -r spark-3.5.4-bin-without-hadoop root@localhost3:/usr/local/applications/spark-3.5.4-bin-without-hadoop
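
scp assumes SSH access to the other nodes (ideally passwordless, e.g. set up with ssh-copy-id). A quick check that the copies landed:

ssh root@localhost2 "ls /usr/local/applications/spark-3.5.4-bin-without-hadoop/bin/spark-submit"
ssh root@localhost3 "ls /usr/local/applications/spark-3.5.4-bin-without-hadoop/bin/spark-submit"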

6. Start the cluster

cd $SPARK_HOME/sbin

./start-all.sh

After starting, check the processes on the three nodes:

[root@localhost1 sbin]# jps
3397 Jps
3190 Master
3336 Worker


[root@localhost2 ~]# jps
2966 Worker
3030 Jps


[root@localhost3 ~]# jps
2972 Worker
3037 Jps

Open the master Web UI in a browser (port 8080 by default):

http://localhost1:8080/

You should see the Spark Web UI, with all three workers listed as ALIVE.

7. Test the cluster

The tests use HDFS, so start it first if it is not already running:

start-dfs.sh

1) Compute Pi

run-example SparkPi 10

Output:

[root@localhost1 conf]# run-example SparkPi 10
Pi is roughly 3.141343141343141
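
run-example is a thin wrapper around spark-submit. The equivalent direct call looks roughly like this (the example jar name assumes the Scala 2.12 build that ships with Spark 3.5.4):

spark-submit --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.5.4.jar 10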

2) Start spark-shell

[root@localhost1 conf]# spark-shell
Spark context Web UI available at http://localhost1:4040
Spark context available as 'sc' (master = spark://localhost1:7077, app id = app-20250128143941-0005).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.5.4
      /_/
         
Using Scala version 2.12.18 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_431)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

While the shell is running, its application Web UI is available at http://localhost1:4040/jobs/
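
The word count below reads /wcinput/wc.txt from HDFS. If that file is not already there (for example, left over from an earlier Hadoop WordCount exercise), upload a sample input first — the file contents here are only illustrative:

echo "hadoop hdfs yarn mapreduce" > wc.txt
hdfs dfs -mkdir -p /wcinput
hdfs dfs -put wc.txt /wcinput/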

Run the following word count in the spark-shell:

scala> val lines = sc.textFile("/wcinput/wc.txt")
lines: org.apache.spark.rdd.RDD[String] = /wcinput/wc.txt MapPartitionsRDD[1] at textFile at <console>:23

scala> lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_).collect().foreach(println)
(mapreduce,3)                                                                   
(yarn,2)
(neil,3)
(hadoop,2)
(jack,3)
(hdfs,1)

-------------- The cluster setup is now complete --------------

8. History Server

1) Configuration

Add to spark-defaults.conf (the first two properties were already set in step 4; spark.eventLog.compress is new):

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://localhost1:9000/spark-eventlog
spark.eventLog.compress          true

Add to spark-env.sh — note that this export is an environment variable, so it belongs in spark-env.sh, not spark-defaults.conf:

export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=50 -Dspark.history.fs.logDirectory=hdfs://localhost1:9000/spark-eventlog"

2) Start

$SPARK_HOME/sbin/start-history-server.sh
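
Confirm the process came up:

jps | grep HistoryServer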

3) Web UI

http://localhost1:18080/

