Big Data Technology -- Lab 02: HDFS Practice [Verified Working]

Published: 2024-07-24

1. Start the Hadoop cluster

Continuing from Lab 01, start the Hadoop cluster, then use the jps command to list the daemon processes and verify that the installation succeeded.

# Start HDFS

[hadoop@master ~]$ start-dfs.sh

# Start YARN

[hadoop@master ~]$ start-yarn.sh

Alternatively, you can start all of Hadoop at once with start-all.sh.

# master node:

[hadoop@master ~]$ jps

3717 SecondaryNameNode

3855 ResourceManager

3539 NameNode

3903 JobHistoryServer

4169 Jps

# slave1 node

[hadoop@slave1 ~]$ jps

2969 Jps

2683 DataNode

2789 NodeManager

# slave2 node

[hadoop@slave2 ~]$ jps

2614 Jps

2363 DataNode

2470 NodeManager

If jps shows that JobHistoryServer has not started, run:

[hadoop@master hadoop-2.6.0]$ sbin/mr-jobhistory-daemon.sh start historyserver

starting historyserver, logging to /home/hadoop/local/opt/hadoop-2.6.0/logs/mapred-hadoop-historyserver-master.out
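This check is easy to script: jps prints one `PID Name` line per daemon, so a small grep wrapper can report whether a given daemon is up. A minimal sketch (the `check_daemon` helper and the sample output below are illustrative, not part of the lab):

```shell
# check_daemon NAME reads jps output on stdin and reports whether NAME is running.
check_daemon() {
  if grep -q " $1\$"; then
    echo "$1 is running"
  else
    echo "$1 is NOT running"
  fi
}

# Simulated jps output from before the fix (no JobHistoryServer line):
printf '3717 SecondaryNameNode\n3855 ResourceManager\n3539 NameNode\n' \
  | check_daemon JobHistoryServer
# → JobHistoryServer is NOT running
```

On the real cluster you would pipe live output instead: `jps | check_daemon JobHistoryServer`.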

2. HDFS commands

1) Create directories.

hadoop fs -mkdir /user

hadoop fs -mkdir /user/root
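Note that `hadoop fs -mkdir` also accepts `-p`, which creates any missing parent directories in one call, so the two-step sequence above collapses into `hadoop fs -mkdir -p /user/root`. The semantics match the local `mkdir -p`, sketched here on the local filesystem (the temporary directory is only for illustration):

```shell
# mkdir -p creates the whole path in one call, even when the
# intermediate directory ($tmp/user) does not exist yet.
tmp=$(mktemp -d)
mkdir -p "$tmp/user/root"
[ -d "$tmp/user/root" ] && echo "nested path created"
rm -rf "$tmp"
```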


2) Check directory permissions.

# hadoop fs -ls -d /user/root

drwxr-xr-x   - root supergroup          0 2015-05-29 17:29 /user/root
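The first column is a POSIX-style mode string: `d` for directory, then `rwx` triplets for owner, group, and other — `drwxr-xr-x` is octal 755. To tighten permissions on HDFS you would use `hadoop fs -chmod`, e.g. `hadoop fs -chmod 750 /user/root`. A local sketch of how the mode string maps to octal (the temporary directory is only for illustration):

```shell
# chmod 755 yields the same mode string shown in the -ls -d output above.
tmp=$(mktemp -d)            # mktemp creates the directory as 700
chmod 755 "$tmp"
ls -ld "$tmp" | cut -c1-10  # → drwxr-xr-x
rm -rf "$tmp"
```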

3) Upload a file.

Copy 02-上机实验/ds.txt to the client machine, then run the commands below and compare against the output shown.

Use the cp command to copy ds.txt from the mnt directory into hadoop-2.6.0, then run the following from the hadoop-2.6.0 directory:

# hadoop fs -put ds.txt /user/root/ds.txt

# hadoop fs -ls -R /user/root

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:07 /user/root/ds.txt
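The `-ls` output packs several fields into each line: mode string, replication factor, owner, group, size in bytes, modification date/time, and path — here replication 3 means HDFS keeps three copies of each block. When scripting, awk pulls these fields out cleanly; a sketch over the exact line shown above:

```shell
# Fields of 'hadoop fs -ls': mode, replication, owner, group, size, date, time, path.
line='-rw-r--r--   3 root supergroup       9135 2015-05-29 19:07 /user/root/ds.txt'
echo "$line" | awk '{print "replication=" $2 " size=" $5 " path=" $8}'
# → replication=3 size=9135 path=/user/root/ds.txt
```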

4) View file contents.

# hadoop fs -cat /user/root/ds.txt

17.759065824032646,0.6708203932499373

20.787886563063058,0.7071067811865472

17.944905786933322,0.5852349955359809

……
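Since ds.txt holds comma-separated x,y pairs, downstream processing usually starts by splitting the fields. Piping `hadoop fs -cat` into cut does this without copying the file locally; a sketch over the sample lines above (the printf simulates the cat output):

```shell
# Extract the first field of each x,y pair; on the cluster this would be:
#   hadoop fs -cat /user/root/ds.txt | cut -d, -f1
printf '17.759065824032646,0.6708203932499373\n20.787886563063058,0.7071067811865472\n' \
  | cut -d, -f1
# → 17.759065824032646
# → 20.787886563063058
```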

5) Copy/move/delete files.

# hadoop fs -cp /user/root/ds.txt /user/root/ds_backup.txt

# hadoop fs -ls /user/root

Found 2 items

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:07 /user/root/ds.txt

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:30 /user/root/ds_backup.txt

# hadoop fs -mv /user/root/ds_backup.txt /user/root/ds_backup1.txt

# hadoop fs -ls /user/root

Found 2 items

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:07 /user/root/ds.txt

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:30 /user/root/ds_backup1.txt

# hadoop fs -rm -r /user/root/ds_backup1.txt

15/05/29 19:32:51 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.

Deleted /user/root/ds_backup1.txt

# hadoop fs -ls /user/root

Found 1 items

-rw-r--r--   3 root supergroup       9135 2015-05-29 19:07 /user/root/ds.txt
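The INFO line above reports `Deletion interval = 0 minutes`, meaning `fs.trash.interval` is 0 and the HDFS trash is disabled, so `-rm` deletes permanently; when trash is enabled, `-rm -skipTrash` bypasses it. A sketch that reads the interval out of such a log line (the message string is the one from the transcript):

```shell
# Pull the deletion interval out of the trash-policy log message.
msg='Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.'
interval=$(echo "$msg" | sed 's/.*Deletion interval = \([0-9]*\) minutes.*/\1/')
if [ "$interval" -eq 0 ]; then
  echo "trash disabled: rm deletes immediately"
else
  echo "trash keeps files for $interval minutes"
fi
# → trash disabled: rm deletes immediately
```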

