Development tools:
File size: 515kb
Downloads: 0
Upload date: 2019-07-02
Description:
HDFS is the storage foundation of Hadoop distributed computing. It is highly fault-tolerant, can be deployed on commodity hardware, suits data-intensive applications, and provides high-throughput data reads and writes. HDFS scales data access simply by adding nodes to the cluster, which handles large numbers of concurrent clients. It supports a traditional hierarchical file organization similar to existing file systems: files can be created, deleted, renamed, and so on.
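The create/delete/rename operations mentioned above map directly onto the `hdfs dfs` shell. A minimal sketch, assuming a running cluster; `/tmp/hdfs_demo` is a hypothetical scratch path, and the guard lets the snippet degrade to a no-op message on a machine without the hdfs client:

```shell
# Demonstrates the create/rename/delete file operations on HDFS.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -mkdir -p /tmp/hdfs_demo                        # create a directory
  hdfs dfs -touchz /tmp/hdfs_demo/a.txt                    # create an empty file
  hdfs dfs -mv /tmp/hdfs_demo/a.txt /tmp/hdfs_demo/b.txt   # rename
  hdfs dfs -rm -r -skipTrash /tmp/hdfs_demo                # delete
  ran=cluster
else
  echo "hdfs client not installed; skipping demo"
  ran=skipped
fi
```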
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
2. /etc/hadoop/conf/hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>bdcluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.bdcluster</name>
  <value>nn002,nn003</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.bdcluster.nn002</name>
  <value>master002:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.bdcluster.nn003</name>
  <value>master003:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.bdcluster.nn002</name>
  <value>master002:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.bdcluster.nn003</name>
  <value>master003:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master002:8485;master003:8485;master004:8485/bdcluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/disk01/hadoop/hdfs/journalnode</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.bdcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/var/lib/hadoop-hdfs/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>master002:2181,master003:2181,master004:2181,master005:2181,master006:2181</value>
</property>
<property>
  <name>dfs.permissions.superusergroup</name>
  <value></value> <!-- value lost in the original -->
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/disk01/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/disk01/hadoop/hdfs/datanode,/data/disk02/hadoop/hdfs/datanode,/data/disk03/hadoop/hdfs/datanode,/data/disk04/hadoop/hdfs/datanode,/data/disk05/hadoop/hdfs/datanode,/data/disk06/hadoop/hdfs/datanode,/data/disk07/hadoop/hdfs/datanode</value>
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>3</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
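After editing hdfs-site.xml, it is worth confirming the file is still well-formed XML before restarting any services. A small sketch using python3's stdlib parser; the fallback sample file is only there so the check can be tried on a machine without a Hadoop install:

```shell
# Verify a Hadoop config file parses as XML; CONF is overridable.
CONF="${CONF:-/etc/hadoop/conf/hdfs-site.xml}"
if [ ! -f "$CONF" ]; then
  # No real config on this machine: fall back to a scratch sample.
  CONF="$(mktemp)"
  cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>bdcluster</value>
  </property>
</configuration>
EOF
fi
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$CONF" \
  && echo "well-formed: $CONF"
```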
3. /etc/hadoop/conf/slaves
slave001
slave002
slave004
slave007
slave008
slave009
slave010
slave011
slave012
slave013
slave014
slave015
slave016
slave018
slave019
slave020
slave021
slave022
slave023
slave024
slave026
slave027
slave028
slave029
slave030
slave032
slave033
slave034
slave035
slave036
slave038
slave039
slave040
slave041
slave042
slave044
slave045
slave047
slave050
slave051
slave053
slave056
slave057
slave058
slave059
slave060
slave061
slave062
slave063
slave064
HDFS Configuration
1. Set up passwordless SSH login for the hdfs user
2. Create the data directories
namenode
mkdir -p /data/disk01/hadoop/hdfs/namenode
chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/
chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/namenode
chmod 700 /data/disk01/hadoop/hdfs/namenode
datanode
mkdir -p /data/disk01/hadoop/hdfs/datanode
chmod 700 /data/disk01/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/
mkdir -p /data/disk02/hadoop/hdfs/datanode
chmod 700 /data/disk02/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk02/hadoop/hdfs/
mkdir -p /data/disk03/hadoop/hdfs/datanode
chmod 700 /data/disk03/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk03/hadoop/hdfs/
mkdir -p /data/disk04/hadoop/hdfs/datanode
chmod 700 /data/disk04/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk04/hadoop/hdfs/
mkdir -p /data/disk05/hadoop/hdfs/datanode
chmod 700 /data/disk05/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk05/hadoop/hdfs/
mkdir -p /data/disk06/hadoop/hdfs/datanode
chmod 700 /data/disk06/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk06/hadoop/hdfs/
mkdir -p /data/disk07/hadoop/hdfs/datanode
chmod 700 /data/disk07/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk07/hadoop/hdfs/
journalnode
mkdir -p /data/disk01/hadoop/hdfs/journalnode
chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/journalnode
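The repetitive per-disk datanode commands above can be collapsed into a loop. A sketch: `DATA_ROOT` defaults to a scratch directory so it can be dry-run anywhere (on a real datanode set `DATA_ROOT=/data`), and the chown is skipped unless running as root on a box that actually has an hdfs user:

```shell
# Create the seven per-disk datanode directories with correct permissions.
DATA_ROOT="${DATA_ROOT:-$(mktemp -d)}"   # use /data on a real datanode
for i in 01 02 03 04 05 06 07; do
  d="$DATA_ROOT/disk$i/hadoop/hdfs/datanode"
  mkdir -p "$d"
  chmod 700 "$d"
  # chown requires root and an existing hdfs user
  if [ "$(id -u)" -eq 0 ] && id hdfs >/dev/null 2>&1; then
    chown -R hdfs:hdfs "$DATA_ROOT/disk$i/hadoop/hdfs/"
  fi
done
echo "created datanode dirs under $DATA_ROOT"
```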
3. Start the journalnodes
service hadoop-hdfs-journalnode start
4. Format the namenode (master002)
sudo -u hdfs hadoop namenode -format
5. Initialize the HA state in ZooKeeper (on namenode master002)
hdfs zkfc -formatZK
6. Initialize the shared edits directory (master002)
hdfs namenode -initializeSharedEdits
7. Start the namenodes
On the formatted namenode (master002):
service hadoop-hdfs-namenode start
On the standby namenode (master003):
sudo -u hdfs hdfs namenode -bootstrapStandby
service hadoop-hdfs-namenode start
8. Start the datanodes
service hadoop-hdfs-datanode start
9. Start zkfc (on the namenodes)
service hadoop-hdfs-zkfc start
10. Initialize the HDFS directory structure
/usr/lib/hadoop/libexec/init-hdfs.sh
HDFS Operations
Installation environment
Installation path:
/usr/lib/hadoop-hdfs
Configuration file path:
/etc/hadoop/conf
Log path:
/var/log/hadoop-hdfs
Start | stop | check status
NameNode
service hadoop-hdfs-namenode start|stop|status
DataNode
service hadoop-hdfs-datanode start|stop|status
JournalNode
service hadoop-hdfs-journalnode start|stop|status
zkfc
service hadoop-hdfs-zkfc start|stop|status
Common commands
Check cluster status:
sudo -u hdfs hdfs dfsadmin -report
Check a file and its replicas:
sudo -u hdfs hdfs fsck [filename] -files -blocks -locations -racks
HDFS Troubleshooting Notes
1. Both NameNodes in standby state
1. Symptom: hadoop fs -ls / was unavailable.
2. Checked the log /var/log/hadoop-hdfs/hadoop-hdfs-namenode-bigdata101.log:
2014-05-11 16:01:35,752 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
3. The related logs under /var/log/hadoop-hdfs are all worth checking:
hadoop-hdfs-namenode-bigdata101.log
hadoop-hdfs-zkfc-bigdata101.log
hadoop-hdfs-datanode-bigdata101.log
hadoop-hdfs-journalnode-bigdata101.log
Check the running status of all Hadoop-related services (NN, DN, etc.):
$> service --status-all | grep -i hadoop
Hadoop datanode is running
Hadoop journalnode is running
Hadoop namenode is running
Hadoop zkfc is running                                  [FAILED]
Hadoop httpfs is running
Hadoop nodemanager is running
4. Found that the zkfc service was not running properly on either the active or the standby namenode machine.
$> service hadoop-hdfs-zkfc start
After starting it, the NameNodes returned to normal.
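A quick way to confirm the fix took effect is to query each namenode's HA state by its ID (nn002/nn003, as defined in hdfs-site.xml above). A sketch, guarded so it becomes a no-op on a machine without the hdfs client:

```shell
# One namenode should report "active" and the other "standby".
if command -v hdfs >/dev/null 2>&1; then
  sudo -u hdfs hdfs haadmin -getServiceState nn002
  sudo -u hdfs hdfs haadmin -getServiceState nn003
  checked=yes
else
  echo "hdfs client not installed; skipping HA state check"
  checked=no
fi
```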