A Step-by-Step Guide to Building a Hadoop Cluster

Software used for this cluster:

VMwareworkstation64_12.5.7.0

Xmanager Enterprise 4 Build 0182

CentOS-7-x86_64-DVD-1708

hadoop-2.8.2.tar.gz

jdk-8u151-linux-x64.tar.gz

Three hosts:

master 192.168.1.140

slave1 192.168.1.141

slave2 192.168.1.142

1. Install the virtual machines

(1) Install a CentOS 7 virtual machine

(2) Clone it to create the other two nodes

2. Set the hostname

hostnamectl set-hostname master   (use slave1 / slave2 on the other nodes)

3. Configure the IP address

vi /etc/sysconfig/network-scripts/ifcfg-ens33 

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.168.1.140

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=192.168.1.254
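Since slave1 and slave2 are clones of the master VM, only IPADDR changes from node to node; everything else in ifcfg-ens33 stays the same. A throwaway sketch that just prints the per-node value:

```shell
# Illustrative only: enumerate each node's address from the table above.
for i in 140 141 142; do
    echo "IPADDR=192.168.1.$i"
done
```

After editing the file, apply it with `systemctl restart network` and verify with `ip addr show ens33`.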

4. Edit the /etc/hosts file

Add:

192.168.1.140 master

192.168.1.141 slave1

192.168.1.142 slave2
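The same three lines must land in /etc/hosts on all three nodes. A hedged sketch that appends them idempotently; it targets a temp file here so it is safe to run anywhere — point `hosts` at /etc/hosts on the real machines:

```shell
# Append each mapping only if it is not already present (safe to re-run).
# "hosts" is a temp file for illustration; use /etc/hosts on the cluster.
hosts=$(mktemp)
for entry in '192.168.1.140 master' '192.168.1.141 slave1' '192.168.1.142 slave2'; do
    grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
cat "$hosts"
```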

5. Stop the firewall

systemctl stop firewalld

Disable it at boot:

systemctl disable firewalld

6. Disable SELinux

vi /etc/sysconfig/selinux

SELINUX=disabled

(A reboot is required for this to take effect; setenforce 0 disables enforcement for the current session.)

7. Passwordless SSH login

Run ssh-keygen and press Enter at every prompt.

On master, append id_rsa.pub to authorized_keys:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Copy authorized_keys into /root/.ssh/ on slave1 and slave2:

scp ~/.ssh/authorized_keys root@192.168.1.141:/root/.ssh/

scp ~/.ssh/authorized_keys root@192.168.1.142:/root/.ssh/
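Equivalently (a sketch, assuming the hostnames from step 4 resolve), ssh-copy-id from the openssh-clients package installs the key in one step per slave. Shown as a dry run that only prints the commands; drop the echo to execute them:

```shell
# Dry run: print the key-distribution command for each slave.
for host in slave1 slave2; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host"
done
```

Afterwards `ssh slave1` should log in without prompting for a password.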

8. Remove the bundled JDK

rpm -qa | grep java

Remove every Java package it lists, e.g.:

rpm -e java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64 --nodeps
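Rather than one rpm -e per package, the whole listing can be removed in a single pipeline. The grep filter is demonstrated below on sample package names (on the real node the input would come from rpm -qa):

```shell
# Demonstrate the filter on sample names; only the Java packages match.
printf '%s\n' \
    'java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64' \
    'tzdata-java-2013g-1.el6.noarch' \
    'bash-4.2.46-12.el7.x86_64' \
    | grep java
# On the node itself:  rpm -qa | grep java | xargs -r rpm -e --nodeps
```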

9. Configure a local yum repository

cd /run/media/root/CentOS\ 7\ x86_64/Packages/

mkdir -p /yum

cp * /yum

cd /yum

rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm 

rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm 

rpm -ivh createrepo-0.9.9-28.el7.noarch.rpm 

createrepo .

cd /etc/yum.repos.d/

rm -rf *

vi yum.repo

[local]

name=yum.repo

baseurl=file:///yum

gpgcheck=0

enabled=1

yum clean all

yum repolist

10. Configure the FTP service

yum -y install ftp* vsftpd*

On master, copy the packages into /var/ftp/pub and start the service (systemctl start vsftpd). In the yum repo file on each slave, use baseurl=ftp://192.168.1.140/pub
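Putting step 10 together as a sketch: on master, copy the Packages into vsftpd's anonymous root (/var/ftp/pub by default) and start vsftpd; on each slave, drop a repo file like the one below into /etc/yum.repos.d/. The repo id master-ftp is an assumed name, and the file is written to a temp path here so the sketch runs anywhere:

```shell
# Generate the slave-side repo file ("master-ftp" is an assumed repo id;
# the temp path stands in for /etc/yum.repos.d/master-ftp.repo).
repo=$(mktemp)
cat > "$repo" <<'EOF'
[master-ftp]
name=master-ftp
baseurl=ftp://192.168.1.140/pub
gpgcheck=0
enabled=1
EOF
cat "$repo"
```

Then `yum clean all && yum repolist` on the slave should list the master repo.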

11. Install the required packages with yum

yum -y install libc*

yum -y install openssh*

yum -y install man*

yum -y install compat-libstdc++-33*

yum -y install libaio-0.*

yum -y install libaio-devel*

yum -y install sysstat-9.*

yum -y install glibc-2.*

yum -y install glibc-devel-2.* glibc-headers-2.*

yum -y install ksh-2*

yum -y install libgcc-4.*

yum -y install libstdc++-4.*

yum -y install libstdc++-4.*.i686*

yum -y install libstdc++-devel-4.*

yum -y install gcc-4.*x86_64*

yum -y install gcc-c++-4.*x86_64*

yum -y install elfutils-libelf-0*x86_64* elfutils-libelf-devel-0*x86_64*

yum -y install elfutils-libelf-0*i686* elfutils-libelf-devel-0*i686*

yum -y install libtool-ltdl*i686*

yum -y install ncurses*i686*

yum -y install ncurses*

yum -y install readline*

yum -y install unixODBC*

yum -y install zlib

yum -y install zlib*

yum -y install openssl*

yum -y install patch*

yum -y install git*

yum -y install lzo-devel* zlib-devel* gcc* autoconf* automake* libtool*

yum -y install lzop*

yum -y install lrzsz*

yum -y install nc*

yum -y install glibc*

yum -y install gzip*

yum -y install gcc*

yum -y install gcc-c++*

yum -y install make*

yum -y install protobuf*

yum -y install protoc*

yum -y install cmake*

yum -y install openssl-devel*

yum -y install ncurses-devel*

yum -y install unzip*

yum -y install telnet*

yum -y install telnet-server*

yum -y install wget*

yum -y install svn*

yum -y install ntpdate*
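As an aside, yum accepts many package names per transaction, so the list above can be collapsed into a few lines. A dry-run sketch with a representative subset of the same globs (drop the echo to actually install):

```shell
# Same globs as in the list above, batched into one command (dry run).
pkgs="gcc* gcc-c++* make* cmake* autoconf* automake* libtool* zlib* \
openssl* openssl-devel* ncurses* ncurses-devel* readline* glibc* \
lzo-devel* lzop* lrzsz* wget* unzip* telnet* telnet-server* nc* \
svn* ntpdate* protobuf* patch* git*"
echo "yum -y install $pkgs"
```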

Disable unnecessary services (on CentOS 7, chkconfig forwards these requests to systemctl; services that are not installed can simply be skipped):

chkconfig autofs off

chkconfig acpid off

chkconfig sendmail off

chkconfig cups-config-daemon off

chkconfig cups off

chkconfig xfs off

chkconfig lm_sensors off

chkconfig gpm off

chkconfig openibd off

chkconfig pcmcia off

chkconfig cpuspeed off

chkconfig nfslock off

chkconfig iptables off

chkconfig ip6tables off

chkconfig rpcidmapd off

chkconfig apmd off

chkconfig arptables_jf off

chkconfig microcode_ctl off

chkconfig rpcgssd off

12. Install Java

Upload jdk-8u151-linux-x64.tar.gz to /usr/local:

Extract: tar xzvf jdk-8u151-linux-x64.tar.gz (typing "tar xzvf jdk" and pressing Tab completes the name)

Rename: mv jdk1.8.0_151/ java

Edit /etc/profile and append:

export JAVA_HOME=/usr/local/java

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin

Reload the environment:

source /etc/profile

Use scp to copy the java directory and /etc/profile to the slave nodes:

scp -r /usr/local/java root@slave1:/usr/local/

scp /etc/profile root@slave1:/etc/profile

(repeat both commands for slave2)

13. Install Hadoop

Upload the Hadoop tarball to /usr/local/:

Extract: tar xzvf hadoop-2.8.2.tar.gz

Rename: mv hadoop-2.8.2 hadoop

Edit /etc/profile and append:

export HADOOP_HOME=/usr/local/hadoop

#export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"

export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native

export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native

export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"

#export HADOOP_ROOT_LOGGER=DEBUG,console

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload the environment:

source /etc/profile

Edit the configuration files:

cd /usr/local/hadoop/etc/hadoop

 (1) vi hadoop-env.sh

 Set: export JAVA_HOME=/usr/local/java

 (2) vi core-site.xml

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://192.168.1.140:9000</value>
   </property>
   <property>
     <name>io.file.buffer.size</name>
     <value>131072</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>file:/usr/local/hadoop/tmp</value>
     <description>A base for other temporary directories.</description>
   </property>
   <property>
     <name>hadoop.proxyuser.root.hosts</name>
     <value>*</value>
   </property>
   <property>
     <name>hadoop.proxyuser.root.groups</name>
     <value>*</value>
   </property>
 </configuration>

 (3) vi yarn-site.xml

 <configuration>
   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
   <property>
     <name>yarn.resourcemanager.address</name>
     <value>192.168.1.140:8032</value>
   </property>
   <property>
     <name>yarn.resourcemanager.scheduler.address</name>
     <value>192.168.1.140:8030</value>
   </property>
   <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
     <value>192.168.1.140:8035</value>
   </property>
   <property>
     <name>yarn.resourcemanager.admin.address</name>
     <value>192.168.1.140:8033</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address</name>
     <value>192.168.1.140:8088</value>
   </property>
 </configuration>

  (4) vi hdfs-site.xml

 <configuration>
   <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:/usr/local/hadoop/hdfs/name</value>
   </property>
   <property>
     <name>dfs.datanode.data.dir</name>
     <value>file:/usr/local/hadoop/hdfs/data</value>
   </property>
   <property>
     <name>dfs.replication</name>
     <value>3</value>
   </property>
   <property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>192.168.1.140:9001</value>
   </property>
   <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
   </property>
 </configuration>

 (5) vi mapred-site.xml (copy it from mapred-site.xml.template first)

 <configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>
 </configuration>

 (6) Edit the slaves file and list the worker nodes:

192.168.1.140

192.168.1.141

192.168.1.142

 14. Copy the configured files from master to slave1 and slave2, then run source /etc/profile on each node.
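Step 14 as a dry-run sketch (it only prints the commands; remove the echo to execute). After copying, run source /etc/profile on each slave:

```shell
# Dry run: print the distribution commands for each slave node.
for host in slave1 slave2; do
    echo "scp -r /usr/local/hadoop root@$host:/usr/local/"
    echo "scp /etc/profile root@$host:/etc/profile"
done
```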

 15. Format the NameNode

 hadoop namenode -format

 (In Hadoop 2.x this form is deprecated; hdfs namenode -format is the current equivalent.)

 16. Start the cluster

 start-all.sh

 master :

 [root@master ~]# jps

4705 SecondaryNameNode

8712 Jps

4299 NameNode

4891 ResourceManager

5035 NodeManager

4447 DataNode

 slave1:

 [root@slave1 ~]# jps

2549 DataNode

2729 NodeManager

3258 Jps

 slave2:

 [root@slave2 ~]# jps

3056 Jps

2596 DataNode

2938 NodeManager

 17. In a browser, open the master address on port 8088 (the MapReduce/YARN management UI)

 

 Open the master address on port 50070 (the HDFS management UI)
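The same two pages can be probed from the command line; a dry-run sketch (drop the echo to run the curl calls; an HTTP 200 means the daemon is serving its UI):

```shell
# Dry run: one curl reachability check per web UI port.
for port in 8088 50070; do
    echo "curl -s -o /dev/null -w '%{http_code}' http://192.168.1.140:$port"
done
```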

 

 

 


