Hadoop 1.1.2 Pseudo-Distributed Installation: An Example Walkthrough

This article walks through a pseudo-distributed installation of Hadoop 1.1.2. It should serve as a practical reference; interested readers are encouraged to follow along step by step.


1. Pseudo-Distributed Installation

      1.1 Configure the IP address

                   (1) Enable the virtual network adapter for VMware or VirtualBox

                   (2) In VMware or VirtualBox, set the network connection mode to host-only

                   (3) In Linux, change the IP address: right-click the network icon in the top-right corner and choose Edit Connections....

                            **** The IP must be on the same subnet as the Windows-side virtual adapter, and the gateway must actually exist.

                   (4) Restart the network service: service network restart

                            **** Watch for errors here, such as a "no suitable adapter" error.

                   (5) Verify: run ifconfig
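As an alternative to the GUI in step (3), the static address can be written directly into the interface configuration file. A minimal sketch, assuming the device is eth0 and reusing the 192.168.80.x addresses from section 1.5 below (adjust to your own subnet):

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth0 on a RHEL/CentOS-style system.
# Device name and addresses are assumptions; match them to your virtual adapter.
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.80.100
NETMASK=255.255.255.0
GATEWAY=192.168.80.1
```

After editing the file, `service network restart` from step (4) applies the change.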

      1.2 Stop the firewall

                   (1) Run service iptables stop to stop the firewall

                   (2) Verify: run service iptables status

      1.3 Keep the firewall from starting at boot

                   (1) Run chkconfig iptables off

                   (2) Verify: run chkconfig --list | grep iptables

      1.4 Change the hostname

                   (1) Run hostname cloud4 to change the hostname for the current session

                   (2) Verify: run hostname

                   (3) Run vi /etc/sysconfig/network and change the HOSTNAME entry to make the change permanent

                   (4) Verify: restart the machine with reboot, then check the hostname again

      1.5 Bind the IP address to the hostname

                   (1) Run vi /etc/hosts

                            Append this line at the end of the file: 192.168.80.100 cloud4

                   (2) Verify: ping cloud4

                   (3) On Windows, add the same hostname-to-IP mapping in:

C:\Windows\System32\drivers\etc\hosts
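The append in step (1) can also be scripted. Sketched here against a temporary file so the real /etc/hosts is untouched; on the target system the file to edit is /etc/hosts itself:

```shell
# Append the hostname mapping (demonstrated on a temp file standing in
# for /etc/hosts, so this sketch is safe to run anywhere)
hosts_file=$(mktemp)
echo "192.168.80.100 cloud4" >> "$hosts_file"
grep cloud4 "$hosts_file"    # prints: 192.168.80.100 cloud4
```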

      1.6 Passwordless SSH login

                   (1) Run ssh-keygen -t rsa (then press Enter at every prompt); the key pair is created under /root/.ssh/

                   (2) Run cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys to create the authorized-keys file

                   (3) Verify: ssh localhost (or ssh <hostname>)
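Steps (1) and (2) can be combined into one non-interactive script. A minimal sketch, with the keys written to a temporary directory here rather than the /root/.ssh the tutorial uses:

```shell
# Generate an RSA key pair without prompts and authorize it for login
# (temp dir stands in for /root/.ssh in this sketch)
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q   # -N "" = empty passphrase
cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"              # sshd requires strict permissions
```

Appending with `cat ... >>` rather than `cp` is the safer habit: it preserves any keys already authorized on the machine.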

      1.7 Install the JDK

                   (1) Use WinSCP to copy the JDK and Hadoop archives to /root/Downloads on the Linux machine

                   (2) cp /root/Downloads/* /usr/local

                   (3) cd /usr/local

                            Grant execute permission: chmod u+x jdk-6u24-linux-i586.bin

                   (4) ./jdk-6u24-linux-i586.bin

                   (5) Rename the directory: mv jdk1.6.0_24 jdk

                   (6) Run vi /etc/profile to set the environment variables

                            Add two lines:        export JAVA_HOME=/usr/local/jdk

                                                  export PATH=.:$JAVA_HOME/bin:$PATH

                            Save and exit

                      Then run source /etc/profile

                   (7) Verify: run java -version
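The /etc/profile change in step (6) boils down to appending two export lines and re-sourcing the file. A sketch against a temp file, so the real /etc/profile is left alone:

```shell
# Append the JAVA_HOME/PATH exports and re-source
# (a temp file stands in for /etc/profile in this sketch)
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=.:$JAVA_HOME/bin:$PATH
EOF
. "$profile"
echo "$JAVA_HOME"    # prints /usr/local/jdk
```

Note the quoted heredoc delimiter ('EOF'): it keeps $JAVA_HOME from being expanded at write time, so the literal text lands in the profile just as it would with vi.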

      1.8 Install Hadoop

                   (1) Extract the archive: tar -zxvf hadoop-1.1.2.tar.gz

                   (2) Rename the directory: mv hadoop-1.1.2 hadoop

                   (3) Run vi /etc/profile to update the environment variables

                            Add one line:         export HADOOP_HOME=/usr/local/hadoop

                            Change one line:      export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

                            Save and exit

                      Then run source /etc/profile

                   (4) Verify: run hadoop

                   (5) Edit the configuration files under conf/: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml

                            <1> Line 9 of hadoop-env.sh

                            export JAVA_HOME=/usr/local/jdk/

                            <2> core-site.xml

                            <configuration>
                                <property>
                                    <name>fs.default.name</name>
                                    <value>hdfs://cloud4:9000</value>
                                    <description>change your own hostname</description>
                                </property>
                                <property>
                                    <name>hadoop.tmp.dir</name>
                                    <value>/usr/local/hadoop/tmp</value>
                                </property>
                            </configuration>

                            <3> hdfs-site.xml

                            <configuration>
                                <property>
                                    <name>dfs.replication</name>
                                    <value>1</value>
                                    <description>replica count; the default is 3</description>
                                </property>
                                <property>
                                    <name>dfs.permissions</name>
                                    <value>false</value>
                                    <description>whether HDFS permission checking is enabled</description>
                                </property>
                            </configuration>

For the super-user (the identity the NameNode process runs as), the system performs no permission checks at all regardless of this setting.

                            <4> mapred-site.xml

                            <configuration>
                                <property>
                                    <name>mapred.job.tracker</name>
                                    <value>cloud4:9001</value>
                                    <description>change your own hostname</description>
                                </property>
                            </configuration>

                   (6) Format the NameNode: hadoop namenode -format

                   (7) Start Hadoop: start-all.sh

                   (8) Verify:

                            <1> Run jps to list the Java processes; there should be five: NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker

                            <2> Check in a browser: http://cloud4:50070 and http://cloud4:50030

                                     ***** For the hostname URLs to resolve on Windows, edit the hosts file under C:/Windows/system32/drivers/etc/

      1.9 Suppressing the deprecation warning

[root@cloud4 ~]# hadoop fs -ls /

Warning: $HADOOP_HOME is deprecated.   (this is the warning to suppress)

To do so:

[root@cloud4 ~]# vi /etc/profile   (add one line)

# /etc/profile

export HADOOP_HOME_WARN_SUPPRESS=1

export JAVA_HOME=/usr/local/jdk

export HADOOP_HOME=/usr/local/hadoop

export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

[root@cloud4 ~]# source /etc/profile   (takes effect immediately)


