Installing Hadoop in Pseudo-Distributed Mode - 创新互联

This article walks through the steps for installing Hadoop in pseudo-distributed mode. The procedure is simple, quick, and practical; follow along below.


1. Extract the /opt/software/hadoop-2.8.1.tar.gz archive
    [root@hadoop002 software]$ cd   /opt/software/
    [root@hadoop002 software]$ pwd
    /opt/software
    [root@hadoop002 software]$ tar -xzvf  hadoop-2.8.1.tar.gz
    [root@hadoop002 software]$ ll
    total 208784
    drwxr-xr-x.  6 root   root        4096 Nov 10  2015 apache-maven-3.3.9
    -rw-r--r--.  1 root   root     8617253 Apr 23 15:14 apache-maven-3.3.9-bin.zip
    drwxr-xr-x.  7 root   root        4096 Aug 21  2009 findbugs-1.3.9
    -rw-r--r--.  1 root   root     7546219 Apr 23 15:14 findbugs-1.3.9.zip
    drwxr-xr-x. 10 root   root   4096 Apr 23 16:36 hadoop-2.8.1
    -rw-r--r--.  1 root   root   194976866 Apr 23 15:40 hadoop-2.8.1.tar.gz
    drwxr-xr-x. 10 109965   5000      4096 Apr 23 15:26 protobuf-2.5.0
    -rw-r--r--.  1 root   root     2401901 Apr 23 15:15 protobuf-2.5.0.tar.gz

2. Create a symlink
    [root@hadoop002 software]$ ln -s /opt/software/hadoop-2.8.1 /opt/software/hadoop
    [root@hadoop002 software]$ ll
    total 208784
    drwxr-xr-x.  6 root   root        4096 Nov 10  2015 apache-maven-3.3.9
    -rw-r--r--.  1 root   root     8617253 Apr 23 15:14 apache-maven-3.3.9-bin.zip
    drwxr-xr-x.  7 root   root        4096 Aug 21  2009 findbugs-1.3.9
    -rw-r--r--.  1 root   root     7546219 Apr 23 15:14 findbugs-1.3.9.zip
    lrwxrwxrwx.  1 root    root   26 Apr 23 15:41 hadoop -> /opt/software/hadoop-2.8.1
    drwxr-xr-x. 10 root    root   4096 Apr 23 16:36 hadoop-2.8.1
    -rw-r--r--.  1 root   root   194976866 Apr 23 15:40 hadoop-2.8.1.tar.gz
    drwxr-xr-x. 10 109965   5000      4096 Apr 23 15:26 protobuf-2.5.0
    -rw-r--r--.  1 root   root     2401901 Apr 23 15:15 protobuf-2.5.0.tar.gz
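The version-plus-symlink layout above makes future upgrades painless: all scripts and environment variables point at the stable name `hadoop`, and only the link moves when a new release is installed. A minimal sketch of the pattern in a throwaway directory (the 2.8.5 directory is a hypothetical future release):

```shell
#!/bin/sh
# Demonstrate the version-symlink pattern in a temporary sandbox.
set -e
sandbox=$(mktemp -d)
mkdir "$sandbox/hadoop-2.8.1" "$sandbox/hadoop-2.8.5"

# Point the stable name at the current release.
ln -s "$sandbox/hadoop-2.8.1" "$sandbox/hadoop"
readlink "$sandbox/hadoop"        # -> .../hadoop-2.8.1

# Upgrading is one atomic re-point; -n keeps ln from descending
# into the old link, -f replaces it in place.
ln -sfn "$sandbox/hadoop-2.8.5" "$sandbox/hadoop"
readlink "$sandbox/hadoop"        # -> .../hadoop-2.8.5

rm -rf "$sandbox"
```

Because $HADOOP_HOME below points at the link, nothing else needs to change after an upgrade.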


3.  Set environment variables
    [root@hadoop002 software]$  vi /etc/profile
    export HADOOP_HOME=/opt/software/hadoop
    export PATH=$HADOOP_HOME/bin:$FINDBUGS_HOME/bin:$PROTOC_HOME/bin:$MAVEN_HOME/bin:$MYSQL_HOME/bin:$JAVA_HOME/bin:$PATH

    # Make the environment variables take effect
    [root@hadoop002 software]$ source /etc/profile


4. Change the owner and group of the files under the symlink
    [root@hadoop002 software]$ chown -R hadoop:hadoop hadoop/*

5. Change the owner and group of hadoop-2.8.1 and its contents to hadoop
    [root@hadoop002 software]$ chown -R  hadoop:hadoop hadoop-2.8.1
    [root@hadoop002 software]$ chown -R  hadoop:hadoop hadoop-2.8.1/*

6.  Switch to the hadoop user
    [root@hadoop002 hadoop]# su - hadoop
    [hadoop@hadoop002 ~]$ cd /opt/software/hadoop
    [hadoop@hadoop002 hadoop]$ ll
    total 148
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 bin
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 etc
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 include
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 lib
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 libexec
    drwxr-xr-x. 2 hadoop hadoop  4096 Apr 18 14:11 sbin
    drwxr-xr-x. 3 hadoop hadoop  4096 Apr 18 14:10 share

    bin:  executables
    etc:  configuration files
    sbin: shell scripts for starting and stopping the HDFS and YARN services

7. Delete the txt files
    [hadoop@hadoop002 hadoop]$ rm -f *.txt

8. Configure passwordless SSH trust for the hadoop user
    [hadoop@hadoop002 hadoop]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/hadoop/.ssh'.
    Your identification has been saved in /home/hadoop/.ssh/id_rsa.
    Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
    The key fingerprint is:
    29:06:4c:1a:a4:74:b4:f4:c2:43:8a:07:68:00:fa:e4 hadoop@hadoop002 
    The key's randomart image is:
    +--[ RSA 2048]----+
    |Bo+=.            |
    |=+*=o            |
    |=.+=o.           |
    | =  o.   .       |
    |  E   o S        |
    |     . .         |
    |                 |
    |                 |
    |                 |
    +-----------------+

    [hadoop@hadoop002 hadoop]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    [hadoop@hadoop002 hadoop]$ chmod 0600 ~/.ssh/authorized_keys
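The `chmod 0600` above matters: sshd refuses to honor an authorized_keys file that is group- or world-writable, and the failure is a silent password prompt rather than an error. A quick way to audit the mode (demonstrated on a temp file; `stat -c` is GNU coreutils, on macOS the flag would be `stat -f %Lp`):

```shell
#!/bin/sh
# Verify a file has the 0600 mode that sshd requires for authorized_keys.
set -e
f=$(mktemp)
chmod 0600 "$f"
mode=$(stat -c %a "$f")      # GNU stat; prints the octal permission bits
echo "mode=$mode"
[ "$mode" = "600" ] && echo "permissions OK"
rm -f "$f"
```

In practice you would run the `stat` line against ~/.ssh/authorized_keys and ~/.ssh itself (which should be 0700).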

    [hadoop@hadoop002 hadoop]$ ssh hadoop002 date  # the first connection asks you to type yes
    The authenticity of host 'hadoop002 (192.168.90.164)' can't be established.
    RSA key fingerprint is 3a:51:6d:9b:94:d3:91:bf:fd:ab:da:0a:5b:8c:f2:6c.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop002 ,192.168.90.164' (RSA) to the list of known hosts.
    Tue May 22 10:46:32 CST 2018

    [hadoop@hadoop002 hadoop]$ ssh hadoop002 date  # no yes prompt this time, so the trust is in place
    Tue May 22 10:46:37 CST 2018

9. Edit the configuration files
    [hadoop@hadoop002 hadoop]$ pwd
    /opt/software/hadoop
    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>


    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

10.   Format and start HDFS
    # Set JAVA_HOME in hadoop-env.sh
    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.8.0_11

    # Check the path of the hdfs command
    [hadoop@hadoop002 hadoop]$ which hdfs
    /opt/software/hadoop/bin/hdfs

    # Format the NameNode
    [hadoop@hadoop002 hadoop]$  bin/hdfs namenode -format
    ........
    18/04/19 12:53:42 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.

    # Start HDFS
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [192.168.90.164]
    192.168.90.164: namenode running as process 15421. Stop it first.
    localhost: datanode running as process 15523. Stop it first.
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: secondarynamenode running as process 15717. Stop it first.
    # The "Stop it first" messages mean the daemons were already running from an
    # earlier attempt; run sbin/stop-dfs.sh first if you want a clean restart.

    # Check the running processes with jps
    [hadoop@hadoop002 hadoop]$ jps
    15523 DataNode
    15717 SecondaryNameNode
    15421 NameNode
    16877 Jps
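All three HDFS daemons should show up in jps after a successful start. A small hedged helper that checks jps-style output for the expected names; the sample input below is copied from the transcript above, and in real use you would capture `jps_output=$(jps)` instead:

```shell
#!/bin/sh
# Check that the expected HDFS daemons appear in jps-style output.
# Sample input mirrors the transcript; in practice: jps_output=$(jps)
jps_output='15523 DataNode
15717 SecondaryNameNode
15421 NameNode
16877 Jps'

for daemon in NameNode DataNode SecondaryNameNode; do
    # grep -w matches whole words, so "NameNode" will not
    # accidentally match inside "SecondaryNameNode".
    if printf '%s\n' "$jps_output" | grep -qw "$daemon"; then
        echo "OK:      $daemon"
    else
        echo "MISSING: $daemon"
    fi
done
```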

11. Configure the DFS daemons to start under the hadoop002 hostname
    # Startup before the change
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh 
    Starting namenodes on [192.168.90.164]
    192.168.90.164: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-localhost.localdomain.out
    localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-localhost.localdomain.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out


    Goal: start all three daemons on hadoop002.
    Where each daemon currently gets its address:
    namenode:          192.168.90.164    bin/hdfs getconf -namenodes    etc/hadoop/core-site.xml
    datanode:          localhost         using default slaves file      etc/hadoop/slaves   # change its contents to hadoop002
    secondarynamenode: 0.0.0.0


    # Edit etc/hadoop/core-site.xml:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop002:9000</value>
        </property>
    </configuration>

    # Change the contents of the slaves file to hadoop002
    [hadoop@hadoop002 hadoop]$ vi  etc/hadoop/slaves
    #localhost
    hadoop002

    # Edit etc/hadoop/hdfs-site.xml
    Add:
    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>hadoop002:50090</value>   <!-- default: 0.0.0.0:50090 -->
        </property>
        <property>
            <name>dfs.namenode.secondary.https-address</name>
            <value>hadoop002:50091</value>   <!-- default: 0.0.0.0:50091 -->
        </property>
    </configuration>

    0.0.0.0 is a special address: a daemon bound to it listens on every network interface of the current machine, rather than on one specific IP.
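The difference is easy to see with a plain socket. A hedged sketch driven from the shell via Python's stdlib (port 0 is just "let the OS pick a free port"):

```shell
#!/bin/sh
# Bind one socket to 0.0.0.0 (all interfaces) and one to 127.0.0.1
# (loopback only), then print the address each one is listening on.
python3 <<'PY'
import socket

for addr in ("0.0.0.0", "127.0.0.1"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((addr, 0))            # port 0 = OS assigns a free port
    s.listen(1)
    host, port = s.getsockname()
    print(f"listening on {host} (port {port})")
    s.close()
PY
```

A service bound to 0.0.0.0 is reachable from other hosts; one bound to 127.0.0.1 is not, which is why the secondary namenode's default of 0.0.0.0 still worked before this change.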

    # With the configuration in place, start again
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh
    Starting namenodes on [hadoop002]
    hadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop002.out
    hadoop002: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop002.out
    Starting secondary namenodes [hadoop002]
    hadoop002: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop002.out

    The installation is now complete.


