[Experiment] Building a Hadoop Big Data Platform (Single-Node)

Note: the site owner completed this open-source experiment in October 2019, then organized and annotated all of the commands used into the following tutorial.

Software preparation:

Download the Hadoop software needed to build the platform from the official Hadoop website (this experiment uses hadoop-3.2.1.tar.gz):

http://hadoop.apache.org
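
If the server itself has Internet access, the tarball can also be fetched on the command line. A sketch, assuming the Apache release-archive URL below is still valid:

# wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz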

Main text:

Step 1: Hardware requirements

1) CPU: dual-core
2) Memory: 2 GB or more
3) Disk: 10 GB or more
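
Whether a machine meets these requirements can be verified with standard CentOS 7 commands (a quick check added here, not part of the original procedure):

# nproc
# free -h
# df -h /

(Note: nproc prints the CPU core count, free -h the memory size, and df -h / the free space on the root filesystem)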

Step 2: System requirements

1) The server must run CentOS Linux 7
2) The server must have a usable software repository configured
3) The server must be able to ping its own hostname (a sample check is shown after this list)
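
If the hostname does not yet resolve, a common fix on a single node is to map it to the loopback address in /etc/hosts. A minimal sketch, assuming the hypothetical hostname hadoop-single:

# echo "127.0.0.1 hadoop-single" >> /etc/hosts
# ping -c 3 hadoop-single

(Note: replace hadoop-single with your server's actual hostname, as shown by the hostname command)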

Step 3: Software requirements

3.1 Install the Java environment that Hadoop requires

# yum install java-1.8.0-openjdk-devel

(Note: the version of java-openjdk-devel installed here is 1.8.0)
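
A quick way to confirm the installation succeeded (not part of the original procedure) is to query the runtime version, which should report an OpenJDK 1.8.0 build:

# java -version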

3.2 Show which Java roles (JVM processes) are running on this machine

# jps
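
Since no Hadoop daemon has been started yet, jps typically lists only its own process at this point (the PID below is illustrative):

# jps
1234 Jps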

Step 4: Install Hadoop

4.1 Unpack the Hadoop tarball

# tar -xvf hadoop-3.2.1.tar.gz

(Note: the Hadoop version installed here is 3.2.1)

4.2 Create the Hadoop installation directory

# mkdir /usr/local/hadoop

4.3 Install Hadoop

# mv hadoop-3.2.1/* /usr/local/hadoop

(Note: this moves the contents of the unpacked hadoop-3.2.1 directory into /usr/local/hadoop)
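
As a sanity check (not in the original write-up), listing the installation directory should now show Hadoop's standard layout, including the bin, etc, sbin and share subdirectories:

# ls /usr/local/hadoop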

4.4 The first attempt to run Hadoop reports an error

/usr/local/hadoop/bin/hadoop
Error: JAVA_HOME is not set and could not be found.

(Note: the cause is that Hadoop cannot find the Java installation it needs, because JAVA_HOME is not set in its environment configuration file)
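
This is easy to confirm: installing the JDK through yum does not set JAVA_HOME, so the variable is empty by default:

# echo $JAVA_HOME

(Note: an empty line of output means the variable is unset)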

4.5 Fix the error from the first Hadoop run

4.5.1 Approach to fixing the error

First confirm where the java-1.8.0-openjdk-devel package installed above actually lives, then write that location into Hadoop's configuration file.

4.5.2 Show the installation location of the java-1.8.0-openjdk package (the runtime package pulled in by java-1.8.0-openjdk-devel)
# rpm -ql java-1.8.0-openjdk
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/bin/policytool
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/lib/amd64/libawt_xawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/lib/amd64/libjawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/lib/amd64/libjsoundalsa.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/lib/amd64/libsplashscreen.so
/usr/share/applications/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64-policytool.desktop
/usr/share/icons/hicolor/16x16/apps/java-1.8.0.png
/usr/share/icons/hicolor/24x24/apps/java-1.8.0.png
/usr/share/icons/hicolor/32x32/apps/java-1.8.0.png
/usr/share/icons/hicolor/48x48/apps/java-1.8.0.png

(Note: this output shows that the installation directory is /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre)
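
The same directory can also be found without reading the rpm output, by resolving the symlink chain behind the java binary (a sketch; it assumes the java on the PATH is the OpenJDK installed above):

# readlink -f /usr/bin/java
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/bin/java

Stripping the trailing /bin/java leaves exactly the JAVA_HOME value written into the configuration below.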

4.5.3 Set the Java installation location and the Hadoop configuration directory in Hadoop's environment file
# vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh

Change the following content:

......
54 # export JAVA_HOME=
......
68 # export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
......

to:

......
54 export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre"
......
68 export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
......
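
For a scripted setup, the same two settings can be appended non-interactively instead of being edited in vim (a sketch; it relies on the fact that an export added at the end of hadoop-env.sh overrides the commented-out defaults above it):

# echo 'export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre"' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
# echo 'export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh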

Step 5: Run Hadoop

# /usr/local/hadoop/bin/hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

buildpaths                       attempt to add class files from build tree
--config dir                     Hadoop config directory
--debug                          turn on shell script debug mode
--help                           usage information
hostnames list[,of,host,names]   hosts to use in slave mode
hosts filename                   list of hosts to use in slave mode
loglevel level                   set the log4j level for this command
workers                          turn on worker mode

  SUBCOMMAND is one of:


    Admin Commands:

daemonlog     get/set the log level for each daemon

    Client Commands:

archive       create a Hadoop archive
checknative   check native Hadoop and compression libraries availability
classpath     prints the class path needed to get the Hadoop jar and the required libraries
conftest      validate configuration XML files
credential    interact with credential providers
distch        distributed metadata changer
distcp        copy file or directories recursively
dtutil        operations related to delegation tokens
envvars       display computed Hadoop environment variables
fs            run a generic filesystem user client
gridmix       submit a mix of synthetic job, modeling a profiled from production load
jar <jar>     run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this
              command.
jnipath       prints the java.library.path
kdiag         Diagnose Kerberos Problems
kerbname      show auth_to_local principal conversion
key           manage keys via the KeyProvider
rumenfolder   scale a rumen input trace
rumentrace    convert logs into a rumen trace
s3guard       manage metadata on S3
trace         view and modify Hadoop tracing settings
version       print the version

    Daemon Commands:

kms           run KMS, the Key Management Server

SUBCOMMAND may print help when invoked w/o parameters or with -h.
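
With JAVA_HOME set, the hadoop command now prints its usage text instead of the earlier error. A further check (not in the original write-up) is to print the version, which should report 3.2.1:

# /usr/local/hadoop/bin/hadoop version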