Download
$ wget https://github.com/apache/ranger/archive/release-ranger-2.0.0.tar.gz
Extract
$ tar zxvf release-ranger-2.0.0.tar.gz -C /opt/module/
Build
$ mvn clean compile package assembly:assembly install -DskipTests
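Optionally confirm the toolchain first; the build here uses JDK 8 (matching JAVA_VERSION_REQUIRED in the admin install.properties further down) and Maven:
$ java -version    # expect 1.8.x
$ mvn -version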
Result
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ranger 2.0.0:
[INFO]
[INFO] ranger ............................................. SUCCESS [ 4.100 s]
[INFO] Jdbc SQL Connector ................................. SUCCESS [ 1.290 s]
[INFO] Credential Support ................................. SUCCESS [ 1.137 s]
[INFO] Audit Component .................................... SUCCESS [ 2.280 s]
[INFO] Common library for Plugins ......................... SUCCESS [ 6.534 s]
[INFO] Installer Support Component ........................ SUCCESS [ 0.494 s]
[INFO] Credential Builder ................................. SUCCESS [ 1.100 s]
[INFO] Embedded Web Server Invoker ........................ SUCCESS [ 1.138 s]
[INFO] Key Management Service ............................. SUCCESS [ 2.092 s]
[INFO] ranger-plugin-classloader .......................... SUCCESS [ 0.566 s]
[INFO] HBase Security Plugin Shim ......................... SUCCESS [ 2.343 s]
[INFO] HBase Security Plugin .............................. SUCCESS [ 2.907 s]
[INFO] Hdfs Security Plugin ............................... SUCCESS [ 1.357 s]
[INFO] Hive Security Plugin ............................... SUCCESS [ 5.140 s]
[INFO] Knox Security Plugin Shim .......................... SUCCESS [ 1.802 s]
[INFO] Knox Security Plugin ............................... SUCCESS [ 2.060 s]
[INFO] Storm Security Plugin .............................. SUCCESS [ 1.495 s]
[INFO] YARN Security Plugin ............................... SUCCESS [ 2.249 s]
[INFO] Ozone Security Plugin .............................. SUCCESS [ 1.619 s]
[INFO] Ranger Util ........................................ SUCCESS [ 2.068 s]
[INFO] Unix Authentication Client ......................... SUCCESS [ 0.740 s]
[INFO] Security Admin Web Application ..................... SUCCESS [ 52.486 s]
[INFO] KAFKA Security Plugin .............................. SUCCESS [ 1.327 s]
[INFO] SOLR Security Plugin ............................... SUCCESS [ 1.942 s]
[INFO] NiFi Security Plugin ............................... SUCCESS [ 1.336 s]
[INFO] NiFi Registry Security Plugin ...................... SUCCESS [ 1.365 s]
[INFO] Unix User Group Synchronizer ....................... SUCCESS [ 3.101 s]
[INFO] Ldap Config Check Tool ............................. SUCCESS [ 0.613 s]
[INFO] Unix Authentication Service ........................ SUCCESS [ 1.107 s]
[INFO] KMS Security Plugin ................................ SUCCESS [ 1.367 s]
[INFO] Tag Synchronizer ................................... SUCCESS [ 1.750 s]
[INFO] Hdfs Security Plugin Shim .......................... SUCCESS [ 1.081 s]
[INFO] Hive Security Plugin Shim .......................... SUCCESS [ 3.308 s]
[INFO] YARN Security Plugin Shim .......................... SUCCESS [ 1.254 s]
[INFO] OZONE Security Plugin Shim ......................... SUCCESS [ 1.634 s]
[INFO] Storm Security Plugin shim ......................... SUCCESS [ 1.279 s]
[INFO] KAFKA Security Plugin Shim ......................... SUCCESS [ 1.097 s]
[INFO] SOLR Security Plugin Shim .......................... SUCCESS [ 2.864 s]
[INFO] Atlas Security Plugin Shim ......................... SUCCESS [ 2.036 s]
[INFO] KMS Security Plugin Shim ........................... SUCCESS [ 1.270 s]
[INFO] ranger-examples .................................... SUCCESS [ 0.196 s]
[INFO] Ranger Examples - Conditions and ContextEnrichers .. SUCCESS [ 1.125 s]
[INFO] Ranger Examples - SampleApp ........................ SUCCESS [ 0.607 s]
[INFO] Ranger Examples - Ranger Plugin for SampleApp ...... SUCCESS [ 1.220 s]
[INFO] Ranger Tools ....................................... SUCCESS [ 1.288 s]
[INFO] Atlas Security Plugin .............................. SUCCESS [ 1.479 s]
[INFO] Sqoop Security Plugin .............................. SUCCESS [ 1.230 s]
[INFO] Sqoop Security Plugin Shim ......................... SUCCESS [ 1.006 s]
[INFO] Kylin Security Plugin .............................. SUCCESS [ 1.293 s]
[INFO] Kylin Security Plugin Shim ......................... SUCCESS [ 1.150 s]
[INFO] Elasticsearch Security Plugin Shim ................. SUCCESS [ 0.504 s]
[INFO] Elasticsearch Security Plugin ...................... SUCCESS [ 1.110 s]
[INFO] Presto Security Plugin ............................. SUCCESS [ 1.163 s]
[INFO] Presto Security Plugin Shim ........................ SUCCESS [ 1.130 s]
[INFO] Unix Native Authenticator .......................... SUCCESS [ 1.389 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:05 min
[INFO] Finished at: 2019-11-25T09:04:59+08:00
[INFO] ------------------------------------------------------------------------
### Download Spark
wget https://www.apache.org/dyn/closer.lua/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz
### Extract
tar zxvf spark-2.4.3-bin-hadoop2.7.tgz -C /opt/module/
### Copy the config templates
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
cp slaves.template slaves
### Edit spark-env.sh
grep -vE '#|^$' spark-env.sh
export SPARK_DIST_CLASSPATH=$(/opt/module/hadoop-3.2.0/bin/hadoop classpath)
JAVA_HOME=/opt/module/jdk1.8.0_211
SCALA_HOME=/opt/module/scala-2.13.0
HADOOP_CONF_DIR=/opt/module/hadoop-3.2.0/etc/hadoop
HADOOP_HOME=/opt/module/hadoop-3.2.0
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop11:2181,hadoop12:2181,hadoop13:2181 -Dspark.deploy.zookeeper.dir=/spark"
### Configure spark-defaults.conf
grep -vE '#|^$' conf/spark-defaults.conf
spark.master spark://hadoop11:7077,hadoop12:7077
### Edit slaves
grep -vE '#|^$' slaves
hadoop13
hadoop14
### Add environment variables
#SPARK_HOME
export SPARK_HOME=/opt/module/spark-2.4.3-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
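These exports are typically appended to /etc/profile (or the hadoop user's ~/.bashrc) and take effect after re-sourcing it; a small sketch assuming /etc/profile:
source /etc/profile
echo $SPARK_HOME    # should print /opt/module/spark-2.4.3-bin-hadoop2.7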
### Sync to the other nodes
xsync.sh spark-2.4.3-bin-hadoop2.7/
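xsync.sh is a custom distribution helper used throughout this cluster; a minimal sketch of the idea, assuming passwordless SSH from hadoop11 to the other nodes:
for host in hadoop12 hadoop13 hadoop14; do
  rsync -av /opt/module/spark-2.4.3-bin-hadoop2.7/ ${host}:/opt/module/spark-2.4.3-bin-hadoop2.7/
done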
### Start Spark
./sbin/start-all.sh
### Check
xcall.sh jps
================== hadoop11 jps==================
3056 QuorumPeerMain
3745 DFSZKFailoverController
3539 JournalNode
10691 Master
10875 Jps
3294 NameNode
3918 NodeManager
6990 HMaster
================== hadoop12 jps==================
7796 Master
2629 NameNode
2935 DFSZKFailoverController
3111 NodeManager
2793 JournalNode
4073 HMaster
2492 QuorumPeerMain
7884 Jps
2703 DataNode
================== hadoop13 jps==================
2693 JournalNode
6631 Worker
2457 QuorumPeerMain
3066 ResourceManager
6747 Jps
2588 DataNode
3997 HRegionServer
3199 NodeManager
================== hadoop14 jps==================
2755 ResourceManager
3767 HRegionServer
6007 Jps
5896 Worker
2603 DataNode
2846 NodeManager
### Verify via the web UI
![avatar](../img/spark/01_01.png)
### Failover test
[hadoop@hadoop11 spark-2.4.3-bin-hadoop2.7]$ ./sbin/stop-master.sh
stopping org.apache.spark.deploy.master.Master
![avatar](../img/spark/01_02.png)
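The takeover can also be confirmed from the standby Master's log on hadoop12, which records a leader-election message once it becomes ALIVE (a sketch; the log file name depends on the user and hostname):
grep -i "elected leader" /opt/module/spark-2.4.3-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop12.out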
### Verification
[hadoop@hadoop11 spark-2.4.3-bin-hadoop2.7]$ bin/run-example SparkPi 2>&1 | grep "Pi is"
Pi is roughly 3.1379156895784477
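run-example picks up the master from spark-defaults.conf; the same check can be made against the HA master pair explicitly with spark-submit (a sketch; the examples jar is the one shipped in the 2.4.3 binary distribution):
./bin/spark-submit --master spark://hadoop11:7077,hadoop12:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.4.3.jar 100 2>&1 | grep "Pi is"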
[hadoop@hadoop11 spark-2.4.3-bin-hadoop2.7]$ ./bin/spark-shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/spark-2.4.3-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-06-13 17:09:04,701 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hadoop11:4040
Spark context available as 'sc' (master = spark://hadoop11:7077,hadoop12:7077, app id = app-20190613170930-0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_211)
Type in expressions to have them evaluated.
Type :help for more information.
scala> 3+5
res0: Int = 8
scala> :quit
### Manually start the Master
[hadoop@hadoop12 spark-2.4.3-bin-hadoop2.7]$ ./sbin/start-master.sh
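Back in the Ranger source tree, the build leaves one tarball per component under target/; the admin, usersync and hdfs-plugin archives used in the steps below can be found there (a sketch, assuming the source was unpacked to /opt/module/ranger-release-ranger-2.0.0 as in the usersync step later on):
ls /opt/module/ranger-release-ranger-2.0.0/target/ranger-2.0.0-*.tar.gz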
Configure Solr for Ranger audits
# grep -vE '^$|^#' install.properties
SOLR_USER=hadoop
SOLR_GROUP=hadoop
MAX_AUDIT_RETENTION_DAYS=90
SOLR_INSTALL=true
SOLR_DOWNLOAD_URL=http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz
SOLR_INSTALL_FOLDER=/opt/module/solr-5.2.1
SOLR_RANGER_HOME=/opt/module/solr-5.2.1/ranger_audit_server
SOLR_RANGER_PORT=6083
SOLR_DEPLOYMENT=standalone
SOLR_RANGER_DATA_FOLDER=/opt/module/solr-5.2.1/ranger_audit_server/data
SOLR_ZK=
SOLR_HOST_URL=http://`hostname -f`:${SOLR_RANGER_PORT}
SOLR_SHARDS=1
SOLR_REPLICATION=1
SOLR_LOG_FOLDER=/opt/module/solr-5.2.1/ranger_audit_server/logs
SOLR_RANGER_COLLECTION=ranger_audits
SOLR_MAX_MEM=2g
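With SOLR_INSTALL=true, setup.sh downloads Solr 5.2.1 from SOLR_DOWNLOAD_URL on its own; before running it, it may be worth checking that the URL is reachable and that /opt/module has room for the audit index (a small sketch):
wget -q --spider http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz && echo "Solr download URL reachable"
df -h /opt/module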
Install
# ./setup.sh
# cat /opt/module/solr-5.2.1/ranger_audit_server/install_notes.txt
Solr installation notes for Ranger Audits.
Note: Don't edit this file. It will be over written if you run ./setup.sh again.
You have installed Solr in standalone mode.
Note: In production deployment, it is recommended to run in SolrCloud mode with at least 2 nodes and replication factor 2
Start and Stoping Solr:
Login as user hadoop or root and the run the below commands to start or stop Solr:
To start Solr run: /opt/module/solr-5.2.1/ranger_audit_server/scripts/start_solr.sh
To stop Solr run: /opt/module/solr-5.2.1/ranger_audit_server/scripts/stop_solr.sh
After starting Solr for RangerAudit, Solr will listen at 6083. E.g http://hadoop11:6083
Configure Ranger to use the following URL http://hadoop11:6083/solr/ranger_audits
Solr HOME for Ranger Audit is /opt/module/solr-5.2.1/ranger_audit_server
DATA FOLDER: /opt/module/solr-5.2.1/ranger_audit_server/data
Make sure you have enough disk space for index. In production, it is recommended to have at least 1TB free.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_centos7-root 50G 15G 36G 30% /
Start
/opt/module/solr-5.2.1/ranger_audit_server/scripts/start_solr.sh
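Once Solr is up, a quick query against the ranger_audits core confirms it is listening on 6083 (a sketch):
curl "http://hadoop11:6083/solr/ranger_audits/select?q=*:*&rows=0&wt=json"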
Extract ranger-admin
tar zxvf ranger-2.0.0-admin.tar.gz -C /opt/module/
Edit the configuration
# grep -vE '^$|^#' install.properties
PYTHON_COMMAND_INVOKER=python
DB_FLAVOR=MYSQL
SQL_CONNECTOR_JAR=/opt/software/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar
db_root_user=root
db_root_password=123456
db_host=hadoop13:3306
db_ssl_enabled=false
db_ssl_required=false
db_ssl_verifyServerCertificate=false
db_ssl_auth_type=2-way
javax_net_ssl_keyStore=
javax_net_ssl_keyStorePassword=
javax_net_ssl_trustStore=
javax_net_ssl_trustStorePassword=
db_name=ranger
db_user=root
db_password=123456
rangerAdmin_password=
rangerTagsync_password=
rangerUsersync_password=
keyadmin_password=
audit_store=solr
audit_solr_urls=http://hadoop11:6083/solr/ranger_audits
audit_solr_user=
audit_solr_password=
audit_solr_zookeepers=
audit_solr_collection_name=ranger_audits
audit_solr_config_name=ranger_audits
audit_solr_no_shards=1
audit_solr_no_replica=1
audit_solr_max_shards_per_node=1
audit_solr_acl_user_list_sasl=solr,infra-solr
policymgr_external_url=http://hadoop11:6080
policymgr_http_enabled=true
policymgr_https_keystore_file=
policymgr_https_keystore_keyalias=rangeradmin
policymgr_https_keystore_password=
policymgr_supportedcomponents=
unix_user=hadoop
unix_user_pwd=xiechuan
unix_group=hadoop
authentication_method=NONE
remoteLoginEnabled=true
authServiceHostName=localhost
authServicePort=5151
ranger_unixauth_keystore=keystore.jks
ranger_unixauth_keystore_password=password
ranger_unixauth_truststore=cacerts
ranger_unixauth_truststore_password=changeit
xa_ldap_url=
xa_ldap_userDNpattern=
xa_ldap_groupSearchBase=
xa_ldap_groupSearchFilter=
xa_ldap_groupRoleAttribute=
xa_ldap_base_dn=
xa_ldap_bind_dn=
xa_ldap_bind_password=
xa_ldap_referral=
xa_ldap_userSearchFilter=
xa_ldap_ad_domain=
xa_ldap_ad_url=
xa_ldap_ad_base_dn=
xa_ldap_ad_bind_dn=
xa_ldap_ad_bind_password=
xa_ldap_ad_referral=
xa_ldap_ad_userSearchFilter=
spnego_principal=
spnego_keytab=
token_valid=30
cookie_domain=
cookie_path=/
admin_principal=
admin_keytab=
lookup_principal=
lookup_keytab=
hadoop_conf=/etc/hadoop/conf
sso_enabled=false
sso_providerurl=https://127.0.0.1:8443/gateway/knoxsso/api/v1/websso
sso_publickey=
RANGER_ADMIN_LOG_DIR=$PWD
RANGER_PID_DIR_PATH=/var/run/ranger
XAPOLICYMGR_DIR=$PWD
app_home=$PWD/ews/webapp
TMPFILE=$PWD/.fi_tmp
LOGFILE=$PWD/logfile
LOGFILES="$LOGFILE"
JAVA_BIN='java'
JAVA_VERSION_REQUIRED='1.8'
JAVA_ORACLE='Java(TM) SE Runtime Environment'
ranger_admin_max_heap_size=1g
PATCH_RETRY_INTERVAL=120
STALE_PATCH_ENTRY_HOLD_TIME=10
mysql_core_file=db/mysql/optimized/current/ranger_core_db_mysql.sql
mysql_audit_file=db/mysql/xa_audit_db.sql
oracle_core_file=db/oracle/optimized/current/ranger_core_db_oracle.sql
oracle_audit_file=db/oracle/xa_audit_db_oracle.sql
postgres_core_file=db/postgres/optimized/current/ranger_core_db_postgres.sql
postgres_audit_file=db/postgres/xa_audit_db_postgres.sql
sqlserver_core_file=db/sqlserver/optimized/current/ranger_core_db_sqlserver.sql
sqlserver_audit_file=db/sqlserver/xa_audit_db_sqlserver.sql
sqlanywhere_core_file=db/sqlanywhere/optimized/current/ranger_core_db_sqlanywhere.sql
sqlanywhere_audit_file=db/sqlanywhere/xa_audit_db_sqlanywhere.sql
cred_keystore_filename=$app_home/WEB-INF/classes/conf/.jceks/rangeradmin.jceks
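setup.sh connects to MySQL on hadoop13 as db_root_user to create the ranger database and load the schema, so the MySQL server and the connector JAR configured above need to be in place first; a quick pre-check (a sketch):
ls /opt/software/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar
mysql -h hadoop13 -P 3306 -uroot -p -e "SELECT VERSION();"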
Install
./setup.sh
Start
ranger-admin start
Verify
http://192.168.17.11:6080/
![avatar](../img/ranger/01_01.png)
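If the page does not come up, check from the shell that the admin service is listening, then log in as admin (a sketch):
ss -ltnp | grep 6080
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop11:6080/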
Extract usersync
tar zxvf /opt/module/ranger-release-ranger-2.0.0/target/ranger-2.0.0-usersync.tar.gz -C /opt/module/
Edit the configuration
# grep -vE '^$|^#' install.properties
ranger_base_dir = /etc/ranger
POLICY_MGR_URL = http://hadoop11:6080
SYNC_SOURCE = unix
MIN_UNIX_USER_ID_TO_SYNC = 500
MIN_UNIX_GROUP_ID_TO_SYNC = 500
SYNC_INTERVAL =
unix_user=root
unix_group=xiechuan
rangerUsersync_password=
usersync_principal=
usersync_keytab=
hadoop_conf=/etc/hadoop/conf
CRED_KEYSTORE_FILENAME=/etc/ranger/usersync/conf/rangerusersync.jceks
AUTH_SSL_ENABLED=false
AUTH_SSL_KEYSTORE_FILE=/etc/ranger/usersync/conf/cert/unixauthservice.jks
AUTH_SSL_KEYSTORE_PASSWORD=UnIx529p
AUTH_SSL_TRUSTSTORE_FILE=
AUTH_SSL_TRUSTSTORE_PASSWORD=
ROLE_ASSIGNMENT_LIST_DELIMITER = &
USERS_GROUPS_ASSIGNMENT_LIST_DELIMITER = :
USERNAME_GROUPNAME_ASSIGNMENT_LIST_DELIMITER = ,
GROUP_BASED_ROLE_ASSIGNMENT_RULES =
SYNC_LDAP_URL =
SYNC_LDAP_BIND_DN =
SYNC_LDAP_BIND_PASSWORD =
SYNC_LDAP_DELTASYNC =
SYNC_LDAP_SEARCH_BASE =
SYNC_LDAP_USER_SEARCH_BASE =
SYNC_LDAP_USER_SEARCH_SCOPE = sub
SYNC_LDAP_USER_OBJECT_CLASS = person
SYNC_LDAP_USER_SEARCH_FILTER =
SYNC_LDAP_USER_NAME_ATTRIBUTE = cn
SYNC_LDAP_USER_GROUP_NAME_ATTRIBUTE = memberof,ismemberof
SYNC_LDAP_USERNAME_CASE_CONVERSION=lower
SYNC_LDAP_GROUPNAME_CASE_CONVERSION=lower
logdir=logs
USERSYNC_PID_DIR_PATH=/var/run/ranger
SYNC_GROUP_SEARCH_ENABLED=
SYNC_GROUP_USER_MAP_SYNC_ENABLED=
SYNC_GROUP_SEARCH_BASE=
SYNC_GROUP_SEARCH_SCOPE=
SYNC_GROUP_OBJECT_CLASS=
SYNC_LDAP_GROUP_SEARCH_FILTER=
SYNC_GROUP_NAME_ATTRIBUTE=
SYNC_GROUP_MEMBER_ATTRIBUTE_NAME=
SYNC_PAGED_RESULTS_ENABLED=
SYNC_PAGED_RESULTS_SIZE=
SYNC_LDAP_REFERRAL =ignore
Run the installer
./setup.sh
Start
./ranger-usersync-services.sh start
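After the first sync interval, the local Unix users and groups (UID/GID >= 500, per the settings above) should show up under Settings > Users/Groups in the admin UI; they can also be listed over the REST API (a sketch; substitute the admin credentials):
curl -u admin:<admin-password> "http://hadoop11:6080/service/xusers/users"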
Install the ranger-hdfs plugin
tar zxvf ranger-2.0.0-hdfs-plugin.tar.gz -C /opt/module/
Configure
# grep -vE '^$|^#' install.properties
POLICY_MGR_URL=http://hadoop11:6080
REPOSITORY_NAME=hadoopdev
COMPONENT_INSTALL_DIR_NAME=/opt/module/hadoop-3.2.0
XAAUDIT.SOLR.ENABLE=true
XAAUDIT.SOLR.URL=http://hadoop11:6083/solr/ranger_audits
XAAUDIT.SOLR.USER=NONE
XAAUDIT.SOLR.PASSWORD=NONE
XAAUDIT.SOLR.ZOOKEEPER=NONE
XAAUDIT.SOLR.FILE_SPOOL_DIR=/var/log/hadoop/hdfs/audit/solr/spool
XAAUDIT.HDFS.ENABLE=true
XAAUDIT.HDFS.HDFS_DIR=hdfs://xiechuan:9000/ranger/audit
XAAUDIT.HDFS.FILE_SPOOL_DIR=/var/log/hadoop/hdfs/audit/hdfs/spool
XAAUDIT.HDFS.AZURE_ACCOUNTNAME=__REPLACE_AZURE_ACCOUNT_NAME
XAAUDIT.HDFS.AZURE_ACCOUNTKEY=__REPLACE_AZURE_ACCOUNT_KEY
XAAUDIT.HDFS.AZURE_SHELL_KEY_PROVIDER=__REPLACE_AZURE_SHELL_KEY_PROVIDER
XAAUDIT.HDFS.AZURE_ACCOUNTKEY_PROVIDER=__REPLACE_AZURE_ACCOUNT_KEY_PROVIDER
XAAUDIT.HDFS.IS_ENABLED=true
XAAUDIT.HDFS.DESTINATION_DIRECTORY=hdfs://xiechuan:9000/ranger/audit/%app-type%/%time:yyyyMMdd%
XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY=/var/log/hadoop/%app-type%/audit
XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY=/var/log/hadoop/%app-type%/audit/archive
XAAUDIT.HDFS.DESTINTATION_FILE=%hostname%-audit.log
XAAUDIT.HDFS.DESTINTATION_FLUSH_INTERVAL_SECONDS=900
XAAUDIT.HDFS.DESTINTATION_ROLLOVER_INTERVAL_SECONDS=86400
XAAUDIT.HDFS.DESTINTATION_OPEN_RETRY_INTERVAL_SECONDS=60
XAAUDIT.HDFS.LOCAL_BUFFER_FILE=%time:yyyyMMdd-HHmm.ss%.log
XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS=60
XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS=600
XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT=10
XAAUDIT.SOLR.IS_ENABLED=true
XAAUDIT.SOLR.MAX_QUEUE_SIZE=1
XAAUDIT.SOLR.MAX_FLUSH_INTERVAL_MS=1000
XAAUDIT.SOLR.SOLR_URL=http://hadoop11:6083/solr/ranger_audits
SSL_KEYSTORE_FILE_PATH=/etc/hadoop/conf/ranger-plugin-keystore.jks
SSL_KEYSTORE_PASSWORD=myKeyFilePassword
SSL_TRUSTSTORE_FILE_PATH=/etc/hadoop/conf/ranger-plugin-truststore.jks
SSL_TRUSTSTORE_PASSWORD=changeit
CUSTOM_USER=hdfs
CUSTOM_GROUP=hadoop
Enable the plugin
# ./enable-hdfs-plugin.sh
Custom group is available, using default user and custom group.
+ Tue Nov 26 11:15:44 CST 2019 : hadoop: lib folder=/opt/module/hadoop-3.2.0/share/hadoop/hdfs/lib conf folder=/opt/module/hadoop-3.2.0/etc/hadoop
+ Tue Nov 26 11:15:44 CST 2019 : Saving current config file: /opt/module/hadoop-3.2.0/etc/hadoop/hdfs-site.xml to /opt/module/hadoop-3.2.0/etc/hadoop/.hdfs-site.xml.20191126-111544 ...
+ Tue Nov 26 11:15:45 CST 2019 : Saving current config file: /opt/module/hadoop-3.2.0/etc/hadoop/ranger-hdfs-audit.xml to /opt/module/hadoop-3.2.0/etc/hadoop/.ranger-hdfs-audit.xml.20191126-111544 ...
+ Tue Nov 26 11:15:45 CST 2019 : Saving current config file: /opt/module/hadoop-3.2.0/etc/hadoop/ranger-hdfs-security.xml to /opt/module/hadoop-3.2.0/etc/hadoop/.ranger-hdfs-security.xml.20191126-111544 ...
+ Tue Nov 26 11:15:46 CST 2019 : Saving current config file: /opt/module/hadoop-3.2.0/etc/hadoop/ranger-policymgr-ssl.xml to /opt/module/hadoop-3.2.0/etc/hadoop/.ranger-policymgr-ssl.xml.20191126-111544 ...
+ Tue Nov 26 11:15:51 CST 2019 : Saving current JCE file: /etc/ranger/hadoopdev/cred.jceks to /etc/ranger/hadoopdev/.cred.jceks.20191126111551 ...
Ranger Plugin for hadoop has been enabled. Please restart hadoop to ensure that changes are effective.
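As the script output says, HDFS must be restarted so the NameNodes load the plugin jars and the new ranger-hdfs-*.xml configuration; for this cluster (HADOOP_HOME=/opt/module/hadoop-3.2.0), something like:
/opt/module/hadoop-3.2.0/sbin/stop-dfs.sh
/opt/module/hadoop-3.2.0/sbin/start-dfs.sh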
Configure the HDFS service in the Ranger web UI
![avatar](../img/ranger/01_02.png)
Open the HDFS service
![avatar](../img/ranger/01_03.png)
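To confirm the plugin is reporting in, check Audit > Plugins / Plugin Status in the Ranger UI after the restart, and trigger an HDFS access so an audit record lands in the ranger_audits collection; a sketch, where testuser is a hypothetical unprivileged account:
su - testuser -c "hdfs dfs -ls /"    # allowed or denied according to the policy just created
curl "http://hadoop11:6083/solr/ranger_audits/select?q=*:*&rows=5&sort=evtTime+desc&wt=json"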