Atlas - Getting Started

Overview

Atlas is a scalable and extensible set of core foundational governance services, enabling enterprises to effectively meet their compliance requirements within Hadoop and allowing integration with the whole enterprise data ecosystem.

Apache Atlas provides open metadata management and governance capabilities for organizations to build a catalog of their data assets, classify and govern these assets, and provide collaboration capabilities around these data assets for data scientists, analysts, and the data governance team.

Features

Metadata types & instances

Pre-defined types for various Hadoop and non-Hadoop metadata
Ability to define new types for the metadata to be managed
Types can have primitive attributes, complex attributes, object references; they can inherit from other types
Instances of types, called entities, capture metadata object details and their relationships
REST APIs to work with types and instances allow easier integration (see the sample calls after this list)
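
As a quick illustration, the sketch below uses the v2 REST API to list registered type definitions and to fetch a single typedef by name. It assumes a default local installation at localhost:21000 with the stock admin/admin credentials; hive_table is only an example type name.

# List all type definitions known to Atlas (v2 REST API)
curl -s -u admin:admin 'http://localhost:21000/api/atlas/v2/types/typedefs'

# Fetch a single type definition by name (hive_table assumed to be registered)
curl -s -u admin:admin 'http://localhost:21000/api/atlas/v2/types/typedef/name/hive_table'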

Classification

Ability to dynamically create classifications, like PII, EXPIRES_ON, DATA_QUALITY, SENSITIVE
Classifications can include attributes, like the expiry_date attribute in the EXPIRES_ON classification
Entities can be associated with multiple classifications, enabling easier discovery and security enforcement (see the sample call after this list)
Propagation of classifications via lineage: automatically ensures that classifications follow the data as it goes through various processing
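
A minimal sketch of attaching a classification to an entity over the v2 REST API, assuming a local server with the stock admin/admin credentials, an existing PII classification type, and a placeholder <entity-guid> to be replaced with a real GUID:

# Attach the PII classification to an existing entity (replace <entity-guid>)
curl -s -u admin:admin -X POST -H 'Content-Type: application/json' \
  'http://localhost:21000/api/atlas/v2/entity/guid/<entity-guid>/classifications' \
  -d '[{ "typeName": "PII" }]'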

Lineage

Intuitive UI to view the lineage of data as it moves through various processes
REST APIs to access and update lineage (a sample call is sketched below)
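
A minimal sketch of reading lineage over the v2 REST API, again assuming a local server and a placeholder <entity-guid>; depth and direction are optional parameters:

# Fetch lineage for an entity, 3 hops in both directions
curl -s -u admin:admin \
  'http://localhost:21000/api/atlas/v2/lineage/<entity-guid>?depth=3&direction=BOTH'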

Search/Discovery

Intuitive UI to search entities by type, classification, attribute value or free text
Rich REST APIs to search by complex criteria (see the sketch after this list)
SQL-like query language to search entities: Domain Specific Language (DSL)
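
A minimal sketch of the basic-search REST API, assuming a local server, the stock credentials, and that a hive_table type and a PII classification exist:

# Basic search: hive_table entities carrying the PII classification
curl -s -u admin:admin \
  'http://localhost:21000/api/atlas/v2/search/basic?typeName=hive_table&classification=PII&limit=10'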

Security & Data Masking

Fine-grained security for metadata access, enabling controls on access to entity instances and on operations like adding/updating/removing classifications
Integration with Apache Ranger enables authorization/data masking on data access, based on the classifications associated with entities in Apache Atlas. For example:
who can access data classified as PII or SENSITIVE
customer-service users can only see the last 4 digits of columns classified as NATIONAL_ID

Features by version

1.0

Features

Introduced relationships as a first-class type
Support for propagation of classifications along entity relationships, e.g. lineage
Fine-grained metadata security, enabling access control down to the entity-instance level
Introduced the glossary feature
Introduced V2-style notifications
Introduced an Atlas hook for HBase
Support for Cassandra and Elasticsearch (tech preview)

Updates

Graph store updated from Titan 0.5.4 to JanusGraph 0.2.0
DSL rewrite, replacing the Scala-based implementation with ANTLR
Improved performance of Atlas hooks by switching to V2-style notifications
Significant updates in the Atlas web UI

Changes

DSL search

With the DSL rewrite and simplification, some older constructs may no longer work. Below is a list of behavior changes from previous releases; a sketch of how to run such queries over REST follows the list. More DSL-related changes can be found in the Apache Atlas 1.0.0 release notes.

When filtering or narrowing results using string attributes, the value must be enclosed in double quotes
  Table name="Table1"
  Table where name="Table1"
Join queries are no longer supported, e.g. hive_table, hive_db
Select clauses work only on immediate entity attributes or a single referred (entity) type.
  Table select name, owner
  Table select columns
  Table select name, owner, columns (will not work)
OrderBy clauses only work with a single primitive attribute.
GroupBy clauses only work with a single primitive attribute.
  Table groupby name
  Table groupby columns (will not work)
A typename can't have multiple aliases
  Table as t (OK)
  Table as t1, t2 (will not work)
Has clauses only work with primitive attributes.
  Table has name
  Table has columns or Table has db (not supported)
Aggregator clauses only work with a single primitive attribute.
  Table select min(name)
  Table select max(name)
  Table select sum(createTime)
  Table select min(columns) (will not work)
  Table select max(columns) (will not work)
  Table select sum(columns) (will not work)
Aggregator clauses can't be repeated with different primitive attributes; the last occurrence takes precedence.
  Table select min(name), min(createTime) ignores min(name)
Limit and offset don't apply when using aggregate clauses (min, max, sum)
  Table select min(name) limit 10 offset 5: min(name) is computed across all entities of the given type
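
To try these constructs against a running server, DSL queries can be issued over the REST API. A minimal sketch, assuming a local server with the stock admin/admin credentials; note the double quotes around the string value, as required by the rewritten DSL:

# DSL search via REST; --data-urlencode takes care of spaces and quotes
curl -s -G -u admin:admin \
  --data-urlencode 'query=hive_table where name="Table1"' \
  'http://localhost:21000/api/atlas/v2/search/dsl'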

2.0

Features

Soft-reference attribute implementation
Unique-attribute constraints at the graph store level
Atlas index repair tool for JanusGraph
Relationship notifications when new relationships are created in Atlas
Atlas import transform handler implementation

Updates

Updated component versions to use Hadoop 3.1, Hive 3.1, HBase 2.0, Solr 7.5 and Kafka 2.0
Updated JanusGraph version to 0.3.1
Updated authentication to support trusted proxy
Updated the patch framework to persist typedef patches applied to Atlas and to handle data patches
Updated the metrics module to collect notification metrics
Updated Atlas export to support incremental export of metadata
Notification processing improvements:
  Notification processing to support batched commits
  New option in notification processing to ignore potentially incorrect hive_column_lineage
  Updated the Hive hook to avoid duplicate column-lineage entities; also updated the Atlas server to skip duplicate column-lineage entities
  Improved batching in the notification handler to avoid processing an entity multiple times
  Added an option to ignore/prune metadata for temporary/staging Hive tables
  Avoid unnecessary lookups when creating new relationships
UI improvements:
  UI: display counts next to the Type and Classification dropdown lists in basic search
  UI: display lineage information for process entities
  UI: display entity-specific icons in the lineage graph
  UI: add a relationships table inside the relationships view on the entity details page
  UI: add a service-type dropdown in basic search to filter entitydef types
Various bug fixes and optimizations

Installation

Environment

centos-release-7-2.1511.el7.centos.2.10.x86_64
jdk1.8.0_241
Hadoop-2.7.7
Hive-2.3.6
kafka_2.11-2.0.1
Maven 3.5.4
Node js 12.16.2
Scala 2.11.12
Atlas-1.1.0 (with embedded ZooKeeper 3.4.6, HBase 1.2.0 and Solr 5.5.1; external services can also be used, but the configuration is more complex. Kafka here is external.)
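
For reference, a distribution with embedded HBase and Solr like the one used here can be built from the Atlas source tree with the embedded-hbase-solr profile; this sketch follows the standard Atlas build instructions and is not otherwise required by this walkthrough:

# Build an Atlas distribution bundling HBase and Solr
export MAVEN_OPTS="-Xms2g -Xmx2g"
mvn clean -DskipTests package -Pdist,embedded-hbase-solr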

Atlas configuration files

atlas-env.sh

#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path
#export JAVA_HOME=

# any additional java opts you want to set. This will apply to both client and server operations
#export ATLAS_OPTS=

# any additional java opts that you want to set for client only
#export ATLAS_CLIENT_OPTS=

# java heap size we want to set for the client. Default is 1024MB
#export ATLAS_CLIENT_HEAP=

# any additional opts you want to set for atlas service.
#export ATLAS_SERVER_OPTS=

# indicative values for large number of metadata entities (equal or more than 10,000s)
#export ATLAS_SERVER_OPTS="-server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dumps/atlas_server.hprof -Xloggc:logs/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps"

# java heap size we want to set for the atlas server. Default is 1024MB
#export ATLAS_SERVER_HEAP=

# indicative values for large number of metadata entities (equal or more than 10,000s) for JDK 8
#export ATLAS_SERVER_HEAP="-Xms15360m -Xmx15360m -XX:MaxNewSize=5120m -XX:MetaspaceSize=100M -XX:MaxMetaspaceSize=512m"

# What is considered as atlas home dir. Default is the base location of the installed software
#export ATLAS_HOME_DIR=

# Where log files are stored. Default is logs directory under the base install location
#export ATLAS_LOG_DIR=

# Where pid files are stored. Default is logs directory under the base install location
#export ATLAS_PID_DIR=

# where the atlas janusgraph db data is stored. Default is logs/data directory under the base install location
#export ATLAS_DATA_DIR=

# Where do you want to expand the war file. By Default it is in /server/webapp dir under the base install dir.
#export ATLAS_EXPANDED_WEBAPP_DIR=

# indicates whether or not a local instance of HBase should be started for Atlas
export MANAGE_LOCAL_HBASE=true

# indicates whether or not a local instance of Solr should be started for Atlas
export MANAGE_LOCAL_SOLR=true

# indicates whether or not cassandra is the embedded backend for Atlas
export MANAGE_EMBEDDED_CASSANDRA=false

# indicates whether or not a local instance of Elasticsearch should be started for Atlas
export MANAGE_LOCAL_ELASTICSEARCH=false

atlas-application.properties

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#########  Graph Database Configs  #########

# Graph Database

#Configures the graph database to use.  Defaults to JanusGraph
#atlas.graphdb.backend=org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase

# Graph Storage
# Set atlas.graph.storage.backend to the correct value for your desired storage
# backend. Possible values:
#
# hbase
# cassandra
# embeddedcassandra - Should only be set by building Atlas with  -Pdist,embedded-cassandra-solr
# berkeleyje
#
# See the configuration documentation for more information about configuring the various  storage backends.
#
atlas.graph.storage.backend=hbase
atlas.graph.storage.hbase.table=apache_atlas_janus

#Hbase
#For standalone mode , specify localhost
#for distributed mode, specify zookeeper quorum here
atlas.graph.storage.hostname=localhost
atlas.graph.storage.hbase.regions-per-server=1
atlas.graph.storage.lock.wait-time=10000

# Gremlin Query Optimizer
#
# Enables rewriting gremlin queries to maximize performance. This flag is provided as
# a possible way to work around any defects that are found in the optimizer until they
# are resolved.
#atlas.query.gremlinOptimizerEnabled=true

# Delete handler
#
# This allows the default behavior of doing "soft" deletes to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.store.graph.v1.SoftDeleteHandlerV1 - all deletes are "soft" deletes
# org.apache.atlas.repository.store.graph.v1.HardDeleteHandlerV1 - all deletes are "hard" deletes
#
#atlas.DeleteHandlerV1.impl=org.apache.atlas.repository.store.graph.v1.SoftDeleteHandlerV1

# Entity audit repository
#
# This allows the default behavior of logging entity changes to hbase to be changed.
#
# Allowed Values:
# org.apache.atlas.repository.audit.HBaseBasedAuditRepository - log entity changes to hbase
# org.apache.atlas.repository.audit.CassandraBasedAuditRepository - log entity changes to cassandra
# org.apache.atlas.repository.audit.NoopEntityAuditRepository - disable the audit repository
#
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.HBaseBasedAuditRepository

# if Cassandra is used as a backend for audit from the above property, uncomment and set the following
# properties appropriately. If using the embedded cassandra profile, these properties can remain
# commented out.
# atlas.EntityAuditRepository.keyspace=atlas_audit
# atlas.EntityAuditRepository.replicationFactor=1


# Graph Search Index
atlas.graph.index.search.backend=solr

#Solr
#Solr cloud mode properties
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=localhost:2181
atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
atlas.graph.index.search.solr.zookeeper-session-timeout=60000
atlas.graph.index.search.solr.wait-searcher=true

#Solr http mode properties
#atlas.graph.index.search.solr.mode=http
#atlas.graph.index.search.solr.http-urls=http://localhost:8983/solr

# Solr-specific configuration property
atlas.graph.index.search.max-result-set-size=150
 
#########  Notification Configs  ######### 
atlas.notification.embedded=false
atlas.kafka.data=${sys:atlas.home}/data/kafka
atlas.kafka.zookeeper.connect=localhost:2181
atlas.kafka.bootstrap.servers=localhost:9092
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.connection.timeout.ms=200
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas

atlas.kafka.enable.auto.commit=false
atlas.kafka.auto.offset.reset=earliest
atlas.kafka.session.timeout.ms=30000
atlas.kafka.offsets.topic.replication.factor=1
atlas.kafka.poll.timeout.ms=1000

atlas.notification.create.topics=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.notification.log.failed.messages=true
atlas.notification.consumer.retry.interval=500
atlas.notification.hook.retry.interval=1000
# Enable for Kerberized Kafka clusters
#atlas.notification.kafka.service.principal=kafka/[email protected]
#atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab

## Server port configuration
#atlas.server.http.port=21000
#atlas.server.https.port=21443

#########  Security Properties  #########

# SSL config
atlas.enableTLS=false

#truststore.file=/path/to/truststore.jks
#cert.stores.credential.provider.path=jceks://file/path/to/credentialstore.jceks

#following only required for 2-way SSL
#keystore.file=/path/to/keystore.jks

# Authentication config

atlas.authentication.method.kerberos=false
atlas.authentication.method.file=true

#### ldap.type= LDAP or AD
atlas.authentication.method.ldap.type=none

#### user credentials file
atlas.authentication.method.file.filename=${sys:atlas.home}/conf/users-credentials.properties

### groups from UGI
#atlas.authentication.method.ldap.ugi-groups=true

######## LDAP properties #########
#atlas.authentication.method.ldap.url=ldap://<ldap server url>:389
#atlas.authentication.method.ldap.userDNpattern=uid={0},ou=People,dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchBase=dc=example,dc=com
#atlas.authentication.method.ldap.groupSearchFilter=(member=uid={0},ou=Users,dc=example,dc=com)
#atlas.authentication.method.ldap.groupRoleAttribute=cn
#atlas.authentication.method.ldap.base.dn=dc=example,dc=com
#atlas.authentication.method.ldap.bind.dn=cn=Manager,dc=example,dc=com
#atlas.authentication.method.ldap.bind.password=<password>
#atlas.authentication.method.ldap.referral=ignore
#atlas.authentication.method.ldap.user.searchfilter=(uid={0})
#atlas.authentication.method.ldap.default.role=<default role>


######### Active directory properties #######
#atlas.authentication.method.ldap.ad.domain=example.com
#atlas.authentication.method.ldap.ad.url=ldap://<AD server url>:389
#atlas.authentication.method.ldap.ad.base.dn=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.bind.dn=CN=team,CN=Users,DC=example,DC=com
#atlas.authentication.method.ldap.ad.bind.password=<password>
#atlas.authentication.method.ldap.ad.referral=ignore
#atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
#atlas.authentication.method.ldap.ad.default.role=<default role>

#########  JAAS Configuration ########

#atlas.jaas.KafkaClient.loginModuleName = com.sun.security.auth.module.Krb5LoginModule
#atlas.jaas.KafkaClient.loginModuleControlFlag = required
#atlas.jaas.KafkaClient.option.useKeyTab = true
#atlas.jaas.KafkaClient.option.storeKey = true
#atlas.jaas.KafkaClient.option.serviceName = kafka
#atlas.jaas.KafkaClient.option.keyTab = /etc/security/keytabs/atlas.service.keytab
#atlas.jaas.KafkaClient.option.principal = atlas/[email protected]

#########  Server Properties  #########
atlas.rest.address=http://localhost:21000
# If enabled and set to true, this will run setup steps when the server starts
#atlas.server.run.setup.on.start=false

#########  Entity Audit Configs  #########
atlas.audit.hbase.tablename=apache_atlas_entity_audit
atlas.audit.zookeeper.session.timeout.ms=1000
atlas.audit.hbase.zookeeper.quorum=localhost:2181

#########  High Availability Configuration ########
atlas.server.ha.enabled=false
#### Enabled the configs below as per need if HA is enabled #####
#atlas.server.ids=id1
#atlas.server.address.id1=localhost:21000
#atlas.server.ha.zookeeper.connect=localhost:2181
#atlas.server.ha.zookeeper.retry.sleeptime.ms=1000
#atlas.server.ha.zookeeper.num.retries=3
#atlas.server.ha.zookeeper.session.timeout.ms=20000
## if ACLs need to be set on the created nodes, uncomment these lines and set the values ##
#atlas.server.ha.zookeeper.acl=<scheme>:<id>
#atlas.server.ha.zookeeper.auth=<scheme>:<authinfo>



######### Atlas Authorization #########
atlas.authorizer.impl=simple
atlas.authorizer.simple.authz.policy.file=atlas-simple-authz-policy.json

#########  Type Cache Implementation ########
# A type cache class which implements
# org.apache.atlas.typesystem.types.cache.TypeCache.
# The default implementation is org.apache.atlas.typesystem.types.cache.DefaultTypeCache which is a local in-memory type cache.
#atlas.TypeCache.impl=

#########  Performance Configs  #########
#atlas.graph.storage.lock.retries=10
#atlas.graph.storage.cache.db-cache-time=120000

#########  CSRF Configs  #########
atlas.rest-csrf.enabled=true
atlas.rest-csrf.browser-useragents-regex=^Mozilla.*,^Opera.*,^Chrome.*
atlas.rest-csrf.methods-to-ignore=GET,OPTIONS,HEAD,TRACE
atlas.rest-csrf.custom-header=X-XSRF-HEADER

############ KNOX Configs ################
#atlas.sso.knox.browser.useragent=Mozilla,Chrome,Opera
#atlas.sso.knox.enabled=true
#atlas.sso.knox.providerurl=https://<knox gateway ip>:8443/gateway/knoxsso/api/v1/websso
#atlas.sso.knox.publicKey=

############ Atlas Metric/Stats configs ################
# Format: atlas.metric.query.<key>.<name>
atlas.metric.query.cache.ttlInSecs=900
#atlas.metric.query.general.typeCount=
#atlas.metric.query.general.typeUnusedCount=
#atlas.metric.query.general.entityCount=
#atlas.metric.query.general.tagCount=
#atlas.metric.query.general.entityDeleted=
#
#atlas.metric.query.entity.typeEntities=
#atlas.metric.query.entity.entityTagged=
#
#atlas.metric.query.tags.entityTags=

#########  Compiled Query Cache Configuration  #########

# The size of the compiled query cache.  Older queries will be evicted from the cache
# when we reach the capacity.

#atlas.CompiledQueryCache.capacity=1000

# Allows notifications when items are evicted from the compiled query
# cache because it has become full.  A warning will be issued when
# the specified number of evictions have occurred.  If the eviction
# warning threshold <= 0, no eviction warnings will be issued.

#atlas.CompiledQueryCache.evictionWarningThrottle=0


#########  Full Text Search Configuration  #########

#Set to false to disable full text search.
#atlas.search.fulltext.enable=true

#########  Gremlin Search Configuration  #########

#Set to false to disable gremlin search.
atlas.search.gremlin.enable=false


########## Add http headers ###########

#atlas.headers.Access-Control-Allow-Origin=*
#atlas.headers.Access-Control-Allow-Methods=GET,OPTIONS,HEAD,PUT,POST
#atlas.headers.<headerName>=<headerValue>

########## HIVE HOOK Configs #########

# whether to run the hook synchronously. false recommended to avoid delays in Hive query completion. Default: false
atlas.hook.hive.synchronous=false
# number of retries for notification failure. Default: 3
atlas.hook.hive.numRetries=3
# queue size for the threadpool. Default: 10000
atlas.hook.hive.queueSize=10000
# clusterName to use in qualifiedName of entities. Default: primary 
atlas.cluster.name=primary

# atlas.hook.hive.synchronous - boolean, true to run the hook synchronously. default false. Recommended to be set to false to avoid delays in hive query completion.
# atlas.hook.hive.numRetries - number of retries for notification failure. default 3
# atlas.hook.hive.minThreads - core number of threads. default 1
# atlas.hook.hive.maxThreads - maximum number of threads. default 5
# atlas.hook.hive.keepAliveTime - keep alive time in msecs. default 10
# atlas.hook.hive.queueSize - queue size for the threadpool. default 10000


######### Sqoop Hook Configs #######
# whether to run the hook synchronously. false recommended to avoid delays in Sqoop operation completion. Default: false
atlas.hook.sqoop.synchronous=false
# number of retries for notification failure. Default: 3
atlas.hook.sqoop.numRetries=3
# queue size for the threadpool. Default: 10000
atlas.hook.sqoop.queueSize=10000   
############
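
Since this setup uses an external Kafka, the ATLAS_HOOK and ATLAS_ENTITIES notification topics can also be created ahead of time instead of relying on atlas.notification.create.topics=true. A sketch using the Kafka CLI from this environment, with replication factor 1 to match the single-broker setup:

/opt/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic ATLAS_HOOK
/opt/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic ATLAS_ENTITIES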

Startup

1. Start Hadoop

[root@work3 maven]# start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-namenode-work3.alex.com.out
localhost: datanode running as process 2139. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-work3.alex.com.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-resourcemanager-work3.alex.com.out
localhost: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-work3.alex.com.out

You can now open the HDFS web UI at:
http://192.168.79.13:50070/dfshealth.html#tab-overview

2. Start Hive

[root@work3 maven]# hive --service metastore 
2020-05-04 17:40:49: Starting Hive Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.3.6-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

To run it in the background instead:
nohup hive --service metastore &

[root@work3 ~]# hiveserver2
which: no hbase in (:/opt/maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jdk1.8.0_241/bin:/opt/hadoop-2.7.7/bin:/opt/hadoop-2.7.7/sbin:/opt/hive-2.3.6-bin/bin:/opt/scala-2.11.12/bin:/opt/spark-2.4.5-bin-hadoop2.7/bin:/opt/node-v12.16.2-linux-x64/bin:/opt/zookeeper-3.4.6-bin/bin:/opt/phoenix4/bin:/opt/sqoop-1.4.7/bin:/opt/kylin-2.6.5-bin-hbase1x/bin:/opt/kafka_2.11-2.0.1/bin:/opt/phantomjs-2.1.1/bin:/root/bin)
2020-05-04 17:43:46: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.3.6-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

To run it in the background instead:
nohup hiveserver2 &
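
To verify that HiveServer2 is accepting connections, you can connect with beeline; this assumes the default HiveServer2 port 10000:

beeline -u jdbc:hive2://localhost:10000 -n root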

3. Start Atlas
[root@work3 atlas-1.1.0]# ./bin/atlas_start.py 
configured for local hbase.
hbase started.
configured for local solr.
solr started.
setting up solr collections...
starting atlas on host localhost
starting atlas on port 21000
...............................................
Apache Atlas Server started!!!


4. Start Kafka
[root@work3 ~]# /opt/kafka_2.11-2.0.1/bin/kafka-server-start.sh /opt/kafka_2.11-2.0.1/config/server.properties
[2020-05-04 17:49:35,120] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-05-04 17:49:36,296] INFO starting (kafka.server.KafkaServer)
[2020-05-04 17:49:36,301] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-05-04 17:49:36,379] INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-05-04 17:49:36,397] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2020-05-04 17:49:36,397] INFO Client environment:host.name=work3.alex.com (org.apache.zookeeper.ZooKeeper)
[2020-05-04 17:49:36,397] INFO Client environment:java.version=1.8.0_241 (org.apache.zookeeper.ZooKeeper)
[2020-05-04 17:49:36,397] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2020-05-04 17:49:36,397] INFO Client environment:java.home=/opt/jdk1.8.0_241/jre (org.apache.zookeeper.ZooKeeper)

5. Access the Atlas web UI

http://192.168.79.13:21000/

The default username and password are both admin.
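
Besides the browser, a quick way to confirm the server is up is the admin REST API:

curl -s -u admin:admin http://localhost:21000/api/atlas/admin/version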


References

http://atlas.apache.org/#/

https://atlas.apache.org/1.1.0/
