Demo site | Account/password: poc/123456
Home
Data Integration
Metadata Management
Metadata Harvesting
Application Analysis
System Menu Management
Metadata Management
Data Quality
Data Market
Data Standards
BI Reports
Data Assets
Workflow Orchestration
AllData AI Studio Community Edition
AllData Studio Community Edition
Dlink
FlinkX
ElAdmin
Dlink+CDC+Hudi
cube-studio
ElAdmin
Rancher
Hive+Doris
Dlink+FlinkCDC+Doris
DolphinScheduler
SREWorks
Doris
lowcode-engine
The database must be MySQL 5.7 or later.
1.1 source install/eladmin/eladmin_alldatadc.sql
1.2 source install/eladmin/eladmin_dts.sql
1.3 source install/datax/eladmin_data_cloud.sql
1.4 source install/datax/eladmin_cloud_quartz.sql
1.5 source install/datax/eladmin_foodmart2.sql
1.6 source install/datax/eladmin_robot.sql
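Steps 1.1–1.6 all follow the same pattern, so they can be scripted. A minimal dry-run sketch, assuming the repository layout above; the host and password are placeholders, and the loop only prints the commands rather than executing them:

```shell
# Dry-run sketch: print one mysql import command per script, in the order
# listed above. Remove the 'echo' inside import_all to actually run the
# imports (the -h host and -p password are assumptions you must adapt).
import_all() {
  for f in \
    install/eladmin/eladmin_alldatadc.sql \
    install/eladmin/eladmin_dts.sql \
    install/datax/eladmin_data_cloud.sql \
    install/datax/eladmin_cloud_quartz.sql \
    install/datax/eladmin_foodmart2.sql \
    install/datax/eladmin_robot.sql
  do
    echo "mysql -h127.0.0.1 -uroot -p****** < $f"
  done
}
import_all
```

Keeping the scripts in this order matters, since the later schemas assume the earlier ones already exist.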
Edit the configuration files under the config folder, updating the Redis, MySQL, and RabbitMQ connection settings.
cd install/datax
mvn install:install-file -DgroupId=com.aspose -DartifactId=aspose-words -Dversion=20.3 -Dpackaging=jar -Dfile=aspose-words-20.3.jar
Obtain the installation package build/eladmin-release-2.6.tar.gz.
Upload it to the server and extract it.
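The upload-and-extract step might look like the following sketch. The destination `user@server:/opt/alldata` is an assumption, and a dummy archive stands in for the real release so the tar half can be demonstrated anywhere:

```shell
# Sketch of the upload/extract step. The scp destination is hypothetical
# and left commented; a dummy archive stands in for the real one.
# scp build/eladmin-release-2.6.tar.gz user@server:/opt/alldata/
mkdir -p demo_build/build && echo placeholder > demo_build/build/README
tar -czf eladmin-release-2.6.tar.gz -C demo_build build
mkdir -p /tmp/alldata_demo
tar -xzf eladmin-release-2.6.tar.gz -C /tmp/alldata_demo
ls /tmp/alldata_demo/build
```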
5.1 These services are required and must be started in this order:
eureka -> config -> gateway
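A small helper can encode the required order. This is a sketch: the per-service script names under install/16gmaster are assumptions modeled on the other `*.sh` scripts in this guide, so the real command is left as a comment:

```shell
# Hypothetical ordered-start helper. The eureka -> config -> gateway order
# comes from the text above; the script paths are assumed to follow the
# naming of the other install/16gmaster/*.sh scripts.
start_core() {
  for svc in eureka config gateway; do
    echo "starting $svc"   # replace with: sh install/16gmaster/$svc.sh
  done
}
start_core
```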
5.2 Start on demand
cd install/16gmaster
For example, to start the metadata management service:
sh install/16gmaster/data-metadata-service.sh
tail -100f install/16gmaster/data-metadata-service.log
5.3 Start on demand
cd install/16gdata
Start the relevant services as needed.
5.4 Start on demand
cd install/16gslave
Start the relevant services as needed.
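The on-demand steps above all share one pattern: run the service's start script, then tail its log. A hypothetical wrapper, with the directory and service names taken from the metadata management example:

```shell
# Hypothetical wrapper for the "start on demand" pattern: print the start
# script invocation and the log tail for a given host dir and service.
# Swap the echoes for the real commands to actually start a service.
start_service() {
  dir=$1; svc=$2
  echo "sh install/$dir/$svc.sh"
  echo "tail -100f install/$dir/$svc.log"
}
start_service 16gmaster data-metadata-service
```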
6.1 Start
sh install/16gmaster/eladmin-system.sh
6.2 Deploy the Eladmin frontend
source /etc/profile
cd $(dirname $0)
source /root/.bashrc && nvm use v10.15.3
nohup npm run dev &
6.3 Access Eladmin
Page username: admin, password: 123456
See Resource/FlinkDDLSQL.sql for reference.
```sql
CREATE TABLE data_gen (
  amount BIGINT
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1',
  'number-of-rows' = '3',
  'fields.amount.kind' = 'random',
  'fields.amount.min' = '10',
  'fields.amount.max' = '11'
);

CREATE TABLE mysql_sink (
  amount BIGINT,
  PRIMARY KEY (amount) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/test_db',
  'table-name' = 'test_table',
  'username' = 'root',
  'password' = '123456',
  'lookup.cache.max-rows' = '5000',
  'lookup.cache.ttl' = '10min'
);

INSERT INTO mysql_sink SELECT amount AS amount FROM data_gen;
```
Results
1. Flink lineage build result - tables:
[LineageTable{id='4', name='data_gen', columns=[LineageColumn{name='amount', title='amount'}]},
LineageTable{id='6', name='mysql_sink', columns=[LineageColumn{name='amount', title='amount'}]}]
Table ID: 4
Table Name: data_gen
Table column: LineageColumn{name='amount', title='amount'}
Table ID: 6
Table Name: mysql_sink
Table column: LineageColumn{name='amount', title='amount'}
2. Flink lineage build result - edges:
[LineageRelation{id='1', srcTableId='4', tgtTableId='6', srcTableColName='amount', tgtTableColName='amount'}]
Table edge: LineageRelation{id='1', srcTableId='4', tgtTableId='6', srcTableColName='amount', tgtTableColName='amount'}
1. BUSINESS FOR ALL DATA PLATFORM - commercial project
2. BUSINESS FOR ALL DATA PLATFORM - compute engine
3. DEVOPS FOR ALL DATA PLATFORM - operations engine
4. DATA GOVERN FOR ALL DATA PLATFORM - data governance engine
5. DATA Integrate FOR ALL DATA PLATFORM - data integration engine
6. AI FOR ALL DATA PLATFORM - artificial intelligence engine
7. DATA ODS FOR ALL DATA PLATFORM - data collection engine
8. OLAP FOR ALL DATA PLATFORM - OLAP query engine
9. OPTIMIZE FOR ALL DATA PLATFORM - performance optimization engine
10. DATABASES FOR ALL DATA PLATFORM - distributed storage engine
```sql
SET execution.checkpointing.interval = 15sec;

CREATE CATALOG alldata_catalog WITH (
  'type' = 'table-store',
  'warehouse' = 'file:/tmp/table_store'
);

USE CATALOG alldata_catalog;

CREATE TABLE word_count (
  word STRING PRIMARY KEY NOT ENFORCED,
  cnt BIGINT
);

CREATE TEMPORARY TABLE word_table (
  word STRING
) WITH (
  'connector' = 'datagen',
  'fields.word.length' = '1'
);

INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;

-- POC Test OLAP QUERY
SET sql-client.execution.result-mode = 'tableau';
RESET execution.checkpointing.interval;
SET execution.runtime-mode = 'batch';
SELECT * FROM word_count;

-- POC Test Stream QUERY
-- SET execution.runtime-mode = 'streaming';
-- SELECT `interval`, COUNT(*) AS interval_cnt FROM
--   (SELECT cnt / 10000 AS `interval` FROM word_count) GROUP BY `interval`;
```
### 2. Dlink started and running successfully
### 3. OLAP query
4.1 Stream Read 1
4.2 Stream Read 2
Component | Description | Important Composition |
---|---|---|
ai-studio | AI STUDIO FOR ALL DATA PLATFORM artificial intelligence engine | AI engine |
ai-tasks | AI STUDIO TASKS FOR ALL DATA PLATFORM MLAPPS Engine | AI model tasks |
assembly | WHOLE PACKAGE BUILD FOR ALL DATA PLATFORM assembly engine | Whole-package build engine |
buried | BURIED FOR ALL DATA PLATFORM data acquisition engine | Event-tracking solution |
buried-shop | BURIED SHOP FOR ALL DATA PLATFORM commerce engine | Multi-platform shop |
buried-trade | BURIED TRADE FOR ALL DATA PLATFORM commerce engine | Commerce system |
crawler | CRAWLER DATA TRADE FOR ALL DATA PLATFORM commerce engine | Crawler tasks |
crawlerlab | CRAWLER PLATFORM FOR ALL DATA PLATFORM commerce engine | Crawler engine system |
olap | OLAP FOR ALL DATA PLATFORM OLAP query engine | Hybrid OLAP query engine |
alldata-dts | DATA Integrate FOR ALL DATA PLATFORM Data Integration Engine | Data integration engine |
cluster | DATA SRE FOR ALL DATA PLATFORM SRE engine | Intelligent big data operations engine |
deploy | DEPLOY FOR ALL DATA PLATFORM deployment engine | Installation and deployment |
documents | DOCUMENT FOR ALL DATA PLATFORM documentation | Official documentation |
govern | DATA GOVERN FOR ALL DATA PLATFORM Data Governance Engine | Data governance engine |
studio | ONE HUB FOR ALL DATA PLATFORM ONE HUB Engine | AllData HQ front-end and back-end solution |
lakehouse | ONE LAKE FOR ALL DATA PLATFORM ONE LAKE engine | Data lake engine |
studio-tasks | STUDIO TASKS FOR ALL DATA PLATFORM Data Task Engine | Big data stream and batch compute tasks |
knowledge | KNOWLEDGE GRAPH FOR ALL DATA PLATFORM Knowledge Graph Engine | Knowledge graph engine |
AllData | The AllData community project builds a one-stop big data platform by extending big data ecosystem components and covering data collection, storage, computation, and development | One-stop open-source big data platform: the AllData community project on GitHub |
1. AllData frontend solution
studio/eladmin-web
2. AllData backend solution
studio/eladmin
3. Multi-tenant operations platform frontend
studio/tenant
4. Multi-tenant operations platform backend
studio/tenantBack