Flink Series Articles
1. Flink deployment, concept introduction, usage examples for source, transformation and sink, the four cornerstones with examples, and other comprehensive articles in this series (links)
13. Flink Table API & SQL: basic concepts, common API introduction, and getting-started examples
14. Flink Table API & SQL data types: built-in data types and their properties
15. Flink Table API & SQL streaming concepts: a detailed introduction to dynamic tables, time attribute configuration (how updating results are handled), temporal tables, joins on streams, determinism on streams, and query configuration
16. Flink Table API & SQL, connecting to external systems: connectors and formats for reading and writing external systems, with a FileSystem example (1)
16. Flink Table API & SQL, connecting to external systems: connectors and formats for reading and writing external systems, with an Elasticsearch example (2)
16. Flink Table API & SQL, connecting to external systems: connectors and formats for reading and writing external systems, with an Apache Kafka example (3)
16. Flink Table API & SQL, connecting to external systems: connectors and formats for reading and writing external systems, with a JDBC example (4)
16. Flink Table API & SQL, connecting to external systems: connectors and formats for reading and writing external systems, with an Apache Hive example (6)
20. Flink SQL Client: try Flink SQL without writing code and submit SQL jobs directly to the cluster
22. Flink Table API & SQL: DDL for creating tables
24. Flink Table API & SQL Catalogs (introduction, types, DDL via the Java API and SQL, catalog operations via the Java API and SQL) - 1
24. Flink Table API & SQL Catalogs (operating databases and tables via the Java API) - 2
24. Flink Table API & SQL Catalogs (operating views via the Java API) - 3
26. Flink SQL: overview and getting-started examples
27. Flink SQL SELECT (select, where, distinct, order by, limit, set operations and deduplication): introduction and detailed examples (1)
27. Flink SQL SELECT (SQL Hints and Joins): introduction and detailed examples (2)
27. Flink SQL SELECT (windowing table-valued functions): introduction and detailed examples (3)
27. Flink SQL SELECT (window aggregation): introduction and detailed examples (4)
27. Flink SQL SELECT (Group Aggregation, Over Aggregation, and Window Join): introduction and detailed examples (5)
27. Flink SQL SELECT (Top-N, Window Top-N, and Window Deduplication): introduction and detailed examples (6)
27. Flink SQL SELECT (Pattern Recognition): introduction and detailed examples (7)
29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB Statements, UPDATE, DELETE (1)
29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB Statements, UPDATE, DELETE (2)
30. Flink SQL Client (configuration file usage introduced through Kafka and filesystem examples: tables, views, etc.)
32. Flink Table API & SQL: user-defined Sources & Sinks, implementation and detailed examples
41. Flink Hive dialect: introduction and detailed examples
42. Flink Table API & SQL: Hive Catalog
43. Flink Hive reading and writing, with detailed verification examples
44. Flink modules: introduction and usage examples, plus detailed examples of using Hive built-in functions and user-defined functions in Flink SQL (some claims found online appear to be wrong)
Table of Contents
- Flink Series Articles
- V. Catalog API
  - 3. View operations
    - 1) Official example
    - 2) Creating a Hive view with SQL
      - 1. Maven dependencies
      - 2. Code
      - 3. Run results
    - 3) Creating a Hive view with the API
      - 1. Maven dependencies
      - 2. Code
      - 3. Run results
This article briefly introduces operating views through the Java API and provides three examples: a SQL implementation and two Java API implementations.
This article assumes that Flink, Hive, and the Hadoop cluster are up and working.
The Java API examples in this article were built against Flink 1.13.5; unless otherwise noted, the SQL examples target Flink 1.17.
V. Catalog API
3. View operations
1) Official example
// create view
catalog.createTable(new ObjectPath("mydb", "myview"), new CatalogViewImpl(...), false);
// drop view
catalog.dropTable(new ObjectPath("mydb", "myview"), false);
// alter view
catalog.alterTable(new ObjectPath("mydb", "myview"), new CatalogViewImpl(...), false);
// rename view
catalog.renameTable(new ObjectPath("mydb", "myview"), "my_new_view", false);
// get view
catalog.getTable(new ObjectPath("mydb", "myview"));
// check if a view exists or not
catalog.tableExists(new ObjectPath("mydb", "myview"));
// list views in a database
catalog.listViews("mydb");
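The "..." placeholder above is the view definition itself, which the official snippet does not spell out. As a minimal, hedged sketch (not official code): in Flink 1.13+ a view definition can be built with CatalogView.of plus a ResolvedSchema and passed to createTable. The database mydb, the view myview, the source table mytable and its columns id/name below are illustrative assumptions only:
// minimal sketch; mydb, myview, mytable and the id/name columns are hypothetical
ResolvedSchema viewSchema = ResolvedSchema.of(
        Column.physical("id", DataTypes.INT()),
        Column.physical("name", DataTypes.STRING()));
CatalogView view = CatalogView.of(
        Schema.newBuilder().fromResolvedSchema(viewSchema).build(),
        "an example view",                                   // comment
        "SELECT id, name FROM mytable",                      // originalQuery
        "SELECT mytable.id, mytable.name FROM mydb.mytable", // expandedQuery
        Collections.emptyMap());                             // options
catalog.createTable(new ObjectPath("mydb", "myview"), new ResolvedCatalogView(view, viewSchema), false);
Complete, runnable versions of this pattern are shown in section 3) below.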
2) Creating a Hive view with SQL
1. Maven dependencies
<properties>
    <encoding>UTF-8</encoding>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.version>2.12</scala.version>
    <flink.version>1.13.6</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java-bridge_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner-blink_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-common</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_2.12</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-sql-connector-kafka_2.12</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-jdbc_2.12</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-csv</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-json</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-hive_2.12</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-metastore</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>3.1.2</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop-2-uber</artifactId>
        <version>2.7.5-10.0</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.38</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.7</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.44</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.2</version>
    </dependency>
</dependencies>
<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer
                                implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <mainClass>org.table_sql.TestHiveViewBySQLDemo</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
2. Code
package org.table_sql;
import java.util.HashMap;
import java.util.List;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;
/**
* @author alanchan
*
*/
public class TestHiveViewBySQLDemo {
public static final String tableName = "viewtest";
public static final String hive_create_table_sql = "CREATE TABLE " + tableName + " (\n" +
" id INT,\n" +
" name STRING,\n" +
" age INT" + ") " +
"TBLPROPERTIES (\n" +
" 'sink.partition-commit.delay'='5 s',\n" +
" 'sink.partition-commit.trigger'='partition-time',\n" +
" 'sink.partition-commit.policy.kind'='metastore,success-file'" + ")";
/**
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tenv = StreamTableEnvironment.create(env);
String moduleName = "myhive";
String hiveVersion = "3.1.2";
tenv.loadModule(moduleName, new HiveModule(hiveVersion));
String name = "alan_hive";
String defaultDatabase = "default";
String databaseName = "viewtest_db";
String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";
HiveCatalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir);
tenv.registerCatalog(name, hiveCatalog);
tenv.useCatalog(name);
tenv.listDatabases();
hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap<>(), hiveConfDir) {
}, true);
// tenv.executeSql("create database "+databaseName);
tenv.useDatabase(databaseName);
// create the first view, viewName_byTable
String selectSQL = "select * from " + tableName;
String viewName_byTable = "test_view_table_V";
String createViewSQL = "create view " + viewName_byTable + " as " + selectSQL;
tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
tenv.executeSql(hive_create_table_sql);
// tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
tenv.executeSql(insertSQL);
tenv.executeSql(createViewSQL);
tenv.listViews();
CatalogView catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byTable));
List<Row> results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byTable).collect());
for (Row row : results) {
System.out.println("test_view_table_V: " + row.toString());
}
// create the second view
String viewName_byView = "test_view_view_V";
tenv.executeSql("create view " + viewName_byView + " (v2_id,v2_name,v2_age) comment 'test_view_view_V comment' as select * from " + viewName_byTable);
catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byView));
results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byView).collect());
System.out.println("test_view_view_V comment : " + catalogView.getComment());
for (Row row : results) {
System.out.println("test_view_view_V : " + row.toString());
}
tenv.executeSql("drop database " + databaseName + " cascade");
}
}
3. Run results
Prerequisite: the Flink cluster is up and running. Package the project into a jar with Maven before submitting it.
[alanchan@server2 bin]$ flink run /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.2-SNAPSHOT.jar
Hive Session ID = ed6d5c9b-e00f-4881-840d-24c72aba6db7
Hive Session ID = 14445dc8-1f08-4f0f-bb45-aba8c6f52174
Job has been submitted with JobID bff7b59367bd5de6e778b442c4cc4404
Hive Session ID = 4c16f4fc-4c10-4353-b322-e6633e3ebe3d
Hive Session ID = 57949f09-bdcb-497f-a85c-ed9766fc4ce3
2023-10-13 02:42:24,891 INFO org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 0
Job has been submitted with JobID 80e48bb76e3d580412fdcdc434a8a979
test_view_table_V: +I[1, alan, 18]
Hive Session ID = a73d5b93-2129-4159-ad5e-0814df77e987
Hive Session ID = e4ae1a79-4d5e-4835-81de-ebc2041eedf9
2023-10-13 02:42:33,648 INFO org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 1
Job has been submitted with JobID c228d9ce3bdce91dc68bff75d14db1e5
test_view_view_V comment : test_view_view_V comment
test_view_view_V : +I[1, alan, 18]
Hive Session ID = e4a38393-d760-4bd3-8d8b-864cbe0daba7
3) Creating a Hive view with the API
Creating a view through the API is comparatively cumbersome, and some of the methods involved have been deprecated across version upgrades.
Creating a view via TableSchema and CatalogViewImpl is deprecated; the currently recommended way is to create views via CatalogView and ResolvedSchema.
Also note the difference between the following two parameters:
String originalQuery: the original SQL of the view.
String expandedQuery: the query with table names qualified by the database name, possibly even including the Hive catalog name.
For example, if default is used as the current database and the query is select * from test1, then
originalQuery = "select name, value from test1" is sufficient, while
expandedQuery = "select test1.name, test1.value from default.test1".
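In code form, the hypothetical example above (the test1 table and its name/value columns exist only for illustration) would look like:
// hypothetical illustration: table test1 with columns name and value in the default database
String originalQuery = "select name, value from test1";
String expandedQuery = "select test1.name, test1.value from default.test1";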
Altering, dropping, and similar view operations are straightforward and are not covered in detail here; a minimal sketch is given below.
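A minimal, hedged sketch of altering and dropping a view through the HiveCatalog, assuming a HiveCatalog hiveCatalog, an ObjectPath path pointing at an existing view, and a ResolvedSchema resolvedSchema plus originalQuery/expandedQuery strings built as in the code below:
// alter view: write a new CatalogView definition to the same ObjectPath
CatalogView updated = CatalogView.of(
        Schema.newBuilder().fromResolvedSchema(resolvedSchema).build(),
        "updated view comment",   // only the comment changes in this sketch
        originalQuery,
        expandedQuery,
        Collections.emptyMap());
hiveCatalog.alterTable(path, new ResolvedCatalogView(updated, resolvedSchema), false);
// drop view: views reuse dropTable; ignoreIfNotExists = true avoids an exception if it is missing
hiveCatalog.dropTable(path, true);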
1. Maven dependencies
The dependencies are the same as in the previous example; only the mainClass is changed to this example's class, so the pom is not repeated here.
2. Code
package org.table_sql;
import static org.apache.flink.util.Preconditions.checkNotNull;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.CatalogViewImpl;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.ResolvedCatalogView;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;
import org.apache.flink.table.catalog.Column;
/**
* @author alanchan
*
*/
public class TestHiveViewByAPIDemo {
public static final String tableName = "viewtest";
public static final String hive_create_table_sql = "CREATE TABLE " + tableName + " (\n" +
" id INT,\n" +
" name STRING,\n" +
" age INT" + ") " +
"TBLPROPERTIES (\n" +
" 'sink.partition-commit.delay'='5 s',\n" +
" 'sink.partition-commit.trigger'='partition-time',\n" +
" 'sink.partition-commit.policy.kind'='metastore,success-file'" + ")";
/**
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tenv = StreamTableEnvironment.create(env);
System.setProperty("HADOOP_USER_NAME", "alanchan");
String moduleName = "myhive";
String hiveVersion = "3.1.2";
tenv.loadModule(moduleName, new HiveModule(hiveVersion));
String catalogName = "alan_hive";
String defaultDatabase = "default";
String databaseName = "viewtest_db";
String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";
HiveCatalog hiveCatalog = new HiveCatalog(catalogName, defaultDatabase, hiveConfDir);
tenv.registerCatalog(catalogName, hiveCatalog);
tenv.useCatalog(catalogName);
tenv.listDatabases();
hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap<>(), hiveConfDir) {
}, true);
// tenv.executeSql("create database "+databaseName);
tenv.useDatabase(databaseName);
tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
tenv.executeSql(hive_create_table_sql);
String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
String insertSQL2 = "insert into " + tableName + " values (2,'alan2',19)";
String insertSQL3 = "insert into " + tableName + " values (3,'alan3',20)";
tenv.executeSql(insertSQL);
tenv.executeSql(insertSQL2);
tenv.executeSql(insertSQL3);
tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
String viewName1 = "test_view_table_V";
String viewName2 = "test_view_table_V2";
ObjectPath path1= new ObjectPath(databaseName, viewName1);
//ObjectPath.fromString("viewtest_db.test_view_table_V2")
ObjectPath path2= new ObjectPath(databaseName, viewName2);
String originalQuery = "SELECT id, name, age FROM "+tableName+" WHERE id >=1 ";
// String originalQuery = String.format("select * from %s",tableName+" WHERE id >=1 ");
System.out.println("originalQuery:"+originalQuery);
String expandedQuery = "SELECT id, name, age FROM "+databaseName+"."+tableName+" WHERE id >=1 ";
// String expandedQuery = String.format("select * from %s.%s", catalogName, path1.getFullName());
System.out.println("expandedQuery:"+expandedQuery);
String comment = "this is a comment";
// create the view, first approach (TableSchema + CatalogViewImpl), which is marked as deprecated
createView1(originalQuery,expandedQuery,comment,hiveCatalog,path1);
// query the view
List<Row> results = CollectionUtil.iteratorToList( tenv.executeSql("select * from " + viewName1).collect());
for (Row row : results) {
System.out.println("test_view_table_V: " + row.toString());
}
// create the view, second approach (Schema + ResolvedSchema)
createView2(originalQuery,expandedQuery,comment,hiveCatalog,path2);
List<Row> results2 = CollectionUtil.iteratorToList( tenv.executeSql("select * from viewtest_db.test_view_table_V2").collect());
for (Row row : results2) {
System.out.println("test_view_table_V2: " + row.toString());
}
tenv.executeSql("drop database " + databaseName + " cascade");
}
static void createView1(String originalQuery,String expandedQuery,String comment,HiveCatalog hiveCatalog,ObjectPath path) throws Exception {
TableSchema viewSchema = new TableSchema(new String[]{"id", "name","age"}, new TypeInformation[]{Types.INT, Types.STRING,Types.INT});
CatalogBaseTable viewTable = new CatalogViewImpl(
originalQuery,
expandedQuery,
viewSchema,
new HashMap<>(),
comment);
hiveCatalog.createTable(path, viewTable, false);
}
static void createView2(String originalQuery,String expandedQuery,String comment,HiveCatalog hiveCatalog,ObjectPath path) throws Exception {
ResolvedSchema resolvedSchema = new ResolvedSchema(
Arrays.asList(
Column.physical("id", DataTypes.INT()),
Column.physical("name", DataTypes.STRING()),
Column.physical("age", DataTypes.INT())),
Collections.emptyList(),
null);
CatalogView origin = CatalogView.of(
Schema.newBuilder().fromResolvedSchema(resolvedSchema).build(),
comment,
// String.format("select * from tt"),
// String.format("select * from %s.%s", TEST_CATALOG_NAME, path1.getFullName()),
originalQuery,
expandedQuery,
Collections.emptyMap());
CatalogView view = new ResolvedCatalogView(origin, resolvedSchema);
// ObjectPath.fromString("viewtest_db.test_view_table_V2")
hiveCatalog.createTable(path, view, false);
}
}
3. Run results
[alanchan@server2 bin]$ flink run /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.3-SNAPSHOT.jar
Hive Session ID = ab4d159a-b2d3-489e-988f-eebdc43d9517
Hive Session ID = 391de19c-5d5a-4a83-a88c-c43cca71fc63
Job has been submitted with JobID a880510032165523f3f2a559c5ab4ec9
Hive Session ID = cb063c31-eaf2-44e3-8fc0-9e8d2a6a3a5d
Job has been submitted with JobID cb05286c404b561306f8eb3969c3456a
Hive Session ID = 8132b36e-c9e2-41a2-8f42-3fe842e0991f
Job has been submitted with JobID 264aef7da1b17598bda159d946827dea
Hive Session ID = 7657be14-8188-4362-84a9-4c84c596021b
2023-10-16 07:21:19,073 INFO org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 3
Job has been submitted with JobID 05c2bb7265b0430cb12e00237f18444b
test_view_table_V: +I[1, alan, 18]
test_view_table_V: +I[2, alan2, 19]
test_view_table_V: +I[3, alan3, 20]
Hive Session ID = 7bb01c0d-03c9-413a-9040-c89676cec3b9
2023-10-16 07:21:27,512 INFO org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 3
Job has been submitted with JobID 79130d1fe56d88a784980d16e7f1cfb4
test_view_table_V2: +I[1, alan, 18]
test_view_table_V2: +I[2, alan2, 19]
test_view_table_V2: +I[3, alan3, 20]
Hive Session ID = 6d44ea95-f733-4c56-8da4-e2687a4bf945
This article briefly introduced operating views through the Java API and provided three examples: a SQL implementation and two Java API implementations.