Three ways to operate on tables in Apache Iceberg
2020-11-09 07:35:00
There are several ways to create tables in Apache Iceberg, for example through a Catalog or by implementing the org.apache.iceberg.Tables interface. This post briefly introduces how to use them.
Using the Hive catalog
As the name suggests, the Hive catalog connects to a Hive MetaStore and stores the Iceberg tables' metadata in it. Its implementation class is org.apache.iceberg.hive.HiveCatalog. The following shows how to obtain a HiveCatalog from the hadoopConfiguration of the sparkContext:
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.hive.HiveCatalog;
Catalog catalog = new HiveCatalog(spark.sparkContext().hadoopConfiguration());
The Catalog interface defines the methods for operating on tables, such as createTable, loadTable, renameTable, and dropTable. To create a table we need to define a TableIdentifier, the table's Schema, and the partition information, as follows:
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.types.Types;
TableIdentifier name = TableIdentifier.of("default", "iteblog");
Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.IntegerType.get()),
    Types.NestedField.optional(2, "name", Types.StringType.get()),
    Types.NestedField.required(3, "age", Types.IntegerType.get()),
    Types.NestedField.optional(4, "ts", Types.TimestampType.withZone())
);
PartitionSpec spec = PartitionSpec.builderFor(schema).year("ts").bucket("id", 2).build();
Table table = catalog.createTable(name, schema, spec);
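The other Catalog methods mentioned above are used in the same way. Below is a minimal sketch, continuing with the catalog and the name identifier defined above, of loading, renaming, and dropping the table; the iteblog_new identifier is only a made-up example name:
// Load the existing table by its identifier
Table table2 = catalog.loadTable(name);
// Rename the table (the target identifier "iteblog_new" is only an example)
catalog.renameTable(name, TableIdentifier.of("default", "iteblog_new"));
// Drop the table; returns true if the table existed and was dropped
catalog.dropTable(TableIdentifier.of("default", "iteblog_new"));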
Using the Hadoop catalog
The Hadoop catalog does not rely on the Hive MetaStore to store metadata; it uses HDFS or a similar file system instead. Note that the file system must support atomic rename operations, so it is not safe to store Apache Iceberg metadata on a local file system (local FS) or on object storage (S3, OSS, etc.). The following is an example of obtaining a HadoopCatalog:
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopCatalog;
Configuration conf = new Configuration();
String warehousePath = "hdfs://www.iteblog.com:8020/warehouse_path";
HadoopCatalog catalog = new HadoopCatalog(conf, warehousePath);
Like the Hive catalog, HadoopCatalog also implements the Catalog interface, so it supports the same table operations, including createTable, loadTable, and dropTable. The following is an example of creating an Iceberg table with HadoopCatalog:
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
TableIdentifier name = TableIdentifier.of("logging", "logs");
Table table = catalog.createTable(name, schema, spec);
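The schema and spec here are the same objects defined in the Hive catalog example above. A Hadoop catalog resolves table directories under the warehouse path passed to its constructor; as a minimal sketch, the table can be loaded back and its location inspected (the directory layout shown in the comment is an assumption, namely warehousePath/namespace/tableName):
// Load the table back through the same Hadoop catalog
Table logs = catalog.loadTable(TableIdentifier.of("logging", "logs"));
// Prints the table directory, assumed to look like:
// hdfs://www.iteblog.com:8020/warehouse_path/logging/logs
System.out.println(logs.location());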
Using Hadoop tables
Iceberg also supports tables stored in directories on HDFS. As with the Hadoop catalog, the file system must support atomic rename operations, so it is not safe to store Apache Iceberg metadata on a local file system (local FS) or on object storage (S3, OSS, etc.). Tables stored in this way do not support all table operations; for example, renameTable is not supported. The following is an example of obtaining HadoopTables:
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.Table;
Configuration conf = new Configuration();
HadoopTables tables = new HadoopTables(conf);
// table_location is the directory that stores the table, for example:
String table_location = "hdfs://www.iteblog.com:8020/warehouse_path/iteblog";
Table table = tables.create(schema, spec, table_location);
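Since a Hadoop table is identified only by its location, loading it back also goes through the path instead of a TableIdentifier. A minimal sketch, reusing the table_location defined above:
// Load the Hadoop table directly from its location on the file system
Table loaded = tables.load(table_location);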
In Spark, tables can be created and loaded through HiveCatalog, HadoopCatalog, or HadoopTables. If the table identifier passed in is not a path, the HiveCatalog is chosen; otherwise Spark infers that the table is stored on HDFS and treats it as a path-based Hadoop table.
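As an illustration of this rule, here is a minimal sketch using the Iceberg Spark data source through the DataFrame reader API (assuming the Iceberg runtime jar is on the classpath); the table names simply reuse the examples above:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
// Not a path: Spark resolves "default.iteblog" through the Hive catalog
Dataset<Row> byName = spark.read().format("iceberg").load("default.iteblog");
// A path: Spark treats it as a Hadoop table stored at that location
Dataset<Row> byPath = spark.read().format("iceberg")
    .load("hdfs://www.iteblog.com:8020/warehouse_path/iteblog");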
Of course, where Apache Iceberg stores table metadata is pluggable, so we can customize how metadata is stored. For example, AWS has opened issues in the community to store Apache Iceberg metadata in Glue; see #1633 and #1608.
Unless otherwise noted, all articles on this blog are original! When reprinting, please note: reprinted from 过往记忆 (https://www.iteblog.com/)
Link to this article: Three ways to operate on tables in Apache Iceberg (https://www.iteblog.com/archives/9886.html)