Three ways to operate tables in Apache Iceberg
2020-11-09 07:35:00 【osc_tjee7s】
Apache Iceberg provides several ways to create tables, including going through a Catalog implementation or through the org.apache.iceberg.Tables interface. This post gives a brief introduction to each of them.
Using the Hive catalog
As the name suggests, the Hive catalog connects to a Hive MetaStore and keeps the Iceberg table metadata pointers there. Its implementation class is org.apache.iceberg.hive.HiveCatalog. Here is how to obtain a HiveCatalog from the hadoopConfiguration of the sparkContext:
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.hive.HiveCatalog;

Catalog catalog = new HiveCatalog(spark.sparkContext().hadoopConfiguration());
The Catalog interface defines the methods for working with tables, such as createTable, loadTable, renameTable, and dropTable. To create a table, we need a TableIdentifier, the table's Schema, and its partition spec, as follows (a short sketch of loadTable, renameTable, and dropTable follows this example):
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

TableIdentifier name = TableIdentifier.of("default", "iteblog");

Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.IntegerType.get()),
    Types.NestedField.optional(2, "name", Types.StringType.get()),
    Types.NestedField.required(3, "age", Types.IntegerType.get()),
    Types.NestedField.optional(4, "ts", Types.TimestampType.withZone())
);

PartitionSpec spec = PartitionSpec.builderFor(schema).year("ts").bucket("id", 2).build();

Table table = catalog.createTable(name, schema, spec);
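Beyond createTable, the other Catalog methods mentioned above are used in the same way. The following is a minimal sketch that reuses the catalog and name objects from the snippet above; the identifier iteblog_renamed is only an illustrative name:

// Sketch: load, rename, and finally drop the table through the same Catalog (illustrative only).
Table loaded = catalog.loadTable(name);                      // read the existing table's metadata

TableIdentifier newName = TableIdentifier.of("default", "iteblog_renamed");
catalog.renameTable(name, newName);                          // rename within the Hive MetaStore

catalog.dropTable(newName, true);                            // purge = true also removes data and metadata files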
Using the Hadoop catalog
The Hadoop catalog does not rely on the Hive MetaStore to store metadata; it uses HDFS or a similar file system instead. Note that the file system must support atomic rename, so using the local file system (local FS) or object storage (S3, OSS, etc.) to store Apache Iceberg metadata is not safe. Here is an example of obtaining a HadoopCatalog:
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopCatalog;
Configuration conf = new Configuration();
String warehousePath = "hdfs://www.iteblog.com:8020/warehouse_path";
HadoopCatalog catalog = new HadoopCatalog(conf, warehousePath);
Like the Hive catalog, HadoopCatalog implements the Catalog interface, so it supports the table operations as well, including createTable, loadTable, and dropTable. Here is an example of creating an Iceberg table with HadoopCatalog (reusing the schema and spec defined above):
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
TableIdentifier name = TableIdentifier.of("logging", "logs");
Table table = catalog.createTable(name, schema, spec);
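Because the identity of a table in a Hadoop catalog is tied to its directory under the warehouse path, renaming is not available here; loading and dropping, however, work as usual. A small sketch, reusing the catalog and name defined above:

// Sketch: the table lives under <warehousePath>/logging/logs on HDFS.
Table logs = catalog.loadTable(name);   // load the path-based table

catalog.dropTable(name, true);          // purge = true deletes the table directory's data and metadata files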
Using Hadoop tables
Iceberg also supports tables stored directly in HDFS directories. As with the Hadoop catalog, the file system must support atomic rename, so using the local file system (local FS) or object storage (S3, OSS, etc.) to store Apache Iceberg metadata is not safe. Tables stored this way do not support the full set of table operations; for example, renameTable is not supported. Here is an example of obtaining HadoopTables:
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.Table;

Configuration conf = new Configuration();
String tableLocation = "hdfs://www.iteblog.com:8020/warehouse_path/iteblog";  // directory that holds the table (illustrative path)
HadoopTables tables = new HadoopTables(conf);
Table table = tables.create(schema, spec, tableLocation);
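An existing path-based table can later be re-opened from the same location. A minimal sketch, assuming the tables and tableLocation variables defined above:

// Sketch: load the table back by its HDFS location.
Table existing = tables.load(tableLocation);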
In Spark, tables can be created and loaded through HiveCatalog, HadoopCatalog, or HadoopTables. If the table identifier passed in is not a path, the HiveCatalog is chosen; otherwise Spark infers that the table is stored as a path-based table on HDFS.
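As a hedged illustration of that rule, written against the Spark DataFrameReader API exposed by Iceberg's data source (the table name and path reuse the examples above): a plain name is resolved through the Hive catalog, while a path is treated as a HadoopTables location.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Name (not a path): resolved through the Hive catalog.
Dataset<Row> byName = spark.read().format("iceberg").load("default.iteblog");

// Path: treated as a table stored directly on HDFS (HadoopTables).
Dataset<Row> byPath = spark.read().format("iceberg")
    .load("hdfs://www.iteblog.com:8020/warehouse_path/logging/logs");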
Of course, where Apache Iceberg stores table metadata is pluggable, so custom metadata stores can be implemented. For example, AWS has opened issues in the community to store Apache Iceberg metadata in Glue; see #1633 and #1608.
Except where otherwise stated, the posts on this blog are original! When reprinting, please note: reprinted from 过往记忆 (https://www.iteblog.com/).
Link to this article: "Three ways to operate tables in Apache Iceberg" (https://www.iteblog.com/archives/9886.html)