JDBCRelation — Relation with Inserting or Overwriting Data, Column Pruning and Filter Pushdown

As a BaseRelation, JDBCRelation defines the schema of tuples (data) and the SQLContext.

As an InsertableRelation, JDBCRelation supports inserting or overwriting data.

JDBCRelation is created when JdbcRelationProvider is requested to create a BaseRelation (i.e. when reading from or writing to a table over JDBC).
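For illustration, loading a table with the JDBC data source gives a DataFrame backed by a JDBCRelation (the URL, table name and credentials below are placeholders; spark is a SparkSession, e.g. the one available in spark-shell):

// Loading a table with the JDBC data source creates a JDBCRelation under the covers
val projects = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/mydb")
  .option("dbtable", "projects")
  .option("user", "jacek")
  .option("password", "***")
  .load()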

When requested for a human-friendly text representation, JDBCRelation requests the JDBCOptions for the name of the table and the number of partitions (if defined).

JDBCRelation([table]) [numPartitions=[number]]
Figure 1. JDBCRelation in web UI (Details for Query)
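The text representation also shows up in the physical plan of a structured query over a JDBC table, e.g.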
scala> df.explain
== Physical Plan ==
*Scan JDBCRelation(projects) [numPartitions=1] [id#0,name#1,website#2] ReadSchema: struct<id:int,name:string,website:string>

JDBCRelation uses the SparkSession to return a SQLContext.

JDBCRelation turns the needConversion flag off, i.e. the rows produced by buildScan are already in Spark's internal row format and require no conversion.

Creating JDBCRelation Instance

JDBCRelation takes the following when created: RDD partitions, a JDBCOptions, and a SparkSession.

Finding Unhandled Filter Predicates — unhandledFilters Method

unhandledFilters(filters: Array[Filter]): Array[Filter]
Note
unhandledFilters is part of BaseRelation Contract to find unhandled Filter predicates.

unhandledFilters returns the Filter predicates that cannot be pushed down to the database (and so have to be evaluated by Spark after the rows are fetched).
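A minimal sketch of the idea (not a verbatim copy of Spark's code): a filter is handled when it can be compiled into a SQL WHERE fragment for the database, and everything else is returned as unhandled so Spark evaluates it after the scan. Only a few filter types are shown; compileFilter below is a hypothetical helper.

import org.apache.spark.sql.sources._

// Translate a Filter predicate into a SQL WHERE fragment, if possible
// (real code delegates to the JDBC dialect and covers many more filter types)
def compileFilter(f: Filter): Option[String] = f match {
  case EqualTo(attr, value)     => Some(s"$attr = '$value'")
  case GreaterThan(attr, value) => Some(s"$attr > '$value'")
  case IsNull(attr)             => Some(s"$attr IS NULL")
  case _                        => None  // not translatable
}

// Unhandled filters are exactly those that could not be compiled
def unhandledFilters(filters: Array[Filter]): Array[Filter] =
  filters.filter(compileFilter(_).isEmpty)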

Schema of Tuples (Data) — schema Property

schema: StructType
Note
schema is part of BaseRelation Contract to return the schema of the tuples in a relation.

schema uses JDBCRDD to resolveTable given the JDBCOptions (that simply returns the Catalyst schema of the table, also known as the default table schema).

If customSchema JDBC option was defined, schema uses JdbcUtils to replace the data types in the default table schema.
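For example (placeholder URL and table name), the customSchema option changes the data types of the listed columns only, leaving the rest as resolved from the table:

val projects = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/mydb")
  .option("dbtable", "projects")
  .option("customSchema", "id DECIMAL(38, 0), name STRING")  // partial, DDL-formatted
  .load()
projects.printSchema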

Inserting or Overwriting Data — insert Method

insert(data: DataFrame, overwrite: Boolean): Unit
Note
insert is part of InsertableRelation Contract that inserts or overwrites data in a relation.

insert writes the rows of the given DataFrame back to the table over JDBC, appending to or overwriting the existing data per the overwrite flag.
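A minimal sketch of the idea (not a verbatim copy of Spark's code; url, table and connectionProperties stand for values that come from the relation's JDBCOptions): the DataFrame is written back over JDBC with the save mode chosen by the overwrite flag.

import java.util.Properties
import org.apache.spark.sql.{DataFrame, SaveMode}

// Write the rows back to the relation's table over JDBC,
// overwriting or appending depending on the overwrite flag
def insert(url: String, table: String, connectionProperties: Properties)(
    data: DataFrame, overwrite: Boolean): Unit =
  data.write
    .mode(if (overwrite) SaveMode.Overwrite else SaveMode.Append)
    .jdbc(url, table, connectionProperties)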

Building Distributed Data Scan with Column Pruning and Filter Pushdown — buildScan Method

buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row]
Note
buildScan is part of PrunedFilteredScan Contract to build a distributed data scan (as an RDD[Row]) with support for column pruning and filter pushdown.

buildScan uses JDBCRDD to create a distributed scan over the table, fetching only the required columns and pushing the handled filters down to the database.
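The effect can be observed from the user side: with the projects DataFrame loaded over JDBC as above, only the required columns are read and the translatable predicates appear as pushed filters in the physical plan (look for PushedFilters and ReadSchema in the scan node):

import org.apache.spark.sql.functions.col

// id > 100 is pushed down to the database (filter pushdown) and only
// the id and name columns are fetched (column pruning)
projects
  .where(col("id") > 100)
  .select("name")
  .explain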