SQLConf — Internal Configuration Store

SQLConf is an internal key-value configuration store for parameters and hints used in Spark SQL.

SQLConf offers methods to get, set, unset or clear values of configuration properties, as well as accessor methods to read the current value of a configuration property or hint.

You can access a session-specific SQLConf using SessionState.

scala> spark.version
res0: String = 2.3.0

scala> :type spark
org.apache.spark.sql.SparkSession

scala> :type spark.sessionState.conf
org.apache.spark.sql.internal.SQLConf

scala> println(spark.sessionState.conf.offHeapColumnVectorEnabled)
false

import spark.sessionState.conf

// accessing properties through accessor methods
scala> conf.numShufflePartitions
res1: Int = 200

// setting properties using aliases
import org.apache.spark.sql.internal.SQLConf.SHUFFLE_PARTITIONS
conf.setConf(SHUFFLE_PARTITIONS, 2)
scala> conf.numShufflePartitions
res2: Int = 2

// unset aka reset properties to the default value
conf.unsetConf(SHUFFLE_PARTITIONS)
scala> conf.numShufflePartitions
res3: Int = 200

// You can also access the current SQLConf using SQLConf.get
import org.apache.spark.sql.internal.SQLConf
val cc = SQLConf.get
scala> cc == conf
res4: Boolean = true
Note

SQLConf is an internal part of Spark SQL and is not meant to be used directly.

Spark SQL configuration is available through RuntimeConfig, the user-facing configuration interface that you can access using SparkSession.

scala> spark.version
res0: String = 2.3.0

scala> :type spark
org.apache.spark.sql.SparkSession

scala> :type spark.conf
org.apache.spark.sql.RuntimeConfig
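
For example, you can read and change a property through RuntimeConfig (a sketch of a spark-shell session; the res numbers are illustrative):

scala> spark.conf.get("spark.sql.shuffle.partitions")
res1: String = 200

scala> spark.conf.set("spark.sql.shuffle.partitions", "2")

scala> spark.conf.get("spark.sql.shuffle.partitions")
res2: String = 2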
Table 1. SQLConf’s Accessor Methods

| Name | Parameter | Description |
| --- | --- | --- |
| adaptiveExecutionEnabled | spark.sql.adaptive.enabled | Used exclusively when EnsureRequirements adds an ExchangeCoordinator (for adaptive query execution) |
| autoBroadcastJoinThreshold | spark.sql.autoBroadcastJoinThreshold | Used exclusively in JoinSelection execution planning strategy |
| autoSizeUpdateEnabled | spark.sql.statistics.size.autoUpdate.enabled | Used when: |
| broadcastTimeout | spark.sql.broadcastTimeout | Used exclusively in BroadcastExchangeExec (for broadcasting a table to executors) |
| bucketingEnabled | spark.sql.sources.bucketing.enabled | Used when FileSourceScanExec is requested for the input RDD and to determine output partitioning and output ordering |
| cacheVectorizedReaderEnabled | spark.sql.inMemoryColumnarStorage.enableVectorizedReader | Used exclusively when InMemoryTableScanExec physical operator is requested for the supportsBatch flag |
| caseSensitiveAnalysis | spark.sql.caseSensitive | |
| cboEnabled | spark.sql.cbo.enabled | Used in: |
| columnBatchSize | spark.sql.inMemoryColumnarStorage.batchSize | Used when…FIXME |
| dataFramePivotMaxValues | spark.sql.pivotMaxValues | Used exclusively in pivot operator |
| dataFrameRetainGroupColumns | spark.sql.retainGroupColumns | Used exclusively in RelationalGroupedDataset when creating the result Dataset (after agg, count, mean, max, avg, min, and sum operators) |
| defaultSizeInBytes | spark.sql.defaultSizeInBytes | Used when: |
| exchangeReuseEnabled | spark.sql.exchange.reuse | Used when ReuseSubquery and ReuseExchange physical optimizations are executed. Note: when disabled (i.e. false), ReuseSubquery and ReuseExchange do no optimizations. |
| fallBackToHdfsForStatsEnabled | spark.sql.statistics.fallBackToHdfs | Used exclusively when DetermineTableStats logical resolution rule is executed |
| histogramEnabled | spark.sql.statistics.histogram.enabled | Used exclusively when AnalyzeColumnCommand logical command is executed |
| histogramNumBins | spark.sql.statistics.histogram.numBins | Used exclusively when AnalyzeColumnCommand is executed with spark.sql.statistics.histogram.enabled turned on (and calculates percentiles) |
| hugeMethodLimit | spark.sql.codegen.hugeMethodLimit | Used exclusively when WholeStageCodegenExec unary physical operator is requested to execute (and generate an RDD[InternalRow]): when the compiled function exceeds this threshold, whole-stage codegen is deactivated for this subtree of the query plan |
| ignoreCorruptFiles | spark.sql.files.ignoreCorruptFiles | Used when: |
| ignoreMissingFiles | spark.sql.files.ignoreMissingFiles | Used exclusively when FileScanRDD is created (and then to compute a partition) |
| inMemoryPartitionPruning | spark.sql.inMemoryColumnarStorage.partitionPruning | Used exclusively when InMemoryTableScanExec physical operator is requested for filtered cached column batches (as an RDD[CachedBatch]) |
| isParquetBinaryAsString | spark.sql.parquet.binaryAsString | |
| isParquetINT96AsTimestamp | spark.sql.parquet.int96AsTimestamp | |
| isParquetINT96TimestampConversion | spark.sql.parquet.int96TimestampConversion | Used exclusively when ParquetFileFormat is requested to build a data reader with partition column values appended |
| joinReorderEnabled | spark.sql.cbo.joinReorder.enabled | Used exclusively in CostBasedJoinReorder logical plan optimization |
| limitScaleUpFactor | spark.sql.limit.scaleUpFactor | Used exclusively when a physical operator is requested for the first n rows as an array |
| numShufflePartitions | spark.sql.shuffle.partitions | Used in: |
| offHeapColumnVectorEnabled | spark.sql.columnVector.offheap.enabled | Used when: |
| optimizerInSetConversionThreshold | spark.sql.optimizer.inSetConversionThreshold | Used exclusively when OptimizeIn logical query optimization is applied to a logical plan (and replaces an In predicate expression with an InSet) |
| parquetFilterPushDown | spark.sql.parquet.filterPushdown | Used exclusively when ParquetFileFormat is requested to build a data reader with partition column values appended |
| parquetRecordFilterEnabled | spark.sql.parquet.recordLevelFilter.enabled | Used exclusively when ParquetFileFormat is requested to build a data reader with partition column values appended |
| parquetVectorizedReaderEnabled | spark.sql.parquet.enableVectorizedReader | Used when: |
| preferSortMergeJoin | spark.sql.join.preferSortMergeJoin | Used exclusively in JoinSelection execution planning strategy to prefer sort merge join over shuffle hash join |
| runSQLonFile | spark.sql.runSQLOnFiles | Used when: |
| sessionLocalTimeZone | spark.sql.session.timeZone | |
| starSchemaDetection | spark.sql.cbo.starSchemaDetection | Used exclusively in ReorderJoin logical plan optimization (and indirectly in StarSchemaDetection) |
| stringRedactionPattern | spark.sql.redaction.string.regex | Used when: |
| subexpressionEliminationEnabled | spark.sql.subexpressionElimination.enabled | Used exclusively when SparkPlan is requested for the subexpressionEliminationEnabled flag |
| supportQuotedRegexColumnName | spark.sql.parser.quotedRegexColumnNames | Used when: |
| useCompression | spark.sql.inMemoryColumnarStorage.compressed | Used when…FIXME |
| useObjectHashAggregation | spark.sql.execution.useObjectHashAggregateExec | Used exclusively in Aggregation execution planning strategy when selecting a physical plan |
| wholeStageEnabled | spark.sql.codegen.wholeStage | Used in: |
| wholeStageFallback | spark.sql.codegen.fallback | Used exclusively when WholeStageCodegenExec is executed |
| wholeStageMaxNumFields | spark.sql.codegen.maxFields | Used in: |
| wholeStageSplitConsumeFuncByOperator | spark.sql.codegen.splitConsumeFuncByOperator | Used exclusively when CodegenSupport is requested to consume |
| wholeStageUseIdInClassName | spark.sql.codegen.useIdInClassName | Used exclusively when WholeStageCodegenExec is requested to generate the Java source code for the child physical plan subtree (when created) |
| windowExecBufferSpillThreshold | spark.sql.windowExec.buffer.spill.threshold | Used exclusively when WindowExec unary physical operator is executed |
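
As a quick illustration of Table 1, the following snippet (a sketch, assuming the spark-shell session from the first example) reads a few of the accessors next to the properties they expose:

import spark.sessionState.conf
conf.wholeStageEnabled          // spark.sql.codegen.wholeStage
conf.autoBroadcastJoinThreshold // spark.sql.autoBroadcastJoinThreshold
conf.cboEnabled                 // spark.sql.cbo.enabled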

Getting Parameters and Hints

You can get the current parameters and hints using the following family of get methods.

getConfString(key: String): String
getConfString(key: String, defaultValue: String): String
getConf[T](entry: ConfigEntry[T]): T
getConf[T](entry: ConfigEntry[T], defaultValue: T): T
getConf[T](entry: OptionalConfigEntry[T]): Option[T]
getAllConfs: immutable.Map[String, String]
getAllDefinedConfs: Seq[(String, String, String)]
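
A sketch of how the get methods differ (reusing conf and SHUFFLE_PARTITIONS from the examples above):

// by key; falls back to the registered default when the property is unset
conf.getConfString("spark.sql.shuffle.partitions") // String = 200

// by key, with an explicit fallback for unknown keys
conf.getConfString("some.unknown.key", "fallback") // String = fallback

// type-safe access through a ConfigEntry alias
conf.getConf(SHUFFLE_PARTITIONS) // Int = 200

// all explicitly-set properties as a Map
conf.getAllConfs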

Setting Parameters and Hints

You can set parameters and hints using the following family of set methods.

setConf(props: Properties): Unit
setConfString(key: String, value: String): Unit
setConf[T](entry: ConfigEntry[T], value: T): Unit
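
For example (a sketch, continuing the session above):

// by key (stringly-typed)
conf.setConfString("spark.sql.shuffle.partitions", "8")

// by ConfigEntry alias (type-safe)
conf.setConf(SHUFFLE_PARTITIONS, 4)

// in bulk, from Java Properties
import java.util.Properties
val props = new Properties()
props.setProperty("spark.sql.shuffle.partitions", "16")
conf.setConf(props)

conf.numShufflePartitions // Int = 16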

Unsetting Parameters and Hints

You can unset parameters and hints using the following family of unset methods.

unsetConf(key: String): Unit
unsetConf(entry: ConfigEntry[_]): Unit
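
For example:

conf.unsetConf("spark.sql.shuffle.partitions") // by key
conf.unsetConf(SHUFFLE_PARTITIONS)             // by ConfigEntry alias
conf.numShufflePartitions // Int = 200 (back to the default)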

Clearing All Parameters and Hints

clear(): Unit

You can use clear to remove all the parameters and hints in SQLConf.
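
For example:

conf.setConfString("spark.sql.shuffle.partitions", "8")
conf.clear()
conf.numShufflePartitions // Int = 200 (the default) again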

Redacting Data Source Options with Sensitive Information — redactOptions Method

redactOptions(options: Map[String, String]): Map[String, String]

redactOptions takes the values of the spark.sql.redaction.options.regex and spark.redaction.regex configuration properties.

For every regular expression (in that order), redactOptions finds the first match of the pattern in every option key and value and, if either matches, replaces the value with ***(redacted).
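
A sketch of the redaction (assuming the default (?i)url value of spark.sql.redaction.options.regex in Spark 2.3):

import org.apache.spark.sql.internal.SQLConf
val options = Map(
  "url" -> "jdbc:postgresql://example.com/db", // the key matches the pattern
  "dbtable" -> "t")
SQLConf.get.redactOptions(options)
// the value of "url" is replaced with the redaction text; "dbtable" is left intact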

Note
redactOptions is used exclusively when SaveIntoDataSourceCommand logical command is requested for the simple description.