Apache Iceberg version
1.3.0
Query engine
Spark
Please describe the bug 🐞
When attempting to write data to an Iceberg table using the Spark write API, a NullPointerException is thrown. The job fails immediately after initiating the write operation.
Expected behavior: the write operation should complete successfully, persisting the data to the Iceberg table without exceptions.
Steps to reproduce:
1. Set up an Iceberg catalog with the following configuration in Spark:
spark.sql.catalog.my_catalog = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.my_catalog.type = hadoop
spark.sql.catalog.my_catalog.warehouse = s3://my-bucket/my-warehouse
2. Create the table:
CREATE TABLE my_catalog.default.sample_table (
  id INT,
  name STRING
) USING iceberg;
3. Append data with the DataFrame writeTo API (a self-contained reproduction sketch follows these steps):
import spark.implicits._
val data = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")
data.writeTo("my_catalog.default.sample_table").append()
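For convenience, the steps above can be condensed into one self-contained Scala sketch (run with spark-shell or as a small application). The bucket and warehouse path are the placeholder values from this report, and the iceberg-spark-runtime jar for Iceberg 1.3.0 plus the S3 filesystem jars are assumed to be on the classpath.

import org.apache.spark.sql.SparkSession

// Configure the Hadoop-type Iceberg catalog programmatically; the same
// properties can instead be passed to spark-shell via --conf.
val spark = SparkSession.builder()
  .appName("iceberg-write-npe-repro")
  .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.my_catalog.type", "hadoop")
  .config("spark.sql.catalog.my_catalog.warehouse", "s3://my-bucket/my-warehouse")
  .getOrCreate()

import spark.implicits._

// Create the target table if it does not already exist.
spark.sql(
  """CREATE TABLE IF NOT EXISTS my_catalog.default.sample_table (
    |  id INT,
    |  name STRING
    |) USING iceberg""".stripMargin)

// The append below is where the NullPointerException is observed.
val data = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")
data.writeTo("my_catalog.default.sample_table").append()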
Observed stack trace:
java.lang.NullPointerException
at org.apache.iceberg..(:)
at org.apache.spark.sql..(:)
...
Willingness to contribute
I can contribute a fix for this bug independently
I would be willing to contribute a fix for this bug with guidance from the Iceberg community
I cannot contribute a fix for this bug at this time