Thanks for this wonderful library. We want to try S3Committer to write Spark output directly to S3. During our testing, we noticed that the job creates output files in the `/tmp/hadoop-root/mapred` folder, and the S3 committer uploads them to S3 and reports success, but the uploaded files are not visible in S3, even after a few hours. My suspicion is that we may be missing some required config; I went through the code but couldn't find one. Here is the config we use in our Spark job:

```
--conf spark.hadoop.s3.multipart.committer.conflict-mode=APPEND
--conf spark.hadoop.spark.sql.parquet.output.committer.class=com.netflix.bdp.s3.S3PartitionedOutputCommitter
```
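Since the committer stages output via the S3 multipart upload API, one way to narrow this down (a diagnostic sketch, not part of the library; the bucket name is a placeholder) is to check whether the uploads were actually completed or are still pending:

```shell
# List multipart uploads that were started but never completed.
# "my-output-bucket" is a placeholder -- substitute the real bucket name.
aws s3api list-multipart-uploads --bucket my-output-bucket

# If the paths the job claims to have committed appear in this list,
# the commit step never completed the multipart uploads, which would
# explain why the objects are invisible despite a "success" status.
```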
Is there anything else required to be able to make this work?
Thanks,
Jegan