source pyspark-csv from a data frame or a parquet table #8
We have source parquet tables with hundreds of string columns (some with up to 800), and it would be great to use something like pyspark-csv to convert them to a DataFrame (or a parquet table) with correct data types.
I'm not sure I understand your problem.
Let's say we have a DataFrame where all columns are strings.
We have a lot of use cases for pyspark-csv where the source files are parquet files. Would it be possible for pyspark-csv to accept a parquet table or an arbitrary DataFrame as input?