Implement Arrow PyCapsule Interface #752

Closed
Tracked by #828
kylebarron opened this issue Jul 11, 2024 · 2 comments · Fixed by #825
Labels
enhancement New feature or request

Comments

@kylebarron
Contributor

kylebarron commented Jul 11, 2024

Is your feature request related to a problem or challenge? Please describe what you are trying to do.

The Arrow PyCapsule Interface is a new spec to simplify Arrow interop between compiled Python libraries.

For example, there's a from_arrow_table method:

/// Construct datafusion dataframe from Arrow Table
pub fn from_arrow_table(
    &mut self,
    data: Bound<'_, PyAny>,
    name: Option<&str>,
    py: Python,
) -> PyResult<PyDataFrame> {
    // ...
}
but this accepts only pyarrow Table objects (it expects a to_batches method), so it fails on a pyarrow.RecordBatchReader or any non-pyarrow Arrow object.

A from_arrow method that looks for the __arrow_c_stream__ method would work out of the box on any Arrow-based Python library implementing this spec. That includes pyarrow Tables and RecordBatchReaders, ibis Tables (ibis-project/ibis#9143), nanoarrow objects, and hopefully soon duckdb and polars objects as well (pola-rs/polars#12530).

Implementing __arrow_c_stream__ on datafusion's exported classes means that any of those other libraries would just work with datafusion objects, without needing to know anything specific to datafusion.
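
For illustration, here's a minimal sketch of what the consuming side could look like from Python, assuming pyarrow is used as the intermediate reader (the from_arrow name is hypothetical, and pa.RecordBatchReader.from_stream needs a reasonably recent pyarrow):

import pyarrow as pa

def from_arrow(data):
    # Hypothetical helper: accept any object exporting an Arrow C stream.
    if not hasattr(data, "__arrow_c_stream__"):
        raise TypeError("expected an object implementing __arrow_c_stream__")
    # from_stream consumes the PyCapsule produced by __arrow_c_stream__,
    # regardless of which library created `data`.
    reader = pa.RecordBatchReader.from_stream(data)
    return reader.read_all()

# Works the same for a pyarrow Table, a RecordBatchReader, an ibis or
# nanoarrow object, etc., because they all implement the same protocol.
print(from_arrow(pa.table({"a": [1, 2, 3]})))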

Describe the solution you'd like

PyCapsule import has been implemented in arrow upstream, but export hasn't been implemented. I've implemented import and export in pyo3-arrow (separated for a few reasons). I'm not sure if datafusion-python wants another dependency, but the content of pyo3-arrow could also be copied into here. Exporting raw pycapsule objects could be implemented in upstream arrow if preferred.

Describe alternatives you've considered

Additional context

@timsaucer
Contributor

This turned out to be easier than I expected. Right now I have it importing using the upstream pycapsule implementation and I'm exporting DataFrame. Is there anything else you think needs exporting?

from datafusion import lit, SessionContext
import nanoarrow as na
import pyarrow as pa

ctx = SessionContext()
table = pa.table({"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"]})
print(table)
df = ctx.from_arrow_table(table).with_column("c", lit(3))
df.show()
nd = na.Array(df)
print(nd)

Produces

pyarrow.Table
a: int64
b: string
----
a: [[1,2,3,4]]
b: [["a","b","c","d"]]
DataFrame()
+---+---+---+
| a | b | c |
+---+---+---+
| 1 | a | 3 |
| 2 | b | 3 |
| 3 | c | 3 |
| 4 | d | 3 |
+---+---+---+
nanoarrow.Array<non-nullable struct<a: int64, b: string, c: int64>>[4]
{'a': 1, 'b': 'a', 'c': 3}
{'a': 2, 'b': 'b', 'c': 3}
{'a': 3, 'b': 'c', 'c': 3}
{'a': 4, 'b': 'd', 'c': 3}

I'll put up a PR after I've had time to write up some good unit tests.
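
One more illustration of the payoff, continuing from the snippet above (this assumes a recent pyarrow where pa.table accepts objects implementing __arrow_c_stream__; it is not part of the PR itself):

import pyarrow as pa

# With DataFrame exporting __arrow_c_stream__, pyarrow can consume it
# directly, with no to_batches() call and no datafusion-specific code.
round_tripped = pa.table(df)
print(round_tripped.schema)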

@timsaucer
Contributor

Dropping a note for myself: the requested schema is currently ignored. That needs to be resolved before committing.
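
For reference, the spec lets the consumer pass a requested_schema capsule into __arrow_c_stream__, and the producer may cast to it or ignore it. A small sketch of how a consumer requests a schema through pyarrow (assuming a recent pyarrow; whether the cast actually happens depends on the producer):

import pyarrow as pa

table = pa.table({"a": [1, 2, 3]})
requested = pa.schema([("a", pa.int32())])
# The schema argument is forwarded to __arrow_c_stream__ as requested_schema.
reader = pa.RecordBatchReader.from_stream(table, schema=requested)
print(reader.schema)  # a: int32 if the producer honors the request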
