
Loading 600K features from a parquet file #4589

Open
dmartinol opened this issue Sep 30, 2024 · 1 comment
Comments

dmartinol (Contributor) commented:

Context

Starting from the sample files of the OpenShift AI tutorial - Fraud detection example (repo link), which include 600,000 rows modelling a fraud detection dataset.

Objective

I want to model the same training dataset as a Feast offline store using a FileSource and then fetch all the historical features at once.

See all the details to replicate the issue here: https://github.com/dmartinol/feast_issue_600K

dmartinol (Contributor, Author) commented:
