Clone and run Dask-cuDF tests #8
Conversation
echo "Cloning cudf@{CUDF_VERSION}" | ||
|
||
if [ ! -d "cudf" ]; then | ||
git clone https://github.com/rapidsai/cudf.git --branch $CUDF_VERSION |
Something I'm trying to think through: the commit the nightly wheel was built from may not match the commit we test against, whenever commits are merged between the time the nightly wheel build runs and the time this code runs.
If there are any changes to the tests that require new code in dask_cudf, then they'll fail here.
Ideally we'd check out the exact commit the nightly wheel was built against, but I haven't found an easy way to do that yet.
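One possibility, sketched below: if the installed nightly exposes the commit it was built from, pin the clone to that commit and fall back to the branch tip otherwise. Note the `__git_commit__` attribute here is an assumption, not a confirmed cudf API; this is a sketch, not a tested implementation.

```bash
# Hedged sketch: pin the cudf clone to the commit the installed nightly wheel
# reports, if it reports one. The __git_commit__ attribute is an assumption,
# not a confirmed cudf API; the branch fallback matches the current behavior.
WHEEL_COMMIT=$(python -c "import cudf; print(getattr(cudf, '__git_commit__', ''))")

if [ ! -d "cudf" ]; then
    git clone https://github.com/rapidsai/cudf.git --branch "$CUDF_VERSION"
fi

if [ -n "$WHEEL_COMMIT" ]; then
    # Check out the exact commit the wheel was built against.
    git -C cudf checkout "$WHEEL_COMMIT"
fi
```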
> If there are any changes to the tests that require new code in dask_cudf, then they'll fail here.
Aha, great point. I don't think that's a showstopper, but we definitely want to use the same commit if we can find a way to do that.
Agreed, not a blocker.
Overall, I think this is the right approach: clone the downstream repo and run those tests. There's a potential issue around ensuring the version installed from the nightly wheel build matches the version under test (#8 (comment)).
FYI, CI isn't set up to run on PRs currently. Trying this out locally, I do see a failure in one of the tests (I'm very confused about the failure, but haven't looked closely; it seems to be failing on the warning we expect it to throw?).
I don't see that failure, but I do see a failure in a different test.
Sorry, just looked closely at this. The failure may actually make some sense: I think the problem is that we throw two warnings before the "correct" API is called. It's a bit strange that it doesn't fail in CI (maybe a different pytest version?). Related PR: rapidsai/cudf#18038
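If it helps to narrow this down, a rough way to compare the local run with CI might look like the sketch below. The test directory and the `-k` selector are placeholders, not the actual failing dask_cudf test.

```bash
# Hedged sketch for narrowing the warning failure down locally; the test
# directory and -k selector are placeholders, not the actual failing test.
python -m pytest --version        # compare against the pytest version used in CI
python -m pytest cudf/python/dask_cudf/dask_cudf/tests \
    -k "<failing_test_name>" -x \
    -W "error::FutureWarning"     # surface the expected warning as an error
```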
This should be good to go. Depending on whether or not rapidsai/cudf#18038 and rapidsai/cudf#18045 are merged, we might have failures in the nightly run. I'll plan to merge this shortly, and move over to dask-cuda next.
Thanks @TomAugspurger!
Rough proposal/POC for running dask-cudf pytests.
Not sure if this is the "correct" approach, but I figured a rough PR would provide a reasonable space to start a discussion.
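For reference, the overall flow proposed here looks roughly like the sketch below. The nightly wheel index URL, package name, branch, and test path are assumptions about this setup, not values taken verbatim from the PR.

```bash
# Rough sketch of the proposed flow (index URL, package name, branch, and test
# path are assumptions, not taken verbatim from this PR).
set -euo pipefail

CUDF_VERSION="branch-25.04"   # placeholder branch name

# Install the nightly dask-cudf wheel.
pip install --pre --extra-index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple \
    "dask-cudf-cu12"

# Clone the matching cudf branch to get the dask_cudf test suite.
echo "Cloning cudf@${CUDF_VERSION}"
if [ ! -d "cudf" ]; then
    git clone https://github.com/rapidsai/cudf.git --branch "$CUDF_VERSION"
fi

# Run the dask_cudf tests against the installed nightly wheel.
python -m pytest cudf/python/dask_cudf/dask_cudf/tests
```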