
tau_p #431

Merged
merged 13 commits into from
Sep 17, 2024

Conversation

ariellellouch
Contributor

@ariellellouch ariellellouch commented Sep 9, 2024

Description

TAU-P transform (#429)

I cannot get it to pass auto-tests because numba is not installed.

Checklist

I have (if applicable):

  • [x] referenced the GitHub issue this PR closes.
  • [x] documented the new feature with docstrings or appropriate doc page.
  • [x] included a test. See testing guidelines.
  • [x] added my name to the contributors page (docs/contributors.md).
  • [x] added the "ready_for_review" tag once the PR is ready to be reviewed.

@ariellellouch ariellellouch added ready_for_review PR is ready for review transform Related to transform operations labels Sep 9, 2024
@d-chambers
Contributor

Nice work so far. Two things to get you unstuck.

  1. Including numba as an optional dependency

To do this you just need to add numba to the extras section of pyproject.toml, where you will already find obspy and xarray. Then the main test suite will install numba and you should be good to go. However, we also test everything with the minimal set of dependencies, so those tests will still fail.
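[Editor's note] The extras entry described above might look like the following sketch; the group name and the exact set of existing entries in dascore's pyproject.toml are assumptions, so check the real file before copying this.

```toml
# Hypothetical fragment of pyproject.toml; the actual optional-dependency
# group name and existing entries in dascore's file may differ.
[project.optional-dependencies]
extras = [
    "obspy",
    "xarray",
    "numba",
]
```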

  2. Mark numba-dependent tests

To ensure the tests that need numba get skipped when numba isn't installed, you need to mark them as such with pytest.importorskip. You can see examples of how to do this here.
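[Editor's note] A minimal sketch of the pytest.importorskip pattern described above. In a real dascore test module you would pass "numba"; here a stdlib module is used so the sketch runs anywhere.

```python
import pytest

# In a real test module for numba-dependent code you would write:
#     numba = pytest.importorskip("numba")
# which skips every test in the file when numba cannot be imported.
# Here we demonstrate the call with a module that is always available.
mod = pytest.importorskip("math")


def test_module_was_imported():
    # importorskip returns the imported module on success.
    assert mod.sqrt(4) == 2.0
```

On success, importorskip returns the imported module; on failure it raises a skip at collection time, so every test in the file is skipped rather than erroring.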

Let me know if you run into any other issues.

@ariellellouch
Contributor Author

ariellellouch commented Sep 10, 2024

The problem is that most of my tests require numba, because running tau-p requires numba. I tested them locally and they pass. I could also add a non-numba version just for testing, but that seems excessive.
Anyhow, I am pushing a working version, although I cannot get the code coverage check to pass because of these issues.


codecov bot commented Sep 10, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.84%. Comparing base (8fa6680) to head (69b8ad8).
Report is 3 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff            @@
##           master     #431    +/-   ##
========================================
  Coverage   99.84%   99.84%            
========================================
  Files         112      114     +2     
  Lines        9037     9142   +105     
========================================
+ Hits         9023     9128   +105     
  Misses         14       14            
Flag       Coverage Δ
unittests  99.84% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@d-chambers
Contributor

hmmm... yes, I hadn't thought about the coverage implications for numba code. I have a lot of meetings today and tomorrow but I should have some time to take a closer look towards the end of the week.

@d-chambers
Contributor

Hey @ariellellouch,

I am going through the PR now and it seems like it might actually be worth adding a decorator for optional jit'ing like we discussed in #429. I will add that to this PR then you can make sure I didn't mess up any of your code ;)

You left this comment:

## This is causing me problem with get_example_2, but I don't
## know how to fix. I used the same code in dispersion.py which
## is also a problem... would be useful to have a function that
## returns dt in seconds.
dt = (time[1] - time[0]) / np.timedelta64(1, "s")

So I thought I would show you a bit more idiomatic way to do it (I can tweak this when I add the jit decorator, just want you to know for later):

from dascore.utils.time import to_float

dist = patch_cop.get_coord("distance")
# Since we assume evenly sampled time coord this call will enforce it.
time = patch_cop.get_coord("time", require_evenly_sampled=True)
dt = to_float(time.step)

We could probably make it even more concise, but I don't think it's too bad to get the time step in seconds that way.

@d-chambers
Contributor

Ok, take a look and let me know what you think. I implemented the jit decorator and shrank the tests a bit to make them faster. Also, I did end up running a small dataset in pure Python mode so as not to take a coverage hit. A bit pointless, I know, but we have put a lot of work into getting very high coverage, so I didn't want to spoil that.
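[Editor's note] For readers following along, an "optional jit" decorator like the one discussed here could look like the sketch below. The name `maybe_jit`, the fallback behavior, and the example function are illustrative assumptions, not the actual dascore implementation.

```python
import functools


def maybe_jit(func):
    """Apply numba.njit when numba is importable, else return func unchanged.

    A sketch of an optional-jit decorator; the real dascore implementation
    may differ in name, options, and fallback strategy.
    """
    try:
        import numba
    except ImportError:
        # numba not installed: run the pure Python version.
        return func
    jitted = numba.njit(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return jitted(*args, **kwargs)
        except Exception:
            # Fall back to pure Python if compilation fails for these types.
            return func(*args, **kwargs)

    return wrapper


@maybe_jit
def slant_sum(x):
    # Placeholder for a numba-friendly inner loop (e.g. a tau-p slant stack).
    total = 0.0
    for v in x:
        total += v
    return total
```

Because the pure Python path is always reachable, coverage can be measured without numba, which matches the "small dataset in pure python mode" approach described above.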

I was mainly focused on the code, as I haven't implemented a TauP transform before. I would love to dive into it more, but I am scrambling to finish my dissertation, so if anyone else has experience with this and wants to review the algorithm, feel free (e.g. @eileenrmartin, @aissah, @ahmadtourei).

@ariellellouch
Contributor Author

ariellellouch commented Sep 15, 2024

LGTM, but of course it would be good if someone double-checks my operators :)

@ahmadtourei
Collaborator

I have also skimmed through the code and it looks great to me. Thanks for your work, @ariellellouch!

@d-chambers
Contributor

Cool, feel free to mash the green "squash and merge" button when you are ready @ariellellouch

@ariellellouch ariellellouch merged commit 3858a11 into master Sep 17, 2024
15 checks passed
@d-chambers d-chambers deleted the tau_p branch September 17, 2024 13:08
@d-chambers d-chambers mentioned this pull request Sep 28, 2024
5 tasks