refactor(python): Clean up conversion utils #11789
Conversation
Many of the functions accept … Note that the only place in the code base where …
Thanks for the attention to detail here. A few comments from me.
I do think it is important to run benchmarks with these before we merge. These are called a lot upon instantiation.
Of course. Are there standardized benchmarks you want to see?
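For reference, a minimal sketch of the kind of micro-benchmark that could answer this, assuming a conversion helper along the lines of the py-polars datetime conversion utils (the helper name and body below are made up for illustration and may not match the real code):

```python
# Hypothetical micro-benchmark sketch; datetime_to_ns stands in for the
# actual conversion util in py-polars and may not match its real signature.
import timeit
from datetime import datetime, timezone


def datetime_to_ns(dt: datetime) -> int:
    """Convert an aware datetime to an integer nanosecond timestamp."""
    return int(dt.timestamp() * 1_000_000) * 1_000


dt = datetime(2023, 10, 16, 12, 30, tzinfo=timezone.utc)

# Run many iterations, since a single call is in the sub-microsecond range.
elapsed = timeit.timeit(lambda: datetime_to_ns(dt), number=1_000_000)
print(f"{elapsed / 1_000_000 * 1e9:.1f} ns per call")
```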
proposed implementation:
Thanks! I made a few more minor improvements.
Turns out that `seconds * 1_000_000_000 + microseconds * 1_000` is faster than `1_000 * (seconds * 1_000_000 + microseconds)`.
This can be merged!
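The claim about operand ordering is easy to check with `timeit`; a rough sketch (not the benchmark actually used for this PR, and results will vary by machine and Python version):

```python
# Illustrative comparison of the two orderings discussed above.
import timeit

setup = "seconds, microseconds = 1_697_459_400, 123_456"

t1 = timeit.timeit(
    "seconds * 1_000_000_000 + microseconds * 1_000", setup=setup, number=5_000_000
)
t2 = timeit.timeit(
    "1_000 * (seconds * 1_000_000 + microseconds)", setup=setup, number=5_000_000
)
print(f"two multiplications + add: {t1:.3f}s")
print(f"parenthesized form:        {t2:.3f}s")
```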
This is not an urgent or significant PR.
I saw PR #11759 and noticed some code repetitions. A `SECONDS_PER_DAY` variable was introduced there, which motivated me to clean up the repetitions. Also, the changes here are in the spirit of #11693, but much smaller in scope and only related to the Python side.
Changes summary:
- Remove `_fromtimestamp`, which was introduced 6 months ago and is not used or linked to.
- Use `SECONDS_PER_DAY` wherever possible.
- Align `_timedelta_to_pl_timedelta` with `_datetime_to_pl_timestamp`. This leads to a marginal perf improvement for the former (~10%).
- A couple of the computations could be encapsulated in a function, but the call overhead is so large that it would lead to a ~20% perf decrease. There were 2 PRs in this file that led to 15% and 35% perf increases, so encapsulation did not look justifiable (see the sketch after this list).
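To illustrate the encapsulation trade-off, here is a hedged sketch: the names `to_ns_inline`, `to_ns_helper`, and `_scale` are made up for this example and are not functions from the code base. The point is that extracting a cheap arithmetic expression into its own function adds a Python call per conversion, which can dominate on hot instantiation paths.

```python
# Illustrative only: compares inlining the nanosecond arithmetic against
# wrapping it in a tiny helper, to show why the extra call can be costly.
# Names are hypothetical, not from py-polars.
import timeit
from datetime import timedelta

td = timedelta(days=3, seconds=7, microseconds=123)


def _scale(seconds: int, microseconds: int) -> int:
    return seconds * 1_000_000_000 + microseconds * 1_000


def to_ns_inline(td: timedelta) -> int:
    # Arithmetic written out directly at the call site.
    return (td.days * 86_400 + td.seconds) * 1_000_000_000 + td.microseconds * 1_000


def to_ns_helper(td: timedelta) -> int:
    # Same arithmetic, but routed through a small helper function.
    return _scale(td.days * 86_400 + td.seconds, td.microseconds)


print("inline:", timeit.timeit(lambda: to_ns_inline(td), number=1_000_000))
print("helper:", timeit.timeit(lambda: to_ns_helper(td), number=1_000_000))
```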