Compare against sum of bests #706
Conversation
…ly since it would be querying the base Run for its duration
…since you can't make that the base run.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No coverage information.
Very interesting way of implementing this! I didn't even know about delegate_missing_to! Adding that to my toolkit :P
I am totally for the ability to compare to sums of bests. However, doing it in this way, while super interesting, I think it works against us in maintenance cost. It feels pretty hard to intuit what a SumOfBestRun can do. However, this could just be nerves talking because this is a new pattern for me.
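(For context, here is a rough sketch of the delegation pattern being discussed; the names are illustrative and this is not the PR's actual code. delegate_missing_to forwards any method the wrapper doesn't define to the wrapped run, so a SumOfBestRun can mostly be treated like a Run while overriding only what needs to differ.)

require "active_support/core_ext/module/delegation"

class SumOfBestRun
  attr_reader :run

  # Any method not defined on this class is forwarded to the underlying run.
  delegate_missing_to :run

  def initialize(run)
    @run = run
  end

  # Methods whose behavior must differ from the base run get overridden
  # explicitly, e.g. a duration computed from best segment times instead of
  # the run's own duration.
end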
Would you be open to conceptually changing the entire comparison system from being a comparison of Runs, and instead define it as a comparison of lists of Segments? As you point out, SumOfBestRun doesn't have some of the qualities of a run, like videos, histories, possible timesaves, collapsed segments, etc., and I think continuing to lean heavily on Run will cause problems we can't predict due to how all these (and future) features of a run expect to behave.
But the thing we are actually using inside the run, the list of segments, doesn't have any extraneous behavior that we need to fake or worry about. What if we instead subclassed or delegated to Array?
class Segments < Array # or delegate? not sure the difference
  def duration(timing)
    sum { |segment| segment.duration(timing) }
  end

  # ... etc.
end
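(One small wrinkle with subclassing Array, for what it's worth: Array.new doesn't take an enumerable, so construction would look something like the lines below. This is hypothetical usage, where run is some existing Run and Segment#duration(timing) behaves as in the snippet above.)

# Array.[] is inherited by subclasses, so this returns a Segments instance;
# splatting an ActiveRecord::Relation converts it to an array via to_a.
segments = Segments[*run.segments]

# Total duration across all segments; :real is just a stand-in for whatever
# timing value Segment#duration actually accepts.
total = segments.duration(:real)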
In some ways it's even more crazy, but I think it would help us tie a bow around "a thing you can compare to", and seems conceptually easier to create even more types of comparisons (averages, sum of community best, etc.) in the future.
You would have to get your hands dirty refactoring all the comparison code, though, and there may be some gotchas displaying the right UI. Also Array might not even be the right choice since a lot of the time we'll be dealing with ActiveRecord::Relations.
Totally open to counterarguments here, I'm just spouting brain noise right now and haven't given this a lot of thought. Brainstorming welcome :D
I agree it introduces some issues with regard to adding new functionality and the like going forward. You'd have to try to think about "Can/Should this apply to a […]?". I'm trying to envision how to simplify comparisons in the way you're talking while also still allowing for comparing parts of […].
To throw my 2 cents in, what about a "Comparable" model concern? Then all the things that are related to comparing anything run-like would be in one place, and it would make future comparisons easier. Then you could have some of them just be PORO classes in the models folder.
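(A very rough sketch of what that concern could look like; the module name and helpers are hypothetical, and it's called ComparableRun here only to avoid colliding with Ruby's built-in Comparable module. Anything that exposes #segments would get the shared comparison math.)

require "active_support/concern"

module ComparableRun
  extend ActiveSupport::Concern

  # Total duration for the given timing, mirroring the Segments example above.
  def duration(timing)
    segments.sum { |segment| segment.duration(timing) }
  end

  # Positive when self is slower than the other comparable object.
  def duration_diff(other, timing)
    duration(timing) - other.duration(timing)
  end
end

# A PORO in the models folder could then include it, e.g.:
#
#   class SumOfBest
#     include ComparableRun
#
#     def segments
#       # per-segment best times would be built here
#     end
#   end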
🤔 Good points all around. I do like the idea of extracting common stuff to a concern, but idk if it would solve this completely -- like @moorecp says there are a lot of things we care about when comparing Run-to-Run like videos and the "swap comparison" button, that we can't deal with when comparing Run-to-Segments. Unless I'm misunderstanding how you want to use concerns.
Hmm... I like this line of thinking, but with a twist. What if we made it so comparing to your own sum-of-best actually is a Run-to-Run comparison (comparing the run against itself), and we have a parameter that tells us which segments to compare against (PB by default, but you can specify to compare against sum-of-best)? So where our URLs now look like […], we support a param like […], so that comparing to your own sum-of-best is just a matter of specifying […].
Would that work? We still need some custom code, but a lot of it seems like it would melt away since we are still dealing with a real run on the other end. And if some of the UI isn't perfect for the first version because of that, it's not a big deal. In the future we could add support for also specifying a […].
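(A loose sketch of the param idea; the actual URL shape and param name aren't shown here, so compare_to and best_segments below are purely hypothetical placeholders.)

class RunsController < ApplicationController
  def show
    @run = Run.find(params[:id])

    # Hypothetical ?compare_to= query param: the viewed Run stays the same,
    # only the segments used for comparison change. PB segments by default.
    @comparison_segments =
      if params[:compare_to] == "sum_of_best"
        @run.best_segments # hypothetical helper returning best (gold) segment times
      else
        @run.segments
      end
  end
end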
I think I hear what you're saying, but I feel like that gets us into a similar situation as we have with […]. This might fall apart slightly if you want to implement something like a comparison against a community sum of best like you mentioned, as there is no run there. In thinking about @BatedUrGonnaDie's proposal, in theory you could just have fields that say this feature is supported while this feature isn't supported for this thing we're using to compare. It might be okay, or it might end up super messy. I haven't put much thought/research into it.
If you wanted to go the concern route, then it would almost definitely still end up with a SumOfBestRun. The concern was more to make that scalable to future comparisons we may have, such that any model/object that includes it would be able to be compared instantly (or near instantly, by stubbing out some known methods for limitations).
I wanted the ability to easily analyze my runs against my best splits. I'm not sure if this is something desired in the app itself or not, though. My feelings won't be hurt if the added complexity isn't desired.
Things of note:
- I removed the gold_timeline from the sum of best split timeline. It was just a big empty bar, since every split is gold and the base run would never show anything there.
- I removed the Swap Comparison button because, at least as of right now, you can't view a SumOfBestRun as a Run, so you can't make that the base run for the #show action. I could add support for that, but it seems unnecessary.
- A SumOfBestRun won't display a video, because it doesn't make sense.