Suggested edits for PR #40 (#77)
I'm still working through the changes, but I wanted to say that I think this style of organizing the data is incredibly helpful for my mental model. The fact that it cuts down on all the extra `useState`/`useEffect` hooks in `HomePage` is also a huge plus!
@jsoules I think there are a lot of improvements here, but my purpose in creating #40 was to have a minimal starting point to be able to build other features before committing to one way or another of structuring the internal data model. This now conflicts with #53 and #65 in some serious ways (and also #74). Can I propose that we first merge #40 and the things that depend on it... then reincorporate your edits in a way that still preserves the functionality of #53 and #65 (and ideally also keeps #74 intact)? Aside from that, my preference would be to structure things so that an analysis/project at its core is a collection of files (main.stan, data.json, sampling_opts.json, meta.json, etc.) and then build on top of that an explicit data structure for all the widgets to use. That structure would include getters and setters consistent with what you are doing here, but underlying it all we have a collection of files. In this way we have the best of both worlds... and we can save/load from gists / zips / etc. without having to maintain separate serialization/deserialization code that would need to evolve with adjustments to the features the project supports. This is something we can discuss further... but as I said, my purpose in #40 was to defer that decision so I could build those other features. The way this PR is now structured, it would require those features to be adapted in some unnatural ways.
If you would prefer not to merge #40 and those PRs first, I can recreate those on top of this one instead... and then separately suggest changes. I feel like the most important thing is to be able to move forward with at least some version of the data model merged into main. That way we can solidify the functionality conventions (query parameters, saving to gist, import/export zip, etc.) as discussed in our last meeting, and then the internal implementation can be refactored afterwards. But if we don't have something merged into main, then the various things in the pending PRs won't have a home. @jsoules @WardBrian Let me know what you'd like to do. Thanks.
I personally feel much more confident in my ability to work with and test this version of the data model. I've had a few other things come up this week which have meant I haven't been working on this directly, but I have done a bunch of reading on contexts/reducers, so I am ready to jump back in once this is merged. I'm not sure I understand why PRs would need to be stacked like you describe, though? I had thought that was a side effect of the review on #40 taking some time, not the intended workflow.
I agree it's best not to stack the PRs. But sometimes the conflicts are too significant to do it a better way, or I'm not skilled enough. I like to be able to develop without waiting for pending things to be merged, even if it means I need to later resolve conflicts or recreate the code. I'm okay with merging this into main as is, and then going forward as I suggested. I'd rather not spend a lot of time picking through this, because I'll have some proposed adjustments after the others have been merged - it will be easier to motivate those in the larger context.
@jsoules you currently have this pointing toward the branch for #40; thoughts on re-pointing it to main and merging?
Hi, just catching up here.
I would be fine to merge this into #40 and then #40 into main, or merge this directly; it's kind of a wash since I branched off #40 originally. @magland apologies for not responding right away to your comments from Tuesday--with Wednesday being off and me having to do a presentation yesterday morning, I've only had a little bit of time to look, and I wanted to have a proposal for making #53 fit off this base first. (I'll have something in that vein up in a couple hours.)
Sounds good, thanks Jeff!
It sounds like we have a consensus around this, and I think the easiest thing to do will be to merge this into main directly, so I'm going to do that. |
Sounds good. I can take a crack at #65 |
I went back to re-review PR #40 and think about what I would change with the data model. Rather than continuing to go back and forth with comments or suggest a later refactor, it seemed like it'd be more straightforward to implement some suggested changes here and merge those into that branch before merging that PR.
The goal of these suggestions is to define the data model more clearly (while keeping it extensible) and to centralize the logic for managing it (in the reducer). Along the way, I believe I've simplified instantiating the context and made the kv-store unneeded.
All code related to the data model continues to live in the `gui/src/app/SPAnalysis` directory. The data model changes are in `SPAnalysisDataModel.ts`. This part is a little tricky: I've defined the `SPAnalysisDataModel` as composed of a base, a metadata section, and an ephemera section.

`SPAnalysisBase` is a list of known files, plus the `SamplingOpts` object. The known file names are defined in an `enum` (`SPAnalysisKnownFiles`), and there's a type whose keys are only the valid file names (`SPAnalysisFiles`). This design enforces that the backing model works with a defined list of files (that's easily updated) and will not permit keys for unknown file names (no possible typos).
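To make that concrete, here's a minimal sketch of the enum-plus-mapped-type pattern; the member names and the stand-in `SamplingOpts` shape are illustrative, not the file's actual contents:

```typescript
// Illustrative members; the real enum in SPAnalysisDataModel.ts defines
// the project's actual file list (main.stan, data.json, etc.).
enum SPAnalysisKnownFiles {
  STANFILE = "stanFileContent",
  DATAFILE = "dataFileContent",
}

// Mapped type: the only permitted keys are the enum members above, so a
// misspelled file name is a compile-time error, and adding a new file is
// a one-line change to the enum.
type SPAnalysisFiles = {
  [filetype in SPAnalysisKnownFiles]: string;
};

// Stand-in for the real sampling-options type.
type SamplingOpts = { numChains: number };

// The base: the known files plus the SamplingOpts object.
type SPAnalysisBase = SPAnalysisFiles & { samplingOpts: SamplingOpts };
```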
`SPAnalysisMetadata` currently only has the title, but as we anticipate adding more metadata, it's easier to just get the object started now rather than special-casing the title (and other potential fields which may be intended to be stored separately).

The `SPAnalysisEphemera` object represents backing data that couldn't reasonably be transferred to another session. I anticipate this could include the server connection and the compilation status of the model(s), but the most important thing for right now is that it's where the edited versions of the editor files live. These use the same `SPAnalysisFiles` type which is also used to define the list of known files in the main model (so matching edited versions will always exist in the `ephemera` object), no matter what files get added to the file list. When serializing, we skip the `ephemera` object (since it's meant to be ephemeral), and when deserializing, we populate the "edited" or "working" versions of the files into the `ephemera` object.
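A rough sketch of that round trip, building on the types above (the `meta` field and both function shapes are my assumptions, not the PR's actual code):

```typescript
// Assumed overall shape: persisted base + metadata, plus the ephemera section.
type SPAnalysisDataModel = SPAnalysisBase & {
  meta: { title: string };
  ephemera: SPAnalysisFiles;
};

// Serializing drops the ephemera section entirely.
const serialize = (model: SPAnalysisDataModel): string => {
  const { ephemera, ...persistent } = model;
  return JSON.stringify(persistent);
};

// Deserializing rebuilds ephemera by copying the saved file contents in
// as the initial "working" versions.
const deserialize = (text: string): SPAnalysisDataModel => {
  const persistent = JSON.parse(text) as Omit<SPAnalysisDataModel, "ephemera">;
  const ephemera = {} as SPAnalysisFiles;
  for (const f of Object.values(SPAnalysisKnownFiles)) {
    ephemera[f] = persistent[f];
  }
  return { ...persistent, ephemera };
};
```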
The logic for maintaining this state lives in the `SPAnalysisReducer` file in the same directory. This reducer currently defines seven verbs (five of which are very straightforward), sketched as a discriminated union below:

- `loadStanie` atomically sets all the data from a saved "Stanie" example (this removes the implementation from the `LeftPanel` component)
- `retitle` changes the analysis title
- `setSamplingOpts` sets the updated sampling opts
- `clear` resets the data model to the initial model (including wiping out the ephemera)
- `loadLocalStorage` is just used for the current deserialization-on-refresh functionality
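As one way to picture the full verb set, here is a guess at the action union (payload shapes are assumptions; `Stanie` stands in for whatever type the saved examples use):

```typescript
// Hypothetical payload shapes for the seven verbs described above.
declare type Stanie = unknown; // stand-in for the saved-example type

type SPAnalysisAction =
  | { type: "loadStanie"; stanie: Stanie }
  | { type: "retitle"; title: string }
  | { type: "setSamplingOpts"; opts: SamplingOpts }
  | { type: "clear" }
  | { type: "loadLocalStorage"; state: SPAnalysisDataModel }
  | { type: "editFile"; filename: SPAnalysisKnownFiles; content: string }
  | { type: "commitFile"; filename: SPAnalysisKnownFiles };
```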
This leaves `editFile` and `commitFile`, which interact with the `ephemera` part of the data model described above. Both of these verbs take a filename, constrained to be one of the `SPAnalysisKnownFiles`, so they will apply to any known file type we define in the data model but will also have consistency checking anywhere they're used. `editFile` replaces the "edited file state" tracking that had lived in the `HomePage` component; it stores the new file value for any of the (enum-defined) files we're tracking. `commitFile` implements the "save" function. It only takes the known-file key, and it copies the version from the `ephemera` object into the main object of the data model, thus persisting it (and placing it where it'll need to be for future serialization). Future data-model-management logic should also go in this reducer.
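For concreteness, a minimal sketch of how those two verbs could be handled, reusing the assumed types from the sketches above (the real `SPAnalysisReducer` may differ):

```typescript
const SPAnalysisReducer = (
  s: SPAnalysisDataModel,
  a: SPAnalysisAction,
): SPAnalysisDataModel => {
  switch (a.type) {
    case "editFile":
      // Record the new working copy in the ephemera section only;
      // the persisted file content is untouched until a commit.
      return { ...s, ephemera: { ...s.ephemera, [a.filename]: a.content } };
    case "commitFile":
      // "Save": copy the working version out of ephemera into the
      // persisted part of the model, where serialization will see it.
      return { ...s, [a.filename]: s.ephemera[a.filename] };
    // ...the other five verbs would be handled here...
    default:
      return s;
  }
};
```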
Finally, I've simplified the context and its instantiation (see the `SPAnalysisContextProvider` component). I define the context as an instance of the `SPAnalysisDataModel` and the dispatch function of an `SPAnalysisReducer` that manages its state. The `SPAnalysisContextProvider` provides this context by instantiating the reducer (via the `useReducer` hook) and then wrapping its children in a context-provider tag whose value is the instantiated reducer. (Right now it also has the local-storage save-and-load-on-page-refresh code, but we intend to rework this in the future.)
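In outline, and again with assumed names (`initialDataModel` is a stand-in for however the default model is constructed), the provider could look something like:

```tsx
import {
  createContext,
  Dispatch,
  FunctionComponent,
  PropsWithChildren,
  useReducer,
} from "react";

// Stand-in: however the project constructs its default (empty) model.
declare const initialDataModel: SPAnalysisDataModel;

// The context carries the current model and the reducer's dispatch.
type SPAnalysisContextType = {
  data: SPAnalysisDataModel;
  update: Dispatch<SPAnalysisAction>;
};

export const SPAnalysisContext = createContext<SPAnalysisContextType>({
  data: initialDataModel,
  update: () => {},
});

export const SPAnalysisContextProvider: FunctionComponent<PropsWithChildren> = ({
  children,
}) => {
  const [data, update] = useReducer(SPAnalysisReducer, initialDataModel);
  return (
    <SPAnalysisContext.Provider value={{ data, update }}>
      {children}
    </SPAnalysisContext.Provider>
  );
};
```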
Currently the `SPAnalysisContextProvider` component wraps the main children of the `HomePage` component, but I think it properly belongs higher in the rendering chain for the main page content; I just didn't want to mess with the routing stuff in this PR.
The remaining significant changes are in the `HomePage.tsx` and `LeftPanel.tsx` components. For the latter, the update logic for loading an example has been simplified (it now lives in the reducer). For the former, the component becomes more lightweight because it no longer needs to track state separately from the overall data model state, and managing that state is now a matter of straightforward reducer calls.

Please let me know if these proposed changes make sense. If so, I'd be happy to merge this into the `general-data-model` branch (the target of this PR) and then merge PR #40.