Currently, experiment results are semi-manually coded using Bayesian-style reasoning to come up with the weights.
However, it's possible to do this with a more rigorous approach that makes use of well-established graph-based modeling systems such as Bayesian networks.
Work on this started a few months ago, and I had a very fruitful conversation about the topic with Joss, who provided key insights.
As part of this activity the plan is to move this forward by doing some more modeling with Bayesian networks and evaluating how well it works (a minimal sketch of such a network follows the list below).
Some sub-activities as part of this might include:
- Coming up with labeled data (probably enriched with what we have from the feedback reporting system) to validate the model and/or bootstrap/train it
- Building some kind of web interface to make it easier to label data quickly (currently it takes too many clicks to do via Explorer for many measurements)
- Refining and experimenting with different features for the Bayes net
- Iterating on various configurations of the Bayesian network
- Considering extending the observation data format to make it easier to extract the necessary features
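To make the modeling idea concrete, here is a minimal sketch of what such a network could look like in pgmpy. The node names, states, and probabilities are all hypothetical placeholders, not the actual model:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical measurement signals feed a single "blocked" proposition.
model = BayesianNetwork([
    ("dns_inconsistent", "blocked"),
    ("tls_failure", "blocked"),
])

# Priors over the signals (state 0 = False, state 1 = True); made-up values.
cpd_dns = TabularCPD("dns_inconsistent", 2, [[0.9], [0.1]])
cpd_tls = TabularCPD("tls_failure", 2, [[0.85], [0.15]])

# P(blocked | dns_inconsistent, tls_failure); columns enumerate parent state
# combinations (F/F, F/T, T/F, T/T), with the rightmost parent varying fastest.
cpd_blocked = TabularCPD(
    "blocked", 2,
    [[0.99, 0.40, 0.30, 0.02],   # P(blocked = False | parents)
     [0.01, 0.60, 0.70, 0.98]],  # P(blocked = True  | parents)
    evidence=["dns_inconsistent", "tls_failure"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_dns, cpd_tls, cpd_blocked)
assert model.check_model()

# Posterior belief in blocking given the signals observed in one measurement.
inference = VariableElimination(model)
print(inference.query(["blocked"], evidence={"dns_inconsistent": 1, "tls_failure": 0}))
```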
There are still a few critical theoretical hurdles that need to be overcome. These are questions I would like to pose to people who have more experience with this, namely:
- What are some best practices or rules of thumb for determining the optimal cardinality of the nodes, and when is it appropriate to split a particular proposition into more sub-propositions?
- How do you deal with the fact that the state of a particular proposition might be undefined? Is it OK for it to just be T | F, or is it recommended to explicitly add an "unknown" state?
- Are there best practices on the optimal cardinality of the CPD tables? (pgmpy has a hard limit of 32, but manually populating tables even of width 10+ is extremely tedious.) Are there tricks to split the nodes up in such a way as to keep the cardinality low? (A toy illustration of the width problem follows.)
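To illustrate the width problem with hypothetical numbers: a node's CPD needs one column per combination of parent states, so the table grows exponentially with the number of parents. One candidate trick, which I'm only assuming here rather than asserting as established best practice, is to group signals behind intermediate nodes so that no single node has many parents:

```python
# Flat structure: one "blocked" node with six binary parents.
flat_width = 2 ** 6                       # 64 columns to populate by hand

# Split structure: two intermediate nodes with three parents each, plus a
# root node conditioned only on the two intermediates.
split_widths = [2 ** 3, 2 ** 3, 2 ** 2]   # 8 + 8 + 4 columns

print(flat_width, sum(split_widths))      # 64 vs 20 columns in total
```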
After more experimentation with the Bayesian network approach and having a working PoC of it, I came to the conclusion that, for the moment, its performance is not going to scale to our use case without significant work to re-engineer the analysis pipeline.
This led to the conclusion that it was probably best, for the time being, to roll back to an approach that's simpler and closer to what we had done before: a fuzzy-logic, rule-based classifier. Put in simpler terms, this is just a list of IF-THEN clauses that lead to the confidence estimates we have in a particular outcome being true. Through these we are effectively encoding our knowledge that certain signals in the measurements are a sign of blocking or not blocking.
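As a rough sketch of what such IF-THEN rules look like (the signal names, confidence values, and the way fired rules are combined are all illustrative assumptions, not the actual rule set):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # IF clause over measurement signals
    blocked_confidence: float           # THEN: confidence that "blocked" holds

# Hypothetical rules keyed on made-up measurement fields.
RULES = [
    Rule("dns_nxdomain_inconsistent",
         lambda m: m.get("dns_failure") == "dns_nxdomain_error"
                   and not m.get("dns_consistent_with_control", True),
         0.9),
    Rule("tls_connection_reset",
         lambda m: m.get("tls_failure") == "connection_reset",
         0.8),
    Rule("http_ok_body_matches",
         lambda m: m.get("http_success") and m.get("body_matches_control"),
         0.0),
]

def classify(measurement: dict) -> float:
    """Return the highest blocked-confidence among the rules that fire
    (taking the max is one possible combination strategy, assumed here)."""
    fired = [r.blocked_confidence for r in RULES if r.condition(measurement)]
    return max(fired, default=0.0)
```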
In terms of implementation, it's done directly as SQL queries, which has the benefit of being more performant than carrying data in and out of Python, and also makes the rules easier to inspect and update, since they all live in one place.
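To give a flavor of how a rule reads when expressed directly in SQL (the table and column names here are hypothetical, not the ones used in the actual query):

```python
# Hypothetical sketch: each WHEN clause is one IF-THEN rule, and the CASE
# expression yields the per-measurement confidence in the "blocked" outcome.
QUERY = """
SELECT
    measurement_uid,
    CASE
        WHEN dns_failure = 'dns_nxdomain_error'
             AND dns_answer_matches_control = 0 THEN 0.9
        WHEN tls_failure = 'connection_reset' THEN 0.8
        ELSE 0.0
    END AS blocked_confidence
FROM web_observations
"""
```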
Work related to this is done inside of the following PR: #99; specifically, web_analysis.py contains the mega SQL query that performs the analysis.
I will be following up with some more extensive documentation explaining how this whole system works.