Is your feature request related to a problem? Please describe.
Currently, the steps to create a GitHub PR time-to-merge (TTM) prediction service are somewhat "hard-coded" or "customized" for specific repos or orgs (openshift/origin, thoth-station). When we want to extend this service to other users / repos in the open source community, manual effort / tweaking is required in the data collection, feature engineering, and model training steps.
IMO this process probably wouldn't scale very well with the number of new repos (for example, it took us a couple of weeks to update the openshift/origin notebooks to work for thoth-station).
Describe the solution you'd like
A generalized / parameterized end-to-end pipeline (data collection, feature engineering, model training, deployment) that can be configured to run for a given repo or set of repos. The inputs could be parameters like the repo name, thresholds used in feature selection methods, names of models to explore, etc. The output would be a trained model in S3, or even a deployed model as a stretch goal.
This pipeline should be reproducible, and should be run every time a user requests that this service be added to their repo.
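To make the idea concrete, here is a minimal sketch of what such a parameterized pipeline entrypoint could look like. All names here (`PipelineConfig`, `run_pipeline`, the stage names, the bucket name) are hypothetical placeholders for illustration, not an existing API:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineConfig:
    """Hypothetical config object; field names are illustrative only."""
    repo: str                       # e.g. "openshift/origin"
    feature_threshold: float = 0.1  # example cutoff for feature selection
    models: list = field(default_factory=lambda: ["xgboost", "random_forest"])
    s3_bucket: str = "pr-ttm-models"  # placeholder bucket name

def run_pipeline(config: PipelineConfig) -> str:
    """Sketch of the end-to-end flow: each stage would wrap one of the
    existing notebooks, parameterized by `config` instead of hard-coded
    repo names and thresholds."""
    stages = ["collect_data", "engineer_features", "train_models", "upload_model"]
    for stage in stages:
        # Placeholder: a real implementation would dispatch to the
        # corresponding parameterized step (e.g. via papermill or Elyra).
        print(f"[{config.repo}] running stage: {stage}")
    # Return the (hypothetical) S3 key of the trained model artifact.
    return f"s3://{config.s3_bucket}/{config.repo}/model.joblib"

model_key = run_pipeline(PipelineConfig(repo="thoth-station/mi"))
print(model_key)
```

Onboarding a new repo would then mean submitting a new config rather than editing notebooks by hand.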
Describe alternatives you've considered
Manually changing repo names and other parameters, then rerunning the notebooks for data collection, feature engineering, model training, and model deployment.
/cc @oindrillac