How to integrate a model into Spark cluster #579
If you already have a trained model (and it fits in memory), then the simplest way to run inferencing in a Spark job is to use something like this example. Basically, you load the model inside your Spark task and run predictions over each partition of data.

Note: there is also this example, which tries to emulate the Spark DataFrame API, but it may be a little harder to follow how it works. Finally, some folks wrap the TF model inside a Spark UDF, but I don't have an example of that here.
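The pattern above can be sketched like this. This is a hypothetical illustration, not code from the repo: `load_model`, the dummy model, and the data are stand-ins for your real TensorFlow code, and only the per-partition loading pattern is the point.

```python
# Hypothetical sketch: load a trained model once per partition and
# run inference over that partition's rows.

def load_model():
    # In real code this would be something like
    # tf.keras.models.load_model("/path/to/saved_model").
    return lambda x: x * 2  # dummy "model" for illustration

def predict_partition(rows):
    model = load_model()  # loaded once per partition, not once per row
    for row in rows:
        yield model(row)

# In a Spark job you would apply this to an RDD:
#   predictions = data_rdd.mapPartitions(predict_partition)
# Here we run it over a plain iterator to show the behavior:
results = list(predict_partition(iter([1, 2, 3])))
```

The key point is that `mapPartitions` gives you one iterator per partition, so the (expensive) model load is amortized across all rows of the partition instead of being repeated per row.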
@leewyang How can I cache my model in pyspark? I found that the model gets reloaded for every task. Here's a demonstration of how I predict over the whole dataset:

```python
def _predict_dataset():
    def _input_fn():
        ...
    estimator = build_estimator(...)
    return estimator.predict(_input_fn)

data.mapPartitions(lambda it: _predict_dataset())
```
Can you please give me some code snippets or resources to read so I can understand how to load my trained models in Spark? I'm not sure how to do this part.
@jiqiujia could you please help me understand how to load my trained model that is already saved in my project directory? Can you share the steps you followed to integrate your trained model with Spark? Thank you.
@jahidhasanlinix you could follow the examples in this repo: https://github.com/yahoo/TensorFlowOnSpark/tree/master/examples
@jiqiujia thank you. I'll check it. |
@jiqiujia assuming that your model won't change over the course of the job, you can just cache the model in the python worker processes via a global variable. Just check if it's none/null, and if so, load the model from disk, otherwise use the cached model. |
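A minimal sketch of that caching pattern. The names here are hypothetical (`load_model` stands in for rebuilding/restoring your actual estimator), and a load counter is included only to demonstrate that the load happens once:

```python
# Cache the model in a module-level global so each long-running Python
# worker process loads it from disk only once, not once per task.

_MODEL = None
LOAD_COUNT = 0  # only here to demonstrate single loading

def load_model():
    # In real code: restore your TensorFlow model/estimator here.
    global LOAD_COUNT
    LOAD_COUNT += 1
    return lambda x: x + 1  # dummy model for illustration

def get_model():
    global _MODEL
    if _MODEL is None:        # first call in this worker process
        _MODEL = load_model() # expensive load happens only once
    return _MODEL

def predict_partition(rows):
    model = get_model()       # later tasks reuse the cached model
    for row in rows:
        yield model(row)

# Simulate two tasks landing on the same worker process:
out1 = list(predict_partition(iter([1, 2])))
out2 = list(predict_partition(iter([3])))
```

In Spark you would pass `predict_partition` to `data.mapPartitions(...)`; because the global variable lives in the Python worker process, which Spark reuses across tasks, subsequent tasks on the same worker skip the reload.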
How can I load the model? I have a code base and a trained model saved as a .pt file. How can I load it into the cluster? Any help?
@jahidhasanlinix Not quite sure what you're doing here... *.pt files are PyTorch models. Have you converted a TensorFlow model to PyTorch (or vice versa)?
@leewyang https://github.com/hongzimao/decima-sim |
@jahidhasanlinix Unfortunately, I think that code is beyond the scope of what TFoS is trying to do. Decima presumably integrates with (or replaces) the Spark scheduler itself, while TFoS is more about using Spark (and its default scheduler) to launch training/inferencing jobs on executors.
@leewyang thank you so much for your response. Is there any other way to integrate this? Can you help me with it?
How can I integrate a model into a Spark cluster in practice? I have a deep learning (TF, Python) model that I would like to integrate with a Spark cluster to run some experiments. Can anyone suggest steps to follow to do that?