Scaladoc for methods in Tensor #132
I'm interested in GPU programming in Scala (specifically for speeding up a reaction-diffusion modelling package). I wouldn't mind contributing some documentation in order to get up to speed on Compute.scala. Is this still a good package to use for GPU programming in Scala, or is there a better one out there? Also, how can I run the benchmarks locally?
Sorry for the late reply.
I have left ThoughtWorks, and I will not add new features to this project unless someone else contributes them. The problem with using it in your project is that it lacks high-level constructs. Even matrix multiplication is not implemented in the library; instead, the benchmark provides an example implementation. If you intend to use OpenCL or CUDA directly in your project, then Compute.scala could be an alternative, because it provides a thin framework that lets you create your own customized kernels in Scala with the help of JIT compilation. However, if you need higher-level constructs, then Java bindings for BLAS / cuBLAS / PyTorch / TensorFlow might be what you are looking for.
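For reference, the kind of matrix multiplication the benchmark demonstrates can be sketched in plain Scala as a naive CPU implementation. This is a hypothetical illustration of the algorithm itself, not the Compute.scala benchmark's GPU-kernel code:

```scala
// Naive O(m*k*n) reference matrix multiplication: C = A * B.
// A is (m x k), B is (k x n), the result is (m x n).
object MatMulSketch {
  def matMul(a: Array[Array[Float]], b: Array[Array[Float]]): Array[Array[Float]] = {
    val m = a.length
    val k = a(0).length
    val n = b(0).length
    require(b.length == k, "inner dimensions must match")
    Array.tabulate(m, n) { (i, j) =>
      // Dot product of row i of A with column j of B.
      var sum = 0.0f
      var p = 0
      while (p < k) { sum += a(i)(p) * b(p)(j); p += 1 }
      sum
    }
  }

  def main(args: Array[String]): Unit = {
    val a = Array(Array(1f, 2f), Array(3f, 4f))
    val b = Array(Array(5f, 6f), Array(7f, 8f))
    val c = matMul(a, b)
    println(c.map(_.mkString(" ")).mkString("\n")) // 19.0 22.0 / 43.0 50.0
  }
}
```

The GPU version in the benchmarks expresses the same computation as an OpenCL kernel generated at runtime, which is what the "thin framework" above refers to.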
BTW, you can run the benchmarks with the following sbt commands:
sbt> project benchmarks
sbt> jmh:run
The Scaladoc should be similar to NumPy's API reference.
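As a sketch of what NumPy-style Scaladoc could look like, here is a hypothetical documented method with the structure of a NumPy docstring: one-line summary, parameter docs, return docs, and an inline example. The method shown is illustrative and is not part of the library's actual `Tensor` API:

```scala
object ScaladocSketch {
  /** Transposes a 2-D matrix.
    *
    * @param matrix the input matrix, given as rows of equal length
    * @return a new matrix whose element at (i, j) is `matrix(j)(i)`
    * @example {{{
    * transpose(Seq(Seq(1, 2, 3), Seq(4, 5, 6)))
    * // => Seq(Seq(1, 4), Seq(2, 5), Seq(3, 6))
    * }}}
    */
  def transpose(matrix: Seq[Seq[Int]]): Seq[Seq[Int]] =
    if (matrix.isEmpty) matrix
    else matrix.head.indices.map(j => matrix.map(row => row(j)))
}
```

Each public method in `Tensor` could follow this shape so the generated Scaladoc reads like NumPy's per-function reference pages.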