Replies: 3 comments
-
Hello! Thank you for looking at Rust GPU and wanting to potentially contribute. I'm not sure I fully understand what you are thinking, but I'll throw out some information! Rust GPU currently takes your code and compiles it to SPIR-V. It is a Rust compiler backend rather than a proc macro. There are pros and cons to this. For a sort of "DSL" that uses Rust syntax and is "compiled" for the GPU using proc macros, you can check out https://github.com/tracel-ai/cubecl. There is an overview of the various GPU-related libraries at https://rust-gpu.github.io/ecosystem. For Rust GPU, we map normal Rust arrays to SPIR-V arrays and handle constant sizes and such automatically. Note that
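To make that last point concrete, here is a minimal sketch (plain Rust, no `spirv-std`, compiled for CPU here) of the kind of fixed-size array the backend can lower to a SPIR-V `OpTypeArray`: the length is part of the type, so it is already a compile-time constant.

```rust
// `[f32; 4]` is a fixed-size array whose length is part of the type.
// This is what lets a SPIR-V backend lower it to `OpTypeArray` with a
// constant length, with no extra annotation needed.
fn dot4(a: &[f32; 4], b: &[f32; 4]) -> f32 {
    let mut acc = 0.0;
    for i in 0..4 {
        acc += a[i] * b[i];
    }
    acc
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [1.0, 1.0, 1.0, 1.0];
    println!("{}", dot4(&a, &b)); // prints 10
}
```

The same function body would work unchanged inside a Rust GPU shader, since ordinary array indexing and loops are supported.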
-
That's exactly what I needed, thanks!
-
Let us know if you need help with Rust GPU or any other info! 🍻
-
Hello People,
I'm new to the whole open source community thing, so please excuse me if my vocabulary isn't quite right. Please correct me whenever needed.
I started building an AI framework in Rust with a concurrent core and a (hopefully) easy-to-use interface for HPC production and embedded systems. Since you have already done excellent work here, I decided to build on top of this project. For AI use I want to use SPIR-V's "OpTypeArray" type. The array is built in an upper layer, and by the time it is handed to the GPU it has a defined length that doesn't need to change during calculations.
The plan is to build a use_ai mod folder in spirv and a use_ai.rs in macros. I saw you already defined arrays, so after transforming my ndarrays for OpTypeArray use I would build shaders and, if necessary, implement the needed ops in rustc_codegen_spirv. Since I haven't looked too deeply into rustc_codegen_spirv yet, do you have any tips or helpful information to begin with?
Also, would it be interesting if I open a pull request once it's running? As a starting point for development I thought about a macro like this:
```rust
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, ExprTuple};

/// To build a SPIR-V array we need an (Array1, Array1.len()) tuple. This function-like macro
/// will build the trait that's usable by OpTypeArray. Since we don't change len for AI
/// operations, len has to be Array1.len(); otherwise we panic for now.
#[proc_macro]
pub fn impl_array(input: TokenStream) -> TokenStream {
    let input_tuple = parse_macro_input!(input as ExprTuple);
    // TODO: generate the trait impl from `input_tuple`.
    quote! { /* generated impl goes here */ }.into()
}
```
Note: since we want to build the trait from an unknown array, we build the array from a macro like this and just build it in crate::use_ai.
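To illustrate the intended contract, here is a hypothetical runtime sketch (the names `build_checked` and the signature are mine, not from any codebase) of the check the macro is meant to enforce: given a (data, len) pair, refuse to build the fixed-length array unless the stated len matches the actual data length, panicking for now as described above.

```rust
// Hypothetical runtime counterpart of the macro's contract: the stated
// `len` must equal the actual number of elements, and the result is a
// fixed-size array whose length `N` is a compile-time constant.
fn build_checked<const N: usize>(data: Vec<f32>, len: usize) -> [f32; N] {
    assert_eq!(len, N, "stated len must equal the array length");
    data.try_into()
        .unwrap_or_else(|v: Vec<f32>| panic!("expected {} elements, got {}", N, v.len()))
}

fn main() {
    let arr: [f32; 3] = build_checked(vec![1.0, 2.0, 3.0], 3);
    println!("{:?}", arr);
}
```

The macro version would do the same check at expansion time instead, so a mismatched length fails the build rather than the run.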
P.S.: I saw you working on Rust CUDA too; I wanted to write CUDA operations through the rustc PTX target. So maybe I could also be of help there, providing extended tensor operations for AI usage?