Use MIR to provide a basis for implementing fast and lightweight interpreters and JITs #4820
Replies: 2 comments 6 replies
-
Some context here: we are only using LLVM for the CUDA and CPU backends. What I'm more concerned about is the tooling: with LLVM we get to use C++ to write some of the device runtime and helper routines, and we can easily target different backends with a very small amount of modification (not just RISC vs. CISC; GPUs use an extremely different instruction set and execution model). If we move to MIR, what kind of tooling can we expect to have? If we don't care about tooling, we have an existing SPIR-V backend that is a lot quicker than the LLVM backend.
-
In terms of IR design, we have the frontend AST, the CHI-IR, and finally the backend IRs (LLVM IR or SPIR-V). I think CHI-IR occupies a position similar to MIR's in Rust's compiler stack; one big difference, however, is that CHI-IR uses abstract memory operations based on SNodes (which can be sparse), while MIR maps directly to memory operations. Because the memory models of the different backends are extremely different, CHI-IR cannot have direct memory ops. I do not see how MIR would help us in this case.
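To make the abstract-vs-direct distinction concrete, here is a minimal illustrative sketch. None of the names (`SNodeAccess`, `lower_dense`, `lower_sparse`, `runtime_sparse_load`) are real Taichi or MIR APIs; the point is only that the same abstract access lowers to a plain load on a dense backend but to a runtime call on a sparse one, so the shared IR cannot bake in direct memory ops:

```python
from dataclasses import dataclass

@dataclass
class SNodeAccess:
    """Abstract access: 'element i of snode s', with no address implied."""
    snode: str
    index: int

def lower_dense(acc: SNodeAccess, base_addr: int, elem_size: int) -> str:
    # A dense CPU backend can lower the access to one address computation.
    addr = base_addr + acc.index * elem_size
    return f"load [{hex(addr)}]"

def lower_sparse(acc: SNodeAccess) -> str:
    # A sparse backend must first probe whether the element is active,
    # so the same abstract op becomes a call into the runtime instead.
    return f"call runtime_sparse_load({acc.snode!r}, {acc.index})"

acc = SNodeAccess("x", 3)
print(lower_dense(acc, 0x1000, 4))   # load [0x100c]
print(lower_sparse(acc))
```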
-
Disadvantages of GCC/LLVM-based JIT
Enough about the advantages of GCC/LLVM-based JITs. Let us speak about the disadvantages:
GCC/LLVM compilation speed
GCC/LLVM compilation speed is slow. Twenty milliseconds per method compilation with GCC/LLVM might feel short on a modern Intel CPU, but on less powerful yet widely used CPUs the same compilation can take half a second. For example, according to the SPEC2000 176.gcc benchmark, the Raspberry Pi 3 B+ CPU is about 30 times slower than the Intel i7-9700K (score 320 vs. 8520).
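A quick back-of-the-envelope check of that claim, using only the numbers quoted above:

```python
# If a method takes ~20 ms to JIT-compile with GCC/LLVM on an i7-9700K,
# and the Raspberry Pi 3 B+ is roughly 30x slower on SPEC2000 176.gcc
# (score 320 vs. 8520), the same compilation lands near half a second.
i7_score, pi_score = 8520, 320
slowdown = i7_score / pi_score          # ~26.6x
pi_time_ms = 20 * slowdown              # ~533 ms per method
print(f"{slowdown:.1f}x slower -> {pi_time_ms:.0f} ms per method")
```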
We need JIT compilation even more on slow machines, but there it becomes intolerably slow. Even on fast machines, GCC/LLVM-based JITs can be too slow in environments like MinGW. Faster JIT compilation can also improve overall performance by enabling aggressive adaptive optimization and function inlining.
What might faster JIT compilation look like? A keynote about the Falcon Java JIT compiler at the 2017 LLVM developers' conference cited about 100 ms per method for an LLVM-based JIT compiler versus one millisecond for the faster tier-one JVM compiler. Answering a question about using LLVM for a Python JIT implementation, the speaker (Philip Reames) said that you first need a tier-one compiler implementation.
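The tiering idea mentioned in the keynote can be sketched as follows. This is an illustrative toy, not Falcon or any real JVM: a cheap tier-one compile happens on first call, and only methods that stay hot pay the expensive optimizing-tier compile (the threshold and method names are made up):

```python
HOT_THRESHOLD = 3  # recompile after this many calls (illustrative choice)

class TieredJIT:
    def __init__(self):
        self.calls = {}   # call counts per method
        self.tier = {}    # which tier each method was last compiled at

    def compile_tier1(self, name):
        self.tier[name] = 1   # ~1 ms per method in the talk's numbers

    def compile_tier2(self, name):
        self.tier[name] = 2   # ~100 ms per method with LLVM

    def call(self, name):
        if name not in self.tier:
            self.compile_tier1(name)   # cheap baseline compile on first call
        self.calls[name] = self.calls.get(name, 0) + 1
        if self.calls[name] == HOT_THRESHOLD and self.tier[name] == 1:
            self.compile_tier2(name)   # pay the LLVM cost only for hot code
        return self.tier[name]

jit = TieredJIT()
for _ in range(5):
    last = jit.call("hot_loop")
print(last)  # the hot method ends up in the optimizing tier (2)
```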
The central notion of this JIT approach is a well-defined intermediate language called Medium Internal Representation, or MIR. You can find the name in Steven Muchnick's famous book "Advanced Compiler Design and Implementation." The Rust team also uses this term for a Rust intermediate language.
MIR is strongly typed and flexible: in its different forms it can represent machine code for both CISC and RISC processors.
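A hedged sketch of what "strongly typed" buys an IR: every virtual register carries a type, and the IR builder rejects ill-typed instructions before they reach code generation. The opcode and type names below are generic placeholders, not actual MIR syntax:

```python
from dataclasses import dataclass

@dataclass
class Reg:
    name: str
    ty: str   # e.g. 'i32', 'i64', 'f64'

def add(dst: Reg, a: Reg, b: Reg) -> str:
    # A typed IR can check operands at build time instead of
    # miscompiling silently later.
    if not (dst.ty == a.ty == b.ty):
        raise TypeError(f"add {dst.ty}, {a.ty}, {b.ty}: operand types must match")
    return f"add {dst.name}, {a.name}, {b.name}"

x, y, z = Reg("x", "i64"), Reg("y", "i64"), Reg("z", "f64")
print(add(x, x, y))        # well-typed: emitted as-is
try:
    add(x, x, z)           # rejected: i64 vs f64
except TypeError as e:
    print(e)
```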
Here is a brief MIR description: