bfloat16 support? #653
Comments
I think it's been mentioned in the past but nobody is working on it currently. At the moment all the floating-point stuff is implemented with the C Berkeley SoftFloat library, but we will eventually move it to a pure Sail implementation that @Incarnation-p-lee has been working on, so probably he knows best. I would imagine it would be separate from the generic code, and use a …
The pure Sail implementation doesn't cover the BFloat format for now, but do we have something like BF32, BF64, and BF128?
As far as I know, there is only a 16-bit BFloat format.
@Incarnation-p-lee bfloat16 is a non-IEEE floating-point format, different from the IEEE binary16 format. IIRC it's just the IEEE binary32 format truncated to 16 bits, so it has way too many exponent bits, but conversion to/from normal single-precision float is trivial. Popular in AI, at least it was a few years ago. I think hardware IEEE half support (Zfh) is more common now so I'm not sure how relevant it is today.
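For concreteness, here is a minimal C sketch of that relationship (function names are mine, not from the model or SoftFloat): bf16 is the top 16 bits of a binary32 value, so widening is always exact, and narrowing by plain truncation is the simplest (round-toward-zero) conversion.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: bfloat16 is the high 16 bits of an IEEE binary32. */

static inline float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16; /* low mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);       /* bit-cast without aliasing UB */
    return f;                          /* always exact */
}

static inline uint16_t f32_to_bf16_trunc(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (uint16_t)(bits >> 16);     /* drop low 16 bits: round toward zero */
}
```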
Yes, AFAIK, BF16 is mostly popular in the AI domain. I bet it comes from the riscv bf16 spec. BTW, I think both llvm and GCC support …
As far as I know, there is a notion of adding full BF16 arithmetic to the vector ISA, but not to scalar FP. The BF16 arithmetic is mainly for AI/ML, and maybe mostly to support IME (the Integrated Matrix Extension). Also for AI/ML, there are other narrow FP formats under consideration (OCP FP8/6/4, OCP MX8/6/4), but they have no strong advocates. I haven't looked in a while, but it should be fairly easy to implement BF16 in SoftFloat, and to port it to Sail if not done there directly. The (fairly) trivial conversions BF16 to/from FP32 for both scalar and vector are ratified; see Zfbf16 and similar.
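Supporting the "fairly easy to implement" point, a hedged sketch of what the narrowing conversion might look like in C, using the well-known round-to-nearest-even bit trick. The name is hypothetical, not part of the Berkeley SoftFloat API, and NaN/infinity handling is omitted:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper, not SoftFloat API: narrow binary32 to bfloat16 with
   round-to-nearest-even. NaN handling is omitted (a NaN payload could
   overflow into the exponent), so a real implementation needs extra cases. */
static inline uint16_t f32_to_bf16_rne(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    /* Add 0x7FFF plus the LSB of the would-be result; ties round to even. */
    uint32_t bias = 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)((bits + bias) >> 16);
}
```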
Is there any plan to support the bfloat16-related extensions?
Now that float16 in sail-riscv is represented as bits(16), how will bfloat16 be represented if I model the BF16 instructions?