
bfloat16 support? #653

Open
wanghuibin0 opened this issue Dec 27, 2024 · 6 comments

@wanghuibin0

Is there any plan to support the bfloat16-related extensions?
Now that float16 in sail-riscv is represented as bits(16), how would bfloat16 be represented if I were to model the bf16 instructions?

@Timmmm
Collaborator

Timmmm commented Dec 28, 2024

I think it's been mentioned in the past, but nobody is working on it currently. At the moment all the floating-point operations are implemented with the C Berkeley SoftFloat library, but we will eventually move to a pure Sail implementation that @Incarnation-p-lee has been working on, so he probably knows best.

I would imagine it would be separate from the generic code, and use a newtype of bits(16).
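As a rough illustration of that suggestion (the names here are mine, not from the model), a distinct bf16 type in Sail could be a single-constructor union wrapping the raw bits, so bfloat16 values cannot be mixed up with the plain bits(16) used for IEEE binary16:

/* Hypothetical sketch only: a wrapper type keeping bfloat16 distinct
   from other 16-bit encodings. */
union bf16 = { BF16 : bits(16) }

/* Unwrap the raw encoding when the generic bit-level code needs it. */
val bf16_bits : bf16 -> bits(16)
function bf16_bits(BF16(b)) = b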

@Incarnation-p-lee

The pure Sail implementation doesn't cover the BFloat format for now, but is there such a thing as BF32, BF64, and BF128?
For BF16, I think there will be a newtype.

@jordancarlin
Collaborator

The pure Sail implementation doesn't cover the BFloat format for now, but is there such a thing as BF32, BF64, and BF128?

As far as I know, there is only a 16-bit BFloat format.

@Timmmm
Collaborator

Timmmm commented Dec 30, 2024

@Incarnation-p-lee bfloat16 is a non-IEEE floating-point format, different from the IEEE binary16 format. IIRC it's just the IEEE binary32 format truncated to 16 bits, so it has many more exponent bits than binary16, but conversion to/from a normal single-precision float is trivial. It's popular in AI, or at least it was a few years ago. I think hardware support for IEEE half precision (Zfh) is more common now, so I'm not sure how relevant it is today.
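To make the "truncated binary32" point concrete, here is a rough Sail sketch of the two conversion directions (my own illustration, not code from the model), ignoring the rounding, NaN handling, and flag behaviour that real conversion instructions would need:

/* Widening bf16 -> binary32 is exact: append 16 zero fraction bits. */
val bf16_to_f32 : bits(16) -> bits(32)
function bf16_to_f32(b) = b @ 0x0000

/* Naive narrowing by truncation; a real implementation would round
   to nearest-even and take care of NaN payloads and exception flags. */
val f32_to_bf16_truncate : bits(32) -> bits(16)
function f32_to_bf16_truncate(f) = f[31 .. 16]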

@Incarnation-p-lee

Yes, AFAIK BF16 is mostly popular in the AI domain. I expect it comes from the RISC-V BF16 spec.
However, given that the current pure-Sail float implementation works on bits, maybe we can reuse most of the existing code and reconcile the types as a first step.

BTW, I think both LLVM and GCC already support Zfh and the bf16 extensions.

@aamartin0000

As far as I know, there is a notion of adding full BF16 arithmetic to the vector ISA, but not to scalar FP. The BF16 arithmetic is mainly for AI/ML, and perhaps mostly to support IME (the Integrated Matrix Extension). Also for AI/ML, there are other narrow FP formats under consideration (OCP FP8/6/4, OCP MX8/6/4), but they have no strong advocates.

I haven't looked in a while, but it should be fairly easy to implement BF16 in SoftFloat, and to port it to Sail if it isn't done there directly.

The (fairly) trivial BF16 to/from FP32 conversions, for both scalar and vector, are already ratified; see Zfbfmin and the related vector extensions.
