What is your question?
Why does the CUDA Toolkit provide only a double2fp8 implementation for conversion to FP8, while CUTLASS provides only float2fp8?
For FP16 and FP32 inputs, the CUDA Toolkit widens the value step by step to double and then converts to FP8. Is this completely equivalent to a direct conversion?
Why is there no fp162fp8 implementation for the non-PTX path?
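For context on what a double-to-FP8 conversion actually computes, here is an illustrative reference quantizer for the E4M3 format (1 sign, 4 exponent, 3 mantissa bits, bias 7, no infinities, max finite value 448) with round-to-nearest-even and optional saturation. This is my own stdlib-only sketch of the format's semantics, not NVIDIA's implementation; the function name is hypothetical.

```python
import math

def double_to_fp8_e4m3_bits(x: float, saturate: bool = True) -> int:
    """Illustrative round-to-nearest-even conversion of a double to FP8 E4M3.

    E4M3 layout: 1 sign, 4 exponent (bias 7), 3 mantissa bits. There are no
    infinities; 0x7F/0xFF encode NaN and the largest finite value is 448.
    Reference sketch only -- not NVIDIA's actual implementation.
    """
    sign = 0x80 if math.copysign(1.0, x) < 0 else 0x00
    if math.isnan(x):
        return sign | 0x7F
    a = abs(x)
    if a == 0.0:
        return sign
    if math.isinf(a):                 # no infinities in E4M3
        return sign | (0x7E if saturate else 0x7F)
    # Quantum (ulp) at a's magnitude: normals have unbiased exponent in
    # [-6, 8] and a 3-bit mantissa; all subnormals share the quantum 2**-9.
    _, e = math.frexp(a)              # a = m * 2**e with 0.5 <= m < 1
    exp = e - 1                       # a = (2m) * 2**exp with 1 <= 2m < 2
    q = 2.0 ** (max(exp, -6) - 3)
    # Round a to the nearest multiple of q, ties to even.
    n = a / q                         # exact: q is a power of two
    r = math.floor(n)
    frac = n - r
    if frac > 0.5 or (frac == 0.5 and r % 2 == 1):
        r += 1
    v = r * q
    if v > 448.0:                     # overflow: saturate to 448 or NaN
        return sign | (0x7E if saturate else 0x7F)
    m2, e2 = math.frexp(v)
    exp2 = e2 - 1
    if exp2 < -6:                     # subnormal: exponent field is 0
        return sign | int(v / 2.0 ** -9)
    mant = int((m2 * 2.0 - 1.0) * 8.0)  # 3 explicit mantissa bits
    return sign | ((exp2 + 7) << 3) | mant
```

For example, 1.0 encodes as 0x38 (exponent field 7, mantissa 0), 448.0 as 0x7E, and values above 448 saturate to 0x7E in the default mode.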
I speculate that it is because the non-PTX path has no performance requirement, and because widening to a larger bit-width is exact (every FP16 and FP32 value is representable in double), so a single implementation that converts from double covers all narrower source types.
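The equivalence hinges on widening conversions being value-preserving: if fp16 → double is exact, the only rounding in the fp16 → double → fp8 chain happens at the final fp8 step, so there is no double-rounding and the result matches a direct fp16 → fp8 conversion under the same rounding mode (likewise for fp32 → double). A quick stdlib-only check of this, using Python's binary16 (`'e'`) struct format rather than any CUDA API, enumerates all 65536 FP16 bit patterns and verifies each finite one survives widening to double exactly:

```python
import struct

exact = True
for bits in range(1 << 16):
    # Decode the FP16 bit pattern; struct gives us a Python float,
    # which is an IEEE-754 double, so this IS the widening step.
    (x,) = struct.unpack('<e', struct.pack('<H', bits))
    if x != x:          # skip NaN patterns; equality is undefined for them
        continue
    # If the widening was exact, narrowing back to binary16 must
    # reproduce the original bit pattern.
    (back,) = struct.unpack('<H', struct.pack('<e', x))
    if back != bits:
        exact = False
        break

print(exact)  # True: every finite FP16 value widens to double exactly
```

Since no information is lost before the final narrowing, providing only the widest entry point (double2fp8) is sufficient for correctness; a dedicated fp162fp8 would only matter for speed, which the non-PTX fallback path presumably does not target.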