fastllm/src/devices/cuda/fastllm-cuda.cu(485): error: no suitable user-defined conversion from "__half" to "__nv_bfloat16" exists
b[idx] = __hmul(__hdiv(x, __hadd(__float2half(1.0), hexp(-x))), y);

After pulling the latest code, the build fails with this error. I didn't hit it the last time I built; how can I fix it?
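For reference, the message says a `__half` value is being stored into a `__nv_bfloat16` buffer, and no implicit conversion between the two types exists. Below is a minimal sketch of the clash and of one common way around it (doing the arithmetic in float and converting explicitly at the store); the kernel name, parameters, and templated element type `T` are hypothetical, not fastllm's actual code:

```cuda
#include <cuda_fp16.h>
#include <cuda_bf16.h>

// When T is __nv_bfloat16, a line like the one in the error
//   b[idx] = __hmul(__hdiv(x, __hadd(__float2half(1.0), hexp(-x))), y);
// fails to compile: the __h* half intrinsics yield __half, and there is no
// implicit __half -> __nv_bfloat16 conversion. Computing in float and
// converting explicitly at the store compiles for both element types.
template <typename T>
__global__ void SiluMulKernel(const T *a, const T *c, T *b, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        float x = float(a[idx]);              // __half / __nv_bfloat16 -> float
        float y = float(c[idx]);
        float silu = x / (1.0f + expf(-x));   // silu(x) = x * sigmoid(x)
        b[idx] = T(silu * y);                 // explicit float -> T conversion
    }
}
```

The explicit `T(...)` conversion at the store is the key point; the other option is to specialize the bf16 path with the `__nv_bfloat16` overloads of the same intrinsics from `cuda_bf16.h`, which require sm_80 or newer.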
Adding -DCUDA_NO_TENSOR_CORE=ON at build time makes the error go away. What does this option do? Please explain.
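Broadly, a flag like CUDA_NO_TENSOR_CORE compiles out the code paths that need tensor-core / bfloat16 support, so the failing `__nv_bfloat16` code at line 485 is simply never built. A hedged sketch of how such a flag typically gates the code follows; exactly how fastllm wires it is an assumption, and `LaunchSiluMul` is a hypothetical wrapper continuing the sketch above:

```cuda
#include <cuda_fp16.h>
#ifndef CUDA_NO_TENSOR_CORE
#include <cuda_bf16.h>
#endif

template <typename T>
__global__ void SiluMulKernel(const T *a, const T *c, T *b, int n);  // defined in the sketch above

void LaunchSiluMul(void *a, void *c, void *b, int n, bool isBf16) {
    int threads = 256, blocks = (n + threads - 1) / threads;
#ifndef CUDA_NO_TENSOR_CORE
    if (isBf16) {
        // The bf16 path exists only when the flag is NOT defined; with
        // -DCUDA_NO_TENSOR_CORE=ON this block (and its compile error) disappears.
        SiluMulKernel<<<blocks, threads>>>((const __nv_bfloat16 *)a,
                                           (const __nv_bfloat16 *)c,
                                           (__nv_bfloat16 *)b, n);
        return;
    }
#endif
    SiluMulKernel<<<blocks, threads>>>((const __half *)a, (const __half *)c,
                                       (__half *)b, n);
}
```

The usual trade-off of such a switch is that bf16 / tensor-core kernels become unavailable, so it works around the type error rather than fixing it.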
I hit the error above before and could get it to compile with -DCUDA_NO_TENSOR_CORE=ON. After updating to the latest code today, the same error is back, and adding that option no longer helps. How can I fix this type-conversion error?
This should be fixed now. By the way, which GPU are you using?
I just updated to the latest code, and with the CUDA_NO_TENSOR_CORE option it compiles. I'm on an A100 with CUDA 12.4. Why doesn't it compile without CUDA_NO_TENSOR_CORE?
Sometimes the GPU architecture detection goes wrong; I'm not sure why that happens. You can also add -DCUDA_ARCH=80 to specify the architecture manually.
Just tried it; adding -DCUDA_ARCH=80 doesn't help.
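For context on why the architecture detection discussed above matters: native bf16 arithmetic is only available when device code is compiled for sm_80 (Ampere, the A100's architecture) or newer, so bf16 kernels are usually fenced on `__CUDA_ARCH__`. Presumably -DCUDA_ARCH=80 forces that compile target by hand; the helper below is purely illustrative and not a fastllm function:

```cuda
#include <cuda_bf16.h>

__device__ __nv_bfloat16 BfAdd(__nv_bfloat16 a, __nv_bfloat16 b) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
    return __hadd(a, b);   // native bf16 add, available on sm_80+ only
#else
    // pre-Ampere fallback: round-trip through float
    return __float2bfloat16(__bfloat162float(a) + __bfloat162float(b));
#endif
}
```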