LoRA fine-tuning: problem encountered when using inference.py #508
-
After fine-tuning, several .pt and .pth files are generated. Which one should the LoRA path in inference.py point to? I tried pytorch_model.pt and got the error `RuntimeError: Attempting to deserialize object on CUDA device 3 but torch.cuda.device_count() is 1. Please use torch.load with map_location to map your storages to an existing device.` Why does this happen?
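For reference, a minimal sketch of the fix the error message itself points to: the checkpoint was saved while its tensors lived on `cuda:3`, so on a machine with only one visible GPU it has to be remapped onto an existing device via `torch.load(..., map_location=...)`. The file name `pytorch_model.pt` is taken from the question; which of the generated files is actually the LoRA checkpoint depends on how the fine-tuning script saves it.

```python
import torch

# The checkpoint was serialized with storages on cuda:3; on a machine where
# torch.cuda.device_count() == 1 that device does not exist, so remap the
# storages onto a device that does exist.
state_dict = torch.load("pytorch_model.pt", map_location="cuda:0")

# Loading onto the CPU first also avoids the error entirely:
# state_dict = torch.load("pytorch_model.pt", map_location="cpu")
```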
Answered by zRzRzRzRzRzRzR, Dec 5, 2023
-
Try specifying the GPU explicitly and setting it to device 0. Further fine-tuning questions are discussed in #253.
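A sketch of one common way to pin the process to GPU 0, as the reply suggests. This assumes inference.py does not already hard-code a device and does not provide its own device flag; if `torch.load` still complains, combine this with the `map_location` approach shown above.

```python
import os

# Expose only physical GPU 0 to this process. The variable must be set before
# torch initializes CUDA, so place it ahead of the torch import in inference.py.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # the remaining GPU is now visible as cuda:0
```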