I'm trying to run FastSAM on my Windows 10 machine using VS Code with a conda environment named segmentation_test (Python 3.12.8).
I followed the instructions from the FastSAM_example.ipynb notebook and did the following:
Cloned the repository:
git clone https://github.com/CASIA-IVA-Lab/FastSAM.git
Downloaded the weight file from:
https://huggingface.co/spaces/An-619/FastSAM/resolve/main/weights/FastSAM.pt
Installed the required packages.
Then, I attempted to run the inference command from the terminal:
python FastSAM/Inference.py --model_path ./weights/FastSAM.pt --img_path ./images_for_models_comparison/png/image_tech_398_006657a8.png --imgsz 512 --device cpu
However, I encountered the following error:
Traceback (most recent call last):
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\Inference.py", line 122, in <module>
main(args)
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\Inference.py", line 76, in main
model = FastSAM(args.model_path)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\ultralytics\yolo\engine\model.py", line 107, in __init__
self._load(model, task)
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\ultralytics\yolo\engine\model.py", line 156, in _load
self.model, self.ckpt = attempt_load_one_weight(weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\ultralytics\nn\tasks.py", line 578, in attempt_load_one_weight
ckpt, weight = torch_safe_load(weight) # load ckpt
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mauoa\Desktop\dl-image-segmentation\notebooks\SegmentationTest\FastSAM\ultralytics\nn\tasks.py", line 518, in torch_safe_load
return torch.load(file, map_location='cpu'), file # load
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mauoa\miniconda3\envs\segmentation_test\Lib\site-packages\torch\serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed,
but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.SegmentationModel was not an allowed global by default. Please use `torch.serialization.add_safe_globals([SegmentationModel])` or the `torch.serialization.safe_globals([SegmentationModel])` context manager to allowlist this global if you trust this class/function.
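For reference, this is how I read the two options the message describes, written as standalone snippets against the checkpoint path from the command above (I have not applied either to the repo yet, so this is only my guess at the intended usage):

import torch
from ultralytics.nn.tasks import SegmentationModel  # the class named in the error

weights = "./weights/FastSAM.pt"

# Option 1: restore the pre-2.6 behaviour with weights_only=False.
# Only reasonable here because the checkpoint comes from the official
# FastSAM Hugging Face space.
ckpt = torch.load(weights, map_location="cpu", weights_only=False)

# Option 2: keep weights_only=True but allowlist the rejected class
# (the list may need to grow if further globals are reported).
with torch.serialization.safe_globals([SegmentationModel]):
    ckpt = torch.load(weights, map_location="cpu")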
What I've Tried:
I followed the instructions from the Colab notebook and installed the requirements.
I'm running on a Windows 10 PC in a conda environment with Python 3.12.8.
I've forced inference on the CPU using the --device cpu flag.
According to the error message, one solution would be to allowlist the SegmentationModel global by using torch.serialization.add_safe_globals([SegmentationModel]).
However, I'm unsure where and how to integrate this into the FastSAM code; my best guess is sketched just below.
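For example, would something like this near the top of FastSAM/Inference.py, before the FastSAM model is constructed, be the right place? (This is only my guess, not anything from the repo's documentation.)

import torch
from ultralytics.nn.tasks import SegmentationModel

# Guess: register the allowlist before FastSAM(args.model_path) reaches
# torch.load() inside ultralytics/nn/tasks.py, so that weights_only=True
# accepts the SegmentationModel class stored in the checkpoint.
torch.serialization.add_safe_globals([SegmentationModel])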
Questions:
What is the recommended approach for resolving this unpickling error with ultralytics.nn.tasks.SegmentationModel?
Should I modify the repository code to use torch.serialization.add_safe_globals or use weights_only=False when calling torch.load?
Are there any other known issues with running FastSAM on Windows 10 with Python 3.12.8?
Any guidance would be appreciated. Thank you!