[CoreML] ValueError: ("'activation_dtype' must be in [torch.quint8, torch.float32] (got torch.uint8)" #7587
Labels
module: coreml
Issues related to Apple's Core ML delegation
module: examples
Issues related to demos under examples directory
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Hi there,
I'm trying to run the quantization example from the CoreML README: https://github.com/pytorch/executorch/tree/main/backends/apple/coreml#quantization
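For context, this is roughly the flow that example walks through (a minimal sketch, not a verbatim copy of the README: the model, the export call, and the exact config values here are my own stand-ins):

```python
import torch
import coremltools as ct
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer

model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Export to the pre-autograd ATen dialect (torch 2.5 API; older torch
# versions use a different export entry point).
exported = torch.export.export_for_training(model, example_inputs).module()

# Quantization config: activation_dtype is explicitly torch.quint8 here,
# which is what the ValueError in the title appears to be about.
quantization_config = ct.optimize.torch.quantization.LinearQuantizerConfig.from_dict(
    {
        "global_config": {
            "quantization_scheme": ct.optimize.torch.quantization.QuantizationScheme.symmetric,
            "activation_dtype": torch.quint8,
            "weight_dtype": torch.qint8,
            "weight_per_channel": True,
        }
    }
)

quantizer = CoreMLQuantizer(quantization_config)
prepared = prepare_pt2e(exported, quantizer)
quantized = convert_pt2e(prepared)
# On my setup the ValueError from the title is raised somewhere in this flow.
```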
I was trying to install torch==2.4.0, but then it fails with the ValueError shown in the title.
Versions
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.31.2
Libc version: N/A
Python version: 3.12.5 (main, Aug 6 2024, 19:08:49) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+6a085ff
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.26.4
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0
[conda] Could not collect