While playing around with Salus, I noticed that a virtual IOMMU was enabled, since there are prints (when booting the debian image) like `Found RISC-V IOMMU version 0x2`. I then found a file called `riscv_iommu.c` (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/riscv/riscv_iommu.c); after reading the code, I believe this file implements the virtual IOMMU, and its `TypeInfo` shows that the vIOMMU is modeled as a PCI device. For comparison, the `TypeInfo` in `intel_iommu.c` (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/i386/intel_iommu.c) registers the Intel IOMMU on the system bus instead.
Here comes the first question: why implement the IOMMU as a PCI device? As far as my humble knowledge goes, a typical IOMMU is usually integrated with the CPU. Maybe this was done for development convenience?
Setting the first question aside, I noticed the vendor, device, and class IDs assigned in the `riscv_iommu_pci_init` function, so I decided to observe the behavior of the vIOMMU. In the qemu monitor, `info pci` says:
```
(qemu) info pci
  Bus  0, device   0, function 0:
    Host bridge: PCI device 1b36:0008
      PCI subsystem 1af4:1100
      id ""
  Bus  0, device   1, function 0:
    Class 0264: PCI device 1b36:0010
      PCI subsystem 1af4:1100
      IRQ 0, pin A
      BAR0: 64 bit memory at 0x100000000 [0x100003fff].
      id ""
  Bus  0, device   2, function 0:
    Class 2054: PCI device 1efd:edf1
      PCI subsystem 1af4:1100
      BAR0: 64 bit memory at 0x17ffff000 [0x17fffffff].
      id ""
  Bus  0, device   3, function 0:
    Ethernet controller: PCI device 1af4:1041
      PCI subsystem 1af4:1100
      IRQ 0, pin A
      BAR1: 32 bit memory at 0x40000000 [0x40000fff].
      BAR4: 64 bit prefetchable memory at 0x100004000 [0x100007fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id ""
```
Since `PCI_VENDOR_ID_RIVOS`, `PCI_DEVICE_ID_RIVOS_IOMMU`, and the class code `0x0806` equal `0x1efd`, `0xedf1`, and `2054` in decimal respectively, I believe that device 2 is the `riscv_iommu_pci` device. But in debian, trying to find the vIOMMU with `lspci -v`, it only shows:
```
root@debian:~# lspci -v
00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
        Subsystem: Red Hat, Inc. QEMU PCIe Host bridge
        Flags: fast devsel
lspci: Unable to load libkmod resources: error -2

00:01.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02) (prog-if 02 [NVM Express])
        Subsystem: Red Hat, Inc. QEMU NVM Express Controller
        Flags: bus master, fast devsel, latency 0
        Memory at 100000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] MSI-X: Enable+ Count=65 Masked-
        Capabilities: [80] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [60] Power Management version 3
        Kernel driver in use: nvme

00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
        Subsystem: Red Hat, Inc. Virtio network device
        Flags: bus master, fast devsel, latency 0
        Memory at 40000000 (32-bit, non-prefetchable) [size=4K]
        Memory at 100004000 (64-bit, prefetchable) [size=16K]
        Capabilities: [98] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
        Capabilities: [70] Vendor Specific Information: VirtIO: Notify
        Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
        Capabilities: [50] Vendor Specific Information: VirtIO: ISR
        Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
        Kernel driver in use: virtio-pci
```
That leads to the other question: why isn't the `riscv_iommu_pci` device recognized by `pciutils`, while the qemu monitor shows it as expected? Is this a driver issue, a mistaken setting, or some other problem?
I am currently researching CoVE (or AP-TEE); it would be very helpful to me if you could answer my questions. 🥲