The agent is responsible for the life cycle of the computation, i.e., running the computation and sending events about the status of the computation within the TEE. The agent is found inside the VM (TEE), and each computation within the TEE has its own agent. When a computation run request is sent from the manager, the manager creates a VM where the agent is found and sends the computation manifest to the agent.
As the computation in the agent undergoes different operations, it sends events to the manager so that the user can monitor the computation from either the UI or another client. Events sent to the manager include computation running, computation finished, computation failed, and computation stopped.
The agent sends events to the manager via vsock. The manager listens on the vsock and forwards the events via gRPC. The agent events are used to show the status of the computation inside the TEE so that a user can be aware of what is happening inside the TEE.
+To run a computation in the agent, a signed certificate is required. The certificate is used to verify the user who is running the computation. The certificate is sent to the agent by the manager, and the agent verifies the certificate before running the computation.
The CocosAI system runs on the host, and its main goal is to enable:
These features are implemented by several independent components of the CocosAI system:
N.B. The CocosAI open-source project does not provide a Computation Management service. It is usually a cloud component, used to define a Computation (i.e. define computation metadata, like participant list, algorithm and data providers, result recipients, etc...). Ultraviolet provides a commercial product, Prism, a multi-party computation platform that implements a multi-tenant and scalable Computation Management service, running in the cloud or on premises, and capable of connecting to and controlling a CocosAI system running on the TEE host.
+
Manager is a gRPC client that receives requests over gRPC and sends them to the Agent via vsock. Manager creates a secure enclave and loads the computation where the Agent resides. The connection between Manager and Agent is through vsock, over which the Agent periodically sends events to the Manager, which forwards them via gRPC.
Agent defines the firmware which goes into the TEE and is used to control and monitor the computation within the TEE and to enable secure and encrypted communication with the outside world (in order to fetch the data and provide the result of the computation). Communication between the Manager and the Agent is done via vsock: the Agent sends events to the Manager, which then forwards them via gRPC. The Agent also contains a gRPC server that exposes useful functions that can be accessed by other gRPC clients, such as the CLI.
+EOS, or Enclave Operating System, is ...
CoCoS CLI is used to access the agent within the secure enclave. The CLI communicates with the agent using gRPC, with functions such as algo to provide the algorithm to be run, data to provide the data to be used in the computation, and run to start the computation. It also has functions to fetch and validate the attestation report of the enclave.
+For more information on CLI, please refer to CLI docs.
Remote attestation is a process in which one side (the attester) collects information about itself and sends that information to the client (or the relying party) so that the relying party can verify the attester. Successful verification proves to the relying party that the secure virtual machine (SVM) runs the expected code on the expected hardware and is configured correctly. If the attester is deemed trustworthy, the relying party will send confidential code/data to the attester. This implies that a secure channel needs to be formed between the attester and the relying party. The secure channel is created using attested TLS.
+Cocos has two software components that represent the attester and the relying party:
+One of the essential parts of the attestation report is the measurement. The measurement represents the hash of the entire SVM or the hash of the HAL. This way, the measurement provides a way for the client to verify the contents of the entire SVM.
+Along with the measurement, the attestation report provides additional information about the booted SVM and underlying hardware, such as the policy with which the SVM was booted and the SNP firmware's trusted computing base (TCB) version.
+The AMD SEV-SNP attestation report can also be filled with arbitrary data. The width of this data field is 512 bits, and it is called report data. The report data content is provided by the Agent to the ASP every time the attestation report is generated.
+The last part of the report is the signature. The hardware signs the AMD SEV-SNP attestation report using the Versioned Chip Endorsement Key (VCEK). VCEK is derived from chip unique secrets and the current SNP firmware TCB. The signature is verified by obtaining the certificate for the VCEK from the AMD Key Distribution System (KDS). By verifying the signature, the relying party can be sure that the SVM is running on genuine AMD hardware and that the AMD Secure Processor (ASP) generated the attestation report.
+The Agent is responsible for fetching the attestation report from the SVM. This procedure is safe because the Kernel and the ASP can exchange encrypted messages that can only be decrypted by the Kernel and the ASP. The keys used for the encryption/decryption are inserted by the ASP into the memory of the SVM during boot, thus ensuring that only the ASP and the SVM have the keys for safe communication.
+For the relying party to send confidential data or code to the Agent, a secure channel must be established between them. This is done using attested TLS, which is a TLS connection where the server's certificate is extended with the attestation report. The SVM is the server in Cocos. The Agent generates a self-signed x.509 certificate extended with the attestation report. When fetching the attestation report, the Agent inserts the hash of the public key into it using the field report data. The whole process can be seen in the below picture. The green color represents the trusted part of the system, while the red is untrusted.
+ +The relying party uses the Cocos CLI to verify the self-signed certificate and the attestation report that is part of it. Successful verification proves to the relying party that the certificate is generated inside the SVM because the certificate's public key is part of the attestation report.
+ + + + + + +The CLI allows you to perform various tasks related to the computation and management of algorithms and datasets. The CLI is a gRPC client for the agent service.
+To build the CLI, follow these steps:
go get github.com/ultravioletrs/cocos
cd cocos
make cli
export AGENT_GRPC_URL=<agent_host:agent_port>
+
+To upload an algorithm, use the following command:
+./build/cocos-cli algo /path/to/algorithm
+
+To upload a dataset, use the following command:
+./build/cocos-cli data /path/to/dataset.csv
+
+To retrieve the computation result, use the following command:
+./build/cocos-cli result
+
To install the CLI locally, i.e. for the current user:

Run make install-cli.

Use the --help flag with any command to see additional information.

Computation in CocosAI is any execution of a program (Algorithm) on a data set (Data), which can be one data file or many files coming from different parties.
Computations are multi-party, meaning that program and data providers can be different parties that do not want to expose their intellectual property to other parties participating in the computation.

Computation is a structure that holds all the necessary information needed to execute the computation securely (list of participants, execution backend - i.e. where the computation will be executed, role of each participant, cryptographic certificates, etc...). Computation is multi-party, i.e. it has multiple participants. Each of the users that participate in the computation can have one of the following roles:

- the user who creates the Computation and defines who will participate in it and with which role (by inviting other users to the Computation)

One user can have several roles - for example, Algorithm Provider can also be a Result Recipient.
The Computation Manifest represents the Computation description and is sent, upon the run command, to the Manager as JSON. The Manager fetches the Computation Manifest and sends it into the TEE to the Agent via vsock.
The first thing that the Agent does upon boot is fetch the Computation Manifest and read it. From this Manifest, the Agent understands who the participants in the computation are and with which role, i.e. from whom it can accept connections and what data they will send. The Agent also learns from the Manifest what algorithm is used and how many datasets will be provided. This way it knows when it has received all the necessary files to start the execution. Finally, the Agent learns from the Manifest to whom it needs to send the Result of the computation.
The CoCos source code is found in the CoCos repository. You should fork the repository in order to make changes to it. After forking the repository, you can clone it as follows:
+git clone <forked repository> $SOMEPATH/cocos
+cd $SOMEPATH/cocos
+
+Use the GNU Make tool to build all CoCos services:
+make
Build artifacts will be put in the build directory.

To build the custom Linux image that will host the Agent, run:
+git clone git@github.com:buildroot/buildroot.git
+cd buildroot
+make BR2_EXTERNAL=../cocos/hal/linux cocos_defconfig
+make menuconfig #optional for additional configuration
+make
+
+The necessary kernel modules must be loaded on the hypervisor.
+sudo modprobe vhost_vsock
+ls -l /dev/vhost-vsock
+# crw-rw-rw- 1 root kvm 10, 241 Jan 16 12:05 /dev/vhost-vsock
+ls -l /dev/vsock
+# crw-rw-rw- 1 root root 10, 121 Jan 16 12:05 /dev/vsock
+
+To launch the virtual machine containing agent for testing purposes, run:
+sudo find / -name OVMF_CODE.fd
+# => /usr/share/OVMF/OVMF_CODE.fd
+OVMF_CODE=/usr/share/OVMF/OVMF_CODE.fd
+
+sudo find / -name OVMF_VARS.fd
+# => /usr/share/OVMF/OVMF_VARS.fd
+OVMF_VARS=/usr/share/OVMF/OVMF_VARS.fd
+
+KERNEL="buildroot/output/images/bzImage"
+INITRD="buildroot/output/images/rootfs.cpio.gz"
+
+qemu-system-x86_64 \
+ -enable-kvm \
+ -cpu EPYC-v4 \
+ -machine q35 \
+ -smp 4 \
+ -m 2048M,slots=5,maxmem=10240M \
+ -no-reboot \
+ -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on \
+ -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 \
+ -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= \
+ -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 \
+ -kernel $KERNEL \
+ -append "earlyprintk=serial console=ttyS0" \
+ -initrd $INITRD \
+ -nographic \
+ -monitor pty \
+ -monitor unix:monitor,server,nowait
+
The default password is root.

Once started, the Agent will wait to receive its configuration via vsock. For testing purposes you can use the script in cocos/test/manual/agent-config. This script sends the agent config and also receives logs and events from the Agent. Once the VM is launched, you can send the config, including the computation manifest, to the Agent as follows:
cd cocos
+go run ./test/manual/agent-config/main.go
+
Manager is a gRPC client and needs a gRPC server to connect to. We have an example server for testing purposes in test/manager-server. Run the server as follows:
go run ./test/manager-server/main.go
Create two directories in cocos/cmd/manager: img and tmp. Copy rootfs.cpio.gz and bzImage from the buildroot output directory to cocos/cmd/manager/img. Next, run the manager client.
+cd cmd/manager
+MANAGER_GRPC_URL=localhost:7001 MANAGER_LOG_LEVEL=debug MANAGER_QEMU_USE_SUDO=false MANAGER_QEMU_ENABLE_SEV=false MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd go run main.go
+
This will result in manager sending a whoAmI request to manager-server. The manager server will then launch a VM with the agent running, which will have received the computation manifest.
+To run all of the tests you can execute:
+make test
If you've made any changes to .proto files, you should call protoc command prior to compiling individual microservices.
+To do this by hand, execute:
+make protoc
If you run ps aux | grep qemu-system-x86_64
and it returns something like this:
sammy 13913 0.0 0.0 0 0 pts/2 Z+ 20:17 0:00 [qemu-system-x86] <defunct>
+
this means that a QEMU virtual machine is currently defunct, i.e. it is no longer running. More precisely, the defunct process in the output is also known as a "zombie" process.
qemu-system-x86_64 Processes

To kill any leftover qemu-system-x86_64 processes, use
+pkill -f qemu-system-x86_64
The pkill command is used to kill processes by name or by pattern. The -f flag specifies that we want to match against the full command line, i.e. kill processes that match the pattern qemu-system-x86_64. By default, pkill sends the SIGTERM signal to all matching processes.

If this does not work, i.e. if ps aux | grep qemu-system-x86_64 still outputs qemu-system-x86_64 related process(es), you can kill the unwanted process with kill -9 <PID>, which sends the SIGKILL signal to the process.
Before proceeding, install the following requirements:

- Golang (version 1.21.6)
+Get the cocos repository:
+git clone https://github.com/ultravioletrs/cocos.git
Get the hardware abstraction layer from the releases on the cocos repository. Two files will be required:
- rootfs.cpio.gz - Initramfs
- bzImage - Kernel
Create two directories in cocos/cmd/manager: img and tmp. Copy the downloaded files to cocos/cmd/manager/img.
Manager is a gRPC client and needs a gRPC server to connect to. We have an example server for testing purposes in test/manager-server. Run the server as follows:
go run ./test/manager-server/main.go
The output should be similar to this:
+{"time":"2024-03-19T12:27:46.542638146+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
Next we need to start manager. But first we'll need to install some prerequisites.
+Virtio-vsock is a host/guest communications device. It allows applications in the guest and host to communicate. In this case, it is used to communicate between manager and agent. To enable it run the following on the host:
+sudo modprobe vhost_vsock
To confirm that it is enabled, run:
+ls -l /dev/vsock
and ls -l /dev/vhost-vsock
The output should be similar to this, respectively:
+crw-rw-rw- 1 root root 10, 121 Mar 18 14:01 /dev/vsock
and crw-rw-rw- 1 root kvm 10, 241 Mar 18 14:01 /dev/vhost-vsock
Find the ovmf code file:
+sudo find / -name OVMF_CODE.fd
+
The output will be similar to this:
+/usr/share/edk2/x64/OVMF_CODE.fd
+/usr/share/edk2/ia32/OVMF_CODE.fd
+
+Find the ovmf vars file:
+sudo find / -name OVMF_VARS.fd
+
The output will be similar to this:
+/usr/share/edk2/x64/OVMF_VARS.fd
+/usr/share/edk2/ia32/OVMF_VARS.fd
+
When manager connects to the server, it sends a whoAmI request, after which the server sends a computation manifest. In response, manager sends logs and events from the computation, from both manager and agent. To start, run:
+cd cmd/manager
+MANAGER_GRPC_URL=localhost:7001 MANAGER_LOG_LEVEL=debug MANAGER_QEMU_USE_SUDO=false MANAGER_QEMU_ENABLE_SEV=false MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd go run main.go
+
The output on manager will be similar to this:
+{"time":"2024-03-19T12:38:53.647541406+03:00","level":"INFO","msg":"/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 2048M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/edk2/x64/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=/usr/share/edk2/x64/OVMF_VARS.fd -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,addr=0x2,romfile= -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 -kernel img/bzImage -append \"earlyprintk=serial console=ttyS0\" -initrd img/rootfs.cpio.gz -nographic -monitor pty"}
+{"time":"2024-03-19T12:39:07.819774273+03:00","level":"INFO","msg":"Method Run for computation took 14.169748744s to complete"}
+{"time":"2024-03-19T12:39:07.821687259+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"Method Run for computation 1 took 51.066µs to complete without errors.\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1710841147 nanos:818774262}}"}
+{"time":"2024-03-19T12:39:07.821994067+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"Transition: receivingAlgorithms -> receivingAlgorithms\\n\" computation_id:\"1\" level:\"DEBUG\" timestamp:{seconds:1710841147 nanos:819067478}}"}
+{"time":"2024-03-19T12:39:07.822053853+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_event:{event_type:\"receivingAlgorithms\" timestamp:{seconds:1710841147 nanos:819118886} computation_id:\"1\" originator:\"agent\" status:\"in-progress\"}"}
+{"time":"2024-03-19T12:39:07.822605252+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"agent service gRPC server listening at :7002 without TLS\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1710841147 nanos:819759020}}"}
+
The output on manager test server will be similar to this:
+{"time":"2024-03-19T12:27:46.542638146+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
+{"time":"2024-03-19T12:38:53.64961785+03:00","level":"DEBUG","msg":"received who am on ip address [::1]:48592"}
+received whoamI
+&{}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1710841133 nanos:649982672} computation_id:"1" originator:"manager" status:"starting"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1710841133 nanos:650082447} computation_id:"1" originator:"manager" status:"in-progress"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1710841147 nanos:819724344} computation_id:"1" originator:"manager" status:"complete"}
+received runRes
+&{agent_port:"46693" computation_id:"1"}
+received agent log
+&{message:"Method Run for computation 1 took 51.066µs to complete without errors." computation_id:"1" level:"INFO" timestamp:{seconds:1710841147 nanos:818774262}}
+received agent log
+&{message:"Transition: receivingAlgorithms -> receivingAlgorithms\n" computation_id:"1" level:"DEBUG" timestamp:{seconds:1710841147 nanos:819067478}}
+received agent event
+&{event_type:"receivingAlgorithms" timestamp:{seconds:1710841147 nanos:819118886} computation_id:"1" originator:"agent" status:"in-progress"}
+received agent log
+&{message:"agent service gRPC server listening at :7002 without TLS" computation_id:"1" level:"INFO" timestamp:{seconds:1710841147 nanos:819759020}}
+
From the logs we see the agent has been bound to port 46693 (the agent_port in the runRes message), which we can use with the agent CLI to send the algorithm and datasets and to retrieve results. In this case AGENT_GRPC_URL will be localhost:46693. To test the agent, proceed to CLI.
HAL is a layer of programming that allows the software to interact with the hardware device at a general level rather than at the detailed hardware level. Cocos uses HAL and AMD SEV-SNP as an abstraction layer for confidential computing.
+AMD SEV-SNP creates secure virtual machines (SVMs). VMs are usually used to run an operating system (e.g., Ubuntu and its applications). To avoid using a whole OS, HAL uses:
+This way, applications can be executed in the SVM, and the whole HAL SVM is entirely in RAM, protected by SEV-SNP. Being a RAM-only SVM means that secrets that are kept in the SVM will be destroyed when the SVM stops working.
+HAL is made using the tool Buildroot. Buildroot is used to create efficient, embedded Linux systems, and we use it to create the compressed image of the kernel (vmlinuz) and the initial file system (initramfs).
+HAL configuration for Buildroot also includes Python runtime and agent software support. You can read more about the Agent software here.
+HAL is combined with AMD SEV-SNP to provide a fully encrypted VM that can be verified using remote attestation. You can read more about the attestation process here.
Cocos uses QEMU and Open Virtual Machine Firmware (OVMF) to boot the confidential VM. During boot with SEV-SNP, the AMD Secure Processor (AMD SP) measures (calculates the hash of) the contents of the VM and inserts that hash into the attestation report. This measurement is proof of what is currently running inside the VM. The problem with SEV is that it only measures the Open Virtual Machine Firmware (OVMF). To solve this, we have built OVMF so that it contains the hashes of the vmlinuz and initramfs. Once the OVMF is loaded, it will load the vmlinuz and initramfs into memory, but it will continue the boot process only if the hashes of the vmlinuz and initramfs match the hashes stored in OVMF. This way, the attestation report will contain the measurement of OVMF, with the hashes, and OVMF will guarantee that the correct kernel and file system are booted. The whole process can be seen in the following diagram. The green color represents the trusted part of the system, while the red is untrusted:
+ +This process guarantees that the whole VM is secure and can be verified.
+After the kernel boots, the agent is started and ready for work.
+ + + + + + +CocosAI (Confidential Computing System for AI) is a SW system for enabling confidential and privacy-preserving AI/ML, i.e. execution of model training and algorithm inference on confidential data sets. Privacy-preservation is considered a “holy grail” of AI. It opens many possibilities, among which is a collaborative, trustworthy AI.
CocosAI leverages Confidential Computing, a novel paradigm based on specialized HW CPU extensions for producing secure encrypted enclaves in memory (Trusted Execution Environments, or TEEs), thus isolating confidential data and programs from the rest of the SW running on the host.
+The final product enables data scientists to train AI and ML models on confidential data that is never revealed, and can be used for Secure Multi-Party Computation (SMPC). AI/ML on combined data sets that come from different sources will unlock huge value.
CoCoS.ai enables the following features:
CocosAI is published under the liberal Apache-2.0 open-source license.

CocosAI can be downloaded from its GitHub repository.
+ + + + + + +Manager runs on the TEE-capable host (AMD SEV-SNP, Intel SGX or Intel TDX) and has 2 main roles:
to create the TEE upon the start command and upload the necessary configuration into it (command line arguments, TLS certificates, etc...)
+Computation Management service is used to to cnfigure computation metadata. Once a computation is created by a user and the invited users have uploaded their public certificates (used later for identification and data exchange in the enclave), a run request is sent. The Manager is responsible for creating the TEE in which computation will be ran and managing the computation lifecycle.
+Communication to between Computation Management cloud and the Manager is done via gRPC, while communication between Manager and Agent is done via Virtio Vsock. Vsock is used to send Agent events from the computation in the Agent to the Manager. The Manager then sends the events back to Computation Mangement cloud via gRPC, and these are visible to the end user.
+When TEE is booted, and Agent is autmatically deployed and is used for outside communication with the enclave (via the API) and for computation orchestration (data and algorithm upload, start of the computation and retrieval of the result).
+Agent is a gRPC server, and CLI is a gRPC client of the Agent. The Manager sends the Computation Manifest to the Agent via vsock and the Agent runs the computation, according to the Computation Manifest, while sending evnets back to manager on the status. The Manager then sends the events it receives from agent via vsock to Computation Mangement cloud through gRPC.
+git clone https://github.com/ultravioletrs/cocos
+cd cocos
+
N.B. All relative paths in this document are relative to the cocos repository directory.
QEMU-KVM is a virtualization platform that allows you to run multiple operating systems on the same physical machine. It is a combination of two technologies: QEMU and KVM.
+To install QEMU-KVM on a Debian based machine, run
+sudo apt update
+sudo apt install qemu-kvm
+
Create the img and tmp directories in cmd/manager.
The necessary kernel modules must be loaded on the hypervisor.
+sudo modprobe vhost_vsock
+ls -l /dev/vhost-vsock
+# crw-rw-rw- 1 root kvm 10, 241 Jan 16 12:05 /dev/vhost-vsock
+ls -l /dev/vsock
+# crw-rw-rw- 1 root root 10, 121 Jan 16 12:05 /dev/vsock
+
Cocos HAL for Linux is a framework for building a custom in-enclave Linux distribution. Use the instructions in the Readme.
Once the image is built, copy the kernel and rootfs image to cmd/manager/img from buildroot/output/images/bzImage and buildroot/output/images/rootfs.cpio.gz respectively.
cd cmd/manager
+
+sudo find / -name OVMF_CODE.fd
+# => /usr/share/OVMF/OVMF_CODE.fd
+OVMF_CODE=/usr/share/OVMF/OVMF_CODE.fd
+
+sudo find / -name OVMF_VARS.fd
+# => /usr/share/OVMF/OVMF_VARS.fd
+OVMF_VARS=/usr/share/OVMF/OVMF_VARS.fd
+
+KERNEL="img/bzImage"
+INITRD="img/rootfs.cpio.gz"
+
+qemu-system-x86_64 \
+ -enable-kvm \
+ -cpu EPYC-v4 \
+ -machine q35 \
+ -smp 4 \
+ -m 2048M,slots=5,maxmem=10240M \
+ -no-reboot \
+ -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on \
+ -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 \
+ -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= \
+ -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 \
+ -kernel $KERNEL \
+ -append "earlyprintk=serial console=ttyS0" \
+ -initrd $INITRD \
+ -nographic \
+ -monitor pty \
+ -monitor unix:monitor,server,nowait
+
Once the VM is booted, press enter and at the login prompt use the username root.
Agent is started automatically in the VM.
+# List running processes and use 'grep' to filter for processes containing 'agent' in their names.
+ps aux | grep cocos-agent
+# This command helps verify that the 'agent' process is running.
+# The output shows the process ID (PID), resource usage, and other information about the 'cocos-agent' process.
+# For example: 118 root cocos-agent
+
+We can also check if Agent
is reachable from the host machine:
# Use netcat (nc) to test the connection to localhost on port 7020.
+nc -zv localhost 7020
+# Output:
+# nc: connect to localhost (::1) port 7020 (tcp) failed: Connection refused
+# Connection to localhost (127.0.0.1) 7020 port [tcp/*] succeeded!
+
Now you are able to use Manager with Agent. Namely, Manager will create a VM with a separate OVMF variables file upon each run request.
We need Open Virtual Machine Firmware. OVMF is a port of Intel's tianocore firmware - an open source implementation of the Unified Extensible Firmware Interface (UEFI) - used by a qemu virtual machine. We need OVMF in order to run virtual machine with focal-server-cloudimg-amd64. When we install QEMU, we get two files that we need to start a VM: OVMF_VARS.fd
and OVMF_CODE.fd
. We will make a local copy of OVMF_VARS.fd
since a VM will modify this file. On the other hand, OVMF_CODE.fd
is only used as a reference, so we only record its path in an environment variable.
sudo find / -name OVMF_CODE.fd
+# => /usr/share/OVMF/OVMF_CODE.fd
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/OVMF/OVMF_CODE.fd
+
+sudo find / -name OVMF_VARS.fd
+# => /usr/share/OVMF/OVMF_VARS.fd
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/OVMF/OVMF_VARS.fd
+
+NB: we set environment variables that we will use in the shell process where we run manager
.
To start the service, execute the following shell script (note that a server needs to be running; see here):
+# download the latest version of the service
+go get github.com/ultravioletrs/cocos
+
+cd $GOPATH/src/github.com/ultravioletrs/cocos
+
+# compile the manager
+make manager
+
+# copy binary to bin
+make install
+
+# set the environment variables and run the service
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_USE_SUDO=false \
+MANAGER_QEMU_ENABLE_SEV=false \
+./build/cocos-manager
+
+To enable AMD SEV support, start manager like this
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_USE_SUDO=true \
+MANAGER_QEMU_ENABLE_SEV=true \
+MANAGER_QEMU_SEV_CBITPOS=51 \
+./build/cocos-manager
+
+NB: To verify that the manager successfully launched the VM, you need to open two terminals on the same machine. In one terminal, you need to launch go run main.go
(with the environment variables of choice) and in the other, you can run the verification commands.
To verify that the manager launched the VM successfully, run the following command:
+ps aux | grep qemu-system-x86_64
+
+You should get something similar to this
+darko 324763 95.3 6.0 6398136 981044 ? Sl 16:17 0:15 /usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty
+
If you run the command as sudo, you should get output similar to this one:
root 37982 0.0 0.0 9444 4572 pts/0 S+ 16:18 0:00 sudo /usr/local/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 -machine memory-encryption=sev0 -nographic -monitor pty
+root 37989 122 13.1 5345816 4252312 pts/0 Sl+ 16:19 0:04 /usr/local/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 -machine memory-encryption=sev0 -nographic -monitor pty
+
+There are two processes because we ran the command /usr/bin/qemu-system-x86_64 as sudo: one process for the sudo command itself and one for /usr/bin/qemu-system-x86_64.
If ps aux | grep qemu-system-x86_64 gives you something like this:
darko 13913 0.0 0.0 0 0 pts/2 Z+ 20:17 0:00 [qemu-system-x86] <defunct>
+
+it means that a QEMU virtual machine is defunct, i.e. no longer running. Such a defunct process is also known as a "zombie" process: it has already exited, but its parent has not yet reaped its exit status.
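A quick programmatic way to check for this state is to look at the process-state column (the eighth field of `ps aux` output), where `Z` marks a zombie. A minimal sketch, using the sample line above:

```shell
# The eighth field of `ps aux` output is the process state; 'Z' marks a zombie.
line='darko 13913 0.0 0.0 0 0 pts/2 Z+ 20:17 0:00 [qemu-system-x86] <defunct>'
state=$(echo "$line" | awk '{print $8}')
case "$state" in
  Z*) echo "zombie" ;;   # defunct: already exited, waiting to be reaped
  *)  echo "running" ;;
esac
# prints: zombie
```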
+You can troubleshoot the VM launch procedure by running the qemu-system-x86_64 command directly. When you run manager with the MANAGER_LOG_LEVEL=info env var set, it prints the entire command used to launch a VM. The relevant part of the log might look like this:
{"level":"info","message":"/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty","ts":"2023-08-14T18:29:19.2653908Z"}
+
+You can run the command - the value of the "message"
key - directly in the terminal:
/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty
+
+and look for possible problems. Such problems can usually be solved with the appropriate env var assignments. Look in the manager/qemu/config.go file to see the recognized env vars, and don't forget to prepend MANAGER_QEMU_ to their names.
qemu-system-x86_64
Processes#To kill any leftover qemu-system-x86_64
processes, use
pkill -f qemu-system-x86_64
+
+The pkill command kills processes by name or by pattern. The -f flag matches the pattern against the full command line, so every process whose command line contains qemu-system-x86_64 is targeted. By default, pkill sends the SIGTERM signal to the matching processes.
If this does not work, i.e. if ps aux | grep qemu-system-x86_64 still shows qemu-system-x86_64-related process(es), you can kill the unwanted process with kill -9 <PID>, which sends the uncatchable SIGKILL signal to the process.
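The effect of SIGKILL can be seen on a throwaway process: it cannot be caught or ignored, and the shell reports exit status 128 + 9 for a process terminated this way.

```shell
# Start a disposable background process and kill it with SIGKILL,
# the same signal `kill -9` sends to a stuck QEMU process.
sleep 300 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null   # reap the process so no zombie is left behind
echo "exit status: $?"    # 137 = 128 + 9 (terminated by SIGKILL)
```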
CocosAI (Confidential Computing System for AI) is a SW system for enabling confidential and privacy-preserving AI/ML, i.e. the execution of model training and algorithm inference on confidential data sets. Privacy preservation is considered a \u201choly grail\u201d of AI. It opens many possibilities, among which is a collaborative, trustworthy AI.
CocosAI leverages Confidential Computing, a novel paradigm based on specialized HW CPU extensions for producing secure encrypted enclaves in memory (Trusted Execution Environments, or TEEs), thus isolating confidential data and programs from the rest of the SW running on the host.
The final product enables data scientists to train AI and ML models on confidential data that is never revealed, and can be used for Secure Multi-Party Computation (SMPC). AI/ML on combined data sets that come from different sources will unlock huge value.
"},{"location":"#features","title":"Features","text":"CoCoS.ai enables the following features:
CocosAI is published under the liberal Apache-2.0 open-source license.
"},{"location":"#github","title":"GitHub","text":"CocosAI can be downloaded from its GitHub repository.
"},{"location":"agent/","title":"Agent","text":"The agent is responsible for the life cycle of the computation, i.e., running the computation and sending events about its status within the TEE. The agent is found inside the VM (TEE), and each computation within the TEE has its own agent. When a computation run request is sent from the manager, the manager creates a VM in which the agent runs and sends the computation manifest to the agent.
"},{"location":"agent/#agent-events","title":"Agent Events","text":"As the computation in the agent undergoes different operations, it sends events to the manager so that the user can monitor the computation from either the UI or another client. Events sent to the manager include computation running, computation finished, computation failed, and computation stopped.
"},{"location":"agent/#vsock-connection-between-agent-manager","title":"Vsock Connection Between Agent & Manager","text":"Agent sends agent events to the manager via vsock. The manager listens to the vsock and forwards the events via gRPC. The agent events are used to show the status of the computation inside the TEE so that a user can be aware of what is happening inside the TEE.
"},{"location":"agent/#security","title":"Security","text":"To run a computation in the agent, a signed certificate is required. The certificate is used to verify the user who is running the computation. The certificate is sent to the agent by the manager, and the agent verifies the certificate before running the computation.
"},{"location":"architecture/","title":"Architecture","text":"The CocosAI system runs on the host, and its main goal is to enable:
These features are implemented by several independent components of the CocosAI system:
N.B. The CocosAI open-source project does not provide a Computation Management service. It is usually a cloud component, used to define a Computation (i.e. define computation metadata, like the participant list, algorithm and data providers, result recipients, etc.). Ultraviolet provides a commercial product, Prism, a multi-party computation platform that implements a multi-tenant and scalable Computation Management service, running in the cloud or on premises, and capable of connecting to and controlling a CocosAI system running on the TEE host.
"},{"location":"architecture/#manager","title":"Manager","text":"Manager is a gRPC client that receives requests over gRPC and forwards them to the Agent via vsock. Manager creates a secure enclave and loads the computation into the VM where the agent resides. The connection between Manager and Agent is a vsock channel, over which the agent periodically sends events to the manager, which forwards them via gRPC.
"},{"location":"architecture/#agent","title":"Agent","text":"Agent defines the firmware which goes into the TEE and is used to control and monitor the computation within the TEE and to enable secure and encrypted communication with the outside world (in order to fetch the data and provide the result of the computation). The Agent contains a gRPC server that exposes useful functions to other gRPC clients, such as the CLI. Communication between the Manager and the Agent is done via vsock: the Agent sends events to the Manager, which then forwards them via gRPC.
"},{"location":"architecture/#eos","title":"EOS","text":"EOS, or Enclave Operating System, is ...
"},{"location":"architecture/#cli","title":"CLI","text":"CoCoS CLI is used to access the agent within the secure enclave. CLI communicates with the agent using gRPC, with functions such as algo to provide the algorithm to be run, data to provide the data to be used in the computation, and run to start the computation. It also has functions to fetch and validate the attestation report of the enclave.
For more information on CLI, please refer to CLI docs.
"},{"location":"attestation/","title":"Attestation","text":"Remote attestation is a process in which one side (the attester) collects information about itself and sends that information to the client (or the relying party) so that the relying party can verify the attester. Successful verification proves to the relying party that the secure virtual machine (SVM) runs the expected code on the expected hardware and is configured correctly. If the attester is deemed trustworthy, the relying party will send confidential code/data to the attester. This implies that a secure channel needs to be formed between the attester and the relying party. The secure channel is created using attested TLS.
Cocos has two software components that represent the attester and the relying party:
One of the essential parts of the attestation report is the measurement. The measurement represents the hash of the entire SVM or the hash of the HAL. This way, the measurement provides a way for the client to verify the contents of the entire SVM.
Along with the measurement, the attestation report provides additional information about the booted SVM and underlying hardware, such as the policy with which the SVM was booted and the SNP firmware's trusted computing base (TCB) version.
The AMD SEV-SNP attestation report can also be filled with arbitrary data. The width of this data field is 512 bits, and it is called report data. The report data content is provided by the Agent to the ASP every time the attestation report is generated.
The last part of the report is the signature. The hardware signs the AMD SEV-SNP attestation report using the Versioned Chip Endorsement Key (VCEK). VCEK is derived from chip unique secrets and the current SNP firmware TCB. The signature is verified by obtaining the certificate for the VCEK from the AMD Key Distribution System (KDS). By verifying the signature, the relying party can be sure that the SVM is running on genuine AMD hardware and that the AMD Secure Processor (ASP) generated the attestation report.
"},{"location":"attestation/#how-is-the-attestation-report-fetched","title":"How is the attestation report fetched?","text":"The Agent is responsible for fetching the attestation report from the SVM. This procedure is safe because the Kernel and the ASP can exchange encrypted messages that can only be decrypted by the Kernel and the ASP. The keys used for the encryption/decryption are inserted by the ASP into the memory of the SVM during boot, thus ensuring that only the ASP and the SVM have the keys for safe communication.
"},{"location":"attestation/#attested-tls","title":"Attested TLS","text":"For the relying party to send confidential data or code to the Agent, a secure channel must be established between them. This is done using attested TLS, which is a TLS connection where the server's certificate is extended with the attestation report. The SVM is the server in Cocos. The Agent generates a self-signed x.509 certificate extended with the attestation report. When fetching the attestation report, the Agent inserts the hash of the public key into it using the field report data. The whole process can be seen in the below picture. The green color represents the trusted part of the system, while the red is untrusted.
The relying party uses the Cocos CLI to verify the self-signed certificate and the attestation report that is part of it. Successful verification proves to the relying party that the certificate is generated inside the SVM because the certificate's public key is part of the attestation report.
"},{"location":"cli/","title":"Agent CLI","text":"The CLI allows you to perform various tasks related to the computation and management of algorithms and datasets. The CLI is a gRPC client for the agent service.
"},{"location":"cli/#build","title":"Build","text":"To build the CLI, follow these steps:
go get github.com/ultravioletrs/cocos
.cd cocos
.make cli
.export AGENT_GRPC_URL=<agent_host:agent_port>\n
"},{"location":"cli/#upload-algorithm","title":"Upload Algorithm","text":"To upload an algorithm, use the following command:
./build/cocos-cli algo /path/to/algorithm\n
"},{"location":"cli/#upload-dataset","title":"Upload Dataset","text":"To upload a dataset, use the following command:
./build/cocos-cli data /path/to/dataset.csv\n
"},{"location":"cli/#retrieve-result","title":"Retrieve Result","text":"To retrieve the computation result, use the following command:
./build/cocos-cli result\n
"},{"location":"cli/#installation","title":"Installation","text":"To install the CLI locally, i.e. for the current user:
Run make install-cli
.
--help
flag with any command to see additional information. Computation in CocosAI is any execution of a program (Algorithm) on a data set (Data), which can be a single data file or many files coming from different parties.
Computations are multi-party, meaning that program and data providers can be different parties that do not want to expose their intellectual property to other parties participating in the computation.
Computation
is a structure that holds all the necessary information needed to execute the computation securely (list of participants, execution backend - i.e. where computation will be executed, role of each participant, cryptographic certificates, etc...).
Computation is multi-party, i.e. it has multiple participants. Each of the users that participate in the computation can have one of the following roles:
Computation
and that defines who will participate in it and with which role (by inviting other users to the Computation). One user can have several roles - for example, an Algorithm Provider can also be a Result Recipient.
"},{"location":"computation/#computation-manifest","title":"Computation Manifest","text":"The Computation Manifest represents the Computation description and is sent to the Manager as JSON upon the run
command.
Manager fetches the Computation Manifest and sends it into the TEE to Agent, via vsock.
The first thing the Agent does upon boot is fetch and read the Computation Manifest. From the Manifest, the Agent learns who the participants in the computation are and with which role, i.e. from whom it can accept connections and what data they will send. The Agent also learns from the Manifest which algorithm is used and how many datasets will be provided; this way it knows when it has received all the files necessary to start the execution. Finally, the Agent learns from the Manifest to whom it needs to send the Result of the computation.
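As an illustration of the kind of information the Manifest carries, a sketch of such a JSON document could be written out as below. The field names here are hypothetical, chosen only to mirror the description above; consult the cocos repository for the actual schema.

```shell
# Write an illustrative computation manifest. Field names are hypothetical,
# mirroring the information described above (participants, roles, algorithm,
# dataset count, result recipients) -- NOT the real cocos schema.
cat <<'EOF' > computation.json
{
  "id": "1",
  "name": "example-computation",
  "algorithm_providers": ["user-a"],
  "dataset_providers": ["user-b", "user-c"],
  "expected_datasets": 2,
  "result_recipients": ["user-a"]
}
EOF
wc -c < computation.json
```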
"},{"location":"developer-guide/","title":"Developer Guide","text":""},{"location":"developer-guide/#getting-cocos","title":"Getting CoCos","text":"CoCos is found in the CoCos repository. You should fork the repository in order to make changes to it. After forking the repository, you can clone it as follows:
git clone <forked repository> $SOMEPATH/cocos\ncd $SOMEPATH/cocos\n
"},{"location":"developer-guide/#building","title":"Building","text":""},{"location":"developer-guide/#prerequisites","title":"Prerequisites","text":"Use the GNU Make tool to build all CoCos services: make
Build artifacts will be put in the build directory.
To build the custom Linux image that will host the agent, run:
git clone git@github.com:buildroot/buildroot.git\ncd buildroot\nmake BR2_EXTERNAL=../cocos/hal/linux cocos_defconfig\nmake menuconfig #optional for additional configuration\nmake\n
"},{"location":"developer-guide/#testing-hal-image","title":"Testing HAL image","text":""},{"location":"developer-guide/#enable-v-sock","title":"Enable V-Sock","text":"The necessary kernel modules must be loaded on the hypervisor.
sudo modprobe vhost_vsock\nls -l /dev/vhost-vsock\n# crw-rw-rw- 1 root kvm 10, 241 Jan 16 12:05 /dev/vhost-vsock\nls -l /dev/vsock\n# crw-rw-rw- 1 root root 10, 121 Jan 16 12:05 /dev/vsock\n
"},{"location":"developer-guide/#launch-the-vm","title":"Launch the VM","text":"To launch the virtual machine containing agent for testing purposes, run:
sudo find / -name OVMF_CODE.fd\n# => /usr/share/OVMF/OVMF_CODE.fd\nOVMF_CODE=/usr/share/OVMF/OVMF_CODE.fd\n\nsudo find / -name OVMF_VARS.fd\n# => /usr/share/OVMF/OVMF_VARS.fd\nOVMF_VARS=/usr/share/OVMF/OVMF_VARS.fd\n\nKERNEL=\"buildroot/output/images/bzImage\"\nINITRD=\"buildroot/output/images/rootfs.cpio.gz\"\n\nqemu-system-x86_64 \\\n -enable-kvm \\\n -cpu EPYC-v4 \\\n -machine q35 \\\n -smp 4 \\\n -m 2048M,slots=5,maxmem=10240M \\\n -no-reboot \\\n -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on \\\n -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 \\\n -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= \\\n -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 \\\n -kernel $KERNEL \\\n -append \"earlyprintk=serial console=ttyS0\" \\\n -initrd $INITRD \\\n -nographic \\\n -monitor pty \\\n -monitor unix:monitor,server,nowait\n
The default password is root
.
Once started, Agent will wait to receive its configuration via vsock. For testing purposes you can use the script in cocos/test/manual/agent-config
. This script sends the agent config and also receives logs and events from the agent. Once the VM is launched, you can send the config, including the computation manifest, to the agent as follows:
cd cocos\ngo run ./test/manual/agent-config/main.go\n
"},{"location":"developer-guide/#testing-manager","title":"Testing Manager","text":"Manager is a gRPC client and needs a gRPC server to connect to. We have an example server for testing purposes in test/manager-server
. Run the server as follows:
go run ./test/manager-server/main.go
Create two directories, img and tmp, in cocos/cmd/manager. Copy rootfs.cpio.gz and bzImage from the buildroot output directory to cocos/cmd/manager/img.
Next, run the manager client.
cd cmd/manager\nMANAGER_GRPC_URL=localhost:7001 MANAGER_LOG_LEVEL=debug MANAGER_QEMU_USE_SUDO=false MANAGER_QEMU_ENABLE_SEV=false MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd go run main.go\n
This will result in the manager sending a whoIam request to manager-server. The manager server will then launch a VM in which the agent is running, having received the computation manifest.
"},{"location":"developer-guide/#runnung-tests","title":"Running Tests","text":"To run all of the tests, execute: make test
If you've made any changes to .proto files, you should run the protoc command prior to compiling the individual microservices.
To do this by hand, execute: make protoc
If you run ps aux | grep qemu-system-x86_64
and it gives you something like this:
sammy 13913 0.0 0.0 0 0 pts/2 Z+ 20:17 0:00 [qemu-system-x86] <defunct>\n
it means that a QEMU virtual machine is defunct, i.e. it is no longer running. Such a defunct process is also known as a \"zombie\" process.
"},{"location":"developer-guide/#kill-qemu-system-x86_64-processes","title":"Killqemu-system-x86_64
Processes","text":"To kill any leftover qemu-system-x86_64
processes, use pkill -f qemu-system-x86_64
The pkill command kills processes by name or by pattern. The -f
flag matches the pattern against the full command line, so every process whose command line contains qemu-system-x86_64
is targeted. By default, pkill sends the SIGTERM signal to the matching processes.
If this does not work, i.e. if ps aux | grep qemu-system-x86_64
still outputs qemu-system-x86_64
related process(es), you can kill the unwanted process with kill -9 <PID>
, which sends the uncatchable SIGKILL signal to the process.
Before proceeding, install the following requirements: - Golang (version 1.21.6)
"},{"location":"getting-started/#getting-cocos","title":"Getting CoCos","text":"Get the cocos repository: git clone https://github.com/ultravioletrs/cocos.git
Get the hardware abstraction layer from the releases on the cocos repository. Two files will be required: - rootfs.cpio.gz
- Initramfs - bzImage
- Kernel
Create two directories, img and tmp, in cocos/cmd/manager. Copy the downloaded files to cocos/cmd/manager/img.
Manager is a gRPC client and needs a gRPC server to connect to. We have an example server for testing purposes in test/manager-server
. Run the server as follows:
go run ./test/manager-server/main.go
The output should be similar to this: {\"time\":\"2024-03-19T12:27:46.542638146+03:00\",\"level\":\"INFO\",\"msg\":\"manager_test_server service gRPC server listening at :7001 without TLS\"}
Next we need to start manager. But first we'll need to install some prerequisites.
"},{"location":"getting-started/#vsock","title":"Vsock","text":"Virtio-vsock is a host/guest communication device. It allows applications in the guest and the host to communicate; in this case, it is used for communication between the manager and the agent. To enable it, run the following on the host: sudo modprobe vhost_vsock
To confirm that it is enabled, run ls -l /dev/vsock
and ls -l /dev/vhost-vsock
; the output should be similar to this, respectively: crw-rw-rw- 1 root root 10, 121 Mar 18 14:01 /dev/vsock
and crw-rw-rw- 1 root kvm 10, 241 Mar 18 14:01 /dev/vhost-vsock
Find the OVMF code file:
sudo find / -name OVMF_CODE.fd\n
The output will be similar to this:
/usr/share/edk2/x64/OVMF_CODE.fd\n/usr/share/edk2/ia32/OVMF_CODE.fd\n
Find the OVMF vars file:
sudo find / -name OVMF_VARS.fd\n
The output will be similar to this:
/usr/share/edk2/x64/OVMF_VARS.fd\n/usr/share/edk2/ia32/OVMF_VARS.fd\n
"},{"location":"getting-started/#run","title":"Run","text":"When the manager connects to the server, it sends a whoAmI request, after which the server sends a computation manifest. In response, the manager sends logs and events from the computation, from both the manager and the agent. To start, run:
cd cmd/manager\nMANAGER_GRPC_URL=localhost:7001 MANAGER_LOG_LEVEL=debug MANAGER_QEMU_USE_SUDO=false MANAGER_QEMU_ENABLE_SEV=false MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd go run main.go\n
The output on manager will be similar to this:
{\"time\":\"2024-03-19T12:38:53.647541406+03:00\",\"level\":\"INFO\",\"msg\":\"/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 2048M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/edk2/x64/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=/usr/share/edk2/x64/OVMF_VARS.fd -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,addr=0x2,romfile= -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 -kernel img/bzImage -append \\\"earlyprintk=serial console=ttyS0\\\" -initrd img/rootfs.cpio.gz -nographic -monitor pty\"}\n{\"time\":\"2024-03-19T12:39:07.819774273+03:00\",\"level\":\"INFO\",\"msg\":\"Method Run for computation took 14.169748744s to complete\"}\n{\"time\":\"2024-03-19T12:39:07.821687259+03:00\",\"level\":\"INFO\",\"msg\":\"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\\\"Method Run for computation 1 took 51.066\u00b5s to complete without errors.\\\" computation_id:\\\"1\\\" level:\\\"INFO\\\" timestamp:{seconds:1710841147 nanos:818774262}}\"}\n{\"time\":\"2024-03-19T12:39:07.821994067+03:00\",\"level\":\"INFO\",\"msg\":\"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\\\"Transition: receivingAlgorithms -> receivingAlgorithms\\\\n\\\" computation_id:\\\"1\\\" level:\\\"DEBUG\\\" timestamp:{seconds:1710841147 nanos:819067478}}\"}\n{\"time\":\"2024-03-19T12:39:07.822053853+03:00\",\"level\":\"INFO\",\"msg\":\"Agent Log/Event, Computation ID: 1, Message: agent_event:{event_type:\\\"receivingAlgorithms\\\" timestamp:{seconds:1710841147 nanos:819118886} computation_id:\\\"1\\\" originator:\\\"agent\\\" status:\\\"in-progress\\\"}\"}\n{\"time\":\"2024-03-19T12:39:07.822605252+03:00\",\"level\":\"INFO\",\"msg\":\"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\\\"agent service gRPC server listening at :7002 without TLS\\\" computation_id:\\\"1\\\" 
level:\\\"INFO\\\" timestamp:{seconds:1710841147 nanos:819759020}}\"}\n
The output on the manager test server will be similar to this:
{\"time\":\"2024-03-19T12:27:46.542638146+03:00\",\"level\":\"INFO\",\"msg\":\"manager_test_server service gRPC server listening at :7001 without TLS\"}\n{\"time\":\"2024-03-19T12:38:53.64961785+03:00\",\"level\":\"DEBUG\",\"msg\":\"received who am on ip address [::1]:48592\"}\nreceived whoamI\n&{}\nreceived agent event\n&{event_type:\"vm-provision\" timestamp:{seconds:1710841133 nanos:649982672} computation_id:\"1\" originator:\"manager\" status:\"starting\"}\nreceived agent event\n&{event_type:\"vm-provision\" timestamp:{seconds:1710841133 nanos:650082447} computation_id:\"1\" originator:\"manager\" status:\"in-progress\"}\nreceived agent event\n&{event_type:\"vm-provision\" timestamp:{seconds:1710841147 nanos:819724344} computation_id:\"1\" originator:\"manager\" status:\"complete\"}\nreceived runRes\n&{agent_port:\"46693\" computation_id:\"1\"}\nreceived agent log\n&{message:\"Method Run for computation 1 took 51.066\u00b5s to complete without errors.\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1710841147 nanos:818774262}}\nreceived agent log\n&{message:\"Transition: receivingAlgorithms -> receivingAlgorithms\\n\" computation_id:\"1\" level:\"DEBUG\" timestamp:{seconds:1710841147 nanos:819067478}}\nreceived agent event\n&{event_type:\"receivingAlgorithms\" timestamp:{seconds:1710841147 nanos:819118886} computation_id:\"1\" originator:\"agent\" status:\"in-progress\"}\nreceived agent log\n&{message:\"agent service gRPC server listening at :7002 without TLS\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1710841147 nanos:819759020}}\n
From the logs we see agent has been bound to port 48592
which we can use with agent cli to send the algorithm, datasets and retrieve results. In this case the AGENT_GRPC_URL
will be localhost:48592
. To test agent proceed to CLI
HAL is a layer of programming that allows the software to interact with the hardware device at a general level rather than at the detailed hardware level. Cocos uses HAL and AMD SEV-SNP as an abstraction layer for confidential computing.
AMD SEV-SNP creates secure virtual machines (SVMs). VMs are usually used to run an operating system (e.g., Ubuntu and its applications). To avoid using a whole OS, HAL uses:
This way, applications can be executed in the SVM, and the whole HAL SVM is entirely in RAM, protected by SEV-SNP. Being a RAM-only SVM means that secrets that are kept in the SVM will be destroyed when the SVM stops working.
"},{"location":"hal/#how-is-hal-constructed","title":"How is HAL constructed?","text":"HAL is made using the tool Buildroot. Buildroot is used to create efficient, embedded Linux systems, and we use it to create the compressed image of the kernel (vmlinuz) and the initial file system (initramfs).
HAL configuration for Buildroot also includes Python runtime and agent software support. You can read more about the Agent software here.
"},{"location":"hal/#how-does-it-work","title":"How does it work?","text":"HAL is combined with AMD SEV-SNP to provide a fully encrypted VM that can be verified using remote attestation. You can read more about the attestation process here.
Cocos uses QEMU and Open Virtual Machine Firmware (OVMF) to boot the confidential VM. During boot with SEV-SNP, the AMD Secure Processor (AMD SP) measures (calculates the hash of) the contents of the VM and inserts that hash into the attestation report. This measurement is proof of what is currently running inside the VM. The problem with SEV is that it only measures the Open Virtual Machine Firmware (OVMF). To solve this, we have built OVMF so that it contains the hashes of the vmlinuz and initramfs. Once the OVMF is loaded, it will load the vmlinuz and initramfs into memory, but it will continue the boot process only if the hashes of the vmlinuz and initramfs match the hashes stored in OVMF. This way, the attestation report will contain the measurement of OVMF, with the hashes, and OVMF will guarantee that the correct kernel and file system are booted. The whole process can be seen in the following diagram. The green color represents the trusted part of the system, while the red is untrusted:
This process guarantees that the whole VM is secure and can be verified.
After the kernel boots, the agent is started and ready for work.
"},{"location":"manager/","title":"Manager","text":"Manager runs on the TEE-capable host (AMD SEV-SNP, Intel SGX or Intel TDX) and has 2 main roles:
start
command and upload the necessary configuration into it (command line arguments, TLS certificates, etc...)Manager expsoses and API for control, based on gRPC, and is controlled by Computation Management service. Manager acts as the client of Computation Management service and connects to it upon the start via TLS-encoded gRPC connection.
Computation Management service is used to to cnfigure computation metadata. Once a computation is created by a user and the invited users have uploaded their public certificates (used later for identification and data exchange in the enclave), a run request is sent. The Manager is responsible for creating the TEE in which computation will be ran and managing the computation lifecycle.
Communication to between Computation Management cloud and the Manager is done via gRPC, while communication between Manager and Agent is done via Virtio Vsock. Vsock is used to send Agent events from the computation in the Agent to the Manager. The Manager then sends the events back to Computation Mangement cloud via gRPC, and these are visible to the end user.
"},{"location":"manager/#manager-agent","title":"Manager <> Agent","text":"When TEE is booted, and Agent is autmatically deployed and is used for outside communication with the enclave (via the API) and for computation orchestration (data and algorithm upload, start of the computation and retrieval of the result).
Agent is a gRPC server, and CLI is a gRPC client of the Agent. The Manager sends the Computation Manifest to the Agent via vsock and the Agent runs the computation, according to the Computation Manifest, while sending evnets back to manager on the status. The Manager then sends the events it receives from agent via vsock to Computation Mangement cloud through gRPC.
"},{"location":"manager/#setup-and-test-manager-agent","title":"Setup and Test Manager <> Agent","text":"git clone https://github.com/ultravioletrs/cocos\ncd cocos\n
N.B. All relative paths in this document are relative to cocos
repository directory.
QEMU-KVM is a virtualization platform that allows you to run multiple operating systems on the same physical machine. It is a combination of two technologies: QEMU and KVM.
To install QEMU-KVM on a Debian-based machine, run:
sudo apt update\nsudo apt install qemu-kvm\n
Create img and tmp directories in cmd/manager.
The necessary kernel modules must be loaded on the hypervisor.
sudo modprobe vhost_vsock\nls -l /dev/vhost-vsock\n# crw-rw-rw- 1 root kvm 10, 241 Jan 16 12:05 /dev/vhost-vsock\nls -l /dev/vsock\n# crw-rw-rw- 1 root root 10, 121 Jan 16 12:05 /dev/vsock\n
"},{"location":"manager/#prepare-cocos-hal","title":"Prepare Cocos HAL","text":"Cocos HAL for Linux is a framework for building a custom in-enclave Linux distribution. Use the instructions in the Readme. Once the image is built, copy the kernel and rootfs images to cmd/manager/img
from buildroot/output/images/bzImage
and buildroot/output/images/rootfs.cpio.gz
respectively.
cd cmd/manager\n\nsudo find / -name OVMF_CODE.fd\n# => /usr/share/OVMF/OVMF_CODE.fd\nOVMF_CODE=/usr/share/OVMF/OVMF_CODE.fd\n\nsudo find / -name OVMF_VARS.fd\n# => /usr/share/OVMF/OVMF_VARS.fd\nOVMF_VARS=/usr/share/OVMF/OVMF_VARS.fd\n\nKERNEL=\"img/bzImage\"\nINITRD=\"img/rootfs.cpio.gz\"\n\nqemu-system-x86_64 \\\n -enable-kvm \\\n -cpu EPYC-v4 \\\n -machine q35 \\\n -smp 4 \\\n -m 2048M,slots=5,maxmem=10240M \\\n -no-reboot \\\n -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on \\\n -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 \\\n -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= \\\n -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 \\\n -kernel $KERNEL \\\n -append \"earlyprintk=serial console=ttyS0\" \\\n -initrd $INITRD \\\n -nographic \\\n -monitor pty \\\n -monitor unix:monitor,server,nowait\n
Once the VM is booted, press Enter and, at the login prompt, use the username root
.
The Agent is started automatically in the VM.
# List running processes and use 'grep' to filter for processes containing 'agent' in their names.\nps aux | grep cocos-agent\n# This command helps verify that the 'agent' process is running.\n# The output shows the process ID (PID), resource usage, and other information about the 'cocos-agent' process.\n# For example: 118 root cocos-agent\n
We can also check if Agent
is reachable from the host machine:
# Use netcat (nc) to test the connection to localhost on port 7020.\nnc -zv localhost 7020\n# Output:\n# nc: connect to localhost (::1) port 7020 (tcp) failed: Connection refused\n# Connection to localhost (127.0.0.1) 7020 port [tcp/*] succeeded!\n
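If you want to script this check, e.g., to wait for the Agent to come up after boot, a small polling helper around nc can do it (a sketch; 7020 is the host-forwarded Agent port from the QEMU command above):

```shell
# Poll a TCP port until it accepts connections or the attempt budget runs out.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if nc -z "$host" "$port" >/dev/null 2>&1; then
      echo "port $port on $host is open"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $host:$port"
  return 1
}

# Wait up to 5 seconds for the forwarded Agent port.
wait_for_port localhost 7020 5 || true
```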
"},{"location":"manager/#conclusion","title":"Conclusion","text":"Now you are able to use Manager
with Agent
. Namely, Manager
will create a VM with a separate OVMF variables file on each manager /run
request.
We need Open Virtual Machine Firmware. OVMF is a port of Intel's TianoCore firmware - an open-source implementation of the Unified Extensible Firmware Interface (UEFI) - used by QEMU virtual machines. We need OVMF in order to run a virtual machine with focal-server-cloudimg-amd64. When we install QEMU, we get two files that we need to start a VM: OVMF_VARS.fd
and OVMF_CODE.fd
. We will make a local copy of OVMF_VARS.fd
since a VM will modify this file. On the other hand, OVMF_CODE.fd
is only used as a reference, so we only record its path in an environment variable.
sudo find / -name OVMF_CODE.fd\n# => /usr/share/OVMF/OVMF_CODE.fd\nMANAGER_QEMU_OVMF_CODE_FILE=/usr/share/OVMF/OVMF_CODE.fd\n\nsudo find / -name OVMF_VARS.fd\n# => /usr/share/OVMF/OVMF_VARS.fd\nMANAGER_QEMU_OVMF_VARS_FILE=/usr/share/OVMF/OVMF_VARS.fd\n
NB: we set environment variables that we will use in the shell process where we run manager
.
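When launching QEMU by hand, you will want the same per-VM copy of the variables file that the manager makes, since the VM writes to it. A minimal sketch, assuming img/ as the destination:

```shell
# Give each VM its own writable copy of OVMF_VARS.fd; the system-wide original
# stays pristine. Falls back to the usual Debian path if the env var is unset.
OVMF_VARS_SRC="${MANAGER_QEMU_OVMF_VARS_FILE:-/usr/share/OVMF/OVMF_VARS.fd}"
mkdir -p img
if [ -f "$OVMF_VARS_SRC" ]; then
  cp "$OVMF_VARS_SRC" img/OVMF_VARS.fd
else
  echo "OVMF_VARS.fd not found; install the ovmf package first"
fi
```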
To start the service, execute the following shell script (note that a server needs to be running; see here):
# download the latest version of the service\ngo get github.com/ultravioletrs/cocos\n\ncd $GOPATH/src/github.com/ultravioletrs/cocos\n\n# compile the manager\nmake manager\n\n# copy binary to bin\nmake install\n\n# set the environment variables and run the service\nMANAGER_GRPC_URL=localhost:7001 \\\nMANAGER_LOG_LEVEL=debug \\\nMANAGER_QEMU_USE_SUDO=false \\\nMANAGER_QEMU_ENABLE_SEV=false \\\n./build/cocos-manager\n
To enable AMD SEV support, start the manager like this:
MANAGER_GRPC_URL=localhost:7001 \\\nMANAGER_LOG_LEVEL=debug \\\nMANAGER_QEMU_USE_SUDO=true \\\nMANAGER_QEMU_ENABLE_SEV=true \\\nMANAGER_QEMU_SEV_CBITPOS=51 \\\n./build/cocos-manager\n
"},{"location":"manager/#verifying-vm-launch","title":"Verifying VM Launch","text":"NB: To verify that the manager successfully launched the VM, you need to open two terminals on the same machine. In one terminal, you need to launch go run main.go
(with the environment variables of your choice) and in the other, you can run the verification commands.
To verify that the manager launched the VM successfully, run the following command:
ps aux | grep qemu-system-x86_64\n
You should get something similar to this
darko 324763 95.3 6.0 6398136 981044 ? Sl 16:17 0:15 /usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty\n
If you run a command as sudo
, you should get output similar to this one
root 37982 0.0 0.0 9444 4572 pts/0 S+ 16:18 0:00 sudo /usr/local/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 -machine memory-encryption=sev0 -nographic -monitor pty\nroot 37989 122 13.1 5345816 4252312 pts/0 Sl+ 16:19 0:04 /usr/local/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 -machine memory-encryption=sev0 -nographic -monitor pty\n
There are two processes because we run the command /usr/bin/qemu-system-x86_64
as sudo
, so there is one process for the sudo
command and another for /usr/bin/qemu-system-x86_64
.
If the ps aux | grep qemu-system-x86_64
gives you something like this
darko 13913 0.0 0.0 0 0 pts/2 Z+ 20:17 0:00 [qemu-system-x86] <defunct>\n
it means that a QEMU virtual machine is defunct, i.e., no longer running. Such a defunct process is also known as a \"zombie\" process.
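A zombie cannot be killed directly; it has already exited and lingers only until its parent reaps it, so the fix is usually to signal or restart the parent. A sketch of finding the parent, illustrated on the current shell's PID since the zombie PID in the output above will differ on your machine:

```shell
# Look up the parent PID of a process; for a real zombie, substitute its PID.
pid=$$                              # current shell, guaranteed to exist
parent=$(ps -o ppid= -p "$pid")
echo "parent of $pid is $parent"
```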
You can troubleshoot the VM launch procedure by directly running the qemu-system-x86_64
command. When you run manager
with MANAGER_LOG_LEVEL=info
env var set, it prints out the entire command used to launch a VM. The relevant part of the log might look like this
{\"level\":\"info\",\"message\":\"/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty\",\"ts\":\"2023-08-14T18:29:19.2653908Z\"}\n
You can run the command - the value of the \"message\"
key - directly in the terminal:
/usr/bin/qemu-system-x86_64 -enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=64 -m 4096M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=img/OVMF_VARS.fd -device virtio-scsi-pci,id=scsi,disable-legacy=on,iommu_platform=true -drive file=img/focal-server-cloudimg-amd64.img,if=none,id=disk0,format=qcow2 -device scsi-hd,drive=disk0 -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::9301-:9031,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= -nographic -monitor pty\n
and look for possible problems. These problems can usually be solved with appropriate env var assignments. Look in the manager/qemu/config.go
file to see the recognized env vars. Don't forget to prepend MANAGER_QEMU_
to the name of the env vars.
qemu-system-x86_64
Processes","text":"To kill any leftover qemu-system-x86_64
processes, use
pkill -f qemu-system-x86_64\n
The pkill command kills processes by name or by pattern. The -f flag specifies that the pattern is matched against the full command line, so it targets processes running qemu-system-x86_64
. By default, pkill sends the SIGTERM signal to all matching processes, asking them to terminate.
If this does not work, i.e. if ps aux | grep qemu-system-x86_64
still outputs qemu-system-x86_64
related process(es), you can kill the unwanted process with kill -9 <PID>
, which sends the SIGKILL signal to the process and cannot be caught or ignored.
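If you prefer to give the VMs a chance to shut down cleanly before forcing them, a graceful-then-forceful sequence works (a sketch; the grace period is arbitrary, and the [4] bracket in the pattern only keeps it from matching this snippet's own command line):

```shell
# SIGTERM (pkill's default) first, then SIGKILL for anything that survives.
pat='qemu-system-x86_6[4]'
pkill -f "$pat" || true   # pkill exits 1 when nothing matches; keep going
sleep 5
pkill -9 -f "$pat" || true
```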
A trusted execution environment (TEE) is a separate part of the main memory and the CPU that encrypts code/data and enables \"on the fly\" execution of that encrypted code/data. Examples of TEEs include Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV).
"},{"location":"tee/#amd-sev","title":"AMD SEV","text":"AMD SEV and its latest and most secure iteration, AMD Secure Encrypted Virtualization - Secure Nested Paging (SEV-SNP), is the AMD technology that isolates entire virtual machines (VMs). SEV-SNP encrypts the whole VM and provides confidentiality and integrity protection of the VM memory. This way, the hypervisor or any other application on the host machine cannot read the VM memory.
In CocosAI, we use an in-memory VM image called the Hardware Abstraction Layer (HAL). You can read more on HAL here.
One of the critical components of SEV technology is remote attestation. Remote attestation is a process in which one side (the attester) collects information about itself and sends that information to the client (or the relying party) for the relying party to assess the trustworthiness of the attester. If the attester is deemed trustworthy, the relying party will send confidential code/data or any secrets to the attester. You can read more on the attestation process here.
"}]}