Getting Started

A quick start guide to get KubeVirt up and running inside Vagrant.

Note: This guide was tested on Fedora 23 and Fedora 25.

Note: Fedora 24 is known to have a bug which affects our vagrant setup.

Building

Go (1.8 or higher)

Go needs to be setup to be able to compile the sources.

Note: Go is pretty picky about paths, so use the paths suggested below.

    # If you haven't set it already, set a GOPATH
    echo "export GOPATH=~/go" >> ~/.bashrc
    echo "export PATH=$PATH:$GOPATH/bin" >> ~/.bashrc
    source ~/.bashrc

    mkdir -p ~/go

    sudo dnf install golang

Note: Some code within k8s.io/client-go and k8s.io/apimachinery uses features from the Go standard libraries introduced in version 1.8.

If needed, a helpful tool to dynamically manage multiple versions of Go is gimme.
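For example, a minimal sketch of pulling in Go 1.8 with gimme for the current shell session (this assumes gimme is already installed and on your PATH):

    # Fetch Go 1.8 and export the matching environment for this shell
    eval "$(gimme 1.8)"

    # Verify the active Go version
    go version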

Vagrant

Vagrant is used to bring up a development and demo environment:

    sudo dnf install vagrant vagrant-libvirt
    sudo systemctl restart virtlogd # Work around rpm packaging bug
    sudo systemctl restart libvirtd

On some systems Vagrant will always ask you for your sudo password when you try to do something with a VM. To avoid retyping your password all the time, you can add yourself to the libvirt group:

sudo gpasswd -a ${USER} libvirt
newgrp libvirt

On CentOS/RHEL 7 you might also need to change the libvirt connection string to be able to see all libvirt information:

export LIBVIRT_DEFAULT_URI=qemu:///system
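You can quickly verify that libvirt answers on the system connection, for example:

    # Should print qemu:///system and list any defined domains
    virsh uri
    virsh list --all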

Build dependencies

Now we can finally get to the sources. Before building KubeVirt we'll need to install a few build requirements:

    # We are interfacing with libvirt
    sudo dnf install libvirt-devel

    cd $GOPATH
    # Use goimports for package import ordering
    go get golang.org/x/tools/cmd/goimports
    # Setup glide which is used to track dependencies
    go get github.com/Masterminds/glide

Note: Make sure you're using the glide version from your $GOPATH. If you have a version installed via your system's package manager, it's likely older and may not work with k8s.io/client-go (see the related GitHub issue).
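A quick sanity check (plain shell, nothing KubeVirt-specific) to confirm which glide binary is picked up first:

    # Should resolve to $GOPATH/bin/glide, not a system-wide install
    which glide
    glide --version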

Sources

Now we can clone the project into your $GOPATH:

    git clone https://github.com/kubevirt/kubevirt.git $GOPATH/src/kubevirt.io/kubevirt
    cd $GOPATH/src/kubevirt.io/kubevirt

Compile and run it

And finally build all required artifacts and launch the Vagrant environment:

    # Building and deploying kubevirt in Vagrant
    vagrant up
    make vagrant-deploy

This will create a VM called master, which acts as the Kubernetes master, and then deploy KubeVirt onto it. To create one or more nodes which will register themselves on the master, you can use the VAGRANT_NUM_NODES environment variable. This would create a master and two nodes:

    VAGRANT_NUM_NODES=2 vagrant up

If you decide to use separate nodes, pass the VAGRANT_NUM_NODES variable to every Vagrant-related command, as shown below. However, just running the master is enough for most development tasks.
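For example, a full bring-up with two nodes could look like this (same VAGRANT_NUM_NODES value for every command):

    VAGRANT_NUM_NODES=2 vagrant up
    VAGRANT_NUM_NODES=2 make vagrant-deploy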

You could also run some build steps individually:

    # To build all binaries
    make

    # Or to build just one binary
    make build WHAT=cmd/virt-controller

    # To build all docker images
    make docker

Code generation

Note: This is only important if you plan to modify the sources; you don't need the code generators just for building.

Currently we use code generators for two purposes:

  • Generating swagger documentation out of struct and field comments for go-restful
  • Generating mock interfaces for gomock

So if you add or modify comments on structs in pkg/api/v1 or if you change interface definitions, you need to rerun the code generator.

First install the generator tools:

go get -u github.com/golang/mock/gomock
go get -u github.com/rmohr/mock/mockgen
go get -u github.com/rmohr/go-swagger-utils/swagger-doc
go get github.com/onsi/ginkgo/ginkgo

Then regenerate the code:

make generate
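A simple way to see whether the generators actually changed anything is plain git (regenerated files should be committed together with your change):

    # Regenerated files show up as modified
    git status --short
    git diff --stat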

Testing

After a successful build you can run the unit tests:

    make test

The unit tests don't require Vagrant. To run the functional tests, make sure you have set up Vagrant, then run

    make vagrant-deploy # synchronize with your code, if necessary
    make functest # run the functional tests against the Vagrant VMs
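If you only want to iterate on the unit tests of a single area, plain go test usually works as well; this is just a sketch, and make test remains the supported entry point:

    cd $GOPATH/src/kubevirt.io/kubevirt
    # Run the unit tests of all packages below pkg/
    go test ./pkg/...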

Use

Congratulations, you are still with us and you have built KubeVirt.

Now it's time to get hands on and give it a try.

Cockpit

Cockpit is exposed at http://192.168.200.2:9090. The default login is root:vagrant.

It can be used to view the cluster and verify the running state of components within the cluster. More information can be found on that project's site.
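If the page does not come up, a quick reachability check from your host can help (plain curl; depending on your Cockpit configuration you may be redirected to HTTPS):

    # -L follows redirects, -k skips certificate verification for the demo setup
    curl -kIL http://192.168.200.2:9090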

Create a first Virtual Machine

Finally start a VM called testvm:

    # This can be done from your GIT repo, no need to log into a vagrant VM
    # You might want to watch the Cockpit Cluster topology while running these commands

    # Create a VM
    ./cluster/kubectl.sh create -f cluster/vm.json

    # Sure? Let's list all created VMs
    ./cluster/kubectl.sh get vms

    # Enough, let's get rid of it
    ./cluster/kubectl.sh delete -f cluster/vm.json


    # You can actually use kubectl.sh to introspect the cluster in general
    ./cluster/kubectl.sh get pods

This will start a VM on master or one of the running nodes with a macvtap and a tap networking device attached.
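To see which node the VM ended up on, you can look at the corresponding virt-launcher pod (standard kubectl output, shown here only as an illustration):

    # The NODE column shows where the launcher pod, and therefore the VM, runs
    ./cluster/kubectl.sh get pods -o wide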

Basic verification is possible by running

    bash cluster/vm-isolation-check.sh

Example

$ ./cluster/kubectl.sh create -f cluster/vm.json
vm "testvm" created

$ ./cluster/kubectl.sh get pods
NAME                        READY     STATUS    RESTARTS   AGE
haproxy                     1/1       Running   4          10h
virt-api                    1/1       Running   1          10h
virt-controller             1/1       Running   1          10h
virt-handler-z90mp          1/1       Running   1          10h
virt-launcher-testvm9q7es   1/1       Running   0          10s

$ ./cluster/kubectl.sh get vms
NAME      LABELS                        DATA
testvm    kubevirt.io/nodeName=master   {"apiVersion":"kubevirt.io/v1alpha1","kind":"VM","...

$ ./cluster/kubectl.sh get vms -o json
{
    "kind": "List",
    "apiVersion": "v1",
    "metadata": {},
    "items": [
        {
            "apiVersion": "kubevirt.io/v1alpha1",
            "kind": "VirtualMachine",
            "metadata": {
                "creationTimestamp": "2016-12-09T17:54:52Z",
                "labels": {
                    "kubevirt.io/nodeName": "master"
                },
                "name": "testvm",
                "namespace": "default",
                "resourceVersion": "102534",
                "selfLink": "/apis/kubevirt.io/v1alpha1/namespaces/default/virtualmachines/testvm",
                "uid": "7e89280a-be62-11e6-a69f-525400efd09f"
            },
            "spec": {
    ...

Accessing the Domain via the VM's SPICE subresource

First make sure you have remote-viewer installed. On Fedora run

dnf install virt-viewer

Then, after you have made sure that the VM testvm is running, type

cluster/kubectl.sh spice testvm

to start a remote session with remote-viewer.

To print the connection details to stdout, run

cluster/kubectl.sh spice testvm --details
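If you prefer to launch the viewer yourself, you can try redirecting the details into a file and handing it to remote-viewer; this is a sketch and assumes the printed details are in remote-viewer's connection-file format, so verify it on your setup:

    # Hypothetical workflow: save the connection details, then open them manually
    cluster/kubectl.sh spice testvm --details > testvm.vv
    remote-viewer testvm.vv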

To directly query the config, do

curl 192.168.200.2:8184/apis/kubevirt.io/v1alpha1/namespaces/default/virtualmachines/testvm/spice -H "Accept: text/plain"

Accessing the Domain via the SPICE primary resource

Since kubectl does not support TPR subresources yet, the above cluster/kubectl.sh spice magic is just a wrapper.

API Documentation

The combined swagger documentation of Kubernetes and KubeVirt can be accessed under /swaggerapi. There is also an embedded swagger-ui instance running inside the cluster. It can be accessed via /swagger-ui.
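Assuming the same API endpoint as the curl example above, the combined documentation can be fetched like this:

    # Combined Kubernetes and KubeVirt swagger description
    curl http://192.168.200.2:8184/swaggerapi/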