This project aims to help students and professionals learn the main concepts of GNU/Linux and free software.
Some GNU/Linux distribution families, such as Debian-based and RPM-based systems, will be covered.
Installation and configuration of some packages will also be covered.
Use Vagrant to bring up machines and run the labs and practice content in this article.
I have published a Vagrantfile in the vagrant folder with everything necessary
to bring up an environment for your studies.
To start learning, see the documentation above.
Clone the repo
git clone https://github.com/marcossilvestrini/learning-lpic-3-305-300.git
cd learning-lpic-3-305-300
Customize a Vagrantfile-topic-XXX template. This file contains the VM configuration for the labs. Example:
- File Vagrantfile-topic-351
- vm.clone_directory = "<your_drive_letter>:\\<path_to_machine>\\#{VM_NAME}-instance-1". Example: vm.clone_directory = "E:\\Servers\\VMWare\\#{VM_NAME}-instance-1"
- vm.vmx["memsize"] = ""
- vm.vmx["numvcpus"] = ""
- vm.vmx["cpuid.coresPerSocket"] = ""
Customize the network configuration in the configs/network files.
Use this repository to learn about the LPIC-3 305-300 exam.
Choose a Vagrantfile-topic-xxx template and copy it to a new file named Vagrantfile.
# bring the VMs up
cd vagrant && vagrant up
# destroy the VMs
cd vagrant && vagrant destroy -f
# reload the VMs
cd vagrant && vagrant reload
Important: if you reboot the VMs without Vagrant, the shared folders will not be mounted after boot.
If you use Windows, I have created PowerShell scripts for bringing the VMs up and down.
vagrant/up.ps1
vagrant/destroy.ps1
- Create repository
- Create scripts for provisioning labs
- Create examples about Topic 351
- Create examples about Topic 352
- Create examples about Topic 353
- Upload simulated exams
0. The freedom to run the program as you wish, for any purpose (freedom 0).
1. The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
2. The freedom to redistribute copies so you can help others (freedom 2).
3. The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
type COMMAND
apropos COMMAND
whatis COMMAND --long
whereis COMMAND
COMMAND --help or -h
man COMMAND
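For example, here is how these commands apply to ls (any command name works):

type ls       # how the shell resolves ls (alias, builtin or binary)
apropos ls    # search man page names and descriptions
whatis ls     # one-line description from the man database
whereis ls    # locate the binary, source and man pages
ls --help     # the command's built-in usage summary
man ls        # the full manual page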
Weight: 6
Description: Candidates should know and understand the general concepts, theory and terminology of virtualization. This includes Xen, QEMU and libvirt terminology.
Key Knowledge Areas:
- Understand virtualization terminology
- Understand the pros and cons of virtualization
- Understand the various variations of Hypervisors and Virtual Machine Monitors
- Understand the major aspects of migrating physical to virtual machines
- Understand the major aspects of migrating virtual machines between host systems
- Understand the features and implications of virtualization for a virtual machine, such as snapshotting, pausing, cloning and resource limits
- Awareness of oVirt, Proxmox, systemd-machined and VirtualBox
- Awareness of Open vSwitch
Hypervisor
Hardware Virtual Machine (HVM)
Paravirtualization (PV)
Emulation and Simulation
CPU flags
/proc/cpuinfo
Migration (P2V, V2V)
A Type 1 (bare-metal) hypervisor runs directly on the host's physical hardware, providing a base layer to manage VMs without the need for a host operating system.
- High performance and efficiency.
- Lower latency and overhead.
- Often used in enterprise environments and data centers.
- VMware ESXi: A robust and widely used hypervisor in enterprise settings.
- Microsoft Hyper-V: Integrated with Windows Server, offering strong performance and management features.
- Xen: An open-source hypervisor used by many cloud service providers.
- KVM (Kernel-based Virtual Machine): Integrated into the Linux kernel, providing high performance for Linux-based systems.
A Type 2 (hosted) hypervisor runs on top of a conventional operating system, relying on the host OS for resource management and device support.
- Easier to set up and use, especially on personal computers.
- More flexible for development, testing, and smaller-scale deployments.
- Typically less efficient than Type 1 hypervisors due to additional overhead from the host OS.
- VMware Workstation: A powerful hypervisor for running multiple operating systems on a single desktop.
- Oracle VirtualBox: An open-source hypervisor known for its flexibility and ease of use.
- Parallels Desktop: Designed for Mac users to run Windows and other operating systems alongside macOS.
- QEMU (Quick EMUlator): An open-source emulator and virtualizer, often used in conjunction with KVM.
- Deployment Environment:
- Type 1 hypervisors are commonly deployed in data centers and enterprise environments due to their direct interaction with hardware and high performance.
- Type 2 hypervisors are more suitable for personal use, development, testing, and small-scale virtualization tasks.
- Performance:
- Type 1 hypervisors generally offer better performance and lower latency because they do not rely on a host OS.
- Type 2 hypervisors may experience some performance degradation due to the overhead of running on top of a host OS.
- Management and Ease of Use:
- Type 1 hypervisors require more complex setup and management but provide advanced features and scalability for large-scale deployments.
- Type 2 hypervisors are easier to install and use, making them ideal for individual users and smaller projects.
In the context of hypervisors, which are technologies used to create and manage virtual machines, the terms P2V migration and V2V migration are common in virtualization environments.
They refer to processes of migrating systems between different types of platforms.
P2V migration refers to the process of migrating a physical server to a virtual machine.
In other words, an operating system and its applications, running on dedicated physical hardware, are "converted" and moved to a virtual machine that runs on a hypervisor (such as VMware, Hyper-V, KVM, etc.).
- Example: You have a physical server running a Windows or Linux system, and you want to move it to a virtual environment, like a cloud infrastructure or an internal virtualization server.
The process involves copying the entire system state, including the operating system, drivers, and data, to create an equivalent virtual machine that can run as if it were on the physical hardware.
V2V migration refers to the process of migrating a virtual machine from one hypervisor to another.
In this case, you already have a virtual machine running in a virtualized environment (like VMware), and you want to move it to another virtualized environment (for example, to Hyper-V or to a new VMware server).
- Example: You have a virtual machine running on a VMware virtualization server, but you decide to migrate it to a Hyper-V platform. In this case, the V2V migration converts the virtual machine from one format or hypervisor to another, ensuring it can continue running correctly.
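The disk-format step of a V2V migration can be sketched with qemu-img; a minimal example, assuming a VMware disk file named disk.vmdk (hypothetical name):

# convert a VMDK disk image to qcow2 for use with KVM/QEMU
qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2

A complete migration also adapts drivers and configuration inside the guest; tools such as virt-v2v automate those steps.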
HVM leverages hardware extensions provided by modern CPUs to virtualize hardware, enabling the creation and management of VMs with minimal performance overhead.
- Hardware Support: Requires CPU support for virtualization extensions such as Intel VT-x or AMD-V.
- Full Virtualization: VMs can run unmodified guest operating systems, as the hypervisor provides a complete emulation of the hardware environment.
- Performance: Typically offers near-native performance because of direct execution of guest code on the CPU.
- Isolation: Provides strong isolation between VMs since each VM operates as if it has its own dedicated hardware.
Examples: VMware ESXi, Microsoft Hyper-V, KVM (Kernel-based Virtual Machine).
- Compatibility: Can run any operating system without modification.
- Performance: High performance due to hardware support.
- Security: Enhanced isolation and security features provided by hardware.
- Hardware Dependency: Requires specific hardware features, limiting compatibility with older systems.
- Complexity: May involve more complex configuration and management.
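Whether a host supports HVM can be checked via the CPU flags exposed in /proc/cpuinfo (vmx indicates Intel VT-x, svm indicates AMD-V):

# count virtualization-capable CPU threads; a result greater than 0 means HVM is possible
grep -E -c '(vmx|svm)' /proc/cpuinfo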
Paravirtualization involves modifying the guest operating system to be aware of the virtual environment, allowing it to interact more efficiently with the hypervisor.
- Guest Modification: Requires changes to the guest operating system to communicate directly with the hypervisor using hypercalls.
- Performance: Can be more efficient than traditional full virtualization because it reduces the overhead associated with emulating hardware.
- Compatibility: Limited to operating systems that have been modified for paravirtualization.
Examples: Xen with paravirtualized guests, VMware tools in certain configurations, and some KVM configurations.
- Efficiency: Reduces the overhead of virtualizing hardware, potentially offering better performance for certain workloads.
- Resource Utilization: More efficient use of system resources due to direct communication between the guest OS and hypervisor.
- Guest OS Modification: Requires modifications to the guest OS, limiting compatibility to supported operating systems.
- Complexity: Requires additional complexity in the guest OS for hypercall implementations.
- Guest OS Compatibility:
  - HVM: Can run unmodified guest operating systems.
  - Paravirtualization: Requires guest operating systems to be modified to work with the hypervisor.
- Performance:
  - HVM: Typically provides near-native performance due to hardware-assisted execution.
  - Paravirtualization: Can offer efficient performance by reducing the overhead of hardware emulation, but relies on a modified guest OS.
- Hardware Requirements:
  - HVM: Requires specific CPU features (Intel VT-x, AMD-V).
  - Paravirtualization: Does not require specific CPU features but needs a modified guest OS.
- Isolation:
  - HVM: Provides strong isolation using hardware features.
  - Paravirtualization: Relies on software-based isolation, which may not be as robust as hardware-based isolation.
- Deployment:
  - HVM: Generally more straightforward to deploy since it supports unmodified OS.
  - Paravirtualization: Requires additional setup and modifications to the guest OS, increasing complexity.
NUMA (Non-Uniform Memory Access) is a memory architecture used in multiprocessor systems to optimize memory access by processors.
In a NUMA system, memory is distributed unevenly among processors, meaning that each processor has faster access to a portion of memory (its "local memory") than to memory that is physically further away (referred to as "remote memory") and associated with other processors.
- Local and Remote Memory: Each processor has its own local memory, which it can access more quickly. However, it can also access the memory of other processors, although this takes longer.
- Differentiated Latency: The latency of memory access varies depending on whether the processor is accessing its local memory or the memory of another node. Local memory access is faster, while accessing another node’s memory (remote) is slower.
- Scalability: NUMA architecture is designed to improve scalability in systems with many processors. As more processors are added, memory is also distributed, avoiding the bottleneck that would occur in a uniform memory access (UMA) architecture.
- Better Performance in Large Systems: Since each processor has local memory, it can work more efficiently without competing as much with other processors for memory access.
- Scalability: NUMA allows systems with many processors and large amounts of memory to scale more effectively compared to a UMA architecture.
- Programming Complexity: Programmers need to be aware of which regions of memory are local or remote, optimizing the use of local memory to achieve better performance.
- Potential Performance Penalties: If a processor frequently accesses remote memory, performance may suffer due to higher latency. This architecture is common in high-performance multiprocessor systems, such as servers and supercomputers, where scalability and memory optimization are critical.
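The NUMA topology of a host can be inspected from the command line; a minimal sketch, assuming the numactl package is installed:

# list NUMA nodes with their CPUs, memory sizes and inter-node distances
numactl --hardware
# show per-node memory allocation statistics
numastat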
- oVirt: https://www.ovirt.org/
- Proxmox: https://www.proxmox.com/en/proxmox-virtual-environment/overview
- Oracle VirtualBox: https://www.virtualbox.org/
- Open vSwitch: https://www.openvswitch.org/
| Type | Description | Use Cases | Examples |
|------|-------------|-----------|----------|
| Server Virtualization | Abstracts physical hardware to create virtual machines (VMs) that run separate operating systems and applications. | Data centers, cloud computing, server consolidation. | VMware ESXi, Microsoft Hyper-V, KVM. |
| OS-Level Virtualization (Containers) | Allows multiple isolated user-space instances (containers) to run on a single OS kernel. | Microservices architecture, development and testing environments. | Docker, Kubernetes, LXC. |
| Network Virtualization | Combines hardware and software network resources into a single, software-based administrative entity. | Software-defined networking (SDN), network function virtualization (NFV). | VMware NSX, Cisco ACI, OpenStack Neutron. |
| Storage Virtualization | Pools physical storage from multiple devices into a single virtual storage unit that can be managed centrally. | Data management, storage optimization, disaster recovery. | IBM SAN Volume Controller, VMware vSAN, NetApp ONTAP. |
| Desktop Virtualization | Allows a desktop operating system to run on a virtual machine hosted on a server. | Virtual desktop infrastructure (VDI), remote work solutions. | Citrix Virtual Apps and Desktops, VMware Horizon, Microsoft Remote Desktop Services. |
| Application Virtualization | Separates applications from the underlying hardware and operating system, allowing them to run in isolated environments. | Simplified application deployment, compatibility testing. | VMware ThinApp, Microsoft App-V, Citrix XenApp. |
| Data Virtualization | Integrates data from various sources without physically consolidating it, providing a unified view for analysis and reporting. | Business intelligence, real-time data integration. | Denodo, Red Hat JBoss Data Virtualization, IBM InfoSphere. |
- Resource Efficiency: Better utilization of physical resources.
- Cost Savings: Reduced hardware and operational costs.
- Scalability: Easy to scale up or down according to demand.
- Flexibility: Supports a variety of workloads and applications.
- Disaster Recovery: Simplified backup and recovery processes.
- Isolation: Improved security through isolation of environments.
Weight: 3
Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot Xen installations. The focus is on Xen version 4.x.
Key Knowledge Areas:
- Understand architecture of Xen, including networking and storage
- Basic configuration of Xen nodes and domains
- Basic management of Xen nodes and domains
- Basic troubleshooting of Xen installations
- Awareness of XAPI
- Awareness of XenStore
- Awareness of Xen Boot Parameters
- Awareness of the xm utility
Xen is an open-source type-1 (bare-metal) hypervisor, which allows multiple operating systems to run concurrently on the same physical hardware.
Xen provides a layer between the physical hardware and virtual machines (VMs), enabling efficient resource sharing and isolation.
- Architecture: Xen operates with a two-tier system where Domain 0 (Dom0) is the privileged domain with direct hardware access and manages the hypervisor. Other virtual machines, called Domain U (DomU), run guest operating systems and are managed by Dom0.
- Types of Virtualization: Xen supports both paravirtualization (PV), which requires a modified guest OS, and hardware-assisted virtualization (HVM), which uses hardware extensions (e.g., Intel VT-x or AMD-V) to run unmodified guest operating systems. Xen is widely used in cloud environments, notably by Amazon Web Services (AWS) and other large-scale cloud providers.
XenSource was the company founded by the original developers of the Xen hypervisor at the University of Cambridge to commercialize Xen.
The company provided enterprise solutions based on Xen and offered additional tools and support to enhance Xen’s capabilities for enterprise use.
- Acquisition by Citrix: In 2007, XenSource was acquired by Citrix Systems, Inc. Citrix used Xen technology as the foundation for its Citrix XenServer product, which became a popular enterprise-grade virtualization platform based on Xen.
- Transition: After the acquisition, the Xen project continued as an open-source project, while Citrix focused on commercial offerings like XenServer, leveraging XenSource technology.
Xen Project refers to the open-source community and initiative responsible for developing and maintaining the Xen hypervisor after its commercialization.
The Xen Project operates under the Linux Foundation, with a focus on building, improving, and supporting Xen as a collaborative, community-driven effort.
- Goals: The Xen Project aims to advance the hypervisor by improving its performance, security, and feature set for a wide range of use cases, including cloud computing, security-focused virtualization (e.g., Qubes OS), and embedded systems.
- Contributors: The project includes contributors from various organizations, including major cloud providers, hardware vendors, and independent developers.
- XAPI and XenTools: The Xen Project also includes tools such as XAPI (XenAPI), which is used for managing Xen hypervisor installations, and various other utilities for system management and optimization.
Xen Store is a critical component of the Xen Hypervisor.
Essentially, Xen Store is a distributed key-value database used for communication and information sharing between the Xen hypervisor and the virtual machines (also known as domains) it manages.
Here are some key aspects of Xen Store:
- Inter-Domain Communication: Xen Store enables communication between domains, such as Dom0 (the privileged domain that controls hardware resources) and DomUs (user domains, which are the VMs). This is done through key-value entries, where each domain can read or write information.
- Configuration Management: It is used to store and access configuration information, such as virtual devices, networking, and boot parameters. This facilitates the dynamic management and configuration of VMs.
- Events and Notifications: Xen Store also supports event notifications. When a particular key or value in the Xen Store is modified, interested domains can be notified to react to these changes. This is useful for monitoring and managing resources.
- Simple API: Xen Store provides a simple API for reading and writing data, making it easy for developers to integrate their applications with the Xen virtualization system.
XAPI, or XenAPI, is the application programming interface (API) used to manage the Xen Hypervisor and its virtual machines (VMs).
XAPI is a key component of XenServer (now known as Citrix Hypervisor) and provides a standardized way to interact with the Xen hypervisor to perform operations such as creating, configuring, monitoring, and controlling VMs.
Here are some important aspects of XAPI:
- VM Management: XAPI allows administrators to programmatically create, delete, start, and stop virtual machines.
- Automation: With XAPI, it's possible to automate the management of virtual resources, including networking, storage, and computing, which is crucial for large cloud environments.
- Integration: XAPI can be integrated with other tools and scripts to provide more efficient and customized administration of the Xen environment.
- Access Control: XAPI also provides access control mechanisms to ensure that only authorized users can perform specific operations in the virtual environment.
XAPI is the interface that enables control and automation of the Xen Hypervisor, making it easier to manage virtualized environments.
- Xen: The core hypervisor technology enabling virtual machines to run on physical hardware.
- XenSource: The company that commercialized Xen, later acquired by Citrix, leading to the development of Citrix XenServer.
- Xen Project: The open-source initiative and community that continues to develop and maintain the Xen hypervisor under the Linux Foundation.
- XenStore: The communication and configuration intermediary between the Xen hypervisor and the VMs, streamlining the operation and management of virtualized environments.
- XAPI: The interface that enables control and automation of the Xen hypervisor, making it easier to manage virtualized environments.
Domain0, or Dom0, is the control domain in a Xen architecture. It manages other domains (DomUs) and has direct access to hardware.
Dom0 runs device drivers, allowing DomUs, which lack direct hardware access, to communicate with devices. Typically, it is a full instance of an operating system, like Linux, and is essential for Xen hypervisor operation.
DomUs are non-privileged domains that run virtual machines.
They are managed by Dom0 and do not have direct access to hardware. DomUs can be configured to run different operating systems and are used for various purposes, such as application servers and development environments. They rely on Dom0 for hardware interaction.
PV-DomUs use a technique called paravirtualization. In this model, the DomU operating system is modified to be aware that it runs in a virtualized environment, allowing it to communicate directly with the hypervisor for optimized performance.
This results in lower overhead and better efficiency compared to full virtualization.
HVM-DomUs are virtual machines that utilize full virtualization, allowing unmodified operating systems to run. The Xen hypervisor provides hardware emulation for these DomUs, enabling them to run any operating system that supports the underlying hardware architecture.
While this offers greater flexibility, it can result in higher overhead compared to PV-DomUs.
Paravirtualised Network Devices
Domain0 (Dom0), DomainU (DomU)
PV-DomU, HVM-DomU
/etc/xen/
xl
xl.cfg
xl.conf # Xen global configurations
xentop
oxenstored # Xenstore configurations
# Xen Settings
/etc/xen/
/etc/xen/xl.conf - Main general configuration file for Xen
/etc/xen/oxenstored.conf - Xenstore configurations
# VM Configurations
/etc/xen/xlexample.pvlinux
/etc/xen/xlexample.hvm
# Service Configurations
/etc/default/xen
/etc/default/xendomains
# xen-tools configurations
/etc/xen-tools/
/usr/share/xen-tools/
# create a pv image
xen-create-image \
--hostname=lpic3-pv-guest \
--memory=1gb \
--vcpus=2 \
--lvm=vg_xen \
--dhcp \
--pygrub \
--dist=bookworm
# delete a pv image
xen-delete-image lpic3-pv-guest --lvm=vg_xen
# list xen interfaces
brctl show
# view xen information
xl info
# list Domains
xl list
# view dmesg information
xl dmesg
# monitor domains
xl top
xentop
# limit Dom0 memory (in MiB)
xl mem-set 0 2048
# limit Dom0 vCPUs (not persistent across reboots)
xl vcpu-set 0 2
# manual page for xl.conf - global Xen configuration
man xl.conf
# manual page for xl.cfg - guest domain configuration
man xl.cfg
# create DomainU - virtual machines
xl create /etc/xen/lpic3-pv-guest.cfg
# create DomainU virtual machine and connect to guest
xl create -c /etc/xen/lpic3-pv-guest.cfg
# connect in domain guest
xl console <id>|<name> (press enter)
xl console 1
xl console lpic3-pv-guest
# to exit a DomU "xl console" session,
# press Ctrl+] (or Ctrl+5 if you are using PuTTY)
# Poweroff domain
xl shutdown lpic3-pv-guest
# destroy domain
xl destroy lpic3-pv-guest
# reboot domain
xl reboot lpic3-pv-guest
Weight: 4
Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot QEMU installations.
Key Knowledge Areas:
- Understand the architecture of QEMU, including KVM, networking and storage
- Start QEMU instances from the command line
- Manage snapshots using the QEMU monitor
- Install the QEMU Guest Agent and VirtIO device drivers
- Troubleshoot QEMU installations, including networking and storage
- Awareness of important QEMU configuration parameters
Kernel modules: kvm, kvm-intel and kvm-amd
/dev/kvm
QEMU monitor
qemu
qemu-system-x86_64
ip
brctl
tunctl
# list links
ip link show
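A minimal sketch of starting a KVM-accelerated guest from the command line; the disk image and ISO file names are hypothetical:

# boot a VM with 2 GiB of RAM and 2 vCPUs, with the QEMU monitor on stdio
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 -smp 2 \
  -drive file=debian.qcow2,format=qcow2 \
  -cdrom debian-12.iso -boot d \
  -nic user \
  -monitor stdio
# snapshots can then be managed from the QEMU monitor prompt (qcow2 images only):
#   savevm snap1     create a snapshot
#   loadvm snap1     restore it
#   info snapshots   list snapshots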
Weight: 9
Description: Candidates should be able to manage virtualization hosts and virtual machines (‘libvirt domains’) using libvirt and related tools.
Key Knowledge Areas:
- Understand the architecture of libvirt
- Manage libvirt connections and nodes
- Create and manage QEMU and Xen domains, including snapshots
- Manage and analyze resource consumption of domains
- Create and manage storage pools and volumes
- Create and manage virtual networks
- Migrate domains between nodes
- Understand how libvirt interacts with Xen and QEMU
- Understand how libvirt interacts with network services such as dnsmasq and radvd
- Understand libvirt XML configuration files
- Awareness of virtlogd and virtlockd
libvirtd
/etc/libvirt/
virsh (including relevant subcommands)
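A minimal sketch of common virsh operations; the domain name lpic3-guest is hypothetical:

# list all domains, running or not
virsh list --all
# start and gracefully stop a domain
virsh start lpic3-guest
virsh shutdown lpic3-guest
# inspect a domain's resource allocation
virsh dominfo lpic3-guest
# create and list snapshots
virsh snapshot-create-as lpic3-guest snap1
virsh snapshot-list lpic3-guest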
Weight: 3
Description: Candidates should be able to manage virtual machines disk images. This includes converting disk images between various formats and hypervisors and accessing data stored within an image.
Key Knowledge Areas:
- Understand features of various virtual disk image formats, such as raw images, qcow2 and VMDK
- Manage virtual machine disk images using qemu-img
- Mount partitions and access files contained in virtual machine disk images using libguestfs
- Copy physical disk content to a virtual machine disk image
- Migrate disk content between various virtual machine disk image formats
- Awareness of Open Virtualization Format (OVF)
qemu-img
guestfish (including relevant subcommands)
guestmount
guestunmount
virt-cat
virt-copy-in
virt-copy-out
virt-diff
virt-inspector
virt-filesystems
virt-rescue
virt-df
virt-resize
virt-sparsify
virt-p2v
virt-p2v-make-disk
virt-v2v
virt-sysprep
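A minimal sketch of qemu-img and guestfish usage; all file names are hypothetical:

# create, inspect, convert and resize disk images
qemu-img create -f qcow2 disk.qcow2 20G
qemu-img info disk.qcow2
qemu-img convert -f raw -O qcow2 disk.img disk.qcow2
qemu-img resize disk.qcow2 +10G
# open an image interactively, inspecting and mounting its file systems
guestfish -a disk.qcow2 -i
# read a single file from an image without an interactive session
virt-cat -a disk.qcow2 /etc/hostname
# mount an image's file systems on the host
guestmount -a disk.qcow2 -i /mnt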
Weight: 7
Description: Candidates should understand the concept of container virtualization. This includes understanding the Linux components used to implement container virtualization as well as using standard Linux tools to troubleshoot these components.
Key Knowledge Areas:
- Understand the concepts of system and application containers
- Understand and analyze kernel namespaces
- Understand and analyze control groups
- Understand and analyze capabilities
- Understand the role of seccomp, SELinux and AppArmor for container virtualization
- Understand how LXC and Docker leverage namespaces, cgroups, capabilities, seccomp and MAC
- Understand the principle of runc
- Understand the principle of CRI-O and containerd
- Awareness of the OCI runtime and image specifications
- Awareness of the Kubernetes Container Runtime Interface (CRI)
- Awareness of podman, buildah and skopeo
- Awareness of other container virtualization approaches in Linux and other free operating systems, such as rkt, OpenVZ, systemd-nspawn or BSD Jails
timeline
title Time Line Containers Evolution
1979 : chroot
2000 : FreeBSD Jails
2004 : Solaris Containers
2006 : cgroups
2008 : LXC
2013 : Docker
2014 : Kubernetes
nsenter
unshare
ip (including relevant subcommands)
capsh
/sys/fs/cgroup
/proc/[0-9]+/ns
/proc/[0-9]+/status
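A minimal sketch of inspecting namespaces, cgroups and capabilities with standard tools; <PID> is a placeholder:

# namespaces of the current shell
ls -l /proc/$$/ns
# start a shell in new UTS and PID namespaces
sudo unshare --uts --pid --fork bash
# enter the namespaces of another process
sudo nsenter --target <PID> --mount --uts --net bash
# print the capabilities of the current shell
capsh --print
# cgroup membership of the current shell
cat /proc/$$/cgroup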
Weight: 6
Description: Candidates should be able to use system containers using LXC and LXD. The version of LXC covered is 3.0 or higher.
Key Knowledge Areas:
- Understand the architecture of LXC and LXD
- Manage LXC containers based on existing images using LXD, including networking and storage
- Configure LXC container properties
- Limit LXC container resource usage
- Use LXD profiles
- Understand LXC images
- Awareness of traditional LXC tools
lxd
lxc (including relevant subcommands)
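A minimal sketch of managing a system container with LXD; the container name lpic3-ct and the image alias are assumptions:

# create and start a container from a public image
lxc launch images:debian/12 lpic3-ct
# list containers and open a shell inside one
lxc list
lxc exec lpic3-ct -- bash
# limit container resource usage
lxc config set lpic3-ct limits.memory 512MiB
lxc config set lpic3-ct limits.cpu 1
# inspect profiles
lxc profile list
# stop and remove the container
lxc stop lpic3-ct
lxc delete lpic3-ct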
Weight: 9
Description: Candidates should be able to manage Docker nodes and Docker containers. This includes understanding the architecture of Docker as well as how Docker interacts with the node's Linux system.
Key Knowledge Areas:
- Understand the architecture and components of Docker
- Manage Docker containers by using images from a Docker registry
- Understand and manage images and volumes for Docker containers
- Understand and manage logging for Docker containers
- Understand and manage networking for Docker
- Use Dockerfiles to create container images
- Run a Docker registry using the registry Docker image
dockerd
/etc/docker/daemon.json
/var/lib/docker/
docker
Dockerfile
# Examples of docker
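A minimal sketch of common docker operations; image and container names are hypothetical:

# pull an image and run a detached container with a published port
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest
# logs, shell access and resource usage
docker logs web
docker exec -it web bash
docker stats --no-stream
# build an image from a Dockerfile in the current directory
docker build -t myapp:latest .
# run a local registry using the official registry image
docker run -d -p 5000:5000 --name registry registry:2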
Weight: 3
Description: Candidates should understand the importance of container orchestration and the key concepts Docker Swarm and Kubernetes provide to implement container orchestration.
Key Knowledge Areas:
- Understand the relevance of container orchestration
- Understand the key concepts of Docker Compose and Docker Swarm
- Understand the key concepts of Kubernetes and Helm
- Awareness of OpenShift, Rancher and Mesosphere DC/OS
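A minimal sketch of single-node orchestration with Docker Swarm, assuming Docker is installed:

# initialize a swarm and deploy a replicated service
docker swarm init
docker service create --name web --replicas 3 -p 8080:80 nginx
# inspect and scale the service
docker service ls
docker service scale web=5
# leave the swarm
docker swarm leave --force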
Weight: 2
Description: Candidates should understand common offerings in public clouds and have basic feature knowledge of commonly available cloud management tools.
Key Knowledge Areas:
- Understand common offerings in public clouds
- Basic feature knowledge of OpenStack
- Basic feature knowledge of Terraform
- Awareness of CloudStack, Eucalyptus and OpenNebula
IaaS, PaaS, SaaS
OpenStack
Terraform
# examples
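A minimal sketch of the Terraform workflow, run inside a directory containing .tf configuration files:

terraform init      # download the required providers and modules
terraform plan      # preview the execution plan
terraform apply     # create or update the resources
terraform destroy   # tear down the managed infrastructure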
Weight: 2
Description: Candidates should be able to use Packer to create system images. This includes running Packer in various public and private cloud environments as well as building container images for LXC/LXD.
Key Knowledge Areas:
- Understand the functionality and features of Packer
- Create and maintain template files
- Build images from template files using different builders
packer
# examples
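A minimal sketch of the Packer workflow; template.pkr.hcl is a hypothetical template file:

packer init template.pkr.hcl       # install the plugins the template requires
packer validate template.pkr.hcl   # check template syntax and configuration
packer build template.pkr.hcl      # build the image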
Weight: 3
Description: Candidates should be able to use cloud-init to configure virtual machines created from standardized images. This includes adjusting virtual machines to match their available hardware resources, specifically disk space and volumes.
Additionally, candidates should be able to configure instances to allow secure SSH logins and install a specific set of software packages.
Furthermore, candidates should be able to create new system images with cloud-init support.
Key Knowledge Areas:
- Understanding the features and concepts of cloud-init, including user-data, initializing and configuring cloud-init
- Use cloud-init to create, resize and mount file systems, configure user accounts, including login credentials such as SSH keys and install software packages from the distribution’s repository
- Integrate cloud-init into system images
- Use config drive datasource for testing
cloud-init
user-data
/var/lib/cloud/
# examples
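A minimal sketch of working with cloud-init on a booted instance; user-data is a hypothetical file (the schema subcommand requires a recent cloud-init release):

cloud-init schema --config-file user-data   # validate user-data against the cloud-config schema
cloud-init status --wait                    # block until cloud-init has finished
cloud-init query ds                         # inspect the datasource metadata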
Weight: 3
Description: Candidates should be able to use Vagrant to manage virtual machines, including provisioning of the virtual machine.
Key Knowledge Areas:
- Understand Vagrant architecture and concepts, including storage and networking
- Retrieve and use boxes from Atlas
- Create and run Vagrantfiles
- Access Vagrant virtual machines
- Share and synchronize folders between a Vagrant virtual machine and the host system
- Understand Vagrant provisioning, i.e. File and Shell provisioners
- Understand multi-machine setup
vagrant
Vagrantfile
# examples
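A minimal sketch of the basic Vagrant workflow; debian/bookworm64 is an example box from Vagrant Cloud:

vagrant init debian/bookworm64   # create a Vagrantfile for a public box
vagrant up                       # create and provision the VM
vagrant ssh                      # log in to the VM
vagrant halt                     # stop the VM
vagrant destroy -f               # remove the VM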
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- This project is licensed under the MIT License - see the LICENSE.md file for details
Marcos Silvestrini - [email protected]
Project Link: https://github.com/marcossilvestrini/learning-lpic-3-305-300
- Richard Stallman
- GNU
- Kernel
- Linux Standard Base
- Free Software
- License
- Distros
- Desktop Environments
- Protocols
- DNS
- Package Manager
- Shell Script
- Others Tools
- Virtualization Definitions
- KVM
- Xen
- XenServer
- Wiki XenProject
- Network Interfaces
- Xen Tools
- LPI Blog: Xen Virtualization and Cloud Computing #01: Introduction
- LPI Blog: Xen Virtualization and Cloud Computing #02: How Xen Does the Job
- LPI Blog: Xen Virtualization and Cloud Computing #04: Containers, OpenStack, and Other Related Platforms
- LPI Blog: Xen Virtualization and Cloud Computing #05: The Xen Project, Unikernels, and the Future
- Xen Project Beginners Guide
- Crazy Book
- Unikernels
- Openstack Docs
- Open vSwitch
- LPIC-3 305-300 Exam