- Packer Cookbook
Packer automates the creation of customized machine images in a repeatable manner. It supports multiple platforms including AWS, Azure, GCP, OpenStack, VMware, and Docker.
- Create Golden Images across platforms and environments
- Establishes an Image Factory Based on New Commits for Continuous Delivery
- Automate Your Monthly Patching For New/Existing Workloads
- Create Immutable Infrastructure Using Packer in CI/CD Pipeline
- Version Controlled
- Consistent Images
- Automates Everything
Packer is written in Go and compiled as a single binary for various operating systems (Windows, Linux, macOS). It is modular and highly extensible.
Packer builds images using a template. Templates can be written in either JSON (legacy) or HCL2 (recommended for v1.7.0+).
Templates define settings using blocks:
- Original Image to Use (source)
- Where to Build the Image (AWS, VMware, OpenStack)
- Files to Upload to the Image (scripts, packages, certificates)
- Installation and Configuration of the Machine Image
- Data to Retrieve when Building
- Source
- Builders
- Provisioners
- Post-Processors
- Communicators
- Variables
The source block defines the initial image to use when creating your customized image. Any defined source is reusable within build blocks.
For example:
- Building a new AWS image (AMI) requires an existing AMI to customize
- Creating a new vSphere template requires the name of the source VM
- Building a new Google Compute image requires a source image to start from
source "azure-arm" "azure-arm-centos-7" {
image_offer = "CentOS"
image_publisher = "OpenLogic"
image_sku = "7.7"
os_type = "Linux"
subscription_id = "${var.azure_subscription_id}"
}
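For comparison, a minimal AWS source might look like the following; a sketch with illustrative names, using Canonical's public Ubuntu AMIs:
source "amazon-ebs" "aws-ubuntu-focal" {
  ami_name      = "my-custom-ubuntu-{{timestamp}}"
  instance_type = "t3.micro"
  region        = "us-east-1"

  # Find the latest matching base AMI instead of hard-coding an AMI id
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }

  ssh_username = "ubuntu"
}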
- Builders are responsible for creating machines from the base images, customizing the image as defined, and then creating a resulting image
- Builders are plugins that are developed to work with a specific platform (AWS, Azure, VMware, OpenStack, Docker)
- Everything done to the image is done within the BUILD block
- This is where customization "work" happens
build {
  sources = ["source.azure-arm.azure-arm-centos-7"]

  provisioner "file" {
    destination = "/tmp/package_a.zip"
    source      = "${var.package_a_zip}"
  }
}
- Provisioners use built-in and third-party integrations to install packages and configure the machine image
- Built-in integrations include file and different shell options
- Third-party integrations include:
- Ansible - run playbooks
- Chef - run cookbooks
- InSpec - run InSpec profiles
- PowerShell - execute PowerShell scripts
- Puppet - run Puppet manifests
- Salt - configure based on Salt states
- Windows Shell - run commands using Windows cmd
- Post-processors are executed after the image is built and provisioners are complete. They can be used to upload artifacts, execute uploaded scripts, validate installs, or import an image.
- Examples include:
- Validate a package using a checksum
- Import a package to AWS as an AMI
- Push a Docker image to registry
- Convert the artifact into a Vagrant box
- Create a VMware template from the resulting build
- Communicators are the mechanism that Packer uses to communicate with the new build to upload files, execute scripts, etc.
- Two Communicators available today:
- SSH
- WinRM
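For example, a source block selects and configures its communicator; a minimal sketch (values are illustrative, builder-specific options omitted):
source "amazon-ebs" "windows-example" {
  # ... AMI, region, and instance options omitted ...
  communicator   = "winrm"
  winrm_username = "Administrator"
  winrm_use_ssl  = true
}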
- HashiCorp Packer can use variables to define defaults during a build
- Variables can be declared in a .pkrvars.hcl or .auto.pkrvars.hcl file, in the default .pkr file, or in any other file referenced when executing the build.
- You can also declare them individually using the -var option.
variable "image_id" {
type = string
description = "The id of the machine image (AMI) to use for server."
default = "ami-1234abcd"
validation {
condition = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
}
}
Packer is distributed as a single binary. There are multiple ways to download and install Packer, including the following options:
- Downloading and storing a precompiled binary (wget, curl)
- Installing from source
- Using the system's default or a custom package manager (yum, apt, brew, chocolatey)
- Running a Docker container (docker container run -it hashicorp/packer:light)
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install packer
You can optionally enable autocompletion for the Packer CLI:
packer -autocomplete-install
Subcommands available in Packer:
Usage: packer [--version] [--help] <command> [<args>]
Available commands are:
build build image(s) from template
console creates a console for testing variable interpolation
fix fixes templates from old versions of packer
fmt Rewrites HCL2 config files to canonical format
hcl2_upgrade transform a JSON template into an HCL2 configuration
init Install missing plugins or upgrade plugins
inspect see components of a template
validate check that a template is valid
version Prints the Packer version
Most of the commands accept or require flags or arguments to execute the desired functionality.
Takes a Packer template and runs all the defined builds to generate the desired artifacts. The build command provides the core functionality of Packer.
packer build base-image.pkr.hcl
Important arguments:
- -debug - enables debug mode for step-by-step troubleshooting
- -var - sets a variable in the Packer template
- -var-file - use a separate variable file
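For example, a build that overrides a single variable and loads a variable file (file and variable names are illustrative):
packer build -var "image_id=ami-5678wxyz" -var-file="variables.pkrvars.hcl" base-image.pkr.hcl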
Takes a template, finds backwards-incompatible parts of it, and brings it up to date so it can be used with the latest version of Packer. Use it after you update Packer to a new release version.
Used to format your Packer templates and files to the preferred HCL canonical format and style.
Shows all components of a Packer template including variables, builds, sources, provisioners, and post-processors.
Validates the syntax and the configuration of your Packer template. This is your first validation for templates after writing or updating them.
Translates a template written in the older JSON format to the new HCL2 format.
Packer has a few environment variables that you should know:
- PACKER_LOG - enables detailed Packer logs (off by default)
- PACKER_LOG_PATH - sets the path for Packer logs to the specified file (rather than stderr)
- PKR_VAR_<name> - defines a variable value using the environment rather than a template
# Enable Detailed Logs
export PACKER_LOG=1
# Set a path for logs
export PACKER_LOG_PATH=/var/log/packer.log
# run the packer build
packer build base-image.pkr.hcl
# Declare a value for the aws_region variable using ENV
export PKR_VAR_aws_region=us-east-1
packer build aws-base-image.pkr.hcl
- HCL2 Template
- Packer Build
- Provision Instance
- Run Provisioners (pull artifacts if required)
- Create AMI
- Register AMI
- Destroy Instance
The core functionality and behavior of Packer is defined by a template. Templates consist of declarations and commands, such as which plugins (builders, provisioners, etc.) to use, how to configure them, and in what order to run them.
Packer currently supports two formats for templates:
- JSON (Javascript Object Notation)
- HCL2 (HashiCorp Configuration Language)
- Configuration format is VCS friendly (multi-line lists, trailing commas, auto-formatting)
- Only code blocks built into the HCL language are available to use
- Packer uses a standard file name for simplicity .pkr.hcl
- Uses Syntax Constructs like Blocks and Arguments
- New features will only be implemented for the HCL format moving forward
- In general, the ordering of root blocks is not significant within a Packer template since Packer uses a declarative model. References to other resources do not depend on the order they are defined.
- Blocks can even span multiple Packer template files.
- The order of provisioner or post-processor blocks within a build is the only major feature where block order matters.
HCL2 supports comments throughout the configuration file:
# this is a comment
source "amazon-ebs" "example" {
  ami_name = "abc123"
}

// this is also a comment

/* <- this is a multi-line comment
source "amazon-ebs" "example" {
  ami_name = "abc123"
}
*/
- Like Terraform, we can use interpolation syntax to refer to other blocks within the template
- Allows us to organize code as well as reuse values that are already defined or have been retrieved
- Builders, provisioners, post-processors, and data sources are simply plugins that are consumed during the Packer build process
- This allows new functionality to be added to Packer without modifying the core source code
- Builders are responsible for creating machines and generating images from them for various platforms
- You can specify one or more builder blocks in a template.
- Each builder block can reference one or more source blocks.
- There are many configuration options for a given builder. Some options are required, and others are optional; the optional ones depend on what the builder type supports.
Popular builders include:
- AWS AMI Builder
- Azure Resource Manager Builder
- VMware Builder from ISO
- VMware vSphere Clone Builder
- VMware vSphere Builder from ISO
- Docker Builder
- Google Compute Builder
- Null Builder
- QEMU Builder
- VirtualBox Builder
When using multi-image or multi-cloud Packer templates, it may be useful to limit the scope of the build by using the only and except options.
# Display packer build options
packer build --help | grep 'only\|except'
-except=foo,bar,baz Run all builds and post-processors other than these.
-only=foo,bar,baz Build only the specified builds.
# Build image only for AWS
packer build -only="*amazon*" agnostic/ubuntu.pkr.hcl
- Packer can use variables to define defaults and values during a build
- Work a lot like variables from other programming languages
- Allow you to remove hard-coded values and pass parameters to your configuration
- Can help make configuration easier to understand and increase reusability
- Must always have a value at build time; a default value makes setting the variable optional
- Use variables to pass values to your configuration
- Refactor existing configuration to use variables
- Keep sensitive data out of source control
- Pass variables to Packer in several ways
- Variables can be declared and defined in a .pkrvars.hcl or .auto.pkrvars.hcl file, in the default .pkr file, or in any other file referenced when executing the build.
- You can also declare them individually using the -var option.
- Declared variables can be accessed throughout the template where needed within expressions.
- The type is a constant, meaning that's the only value that will be accepted for that variable.
- The most common variable types are string, number, list and map.
- Other supported types include bool (true/false), set, object, and tuple
- If type is omitted, it is inferred from the default value
- If neither type nor default is provided, the type is assumed to be string
- You can also specify complex types, such as collections
variable "image_id" {
type = string
description = "The id of machine image (AMI)."
default = "ami-1234abcd"
}
variable "image_id" {
type = list(string)
description = "The id of the machine image (AMI)."
default = ["ami-1234abcd", "ami-1z2y3x445v"]
}
A variable can be marked as sensitive if required, telling Packer to obfuscate it in the output.
variable "ssh_password" {
sensitive = true
default = {
key = "SuperSecret123"
}
}
$ packer inspect password.pkr.hcl
Packer Inspect: HCL2 mode
> input-variables:
var.ssh_password: "{\n \"key\" = \"<sensitive>\"\n }"
There are two main ways to refer to a variable:
- General reference: var.<name>
- Interpolation within a string: "${var.<name>}"
Example of general referral in template:
image_name = var.image_name
subnet_id = var.subnet
vpc_id = var.vpc
Example of interpolation within a string:
image_name = "${var.prefix}-iamge-{{timestamp}}"
- Declared variables can be accessed throughout the template where needed
- Reference variables using expressions such as var.<name> or "${var.<name>}"
source "amazon-ebs" "aws-example" {
ami_name = "aws-${var.ami_name}"
instance_type = "t3.medium"
region = "var.region
source_ami_filter {
filters = {
name = var.source_ami_name
root-device-type = "ebs"
virtualization-type = "hvm"
}
owners = [var.source_ami_owner]
}
}
To change the default variable value we can use the = operator, for example:
variable "image_id" {
type = string
description = "The id of the machine image (AMI)"
default = "ami-1234abcd"
}
Define the variable in another file, e.g. variables.pkrvars.hcl:
image_id = "ami-5678wxyz"
Or define the variable with a command line argument:
packer build -var image_id=ami-5467wxyz aws-build.pkr.hcl
Lowest to highest priority:
- Default Values
- Environment Variables
- Variable Definition File
- Using the -var or -var-file CLI Options
- Variables Entered via CLI Prompt
- Similar to input variables, locals assign a name to an expression or value
- Locals cannot be overridden at runtime - they are constants
- Can use a local {} or locals {} block - a local can be marked sensitive
- Using locals {} is more compact and efficient
- Referenced in a Packer file through interpolation - local.<name>
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

# Locals cannot be used in variable defaults; reference them where values
# are consumed, e.g. in a source block:
source "amazon-ebs" "example" {
  ami_name = "${var.ami_prefix}-${local.timestamp}"
}
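To mark a single local as sensitive, use the singular local block; a minimal sketch (name and value are illustrative):
local "ssh_password" {
  sensitive  = true
  expression = "SuperSecret123"  # obfuscated in Packer output
}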
- Variables can also be set using environment variables
- Great solution for setting credentials or variables that might change often
- Packer will read environment variables in the form of PKR_VAR_<name>
# set environment variables
export PKR_VAR_secret_key=AIOAJSFJAIFHEXAMPLE
export PKR_VAR_access_key=wPWOIAOFIJwohfalskfhiAUHFhnalkfjhuwahfi
# run packer build that will use ENV
packer build aws-linux.pkr.hcl
Packer will automatically define certain commonly used environment variables at build time that can be referenced:
- PACKER_BUILD_NAME - set to the name of the build that Packer is running
- PACKER_BUILD_TYPE - set to the type of builder that was used to create the machine
- PACKER_HTTP_ADDR - set to the address of the HTTP server for file transfer (if used)
It is worth noting some contextual variables that can be used during the build.
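For instance, a provisioner script can read the injected environment variables, and HCL build blocks expose contextual values such as source.name; a minimal sketch (source name is illustrative):
build {
  sources = ["source.amazon-ebs.ubuntu"]

  provisioner "shell" {
    inline = [
      "echo Build name: $PACKER_BUILD_NAME",  # injected by Packer at build time
      "echo Source name: ${source.name}"      # HCL contextual variable
    ]
  }
}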
- Provisioners use built-in and third-party integrations to install packages and configure the machine image
- Built-in integrations include file and different shell options
- Third-party integrations include Ansible, Chef, InSpec, PowerShell, Puppet, Salt, Windows Shell, and many more
Provisioners prepare the system for use, therefore common use cases are:
- installing packages
- patching the kernel
- creating users
- downloading application code
The file provisioner is used to upload file(s) to the image being built.
provisioner "file" {
source = "packer.zip"
desctionation = "/tmp/packer.zip"
}
provisioner "file" {
source = "/files"
destination = "/tmp"
}
The shell provisioner can execute a script or individual commands within the image being built.
provisioner "shell" {
script = "install_something.sh"
}
provisioner "shell" {
inline = [
"echo Updating package list and installing software",
"sudo apt-get update",
"sudo apt-get install -y nginx"
]
}
The Ansible provisioner runs playbooks. It dynamically creates an inventory file configured to use SSH.
provisioner "ansible" {
ansible_env_vars = ["ANSIBLE_HOST_KEY_CHECKING=False"]
extra_arguments = ["--extra-vars", "desktop=false"]
playbook_file = "${path.root}/playbooks/playbook.yml"
user = var.ssh_username
}
provisioner "powershell" {
script = [".scripts/win2019.ps1"]
}
Provisioners support the only and except options to run only on specific builds. The override option can be useful when building images across different platforms so you end up with like-for-like images.
provisioner "shell" {
inline = ["./tmp/install_vmware-tools.sh"]
override = {
aws = {
inline = ["./tmp/install_cloudwatch_agent.sh"]
}
}
}
The error-cleanup-provisioner can invoke a provisioner that only runs if the related provisioner fails. It runs before the instance is shut down or terminated - for example, to write data to a file, unsubscribe from a service, or clean up custom work.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline = ["sudo yum update -y"]
  }

  error-cleanup-provisioner "shell-local" {
    inline = ["echo 'update provisioner failed' > packer_log.txt"]
  }
}
The pause_before option can provide a waiting period. This is useful when it takes a while for the OS to come up, or when other processes are running that could conflict with the provisioner.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline       = ["sudo apt-get update -y"]
    pause_before = "10s"
  }
}
The max_retries option can restart a provisioner if it fails. It is helpful when the provisioner depends on external data/processes to complete successfully.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline      = ["sudo yum update -y"]
    max_retries = 5
  }
}
The timeout option defines the maximum time in which the provisioner should complete its task before it is considered failed.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline  = ["./install_something.sh"]
    timeout = "5m"
  }
}
- Post-processors are executed after provisioners are complete and the image is built. They can be used to upload artifacts, execute scripts, or import an image.
- Post-processors are completely optional
- Examples include:
- Execute a local script after the build is completed (shell-local)
- Create a machine-readable report of what was built (manifest)
- Incorporate within a CI/CD build pipeline to be used for additional steps
- Compute a checksum for the artifact so you verify it later (checksum)
- Import a package to AWS after building in your data center (AWS)
- Convert the artifact into a Vagrant box (Vagrant)
- Create a VMware template from the resulting build (vSphere Template)
Defined in the build block, each post-processor runs after each defined build. The post-processor takes the artifact from a build, uses it, and deletes the artifact after it is done (default behavior).
A post-processor block defines a single post-processor.
The manifest post-processor creates a JSON file with a list of all the artifacts that Packer created during the build. It is invoked each time a build completes, and the file is updated.
The default file name is packer-manifest.json but can be changed using the output option.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline = [
      "echo Updating Packages and Installing nginx",
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx"
    ]
  }

  post-processor "manifest" {
    output = "my-first-manifest.json"
  }
}
The local shell post processor enables you to execute scripts locally after the machine image is built. It is helpful for chaining tasks to your Packer build after it is completed. You can pass in environment variables, customize how the command is executed, and specify the script to be executed.
build {
  sources = [
    "source.amazon-ebs.ubuntu"
  ]

  provisioner "shell" {
    inline = [
      "sudo apt-get update -y"
    ]
  }

  post-processor "shell-local" {
    environment_vars = ["ENVIRONMENT=production"]
    scripts          = ["./scripts/update_docs.sh"]
  }
}
- Takes the final artifact and compresses it into a single archive
- By default, this post-processor compresses files into a single tarball (.tar.gz file)
- However, the following extensions are supported: .zip, .gz, .tar.gz, .lz4 and .tar.lz4
- Very helpful if you're building packages locally - vSphere, Vagrant, etc.
build {
  sources = [
    "source.amazon-ebs.amazonlinux-2"
  ]

  post-processor "compress" {
    output = "{{.BuildName}}-image.zip"
  }
}
- Computes the checksum for the current artifact
- Useful to validate that no changes occurred to the artifact since running the Packer build
- Can be used during the validation phase of a CI/CD pipeline
build {
  sources = [
    "source.amazon-ebs.amazonlinux-2"
  ]

  post-processor "checksum" {
    checksum_types = ["sha1", "sha256"]
    output         = "packer_{{.BuildName}}_{{.ChecksumType}}.checksum"
  }
}
- Packer configuration can be a single file or split across multiple files. Packer will process all files in the current working directory which end in .pkr.hcl and .pkrvars.hcl.
- Sub-folders are not included (non-recursive)
- Files are processed in lexicographical (dictionary) order
- Any files with a different extension are ignored
- Generally, the order in which things are defined doesn't matter
- Parsed configurations are appended to each other, not merged
- Sources with the same name are not merged (this will produce an error)
- Configuration syntax is declarative, so references to other resources do not depend on the order they are defined
Pattern A:
$ ls
main.pkr.hcl
variables.pkrvars.hcl
Pattern B:
$ ls
aws.pkr.hcl
azure.pkr.hcl
gcp.pkr.hcl
vmware.pkr.hcl
variables.pkrvars.hcl
Pattern C:
$ ls
ubuntu.pkr.hcl
windows.pkr.hcl
rhel.pkr.hcl
variables.pkrvars.hcl
Pattern D:
everything.pkr.hcl
# Validate and Build all Items
# in a working directory
$ packer validate .
$ packer build .
# Specify certain cloud target
$ packer build -only "*.amazon.*" .
# Specify certain OS types
$ packer build -only "*.ubuntu.*" .
# Specify individual template
$ packer build aws.pkr.hcl
Plugins for Packer/HCL exist for most major editors; if one does not exist for Packer, the Terraform plugin tends to work best.
In order to display live debug information, you can set the PACKER_LOG environment variable.
export PACKER_LOG=1
$env:PACKER_LOG=1
In order to save debug information, you can set the PACKER_LOG_PATH environment variable to the desired file.
export PACKER_LOG_PATH="packer_log.txt"
$env:PACKER_LOG_PATH="packer_log.txt"
To disable logging change the variable values to defaults.
export PACKER_LOG=0
export PACKER_LOG_PATH=""
$env:PACKER_LOG=0
$env:PACKER_LOG_PATH=""
You can also run packer build with the -debug option to step through the build process. This, however, disables parallel builds. It is useful for remote builds in cloud environments.
Packer also provides the ability to inspect failures during the debug process. The on-error=ask option allows you to inspect failures and try out solutions before restarting the build.
packer build --help | grep 'on-error'
-on-error=[cleanup|abort|ask|run-cleanup-provisioner] If the build fails do: clean up (default), abort, ask, or run-cleanup-provisioner.
The breakpoint provisioner will pause until the user presses enter to resume the build. This is useful for debugging.
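A minimal sketch of a template that could produce a run like the one below, assuming the null builder with the shell-local and breakpoint provisioners (file name and messages mirror the output; details are illustrative):
# packer-breakpoints.pkr.hcl - pause a build between two local shell steps
source "null" "debug" {
  communicator = "none"  # the null builder creates no machine, ideal for testing
}

build {
  sources = ["source.null.debug"]

  provisioner "shell-local" {
    inline = ["echo hi"]
  }

  provisioner "breakpoint" {
    note = "this is a breakpoint"  # shown when the build pauses
  }

  provisioner "shell-local" {
    inline = ["echo hi 2"]
  }
}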
packer build packer-breakpoints.pkr.hcl
null.debug: output will be in this color.
==> null.debug: Running local shell script: /tmp/packer-shell2159196625
null.debug: hi
==> null.debug: Pausing at breakpoint provisioner with note "this is a breakpoint".
==> null.debug: Press enter to continue.
==> null.debug: Running local shell script: /tmp/packer-shell389208221
null.debug: hi 2
Build 'null.debug' finished after 1 second 317 milliseconds.
==> Wait completed after 1 second 317 milliseconds
==> Builds finished. The artifacts of successful builds are:
--> null.debug: Did not export anything. This is the null builder
Packer provides two types of provisioners that work with Ansible: Ansible Remote, which assumes that Ansible is available on the provisioning host, and Ansible Local, which assumes that Ansible is available in the template being built. In either case, the goal is to provision software and configuration through Ansible playbooks.
The benefit of using playbooks during image building is that they can be reused again during instance provisioning.
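A minimal ansible-local sketch (the playbook path is illustrative); unlike the remote provisioner, it requires Ansible to be installed inside the image and runs the playbook there:
provisioner "ansible-local" {
  playbook_file = "./playbooks/playbook.yml"  # copied into the image and run locally
}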
Terraform is a tool that uses declarative configuration files written in HashiCorp Configuration Language (HCL), similar to Packer. It is a great tool for deploying instances from images that were created by Packer.
# Prepare your working directory (verify config, install plugins)
terraform init
# Show changes required for current configuration
# This will ask for AMI ID which can be retrieved
# from Packer manifest file. (e.g. ami-013b85e4903b8d807)
terraform plan
# Create or update infrastructure
terraform apply
# Destroy previously-created infrastructure
terraform destroy
You can also reference an existing image using a data block inside a Terraform template.
data "aws_ami" "packer_image" {
most_recent = true
filter {
name = "tag:Created-by"
values = ["Packer"]
}
filter {
name = "tag:Name"
values = [var.appname]
}
owners = ["self"]
}
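The data source can then feed an instance resource; a minimal sketch (resource name and instance type are illustrative):
resource "aws_instance" "app" {
  ami           = data.aws_ami.packer_image.id  # the AMI built by Packer
  instance_type = "t3.micro"
}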
And then validate and plan the new deployment.
$ terraform validate
Success! The configuration is valid.
# Perform a dry run
$ terraform plan -var 'appname=ClumsyBird'
# Execute the deployment
$ terraform apply -var 'appname=ClumsyBird'
HashiCorp Vault is a secrets management tool. In order to retrieve secrets from Vault, you need to set variables for accessing Vault.
# For demonstration run in dev mode
vault server -dev
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN=s.sYzb7wS4PATgcp5sn9TUT72l
# Validate the connection to vault
vault status
# Validate and build the template
packer validate vault_integration.pkr.hcl && packer build vault_integration.pkr.hcl
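Within the template, secrets can then be read with Packer's vault() function; a sketch assuming a KV v2 secret at an illustrative path:
locals {
  # Uses VAULT_ADDR and VAULT_TOKEN from the environment
  ssh_password = vault("/secret/data/packer", "ssh_password")
}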
HashiCorp Vault supports AWS credentials through its AWS secrets engine. This is useful when we need to create and retrieve temporary credentials dynamically through Vault, therefore not storing long-term credentials on the build server.
# Without this integration a build would fail
packer build examples/vault/vault_aws_engine.pkr.hcl
amazon-ebs.rhel: output will be in this color.
Build 'amazon-ebs.rhel' errored after 6 seconds 339 milliseconds: no valid credential sources for found.
Please see
for more information about providing credentials.
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Once Vault is set up with the AWS secrets engine with the correct role and IAM policy, you can include the following block inside the source block in your Packer template.
vault_aws_engine {
  name = "my-role"
}
You can generate a SHA-512 password hash for use in an answer file:
python -c "import crypt;print(crypt.crypt(input('clear-text-pw: '), crypt.mksalt(crypt.METHOD_SHA512)))"
clear-text-pw: test
$6$BfeENzHTV2I.T6Ec$EQXrqQ/YiZM4lBOlBTZmcJtkdqjOo2Ja.3Y3poxb2pC9APzSNoFvrE4Otqhf9vfcCUKO8Ge7fmFsybxxhu3nO.
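The generated hash can then be embedded in the answer file; for example, a kickstart fragment (a sketch reusing the hash above):
# Set the root password using the pre-hashed value
rootpw --iscrypted $6$BfeENzHTV2I.T6Ec$EQXrqQ/YiZM4lBOlBTZmcJtkdqjOo2Ja.3Y3poxb2pC9APzSNoFvrE4Otqhf9vfcCUKO8Ge7fmFsybxxhu3nO.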
Before running an automated installation using Packer, you can quickly validate that your debian-preseed, cloud-init, or kickstart configuration is valid using the simple HTTP server provided by Python.
# Navigate to the folder where the answer file is located
python3 -m http.server 8000 -d .
Next, update the kernel parameters in the installer; for cloud-init it would be as follows:
linux /casper/vmlinuz autoinstall ds='nocloud-net;s=http://10.0.2.2:8000/' ---
initrd /casper/initrd
boot
In case the answer file was not downloaded (you don't see any GET requests for user-data, meta-data and vendor-data in the Python server log), you can invoke a shell console using Alt + F2. There you can verify the kernel arguments:
grep 'Command line' /var/log/dmesg
Verify the logs from cloud-init:
less /var/log/cloud-init.log
You can verify whether you are able to download the answer file manually:
curl -o user-data http://192.168.56.1:8000/user-data
Some common issues stem from mistyped command line args, the host firewall or access to the internet, and answer file misconfiguration, including line endings.
# Install domain
sudo virt-install \
--name rhel01 \
--vcpus 2 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/rhel01.qcow2,format=qcow2,size=10 \
--location=/var/lib/libvirt/images/rhel-8.5-x86_64-dvd.iso \
--nographics \
--initrd-inject="$HOME/Downloads/rhel85-ks.cfg" \
--extra-args="inst.ks=file:/rhel85-ks.cfg ip=dhcp console=ttyS0,115200n8" \
--os-variant=rhel8.5
# Once completed you can exit from
# virtual console using Ctrl + Shift then ]
# To open the console again use virsh console <domain-name>
# To retrieve domains and its IP address
virsh list
virsh domifaddr <domain>
# Verify the current partition size
df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 40G 3.7G 35G 10% /
# Verify the volume group free (remaining) size
sudo vgdisplay
--- Volume group ---
VG Name ubuntu-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <60.95 GiB
PE Size 4.00 MiB
Total PE 15602
Alloc PE / Size 10361 / 40.47 GiB
Free PE / Size 5241 / 20.47 GiB
VG UUID vxXC56-9R5I-9l30-TGzA-Bnyp-G3e0-W4RsTi
# Extend the partition and resize the filesystem
sudo lvextend -L +10G /dev/mapper/ubuntu--vg-ubuntu--lv -r
Size of logical volume ubuntu-vg/ubuntu-lv changed from 40.47 GiB (10361 extents) to 50.47 GiB (12921 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 6, new_desc_blocks = 7
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 13231104 (4k) blocks long.
# Verify the current partition size
df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 50G 3.7G 44G 8% /
When you try to access a VM provisioned by Vagrant on Windows with a dynamically generated key, you will receive the following error.
vagrant@172.17.134.225: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
In order to fix it you need to update the following environment variable.
set VAGRANT_PREFER_SYSTEM_BIN=0
When running packer with vmware-iso builder for the first time, you may encounter the following error:
...
==> vmware-iso.vm: Building and writing VMX file
==> vmware-iso.vm: Could not determine network mappings from files in path: C:/Program Files (x86)/VMware/VMware Workstation
==> vmware-iso.vm: Deleting output directory...
Build 'vmware-iso.vm' errored after 4 seconds 948 milliseconds: Could not determine network mappings from files in path: C:/Program Files (x86)/VMware/VMware Workstation
==> Wait completed after 4 seconds 948 milliseconds
==> Some builds didn't complete successfully and had errors:
--> vmware-iso.vm: Could not determine network mappings from files in path: C:/Program Files (x86)/VMware/VMware Workstation
...
The solution is to run the Virtual Network Editor as Administrator so the mappings file will be generated.