In short, you need a Network (I use a NAT network, because I like the isolation), a Storage Pool (I use an LVM-backed pool for performance), and DNS (so you don't have to be Rain Man to access your systems).
Before I get to the Hypervisor setup, you might ask how I create these VMs:
Creating a VM using a qcow cloud image and cloud-init
sudo virt-install --name rhel7.5 --vcpus 2 --ram 4096 --disk Downloads/rhel-server-7.5-x86_64.qcow --disk cidata.iso,device=cdrom --disk pool=default,size=15,bus=virtio --network network=default --import --noautoconsole

Creating a VM using an ISO and a Kickstart file
sudo virt-install -n rhel7.5 --vcpus 2 -r 4096 --os-type=linux --disk pool=default,size=10,bus=scsi --disk pool=default,size=15,bus=virtio --network network=default --location=Downloads/rhel-server-7.5-x86_64-dvd.iso --initrd-inject=kickstart.ks -x "ks=file:/kickstart.ks" --noautoconsole

As you can see, with this setup I start from a common install source: a base set of images that you can pull from a provider (RHEL, CentOS, Fedora, etc.). The second thing you will find interesting about these commands is the network I bind the VMs to, and the disks I create for the VMs to use.
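The cidata.iso attached in the first command is a cloud-init NoCloud seed image. A minimal sketch of building one follows; the password and hostname are placeholder values, and genisoimage is just one of several tools (xorriso, mkisofs) that can produce the ISO:

```shell
# user-data: cloud-config that sets a throwaway login (placeholder values)
cat > user-data <<'EOF'
#cloud-config
password: changeme
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# meta-data: instance identity; local-hostname becomes the VM's hostname
cat > meta-data <<'EOF'
instance-id: rhel7.5
local-hostname: rhel7.5
EOF

# Build the seed ISO; the volume label MUST be "cidata" for cloud-init
# to recognize the disk as a NoCloud datasource
if command -v genisoimage >/dev/null 2>&1; then
    genisoimage -output cidata.iso -volid cidata -joliet -rock user-data meta-data
fi
```

With that ISO attached as a cdrom (as in the first virt-install command), cloud-init configures the guest on first boot.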
In both examples, an extra 15GB disk is provided to each VM (usually allocated as /dev/vdb, because these are virtio disks). This disk is created in my default storage pool.
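Once the VM boots, you can confirm the disk layout from inside the guest; exact device names depend on bus ordering, but with the commands above the root disk is typically /dev/vda and the extra pool-backed disk /dev/vdb:

```shell
# Inside the guest: list block devices with their sizes and types;
# the extra 15GB virtio disk should appear alongside the root disk
lsblk -o NAME,SIZE,TYPE
```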
To see what pools you have and/or what resources they provide, you can use the following:
[sudo] virsh pool-list
[sudo] virsh pool-info default

If you need to create a storage pool (an LVM pool, using a blank block device), you can run the following:
[sudo] virsh pool-define-as --name virt-pool --type logical --source-format lvm2 --target /dev/DEVICE

Defining the pool only records its configuration; you will likely also need to build and start it before volumes can be allocated from it:

[sudo] virsh pool-build virt-pool
[sudo] virsh pool-start virt-pool

Should you need to clean up after creating such a pool, the following commands completely remove it:

[sudo] virsh pool-destroy virt-pool
[sudo] virsh pool-undefine virt-pool
[sudo] virsh pool-delete virt-pool
[sudo] vgremove virt-pool

The network that the VMs are attached to is similarly defined:
[sudo] virsh net-create network_config.xml

File Example:

<network>
  <name>minilab</name>
  <forward mode='nat'/>
  <bridge name='virtbr-100' stp='on' delay='0'/>
  <domain name='minlab.lan' localOnly='yes'/>
  <ip address='192.168.100.254' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.1' end='192.168.100.253'/>
    </dhcp>
  </ip>
</network>

As you can see from this, the VMs I put on this network DHCP boot, using an IP from the defined range (.1 to .253).
Should you need to clean up after creating such a network, the following commands completely remove it:
[sudo] virsh net-destroy minilab
[sudo] virsh net-undefine minilab
[sudo] virsh net-delete minilab

However, even with these tools (the network and the storage pool), yet another component is needed to make your lab work as it would in, say, AWS, GCP, or Azure. This missing component is DNS.
To set up DNS on the hypervisor, the simplest thing to do is have NetworkManager use dnsmasq as a caching server, and point it at the dnsmasq instance that libvirt runs for the network above to resolve DNS names for the VMs on that network.
You can enable dnsmasq with NetworkManager in two ways:
[main]
dns=dnsmasq

You can place this in either /etc/NetworkManager/conf.d/localdns.conf or /etc/NetworkManager/NetworkManager.conf, and simply restart NetworkManager to begin using it.
However, you still need to define which DNS servers you want to reference. This is done by placing a forwarder configuration (as stated above) that points to the dnsmasq instance (one per network) started by libvirt.
- Note: This configuration is finicky (as in, make sure you read the man page and don't mess it up, because debugging can be extremely challenging and painful).
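As an illustrative sketch (the file path is an assumption; the domain and address come from the network XML above), the forwarder can be a dnsmasq snippet dropped where NetworkManager's dnsmasq instance picks it up:

```ini
# /etc/NetworkManager/dnsmasq.d/libvirt.conf  (assumed location)
# Forward lookups for the lab domain to libvirt's dnsmasq on the NAT network
server=/minlab.lan/192.168.100.254
# Reverse lookups for the lab subnet go to the same instance
server=/100.168.192.in-addr.arpa/192.168.100.254
```

After another NetworkManager restart, hostnames handed out on the lab network should resolve from the hypervisor.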
While this process gives you fine-grained control over your system and the VMs you provision, it's not particularly fast for creating large lab deployments (more than 2-3 VMs), or helpful if you plan to repeatedly create and delete your labs (as I do).
As a result, automation is almost always needed. To simplify the process above, https://github.com/sferich888/fedora-minilab/ helps by allowing me to define an inventory file (lab) that can be re-used or modified to fit my needs.