# mv /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen
# update-grub
The xen-tools package provides the xen-create-image command, which largely automates the task. The only mandatory parameter is --hostname, giving a name to the domU; other options are important, but they can be stored in the /etc/xen-tools/xen-tools.conf configuration file, and their absence from the command line doesn't trigger an error. It is therefore important to either check the contents of this file before creating images, or to pass extra parameters to the xen-create-image invocation. Important parameters include the following:
--memory, to specify the amount of RAM dedicated to the newly created system;
--size and --swap, to define the size of the “virtual disks” available to the domU;
--debootstrap, to cause the new system to be installed with debootstrap; in that case, the --dist option will also most often be used (with a distribution name such as jessie);
--dhcp, to state that the domU's network configuration should be obtained by DHCP, while --ip allows defining a static IP address instead.
The “virtual disks” can be provided in two main ways. The first, with the --dir option, is to create one file on the dom0 for each device the domU should be provided with. For systems using LVM, the alternative is to use the --lvm option, followed by the name of a volume group; xen-create-image will then create a new logical volume inside that group, and this logical volume will be made available to the domU as a hard disk drive.
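For instance, a hypothetical LVM-backed invocation might look like the following sketch (vg0 stands for an existing volume group on the dom0; the name is an assumption, adapt it to your setup):

# xen-create-image --hostname testxen --dhcp --lvm vg0 --size=2G --dist=jessie

The rest of this section uses the file-backed --dir variant instead: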
# xen-create-image --hostname testxen --dhcp --dir /srv/testxen --size=2G --dist=jessie --role=udev

[...]
General Information
--------------------
Hostname       : testxen
Distribution   : jessie
Mirror         : http://ftp.debian.org/debian/
Partitions     : swap            128Mb (swap)
                 /               2G    (ext3)
Image type     : sparse
Memory size    : 128Mb
Kernel path    : /boot/vmlinuz-3.16.0-4-amd64
Initrd path    : /boot/initrd.img-3.16.0-4-amd64
[...]
Logfile produced at:
         /var/log/xen-tools/testxen.log

Installation Summary
---------------------
Hostname        : testxen
Distribution    : jessie
MAC Address     : 00:16:3E:8E:67:5C
IP-Address(es)  : dynamic
RSA Fingerprint : 0a:6e:71:98:95:46:64:ec:80:37:63:18:73:04:dd:2b
Root Password   : adaX2jyRHNuWm8BDJS7PcEJ
Xen uses several types of network interfaces, with names such as vif*, veth*, peth* and xenbr0. The Xen hypervisor arranges them in whichever layout has been defined, under the control of the user-space tools. Since the NAT and routing models are only adapted to particular cases, we will only address the bridging model.
In the standard configuration, the xend daemon integrates virtual network interfaces into any pre-existing network bridge (xenbr0 takes precedence if several such bridges exist). We must therefore set up a bridge in /etc/network/interfaces (this requires installing the bridge-utils package, which is why the xen-utils-4.4 package recommends it) to replace the existing eth0 entry:
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_maxwait 0
The domU can then be started and managed with the xl command. This command allows various manipulations of the domains, including listing them and starting/stopping them.
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   463     1     r-----       9.8
# xl create /etc/xen/testxen.cfg
Parsing config from /etc/xen/testxen.cfg
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   366     1     r-----      11.4
testxen                                      1   128     1     -b----       1.1
Note that the testxen domU uses real memory, taken from the RAM that would otherwise be available to the dom0, not simulated memory. Care should therefore be taken, when building a server meant to host Xen instances, to provision the physical RAM accordingly.
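If needed, the amount of memory allocated to a running domU can be adjusted with xl mem-set; this is a sketch, with an explicit megabyte suffix, and the value must stay within the maximum configured for the domain:

# xl mem-set testxen 100m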
The testxen domU can be accessed through its hvc0 console, with the xl console command:
# xl console testxen
[...]
Debian GNU/Linux 8 testxen hvc0

testxen login: 
A domU can be paused then resumed with the xl pause and xl unpause commands. Note that even though a paused domU does not use any processor power, its allocated memory is still in use. It may be interesting to consider the xl save and xl restore commands: saving a domU frees the resources that were previously used by this domU, including RAM. When restored (or unpaused, for that matter), a domU doesn't even notice anything beyond the passage of time. If a domU was running when the dom0 is shut down, the packaged scripts automatically save the domU, and restore it on the next boot. This will of course involve the standard inconvenience incurred when hibernating a laptop computer, for instance; in particular, if the domU is suspended for too long, network connections may expire. Note also that Xen is so far incompatible with a large part of ACPI power management, which precludes suspending the host (dom0) system.
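As an illustration of xl save and xl restore, saving and later restoring the testxen domU could look like this sketch (the state file path is an arbitrary choice):

# xl save testxen /var/tmp/testxen.state
# xl restore /var/tmp/testxen.state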
Halting a domU can be done either from within the domU (with the shutdown command) or from the dom0, with xl shutdown or xl reboot.
LXC isolates a group of processes started from their own init process, and the resulting set looks very much like a virtual machine. The official name for such a setup is a “container” (hence the LXC moniker: LinuX Containers), but a rather important difference with “real” virtual machines such as those provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include excellent performance due to the total lack of overhead, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility of running a different kernel in a container (whether a different Linux version or a different operating system altogether).
Control groups require a virtual filesystem to be mounted on /sys/fs/cgroup. Since Debian 8 switched to systemd, which also relies on control groups, this is now done automatically at boot time without further configuration.
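To verify that this filesystem is indeed mounted, a simple check is possible (shown for illustration; it is not required on a standard Debian 8 system):

# grep cgroup /proc/mounts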
A simple network setup for containers involves editing /etc/network/interfaces, moving the configuration for the physical interface (for instance eth0) to a bridge interface (usually br0), and configuring the link between them. For instance, if the network interface configuration file initially contains entries such as the following:
auto eth0
iface eth0 inet dhcp
Then they should be disabled and replaced with:

#auto eth0
#iface eth0 inet dhcp

auto br0
iface br0 inet dhcp
    bridge-ports eth0
With this configuration, the bridge carries the traffic for the physical eth0 interface as well as for the interfaces defined for the containers.
Alternatively, when no physical interface can be dedicated to the bridge, the containers can sit on an isolated network reached through a virtual tap0 interface (managed here with vde2); the /etc/network/interfaces file then becomes:
# Interface eth0 is unchanged
auto eth0
iface eth0 inet dhcp

# Virtual interface
auto tap0
iface tap0 inet manual
    vde2-switch -t tap0

# Bridge for containers
auto br0
iface br0 inet static
    bridge-ports tap0
    address 10.0.0.1
    netmask 255.255.255.0
The containers' network can then be set up either statically, or dynamically with a DHCP server running on the host; such a DHCP server will need to be configured to answer queries on the br0 interface.
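As an illustration only (assuming the isc-dhcp-server package, which the text does not mandate), such a server could be restricted to br0 and serve the 10.0.0.0/24 subnet configured above:

# In /etc/default/isc-dhcp-server:
INTERFACES="br0"

# In /etc/dhcp/dhcpd.conf:
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    option routers 10.0.0.1;
}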
root@mirwiz:~# lxc-create -n testlxc -t debian
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-jessie-amd64 ... 
Downloading debian minimal ...
I: Retrieving Release 
I: Retrieving Release.gpg 
[...]
Download complete.
Copying rootfs to /var/lib/lxc/testlxc/rootfs...
[...]
Root password is 'sSiKhMzI', please change !
Note that the root filesystem is initially created in /var/cache/lxc, then moved to its destination directory. This allows creating identical containers much more quickly, since only copying is then required.
Note that the debian template creation script accepts an --arch option to specify the architecture of the system to be installed and a --release option if you want to install something other than the current stable release of Debian. You can also set the MIRROR environment variable to point to a local Debian mirror.
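A hypothetical invocation combining these knobs might look like the following sketch (template options come after the -- separator; the mirror URL is an assumption):

root@mirwiz:~# MIRROR=http://mirror.example.org/debian \
    lxc-create -n testlxc -t debian -- --arch amd64 --release jessie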
Since we want the container to be on the network, we edit its configuration file (/var/lib/lxc/testlxc/config) and add a few lxc.network.* entries:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:20
These entries mean, respectively, that a virtual interface will be created in the container; that it will automatically be brought up when the container is started; that it will automatically be connected to the br0 bridge on the host; and that its MAC address will be as specified. Should this last entry be missing or disabled, a random MAC address will be generated.
Another useful entry in that file is the setting of the hostname:

lxc.utsname = testlxc
root@mirwiz:~# lxc-start --daemon --name=testlxc
root@mirwiz:~# lxc-console -n testlxc
Debian GNU/Linux 8 testlxc tty1

testlxc login: root
Password: 
Linux testlxc 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@testlxc:~# ps auxwf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2  28164  4432 ?        Ss   17:33   0:00 /sbin/init
root        20  0.0  0.1  32960  3160 ?        Ss   17:33   0:00 /lib/systemd/systemd-journald
root        82  0.0  0.3  55164  5456 ?        Ss   17:34   0:00 /usr/sbin/sshd -D
root        87  0.0  0.1  12656  1924 tty2     Ss+  17:34   0:00 /sbin/agetty --noclear tty2 linux
root        88  0.0  0.1  12656  1764 tty3     Ss+  17:34   0:00 /sbin/agetty --noclear tty3 linux
root        89  0.0  0.1  12656  1908 tty4     Ss+  17:34   0:00 /sbin/agetty --noclear tty4 linux
root        90  0.0  0.1  63300  2944 tty1     Ss   17:34   0:00 /bin/login --
root       117  0.0  0.2  21828  3668 tty1     S    17:35   0:00  \_ -bash
root       268  0.0  0.1  19088  2572 tty1     R+   17:39   0:00      \_ ps auxfw
root        91  0.0  0.1  14228  2356 console  Ss+  17:34   0:00 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102
root       197  0.0  0.4  25384  7640 ?        Ss   17:38   0:00 dhclient -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.e
root       266  0.0  0.1  12656  1840 ?        Ss   17:39   0:00 /sbin/agetty --noclear tty5 linux
root       267  0.0  0.1  12656  1928 ?        Ss   17:39   0:00 /sbin/agetty --noclear tty6 linux
root@testlxc:~# 
And there we are, in the container: our access to the processes is restricted to those started from the container itself, and our access to the filesystem is similarly restricted to the dedicated subset of the full filesystem (/var/lib/lxc/testlxc/rootfs). We can exit the console with Control+a q.
Note that we ran the container as a background process, thanks to the --daemon option of lxc-start. We can interrupt the container with a command such as lxc-stop --name=testlxc.
The lxc package contains an initialization script that can automatically start one or several containers when the host boots (it relies on lxc-autostart, which starts containers whose lxc.start.auto option is set to 1). Finer-grained control of the startup order is possible with lxc.start.order and lxc.group: by default, the initialization script first starts containers which are part of the onboot group and then the containers which are not part of any group. In both cases, the order within a group is defined by the lxc.start.order option.
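For illustration, the relevant entries in a container's configuration file might look like this sketch, based on the options just described:

lxc.start.auto = 1
lxc.group = onboot
lxc.start.order = 10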
Since KVM's user-space tools come from QEMU, don't be surprised by the qemu-* commands: it is still about KVM.
KVM requires hardware virtualization support; you can check that the processor provides it by looking for the vmx (Intel) or svm (AMD) CPU flags in /proc/cpuinfo.
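A quick check (the command is a common idiom, not specific to this text; an empty result means the CPU lacks the required support):

$ grep -E '(vmx|svm)' /proc/cpuinfo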
virt-manager is a graphical interface that uses libvirt to create and manage virtual machines.
We first install the required packages, with apt-get install qemu-kvm libvirt-bin virtinst virt-manager virt-viewer. libvirt-bin provides the libvirtd daemon, which allows (potentially remote) management of the virtual machines running on the host, and starts the required VMs when the host boots. In addition, this package provides the virsh command-line tool, which allows controlling the libvirtd-managed machines.
The virtinst package provides virt-install, which allows creating virtual machines from the command line. Finally, virt-viewer allows accessing a VM's graphical console.
As in the Xen and LXC cases, we assume that the network has already been configured with an eth0 physical interface and a br0 bridge, and that the former is connected to the latter.
Disk images for the virtual machines need a storage location; for simple setups, keeping the default directory (/var/lib/libvirt/images/) is fine. Here we create a dedicated storage pool in /srv/kvm instead:
root@mirwiz:~# mkdir /srv/kvm
root@mirwiz:~# virsh pool-create-as srv-kvm dir --target /srv/kvm
Pool srv-kvm created
We can now start the installation of a virtual machine, using virt-install's most important options. This command registers the virtual machine and its parameters in libvirtd, then starts it so that its installation can proceed.
# virt-install --connect qemu:///system \
               --virt-type kvm \
               --name testkvm \
               --ram 1024 \
               --disk /srv/kvm/testkvm.qcow,format=qcow2,size=10 \
               --cdrom /srv/isos/debian-8.1.0-amd64-netinst.iso \
               --network bridge=br0 \
               --vnc \
               --os-type linux \
               --os-variant debianwheezy

Starting install...
Allocating 'testkvm.qcow'             |  10 GB     00:00
Creating domain...                    |    0 B     00:00
Guest installation complete... restarting guest.
The --connect option specifies the “hypervisor” to use. Its form is that of a URL containing a virtualization system (xen://, qemu://, lxc://, openvz://, vbox://, and so on) and the machine that should host the VM (this can be left empty in the case of the local host). In addition to that, and in the QEMU/KVM case, each user can manage virtual machines working with restricted permissions, and the URL path allows differentiating “system” machines (/system) from others (/session).

Since KVM is managed the same way as QEMU, the --virt-type kvm option allows specifying the use of KVM even though the URL looks like QEMU.

The --name option defines a (unique) name for the virtual machine.

The --ram option allows specifying the amount of RAM (in MB) to allocate for the virtual machine.

The --disk option specifies the location of the image file that is to represent our virtual machine's hard disk; that file is created, unless already present, with a size (in GB) specified by the size parameter. The format parameter allows choosing among several ways of storing the image file. The default format (raw) is a single file exactly matching the disk's size and contents. We picked a more advanced format here, that is specific to QEMU and allows starting with a small file that only grows when the virtual machine starts actually using space.

The --cdrom option is used to indicate where to find the optical disk to use for installation. The path can be either a local path for an ISO file, a URL where the file can be obtained, or the device file of a physical CD-ROM drive (e.g. /dev/cdrom).

The --network option specifies how the virtual network card integrates in the host's network configuration. The default behavior (which we explicitly forced in our example) is to integrate it into any pre-existing network bridge. If no such bridge exists, the virtual machine will only reach the physical network through NAT, so it gets an address in a private subnet range (192.168.122.0/24).

The --vnc option states that the graphical console should be made available using VNC. The default behavior for the associated VNC server is to only listen on the local interface; if the VNC client is to be run on a different host, establishing the connection will require setting up an SSH tunnel (see Section 9.2.1.3, “Creating Encrypted Tunnels with Port Forwarding”). Alternatively, the --vnclisten=0.0.0.0 option can be used so that the VNC server is accessible from all interfaces; note that if you do that, you really should design your firewall accordingly.

The --os-type and --os-variant options allow optimizing a few parameters of the virtual machine, based on some of the known features of the operating system mentioned there.
virt-viewer can be run from any graphical environment to open the graphical console (note that the root password of the remote host is asked twice because the operation requires two SSH connections):
$ virt-viewer --connect qemu+ssh://root@server/system testkvm
root@server's password: 
root@server's password: 
We can now ask libvirtd for the list of the virtual machines it manages:
# virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------
 -     testkvm                        shut off
Let's start our test virtual machine:

# virsh -c qemu:///system start testkvm
Domain testkvm started
We can then find out which VNC display to connect to for the graphical console (the returned display can be given as a parameter to a VNC client such as vncviewer):
# virsh -c qemu:///system vncdisplay testkvm
:0
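With the display number returned above, a connection from the local host might then look like this (assuming the vncviewer client from one of the VNC viewer packages):

$ vncviewer localhost:0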
Other available virsh subcommands include:
reboot, to restart a virtual machine;
shutdown, to trigger a clean shutdown;
destroy, to stop it brutally;
suspend, to pause it;
resume, to unpause it;
autostart, to enable (or disable, with the --disable option) starting the virtual machine automatically when the host starts;
undefine, to remove all traces of the virtual machine from libvirtd.
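All these subcommands take a virtual machine identifier as a parameter, for instance:

# virsh -c qemu:///system shutdown testkvm
# virsh -c qemu:///system autostart testkvm
# virsh -c qemu:///system undefine testkvm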
If the virtual machine is meant to run a Debian system, it can be installed with debootstrap, as described above. But if the virtual machine is to be installed with an RPM-based system (such as Fedora, CentOS or Scientific Linux), the setup will need to be done using the yum utility (available in the package of the same name).
The procedure requires using rpm to extract an initial set of files, including notably yum configuration files, and then calling yum to extract the remaining set of packages. But since we call yum from outside the chroot, we need to make some temporary changes. In the sample below, the target chroot is /srv/centos.
# rootdir="/srv/centos"
# mkdir -p "$rootdir" /etc/rpm
# echo "%_dbpath /var/lib/rpm" > /etc/rpm/macros.dbpath
# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
# rpm --nodeps --root "$rootdir" -i centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
warning: centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
# sed -i -e "s,gpgkey=file:///etc/,gpgkey=file://${rootdir}/etc/,g" $rootdir/etc/yum.repos.d/*.repo
# yum --assumeyes --installroot $rootdir groupinstall core
[...]
# sed -i -e "s,gpgkey=file://${rootdir}/etc/,gpgkey=file:///etc/,g" $rootdir/etc/yum.repos.d/*.repo
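The temporary RPM database location configured at the beginning can then be reverted (a suggested cleanup step, not part of the original sequence):

# rm /etc/rpm/macros.dbpath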