How I Set Up a Qemu Virtual Machine on Linux Slackware64 (64-Bit) 15.0



This tutorial demonstrates how I set up a virtual machine using Qemu on Linux Slackware64 15.0.

Note: I have the following SlackBuild packages already compiled and installed: vde2, usbredir, spice-protocol, spice, libiscsi, tpm, qemu, and all of their dependencies.

If you still need to compile and install the software, see the Slackware how-to: Slackware Virtualization Compile and Install How-To

1.) First I modified the /etc/rc.d/rc.modules.local file and added the following lines.

Note: If you just modprobe kvm-intel or kvm-amd, modprobe should automatically load kvm as a dependency; I'm just being thorough here.
  
# Intel Kernel Virtualization Support
if grep -q vmx /proc/cpuinfo; then
	/sbin/modprobe -a kvm kvm-intel
	echo "Intel KVM Support Loaded"
fi

# AMD Kernel Virtualization Support
if grep -q svm /proc/cpuinfo; then
	/sbin/modprobe -a kvm kvm-amd
	echo "AMD KVM Support Loaded"
fi
  
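
After a reboot (or after running those lines by hand), a quick sanity check that the modules loaded and that /dev/kvm exists might look something like this:
  
[root@host] # lsmod | grep kvm
[root@host] # ls -l /dev/kvm
  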
2.) I then wrote this script to bring up a virtual switch (vde2), which I call rc.vswitch and place in the /etc/rc.d/ directory.
  
#!/bin/sh
# This script creates a switch for use by my VMs within the host system, with
# the intention of creating a bridged interface.  I have therefore set this to
# load before rc.inet1 within the rc.M script.  There should be no need to set
# an IP address for tap0, as the virtual machines will appear on the network
# just like any other computer on the local network.

start(){
   echo -n "Starting VDE Switch... Running as Daemon"

   # Load tun module
   modprobe tun || { echo "Error, cannot load 'tun' module. Exiting..." ; exit 1 ; } 
   sleep 1

   # Start tap switch
   # Note: I'm not sure if putting the vmanage in the vswitch directory
   # will work without problems but I'm currently doing it!
   vde_switch -sock /tmp/vswitch -M /tmp/vswitch/vmanage -tap tap0 -daemon
   sleep 1

   # Change the user:group ownership of the switch
   chown -R root:users /tmp/vswitch

   # Change the mod of the directory and sockets
   find /tmp/vswitch -type d -exec chmod 0770 {} \;
   find /tmp/vswitch -type s -exec chmod 0660 {} \;

   echo
}

stop(){
   echo -n "Stopping VDE Switch... "

   # Kill VDE switch
   kill $(pgrep vde_switch)
   sleep 1

   echo
}

case "$1" in
   start)
	   start
	   ;;
   stop)
	   stop
	   ;;
   restart)
	   stop
	   start
	   ;;
   *)
	   echo "Usage: $0 {start|stop|restart}"
	   ;;
esac
  
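
Once the script is in place, make it executable so rc.M will run it (rc.M only runs it if the -x test passes), and give it a test run:
  
[root@host] # chmod 755 /etc/rc.d/rc.vswitch
[root@host] # /etc/rc.d/rc.vswitch start
  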
3.) I modified /etc/rc.d/rc.M to initialize the vswitch !!!! before initializing the networking ( /etc/rc.d/rc.inet1 ) !!!! so that the bridge can pick up the tap0 interface when the network comes up.
  
# Initialize the vswitch a.k.a. tap interface
# with the intention of creating a bridged interface.
# Note: rc.vswitch is executed (not sourced) because it
# calls exit on failure, which would otherwise abort rc.M.
if [ -x /etc/rc.d/rc.vswitch ]; then
  /etc/rc.d/rc.vswitch start
fi

# Initialize the networking hardware,
# after rc.vswitch has been started.
if [ -x /etc/rc.d/rc.inet1 ]; then
  /etc/rc.d/rc.inet1
fi
  
4.) I then modified /etc/rc.d/rc.inet1.conf and configured the bridge as follows:
  
# Example of how to configure a bridge:
# Note the added "BRNICS" variable which contains a space-separated list
# of the physical network interfaces you want to add to the bridge.
IFNAME[0]="br0"
BRNICS[0]="eth0 tap0"

# Replace the IP address with the IP address of your choice
IPADDR[0]="192.168.0.4/24"

# Or use DHCP to obtain an IP address from a DHCP server
# USE_DHCP[0]=""
# DHCP_HOSTNAME[0]=""
  
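
If you prefer the DHCP route, the same stanza would look something like this (USE_DHCP is the stock rc.inet1.conf switch; the bridge itself is unchanged):
  
IFNAME[0]="br0"
BRNICS[0]="eth0 tap0"
IPADDR[0]=""
USE_DHCP[0]="yes"
  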
5.) If you don't want to reboot the machine, you can manually stop the network, load the kvm modules, start rc.vswitch, and then restart the network; see the sketch below. I, however, just reboot my machine.
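
A minimal sketch of the no-reboot route, assuming the files above are already in place:
  
[root@host] # /etc/rc.d/rc.inet1 stop
[root@host] # sh /etc/rc.d/rc.modules.local
[root@host] # /etc/rc.d/rc.vswitch start
[root@host] # /etc/rc.d/rc.inet1 start
  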

When the machine comes back up from the reboot, my ifconfig output looks something like this.

  
[root@host] # ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.4  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 xxxx::xxx:xxxx:xxxx:xxxx  prefixlen 64  scopeid 0x20<link>
        ether XX:XX:XX:XX:XX:XX  txqueuelen 0  (Ethernet)
        RX packets 16537  bytes 6833253 (6.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15292  bytes 1709461 (1.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 xxxx::xxx:xxxx:xxxx:xxxx  prefixlen 64  scopeid 0x20<link>
        ether XX:XX:XX:XX:XX:XX  txqueuelen 1000  (Ethernet)
        RX packets 16423  bytes 7116198 (6.7 MiB)
        RX errors 0  dropped 219  overruns 0  frame 0
        TX packets 15152  bytes 1754955 (1.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 6386  bytes 1314903 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6386  bytes 1314903 (1.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tap0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 xxxx::xxx:xxxx:xxxx:xxxx  prefixlen 64  scopeid 0x20<link>
        ether XX:XX:XX:XX:XX:XX  txqueuelen 500  (Ethernet)
        RX packets 386  bytes 57432 (56.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5739  bytes 819527 (800.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  
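
You can also confirm that eth0 and tap0 actually joined the bridge; brctl comes with bridge-utils, and the iproute2 equivalent works as well:
  
[root@host] # brctl show br0
[root@host] # ip link show master br0
  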
Note: When I launch my virtual machines, they connect to the local network as though they were real computers on it.
Virtual Machine 1 obtains an IP from the DHCP server
Virtual Machine 2 obtains an IP from the DHCP server
Virtual Machine 3 obtains an IP from the DHCP server
Virtual Machine 4 obtains an IP from the DHCP server
etc ....


6.) I then created raw disk images with qemu-img to use as my virtual hard drives.
  
[root@host] # qemu-img create -f raw SlackX64150.img 100G
[root@host] # qemu-img create -f raw SlackX32150.img 100G
  
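
Raw images are the simplest and fastest option; if you would rather have snapshot support and files that grow on demand, qcow2 is the usual alternative (same command, different format). If you go that route, also change driver=raw to driver=qcow2 in the launch scripts below.
  
[root@host] # qemu-img create -f qcow2 SlackX64150.qcow2 100G
  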
7.) I then wrote the following bash scripts to launch my various Qemu Virtual Machines.

Slackware 64-Bit: This qemu launch script runs Slackware 64-bit as a generic 64-bit machine for compiling packages.

  
#!/bin/bash
# Slackware150-X64.sh - by Jerry Nettrouer II - j2 (at) inpito.org
# My Slackware64-15.0 Virtual Machine Launch Script
# slackware64-15.0-install-dvd.iso is the dvd iso image for Slackware64 (64-bit)

CWD=$( pwd )

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=1 -machine pc \
	-m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
	-drive driver=raw,file="$CWD/SlackX64150.img",media=disk,cache=unsafe \
	-drive file="$CWD/slackware64-15.0-install-dvd.iso",index=2,media=cdrom \
	-net nic,model=e1000,macaddr=CE:6A:10:9E:98:23 -net vde,sock=/tmp/vswitch
  
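
Make the script executable and run it from the directory that holds the disk image and the install ISO, since the script uses pwd to locate them:
  
[root@host] # chmod +x Slackware150-X64.sh
[root@host] # ./Slackware150-X64.sh
  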
Slackware 32-Bit: This qemu launch script runs Slackware 32-bit as a generic 32-bit machine for compiling packages.

  
#!/bin/bash
# Slackware150-X32.sh - by Jerry Nettrouer II - j2 (at) inpito.org
# My Slackware-15.0 (32-bit) Virtual Machine Launch Script

CWD=$( pwd )

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=1 -machine pc \
	-m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
	-drive driver=raw,file="$CWD/SlackX32150.img",media=disk,cache=unsafe \
	-drive file="$CWD/slackware32-15.0-install-dvd.iso",index=2,media=cdrom \
	-net nic,model=e1000,macaddr=CE:6A:10:9E:98:24 -net vde,sock=/tmp/vswitch
  
C4C Ubuntu-24.04: This qemu launch script runs the 64-bit Computers 4 Christians live DVD as a virtual machine.

  
#!/bin/bash
# C4C-Ubuntu-24.04.sh - by Jerry Nettrouer II - j2 (at) inpito.org
# My C4C-Ubuntu-24.04 (64-bit) Live DVD Virtual Machine Launch Script

CWD=$( pwd )

qemu-system-x86_64 -enable-kvm -cpu host -smp cores=1 -machine pc \
	-m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
	-drive file="$CWD/C4C-Ubuntu-24.04.iso",index=2,media=cdrom \
	-net nic,model=e1000,macaddr=CE:6A:10:9E:98:30 -net vde,sock=/tmp/vswitch
  
Note: I set up an NFS server on the host system and export a directory that I then mount on the virtual machines, so all my machines have a shared directory space.
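
A minimal sketch of that NFS setup, with hypothetical paths (/srv/share on the host, /mnt/share in the guests) and the host IP from the bridge example above:
  
# On the host: add a line like this to /etc/exports, then reload the exports
#   /srv/share 192.168.0.0/24(rw,sync,no_subtree_check)
[root@host] # exportfs -ra

# On each guest: mount the host's export
[root@guest] # mount -t nfs 192.168.0.4:/srv/share /mnt/share
  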