1.) First I modified the /etc/rc.d/rc.modules.local file and added the following lines.
Note: If you just modprobe kvm-intel or kvm-amd, modprobe should automatically load kvm as a dependency; I'm just being thorough here.
# Intel Kernel Virtualization Support
VMX=$( grep vmx /proc/cpuinfo )
if [ "$VMX" != "" ]; then
  /sbin/modprobe -a kvm kvm-intel
  echo "Intel KVM Support Loaded"
fi

# AMD Kernel Virtualization Support
SVM=$( grep svm /proc/cpuinfo )
if [ "$SVM" != "" ]; then
  /sbin/modprobe -a kvm kvm-amd
  echo "AMD KVM Support Loaded"
fi
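After a reboot (or after running the snippet by hand) you can confirm the modules actually loaded and that the KVM device node exists; this is just a sanity check I'd suggest, not part of the original file:

lsmod | grep kvm
ls -l /dev/kvm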
2.) I then wrote this script to bring up a virtual switch (vde2), which I call rc.vswitch and place in the /etc/rc.d/ directory.
#!/bin/sh
# This script creates a switch for use by my VMs within a host system, with the
# intention of creating a bridged interface. I have therefore set this to load
# before rc.inet1 within the rc.M script. There should be no need to set an IP
# address for tap0, as the virtual machines will appear on the network just
# like any other computer on the local network.

start() {
  echo -n "Starting VDE Switch... Running as Daemon"
  # Load tun module
  modprobe tun || { echo "Error, cannot load 'tun' module. Exiting..." ; exit 1 ; }
  sleep 1
  # Start tap switch
  # Note: I'm not sure if putting the vmanage socket in the vswitch directory
  # will work without problems, but I'm currently doing it!
  vde_switch -sock /tmp/vswitch -M /tmp/vswitch/vmanage -tap tap0 -daemon
  sleep 1
  # Change the user:group ownership of the switch
  chown -R root:users /tmp/vswitch
  # Change the mode of the directory and sockets
  find /tmp/vswitch -type d -exec chmod 0770 {} \;
  find /tmp/vswitch -type s -exec chmod 0660 {} \;
  echo
}

stop() {
  echo -n "Stopping VDE Switch... "
  # Kill VDE switch
  kill $(pgrep vde_switch)
  sleep 1
  echo
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac
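Remember to make the script executable; you can also give it a quick manual test before touching rc.M (these check commands are just a suggestion, not part of the original steps):

chmod 755 /etc/rc.d/rc.vswitch
/etc/rc.d/rc.vswitch start
ls -l /tmp/vswitch        # the switch control and management sockets should be here
/sbin/ip link show tap0   # tap0 should now exist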
3.) I modified /etc/rc.d/rc.M to initialize the vswitch -> !!!! before initializing the networking ( /etc/rc.d/rc.inet1 ) !!!! <- so that the network can take advantage of the vswitch when it comes up.
# Initialize the vswitch a.k.a. tap interface
# With the intention of creating a bridged interface
if [ -x /etc/rc.d/rc.vswitch ]; then
  . /etc/rc.d/rc.vswitch start
fi

# Initialize the networking hardware.
# After initializing the rc.vswitch
if [ -x /etc/rc.d/rc.inet1 ]; then
  /etc/rc.d/rc.inet1
fi
4.) I then modified /etc/rc.d/rc.inet1.conf and set up the bridge as in the example below:
# Example of how to configure a bridge:
# Note the added "BRNICS" variable which contains a space-separated list
# of the physical network interfaces you want to add to the bridge.
IFNAME[0]="br0"
BRNICS[0]="eth0 tap0"
# Replace the IP address with the IP address of your choice
IPADDR[0]="192.168.0.4/24"
# Or Setup the DHCP to obtain an IP address from DHCP server
# USE_DHCP[0]=""
# DHCP_HOSTNAME[0]=""
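With this in place rc.inet1 creates the bridge at boot. If you want to poke at it by hand, the rough manual equivalent of what the rc script ends up doing with the config above (only a debugging aid and a sketch, not a copy of rc.inet1) is:

ip link add name br0 type bridge
ip link set eth0 master br0
ip link set tap0 master br0
ip link set eth0 up
ip link set tap0 up
ip addr add 192.168.0.4/24 dev br0
ip link set br0 up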
5.) If you don't want to reboot the machine, you can manually stop the network, load the KVM modules, start rc.vswitch, and then restart the network, for example:
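(This is only an outline of that manual sequence; use the modprobe line that matches your CPU.)

/etc/rc.d/rc.inet1 stop
/sbin/modprobe -a kvm kvm-intel    # or: /sbin/modprobe -a kvm kvm-amd
/etc/rc.d/rc.vswitch start
/etc/rc.d/rc.inet1 start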
I, however, just reboot my machine.
When the machine comes back up from the reboot, my ifconfig output looks something like this.
Note: When I launch each of my virtual machines they connect to my local network as though they are real computers on my local network.
Virtual Machine 1 obtains IP from DHCP server
Virtual Machine 2 obtains IP from DHCP server
Virtual Machine 3 obtains IP from DHCP server
Virtual Machine 4 obtains IP from DHCP server
etc. ...
6.) I then created raw images with qemu-img to use as my virtual hard drives.
[root@host] # qemu-img create -f raw SlackX64150.img 100G
[root@host] # qemu-img create -f raw SlackX32150.img 100G
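The raw files created this way are sparse, so they don't immediately eat 100G on the host; qemu-img can show the virtual size versus the space actually used (just a sanity check, not part of the original steps):

[root@host] # qemu-img info SlackX64150.img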
7.) I then wrote the following bash scripts to launch my various QEMU virtual machines.
Slackware 64-Bit: This qemu launch script runs Slackware 64-bit as a generic 64-bit machine for compiling packages.
#!/bin/bash
# Slackware150-X64.sh - by Jerry Nettrouer II - j2 (at) inpito.org
# My Slackware64-15.0 Virtual Machine Launch Script
# slackware64-15.0-install-dvd.iso is the dvd iso image for Slackware64 (64-bit)
CWD=$( pwd )
qemu-system-x86_64 -enable-kvm -cpu host -smp cores=1 -machine pc \
-m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
-drive driver=raw,file=$CWD/SlackX64150.img,media=disk,cache=unsafe \
-drive file=$CWD/slackware64-15.0-install-dvd.iso,index=2,media=cdrom \
-net nic,model=e1000,macaddr=CE:6A:10:9E:98:23 -net vde,sock=/tmp/vswitch
Slackware 32-Bit: This qemu launch script runs Slackware 32-bit as a generic 32-bit machine for compiling packages; a sketch of it follows.
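The 32-bit script isn't reproduced here, but assuming it simply mirrors the 64-bit one (SlackX32150.img comes from step 6; the ISO file name and MAC address below are placeholders you would adjust), it would look roughly like this:

#!/bin/bash
# Sketch of a 32-bit launch script, modeled on the 64-bit script above.
# SlackX32150.img is the raw image created in step 6; the ISO file name and
# MAC address are placeholders - adjust them to match your own files.
CWD=$( pwd )
qemu-system-i386 -enable-kvm -cpu host -smp cores=1 -machine pc \
 -m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
 -drive driver=raw,file=$CWD/SlackX32150.img,media=disk,cache=unsafe \
 -drive file=$CWD/slackware-15.0-install-dvd.iso,index=2,media=cdrom \
 -net nic,model=e1000,macaddr=CE:6A:10:9E:98:24 -net vde,sock=/tmp/vswitch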
C4C Ubuntu-24.04: This qemu launch script runs the 64-bit Computers 4 Christians live DVD as a virtual machine.
#!/bin/bash
# C4C-Ubuntu-24.04.sh - by Jerry Nettrouer II - j2 (at) inpito.org
# My C4C-Ubuntu-24.04 (64-bit) Live DVD Virtual Machine Launch Script
CWD=$( pwd )
qemu-system-x86_64 -enable-kvm -cpu host -smp cores=1 -machine pc \
-m 4096 -vga std -display sdl -usb -usbdevice tablet -device ES1370 -boot menu=on \
-drive file=$CWD/C4C-Ubuntu-24.04.iso,index=2,media=cdrom \
-net nic,model=e1000,macaddr=CE:6A:10:9E:98:30 -net vde,sock=/tmp/vswitch
Note: I set up an NFS server on the host system and export a directory that I then mount on the virtual machines, so all my machines have a shared directory space.
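The NFS details aren't shown above; a minimal sketch, assuming a share at /srv/share on the host (a hypothetical path) and the 192.168.0.0/24 network from step 4, might look like this:

# On the host, add the export to /etc/exports, for example:
#   /srv/share 192.168.0.0/24(rw,sync,no_subtree_check)
# then make sure the NFS daemon is running and re-export:
chmod 755 /etc/rc.d/rc.nfsd
/etc/rc.d/rc.nfsd start
exportfs -ra

# On each virtual machine, mount the share (the host is 192.168.0.4 from step 4):
mount -t nfs 192.168.0.4:/srv/share /mnt/share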