Hardware (why the Qotom device?)
For quite some time I’ve been looking for a power-efficient home server that could run OPNsense together with some other services. These are the options I looked at:
- Netgate and the Applianceshop offer nice hardware for running OPNsense (or pfSense)
- The APU2 boards seem great for small budgets.
- Build the device myself; there are many forums where hardware configurations are tested and shared.
Although I really liked the idea of building the device myself, I ended up buying a Qotom mini PC because it ticks all my requirements:
- no noise (there’s no fan, the case doubles as a heatsink)
- multiple network ports
- power efficient
- comes in powerful configurations
- (relatively) cheap
This is the machine I bought. It comes with 4 network ports, but in this guide only 2 will be used:
Port 1: WAN with MAC 1A:1A:1A:1A:1A:1A
Port 4: LAN with MAC 4D:4D:4D:4D:4D:4D
Port 1 will be connected directly to the OPNsense virtual machine using PCI passthrough.
Port 4 will be connected to the default bridge created by Proxmox VE.
Our internal network will be 192.168.1.0/24
OPNsense will be configured on 192.168.1.1
Proxmox will be configured on 192.168.1.10
The MAC and IP addresses will be different in your situation; make sure to note down your MAC addresses.
Install Proxmox VE
Let’s start by installing Proxmox VE:
- Create a bootable Proxmox USB key
- Boot from the USB key
- Select “Install Proxmox VE”
- Select the disk to wipe and install Proxmox on.
- Enter the “Country” field.
- Validate Time zone and Keyboard Layout
- Enter Password and E-mail
- Now, be sure to select the correct management interface
- enp4s0 – 4D:4D:4D:4D:4D:4D
- Enter Proxmox hostname
- Enter IP information
- IP Address: 192.168.1.10
- Netmask: 255.255.255.0
- Gateway: 192.168.1.1
- DNS Server: 192.168.1.1
- Wait for the installation to complete
- Reboot machine
- From the management machine open: https://192.168.1.10:8006/
The management machine needs to be in the same subnet to reach the Proxmox VE web console; you may need to set a static IP address on it.
- Log on as root with the password entered during setup.
Add the OPNsense image to Proxmox
The Proxmox management console will show 2 types of storage:
- “local” storage (this is file storage and can be used to store ISO files)
- “local-lvm” storage (this is block storage and can be used as disk space for virtual machines)
Let’s upload the OPNsense ISO:
- Open storage “local”
- Select the “Content” tab, click “Upload”
- Select “ISO image”
- Browse to the file and click “Upload”
Create the OPNsense virtual machine
- Click on Create VM (top of the screen)
- Enter a name (something like “OPNsense” is fine)
- Click “Next”
- Next to ISO image, select the “OPNsense…” ISO
- Under Guest OS, choose “Other” (OPNsense is based on FreeBSD)
- Click Next
- Next to Bus/Device select “SCSI”
A controller of type “VirtIO SCSI” is the fastest according to the Proxmox VE Admin Guide:
“A SCSI controller of type VirtIO SCSI is the recommended setting if you aim for performance and is automatically selected for newly created Linux VMs since Proxmox VE 4.3. Linux distributions have support for this controller since 2012, and FreeBSD since 2014.”
- Set Disk size (GiB) to 10
- Click Next
- Set CPU cores to “2”
- Change CPU type to “host”
Proxmox VE Admin guide states: “In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance.”
- Click Next
- Set memory to 2048
- Click Next
- Under network, select bridge “vmbr0”
- After model select “VirtIO (paravirtualized)”
Proxmox VE Admin guide states: “the VirtIO paravirtualized NIC should be used if you aim for maximum performance. Like all VirtIO devices, the guest OS should have the proper driver installed.”
- Click Next
- Click Finish
You’ve now created the OPNsense virtual machine. Don’t start it yet; first we need to configure the VM to also use network adapter 1.
Configure PCI passthrough for network adapter 1
Now we’re going to implement PCI passthrough for network adapter 1.
First, let’s list the network interfaces on the Proxmox VE host by opening a terminal session and running “ip addr”.
You should now be able to see the devices like enp1s0, enp2s0, enp3s0, enp4s0.
In my case, “enp1s0” is network adapter 1. The next step is to determine the PCI address of the card by running “lspci”.
On my machine the network adapter is located on address “01:00.0”
01:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
Make a note of your specific PCI address.
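As a side note, the PCI address is simply the first whitespace-separated field of the lspci line. A minimal shell sketch of extracting it, using the sample line from my machine (yours will differ):

```shell
# Sample lspci line for the NIC to pass through (taken from my machine)
line='01:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)'

# The PCI address is everything up to the first space
addr="${line%% *}"
echo "$addr"   # prints 01:00.0
```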
Before PCI passthrough is available in Proxmox VE, several settings need to be configured.
Note that the following steps only apply to machines with an Intel CPU!
The configuration for AMD CPUs is explained in the Proxmox PCI passthrough article.
- Edit “/etc/default/grub”
- Run “update-grub”
- Edit “/etc/modules”
- Add the lines:
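For reference, on an Intel CPU the edits described in the Proxmox PCI passthrough article typically look like this (treat it as a sketch and check the article for your Proxmox VE version, as the module list has changed over releases):

```
# /etc/default/grub – enable the IOMMU on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules – load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

After editing the GRUB file, running “update-grub” regenerates the boot configuration so the new kernel parameter takes effect on the next reboot.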
Now we need to open the configuration file of the virtual machine and add the PCI address:
- Edit “/etc/pve/qemu-server/[VMID].conf”
If this is your first VM, the file is 100.conf
- Add “hostpci0: 01:00.0” to the end of the file
- Reboot Proxmox VE
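After adding the line, the relevant part of the VM configuration file might look roughly like this (the VM ID, disk name and MAC address below are illustrative and will differ on your machine):

```
# /etc/pve/qemu-server/100.conf (excerpt)
cores: 2
memory: 2048
ostype: other
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-0,size=10G
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
hostpci0: 01:00.0
```

Alternatively, the same line can be added from the shell with “qm set 100 -hostpci0 01:00.0”.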
Let’s install OPNsense:
- Start the OPNsense virtual
- During boot enter the “manually configure network adapters” mode
- The WAN adapter can be identified by the MAC address: 1A:1A:1A:1A:1A:1A, it will be identified as igb0.
- The LAN adapter will be identified as vtnet0
- Once OPNsense has started, you can run the installer and you’re done 🙂 !
Extra: validate Network Adapter 1 is not available to Proxmox VE
Once the virtual machine with PCI passthrough has been started, the network device is no longer available to Proxmox VE. Running “ip addr” will now show only devices enp2s0, enp3s0 and enp4s0.
Alternatively, you can take a look at the hardware in the VM by running:
- qm monitor [vmid] (e.g. “qm monitor 100”)
- info pci
You should be presented with 2 “Ethernet controller” devices.
- Press q to quit.