New project in the works. I want to build a computer that runs Sophos UTM and acts as my firewall/internal gateway. Sophos would give me greater control over QoS, device management, routing, and security.
The reason for this build is the constant resetting of the ISP-provided Hitron combo unit. Whenever the ISP pushes out new firmware, it erases all configuration but retains the WiFi settings. Since I have many port forwarding rules applied so I can access various services externally, it takes time to reconfigure them all. By building a dedicated box for routing/firewall duties, I wouldn't need to re-add the rules every time the ISP pushes new firmware to the combo unit. Instead, I would configure the combo unit in bridged mode: the modem handles the DOCSIS WAN side, while my firewall handles routing.
Now, I had a few ideas in mind as to how to implement this. The basic hardware requirement for this project is a functional computer with an add-in PCIe NIC. The first option was to repurpose a Q6600 machine. There are a few reasons I scrapped that idea. The case it's in is a Thermaltake full tower; I didn't want to use it since it's huge and doesn't have a power supply. I would have liked to use a mATX case, but since the motherboard is standard ATX, it wouldn't fit. Furthermore, since the Q6600 is built on 65nm lithography, it would consume a lot of power and be inefficient overall.
The second option is to purchase a refurbished small form factor machine from Dell or HP. The issue is that these only fit half-height PCIe expansion cards, which reduces the selection of NICs I would be able to use. That's fine; most Intel I340 cards come with low-profile brackets. Problem solved.
The small form factor would be really nice to have since it doesn't take up a lot of space, but because these machines are manufactured by Dell or HP, they use proprietary power supplies – one of the many practices I'm not really fond of. Where's the upgradeability path? Where's the ability to expand the number of hard drives? Small form factor is starting to turn me away.
Another issue, related to the form factor problem, is the other use case I have for this machine. These machines come with decent Haswell processors, so they are still very capable and would be underused if they only ran Sophos UTM. Virtualization, perhaps? I could install Proxmox and run multiple virtual machines, with Sophos being the main one. But it becomes a spiraling effect: using it as a hypervisor environment means I would need more RAM. If I'm going to be virtualizing, more cores means better performance; an i7, perhaps? A stronger CPU and more RAM would mean greater cost for the project; didn't I just want a dedicated router? I do want it to be upgradeable. Small form factor is compact, and I'd like it if it didn't mean I can't upgrade later down the road. If I were able to move the motherboard, CPU, and RAM to the mATX case I already have, along with being able to use a standard power supply, it would change things: I could increase the drive count and have a beefier virtual machine host.
So what are my options? Realistically, I would be spending ~70-80 CAD on the quad gigabit NIC. There is an eBay listing for a Dell 9020, and there are options to add a 24-pin to 8-pin power supply adapter. This solves the issue of not being able to use a standard power supply, and with it, the storage limitation. If this works, I would be able to add another large storage pool and store virtual machine files for the environment locally, in addition to using iSCSI storage from my FreeNAS box.
In the meantime, I was able to play around with Proxmox as a virtual machine on my FreeNAS box. It ran, since it uses a Linux kernel, but there seems to be an issue running Windows-based VMs in the FreeNAS VM container. I believe AMD CPUs aren't fully supported yet, and because of that, I wasn't able to run virtual machines within Proxmox either. Weird. What makes me wonder is that FreeNAS 11's VM feature works for running Linux VMs (or maybe it's limited to Proxmox...), but nested VMs don't work. Now that I have an idea of how Proxmox is set up, there were a few things left to learn how to configure.
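A quick sanity check when chasing nested virtualization problems like this is whether the guest actually sees the hardware virtualization flags (vmx for Intel VT-x, svm for AMD-V) in /proc/cpuinfo; if the hypervisor doesn't pass them through, nested VMs can't start. As a small sketch (the parsing helper and sample text are mine, not from FreeNAS or Proxmox):

```python
def has_virt_flags(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists vmx (Intel VT-x) or svm (AMD-V)."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # A flags line looks like: "flags\t\t: fpu vme de pse ... vmx ..."
            flags.update(line.split(":", 1)[1].split())
    return bool({"vmx", "svm"} & flags)

# Illustrative sample; on a real Linux guest you would read /proc/cpuinfo:
#   has_virt_flags(open("/proc/cpuinfo").read())
sample = "processor\t: 0\nflags\t\t: fpu vme de pse vmx sse2\n"
print(has_virt_flags(sample))  # True for this sample
```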
Since Proxmox was running as a VM, it needed space to store VM files. I had played with iSCSI before in the virtualization labs, but wasn't too sure how it worked. A bit of research summed it up as "direct block-level access over IP". That made sense to me, since all I wanted to do was write block data over a different medium than conventional SATA. Six things needed to be set up:
- zvol
- Portals
- Initiators
- Targets
- Extents
- Associated Targets
A zvol is needed; I set it to about 1TB. The portal is configured on the default port, with the listen address as 0.0.0.0:3260, and the Portal Group ID defaults to 1. I set the initiators and the authorized network to ALL. The target is set to access the zvol defined by the extent, and the extent points to the zvol's location. Associated Targets pair these two definitions together.
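To keep it straight in my head, the pieces above pair up roughly like this (the pool and object names here are illustrative, not what the FreeNAS UI forces on you):

```
zvol:              tank/proxmox-vm           (~1TB block device)
Portal:            0.0.0.0:3260              (Portal Group ID 1)
Initiator:         ALL, authorized network ALL
Target:            proxmox-target  ->  Portal Group 1, Initiator Group 1
Extent:            proxmox-extent  ->  backed by tank/proxmox-vm
Associated Target: proxmox-target  <-> proxmox-extent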
Within Proxmox, I would then add storage with the ID set to whatever I want the volume labelled. The address is the IP address of the iSCSI host; in this case, the FreeNAS box.
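Adding it through the Proxmox web UI ends up as an entry in /etc/pve/storage.cfg. A sketch of what that entry looks like, with a made-up storage ID, IP, and target IQN (yours will come from the FreeNAS target configuration):

```
iscsi: freenas-iscsi
        portal 192.168.1.50
        target iqn.2005-10.org.freenas.ctl:proxmox-target
        content images
```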
I'll be able to play around with Proxmox further once I get an Intel platform (since they play a lot nicer with these virtualization packages).