I had a lot of errors and file corruption on the previous system, so I replicated it to the new drives, only to identify the old RAM as the culprit. Continue reading “Unrecoverable/Checksum errors on ZFS pools”
Ever since applying a drive password on these ES.2 drives, I’ve been getting READ:CAM status ATA errors whenever the system is rebooted – causing increased boot time and an annoying export/import of the pool using the ES.2 drives. Continue reading “Remove SED Password on Seagate ES.2 Constellation Drives”
There are several ways to build a backup infrastructure, since the amount of data and the number of clients vary. Here is how I set mine up at home. Continue reading “Backup Strategy”
Many things to share but not enough time to create individual posts, so I’ll just make one. Even writing this took over a month to finish…
The current storage setup for my Proxmox server is mostly local, with ISOs hosted on the FreeNAS server and mounted via NFS. With only 500GB of hard drive space on the VM host, there isn’t much room for VMs, and disk IO is limited.
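For reference, an NFS share like this is defined in Proxmox’s `/etc/pve/storage.cfg` (or through the web UI). A minimal sketch – the server address, export path, and storage ID here are hypothetical placeholders, not my actual setup:

```
# /etc/pve/storage.cfg -- NFS-backed ISO storage from the FreeNAS box
nfs: freenas-iso
	server 192.168.1.10
	export /mnt/tank/iso
	path /mnt/pve/freenas-iso
	content iso
	options vers=3
```

With `content iso`, Proxmox only offers the share for ISO images; adding `images` to that line would also allow VM disks on it, which is one way around the 500GB local limit at the cost of NFS latency.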
With only a single gigabit connection between the VM host and the file server, throughput is the bottleneck and adds noticeable latency. Perhaps bonding the two ports on the remaining Intel Pro/1000 in the VM host into one logical interface would increase throughput. Of course, the other end would need a matching pair of gigabit interfaces as well.
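On a Proxmox (Debian) host, that bond could be declared in `/etc/network/interfaces` along these lines – a sketch only, assuming hypothetical interface names (`eth1`/`eth2`), a hypothetical address, and a switch that supports LACP for `802.3ad` mode:

```
# /etc/network/interfaces -- LACP bond over the two Pro/1000 ports,
# bridged for VM traffic to the file server
auto bond0
iface bond0 inet manual
	bond-slaves eth1 eth2
	bond-miimon 100
	bond-mode 802.3ad
	bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet static
	address 10.0.0.2/24
	bridge-ports bond0
	bridge-stp off
	bridge-fd 0
```

One caveat worth noting: with the standard hash policies, a single TCP stream (such as one NFS mount) still lands on one physical link, so the bond helps aggregate throughput across multiple connections rather than doubling the speed of a single transfer. Without an LACP-capable switch, `balance-alb` is a mode that works with unmanaged switches.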