I would suggest XigmaNas (formerly NAS4Free). The software runs great, and the support forum is actually very supportive. I am computer savvy but a networking noob, and the people there have helped me out more than once.
ZFS does have an experimental and unsupported way to checksum data in RAM via the ZFS_DEBUG_MODIFY flag. It might be worth looking into if you can't use ECC and are that paranoid.
https://www.reddit.com/r/zfs/comments/8102nf/any_experience_with_the_unsupported_openzfs/
Well, you currently have a 3-2-1 solution (two local copies, Synology and Stablebit, with GCloud as an off-site copy). Your concern is that you're running SHR1 instead of SHR2.
If you convert to a single, bigger array (I counted 12 drives mentioned in total), you would lose one of your two local copies, and you really wouldn't add redundancy by changing the NAS to SHR2 / RAID6 / RAIDZ2. (Currently, your effective redundancy is still two disks: if you lost two disks on the Synology, you'd lose the Synology but still have the Stablebit DrivePool, and if you lost two disks on the Stablebit DrivePool, you'd still have the Synology. If you lost three disks, you'd still have the data on the NAS that lost the fewest disks.)
The thing is, you could be making your situation potentially worse if you did switch to a single NAS. You could buy a fairly large case (Phanteks Enthoo Pro, Corsair Obsidian Series 750D, etc.) and move the drives in there, but while you have more disk redundancy, you'd lose the redundancy you had before (CPU, RAM, PSU, etc.)
I don't think you're wasting the drives in the drive pool (it's your second local copy), but if you really want to change it, then you could check out Brian's DIY NAS: EconoNAS 2019 running FreeNAS. A number of users here seem to like his solutions.
I personally like the SuperMicro CSE-847 (check out used servers on eBay) running XigmaNAS, but it's pretty noisy unless you swap out the PSU and make fan adjustments. It's also massive overkill for most people, but you are on /r/DataHoarder, so...
This command will give an idea of the available space on each filesystem:

    df -Tm -x tmpfs -x devtmpfs
There is also

    df -i

which gives an idea of the available inodes as well.
Also, what does

    lsof +D /media/pidrive/torrent-complete

output when the torrent is active?
According to this post, apparently Transmission uses a sub-folder in your home folder for some files.
10Gig LAN, cool stuff. The dev team at https://www.xigmanas.com/forums/ would most likely enjoy hearing about your plans and use case.
I used iperf3 to test between my XigmaNAS box and my desktops, but that was only gigabit LAN.
Yes, Transmission.
https://www.xigmanas.com/index.php?id=3
However, one thing XigmaNAS does come preconfigured with is VirtualBox for VMs. Unlike most other NAS solutions, it ships 'ready to go' for VMs.
If you're trying to torrent cleverly, you spin up a tiny VM and block all ports in its firewall except your VPN port.
That way it has its own 'kill switch': if the VPN drops, it won't be using port XX anymore, and once it reconnects, it will. :)
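A minimal sketch of that kill switch inside the VM, assuming a Linux guest with iptables, a VPN on UDP 1194, and a `tun0` tunnel interface (the port and interface name are assumptions; match them to your VPN provider):

```shell
# Hedged sketch: default-deny outbound firewall so torrent traffic can only
# ever leave through the VPN tunnel. 1194/udp and tun0 are assumptions.
iptables -P OUTPUT DROP                              # drop everything by default
iptables -A OUTPUT -o lo -j ACCEPT                   # keep loopback working
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT     # let the VPN itself connect
iptables -A OUTPUT -o tun0 -j ACCEPT                 # allow traffic inside the tunnel
```

If the tunnel drops, `tun0` disappears and the default DROP policy takes over, so nothing leaks out the real NIC.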
Even if you didn't want to do the firewall trick, I'll always recommend full VMs over Docker containers for anything that's WAN-facing, as opposed to your NAS, which is likely only LAN-facing.
While Docker containers are secure today, unless you're truly short the ~100 MB of RAM to spin up a mini VM, I'm not one to risk a zero-day exploit existing.
>In regards to an 'Appliance' OS, I recommend XigmaNAS, as it's an original and cleaner fork of the original FreeNAS.
Their recommendation: "For using ZFS, we do recommend an absolute minimum of 8GB RAM for greater system performance."
Well, good for you, trying to save the guy some $30 by going with the absolute minimum system requirement.
>wall of text
The core point is that ZFS benefits from RAM; that's it. Getting hung up on words and arguing minutiae is pointless.
>That's odd, Do you deploy many servers in your career?
A few here and there. We never bothered with any Pentiums, though the HP MicroServer looked interesting for one application; the lack of an M.2 slot for the system drive (so SATA could be kept for storage) put it back to meh. I always assumed it was non-ECC, given how cheap it was.
Anyway, welcome to the topic of "DIY", as I can't find consumer boards for these Pentiums to use in these builds... do you know of a few?
No, it was the original FreeNAS. Here's your proof.
> XigmaNAS has gone through several name changes throughout its lifetime, but it has always been the original open source NAS distribution. Originally called “FreeNAS” when development began in 2005, the project changed its name to “NAS4Free” in 2011 to avoid legal issues when iXsystems acquired the trademark to the “FreeNAS” name.
Mirrored vdevs are considered by many to be the optimal pool solution. It has been posted many times here before. They have better performance, are easier to expand, and resilver much faster when replacing disks.
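As a sketch, a pool of mirrored vdevs looks like this (pool and device names are assumptions; on FreeBSD/XigmaNAS the disks might show up as ada1-ada6):

```shell
# Hedged sketch: create a pool of two 2-way mirror vdevs, then expand it
# later by simply adding a third mirror. All names are assumptions.
zpool create tank mirror ada1 ada2 mirror ada3 ada4
zpool add tank mirror ada5 ada6    # expansion is just another pair of disks
zpool status tank                  # verify all mirrors are ONLINE
```

That two-disks-at-a-time expansion is a big part of why people prefer mirrors over RAIDZ, where growing the pool means adding a whole new RAIDZ vdev.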
The fact that I have my data backed up to a drive on-site and another off-site should tell you I have a plan for when hard drives fail.
You can go ahead and move on. It's obvious you do not have the knowledge to contribute to the conversation, and you have contributed nothing so far.
Seems to be a known issue with that board and 11.2
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230172#c10
I know this is an old thread, but just in case anyone comes across this via Google (as I did while searching for something else): Newer versions of XigmaNAS (NAS4Free) do have a basic GUI file manager via the webUI.
Screenshot on this XigmaNAS wiki page.
For what you want, if you're just using a single pool, I'd suggest XigmaNAS (formerly NAS4Free), which is a lighter-weight FreeNAS fork from before FreeNAS went 'fancy', but still feature-rich.
Don't get caught up in the high requirements quoted for ZFS; it scales very well. 512 MB of RAM per TB is more than enough if you're not using dedup (and you won't be).
If you take 4 GB as the bare minimum, then add 256 MB for each 'no features other than redundancy' TB of data, I'd wager you'll be fine.
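As a back-of-the-envelope sketch of that rule (the numbers are just the rule of thumb above, not official requirements):

```shell
# Hedged sketch: 4 GB base + 256 MB per TB of plain, redundancy-only storage.
pool_tb=16                               # hypothetical pool size in TB
total_mb=$(( 4096 + pool_tb * 256 ))     # 4 GB base + 256 MB/TB
echo "${total_mb} MB (~$(( total_mb / 1024 )) GB) of RAM suggested"
```

By this rule a 16 TB pool works out to about 8 GB, though smaller amounts can work fine in practice.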
https://www.xigmanas.com/wiki/doku.php?id=documentation:setup_and_user_guide:hardware_requirements
You'll see there that 2 GB is the minimum, 4 GB is the safe minimum, and 8 GB is the recommendation; my 16 TB pool runs fine on 4 GB.
However, if this is mainly a massive media pool?
OpenMediaVault is great!
You can use SnapRAID for your unchanging media pool, and add the ZFS plugin for a two-drive mirror for important, irreplaceable data.
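A minimal sketch of what the SnapRAID side could look like (all paths and disk names here are assumptions for illustration; the OMV plugin generates this config for you):

```shell
# Hedged sketch: write a minimal snapraid.conf, then compute parity.
cat > /etc/snapraid.conf <<'EOF'
parity  /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
exclude *.tmp
EOF
snapraid sync    # compute parity for the current media files
```

Unlike real-time RAID, you re-run `snapraid sync` after the media changes, which suits a mostly-static media pool.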
https://www.youtube.com/watch?v=rxEyR30iNt0 (watch him at 1.25x speed, my god...)
Whatever you do: if power is cheap where you are, you're going for a single pool, and you decide you like the ZFS features, then go with a six-disk RAIDZ2. You want 'set and forget', and double parity is your friend.
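For reference, a six-disk RAIDZ2 pool is a one-liner (pool and device names are assumptions, using FreeBSD-style disk names):

```shell
# Hedged sketch: six disks, any two of which can fail without data loss.
zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6
zpool status tank    # verify the raidz2 vdev came up ONLINE
```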
Storage is cheap, re-taking photos isn't. Once you delete the RAW file it's gone forever.
> Once my year is over I store them in external drives and keep going on my way.
I would highly recommend some kind of NAS for storage instead of external drives. You can buy an off-the-shelf one or build one yourself (just a server-class PC with something like Xigmanas). You can build something with 32TB usable storage for around €1500.
Also, make sure to follow the 3-2-1 rule for backups: keep 3 copies of your data (working copy + 2 backups), on 2 media types (e.g. disk + cloud), with one of those backups off-site (e.g. in the cloud, or an external drive at a friend's house).
Okay, you CAN do this. It's a really bad idea, but it IS possible. This article from XigmaNAS (the old NAS4Free) outlines how to do it, and it requires command-line work.
That said, let me reiterate: THIS IS A REALLY, REALLY BAD IDEA. It's a GREAT way to test your backup solution and spend four months and/or lots of trips making recoveries, and you will only find out whether that was necessary after the data on the reserved drives has been destroyed.
You would be far better off doing what both /u/keeperofdakeys and the single reply to the above thread recommend: take a couple of 3-4TB drives and use them for the RAIDZ2, transfer the data, and then `zpool replace` one disk at a time. If the replace fails, you can just put the old drive back in, and your data is still there.
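That replace cycle is roughly the following (pool and device names are assumptions):

```shell
# Hedged sketch: swap disks one at a time, letting each resilver finish.
zpool status tank                 # confirm the pool is healthy first
zpool replace tank ada3 ada7      # replace old disk ada3 with new disk ada7
zpool status tank                 # watch resilver progress; repeat only
                                  # after the resilver completes
```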
Or, you know, buy an external drive or two and then return it after you wipe the drive(s).
But is the whole endeavour worthwhile? Most of the file types you mentioned are fairly small, so you also have to factor in the time and effort of compressing and recompressing them manually; that may "cost" you more than the extra storage space would, whether it's local disk or cloud storage. As far as "open-source software to store files with high compression so I can store more data on a given disk" goes, the easiest way is to set up a NAS with ZFS and set the compression to maximum (gzip-9), unless they have since added even stronger compression algorithms. https://www.xigmanas.com/wiki/doku.php?id=zfs:compression:comparison
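Setting that on a ZFS dataset is two commands (the pool/dataset name is an assumption):

```shell
# Hedged sketch: enable maximum gzip compression and check the achieved ratio.
zfs set compression=gzip-9 tank/archive
zfs get compression,compressratio tank/archive
```

Compression only applies to data written after the property is set; existing files stay as they were until rewritten.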
> rule of thumb is at least 1GB per TB of hard drive space
This is a myth from ZFS's early days. You only need a lot of RAM in ZFS if you use deduplication (and even then, not that much). Even the XigmaNAS hardware requirements don't say anything like that. https://www.xigmanas.com/wiki/doku.php?id=documentation:setup_and_user_guide:hardware_requirements
Get a used PC; any old i3 will do. Just make sure it has enough RAM (rule of thumb is at least 1 GB per TB of hard drive space). Buy some 4-6 TB hard drives (that's the sweet spot for price per TB right now, I think). Set up XigmaNAS (formerly NAS4Free); it's FreeBSD-based NAS software.
XigmaNAS will provide access via SMB, NFS, and FTP, and has a torrent client as well. It boots from a USB stick and runs in memory, so no additional disk is needed for the OS. Configuration is done via a web interface.
It uses the FreeBSD operating system and ZFS for the on-disk filesystem. For any NAS purpose you want ZFS, and the ZFS code in FreeBSD is exceptionally solid; ZFS on Linux lags behind.
I put each of my services in its own Linux container (using LXC); the host OS is a vanilla Debian Stretch installation. Tremendous flexibility, but reliability and stability at the same time. LXC lets you have different "flavors" of containers (e.g. a Fedora or Ubuntu container) on your base Linux installation (whatever that is: Arch, Debian, etc.).
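A sketch of spinning up one such container on a Debian host (the container name, distro, and release here are assumptions):

```shell
# Hedged sketch: create, start, and run a command inside an Ubuntu container
# using LXC's 'download' template. All names are assumptions.
lxc-create -n media-svc -t download -- --dist ubuntu --release bionic --arch amd64
lxc-start -n media-svc
lxc-attach -n media-svc -- apt-get update   # run a command inside the container
```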
>9212-4i
The best guide I've found is the one by u/techmattr for the Dell H310 and H200 series cards. As they are all essentially just rebrands of the same LSI SAS2008-based card, that guide should work if you substitute in the right files for your card. This was where I read that it was SAS2008-based.
Edit: I found this one as well that is specifically for your card.
If you're looking for a simple setup, I suggest NAS4Free/XigmaNAS. It's a very lightweight NAS OS, best installed to and booted from a flash drive or SD card. You can do a JBOD with a ZFS pool spanning the drives fairly easily in the web UI, and it has a BUNCH of connection methods, including iSCSI target. For speed... well, I have it running on a computer I literally found in the trash: a Compaq LGA775 Celeron with 4 GB of DDR3 RAM and a few random SATA disks I had lying around.
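For what it's worth, that spanning setup is just a pool of plain single-disk vdevs (device names are assumptions; note that losing any one disk loses the whole pool):

```shell
# Hedged sketch: a spanning (striped, zero-redundancy) pool across three disks.
zpool create -m /mnt/span span ada1 ada2 ada3
zpool list span    # capacity is roughly the sum of the disks
```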