Look online for an H310 PERC card from Dell. They can be had very cheap and then flashed to LSI IT mode. That's most likely what u/onmf is talking about: the H310. I have one and paired it with two of these SAS-to-SATA cables, and was able to go from a max of like 5 drives to a max of 13 drives (2x SAS ports -> 4x SATA each). It was fairly straightforward, worked great in FreeNAS pools, and is working great now in Windows Storage Spaces with SnapRAID.
As long as it's flashed to "IT mode" so you can control the disks directly, it will be fine, just plug and play. Similar cards are used by many on this subreddit (myself included). Those cards will not be any kind of bottleneck, and yes, any hard drive you use will not be able to saturate the bandwidth of that card or cables.
The cable is called a SAS SFF-8087 to SATA forward breakout cable. Like this: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
3.3 ft (1 m) is the longest you'll want to go.
> I have seen them mentioned for the dell rack servers but I did not know that they will work in a regular desktop computer too?
Yup, they work fine in a desktop computer. You need an appropriate slot (PCIe) and all that, but that should be a given.
>I was also curious about the cables since most don't come with any when I looked them up on ebay.
Yeah, they usually don't, so buy the cables on eBay or Amazon or the like. If you're going from the card to individual drives, you need what's commonly called a SAS breakout cable. I can't vouch for that specific one's quality, price, etc.; it's just provided as an example of what they look like. Note that a reverse breakout cable is different (though they're rare).
> Also the IT firmware vs without is confusing. Will it still do what I need being I am going to be using in a homemade desktop based nas using openmediavault?
What's commonly called "IR firmware" on an H200/H310 gives you the ability to create RAID 0 or RAID 1 arrays with your drives in its little onboard BIOS. If you don't create those arrays, it just passes the drives through to the host as if they were plugged into onboard SATA ports, essentially. It also gives you the ability to boot from the card, so you could, if you wanted, make a RAID 1 of drives and boot your OS off there.
What's commonly called "IT firmware" removes the ability to create those RAID 0/RAID 1 arrays, and removes the ability to boot from the card. People recommend flashing it because they're not aware that if you just don't screw with it, it works fine.
If you essentially just want more SATA ports, an H200/H310 with a breakout cable and no faffing about with flashing will do that just fine.
Not just an ethical issue; counterfeit parts are often made with subpar quality-assurance standards and cut-rate manufacturing to save cost. In my experience, even on the simplest ICs, many listed features don't actually work properly, if at all. In the case of a 24-port HBA, I'm sure they copied a controller rated for that many drives.
Even though not a single established brand would bother making a card with 24 SATA ports (because why the fuck would you), I'm sure they'll find a market of boomers to fall for it, even if only half the ports work. At that price point especially, they'll make a fat margin.
This is not a normal SATA port, this is SAS.
You need a mini-SAS to SATA cable, like this.
I can't see the connector, but it's something like this mini-SAS to SATA breakout cable. One mini-SAS connector nets you up to four SATA connectors.
Yes, an LSI 8i card has two ports to connect cables to, and each can take a 4-way breakout cable. You need to check what cable type is supported for your card; most LSI cards use SFF-8087. Then you can just get a cable such as this one (this is for SATA; you'll need a different cable for SAS), plus SATA power cables.
As for RAID expansion, I'm not sure. Make sure you're using software RAID and the card is in IT mode.
Yeah for sure, it really ain't no thang. The only thing I'd add is that your disk order will change. You'll need to get a pair of mini-SAS to SATA cables as well.
Coming from a dumb-dumb like me who finally bought one a few years ago after buying a ton of consumer crap: it's just worked on any motherboard I tried. Flashed to IT mode, I've never had a problem.
I thought the same as you and never got an answer, so I just bought a Dell PERC H200 pre-flashed on eBay. Worked fine in my Supermicro (enterprise) board, my Dell T20, my random ASUS consumer board, and my random consumer AMD motherboard.
All of them just picked it up, and that's it.
The only problem I had was getting the right cables. Mine needed these; there's another type I have that fits but doesn't work. I have no idea why, and someone else might know, but I don't, so just verify what kind of cable you need.
So it looks like the drive is SFF-8482. Can I mix and match the SAS tape drive and SATA hard drives on the same card? Like one 8087 to 8482 and the other 8087 to SATA? Would the 8482 maybe work with SATA drives?
Yep, that one should be just fine. As for the cable, I'm having trouble finding the manual or the exact spec, but it looks like SFF-8087 (forward breakout cable). If you need to use the external ports later, that's 8088. There's another cable type (SFF-8643) for some cards, which is why I'm trying to confirm, but this is the usual one.
Something like this should be what you need https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
PCIe 1.0 x8 is 2 GB/s, so that'll even handle multiple SATA drives OK.
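A rough back-of-the-envelope check (the per-lane and per-drive figures below are ballpark assumptions, not specs from any particular card):

```python
# Rough sanity check: PCIe 1.0 x8 bandwidth vs. spinning hard drives.
# Assumed figures: ~250 MB/s per PCIe 1.0 lane, ~200 MB/s sequential per HDD.
pcie1_lane_mb_s = 250          # approx. usable bandwidth per PCIe 1.0 lane
lanes = 8
hdd_mb_s = 200                 # generous sequential throughput for a 7200 rpm HDD

slot_bw = pcie1_lane_mb_s * lanes              # ~2000 MB/s for the x8 slot
print(f"Slot bandwidth: ~{slot_bw} MB/s")
print(f"HDDs before the slot is the bottleneck: ~{slot_bw // hdd_mb_s}")
# -> ~2000 MB/s and ~10 drives, and that's only if every drive is doing
#    full-speed sequential reads at the same time.
```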
> if it is flashed as RAID instead is that then the RAID controller (assuming my mobo had one) would need to deal with the traffic to all the drives, not just this one - is that right?
No. If it's flashed as a RAID controller, you'd have to go into the card's BIOS and configure the lone drive as a 1-drive JBOD or RAID 0 just to use it, and you may lose out on being able to get all the SMART data.
> Also - not sure what this 34pin SCSI connection is about? (that seems to be common on those cards)
Dunno. They normally have 1/2/4 mini-SAS connectors, each of which can break out into 4 individual SAS or SATA connectors (example).
Fair enough. Basically, SAS is backward compatible with SATA and supports four drives per port, so you can use a breakout cable, which makes wiring management really easy. And if you're not using SSDs, you can get away with a PCIe 2.0 two-port adapter with no real performance drop, since spinning rust doesn't even come close to 6 Gb/s.
Hi, that's a good idea, but then I'm back to square one: powering the SSDs... The beauty of the Silverstone adapters is they provide power, and the SATA->SAS cable (I have one of these) provides the data... But not with the P420 :-(
That card is a SAS card, so you would need some forward breakout cables in order to connect SATA drives to it.
These are the ones I have used in the past: https://www.amazon.ca/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
Are you planning on using a video card in that build in the future? That's an ITX board with a single x16 PCIe slot, which would be occupied by the card you linked above.
Funny enough, that was the original cable I purchased when I first had the issue, and it didn't work. Noticed it wasn't labeled as Forward or Reverse and bought this one: https://www.amazon.com/gp/product/B012BPLYJC/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 which did work.
I had a 2700x and 9900k. The P2000 blows them away for transcoding. I would keep the P2000.
Now you mentioned your only real concern was 10 SATA ports. This is an easy fix. Look on eBay for an LSI 9211-8i (can be found for under $40 and will run 8 SATA drives) or a 16i (a little more pricey, will run 16 SATA drives) for more ports, then get some breakout cables. This will be in addition to your onboard SATA ports. You can move them to any machine. I had them running under Windows and now Unraid with 20 drives.
Onboard video chip or an APU.
An APU can save you from needing an onboard video chip or using a dGPU (and losing the lone PCIe slot on mITX). But being an APU means less CPU power. That will be okay for most NAS usage, but when someone wants more, more cores are better. I've always asked here about an 8C/16T mobile APU with a very small iGPU for high-end laptops and similar applications. These applications either don't need a powerful GPU (like a server/NAS) or will already have a dGPU (like an AIO, a high-end laptop, or an SFF system).
Zen actually supports ECC, but it's up to the motherboard maker whether to implement and fully support it or not.
8x SATA ports on mITX can be hard to find (although such boards exist). But things can stay compact if we go for something more server-y, like two mini-SAS ports; each can handle 4x SATA with simple, low-cost adapters.
These boards should really have at least 2x 1GbE, or better, 1x 10GbE + 1x 1GbE, or 2x 10GbE for higher-end versions.
>My only gripe it's lacking in SATA III ports. Any tweeks to meet my above goals would be greatly appreciated.
Flash an LSI (or similarly branded) SAS RAID controller and you'll get 8 SATA 3 ports at your disposal (note: requires SAS to 4x SATA cables [e.g. this]). You can get ex-server ones quite cheap. /r/homelab could probably point you at which ones are worth getting now; I've not looked into it since buying my own 6 years ago.
I'm going to give this one a whirl; hopefully there isn't some issue with my motherboard preventing it from working. Got an H310 pre-flashed off eBay and ordered two of these, so let's hope this fixes all my issues.
I tried to put another 8 TB drive in my server this morning and it wouldn't work, even on a motherboard port. Not sure what's up.
Not saying this is the best one to get, just grabbed this as an example of a quick Google Search:
https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B012BPLYJC
The standard internal "Mini SAS" 4 lane connector is called SFF-8087.
This is by far the most common connector found on SAS HBAs / SAS RAID Cards and Drive Backplanes.
The external version of that is SFF-8088.
Some older HBAs use a SFF-8484 internally.
And then some newer 12G External SAS uses something called Mini SAS HD (SFF-8644).
Each of those is 4 SAS Lanes in 1 cable.
So I just realized the P812 has the Mini SAS on the board whereas the P800 has the larger SAS ports on the board. Therefore, you'll actually need these breakout cables.
https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B012BPLYJC
If you get two of those, you attach them to the internal ports and then that gives you 8 total internal drives. If you needed more than that, then you would get the SAS Expander, run SAS cables from the P812 to the SAS expander, and then use more of those breakout cables on the SAS expander to get more internal drives.
I haven't used the SAS expander for HP so I am not sure how well it works or what additional configuration you will need.
You would need those other cables I listed if you were going to use the P800, but I wouldn't recommend it since that card only supports up to 2TB drives, whereas the P812 supports MUCH larger drives and up to 108 total drives (if you really wanted to).
They share the same chip (LSI 2008) and have identical specs. I'm honestly not sure if there is a difference, because I have 8 IBM M1015 cards and the FRU is the same between 9211- and 9220-labeled cards. This will work fine for your ZFS software RAID5 setup and would make expansion in the future very simple. If you decide in the future to ditch ZFS and want hardware RAID (RAID 0, RAID 1, RAID 1E, and RAID 10), just flash to "IR mode" and use any OS.
I highly recommend this mini-SAS cable: http://www.amazon.com/gp/product/B012BPLYJC
If you don't have a backplane, you can use this.
Cable Matters Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout) 3.3 Feet https://www.amazon.com/dp/B012BPLYJC/ref=cm_sw_r_cp_api_glc_fabc_aR82FbER30CFW
Cables: 2x of these: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC/
SAS/SATA Controller Card: https://www.amazon.com/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M/
You might find it on eBay for less. Just posting the Amazon links for clarity.
Plug the card into a free PCI-e slot on your mobo.
Plug the Mini SAS SFF-8087 connectors into the two ports on the HBA card.
Plug the SATA connectors into the back of your 5-drive hotswap bay cage.
Insert the HDDs into the hotswap trays (if it uses trays).
Turn things on. Bob's your uncle.
P.S. If you want a PCIe 3.0 version of the HBA card, you'll need to look for the "LSI 93xx" versions. They're more expensive. Also, some people go for cards from other manufacturers; I prefer the LSI brand.
If you just want to RAID the whole thing, there are cheaper alternatives, but hardware RAID cards suck IMHO. With this type of HBA SAS/SATA controller, you can basically pass the drives straight through to your computer, and they'll show up as individual drives. Later you can RAID them via software, or not.
I bought mine from the ebay seller "the-it-mart." Arrived well packaged and works like a charm!
As for cables, I got these from Amazon.
Yeah looks similar to the ones that I have. Mine are longer, hence the higher price.
https://www.amazon.com/gp/product/B012BPLYJC/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
This SAS card is what I personally use and then use these breakout cables.
If you go with a different model, it's nice for it to already be in "IT mode" so you can just use it for JBOD. You can flash it yourself but it can be annoying to do.
No-name Chinese cards are always asking for trouble. Anything made by LSI should work for you.
8 drives or 16 are your options. 16 will no doubt cost more, so using 2 ports from the mobo will save you money.
https://www.ebay.com/itm/115430553325
You will use two of these to connect card to backplane
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
You want a card that can be flashed to IT mode so that it is an HBA instead of a RAID card. Most people feel software RAID is better than hardware RAID for many reasons I won't go into here. A SAS HBA card is designed for SAS drives, but you can get breakout cables that go from 2 SAS ports to 8 SATA using 2x SFF-8087 cables.
Here's a list of cards:
Here's a guide to cross-flashing the cards:
Here is an example of the cables:
https://www.amazon.ca/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
Not with an SAS splitter.
There are actually only two SAS3 12Gb/s (SFF-8087) connectors on the HBA.
In the enterprise environment, you usually connect these directly to a server storage backplane, but many people in the homelab community use SFF-8087 to SATA cables so they can connect 4 drives per SFF-8087 connection, since a lot of people aren't running enclosures with backplanes.
I've been using these Cable Matters breakout cables.
On the bottom right side of the board, there should be 2 mini-SAS connectors; if they are labeled, they will say SAS4-7 and SAS0-3 (numbering the drives respectively). https://www.supermicro.com/manuals/motherboard/C606_602/MNL-1258.pdf
If you wanted to use them, you could plug mini-SAS breakout cables in there and use them. The board controller is a SAS controller (it's located to the right of the bottom PCIe slot). These would use breakout cables like these: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC The SAS ports are called SCU and support SATA2. They should be able to support SAS as well, since both are basically serial connections; someone who has tried it on that board may need to comment, but I would assume it would work.
Since your board doesn't have an NVMe controller, it won't support those drive types natively, only SAS and SATA. But if you were to add, say, a card like this https://www.amazon.com/gp/product/B081SJYCTL/, it will give you 2 M.2 NVMe ports per PCIe slot. There are MANY other cards out there, and they can get very pricey when you start going down that path. That card I use in a few systems, and it works fine so far. There is no cache on this card, however, so if you want cache, you're going to need to go the route of putting in a RAID controller with the cache and ports you want. These will also be PCIe devices. You have plenty of PCIe slots there for expansion, so it can easily do it, and they are all x16 and not bifurcated from the looks of it.
As for whether the controller will let you create RAID on those NVMe drives: that will be up to the controller. It may, it may not. There's really only one way to find out, and that's load 'em and see. The card I linked does not have a RAID controller; it's basically just a JBOD card and doesn't do anything natively except support NVMe drives. The board controller that is built in does support RAID 0, 1, 5, and 10 on SATA, and 0, 1, and 10 on SAS through the SCU (the breakout cables), and will often support 5 as well.
Note: you cannot boot from these PCIe devices *unless* they have a RAID controller on board. You can, however, boot VMware from a USB key or something similar, then create datastores on these disks and boot virtual machines from them without a problem.
Hope I didn't confuse you more, lol.
You can shop for better deals but this is what you're looking for:
https://www.ebay.com/itm/313337725426?epid=8047438006&hash=item48f46241f2:g:M7sAAOSwk2Je3Sxb
And cables:
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
Something like this listing here is what I have in my server. You also need to buy SAS breakout cables; something like these would work. IMHO, PCIe x1 at 250 MB/s will likely run into bandwidth issues if you have a lot of drives connected, and you'll soak the connection with even one SSD on that bus. The SAS card is PCIe x8.
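To put numbers on that comparison (the throughput figures are rough assumptions for illustration, not measurements):

```python
# Quick comparison: a PCIe x1 SATA card vs. a PCIe 2.0 x8 SAS HBA.
# All figures are ballpark assumptions.
pcie_x1 = 250           # MB/s, PCIe 1.0 x1 slot
pcie2_x8 = 8 * 500      # MB/s, PCIe 2.0 x8 -> ~4000 MB/s
sata_ssd = 550          # MB/s, typical SATA SSD
hdd = 180               # MB/s, typical HDD sequential

print(f"x1 card: {pcie_x1} MB/s -> already saturated by one SSD ({sata_ssd} MB/s)")
print(f"x8 HBA:  {pcie2_x8} MB/s -> roughly {pcie2_x8 // hdd} HDDs' worth of sequential throughput")
```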
No; I plug the drives straight into the controller (with one of these). My PCIe LSI connects to the drive cage and so all 12 drives (LFF) there hang on that.
I have the same case for my Unraid setup. It has worked great for me. Installed a p2000 and 2 of these https://www.ebay.com/itm/164266166231 with 4 of these https://www.amazon.com/dp/B012BPLYJC/ref=cm_sw_r_cp_api_glt_fabc_G5N4TNE8NK1GZKD6E6P1?_encoding=UTF8&psc=1
https://www.ebay.com/itm/284099688366
Depending on your case and drive orientation, the breakout cables may or may not reach.
There is a slightly longer version:
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
As for supplying power, you might need to buy a 2nd power supply and pin-jump it. (Be careful, I take no responsibility for mess-ups here.) Do not use a bare paperclip; find some alligator-clip wiring and strip the ends.
https://linustechtips.com/topic/785072-how-to-jump-a-psu/?tab=comments
At any rate, buy the highest-rated power supply (don't settle for Bronze, go for Gold or Platinum) and connect everything to a UPS.
Sorry it took a bit to get back to this.
Up front, the setup I have is a little heavy on PCI-E slot usage, as you'll see. That might be an issue, depending on your motherboard.
Step one is an LSI 9211-8i based SAS card. The 9211-8i is a fairly low-end 6Gbps SAS/SATA RAID controller LSI made; they sold some cards with it directly under their own name, but it was also popular in entry-level server cards. IBM is probably the most notable, where they called it the M1015, and Dell called it both the PERC H200 and H310. Regardless of the name, it's a PCI-e 2.0 x8 card with a pair of SFF-8087 ports on it. The important bit is that there is LSI firmware (namely, the P20 firmware) available to use the chip in IT (Initiator/Target) mode - basically a nicely featured HBA with no RAID abilities that just passes the disks through to the operating system directly. If you're buying a used server pull at this point, there's a decent chance it's already been flashed to that version, but it's easy enough to do yourself if not. Prices vary - they look like they're currently going for around $120, but I paid closer to half that a few months ago.
If you only need 8 (or fewer) drives, just toss that card in your system and get some SFF-8087 to SATA breakout cables like these and call it a day, though there's no particular advantage of that over any other decent (ie, not PCI-e 2.0 x1) 8 port SATA controller.
Where the 9211-8i really shines, though, is as a cheap SAS HBA that supports both SAS expanders and SATA drives for people who need a lot of cheap SATA ports. The cheap solution here is an HP SAS expander, which is a PCI-e x4 card (it only draws power from the slot, so PCI-e version and available lanes don't really matter, and is totally invisible to the OS) that serves as something akin to a networking switch for SAS, and has 8 internal SFF-8087 and one external SFF-8088 ports.
From this point, there's a few ways you can set things up, but there's a couple things to consider when you do. One is that the HP SAS expanders will only run SATA drives at 3Gbps, which is fine for HDDs that generally can't saturate that anyways, but makes them a bit of a poor fit for SSDs. They also support teaming on the HBA connection, daisy chaining, and 6Gbps/3Gbps encapsulation. This means that you can optionally run either one or two SFF-8087 connections (each of which carries 4x 6 Gbps SAS connections) between the HBA and the expander, and that each 6Gbps SAS lane can carry the traffic to light up two 3Gbps SATA connections at the same time (bandwidth is allocated automatically across all drives, this is just the number at one moment). You can also stick expanders behind expanders, up to a total of 255 (IIRC) drives, but limited by the bandwidth available to the HBA.
I would consider the "typical" setup with those two cards to have one 9211-8i HBA connected to one SAS expander with a pair of SFF-8087 cables (so there's 48Gbps connectivity between expander and HBA), and from there either the breakout cables I linked above or SFF-8087 cables to a backplane to connect up to 24 drives. This also leaves an external SFF-8088 connector that will provide 24Gbps to an external SAS disk shelf(s)* if you need even more drives.
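If you want to sanity-check the "two SFF-8087 cables feeding 24 drives" math, here's a rough sketch. The figures are assumptions: ~600 MB/s of payload per 6 Gb/s SAS lane after 8b/10b encoding, ~4 GB/s usable from the HBA's PCIe 2.0 x8 slot, and ~200 MB/s per spinning drive.

```python
# Rough math for an HBA -> expander -> 24 SATA HDDs setup.
# All numbers are ballpark assumptions, not measured values.
sas_lane_mb_s = 600            # ~payload per 6 Gb/s SAS lane (8b/10b encoding)
lanes_to_expander = 8          # two SFF-8087 cables x 4 lanes each
pcie2_x8_mb_s = 4000           # ~usable bandwidth of the HBA's PCIe 2.0 x8 slot
drives = 24
hdd_mb_s = 200

link_bw = sas_lane_mb_s * lanes_to_expander   # ~4800 MB/s between HBA and expander
ceiling = min(link_bw, pcie2_x8_mb_s)         # PCIe ends up being the tighter limit
print(f"Expander link: ~{link_bw} MB/s, PCIe slot: ~{pcie2_x8_mb_s} MB/s")
print(f"Worst case per drive with all {drives} drives streaming: ~{ceiling // drives} MB/s")
# ~166 MB/s per drive even in the pathological all-drives-sequential case,
# which spinning rust rarely sustains anyway.
```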
In an archival sort of set up (which doesn't sound like it applies to you), you could also hang a pair of expanders off a single 9211-8i with a single SFF-8087 connection each (for up to 56 SATA ports total), and then use one or both external ports to feed a tape backup solution, but that is less common.
*Worth noting, actual purpose made SAS disk shelves tend to be expensive, and it can be cheaper to buy a case, pop a dirt cheap low end motherboard in it, and use that to power another HP SAS expander that uses the external port as the input instead. They're handy things!
I went with a card from "The Art of Server" eBay store: https://www.ebay.com/itm/Genuine-LSI-6Gbps-SAS-HBA-LSI-9211-8i-9201-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID/163846248833
They are slightly more expensive but you don't have to worry about flashing the firmware and making sure it's the correct version and whatnot.
One card will allow you to connect 8 SATA drives via 2 cables.
And these cables: https://www.amazon.com/gp/product/B012BPLYJC/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
If you ever need more ports, you can get an expansion card:
and one of these cables:
https://www.amazon.com/gp/product/B07L9TZJKB/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Note: the expansion card only uses the PCIe slot for power, not data. Data goes through the first card.
Do you have a board recommendation for building in a traditional ATX tower case like the Define 7 XL? I saw a video or something about some mobos with SB coolers interfering w/ GPU installation. Dunno if that is something to worry about w/ P2000 GPU since they look kinda short.
I started looking into HBAs last night, having never used one before, or even seen one, honestly. This is the one I was eyeballing:
https://www.newegg.com/p/14G-0006-00159?Item=9SIAHFHE5G4001&Tpk=9SIAHFHE5G4001
I believe each SAS port can be split to handle 4 SATA connectors using something like this:
Since the case would support up to 18 HDDs, I assume I'd need to have 2 HBA cards (+ the SATA ports on the mobo) to saturate the whole case worth of HDDs. Is that right?
Honestly, a lot of the HBA specs on the Newegg pages I either don't grasp or they seem like overkill for what I'll be doing (possibly because I haven't grasped them yet). How do I tell which specs are most important to focus on? I have the TrueNAS build guide pulled up and was reading the HBA section for clues:
--Try 2x - 4x mirrored SSDs on a SAS 4-drive breakout cable with the SAS card in HBA mode - no hardware RAID. YMMV with lz4 compression / encryption, so I would use the fastest available (Ryzen?) CPU within your budget.
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
--Be prepared to spend some $$$ for speed. Samsung 860 Pro drives are ~$100/ea on amzn for 512GB but they're considered some of the best in the industry (outside of Optane) for write endurance. SAS Card shouldn't be too expensive these days but you may have to "flash" it to HBA mode.
--If you go with SAS I highly recommend putting a fan on the chipset so it doesn't overheat:
https://www.amazon.com/gp/product/B0069W28SU/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
--The reason I recommend SAS with SATA drives instead of straight SATA is full-duplex comms vs. half duplex.
REF:
Cables: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC HBA: https://www.amazon.com/StorageTekPro-9211-8i-FreeNAS-unRAID-6Gbps/dp/B07JZ6FYVC/ref=mp_s_a_1_3?dchild=1&keywords=LSI+SAS+9211-8i&qid=1615574792&sr=8-3
These are just the first links I found, please research eBay for cheaper hbas, eBay is full of them.
Ever heard of breakout cables?
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
I have a 9201-8i running 8 drives with 2 breakout cables.
There may be a second cable for power, but that's a very common SATA breakout cable for the data. What model is your server?
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
I'm not sure what your budget is, but an Adaptec 6805 can be had on eBay for $30 with SAS cables to a backplane. If you're not using a backplane, Amazon has breakout cables that are relatively affordable.
This is what I do: I host a VM that I pass the RAID controller through to; it manages all the drives and presents them to ESXi as an NFS datastore. This VM is hosted on an SSD so that it's always accessible to ESXi in case you need to take the NAS server down for maintenance. It's very painless.
The case comes with two rows of backplanes for the top 4 and bottom 4 hot-swap bays; they connect with an SFF-8087 interface. I have two SFF-8087 cables connecting from those interfaces to an expansion card; I managed to find a cheap one on eBay for about $40 USD.
You could, however, just get breakout cables (SFF-8087 to 4x SATA) for those connections (that link is just the first Google result, so feel free to shop around).
I did have a PCPartPicker build for what the NAS has internally, but I seem to have lost the link for it. I went a bit overboard on the internals and used an ASRock Rack X470D4U2-2T and a Ryzen 3600; due to height constraints I used a Noctua NH-L9a-AM4 to cool it, and currently have 32 GB of Crucial DDR4-2666 memory.
Let me know if you want any more details.
[EDIT]: I almost forgot: the Noctua cooler requires you to pry off the backplate that the stock cooler retention bracket uses. It seems to be glued on for that ASRock motherboard for some reason, and I needed an iFixit kit with the plastic tools to pry it from the motherboard. Just a heads up.
The only thing I'm an advocate for is using an LSI RAID card over SATA ports. Getting one already flashed from eBay is not that much. Runs 8 drives on 1 card.
Any SATA card will do, or you may want to consider an LSI 8-port SAS HBA card (about $50 off eBay). With an HBA, you can use a SAS to SATA breakout cable, which is very nice for cable management:
https://www.amazon.ca/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC
HBAs don't support SSDs well depending on the firmware version, so only use them for hard drives.
First things first: RAID IS NOT A BACKUP
RAID is mainly used for redundancy purposes where you can't afford to have downtime caused by a failed drive. It allows a system (most often a server) to limp along until downtime can be scheduled, the drive replaced, and the array rebuilt. Let's say you have your "backup" RAID in the PC that you're backing up and a surge fries your PC. RAID doesn't matter, because that RAID array was zapped along with all of your other drives.
If you're looking for a backup solution, go get something like a 4TB WD MyBook and do weekly backups. WD includes software with their drives that can automate this process. Or you can go all out and get a Synology NAS or DIY it; it's really up to you. To answer your questions though:
> Should I do software or hardware RAID
Assuming this is a Windows host, I wouldn't touch software RAID with a ten foot pole. It requires you to convert disks to dynamic for RAID and recovering a dynamic disk isn't easy if the array does fail.
> what are your suggestions for the most budget-friendly way to do it
For hardware RAID, you can get recycled Dell RAID controllers for pretty cheap. I've never used one with a consumer OS but I've heard that the H310 (which is built off of the LSI 9211-8i) should be plug and play as far as drivers are concerned. You'd then need an SFF-8087 to SATA breakout cable to use the drives, you can get one on Amazon for like $12.
It depends on the card. They don't all use the same connectors.
I have a 9207-8i and I'm using two sets of these: https://www.amazon.ca/gp/product/B012BPLYJC/ref=ppx_yo_dt_b_asin_title_o03_s01?ie=UTF8&psc=1
For the external ports on the 9207-8e you want these: https://www.amazon.ca/CableCreation-External-26pin-SFF-8088-Cable/dp/B013G4F5QK/ref=sr_1_1?dchild=1&keywords=SFF8088&qid=1592340374&s=electronics&sr=1-1
Side note: I am on the Amazon Canada not the US site.
So, I never can remember whether you want forward or reverse breakout cables; here it looks like forward. Amazon link
The Intel RES2SV240 does not need to occupy a PCIe slot if you can provide power to the Molex connector on the top.
You run two mini-SAS to mini-SAS cables from your M1015 card to the Intel RES2SV240, which leaves 4 ports; using forward breakout cables, that's 16 SATA ports.
I'm currently using an ASRock E3C224D4I-14S, which has an integrated LSI 2308 controller. It has 3x mini-SAS ports, on which I use breakout cables, giving me 12 drives. To get more ports I can use a SAS expander card or an extra PCIe controller.
Would I be able to reuse the MSI mobo if I purchase the PCIe card linked? In addition, it seems I would also need the items below, right?
Yes you would put the LSI 9201-16e into a PCIe slot on your current HTPC. Then you would run up to four SAS to SATA cables inside the back of the new case. There will be a big rectangular hole in the back because the tiny power board doesn't have an IO shield (or any IO in the back at all). The cables are a meter long so they should reach just fine.
As for some kind of pre-built all-in-one unit, the only thing that comes to mind is something like the NetApp DS4243. These can be found pretty cheap on eBay but I don't recommend one because they only use server power supplies that are VERY loud. Like jet-engine loud. Seriously, unless you keep both computers in a garage or basement you don't want to actually live anywhere near these things.
If you do have a basement or something though and want to get one, you would still use the LSI 9201-16e but instead use SAS to SAS cables as they have four SAS ports on the back and you would just slide the drives into the front. Everything inside is already connected.
Be aware though that most don't come with HDD trays, so you'll have to buy them separately, and the ones that do usually have old 1 TB drives in them already, which drives up the price. But even if you think 24 TB extra is good for whatever they're asking, you have to remember they were heavily used in a server environment and are likely to die soon, not to mention the electricity costs of powering all those drives with 4 server PSUs, and again the NOISE.
Plus there are compatibility issues that even I don't fully understand. You definitely should read up on them before buying one, but really it's not worth the trouble IMO.
I think you're better off going with one of the other options.
This. I've been using an LSI SAS/SATA 9201-8i
with no issues; it works great and keeps cabling clean with a mini-SAS to 4-port 3Gb/s SATA adapter
https://www.amazon.com/dp/B012BPLYJC/ref=psdc_6795231011_t3_B008KF7H8U
edit: 9211-8i has two of those ports, so you'll need two sets of those cables, like this:
https://www.amazon.com/dp/B0728KRZYB/ref=psdc_6795231011_t3_B012BPLYJC
Maybe getting some of these sorts of SATA cables would make it easier - https://www.aliexpress.com/item/4PCS-Free-Shipping-DIY-Black-sata-3-SATA-III-3-Data-Cable-Dual-channel-aluminum-foil/1582341251.html
Or get a SATA controller that uses Mini-SAS to SATA cables and get these - https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B012BPLYJC
Would make running separate SATA cables a bit easier and more manageable
Use a forward breakout cable?
Maybe an H200: flash it to IT mode, then a forward breakout cable, then just figure out how to power it all.
I had something like 2 of these on hand:
https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B012BPLYJC
Just curious - what are you going to be using this storage for?
crash
I need new cables to connect the back panel to the LSI don't I?
I currently have https://www.amazon.com/gp/product/B012BPLYJC/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
I am looking at rolling my own. Here are the components I am thinking about using.
Not too worried about which type of RAID, as long as it supports the drives at full throughput. The 9211-8i looks pretty nice.
Would this work as a breakout cable for the drives? https://www.amazon.com/dp/B012BPLYJC