If your motherboard supports PCIe bifurcation, it is cheap and easy.
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
But this is quite a modern feature, so you do have to read your manual to know for sure.
If your mobo doesn't do bifurcation, you will need a card that has a PLX bridge chip, and those cards run around $300.
The board in question is a X10SDV-6C+-TLN4F-O by SuperMicro. It seems that bifurcation may be supported (works on the X10SDV-TLN2F) with 8x4x4 and 4x4x4x4 configs.
So in theory, something like this should work (since the board has gen 3 PCIe):
Problem is that the PSU may be a limiting factor indeed. This is all gonna be in a SuperMicro SC721TQ-250B chassis, and its 3.3V rail maxes out at 12 amps :\
Well, keep in mind that if you have an extra PCIe x16 slot available you can always drop in a card to install M.2 drives on.
I'm personally running a small form factor build, so it's just the GPU for me and that's it.
I am not familiar with running the M.2 SSD through SATA. I used a PCIe M.2 adapter, which worked fine. One caveat: I forget if it was an R710 or R720xd I used it on, but I doubt it would make a difference. ASUS M.2 PCIe card
Yeah but I mean, 5 M.2 slots? I would prefer the more useful x16 slots where I can plug in other expansion cards too. And if needed you can just use something like this IF you need more M.2: https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
>I have been waiting for years for a NVMe based NAS from Qnap/Synology etc that meets some perhaps lofty goals:
Gee, I wonder why they haven't made it.
>Support for 6-8 NVMe drives
One NVMe drive will saturate a 10Gb link, so if you just need more fast-tier capacity, buy a couple of those bifurcation cards and pop them in until you have enough drive slots. You'd need a CPU/board that supports bifurcation and enough slots, of course, but since you're looking into the higher-end stuff that shouldn't be a hard get.
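To put rough numbers on that (ballpark, assumed figures, not benchmarks):

```python
# Rough sanity check: can a single NVMe drive saturate a 10Gb link?
# All numbers are illustrative assumptions.

link_gbps = 10                  # 10GbE line rate in gigabits per second
link_gb_per_s = link_gbps / 8   # ~1.25 GB/s before protocol overhead

nvme_gb_per_s = 3.5             # typical PCIe 3.0 x4 NVMe sequential read

print(f"10GbE ceiling : {link_gb_per_s:.2f} GB/s")
print(f"One Gen3 NVMe : {nvme_gb_per_s:.2f} GB/s")
print("A single drive can saturate the link:", nvme_gb_per_s >= link_gb_per_s)
```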
>Also - I assume the CPU will have to handle a LOT when processing all the drive data
What "processing" of "drive data" are you expecting it to do? Pulling data off drives and putting it on the network isn't all that CPU intensive, and even less so if you use jumbo frames.
Sorry to bring up an old thread, but I was hoping to clarify something, as I'm still struggling to understand this. If I read this correctly, I can pick up a 24-bay R730 XD, put one of these ASUS cards in, and have 4 x NVMe drives available to the OS?
If I'm correct so far, then I could either do a software RAID on those, or just put Proxmox on and have both blazing-fast storage for VMs and a decent chunk of spinning storage for media....
Am I missing something?
I was using this: https://www.amazon.com/gp/product/B07NQBQB6Z/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
But since there's no bifurcation support, I'm thinking of returning it and getting an adapter that supports non-bifurcation motherboards. I'm hoping that's the issue.
According to my local retailer there is a second M.2 socket that runs SATA, but I could not see it. I would personally just go for a 2.5" SSD, as it would cost less, and keep the M.2 as my boot drive. If you are wanting to get another M.2, though, you might have to get an expansion card; this should not slow the speeds, as it still goes through PCIe lanes. I would recommend something similar to this:
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
This is the one I used and it didn't work. I'm assuming it is different than the one you posted?
ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 Supports 4 NVMe M.2(2242/2260/2280/22110)Up to 128 Gbps for Intel VROC & AMD https://www.amazon.ca/dp/B07NQBQB6Z/ref=cm_sw_r_apan_glt_i_GEK9MN164VXJM3F1EGN7
You can actually fit a lot more M.2 than just the 2, since if you use only x8 lanes in the primary PCI-e slot, you can devote more to the other slot and use an M.2 PCI-e card like this:
There’s a ton of good X570 motherboards out there, like the MSI MEG X570 Unify, the ASUS X570-E, and Gigabyte Aorus X570 Master. Watch some reviews, especially from the likes of Buildzoid, Gamers Nexus, Hardware Unboxed, etc..
You should look into AMD cards too, some of them are available on the shelves right now.
If your motherboard supports PCIe bifurcation you can use one of the $50 4x cards to connect 4 NVMe drives.
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
You can then RAID these in your OS and have 4x the SLC cache and 4x the throughput, so you will be unlikely to fill the cache.
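A rough sketch of what that pooling means in practice; the per-drive SLC cache size and ingest rate below are just assumptions for illustration:

```python
# Illustration of the "4x cache, 4x throughput" point: striping four drives
# in the OS pools their SLC caches. Figures are assumptions, not specs.

drives = 4
slc_cache_per_drive_gb = 100      # assumed dynamic SLC cache per drive
ingest_gb_per_s = 1.25            # e.g. a saturated 10GbE link

total_cache_gb = drives * slc_cache_per_drive_gb
seconds_to_fill = total_cache_gb / ingest_gb_per_s
print(f"Pooled SLC cache: {total_cache_gb} GB")
print(f"Time to fill it at {ingest_gb_per_s} GB/s: ~{seconds_to_fill / 60:.1f} minutes")
```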
Honestly it's absolutely worth buying an NVMe expansion card, since everyone only runs a single GPU now anyway. This one from Asus is $80 and supports an additional 4 SSDs, which is well worth it considering gen 3 NVMe and SATA SSDs are generally about the same price.
Sorry to bother again, would something like this allow for software raid, and how can I know on my own next time? Also the fact that I wouldn't be able to use any other slots on the machine wouldn't be a huge issue as it is a minecraft server that I remote into.
They do; you need the PCIE lanes to handle it, tho.
They currently come in 2 flavors: one type is RAID on the card, and the other type presents them as individual drives (but your MB has to support PCIe bifurcation to turn a PCIe x16 slot into an x4/x4/x4/x4 slot on the bigger ones).
I thought you meant keeping up with the 2Gb external connection. Didn't realize you meant you were trying to keep up with the 10Gb local network.
Generally that takes storage arrays. Here:
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
just run Storage Spaces on 4x NVMe :D
I was planning on getting a mobo with dual PCI-e x16 slots, then getting a gen 4 PCI-e storage card that has 4 slots for M.2 NVMe drives (like this, only gen 4), and then slotting 1-2 TB of NVMe drives in there (probably 4x 250GB drives, but if I find a good deal on 500GB, obviously jump at it). I can drop that into a gen 3 slot, but if I'm going through all that, might as well get that little bit more out of it.
Interesting, I missed that. Thanks for catching that. I couldn't find anything that supports just 2 M.2 SSDs.
However, you can get one which supports four, like the ASUS Hyper M.2 (4xM.2) adapter, and just use two of the slots if that's all you need.
Let's also remember that this was in 2017, so it has had 3 years of development since then; Linus said it needed more time for drivers to progress, which has happened, and the device has great reviews. The big issue was just using a Threadripper, since it's technically 2 CPUs.
Also, the PS5 hasn't been independently reviewed for real-world bandwidth, so I'd assume that the 8GB/s is theoretical until a review backs up the marketing, as Linus is testing real-world performance, not "AMD's BS raw file system stuff".
Finally, the LTT video shows this working as a bootable drive, which means everything can be running at those speeds, whereas the PS5 architecture could have this as an Optane-like drive where it's only for the game that's currently running. This means that while the game could run at the high speeds, you could have an OS that's slow and clunky, as opposed to this full solution.
Also, the device 100% absolutely is available, IDK what you're talking about. It's $55 on amazon with great reviews and people achieving up to 7 GB/s real world performance: https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
So all in all, the PC market is not only caught up, it looks like it's ahead already.
Agree with the 1TB SSD option. But the Asus Z390-A Prime supports NVMe SSDs, so why not get an NVMe M.2 PCI-e card and a 1TB NVMe SSD instead? That will improve the SSD speed by 3-4x. https://smile.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z?sa-no-redirect=1
Also, I would get 32GB of memory if you are doing AI programming.
Does your motherboard support PCIe bifurcation? If so, any of the $50 4x cards will work. It has to be installed in a proper x16 slot.
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
If not, you will need one of the $500 cards with a PLX chip onboard
https://www.amazon.com/High-Point-SSD7103-Bootable-Controller/dp/B07S8F1BGJ
So 4 individual single nvme cards might be more cost effective if you have the slots.
or you could get something like this in the future https://www.amazon.co.uk/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
Hey, I'm using this: https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
Works great.
I found an ASUS Hyper M.2 x16 Card V2 on amazon.in.
Costs Rupees 9500 or about USD 117.
You can get something like this to add more NVMe drives to your cache pool.
As someone who loves SFF builds, it could be a cool project. A few immediate challenges come to mind.
If I were you, I'd try to split the problem into 2 halves: (1) build a tiny ITX box as the head unit, and (2) build a custom enclosure for the drives.
The ITX part is relatively easy, and you could solve the specific problems of the disk enclosure (fans, power, compact drive layout, etc.) on their own. You could have something like this 3D printed as a start.
Or, get a Meshlicious, use a full-size ATX PSU, sticky-tape all the drives wherever they'll fit on the graphics card side, and put a fan on the top 🤷‍♂️. Depends on how much of a project you want to make it.
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
Slap this bad boy on your second x16 slot and go to town
Looking to buy the hyper m.2 expansion card here
I have a Gigabyte Z390 UD with an i5-9600K. I'm reading the MB manual here, and as I understand it, to use the expansion card above I need x4x4x4x4 bifurcation.
I don't think the 4-slot expansion card would work on my Z390? Would it be better to get a 2-slot expansion card instead?
First of all, the idea that motherboard form factor has any effect on CPU performance is nonsense.
What you want is feasible, but it really depends what you consider 'small form factor without losing any performance'. If you need something ultra-portable you can travel with, you'll be up against some difficult choices and will definitely need to change your plan, but if you just need something small to sit on your desk at home then it's not a problem.
ITX boards suffer on memory because they only have 2 DIMM slots and often only support up to 64GB. mATX boards have 4 memory slots and a bit more expandability, but then you have a bit of a problem finding 'small' cases to fit them in.
For 3 monitors you'd also need some kind of dedicated video card, since most motherboards only have 1 or maybe 2 outputs. Fortunately there are some relatively low-cost multi-monitor GPUs for this purpose, so you don't need to get sucked into buying 'gaming' GPUs.
For NVMe storage, you can use a PCIe expansion card such as this to add a lot of extra M.2 drives.
To give you an example, you could build something like this: https://pcpartpicker.com/list/vft7PX
The Asus Hyper M.2 x16 PCIe 3.0 x4 is the only affordable option for four drives. I haven't been able to find it on Amazon for a while, but I think it was about $100 when I got it a year ago. I'm using it in my Unraid server.
It does require your motherboard support PCIe bifurcation.
The adapters that have their own bifurcation chip (can't for the life of me remember the name), are much more expensive, such as this $400 Highpoint card.
You're going to be hard pressed to find a Gen4 card, and anyway, your server doesn't support it, so you're going to be limited to Gen3 speeds.
No matter the generation, an NVMe M.2 drive is going to use four lanes. You can't add more lanes; it's a physical limitation of the interface.
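For reference, roughly what an x4 link is worth per generation (approximate figures):

```python
# What "four lanes" buys you per PCIe generation (approximate usable
# bandwidth after encoding overhead).

per_lane_gb_per_s = {3: 0.985, 4: 1.969}   # commonly quoted per-lane figures
lanes = 4                                  # an M.2 NVMe drive uses at most 4

for gen, bw in per_lane_gb_per_s.items():
    print(f"PCIe {gen}.0 x{lanes}: ~{lanes * bw:.1f} GB/s")
```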
The DRAC won't work with the NVMe. I got this one
https://www.amazon.com/dp/B07NQBQB6Z
and it works like a charm with four 2TB Nytro NVMe drives in a software RAID 5. 🔥🔥🔥🔥 I can tell you the setup was more expensive than the computer.
There's also the Kingston Datacenter SSDs, though they have lower endurance than the Seagate Ironwolf Pros; 876TBW vs 1750TBW for the 1TB models. They're probably cheaper.
https://www.kingston.com/unitedstates/us/ssd/dc500-data-center-solid-state-drive
The Samsung 883 DCT is also an option. They have PLP and have 1400TBW for the 1TB model.
https://www.samsung.com/semiconductor/minisite/ssd/product/data-center/883dct/
If you're moving that much data, you need a drive with endurance, not these consumer grade drives.
I have NVMe on my X9 server. I'm using one of these. My X10 server has this in an x8 slot. I might eventually add another 2 NVMe drives to my X9 server if the need arises, which is why I have the Asus adapter. Either should work in your X9 board.
You'll need to update the motherboard firmware to 3.4 to get PCI-e bifurcation to work. Had to live without that feature for a year until Supermicro updated their firmware. I was surprised they bothered with the X9's. This is the hack I used to update the BIOS from the web interface.
If you want to take a risk, want 10G speeds, and want a cache that will basically never die from the NAND wearing out, check out this video from Craft Computing. Hopefully you can still find them.
Why spend $100 for 2 when you can spend $55 for 4?
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
I have an R730 that acts as my SAN serving iSCSI targets to my ESXi hosts. This was already providing 8TB+ of SSD storage, but I suddenly found myself needing more storage in a pinch, so I ordered this:
https://www.amazon.com/gp/product/B07NQBQB6Z/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1
And populated it with four of these:
https://www.microcenter.com/product/608649/inland-premium-2tb-ssd-3d-nand-m2-2280-pcie-nvme-30-x4-internal-solid-state-drive
Set the PCI-e bifurcation to x4x4x4x4, and was in business. I set up a software-based RAID 5 array and had my hosts use the 6TB as a VMFS6 datastore. I have most of my top-level infrastructure running on it and I can honestly say I notice a difference; my vCenter is very snappy now.
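For anyone following along, the 6TB figure checks out as plain RAID 5 arithmetic:

```python
# Sanity check on the usable capacity: RAID 5 keeps (n - 1) drives' worth
# of space and spends one drive's worth on parity.

drives = 4
drive_size_tb = 2
usable_tb = (drives - 1) * drive_size_tb
print(f"RAID 5 over {drives} x {drive_size_tb}TB drives: {usable_tb}TB usable")  # -> the 6TB datastore above
```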
No idea where to get the NVMe backplane, but from looking it up, it's really only there so you can stick NVMe drives in the flex bay slots.
Do you need to have the NVMe drives in the flex bay? If you don't actually need the drives in the flex bay, then there are numerous alternative ways to get an NVMe drive into your tower.
An NVMe M.2 drive is really just a PCIe device in a different form factor. You can get an M.2 to PCIe adapter like this or like this, mount your drive, and insert it in any open PCIe slot, and it will work exactly like it would if it were in a flex bay (with the exception of hot-swap).
Alternatively, you can get an SFF-8639 cable, plug it into one of the free U.2 ports on the motherboard, add an M.2 to U.2 adapter, and achieve the same thing.
That's why you get something like this (though, obviously, the PCI-e v4 version): https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/ref=pd_ybh_a_91?_encoding=UTF8&psc=1&refRID=5C8P1CZS6E8RYQFEM64N
An alternative card that I’m fairly certain works with macOS and is much cheaper than the OWC without any storage.
You’ll have to supply your own NVMe’s so it’s ultimately only relatively cheaper.
The catch with this card (and most multi-NVMe cards, I think) is that your motherboard PCIe slot will need to support bifurcation with x4 x4 x4 x4 otherwise you’re only going to get one of the drives being recognized. There are some cards that have an onboard controller that will handle the bifurcation for you, effectively making the card appear as a single x16 or x8 or whatever, but they charge a premium for the additional logic.
I can say from my own experience that this particular card runs at x8 just fine.
https://www.amazon.ca/dp/B07PRN2QCV/ref=cm_sw_r_cp_api_fab_qLoEFb1R5KKB1
You're probably already aware, but I feel the need to mention that you should make sure your motherboard can support the full x16 bandwidth before pulling the trigger, otherwise you'll likely be disappointed. AFAIK, most consumer chipsets only have 16 CPU lanes to allocate. So if you're already using a GPU and you populate the second slot on these chipsets, you'll halve the bandwidth available to your GPU and they'll both run at x8. Do note that with the halving of bandwidth to your GPU, there are varying reports of how this affects performance, from barely at all to notably slower.
All I did was google pcie 4 drive m.2 cards
https://www.amazon.ca/dp/B07NQBQB6Z/ref=cm_sw_r_cp_apa_i_DClHFbZ39KNTE
https://www.sonnettech.com/product/legacyproducts/m2-4x4-pcie-card.html
I'd go a little larger on the NVMe personally. You don't want to run out of space to create new VMs or have to put them on platters. Or, do what I did: use the bifurcation feature of your motherboard and get one of those add-on cards that supports 4 more NVMe sticks, each at x4. Make sure you get an adapter that supports x4x4x4x4 bifurcation and not something that just has its own switching logic. I got this one for $60: https://www.amazon.com/dp/B07NQBQB6Z
That is, of course, if you're not using the slot for anything else. This, plus my single on-mobo NVMe gave me 5 NVMEs I used for raidZ2 (I like my drive redundancy). You could do 6 drives with your two on the motherboard.
Spending other people's money is fun.
You could also free up a SATA for future expansion by booting from USB.
Edit: Strange, your motherboard's bifurcation specs say x8/x4+x4, but not x4x4x4x4. But then it has a footnote that says you can use the Hyper M.2 card I linked and select it in the BIOS (since it's an Asus thing). So I guess their special "Hyper M.2" BIOS setting just enables x4x4x4x4 under a special name instead of listing it outright? I guess that's just a marketing thing or something.
Yes, I have bought and used this one. I quickly moved to the Hyper M.2 adapter once I built the new box, filled its 2 M.2 NVMe slots, and wanted to add 3 more NVMe M.2 2280 drives.
Thanks for this - interesting stuff.
The NVMe drives in question are Corsair Force MP510s, but we couldn't find any mention of them being SATA, so if you happen to have any links about them being SATA, that would be really welcome and appreciated.
Apologies for being a n00b with this stuff. We figured that since the PCIe bus is 32 GBps (with supposed support for ACS/ARI override) and the expansion card is rated up to 128 Gbps, its x4x4x4x4 bifurcation would allow for close to the rated speeds of the NVMe drives/PCIe bus. Again, as we are learning, we'd really appreciate any resources you might have to the contrary.
If a PCIe slot can only deliver to one of the 4 NVMe drives at a time on an expansion card, that's really bad news. The ASUS Hyper card specifically states it's designed to provide 4 times the bandwidth, so I'm wondering what you think they mean by that?
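For what it's worth, the 128 Gbps figure looks like simple addition of four x4 links; a rough sketch with approximate numbers:

```python
# Where the card's "128 Gbps" marketing figure comes from: it is simply four
# independent PCIe 3.0 x4 links added together, one per M.2 slot.

gt_per_s_per_lane = 8        # PCIe 3.0 signalling rate per lane
lanes_per_drive = 4
drives = 4

per_drive_gbps = gt_per_s_per_lane * lanes_per_drive     # 32 Gbps per drive
card_total_gbps = per_drive_gbps * drives                # 128 Gbps across the card

print(f"Per drive : {per_drive_gbps} Gbps (~{per_drive_gbps / 8 * 0.985:.1f} GB/s usable)")
print(f"Whole card: {card_total_gbps} Gbps, only reachable with x4x4x4x4 bifurcation")
```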
Thank you so much in advance for any clarification to what you meant as this is incredibly important to many many people.
Thanks!
Improve the internal airflow... with servers it is the only way.
My T610 wasn't really made for NVMe drives. I use mine with a PCI adapter, and it does not get good airflow over it (it has a plastic shroud that forces the airflow over the processors and the RAM, but all the PCI cards get no airflow).
I have a small heatsink on my NVME cache that I salvaged from E-waste....
but nowadays I would consider something like the ASUS PCI-e x16 4x NVMe adapter
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/
Check compatibility first, but it seems to allow 4 NVMe drives and has a cooling solution on it.
Here is an ASUS for $54 that holds 4 drives
A Silverstone for $37 that holds 2 drives
No experience with them, but I don't think they limit performance, as they are mostly a pass-through. AFAIK.
EDIT: Seems like you might need to check for bifurcation support.
For anyone still interested in this topic, there are 2 versions of the card listed above.
Gen 3 ($58 at time of writing): https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/ref=sr_1_2?dchild=1&keywords=asus+nvme+raid&qid=1595855954&sr=8-2
Gen 4 ($69 at time of writing): https://www.amazon.com/ASUS-M-2-X16-Expansion-Card/dp/B084HMHGSP/ref=sr_1_1?dchild=1&keywords=asus+nvme+raid&qid=1595855954&sr=8-1
M.2 uses PCIe lanes anyway, so I suspect you will get comparable speed. My best guess on why you are running into this issue is that PCIe only has so many lanes, and manufacturers dedicate some to the actual PCIe expansion slots and some to the storage controllers in a balanced fashion. I don't know any boards off the top of my head that prioritize the storage controllers over expansion slots like that. This would basically adapt some PCIe lanes BACK to storage. This is what I've seen recommended for a high-end solution; I don't really have any personal experience with the cheaper solutions, but they do exist.
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/ref=asc_df_B07NQBQB6Z/
Now that I think about it, you may want to prioritize NVMe speed over a potential SATA speed degradation, so you could also just use the onboard NVMe and add a SATA add-in card which removes the risk of the card degrading your fastest drives. Here's an example, but there are literally a million of these on the market and they all look relatively similar.
https://www.amazon.com/N-ORANIE-Adapter-Marvell-Chipset-Non-Raid/dp/B07H8CXK9F/
Giving it some more thought, I would definitely go with the second option.
Ryzen CPUs have 24 PCIe lanes - the graphics card takes 16, each M.2 NVMe SSD takes 4, and everything else shares 4.
Since you're saying "I'll likely add some 1TB M.2 drives", you want a small PC under $1,000, and you want to reuse your existing SATA SSDs, you're gonna have a lot of devices sharing 4 lanes, and it will seriously bottleneck. Your motherboard chipset, WiFi, Ethernet, all but 1 SSD, all your hard drives, audio, and Bluetooth will all be sharing the same 4 lanes.
My advice is to go with a Ryzen 2000 Threadripper, which has 64 lanes. You can then squeeze in a card like this to add 4 NVMe SSDs: https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z/
And a PCIe 2.5 Gigabit Ethernet adapter: https://www.amazon.com/dp/B08952DDML/
5 NVMe SSDs (1 on mobo + 4 on card) - 20 lanes
1 GPU - 16 lanes
1 2.5 Gigabit adapter - 1 lane
Built-in Bluetooth - 1 lane
That's 38 lanes + the chipset
Ryzen offers 20 lanes for your devices + 4 for the chipset. 20 lanes is half of what you need.
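Putting the same arithmetic in one place (counts taken from the list above):

```python
# Adding up the lane demand from the build above versus what a mainstream
# Ryzen CPU exposes. The counts mirror the list a few lines up.

lane_demand = {
    "5 NVMe SSDs (1 on mobo + 4 on card)": 5 * 4,  # 20 lanes
    "GPU": 16,
    "2.5 Gigabit adapter": 1,
    "Bluetooth": 1,
}

total = sum(lane_demand.values())
print(f"Lanes wanted: {total}")                       # 38
print("Mainstream Ryzen: 20 usable for devices + 4 for the chipset")
print("Threadripper: 64 lanes, hence the suggestion above")
```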
This motherboard has 8 SATA ports as well; with Ryzen, when you use an M.2 device, you're cut back to 4 SATA ports. This is not unique to Ryzen; Intel does the same. Modern consumer platforms are not made for someone with as many devices as you have.
Type | Item | Price |
---|---|---|
CPU | AMD Threadripper 2920X 3.5 GHz 12-Core Processor | $487.66 @ Amazon |
CPU Cooler | Noctua NH-U9 TR4-SP3 46.44 CFM CPU Cooler | $69.90 @ Amazon |
Motherboard | ASRock X399M Taichi Micro ATX sTR4 Motherboard | $329.99 @ Amazon |
Memory | GeIL EVO SPEAR Phantom Gaming 16 GB (2 x 8 GB) DDR4-3200 CL16 Memory | $52.99 @ Newegg |
Memory | GeIL EVO SPEAR Phantom Gaming 16 GB (2 x 8 GB) DDR4-3200 CL16 Memory | $52.99 @ Newegg |
Prices include shipping, taxes, rebates, and discounts | ||
Total | $993.53 | |
Generated by PCPartPicker 2020-09-15 14:02 EDT-0400 |
Everyone's jerking Sony off over "fastest SSD ever" despite LTT doing it in 2017 here: https://youtu.be/lzzavO5a4OQ
and it's readily available for $55 on amazon here: https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
That's also without mentioning the lack of real-world numbers compared to theoretical ones, and the fact that this gives your whole drive that performance, not just the game.
The PS5 is already 3 years behind.
Depends on budget. If you want to be able to add disks as you need the space, you'd have to go with a Drobo (or Unraid, yuck). Drobo has a USB-C unit (that's compatible with USB 3) and a Thunderbolt unit, while QNAP just has Thunderbolt 2 and 3.
QNAP if you want to build it out. Maybe get a 16 bay and then an expansion module when you need to double it. They just came out with some nice expansion units that offer full speeds. But if you’re transferring a lot of small files, the only thing that will really make that work well is using all SSDs. You could try using one of the few enterprise QNAPs that use SAS drives, but I think that would only improve multi user transfers or if you plan on doing a lot of reading and writing at the same time.
If you’re usually accessing the same small files, you could just put a few SSDs into the mix as cache. This works with drobo and qnap. Might even work for all your needs.
10Gb Ethernet a must. Or thunderbolt. Qnap and drobo support thunderbolt 3 which can give you double the speed of 10Gbe. Only downside is you need to be close to your drives. Though there is finally one company that makes a fiber thunderbolt cable up to 100m I believe. Expensive of course.
And ... accessing your volumes via iSCSI helps a lot. Instead of mounting as a SMB share. But using Thunderbolt 3 has always been the fastest for me. (Haven’t ever tried dual 10Gbe aggregated on the client side)
On the lower end, Mobius still makes a USB3 unit, 5 bay, that can do hardware RAID or just a bunch of disks. I've had many of them. Only downside is that it's slow. The RAID card in it isn't the best. That's typically the case with USB3 housings.
I also recently bought and setup a USB3 Mediasonic H82-SU3S2 ProBox 8 Bay ($270 amazon) because I had a bunch of spare drives around. (Of the same make and model). Works pretty well. Again can be raid or just act as a hot swap for disks.
Thinking out of the box: why not get 4 or 5 M.2 NVMe to USB-C adapter enclosures (or USB 3.1 enclosures) and set up a RAID with SoftRAID? Mount them inside a metal box with a 120mm fan, and put a USB 3.1 hub inside.
Or this 4-NVMe PCIe card, 3500 MB/s: ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 Supports 4 NVMe M.2 (2242/2260/2280/22110) up to 128 Gbps for Intel VROC and AMD Ryzen Threadripper NVMe RAID
Or an enclosure: OWC Express 4M2 4-Slot M.2 NVMe SSD Enclosure w/ Thunderbolt 3 Ports RHQ
The games you mentioned need only a tiny fraction of that GPU power, and Photoshop takes little to no advantage of the GPU outside very specific tasks. Do your specific workloads make full use of GPU acceleration?
The prices on those SSDs are comically absurd; you could get 9TB (1x 1TB + 4x 2TB) of NVMe storage for the same cost if you went with a PCIe card for them. Alternatively, a single 4TB Sabrent Rocket is still cheaper than that Samsung abomination.
PSU is ludicrous, it would be fine with less than half that.
Case is overpriced and nothing special; you could get a high-quality Fractal Design or NZXT case for half that price instead of a box with minimal filters and no soundproofing. Not a big concern, though.
I know very little about 3rd-gen TR boards, so I can't comment on that. Gigabyte has a horrible track record for AMD in general, though.
Something like this PCIe x16 to 4 NVMe card with the following would be cheaper and better. Also a much better GPU for the same price. PCPartPicker Part List
Well, it depends on what interface you're looking for. If you are looking for an external one using USB 3.0, I can't find any and I doubt any exist due to bottlenecking. If you're looking for an internal PCIe expansion card, then yes, they exist. From a quick Amazon search:
https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQBQB6Z
https://www.amazon.com/EZDIY-FAB-Adapter-Heatsink-Support-Expansion/dp/B07JGRJS5X
My question is: would you even get a performance boost if you exceed what the CPU is capable of using?
Yeah, you can buy a quad m.2 NVMe board. It won't work as hardware RAID unless you have a specific motherboard for it, but is there any reason you're looking for 4x NVMe drives?
> There isn't really 16 lane version of u.2
And there aren't any controller chips that support more than 4x NVMe / PCIe 3.0 lanes that I'm aware of. Basically, SSDs can only support 4x lanes. Even Optane is only 4x lanes.
RAIDed arrays of 4x 4-lane drives are pretty nifty. But that's utilizing the M.2 form factor more so than anything else.
> This adds a bit to latency.
NVMe SSDs have an IOPS rating of around 100,000, or roughly 10 us of latency per operation (best-case scenario... and that's a big stretch). Cable length and buffers have a latency of 0.005 us, or roughly on the order of nanoseconds. It's a complete non-issue.
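To make the comparison concrete, using the rough figures above:

```python
# Comparing the two latency scales in the argument above: per-operation
# drive latency (derived from the IOPS figure) versus cable/buffer delay.

iops = 100_000
drive_latency_us = 1e6 / iops        # ~10 microseconds per operation
cable_latency_us = 0.005             # ~5 nanoseconds

print(f"Drive : ~{drive_latency_us:.0f} us per operation")
print(f"Cable : ~{cable_latency_us * 1000:.0f} ns")
print(f"The drive dominates by roughly {drive_latency_us / cable_latency_us:.0f}x")
```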
The only one working on breaking down the latency issue is Intel Optane with their DIMM-based SSD arrays. Breaking the microsecond barrier will probably require something crazy, like Gen-Z or whatever. For now, with PCIe-based technology, U.2 and M.2 are sufficient. No real point getting PCIe add-in-card SSDs; it's cheaper and more effective to join the mass production of the industry and support U.2 and M.2.
Yes. However, if you are not planning to have any redundancy (say, RAID 1), then you shouldn't have to worry about getting drives that are the same model; you can do it any way you want.
If you really wanted to RAID NVMe drives, I would look at the Asus Hyper M.2 x16 card; you get 4 more slots. However, it is limited when it comes to compatibility and only works with some motherboards.