AFAIK, the hardware prevents a drive from spinning up unless there is a disk; but you might be able to modify (solder) the drive to work that way. You could use fio to put some load on the drive so it spins/seeks as much as possible.
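For example, a small random-read job like this (a rough sketch; the device path is just a placeholder for your setup) should keep the mechanism seeking constantly:

# seek-noise.fio -- run with: fio seek-noise.fio
# placeholder device; point filename at the drive you want to exercise
# random 4k reads force constant seeking, and rw=randread means nothing gets written
# direct=1 bypasses the page cache so every read actually hits the drive (drop it if the device rejects O_DIRECT)
[seek-noise]
filename=/dev/sdX
rw=randread
bs=4k
iodepth=16
ioengine=libaio
direct=1
runtime=300
time_based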
If the drive is retractable, you could have a script continuously eject/retract the drive, which makes some noise.
As an alternative, the CPU fan is usually pretty loud and can easily be pushed to max speed with fancontrol or heavy CPU load, and similarly for the GPU.
> I'd need to see at least a 20% boost (IMO)
You could try testing on c5 spot instances?
> runs a little more predictably with more RAM
At least in my experience that seems to be the case with many PHP apps. Which PHP version?
> maybe even r5?
r5 for CPU-bound? I had mixed results for CPU-bound apps (requiring extra memory) on r4s.
I did some experiments using fio against m4s and r4s with similar specs (~memory) a little over a year and a half ago. Very niche db workload and configs (block size, etc.), but the m4 typically gave a 1.5x performance boost with the same EBS config.
Might be worth re-running those tests w/m5 versus r5 in the future.
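For reference, the jobs were roughly of this shape (the block size, read/write mix and queue depth here are placeholders, not the actual configs I used):

# db-style.fio -- mixed random I/O against an EBS-backed volume
# directory is a placeholder mount point; bs should match your database page size
[db-style]
directory=/mnt/ebs-test
filename=db_test_file
size=10g
rw=randrw
# 70% reads / 30% writes
rwmixread=70
bs=8k
iodepth=32
ioengine=libaio
direct=1
runtime=120
time_based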
You might want to check out fio, which is already somewhat portable (Linux, the BSDs, and Windows), and maybe write a GUI for it instead of redeveloping everything from scratch. fio has lots and lots of options to make your test scenario exactly the way you need it, but I found its configuration somewhat confusing; a good UI would be very helpful.
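To give an idea of what the configuration looks like, here is a minimal annotated job file (paths and sizes are made up):

# minimal.fio -- run with: fio minimal.fio
# options under [global] apply to every job; each other [section] is one job,
# and fio reports results per job
[global]
directory=/tmp/fio-test
size=1g
ioengine=libaio
direct=1
runtime=60
time_based

[randread-4k]
rw=randread
bs=4k
iodepth=16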
> https://github.com/axboe/fio/blob/master/examples/ssd-test.fio
Oh, wow, I didn't know about the runtime argument. That's pretty neat!
I'm not sure how reliable just running that test at various bs values is for uncovering the underlying structure of your SSD, though. I think you probably still need to destroy and recreate the pool with the various ashift values to be sure.
fio is what I use. Its capabilities are vast, but for simple tests it is easy to configure and one of the example configurations may suffice for your purposes. You can use it to test performance of either a raw disk or a filesystem on top of that disk.
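A sketch of the difference (paths are placeholders): point filename at the device for a raw test, or point directory at a mount point to test through the filesystem.

[disk-or-fs]
# raw device test -- careful, writing to a raw device destroys whatever filesystem is on it:
# filename=/dev/nvme0n1
# filesystem test -- fio creates its own test file under this directory instead:
directory=/mnt/test
size=4g
rw=randread
bs=4k
iodepth=32
ioengine=libaio
direct=1
runtime=60
time_based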
You probably need to dig into the options (or even the source code) to understand exactly what it does, but no two benchmarks from different programs will be the same; I wouldn't bother comparing results from two different programs. There do seem to be some cache control options under -S, though.
Personally I usually use fio, as it is multiplatform, has a ton of backends, and is included in most Linux distros (there is also a precompiled Windows version, although I haven't tested it). I use a config with a variety of workloads.
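Something along these lines (a sketch; sizes and depths are arbitrary), with stonewall so the jobs run one after another instead of in parallel:

# workloads.fio -- directory is a placeholder; on Windows swap libaio for windowsaio
[global]
directory=/mnt/test
size=4g
ioengine=libaio
direct=1
runtime=60
time_based

[seq-read]
rw=read
bs=1m
iodepth=8

[seq-write]
stonewall
rw=write
bs=1m
iodepth=8

[rand-read-4k]
stonewall
rw=randread
bs=4k
iodepth=32

[rand-write-4k]
stonewall
rw=randwrite
bs=4k
iodepth=32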
Use this instead: https://github.com/axboe/fio
Just doing scp/cp/dd/nc/rsync will probably end up giving you mixed results that are hard to explain, since Linux does a lot of smart things to get files written and read faster.
For example, with dd you can specify flags to write directly to disk; cp would look a lot faster by comparison, since it writes into a buffer in the RAM of the target system. There are also file caches in RAM on both systems.
With fio you get a complete test suite with a lot of metrics other than just bandwidth: you get IOPS and latency, which are very relevant as well, since there can be network issues on either system.
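For example, fio lets you control the caching explicitly; the same write job gives very different numbers with and without the page cache (a sketch, the path is a placeholder):

# direct=1 means O_DIRECT, so the numbers reflect the disk;
# direct=0 is buffered, so the numbers can reflect RAM until the cache is flushed.
# end_fsync=1 makes fio fsync at the end so the flush is at least included in the timing.
[global]
directory=/mnt/test
filename=cache_test_file
size=2g
rw=write
bs=1m
ioengine=libaio
iodepth=8
end_fsync=1

[direct-write]
direct=1

[buffered-write]
stonewall
direct=0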
Look at you citing sources....you sure you're a gentoo user?
In testing, the fsync issue is resolved. XFS and ext4 are equals when it comes to that issue. Testing was mainly done with fio, with some gcc compiles in there. All done on DL3xx SSDs and a few different PCI-based flash cards, on RHEL 6 & 7.
General recommendations from experience: it probably doesn't matter. Go with the distro default. If you wanna get into it, xfs for many files you need to access semi-randomly, ext4 if you've got a few large files (database type shit). EDIT: No, I can't post benchmarks; this was all internal testing.
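If anyone wants to poke at the fsync behaviour themselves, a job along these lines (a sketch, not our internal config) makes the difference show up in the latency numbers:

# fsync=1 issues an fsync after every write, like a database commit;
# point directory at an XFS or ext4 mount to compare the two.
[fsync-heavy]
directory=/mnt/test
filename=fsync_test_file
size=1g
rw=randwrite
bs=4k
fsync=1
ioengine=psync
runtime=60
time_based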
Strange, I get faster read results on my Sabrent Rocket 1TB (PCIe 3). And with a nocow directory it's close to 3 GB/s for read and write.
What do you get with fio?
Try these settings:
[readtest]
directory=/mnt/storage0/bench/fio/
filename=fio_test_file
direct=1
buffered=0
size=1g
startdelay=3
ramp_time=3
runtime=5
time_based
ioengine=io_uring
force_async=4
rw=read
bs=1m
iodepth=8
Change the directory and save it to a text file. Then you can run it with fio, e.g.:

fio read.fio

If you want to use the more common libaio, switch ioengine to libaio and remove force_async=4.
if you're looking for throughput numbers, use fio: https://github.com/axboe/fio
if you don't want to build from source, there are packages for most linux distros. check the examples directory in the github link for job files, but fio-seq-{read,write} should get you started
that won't exactly replicate your transfer/copy commands, but it should give you the bandwidth of each of your local file systems. if you were to copy amongst them, the most throughput you'll get is the lower of the two results.
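roughly the shape of those jobs, if you'd rather write your own (a sketch; the path and sizes are placeholders):

# seq-throughput.fio -- run once per filesystem you want to measure,
# pointing directory at each one in turn
[global]
directory=/mnt/fs-under-test
size=8g
bs=1m
iodepth=16
ioengine=libaio
direct=1
runtime=60
time_based

[seq-read]
rw=read

[seq-write]
# stonewall makes this job wait until the read job has finished
stonewall
rw=write
end_fsync=1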
I'd be wary of using dd to benchmark a drive, as explained here. My understanding is that while it was probably a fine benchmarking tool in the past, modern drives are built in such a way that caches etc. will skew the results.
fio might be a better option; apparently it's what the Linux folks use for IO tests.
you'd probably want Cygwin running on your Windows machine ... and compile bonnie++ (it exists in OpenBSD ports), or get the fio source. of the two, bonnie++ is more well known.
everything will be IO bound, so Cygwin won't really matter (???) or affect the actual measurement of the underlying filesystem (NTFS vs FFS) and disk.
I'm not sure I see a real problem. Your tests all seem to use a large request size, because the throughput in MB/s is amazing, and that would explain the low number of IOPS. Sequential and 'random' reads seem to bottleneck at the same MB/s, so could that be networking? Check the interfaces of both the storage nodes and your client.
1500MB/s sounds like an odd number.
If you really want to test your cluster, use FIO for testing. https://github.com/axboe/fio
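To separate IOPS from raw bandwidth, a small-block random-read job is the usual check (a sketch; adjust the path and size for your cluster mount):

# 4k blocks so you hit the IOPS limit rather than the bandwidth limit;
# a few parallel workers (numjobs) keep the queues full, group_reporting sums them up.
[cluster-iops]
directory=/mnt/cluster
filename=iops_test_file
size=10g
rw=randread
bs=4k
iodepth=64
numjobs=4
group_reporting
ioengine=libaio
direct=1
runtime=120
time_based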
But tests using tools like fio show otherwise. On the exact same hardware platform, you can benchmark StorageOS's architecture against Ceph/Rook and see completely different performance. What's more, Ceph can leverage spindles from multiple servers when needed.