No performance stats in the WebUI, but you can see what's going on using drbdadm and drbdtop from the CLI.
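For example, on a node you can check things from the shell like this (the resource name `r0` is just a placeholder; `drbdtop` may need to be installed separately):

```
# Overall state of all DRBD resources (role, disk state, connection to peers)
drbdadm status

# Same, but scoped to a single resource -- "r0" is an example name
drbdadm status r0

# Interactive, top-like live view of resources and sync progress
drbdtop
```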
In this use case, the DRBD satellites are your PVE hosts. The controller manages the satellites and also hosts the API that Proxmox talks to in order to provision DRBD storage when you create containers and VMs.
After it is provisioned, your DRBD storage will continue to operate even if the controller is offline, but you won't be able to make any changes to it until your controller is operational again. So, you can run the controller on one of your hosts, or you can create a HA container or VM and run your controller on that instead, ensuring that a controller is always running in your cluster somewhere.
This article goes into more detail. Since I'm not a paying subscriber to Linstor, I do not have access to their virtual appliance. But setting up a Debian container, adding the Linstor repo, and installing the controller software is pretty easy, so I don't think their appliance is really necessary.
https://www.linbit.com/blog/linstor-setup-proxmox-ve-volumes/
https://kvaps.medium.com/deploying-linstor-with-proxmox-91c746b4035d
It's a bit outdated and some things might have changed, but basically:
1. Install linstor-controller on some server (a VM or LXC is fine)
2. Install linstor-satellite on each PVE node where you want to use DRBD as storage
3. On each PVE node, create a zpool (you should be able to put linstor-controller on this storage; I use a different zpool for local storage than for DRBD)
4. From the linstor controller, connect to the satellites
5. Configure storage as per the instructions in the provided links
6. Edit /etc/pve/storage.cfg and add the DRBD storage
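A rough sketch of steps 4–6 with the LINSTOR CLI and the linstor-proxmox plugin. All node names, IPs, zpool names, and the resource-group name here are invented placeholders, not from the post:

```
# Step 4: on the controller, register each PVE node as a satellite
linstor node create pve1 192.168.1.11
linstor node create pve2 192.168.1.12

# Step 5: back a LINSTOR storage pool with the zpool from step 3,
# then define a resource group that places 2 replicas
linstor storage-pool create zfs pve1 drbdpool tank/drbd
linstor storage-pool create zfs pve2 drbdpool tank/drbd
linstor resource-group create defaultpool --storage-pool drbdpool --place-count 2

# Step 6: add an entry for the linstor-proxmox plugin to /etc/pve/storage.cfg
cat >> /etc/pve/storage.cfg <<'EOF'
drbd: drbdstorage
    content images, rootdir
    controller 192.168.1.10
    resourcegroup defaultpool
EOF
```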
In those tutorials they put linstor-controller on the DRBD storage itself; I don't recommend that. I keep mine on a separate zpool that's replicated to each node every 30 minutes to achieve HA.
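A minimal sketch of that kind of periodic replication with ZFS snapshots (dataset and host names are made up; a real setup would use incremental sends, or a tool like syncoid, rather than full sends every run):

```
# Snapshot the dataset holding the controller's disk, then ship it to another node.
# Run this from cron, e.g.: */30 * * * * /usr/local/sbin/replicate-ctrl.sh
SNAP="tank/linstor-ctrl@$(date +%Y%m%d%H%M)"
zfs snapshot "$SNAP"
zfs send "$SNAP" | ssh pve2 zfs receive -F tank/linstor-ctrl
```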
Linstor might fit your needs, with a few caveats:
First, multi-master is great! Until it stops working. Per another comment, I found Galera performs poorly on a MySQL server that is busy writing/updating and actually stressing the disk/hardware I/O.
You are not far off from a reasonably good architecture with your current setup. You can script promoting a slave any number of ways; the most reliable is probably keepalived, and there are lots of examples out there. Know that keepalived isn't really designed to promote/demote nodes in an interactive way. It's not bad. It's just not a Pacemaker cluster.
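A stripped-down keepalived config for floating a VIP over the database pair might look like this. The interface, VIP, and the promotion script path are all placeholders I've invented for illustration:

```
# /etc/keepalived/keepalived.conf on the preferred master (illustrative values)
vrrp_instance MYSQL_VIP {
    state MASTER            # "BACKUP" on the standby
    interface eth0
    virtual_router_id 51
    priority 150            # lower (e.g. 100) on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
    # Hypothetical script that promotes the local replica when this
    # node takes over the VIP
    notify_master /usr/local/sbin/promote-mysql.sh
}
```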
Don't use a NAS with a 1 Gb NIC as the backend. It won't scale, and migrating off of it later will be difficult.
If you aren't already married to MySQL, PostgreSQL will probably do the job better, especially if your IoT devices send JSON. Otherwise, know that your log server should be its own SQL server, optimized for writes (a ton of memory isn't required). Then replicate the data to a second server that has the indexes and RAM for fast SELECT queries.
Lots and lots of examples of running active/passive MySQL Linux clusters out there. Know that you will need at least three servers; the two-node examples using Pacemaker won't fail over in a predictable way. Here's a very thorough DRBD example: https://www.linbit.com/downloads/tech-guides/DRBD8_MySQL_HA_on_the_Pacemaker_cluster_Stack.pdf
There are also examples out there using NBD to get failover. It's not a bad solution, but you need at least three hosts with 10 Gb NICs that support SAN traffic (jumbo frames?), and you still need a heartbeat cluster.
Note that I've done all of the setups described on absolutely vanilla desktop hardware with a few Intel NICs and dedicated drives for the database, and it worked great.
Hi, I used DRBD (https://www.linbit.com/en/high-availability/) for more than 10 years at my previous job, and I believe it's still in use there.
In my case I used it for SMB file sharing; my experience with VMs wasn't good because the disk replication added significant latency.
But maybe with new disks and new hardware your experience is different.
> Docker is great if you have a bunch of services/applications that you want to keep segregated from each other on a filesystem level, but only have one host to run them on
Though it's worth bearing in mind that you can start running Docker with persistent volumes on top of DRBD (Here is a good primer on the subject) and if you run your Docker containers as services in a swarm you have a "cheap and dirty" failover, replication and so on. I have been playing with it recently and have 3 servers in my swarm so far... it's really nice to be able to move my mail server, mail front end and so on around as I need to in order to maintain my systems with virtually no downtime.
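The moving-services-around part looks roughly like this. Service names, node names, the image, and IPs are all invented for illustration:

```
# Initialize a swarm on the first node, then join the others with the printed token
docker swarm init --advertise-addr 192.168.1.11

# Run the mail server as a swarm service with a published port
docker service create --name mail --publish 25:25 --replicas 1 my-mail-image

# Drain a node for maintenance; its services reschedule onto the other nodes.
# Set it back to active when you're done.
docker node update --availability drain node2
docker node update --availability active node2
```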
It's also worth understanding that Docker swarms have their own overlay network that runs over the wire, so ports published from containers on one host are also reachable via any other node in the swarm (the routing mesh)... pretty cool stuff. Though I front-end most of my services through the HAProxy running on my pfSense box :)
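Because of that routing mesh, the HAProxy backend can point at any swarm node. A minimal sketch (the backend name, ports, and node IPs are placeholders):

```
# Fragment of haproxy.cfg -- any swarm node works as a backend target,
# since the routing mesh forwards to wherever the container actually runs
frontend www
    bind *:443
    default_backend swarm_web

backend swarm_web
    balance roundrobin
    server node1 192.168.1.11:8443 check
    server node2 192.168.1.12:8443 check
```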