Yes, it does have too much information. The usernames are the biggest problem; everything else that actually means anything could be figured out with relative ease by anyone who stumbled across the public IP address, or it's information that just doesn't matter.
One thing: PHP scripts that call server binaries directly usually aren't a good thing to have running on a publicly facing server.
Public status pages should be traffic lights, possibly with comments, but no more. This is more like a monitoring solution, albeit one you have to look at yourself rather than one that reaches out to you when needed.
If it's just a single server you're planning on dealing with, you're probably better off trying something like monit, which can take proactive action and reach out to you should it need help.
nagios + nconf made things much easier for me. These days, everyone is saying to switch to Icinga, a fork of nagios, because it has a better API and web interface. I set up my nagios installs before I had really heard about Icinga, but it's supposed to read the nagios config just fine (and nconf is supposed to work with it as well). I just haven't found the time to try to make that switch.
EDIT: For smaller deployments where you don't care about dependencies, you might like monit. It is much simpler to configure, but (at least when I last used it) it had no concept of parent/child relationships, at least as far as automatically suspending monitoring goes.
Then install something like monit or another reactive monitoring system. Add a check that tests for the specific problem you have, have it restart/correct the specific service with the problem, and have it send out notifications to everyone so that you can continue working on fixing the root problem.
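For a concrete sketch: monit (5.x) can run an arbitrary check script and fire a repair command when it fails. Everything below (check name, script paths) is hypothetical, just to show the shape:

```
# hypothetical: run a custom test script; if it exits non-zero,
# run a repair script. monit will also email on the state change
# if alert recipients are configured.
check program app_health with path "/usr/local/bin/check_app.sh"
  if status != 0 then exec "/usr/local/bin/fix_app.sh"
```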
If installing a dependency is an option, on non-systemd systems use something like Monit or Daemon Tools. If you are building your own daemon, it is probably going to be easier to use one of these than to write your own init script and juggle PIDs and stuff yourself.
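If you go the daemontools route, the whole "init script" reduces to a run script in the service directory, and supervise restarts the process whenever it exits. A minimal sketch (binary path is a placeholder; the daemon must stay in the foreground):

```
#!/bin/sh
# e.g. /service/mydaemon/run - supervise re-runs this whenever it exits
exec /usr/local/bin/mydaemon --foreground 2>&1
```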
Do you have unattended-upgrades installed? There's a switch to have it install AND reboot; maybe that's why it rebooted?
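On Debian/Ubuntu the relevant knobs live in /etc/apt/apt.conf.d/50unattended-upgrades; if the reboot option is on, that would explain it. Roughly:

```
// reboot automatically after an upgrade that requires it
Unattended-Upgrade::Automatic-Reboot "true";
// on newer versions you can also pin the reboot to a time
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```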
There have been many threads on here about monitoring. If you're going to go the Nagios route, check out the OMD distro (http://omdistro.org/). I just started using it; check_mk is especially nice.
Something simpler would be monit (http://mmonit.com/monit/), which could email you about reboots and a bunch of other things.
I use http://mmonit.com/monit/ for basic monitoring of processes, file systems, etc. I had to customize each process/folder/whatever I wanted it to watch, but this means I only get emails for things I want (i.e., memory limit reached, processes failing or restarting, etc.). Monit isn't quite the same thing as logwatch, but it might be able to fulfill the same purpose.
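To give a flavor of that per-item customization, a hedged sketch (process name, pidfile, and thresholds are all made up):

```
check process worker with pidfile /var/run/worker.pid
  if totalmem > 500 MB for 3 cycles then restart
  if 3 restarts within 5 cycles then timeout

check filesystem rootfs with path /
  if space usage > 90% then alert
```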
If it is just a single server you could use monit. It can send email notifications and automatically restart services if they fail. I've been saved a couple of times from having to travel to remote sites as it's restarted sshd for me :) M/Monit, the multi-host version, costs money, but monit on a single host is free.
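The notification side is just a couple of global directives plus a per-service check. A minimal sketch (mail server, address, and init-script paths are placeholders and vary by distro):

```
set mailserver localhost
set alert admin@example.com

check process sshd with pidfile /var/run/sshd.pid
  start program = "/etc/init.d/ssh start"
  stop program  = "/etc/init.d/ssh stop"
  if failed port 22 protocol ssh then restart
```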
I don't know of any product that would do that out-of-the-box. Monit can monitor services and even test basic HTTP functionality but for what you're looking for, you would probably need to create custom Selenium scripts that do these tests for you and then run them from your monitoring system or cron or whatever.
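If you run them from cron, the glue can be as dumb as a crontab entry that mails on a non-zero exit (script name and address are hypothetical):

```
# run the Selenium-based check every 5 minutes; mail on failure
*/5 * * * * /usr/local/bin/check_signup_flow.py || mail -s "signup flow check failed" admin@example.com < /dev/null
```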
Check out monit. It can monitor ICMP and files too.
Really simple to set up; after you install it, check out its default configuration - it contains several examples.
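As a taste of how terse the checks are, an ICMP host check might look like this (address is a placeholder; see the monit manual for the exact syntax on your version):

```
check host gateway with address 192.168.0.1
  if failed icmp type echo count 3 with timeout 5 seconds then alert
```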
Have all of your servers log to a central syslog server and use logcheck to monitor and notify you of interesting activity. For custom actions, you could run monit on each server itself (http://mmonit.com/monit/documentation/monit.html#file_content_testing is one way to watch logs, though I don't know off the top of my head whether it knows where it left off on its last check or if it parses the entire file again on each run; the latter could be a problem if logs get large). There are lots of other ways to verify services are working properly as well. Alternatively, you could combine logtail (a logcheck utility) with egrep, regexes, and monit.
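For the file-content route, the monit side is roughly this (log path and pattern are placeholders; check the manual section linked above for the exact matching semantics):

```
check file app_log with path /var/log/myapp.log
  if match "segfault|fatal" then alert
```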
There are, but your database is more reliable than they are. Just monitor everything.
> Or is there a better way to notify the admin if the webserver or database goes down?
You are looking for monit.
You should set up something like http://mmonit.com/monit/ to take action when a service goes down (like attempting an automatic restart), instead of just monitoring.
While you still have to find the underlying issue causing the crashes in the first place, it can help automate recovery when it does crash. I run it on my personal servers and it's pretty handy.
Obviously nagios is a good option too, but I find that for one-off installs monit is a lot easier/faster to set up initially. If you do grow to more than a few servers, monit will become clunky to manage and nagios will be a much better option.
My company is doing just this with our product.
Things to watch out for:
PM me if you have any questions. And if anyone is curious, the product we make is a door RFID system called the Ctrl-O.
Edit: list formatting
I would recommend setting up monit if you haven't already. It's a watchdog program that can monitor apache, mysql, and pretty much any other process and ensure that they are running. For example, if apache crashes for some unknown reason in the middle of the night, monit will just restart it automatically.
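A minimal sketch of such a check, assuming Debian-style paths for apache:

```
check process apache with pidfile /var/run/apache2/apache2.pid
  start program = "/etc/init.d/apache2 start"
  stop program  = "/etc/init.d/apache2 stop"
  if failed host 127.0.0.1 port 80 protocol http then restart
```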
You might want to check out monit. You'll probably still have to do some hacking to make it work for the game server, though Google might point you at a prewritten monit script for it if you're lucky.
You suggested monit, which I have used in the past. I thought this could be the answer until I checked the details.
monit does have a depends option, so that when a service is started, everything it depends upon is started before it. But it turns out that monit does not wait for the dependencies to start, and it does not check that they are working before it goes on to launch the service.
This makes sense in its way. I can see why monit was coded to start things and then only check if they are working in the next cycle. It means that monit can be written in a single thread and not get stuck waiting for 30 minutes while some elaborate service starts up.
You were quite correct when you said the relevant pieces will all keep on retrying until they succeed. But I was hoping for something that felt a bit more structured: start A, then when A is working start B. monit might be close enough and I may end up using it.
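For reference, the depends syntax in question looks roughly like this (service names and pidfiles are placeholders); just bear in mind the caveat above that the dependent's start isn't actually held back until the dependency is verified working:

```
check process db with pidfile /var/run/db.pid
  start program = "/etc/init.d/db start"

check process app with pidfile /var/run/app.pid
  start program = "/etc/init.d/app start"
  depends on db
```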