Icinga 1 is a direct fork of Nagios and many developers joined that project.
Icinga 2 was a rewrite and is very impressive, with distributed and secure monitoring. All the original checks work, so you can port your checks over. Really easy to set up, with about a day of playing in the lab.
Sure, that's possible, but that's mainly up to the monitoring solution you use. Metrics are emitted by the plugin in a format (called Performance Data Metrics) which is parsable by Icinga (or Nagios), for example.
Furthermore, Icinga can forward that data to a time-series database like Graphite or InfluxDB (see Graphs and Metrics). There are plugins to visualize this directly in the Icinga Web interface, or you can use software like Grafana for that.
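To make the format concrete, here is a minimal sketch of parsing that performance data by hand. Plugin output puts metrics after a pipe character as `'label'=value[unit];[warn];[crit];[min];[max]`; the labels and values below are made-up examples, and real parsers (like Icinga's) handle more edge cases.

```python
# Minimal sketch: parse plugin output with performance data.
# Format after the pipe: 'label'=value[UOM];[warn];[crit];[min];[max]
def parse_perfdata(plugin_output):
    if "|" not in plugin_output:
        return {}
    _, perf = plugin_output.split("|", 1)
    metrics = {}
    for item in perf.strip().split():
        label, data = item.split("=", 1)
        fields = data.split(";")
        # Strip a trailing unit of measure (e.g. 's', 'ms', '%', 'B') from the value.
        value = fields[0].rstrip("smB%cKMGT")
        metrics[label.strip("'")] = float(value)
    return metrics

print(parse_perfdata("OK - load average: 0.10|load1=0.10;5;10;0 load5=0.05;4;8;0"))
# → {'load1': 0.1, 'load5': 0.05}
```

In practice you rarely need to do this yourself; the monitoring core parses perfdata and a writer feature ships it off to the time-series database.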
Collecting reports from multiple servers can turn into a fairly large to-do. That's why things like Icinga, Sensu, or a bunch of others exist - to continuously monitor and alert for a bunch of machines/services/etc.
I'm not sure exactly what you mean by "pull a report for multiple servers", but a simple way to receive the results of your cron jobs can look like the following:
MAILTO=[email protected]
0 12 * * * /path/to/script arg1 arg2
That should just email any output from the cron job to the target address, regardless of return code.
This also requires you to configure the system to send email to that address, or the messages won't go anywhere. A simple tool for doing so would be something like ssmtp, but you have endless options here (most of them more complex).
Hi, you will find a great install guide on the official website. You also need to decide whether you will use high-availability mode or just a master <-> client setup. Install Icinga on some VM and set it up to be the master, then you can start installing it on the clients. https://icinga.com/docs/icinga-2/latest/doc/02-installation/
icinga2 + Grafana integration. https://icinga.com
It can easily monitor 1-10K hosts with the plugins it has out of the box. Writing custom plugins is also easy. Setup is trivial and works flawlessly on Ubuntu. If you use Puppet, there are several modules available to install and configure all aspects of it. We have been using it for years and I have not found a reason to switch to anything else. Here is a demo page if you want to have a look:
https://icinga.com/demo/dashboard
The demo page is kinda messy at times but it will give you a good idea as to what to expect out of the box.
Some of it can be migrated rather easily to Icinga (https://icinga.com/). Icinga forked from Nagios many years ago; they rewrote the engine and have built a nice web UI. It can support e.g. business branches using "satellites" that act as proxies to the main server/server cluster. I was one of the two guys doing the setup for a company with multiple branch offices/factories, and during the time I was there it ran very reliably and improved the overall monitoring considerably, preventing at least a handful of outages.
Icinga also supports Windows a lot better now. Icinga has an interesting approach to performant PowerShell-based monitoring in the pipeline. It was quite new when I left the job; I looked into it but didn't actually use it.
We wrote Meerkat as a pure Icinga API client. It doesn't support multiple Icinga backends, but to make it similar to Thruk I suppose it could. https://icinga.com/blog/2020/12/25/sol1-releases-meerkat-next-generation-dashboards-for-icinga2/ If you can't restructure your monitoring layout, you could probably deploy a standalone icinga2 instance in each DC and an umbrella one that handles notifications and dashboards, purely checking the API. Kinda silly, though. I would encourage you to consider NetBox as well. It's a whole different idea: moving your documentation to a DCIM and then integrating it with Icinga using Director. Then stuff gets monitored as it's documented.
Use joins to add in other objects. Then you can reference them in your API filter.
https://icinga.com/docs/icinga-2/latest/doc/12-icinga2-api/#icinga2-api-config-objects-query-joins
Here's a Perl snippet from a script I have that acknowledges all service problems on a host; it might give you a sense of how it works.
use REST::Client;
use JSON qw(encode_json);
use MIME::Base64 qw(encode_base64);

# Match unacknowledged, un-downtimed service problems on the given host.
my $service_filter = "service.state != ServiceOK && service.downtime_depth == 0.0 && service.acknowledgement == 0.0 && regex(\"$host\", host.name)";
my $client = REST::Client->new();
$client->setHost($self->{'host'});
$client->addHeader("Accept", "application/json");
# encode_base64 appends a newline by default; pass '' as the line terminator.
$client->addHeader("Authorization", "Basic " . encode_base64($self->{'userpass'}, ''));
my %json_data = (
    type    => "Service",
    filter  => $service_filter,
    joins   => [ 'host.name', 'host.address' ],
    author  => $user,
    comment => $comment,
    notify  => 1,
);
my $data = encode_json(\%json_data);
$client->POST("/v1/actions/acknowledge-problem", $data);
You could add in another filter based on the service.name or something to get really specific on your acknowledgement.
This is what I like about icinga2 daemon -C: it validates the whole configuration up front before you reload.
I guess my Nagios configurations must have been simple enough to not remember having to do that. Mostly I remember no one remembering to add new Linux systems to the linux-servers hostgroup and checks not being performed as a result.
If you are interested in somewhat more involved monitoring that is also self-hosted, I can always recommend Icinga2. It was once a fork of the very widely used Nagios, but now really runs circles around it. You can integrate it with Grafana via InfluxDB. It can be configured via config files or a web configurator.
According to the docs, icinga2 already offers a dashboard feature: https://icinga.com/docs/icinga2/latest/doc/13-addons/
There's a thread about this in the forum which mentions that there is a demo of it in the Vagrant demo setup: https://monitoring-portal.org/woltlab/index.php?thread/36883-icinga2-status-page-konzept/
Edit: the Vagrant demo is a bit tricky because it is automated with Puppet, so it's not necessarily easy to understand right away.
That said, if the plugin already exists, there will also be blog articles about it on the web.
The demo is available here: https://github.com/Icinga/icinga-vagrant
Throw Icinga away and use Sensu! (just kidding)
You could build a Dashing dashboard.
Or load the Icinga data into Graphite. That is easier to query, IMHO. There is also a Grafana plugin for Icinga2 web.
Bash would be too unwieldy for me for that.
Btw, borgbackup can be automated nicely with borgmatic.
https://icinga.com/docs/icinga2/latest/doc/06-distributed-monitoring/#endpoints
All endpoints in the same zone work as high-availability setup. For example, if you have two nodes in the master zone, they will load-balance the check execution.
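A minimal zones.conf sketch of such an HA pair, with placeholder hostnames (swap in your own FQDNs and certificate names):

```
// Two endpoints in the "master" zone form an HA pair and
// load-balance check execution between themselves.
object Endpoint "master1.example.org" {
  host = "master1.example.org"
}
object Endpoint "master2.example.org" {
  host = "master2.example.org"
}

object Zone "master" {
  endpoints = [ "master1.example.org", "master2.example.org" ]
}
```

Both nodes need the same zone definition, and features like notifications are HA-aware so you don't get duplicate alerts.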
Your user is not an admin and in fact has no privileges, so you're only seeing limited options.
If you're using database authentication, you can insert an admin user with the instructions here.
My project back then was about a distributed monitoring system with Icinga2 that can very easily be deployed at multiple customers.
A remote monitoring node (satellite), e.g. on a Raspberry Pi, sends its data to our central instance (master).
More info here: https://icinga.com/docs/icinga2/latest/doc/06-distributed-monitoring/
That went over well with the examiners in the oral exam, and I was able to steer the quiz topics in a direction I was comfortable with. For example, I brought up the option of a cheap "UPS solution" for the nodes using regular power banks, because I wanted to be quizzed about UPSs rather than things like calculating subnets.
Not sure what you're running up against. I've always just used apply rules and then automated the adding of host objects. I never needed to manually attach a service to a host or configure notifications; I just let the apply rules do that for me.
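For anyone who hasn't used apply rules: a short sketch of the idea. The custom variable name (vars.os) and the service here are illustrative, not a required convention.

```
// Every host whose custom variable "os" is set to "Linux"
// automatically gets a load check - no per-host service config.
apply Service "load" {
  import "generic-service"
  check_command = "load"
  assign where host.vars.os == "Linux"
}
```

New hosts then pick up the right checks just by being created with the right custom variables, which is what makes automated host provisioning work so well.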
It's more common to enable a feature that forwards the data to a specific service; the InfluxDB writer feature is an example of that and is what I use. A Grafana instance using the InfluxDB data gives you a powerful tool. If you really want to do your own analysis you can read the perfdata files yourself as documented here: https://icinga.com/docs/icinga2/latest/doc/14-features/#writing-performance-data-files
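For reference, the writer feature is just a small config object; this sketch assumes a local InfluxDB with default port and a database named "icinga2" (adjust to your setup):

```
// Enabled via: icinga2 feature enable influxdb
// then restart icinga2. Host/port/database are placeholders.
object InfluxdbWriter "influxdb" {
  host = "127.0.0.1"
  port = 8086
  database = "icinga2"
  enable_send_thresholds = true
}
```

Once it's running, every check result's perfdata lands in InfluxDB automatically and Grafana can query it directly.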
Current
Hardware
Whitebox Proxmox Build
* 4U case
* SuperMicro X11SSL-FC Board
* Xeon E3-1220v5
* 32 GB DDR4 ECC
* 2 x 64 GB Intel SSD for System (ZFS Mirror)
* 4 SSDs mixed size in Raid 10 (ZFS Striped mirror)
Synology DS916+
* 2 GB RAM
* 3 x 2 TB WD Red

Software
* Bind (LXC)
* MariaDB (LXC)
* InfluxDB (LXC)
* GOGS (LXC)
* NGINX Proxy (LXC)
* 2 x Wordpress (LXC)
* Docker server running Portainer, Unifi, Plex, Plexpy, Monero (Ubuntu) (KVM)
Planned
* Had to expand my storage with an additional disk. Perhaps I'll swap it for a DS1817+ for future-proofing
* Get my monitoring with Icinga up and running
* Look into Ansible or Puppet