Hate to be that guy, but the third line of the upgrade documentation should answer your question:
While upgrading Zabbix agents is not mandatory (but recommended), Zabbix server and proxies must be of the same version. Therefore, in a server-proxy setup, Zabbix server and all proxies have to be stopped and upgraded.
Configuration -> Actions.
In the operations you define steps. Go for steps 1 - 0 (0 means infinite).
Once per escalation step duration (default 1 hour) the notification will be sent.
https://www.zabbix.com/documentation/current/manual/config/notifications/action/escalations
Look at list supported distributions at: https://www.zabbix.com/download
any of them will work great, it's just a matter of personal preference
there are two teams: debian/ubuntu and redhat/centos. not sure about oracle and suse.
both teams will say that it is very well documented, works well, and you should definitely use it :)
I am from the Debian/Ubuntu camp, so... use Debian if you want a simple and arguably more reliable system with slightly older software. Or Ubuntu if you want the most bells and whistles and the latest packages.
Don't use Red Hat, it is complicated and odd (just kidding)
Zabbix has amazing documentation online.
From: https://www.zabbix.com/documentation/2.0/manual/config/items/itemtypes/zabbix_agent/win_keys
perf_counter[counter,<interval>]
counter - path to the counter
interval - last N seconds for storing the average value. The interval must be between 1 and 900 seconds (inclusive) and the default value is 1.
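For illustration, a key like the following (using a common Windows counter path as an example) would average total CPU usage over the last 60 seconds:
perf_counter["\Processor(_Total)\% Processor Time",60]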
Good question! I haven't had the need to do that but here's how I would tackle it: using simple checks: https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/simple_checks .
Discover the UPSes using, for example, SNMP, and then add the simple check as a discovery item prototype.
Another not so pretty solution would be to create multiple hosts for the same device and put them in a group representing the real host.
There may be better ideas, don't take this too seriously
For our customers we built custom VMware monitoring with Zabbix. There's full integration between Zabbix and VMWare using the VMWare API.
Check-out: https://www.zabbix.com/integrations/vmware
Or if you need help building this contact us at: https://zabbix-consultancy.com
Slide Shows are native in Zabbix. https://www.zabbix.com/documentation/current/manual/config/visualisation/slides
If you already have the data in screens, it is super easy to create a slideshow.
zabbix also has solid documentation for upgrading to 4.0 on several distros. I went through it when I upgraded and didn’t run into any problems.
https://www.zabbix.com/documentation/4.0/manual/installation/upgrade/packages
A trigger can be in two states: PROBLEM or OK.
The trigger expression is a calculation with requirements for when it should go into PROBLEM state.
An expression contains simple math and/or logic, using item data. As an example, let's assume an item with the item key "icmpping" on HostA:
{HostA:icmpping.last()}=0
The above trigger uses the "last()" function (one of the Zabbix trigger functions) on the item key.
When Zabbix performs "icmpping", if successful at pinging the host, it returns "1". If not successful, it returns "0".
Every time data is gathered, the trigger checks whether the trigger expression is TRUE or FALSE. If "icmpping" returns "1", the expression evaluates to FALSE, because it requires the latest value to be "0". If "icmpping" returns "0", the expression is now TRUE, the trigger fires and goes into PROBLEM state.
A variation of this trigger could be:
{HostA:icmpping.max(#3)}=0
"max()" looks at X amount (in this case 3) of the latest gathered data points of "icmpping" and returns a single value, the highest one. So, in this case, if the host has been unavailable for 3 consecutive checks, the trigger goes into PROBLEM state.
In reverse, to return to the OK state, we only have to receive an "icmpping" value of "1" once within the past 3 data points.
Triggers can become quite involved, but they are essentially just math, and sometimes string comparisons, being performed by Zabbix, to figure out if something is TRUE or FALSE, which Zabbix can then react to.
5.4 merged applications into tags, and it did not give us a good way to group them or search between them. I heard that this is going to be addressed in a minor release very soon, but until then I recommend sticking with 5.2.
About your second question, I don't get exactly what you want to do. I recommend you read these two wiki entries, they have a ton of information about LLD.
https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery
https://www.zabbix.com/documentation/current/manual/config/macros/lld_macros
https://www.zabbix.com/documentation/current/manual/config/notifications/action/operation
>Zabbix server does not create alerts if access to the host is explicitly "denied" for the user defined as action operation recipient or if the user has no rights defined to the host at all.
That might be your issue...
Why don't you enable SNMP on the Synology server and have the Zabbix server/proxy scan it? There are several templates for it too:
https://www.zabbix.com/integrations/synology
This is from my own experience. Good luck!
Zabbix 4.2 is expected to be released in March if everything goes according to the plan: https://www.zabbix.com/roadmap Our internal synthetic tests were quite promising, steady performance pattern even with rather large databases. Waiting for real world results from impatient alpha/beta users.
3.2 and before: just instruct Zabbix to store the value of your item as "delta", and it'll do the maths for you.
3.4: this has been replaced by a "Change per second" preprocessing step to the same effect.
https://www.zabbix.com/documentation/3.4/manual/config/items/item
Once you have the right data, the frontend doesn't matter.
Hi there.
If you reference the Appendix 1. Reference commentary of API documentation you will see that 'query' type simply accepts 'extend' or 'count' values, which either return you extended output or count of what you want.
Here is example of curl request and respective reply:
curl -s --header "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "host.get", "params": { "filter": {"host": "Snmp-self-monitoring"}, "selectItems": "count"}, "auth": "14360876e320f6fdb594b6dc95a92fc7", "id": 1}' "http://localhost/zabbix/api_jsonrpc.php"
{"jsonrpc":"2.0", "result":[ { "hostid":"10319", ..., "items":"76" } ], "id":1 }
Hope it helps!
Zabbix server and proxy have to be of the same major version. There is no way around it. 5.2 with 5.2, or 5.4 with 5.4. Nothing else. Everything else is unsupported and also won't work.
https://www.zabbix.com/documentation/current/manual/appendix/compatibility
First approach:
Some people think it's a good idea to run the server, or an agent on the server, as the root user.
Maybe you are lucky and your former boss was amongst them.
You could then create a Script (https://www.zabbix.com/documentation/3.4/manual/web_interface/frontend_sections/administration/scripts) or try to run an agent 'system.run' Agent Item on the server (https://www.zabbix.com/documentation/3.4/manual/config/items/itemtypes/zabbix_agent), to find out. Let either one execute the Linux command 'whoami'. If you get 'root', you are lucky.
You could then try to create a new sudo user "username" with password "changeme" to login, with another Script / Agent item:
'useradd -g sudo -p $(openssl passwd -1 changeme) username'
Second approach:
In Zabbix 3.4, passwords saved in macros could not be hidden. Maybe you are lucky and can find a working password in the macros somewhere.
Third approach:
Exploit the box. Maybe it's old enough to find some decent Remote code execution vulnerabilities.
Fourth approach:
Start from scratch.
Cheers and good luck!
You are looking for an "action" here: https://www.zabbix.com/documentation/current/manual/config/notifications/action
​
Basically: if trigger goes into problem state, execute action <whatever> and in this action you should define a step to execute a remote command on the zabbix server.
Ok first step, take a breath and read the doco.
https://www.zabbix.com/documentation/current/manual/discovery/network_discovery
It states that Lost is a host being down after it was up.
And Down is a host that's been down for a few checks.
So if a host has been down for 10 days it'll be Down not Lost.
Service Up means the host has been up the last few times Zabbix discovered it.
And yes if you remove it, and the real host comes back online Zabbix will rediscover it.
Hope that helps!
Check: https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/zabbix_agent
You are looking for the item key vfs.file.contents[file,<encoding>], which will let you retrieve the file's contents to your Zabbix server.
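For example, to pull a small text file into Zabbix (hypothetical path; if I remember right the agent only returns files up to 64 KB with this key):
vfs.file.contents[/etc/hostname]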
The fact that these are virtual machines does not matter. You can monitor linux hosts via SNMP or the Zabbix Agent, both require some set up of the machine you want to monitor.
After you have SNMP or the Zabbix agent configured on the target machine you add them to the zabbix server just like any other host.
Start here, everything you want to monitor is a type of "host" https://www.zabbix.com/documentation/5.0/manual/config/hosts
Nice! Time to update your windows service discovery filter: https://www.zabbix.com/forum/zabbix-help/372904-excluding-certain-windows-services
In Administration > general > regular expressions
Do you have any items on those hosts that are of type "Zabbix agent" - NOT "Zabbix agent (active)"? The Zabbix server needs at least one of those (I recommend agent.ping) to properly determine host availability.
It's also possible that you don't have the port opened up (default is 10050) for the agent to listen on, or perhaps the Server parameter isn't configured in the agent config file. Now that I think about it, I suspect that's your most likely issue - you need to set the server's IP in the Server parameter on the agents before they'll respond to your server.
You'll find information on configuring the agent here: https://www.zabbix.com/documentation/current/manual/concepts/agent
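A minimal sketch of the relevant lines in zabbix_agentd.conf, assuming your server's IP is 192.0.2.10 (adjust to your environment):
# allow passive checks from the Zabbix server
Server=192.0.2.10
# port the agent listens on for passive checks (default)
ListenPort=10050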
Haven't done it myself, but sounds like something that could be achieved with escalations.
You could introduce an action that has a delay on it. I assume that if the problem still exists by the time the delay has passed, you could then have an operation that modifies the severity of the problem. I'm assuming it is possible to change the severity of an existing problem, but if not, then just create a new problem with the desired severity.
This Link is your best friend:
https://www.zabbix.com/documentation/current/manual/installation/upgrade_notes_500
Especially the part about PHP 7.2 and, also important, the part about the float conversion SQL script. But read all of it carefully and decide for yourself.
Personally I never upgrade before the first or second bugfix release.
https://www.zabbix.com/documentation/4.0/manual/config/macros/usermacros#user_macro_context
That's the way to go for your scenario.
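For illustration, a macro with context might look like this (hypothetical macro name; the context-specific value wins when it matches):
{$LOW_SPACE_LIMIT} = 10
{$LOW_SPACE_LIMIT:"/tmp"} = 20
A trigger prototype can then reference {$LOW_SPACE_LIMIT:"{#FSNAME}"} and automatically pick the right value per filesystem.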
The change function seems like an odd choice and change(#10) just seems wrong. If it's comparing the two most recent values (default without the #10) then it's looking for a case where the current value (let's say its 32) is less than the prior value (let's say 38). .change of 38 and 32 is -6. The #10 may be causing it to look at 10th and 11th values back or it might be getting ignored since it's not indicated that this function takes any parameters.
The .last(#10) is definitely taking the 10th value back.
What you may want to do is create a calculated item and store the .change value in it. Let's call the item upsChargeChange. You can then have a trigger of upsChargeChange.max(#4)<0.
Where the .last function returns the nth value specified by #xx, .max(#xx) will cause it to evaluate the xx number of recent values.
By checking whether the past four values are all < 0, you only trigger if a discharge is noted for 4 values in a row. This effectively ignores it if the discharge only lasts for 3 minutes.
The one thing to be careful of here is if your change value ever goes up even though you're overall in a discharge scenario for which you'd want to get alerted. If your data values looked like "-3, -5, 0, -2, -3, 0" then you would not get alerted. As such you might want to make <0 be <1 instead. Even then, if you ever get a 1 (or greater) during discharge such that you never get 4 in a row then it'll go undetected.
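A rough sketch of that setup, assuming pre-5.4 calculated-item syntax and a hypothetical source item key ups.battery.charge (don't take the names literally, it's just to show the shape of it):
Calculated item upsChargeChange, formula: change("ups.battery.charge")
Trigger: {HostA:upsChargeChange.max(#4)}<0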
Make sure your network equipment supports SNMP counters and you have SNMP setup. You can get a lot of low level network information that way. This is covered in the documentation.
https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/snmp
If you want to go up the network stack to the OS and/or application layer, you need to make sure you are running zabbix agent so you can query servers / services directly. I usually use WMI or Powershell scripts on Windows. On Linux it's mostly shell or python scripts. Again there is documentation on this.
https://www.zabbix.com/documentation/current/manual/appendix/items/activepassive
If the housekeeping process runs at 100% it is most likely waiting for your database.
You should tune your db accordingly. https://www.zabbix.com/forum/zabbix-troubleshooting-and-problems/10176-mysql-database-grow-how-optimize-parameters?t=9925
Especially innodb_file_per_table really accelerates housekeeping.
Things that are auto discovered will expire eventually after they stop being discovered.
Keep lost resources period is how long it will keep them. Their documentation is fairly helpful.
Another thing that helps is that you can disable the item IN the discovery rule. That way they aren't monitored by default, but you are able to easily enable the item.
https://www.zabbix.com/documentation/3.4/manual/discovery/low_level_discovery
While creating a new item and storing as a delta would work, all it does is make your trigger definition simpler as you would just be checking for a delta change greater than X.
You can probably get away with just adding a new trigger to your existing item. Have a look at example 11:
https://www.zabbix.com/documentation/3.4/manual/config/triggers/expression
Basically, using the time_shift parameter you can compare the current item to itself at a previous time. In this case using the average value of the item over the last 1 hour period to the average item value over 1 hour period 24 hours/1 day in the past. (And if it's doubled, greater than 2 in the example the trigger would fire).
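The expression in that example looks roughly like this (hypothetical host name "server"): compare the 1-hour average now against the 1-hour average one day earlier and fire if it has more than doubled:
{server:system.cpu.load.avg(1h)}/{server:system.cpu.load.avg(1h,1d)}>2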
Or just read this and be done with it in three minutes instead of 12, ignoring the extra time wasted because I cannot copy-paste stuff from a YouTube video.
https://www.zabbix.com/documentation/3.4/manual/installation/install_from_packages/debian_ubuntu
I concur with what /u/fredprod said, start with the housekeeper. I have found it beneficial to store some of the internal metrics for Zabbix as well. They're useful for performance tuning and for spotting major changes. The metrics I would suggest storing are VPS (zabbix[requiredperformance]), total items (zabbix[items]), unsupported items (zabbix[items_unsupported]) and the process items. The 3.4 docs for this are here. I would suggest capturing these items on at most a 5-15 min basis (5 min for process, 15 min for item counts), anything more frequent puts unnecessary load on the Zabbix server and database.
Over time you should see your VPS graph be stable. If you see it spike it will likely have been caused by either of the following adding/removing hosts/items, items becoming supported or unsupported or template changes. The first can then be correlated to the total items count, the second to the unsupported count. If your VPS swing does not correlate to either of those the most likely culprit is someone changing the templates, or maybe space aliens messing with your server.
As for monitoring the internal processes, this is useful to help figure out how long some of the processes are taking. On one Zabbix server I maintained it took approximately 24 hours to run the housekeeper. In my case I was using postgres and spent a lot of time tuning. For me the biggest gains were from configuring the database to automatically partition the history and trends tables. If I recall the history tables were done daily and the trends were weekly. Next improvement was with tweaking the cache and fork settings, however too many forks and performance will get worse.
The Zabbix agent can work in two modes, active and passive. In active mode the agent opens a connection to the server; the server listens on port 10051. In passive mode (the default one), the Zabbix agent listens for server requests, so port 10050 must be open in the server -> agent direction.
https://www.zabbix.com/documentation/3.0/manual/appendix/items/activepassive
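To illustrate the direction of the connections, the relevant zabbix_agentd.conf lines would be something like this (192.0.2.10 standing in for your server):
# passive: the server/proxy connects in to the agent on port 10050
Server=192.0.2.10
# active: the agent connects out to the server/proxy on port 10051
ServerActive=192.0.2.10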
The feature you're looking for is low-level discovery: https://www.zabbix.com/documentation/2.4/manual/discovery/low_level_discovery
You will write a discovery rule to enumerate the items and then use your script to check each item.
InfoBlox decided to mark that OID as a "STRING" and thus Zabbix will read it as such.
A work-around could be to use snmpwalk in an external script that parses the output and turns it into a numeric value.
Look here for an example: https://www.zabbix.com/forum/showpost.php?p=159773&postcount=5
The script needs to be edited to:
#!/bin/bash
/usr/bin/snmpwalk -On -v 2c -c <your-community> $1 .1.3.6.1.4.1.7779.3.1.1.2.1.1 | awk -F[+\ ] '{print $2}'
This is very possible with actions + "remote command" operations:
https://www.zabbix.com/documentation/2.0/manual/config/notifications/action https://www.zabbix.com/documentation/2.0/manual/config/notifications/action/operation/remote_command
There are some limitations: it's not supported for active or proxied agents, and you must set EnableRemoteCommands=1 in your zabbix_agentd.conf.
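On the agent side that's a one-line change in zabbix_agentd.conf (followed by an agent restart):
EnableRemoteCommands=1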
The Zabbix web interface is served through a standard virtual host and can be SSL-enabled quite easily.
This should get you going with a free Let's Encrypt certificate: https://certbot.eff.org/all-instructions/
This was a discussion on the Zabbix forum a while back. I remember reading it, but I'm not sure if there was a resolution or even if it is still relevant.
https://www.zabbix.com/forum/zabbix-troubleshooting-and-problems/8466-many-time_wait-connection
In Zabbix, go to Dashboards, create a new one and create the graphs you're looking for with the items you're fetching from your devices. For Grafana, download the Zabbix data source (https://grafana.com/grafana/plugins/alexanderzobnin-zabbix-app/) and graph away in much the same way.
I'm also still fairly new to Zabbix, I found the Zabbix dashboards are nice for things you can put on a graph, but showing things like gauges for CPU and memory usage (Or a simple traffic light for up/down) seems to work much much better in Grafana.
> https://www.zabbix.com/documentation/current/manual/installation/upgrade/packages/rhel_centos
While upgrading Zabbix agents is not mandatory (but recommended), Zabbix server and proxies must be of the same major version
> https://www.zabbix.com/documentation/current/manual/appendix/compatibility
Supported Zabbix proxies
To be compatible with Zabbix 5.4, the proxy must be of the same major version; thus only Zabbix 5.4.x proxies can work with Zabbix 5.4.x server.
https://www.zabbix.com/integrations/telegram
> 2. If you want to send personal notifications, you need to obtain chat ID of the user the bot should send messages to.
> Send "/getid" to "@myidbot" in Telegram messenger.
> Ask the user to send "/start" to the bot, created in step 1. If you skip this step, Telegram bot won't be able to send messages to the user.
> 3. If you want to send group notifications, you need to obtain the group ID of the group the bot should send messages to. To do so:
> Add "@myidbot" and "@your_bot_name_here" to your group. Send "/getgroupid@myidbot" message in the group. In the group chat send "/start@your_bot_name_here". If you skip this step, Telegram bot won't be able to send messages to the group.
a) create the bot
b) create the GROUP (because it would be easier that way in the long run)
c) perform the steps in the #3
d) test Media with ID of your group
e) be sure to add the media to the user who should be notified, and add the trigger action to send out notifications WITH media.
Are you running the proxy on windows? We are using the credentials file + ODBC connection template from this:
What template requires running the proxy as an AD account?
https://www.zabbix.com/documentation/current/manual/appendix/items/proc_mem_notes#linux
> lib - size of shared libraries (VmLib)
So you could probably create a calculated item with something like:
proc.mem[,web1,,,rss] - proc.mem[,web1,,,lib]
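If you go the calculated-item route, the formula would be something like this (pre-5.4 calculated item syntax, assuming both proc.mem items already exist on the host):
last("proc.mem[,web1,,,rss]") - last("proc.mem[,web1,,,lib]")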
We put all the juicy information for a SNMPTrap trigger into Tags.
Then use https://www.zabbix.com/documentation/current/manual/config/event_correlation
To make the OK SNMPTrap trigger close the old PROBLEM SNMPTrap Trigger.
So you're super close, just ditch the Recovery Expressions and use Event Correlation.
Do note, this can cause Problems to remain open forever if the OK Trap never shows up or gets lost in the Ether.
One other thing that occurred to me.
I don't know how your network or devices are laid out, but you might want to investigate trigger dependencies. This will allow you to say "if this device is triggering an alert, do not trigger an alert for the other devices behind that device". This allows you to quickly see the actual device that has a problem, and not have to sift through an avalanche of alarms.
> Is it possible to monitor the availability of the database server itself?
Why not? Zabbix can monitor the database server whatever it is: a separate host, a service, or both. https://share.zabbix.com/databases/mysql
> Ideally I would like to use my existing Postgres server which runs already on a different virtual server
https://www.zabbix.com/documentation/current/manual/installation/requirements#required_software
> But if Zabbix breaks completely if I restart that other virtual server, the database goes down etc., this would not be good.
The only true way to know that your monitoring solution is dead is to have another monitoring system, in a separate place, watching the first one.
That's why I have two Zabbix servers: one primary for all the stuff and another one in another DC in another country. The biggest dependency is that I use the same SMTP provider for both, lol.
And yes, I did it that way because one day I started wondering why I wasn't receiving any notifications from Zabbix (it had run out of space on the AIO installation).
icmpping[<target>,<packets>,<interval>,<size>,<timeout>]
Simple check. Just add the items you need and then add triggers for each. Alternatively create a macro that has all the IPs and an LLD that builds the items and triggers… or an LLD that grabs the IPs from interfaces and does the same.
https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/simple_checks
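For illustration, a hypothetical item and trigger pair (192.0.2.1 standing in for one of the IPs, 3 packets per poll):
Item key: icmpping[192.0.2.1,3]
Trigger: {HostA:icmpping[192.0.2.1,3].max(#3)}=0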
If you really need, I can whip this up tonight. Do you have 5.4?
I think LLD expects upper case macro names, but you may be able to use the JSONpath parm on the macro def screen to convert that.
See https://www.zabbix.com/forum/zabbix-help/383827-json-and-lld-understanding
Low Level Discovery
>Low-level discovery provides a way to automatically create items, triggers, and graphs for different entities on a computer. For instance, Zabbix can automatically start monitoring file systems or network interfaces on your machine, without the need to create items for each file system or network interface manually. Additionally, it is possible to configure Zabbix to remove unneeded entities automatically based on actual results of periodically performed discovery.
https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery
Relevant Doc: Zabbix Doc: Scripts
No items involved, just initiate the script from the GUI.
I'm on v4.4 so unsure if it changed in the new GUI. But to run a script against a host I click on the host name from the Dashboard and all the scripts are there. Ready to run.
it is documented: https://www.zabbix.com/documentation/current/manual/appendix/functions/history
Reason I know? I was looking for the 'regexp' trigger function, which was present in 5.0/5.2 but was moved (hidden) in 5.4... :/
Wow, yes, I totally was. The find function apparently was added in v5.4 and not documented in the manual yet: https://www.zabbix.com/documentation/current/manual/appendix/functions/string. I missed it when scanning through the dropdown box of functions. Thank you!
Is there a way to test a trigger (like with a test value)?
You've seen the built in itemkeys in Agent2 regarding systemd? https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/zabbix_agent/zabbix_agent2
Not sure if it's what you're looking for :)
I'd take a step back and consider the situation.
You have one or more ports (of a particular naming standard I hope) that are more important than other ports.
Are you on Zabbix 5.2?
If so, make use of the newer feature of LLD overrides
https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery
Bold statement here on my side: it would be much better for others who use your template if you didn't require the MIB file and just used OIDs everywhere. This is also suggested in the template guidelines: 1.3.11 SNMP
> SNMP OID field should not use any MIB objects, so templates would be working without MIBs imported.
https://www.zabbix.com/documentation/guidelines/thosts#items
According to the docs:
Supported keys can be used with Zabbix agent 2 only on Linux/Windows, both as a passive and active check.
Supported since Zabbix 5.2.5.
If you use the official Zabbix switch templates, they are specifically designed so that you can shut off triggers for individual ports:
Quote:
>If you do not want to monitor this condition for a specific interface, create a user macro with context with the value 0 (for example with Gi0/0 as the context, where Gi0/0 is {#IFNAME}). That way the trigger is not used any more for this specific interface.
Even if you use an unofficial template, you can copy the (brilliant) trigger definition from Zabbix SIA into your own template, so you can shut off individual ports.
Another (very) simple option would be to manually disable the discovered trigger on the switch host. No trigger = no alarm.
Yeah Zabbix has scripts
Also the course is great, but get work to pay for it. Expensive.
What you have configured with the items using SNMPv2 agent is polling. Zabbix will reach out to the device at the given interval and request the configured OID's value.
There is additional configuration required on the server or proxy to get Zabbix to work with SNMP traps, and the item type I believe would be "SNMP trapper".
Reading the workflow description for SNMP Traps might make this a bit clearer. https://www.zabbix.com/documentation/4.0/manual/config/items/itemtypes/snmptrap
Mhh that would make sense, but I can't find this filter.
If I go to Template OS Linux > Discovery Rules > Mounted filesystem discovery > Filters, there is only "{#FSTYPE} matches @ File systems for discovery"
But i guess i figured it out: https://www.zabbix.com/documentation/4.0/manual/regular_expressions
It's filtering everything according to that expression and the only one matching it is "/" being ext4.
Thank you for your help.
EDIT: One more question. If I wanted to, let's say, include a filesystem that is not on the list, how would I go about it? If it's too much to explain, I'd appreciate it if you just pointed me in the right direction.
You probably want a combination of multiple interfaces, and using the {HOST.IP<1-9>} macros.
https://www.zabbix.com/documentation/current/manual/appendix/macros/supported_by_location
Search for "HOST.IP"
It might be helpful if you provide what version of Zabbix you are using.
If you are trying to monitor MSSQL in Zabbix 5.0, there is a first party official template that you can use here
as /u/Dizzybro said the diff function in triggers, or: https://www.zabbix.com/documentation/current/manual/introduction/whatsnew500#string_comparison_allowed
It’s in the documentation: https://www.zabbix.com/documentation/current/manual/appendix/install/db_scripts
“Character set utf8 and utf8_bin collation is required for Zabbix server/proxy to work properly with MySQL database.”
mysql> create database zabbix character set utf8 collate utf8_bin;
Actually, that now confused me more :)
According to ZBXNEXT-170, I'm reading it is implemented in the 5.x releases, or at least what is the current alpha branch.
In the other, ZBXNEXT-5473, it's saying that under Configuration -> Hosts I should be able to check off multiple hosts, Mass Update, go to the Templates tab, and there should be the buttons for Link, Replace & Unlink. It states this was added in 4.4.5rc1.
So at home, I run 4.4.7 and those buttons do not show up on the Templates tab. In the office, I have an LTS server, so I can see why I wouldn't see that functionality there.
I'm now more curious if it was implemented in 4.4.5, but then reverted in either 4.4.6 or 4.4.7?
On the docs page: https://www.zabbix.com/documentation/current/manual/config/templates/linking
Under "Linking templates to several hosts", it even has the buttons there, but on my 4.4.7 server at home, those buttons do not exist.
Yes I think there's a confusion here.
How it works is, you connect Zabbix to LDAP. Then in Zabbix make a User with the same login that you want to login with. Then you can login with your AD account.
This is weird because in most applications, you dont have to do the manually creating a user in the application step. But in Zabbix you do.
Source: https://www.zabbix.com/documentation/4.0/manual/web_interface/frontend_sections/administration/authentication "External LDAP authentication can be used to check user names and passwords. Note that a user must exist in Zabbix as well, however its Zabbix password will not be used."
You have to set the "skip" mode on the eventlog item (see the description of the item in the docs); this will ignore all values existing prior to the creation of the item.
https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/zabbix_agent/win_keys
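For example, an item key with skip as the mode parameter (hypothetical log name; mode is the seventh parameter):
eventlog[System,,,,,,skip]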
Take a look at: str (<pattern>,<sec|#num>) in the docs -> https://www.zabbix.com/documentation/current/manual/appendix/triggers/functions
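For example (hypothetical host and log item), this fires if "ERROR" shows up in any of the last 5 values:
{HostA:log[/var/log/app.log].str(ERROR,#5)}=1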
Here's the documentation page that explains what you're seeing: https://www.zabbix.com/documentation/4.4/manual/appendix/triggers/functions. Most important point - a number for the first argument is IGNORED for both strlen() and last(), because it doesn't make sense.
If what you want is a time shift, put a comma before it (making it the second argument, as described in the page I linked). If you want to look back at the data point 600 places back (rather than 600 seconds), use (#600) instead of (,600). That said, be aware that it's going to trigger at that point even if things have recovered since, which probably isn't what you want.
Better yet, use min() as /u/narmkhang suggested. I don't think there's a point in using sum() AND min() on the same item, though, at least in this case. It seems like checking web.test.fail should be sufficient. Using last() is usually best avoided, and time shifts can get even trickier.
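For instance, something like this (hypothetical scenario name) only fires when every value in the last 10 minutes reported a failed step:
{HostA:web.test.fail[MyScenario].min(10m)}>0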
Well, we use a modular template design. We develop the templates on one instance and then export them. The exported templates are then manually committed into the git repo.
We use Saltstack to manage the servers, once a day we run a Salt highstate on the servers to keep the configuration up to date. During this run, the git repo is synced to the local disk of the Zabbix servers and is recursively imported through a small api script based around the import api call.
That is basically it 😄
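For anyone curious, the import part can be a pretty small shell loop around the configuration.import API call. A rough sketch, assuming jq is installed, an API auth token in $ZBX_AUTH and the frontend URL in $ZBX_URL (not our actual script, just the shape of it):
#!/bin/bash
# Import every exported template XML in templates/ via the Zabbix API.
for f in templates/*.xml; do
  jq -n --arg src "$(cat "$f")" --arg auth "$ZBX_AUTH" '
    {jsonrpc: "2.0", id: 1, method: "configuration.import", auth: $auth,
     params: {format: "xml", source: $src,
              rules: {templates:      {createMissing: true, updateExisting: true},
                      items:          {createMissing: true, updateExisting: true},
                      triggers:       {createMissing: true, updateExisting: true},
                      discoveryRules: {createMissing: true, updateExisting: true}}}}' \
  | curl -s --header "Content-Type: application/json" -d @- "$ZBX_URL/api_jsonrpc.php"
  echo
done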
Look at escalations.
https://www.zabbix.com/documentation/4.2/manual/config/notifications/action/escalations
I think Example 2 is what you are looking for.
The same page where you checked for the exclamation mark. Select the discovery rule and look at the bottom. There are a few buttons there.
https://www.zabbix.com/documentation/current/manual/config/items/check_now
You're confusing trigger configuration with generated events. Once the conditions for a trigger is true, it will generate an event. When you look under an event you'll have the option to change the severity level.
For more information check this link with the new features in 4: https://www.zabbix.com/documentation/4.0/manual/introduction/whatsnew400?s[]=event&s[]=severity#problem_severity_can_be_changed
https://www.zabbix.com/documentation/3.4/manual/config/items/userparameters
It's pretty straight-forward... if you're using selinux then you could have to do some extra steps not covered in the docs.
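For reference, a user parameter is just one line in zabbix_agentd.conf mapping a custom key to a command (hypothetical key name; restart the agent after adding it):
UserParameter=custom.uptime.seconds,cut -d' ' -f1 /proc/uptime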
>much to be desired. That's why most people use another service, like Grafana to display data imported from Zabbix.
Zabbix 3.4 supports Elasticsearch for historical data storage - https://www.zabbix.com/documentation/4.0/manual/appendix/install/elastic_search_setup . This can be used to plug into Grafana.
https://www.zabbix.com/documentation/3.4/manual/vm_monitoring As per the link you posted,
>Currently only datastore, network interface and disk device statistics and custom performance counter items are based on the VMware performance counter information.
That seemed to work. I also see they have instructions on their website for 18.04: https://www.zabbix.com/documentation/3.4/manual/installation/install_from_packages/debian_ubuntu
Will continue later, as I don't have much time right now.
It doesn't; simple checks are executed by the Zabbix server (or proxies), they don't even need a Zabbix agent.
From the Zabbix wiki: "Simple checks are normally used for remote agent-less checks of services.
Note that Zabbix agent is not needed for simple checks. Zabbix server/proxy is responsible for the processing of simple checks (making external connections, etc)."
https://www.zabbix.com/documentation/3.4/manual/config/items/itemtypes/simple_checks#icmp_pings
So in this case you need to be able to ping the server that you want to check from the Zabbix server.
You could try taking a look at the Zabbix API. I've only ever used it to dump data from items but there is a section on Triggers.
https://www.zabbix.com/documentation/3.4/manual/api/reference/trigger
Current LTS https://www.zabbix.com/documentation/3.0/manual/installation/upgrade_packages/rhel_centos
Current stable https://www.zabbix.com/documentation/3.4/manual/installation/upgrade_packages/rhel_centos
Be sure to read through the upgrade notes.
Similar in that it's basically up and running. I used the ISO and ran through the installation guide below. It's mostly a wizard with a few text files to edit for timezones.
https://www.zabbix.com/documentation/3.0/manual/appliance
Since then I've made lots of tweaks to the configuration file but nothing too crazy.
The internal check "zabbix[host,<type>,available]" can do exactly what you want :)
https://www.zabbix.com/documentation/3.2/manual/config/items/itemtypes/internal#supported_checks
On your hosts/template, setup this item:
Type: Zabbix internal
Key: zabbix[host,agent,available]
Type of information: Numeric (unsigned)
Data type: Decimal
This item will return, for each host, this data:
0 - not available, 1 - available, 2 - unknown.
So, now you're able to create this trigger on your hosts/template:
{TEMPLATEorHOST:zabbix[host,agent,available].last()}<>1
This will trigger as soon as the previous item returns anything other than '1', which means your Zabbix server has detected the agent as "unavailable" or "unknown".
This internal check also supports snmp, ipmi and jmx.
You need to use Filters:
https://www.zabbix.com/forum/showthread.php?t=25634
and
https://www.zabbix.com/documentation/3.2/manual/discovery/low_level_discovery
Filters
A filter can be used to generate real items, triggers, and graphs only for certain file systems. It expects a POSIX Extended Regular Expression. For instance, if you are only interested in C:, D:, and E: file systems, you could put {#FSNAME} into "Macro" and "^C|^D|^E" regular expression into "Regular expression" text fields. Filtering is also possible by file system types using {#FSTYPE} macro (e.g. "^ext|^reiserfs") and by drive types (supported only by Windows agent) using {#FSDRIVETYPE} macro (e.g., "fixed").
You can enter a regular expression or reference a global regular expression in "Regular expression" field. In order to test a regular expression you can use "grep -E", for example:
for f in ext2 nfs reiserfs smbfs; do echo $f | grep -E '^ext|^reiserfs' || echo "SKIP: $f"; done
{#FSDRIVETYPE} macro on Windows is supported since Zabbix 3.0.0. Defining several filters is supported since Zabbix 2.4.0.
Note that if some macro from the filter is missing in the response, the found entity will be ignored.
If you have few hosts (below 5%), I'd go with changing the template to use a trigger threshold defined in a user macro on the template and then edit host-level macros for the hosts that have special circumstances.
https://www.zabbix.com/documentation/3.0/manual/config/macros/usermacros
Any more hosts than that and I'd consider re-evaluating if my catch-all template really does "catch all" and possibly split it into two or more templates that fit my needs better.
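As an illustration of the macro approach (hypothetical names): put {$CPU_LOAD_MAX} = 5 on the template, use a trigger like
{MyTemplate:system.cpu.load[percpu,avg1].avg(5m)}>{$CPU_LOAD_MAX}
and then define {$CPU_LOAD_MAX} = 10 on the handful of hosts that need a looser threshold.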
The number you receive when polling ifHCInOctets (OID: 1.3.6.1.2.1.31.1.1.1.6) is in bytes: it counts octets, i.e. groups of 8 bits.
http://www.oid-info.com/cgi-bin/display?oid=1.3.6.1.2.1.31.1.1.1.6&submit=Display&action=display > "The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets."
You're dealing with bytes (octets of bits) and you wish to measure the speed of bits transferred over X amount of time in seconds.
First of all make sure your "ifHCInOctets" items are using a custom multiplier of "8". SNMP reports values in octets, so by multiplying with "8" we convert that to "bits" instead.
Then you use a unit of "b". This tells Zabbix to use multiples of 1,000 when calculating Kb/s, Mb/s, Gb/s and so on. Usually when presenting speeds, multiples of 1,000 are used. When describing storage capacities, multiples of 1,024 are used, as well as using bytes instead of bits.
Your item(s) should also be set to "Store value: Delta (speed per second)". Zabbix will then automatically compare current and previously received values and store them as bits per second. When presented on graphs, correct prefixes are added, K, M, G, etc.
For the trigger, when dealing with units, Zabbix treats "m" and "M" very differently. "m" is "minutes" and "M" is "Mega". You can read more about that here: https://www.zabbix.com/documentation/3.0/manual/config/triggers/suffixes
Your correct trigger would then be: {ifHCInOctets[FastEthernet0/1].min(15m)}>1.9M
This would trigger if all values received for the past 15 minutes are above 1,900,000 bits (per second).
Trigger expression would be something like:
{host:item.delta(300)}=0
where 300 is the time in seconds and 'item' is your network traffic item.
Thanks for the links. I tried making /etc/odbc.ini like so, but it does not work:
[testdb]
Driver = ODBC Driver 17 for SQL Server
Server = tcp:192.168.100.103,1433
Then I test with isql
$ isql -v testdb
[28000][unixODBC][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user ''.
[ISQL]ERROR: Could not SQLConnect
I tried adding UID=zabbix in /etc/odbc.ini but I get the same error. Port 1433 is open and it is listening on that port:
nmap 192.168.100.103 -p 1433
Starting Nmap 6.40 ( http://nmap.org ) at 2020-09-02 22:00 UTC
Nmap scan report for 192.168.100.103
Host is up (0.00027s latency).
PORT     STATE SERVICE
1433/tcp open  ms-sql-s
Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
We've dedicated a portion in our book (https://www.amazon.com/Zabbix-Infrastructure-Monitoring-Cookbook-maintaining-ebook/dp/B09M6VYG1P/) to this widget as well (and the other dashboard improvements of course)
What a nice improvement this is; I'm sure quite a few users will benefit from it 👌👌👌
> are you getting values in your entPhysicalTable that are not in the cefcFanTrayOperStatus ?
Of course: entPhysicalTable contains all hardware components in a switch (power supplies, slots, linecards, transceivers etc.) while cefcFanTrayStatusTable only contains fans.
Here is how a complete entPhysicalTable dump looks for a Cisco Nexus 3000 class switch: https://hastebin.com/ejapibokub.rb
<mode> is not a supported parameter; it says so right in the error.
Either select one of the modes available as per: https://www.zabbix.com/documentation/current/en/manual/config/items/itemtypes/zabbix_agent
OR
Leave it blank and allow the default to be used.
vfs.fs.size[fs,]
As per bottom of this page: https://www.zabbix.com/documentation/current/en/manual/web_interface/frontend_sections/configuration/hosts#reading_host_availability
Most likely cause is you have no Passive items assigned to that Host.
Good to know.
I am however running into an identical problem with a trigger tag that I can't wrap my head around. This forum post seems to adequately explain the problem, but of course offers no resolution.
I was about to type up a very long troubleshooting account with screenshots and everything, but I tested it again this morning after resetting the server and it's now working as intended. My guess is the host tags will work now too, but I don't know if I am going to bother testing it.
Thanks for chiming in.
Haha, ok, I was thinking I was missing something obvious here :D
To clarify: it's not global vs. user. It's user macro at the global level vs. user macro at the host level. Simply put, you can define a global macro in "Administration -> General -> Macros" and it will be applied to all hosts. But if you want to modify the macro value for a single host, you can go to Host -> Macros and define the same macro with a different value. A host-level macro beats a global-level macro.
Macros: arcane secrets can be dived into here ;)
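For example (a macro you'll often see in practice):
Administration -> General -> Macros: {$SNMP_COMMUNITY} = public
Host -> Macros on one special host: {$SNMP_COMMUNITY} = special-community
That one host will use its own value; everything else falls back to the global one.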
I don't believe that user defined macros can be used on a host for tags.
Looking at this page, it shows that user defined macros can be used in tags for templates, but doesn't mention anything about hosts.
https://www.zabbix.com/documentation/4.0/en/manual/appendix/macros/supported_by_location_user
I recall one of the Zabbix trainers during my course mentioning something like this.
Someone used a script to update a Zabbix global macro every x minutes: https://www.zabbix.com/documentation/current/en/manual/api/reference/usermacro/updateglobal
So then monitoring could use that key to connect. Can't recall the details at this point in time. I'll need to dig.
I probably can. Self-hosted Veeam instance? We have one. Here are the ones included on the vendor page: https://www.zabbix.com/integrations/veeam. None of them are API-based though; it looks like they need a local agent or SMTP to pull stats.
If I’ve got time I’ll see if I can figure something out in the next week or so.
> I'm not familiar with discovery scripting and rules but will do some reading up on it.
Zabbix has a bunch of this built in already for things like NICs, filesytems, you can do a little fenagling to make it work with databases, and a bunch of others. For this though you'd likely need to write a Linux script that returns a list of usernames and puts them into a JSON data structure in the right format. Something like:
[
  { "{#USERNAME}": "root" },
  { "{#USERNAME}": "sally" },
  { "{#USERNAME}": "jim" },
  { "{#USERNAME}": "bob" }
]
Using that and https://www.zabbix.com/documentation/5.0/en/manual/discovery/low_level_discovery you should be able to achieve what you want.
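A rough sketch of such a script, assuming you simply want every account from /etc/passwd (in practice you'd probably filter on UID range or shell):
#!/bin/bash
# Emit Zabbix LLD JSON with one {#USERNAME} entry per local account.
echo -n '['
awk -F: 'NR > 1 { printf "," } { printf "{\"{#USERNAME}\": \"%s\"}", $1 }' /etc/passwd
echo ']'
Hook that up as a UserParameter or external check and point the discovery rule at it.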
Unfortunately Zabbix can't pick out particular users with the system.cpu.* item keys. The 'user' parameter you're using is a kernel process state, it doesn't necessarily refer to specific user ids on the system.
You might be able to spin something with proc.cpu.util from what I'm reading on https://www.zabbix.com/documentation/current/en/manual/config/items/itemtypes/zabbix_agent and looking at the proc.cpu.util documentation, I've admittedly never tried.
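For example (hypothetical user name), something like this returns the summed CPU utilisation of all processes owned by that account, since the second parameter of proc.cpu.util filters by process owner:
proc.cpu.util[,alice]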
Now that I think about it, I wonder if some discovery scripting & rules would do just this...
As per: https://www.zabbix.com/documentation/5.0/en/manual/api/reference/event/get
Did you set the selectHosts parameter?