You have to set up Loki properly. Grafana does not store any data; Loki does. You need to feed that log file to Loki using Promtail.
See: https://grafana.com/docs/loki/latest/getting-started/get-logs-into-loki/
Personally I wouldn't recommend doing it this way. Promtail has a syslog listener. You should have syslog-ng send the logs to Promtail, and then have Promtail forward them to Loki:
https://grafana.com/docs/loki/latest/clients/promtail/scraping/#syslog-ng-output-configuration
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#syslog
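If it helps, here's a minimal sketch of the Promtail side (the listen address, port, and labels are assumptions; point syslog-ng's destination at that address/port per the syslog-ng output doc above):

```
scrape_configs:
  - job_name: syslog
    syslog:
      # Promtail listens here for syslog messages forwarded by syslog-ng
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
    relabel_configs:
      # Keep the sending hostname as a queryable label
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
```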
Grafana panels are made with JavaScript, so if you can find someone who's built the tachometer in JavaScript, you might be able to adapt it into ReactJS and a Grafana panel plugin.
Edit: A quick Google search turned up at least one similar project.
Here it says:
>Where the section name is the text within the brackets. Everything should be uppercase, . and - should be replaced by _.
So in your case it should be: GF_PLUGIN_MARCUSOLSSON_CSV_DATASOURCE_ALLOW_LOCAL_MODE=true
The Graph panel (among others) has an option that alters how it deals with null values, as outlined here.
The one you’re looking for is called ‘Null Value’, and you’d need to change it to either “null” or “null as zero”, as it’s likely set to “connected” right now.
Hi! Here's the doc for the time range controls: https://grafana.com/docs/grafana/latest/dashboards/time-range-controls/
Query options are explained here: https://grafana.com/docs/grafana/latest/panels/queries/#query-options
Hope this helps :)
WordPress seems a little "overkill" for just creating a few pages to embed Grafana graphs in. Maybe something like this might be better: http://picocms.org/
I've had no experience with it, but it seems like it should do the trick and everything is stored in flat files (no databases).
You need to research Telegraf and InfluxDB at a minimum; I suggest the full TICK stack. You can start here: https://www.influxdata.com/time-series-platform/telegraf/
I don't have experience with Lansweeper, but I do use MSSQL as a data source in my Grafana instance.
Lansweeper seems to have documentation on the table layout:
What I usually do is open up a SQL Server Management Studio session next to Grafana so I can inspect the tables and table layout and then craft queries from there. Once I get a result I think I can use, I copy it over to Grafana. Rinse, refine, repeat.
Hope this helps at least a bit.
That sounds like you want to rewrite the timestamp in Promtail. That will make the timestamps match the in-game progression, not time of collection. Though it might be displayed as starting from 1970.
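Something like this Promtail pipeline might do it; this is a rough sketch, and the log path, regex, and timestamp format are all assumptions about your log layout:

```
scrape_configs:
  - job_name: game
    static_configs:
      - targets: [localhost]
        labels:
          job: game
          __path__: /var/log/game.log
    pipeline_stages:
      # Pull the in-game timestamp out of the start of each line (pattern assumed)
      - regex:
          expression: '^(?P<ts>\S+)'
      # Use it as the entry's timestamp instead of the collection time
      - timestamp:
          source: ts
          format: RFC3339
```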
Also, here are the docs for metric queries.
This is documented here under Step 9.
You specify the Grafana role for a group in the manifest using the "value" property.
> Is it possible to get all 10k lines from 1 stream ?
In the grafana UI, no. But you can use the logcli tool to do a larger query and just up the limits.
https://grafana.com/docs/loki/latest/getting-started/logcli/
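Something along these lines (the label selector is just an example):

```
logcli query '{job="mygame"}' --limit=10000 --since=24h
```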
The home dashboard is editable and copyable, and you can use any other dashboard you want as your home dashboard, your team's home dashboard, or your organisation's home dashboard!
I would switch to openwrt, and use this guide instead:
Also, you could use Netdata, scrape it with Prometheus, and use remote write to send the data to the cloud.
I personally prefer prometheus as a data source
Basically Grafana doesn't actually store any metric data - it is a visualisation tool. So you need to decide first which data source you would like to go for. Influx is completely fine, depending on what you want to do.
Check this out https://grafana.com/docs/grafana/latest/datasources/
It has a list of the ones you can use.
I don't think that's possible. The information on why it is not possible is a little bit spread out between two places:
If you look in Loki's best practices, there's a recommendation not to do that. It's a bit counter-intuitive, but it comes from Loki being a very different tool compared to Elasticsearch: Elasticsearch is optimized for great indexing capabilities, whereas Loki is optimized to reduce storage costs and sift through large amounts of unindexed data quickly. Because of these design differences, having too many indexes can end up hurting Loki's performance.
Instead, you'll want to parse the JSON at query time using LogQL.
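For example, something like this (the label and field names are made up):

```
{job="myapp"} | json | status_code="500"
```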
No, but if you put the password in a file you can configure Grafana to read it from that file instead of from the configuration, see https://grafana.com/docs/grafana/latest/administration/configuration/#variable-expansion
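For example (the section and file path are just illustrative):

```
[database]
password = $__file{/run/secrets/grafana_db_password}
```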
You shouldn't have to change that part of the config; just paste it as is. By entering bogus information (the IP address) you broke the config.
> Some Objects do not have instances to select from at all. Here only one option is valid if you want data back, and that is to specify Instances = ["------"]
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters#instances
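For reference, this is roughly what such an object looks like in telegraf.conf (the Memory example is adapted from that README):

```
[[inputs.win_perf_counters.object]]
  ObjectName = "Memory"
  Counters = ["Available Bytes", "Pages/sec"]
  # Required when the object has no instances to select from
  Instances = ["------"]
  Measurement = "win_mem"
```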
By the way, there is an existing dashboard for Hyper-V: https://grafana.com/grafana/dashboards/2618
You should add the config shown there to the config supplied with Telegraf (it also 'wants' the other metrics).
Not sure if you're still stuck on this or not, but I like to use reverse proxies with Apache to put all of my services with weird ports on subdomains of my website. I.e., instead of typing "mywebsite.com:3000" or the like, I go to "grafana.mywebsite.com" and Apache acts as a middle man: it fetches Grafana locally and sends it to the web client over port 80, or 443 in my case since I force SSL. It reduces your open ports, lets you use SSL for services that don't natively support it, and makes accessing things remotely much easier to remember. DigitalOcean has a great article on this, and if you need help you can message me and I'll do my best.
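For what it's worth, a minimal sketch of such a vhost (assuming Grafana on localhost:3000 and mod_proxy, mod_proxy_http, and mod_ssl enabled; certificate paths are placeholders):

```
<VirtualHost *:443>
    ServerName grafana.mywebsite.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/mywebsite.crt
    SSLCertificateKeyFile /etc/ssl/private/mywebsite.key
    # Forward everything to the local Grafana instance
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```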
What's the scrape interval in Prometheus? A range vector function won't work if the data points are farther apart than the range selector duration. Grafana guesses that your scrape interval is 15 seconds by default (this can be set in the data source configuration) and $__rate_interval is a calculation based on the scrape interval.
https://grafana.com/docs/grafana/latest/datasources/prometheus/#using-__rate_interval
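i.e. a query shaped like this (the metric name is just an example):

```
rate(node_network_receive_bytes_total[$__rate_interval])
```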
Not being an expert, I would think you'd set a threshold as the max for that period, with the live value being, you know, the instantaneous value.
https://grafana.com/docs/grafana/next/visualizations/gauge-panel/
Grafana Enterprise License Restrictions
"License URL is the root URL of your Grafana instance. The license will not work on an instance of Grafana with a different root URL."
So basically, you need a license per unique URL.
You need to add a second data source, connecting to it as a Flux data source:
https://grafana.com/docs/grafana/latest/datasources/influxdb/influxdb-flux/
https://www.influxdata.com/blog/how-grafana-dashboard-influxdb-flux-influxql/
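A Flux query in that data source then looks something like this (bucket, measurement, and field names are placeholders; v.timeRangeStart/v.timeRangeStop come from the dashboard time picker):

```
from(bucket: "mybucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
```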
I haven't really played around with S3, so I can't speak to the speed. As far as costs though, if you're running this on prem, you can stand up Minio, which is S3 compatible object storage. I was planning on trying this very thing out on my homelab this weekend. Grafana's docs have an example here, where it even mentions using Minio.
I suppose you could run both in containers on the same system, which would remove some of the slowdown from running it between two systems on a network.
Thanks! I just wrote a comment explaining a bit more what was my intent with the dashboard. As to how I’m pulling the data, I use
I think that’s it :)
For a workstation, if you want the essentials, first try out Node Exporter. On top of that, use this dashboard: https://grafana.com/grafana/dashboards/12542
It shows the battery level metrics exposed by the kernel.
Are you referring to this... Geomap?
I haven't used this panel in Grafana yet, but I have in Kibana. You'll need to convert the source IP into some geographical coordinates (latitude & longitude).
From the description...
>The Geomap panel needs a source of geographical data. This data comes from a database query, and there are four mapping options for your data.
Not sure how the output is structured (or if I understood your problem correctly), but if the JSON contains the time series you want to represent, you can use the following plugin to query JSON directly:
https://grafana.com/grafana/plugins/marcusolsson-json-datasource/
What version of Grafana are you running? I have a 8.3.X instance running and there is a setting in the Y Axis called Decimals which lets you override the decimal precision for the axis. If you set that to 0, it won't read out any decimal places, that might get you what you want.
You could also use a separate panel altogether for the on/off stat. The "State Timeline" panel works incredibly well for this.
https://grafana.com/docs/grafana/latest/visualizations/state-timeline/
did you get things working? if not...
for your bind_dn line, use %s (it gets replaced with the username the user enters at login).
like this:
bind_dn = "domain name\\%s"
bind_dn = "contoso\\%s"
also, we comment out the bind_password line and force users to input their password. try this...
#bind_password = """somepassword"""
You aren't just trying to get the rate of the log messages, correct? I'm sure you've read this, but you can use the 'rate' function to do that.
For metric based queries, the documentation on the Loki site is a little confusing but helpful:
https://grafana.com/docs/loki/latest/logql/metric_queries/
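e.g. something like this, where the label and match string are placeholders:

```
sum(rate({job="myapp"} |= "login failed" [5m]))
```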
If you don't want a straight rate but a numeric representation of the difference between a set point in time and the timestamp in the log, you could try "unwrapping" the timestamp and subtracting it from your starting time. I am not positive that will work; just an idea.
Without some details about the log format, it is difficult though. I've found a lot of this to be trial and error.
You can use the Enterprise installer without paying; without an Enterprise license it just runs as the OSS version.
The installer page actually recommends this:
https://grafana.com/docs/grafana/latest/installation/debian/
> Note: Grafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. You can also upgrade to the full Enterprise feature set, which has support for Enterprise plugins.
Thanks everyone for the comments. It turned out I was using the wrong dashboards. All the ones labeled Windows Node Exporter didn't work because they weren't grabbing any of the Prometheus inputs. I used this one:
https://grafana.com/grafana/dashboards/14510
And it worked! Now I can see all the info that's being collected by Prometheus.
What I don't get is why the others did not work. Why didn't they read the information from the exporters on all the Windows nodes? Is there a way to configure them? All the other dashboards were configured with the same Prometheus data source, so I'm not sure why they failed. How do we configure each dashboard so it reads the correct sources from Prometheus?
My advice: make sure the persistent volumes are indeed persistent. I'm not using IBM's cloud, so I cannot tell you how to do this there.
Also, get rid of /plugins, as it just adds confusion. If it's a file store for plugin archives (e.g. tar or zip files), keep them in a different volume, or at least a different sub-path. You might have tools that update those files, and that might wipe out Grafana's plugins. A separate volume avoids this potential problem.
Last point: Try to find out when exactly plugins disappear. They should never do that. The volume should never be deleted either (as per https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). So if they do disappear, is the volume being deleted? Is something cleaning it up?
Also check your log messages. Most debugging is finished quickly when you look at the correct logs. Of course the problem is to find "correct"...
Check the following page for configuration settings. It's typically an ini file you'll need to modify.
https://grafana.com/docs/grafana/latest/administration/configuration/
Loki is not a syslog listener; it has an HTTP-based API which uses the JSON format. Loki is not meant to be a log collector, it is a log aggregator, and it requires a Loki client to send it log streams in a specific JSON format. The syslog protocol is not based on HTTP; it's its own network protocol. https://en.m.wikipedia.org/wiki/Syslog https://grafana.com/docs/loki/latest/api/#post-lokiapiv1push
Promtail is really needed here.
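Just to illustrate the push format (host and labels are placeholders; note the timestamp is nanoseconds since epoch):

```
curl -X POST http://localhost:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  -d '{"streams": [{"stream": {"job": "test"}, "values": [["1609459200000000000", "hello from curl"]]}]}'
```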
I'm trying to do something similar using the template function regexReplaceAll or regexReplaceAllLiteral. I haven't gotten any of the template functions to work though.
https://grafana.com/docs/loki/latest/logql/template_functions/#regexreplaceall-and-regexreplaceallliteral
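For anyone else trying: the shape I'd expect it to take is something like the following sketch, where the JSON field name is made up (regexReplaceAll takes the regex, then the input, then the replacement):

```
{job="myapp"} | json | line_format `{{ regexReplaceAll "[0-9]{4}" .card_number "****" }}`
```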
Graylog uses Elasticsearch behind the scenes. You need to construct queries that extract the information you want to dashboard.
It’s hard to give any explicit advice, because it entirely depends on what you want to chart.
https://grafana.com/docs/grafana/latest/datasources/elasticsearch/
I really like the speedtest-cli component of this (https://github.com/sivel/speedtest-cli/). I've been looking for something like this that interfaces with public speedtest services for a while. Thanks for posting!
>but idk if it'll work
You'll have to try to find out. The README is very well written; it lists all the commands to get you started. No worries, you can't really break anything.
Ah, one thing I'm not sure if you've noticed yet:
This pihole-exporter is a Prometheus exporter. It serves the metrics as an HTTP page, which a Prometheus instance has to collect. Grafana is only a visualiser, not a storage layer; you need a data source like Prometheus to feed it data.
So you also have to set up Prometheus next to Grafana and the exporter.
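The Prometheus side is just a scrape config pointing at the exporter, something like this (the port is an assumption; use whatever your pihole-exporter listens on):

```
scrape_configs:
  - job_name: pihole
    static_configs:
      - targets: ['localhost:9617']
```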
> or even if it'll auto run when I boot up the pi
https://docs.docker.com/config/containers/start-containers-automatically/
Or download the binary and write a systemd service file.
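A minimal unit file sketch, assuming the binary lives at /usr/local/bin/pihole_exporter:

```
[Unit]
Description=Pi-hole Prometheus exporter
After=network-online.target

[Service]
ExecStart=/usr/local/bin/pihole_exporter
Restart=always

[Install]
WantedBy=multi-user.target
```

Then enable it with systemctl enable --now pihole_exporter.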
> like I said I'm a beginner and have little to no experience with docker and such
Sure, but we can only answer your specific question. Nobody here will have time to give you a complete introduction to Linux, system administration, config management, containers generally, Docker specifically, Prometheus, and Grafana.
You'll have to read the respective docs for each to find out how it all works. It will take weeks to understand properly; containers especially are a complex topic. But it's doable and you'll learn a lot. If you have specific questions to which you couldn't find the answer using a search engine, you can post them and people might be able to help you.
No, I know. I'm just saying the number of query functions (https://prometheus.io/docs/prometheus/latest/querying/functions/) in Prometheus still greatly outnumbers what is available today in Loki. So if you want to do advanced math on the metric values, there is a chance it won't be possible in Loki yet.
Consul might be one way to do service discovery, but there are quite a few other possibilities.
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config
should give you a clue.
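A bare-bones example from the Prometheus side (the Consul address is a placeholder):

```
scrape_configs:
  - job_name: consul-services
    consul_sd_configs:
      - server: 'localhost:8500'
    relabel_configs:
      # Keep the Consul service name as a label
      - source_labels: ['__meta_consul_service']
        target_label: 'service'
```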
You are overcomplicating this. Prometheus has several prebuilt functions that take care of counter resets: https://prometheus.io/docs/prometheus/latest/querying/functions/#increase
> increase(v range-vector) calculates the increase in the time series in the range vector. Breaks in monotonicity (such as counter resets due to target restarts) are automatically adjusted for.
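So for a counter, something as simple as this handles restarts for you (the metric name is just an example):

```
increase(http_requests_total[1h])
```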
Grafana is just a dashboard. I assume you mean you'd like to monitor services with Prometheus?
Like others have said, there's the Windows Exporter to export perfmon counters. There are loads more Prometheus exporters. Hopefully you can find contributed exporters for the services you are deploying--see here for a more complete list: https://prometheus.io/docs/instrumenting/exporters/
It appears you are using label values to store metrics...that's not how Prometheus should be used. You should read through the docs and change how you are storing data.
I'd suggest the following:
1. Set up a Telegraf + InfluxDB configuration.
2. Set up a Telegraf http_listener_v2 input and set the data format to JSON.
3. Alter the PS script and dump the table using the ConvertTo-Json cmdlet (format the object as instructed in the docs).
4. POST the JSON content in the request body to the Telegraf URL with the Invoke-WebRequest cmdlet (see the sketch below).
5. Set up Grafana to read from InfluxDB.
Further instructions here: https://www.influxdata.com/integration/http-listener-v2/
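A rough sketch of step 4 (the URL/port must match the http_listener_v2 section in your telegraf.conf, and the object being serialized is just an illustration):

```
# Serialize some object as JSON and POST it to Telegraf's HTTP listener
$body = Get-Process | Select-Object -First 5 Name, CPU | ConvertTo-Json
Invoke-WebRequest -Uri "http://localhost:8186/telegraf" -Method Post -Body $body -ContentType "application/json"
```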
Hope it helps.
This is a fairly straightforward setup. Configure Telegraf to poll your kit with SNMP and write to a DB in Influx. Then add that DB as a data source in Grafana and you can start drawing the graphs.
This guide from Influx might help you get started with the SNMP polling. https://www.influxdata.com/blog/monitor-your-snmp-devices-with-telegraf/
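A stripped-down sketch of the Telegraf side (the agent address and community string are placeholders):

```
[[inputs.snmp]]
  agents = ["192.168.1.1:161"]
  version = 2
  community = "public"

  # Poll a single scalar value as an example
  [[inputs.snmp.field]]
    name = "uptime"
    oid = "RFC1213-MIB::sysUpTime.0"
```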
Write a Python script to grab the data and write it into your InfluxDB, then set it up as a cron job. Here's an article that explains how to use Python to write to InfluxDB.
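As a rough sketch with the influxdb client library (the database, measurement, and field names are made up):

```
from influxdb import InfluxDBClient

# Connect to the local InfluxDB instance (host/port/database are placeholders)
client = InfluxDBClient(host="localhost", port=8086, database="mydb")

# Each point is a measurement with one or more fields
client.write_points([
    {
        "measurement": "speedtest",
        "fields": {"download_mbps": 87.3, "upload_mbps": 11.2},
    }
])
```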
An alternate option, to expand on what @sup3rlativ3 mentioned, would be the SNMP Telegraf plugin, pulling the metrics over SNMP. It is lightweight, and you can get a lot of good data just from the default MIB sets for Linux. Syslog will not necessarily log things like disk space, load, or Ethernet utilization, so it also depends on what you want to graph on your dashboard.
You need to send data to your database, and you can do this in various ways. One method is using Telegraf.
But I would recommend you look up some guides.
http://startrinity.com/InternetQuality/InternetConnectionMonitor.aspx
https://www.softperfect.com/products/networx/
I am not sure if these will help, but they are what I use to check connectivity (the first link) and Ethernet speed (the second link).
I don't know if they have export options to send data to a DB.
I think you are referring to this: https://grafana.com/docs/grafana/latest/variables/variable-types/add-custom-variable/
In your case, I guess you want the user to be able to select one or more ranges like <10.0µm, but use the value particulate_matter_10_0um_concentration in your query/panels instead.
So you would enter it like this in the "Values separated by comma" textbox:
<10.0µm : particulate_matter_10_0um_concentration,<2.5µm : particulate_matter_2_5um_concentration,<1.0µm : particulate_matter_1_0um_concentration
The key is what is shown to the user; the value is what gets substituted into the query where you use the variable.
Note that the format for a key/value pair is key : value, with a space before and after the colon.
Also, if you allow the user to select multiple values, there are other options you can use with the variable to extrapolate it into the right list format:
https://grafana.com/docs/grafana/latest/variables/advanced-variable-format-options/
Hi,
I don't understand if you only want to edit that query, or if you want to be able to use that query with that variable.
Did you configure a variable "server" to be able to do a selection on it?
https://grafana.com/docs/grafana/latest/variables/variable-examples/
Hi, I don't think there is an easy way to do that. Maybe insert users/data directly into the backing database (MySQL, Postgres)? But for professional usage I would suggest a central authentication service (LDAP, OAuth), which can be set up in the config file.
The only thing that can be done via the CLI is setting a new admin password: https://grafana.com/docs/grafana/latest/administration/cli/?pg=docs#reset-admin-password
Hey, for this type of query you'd need to reduce the series from queries A and B using the Reduce expression. After that you should be able to add another expression (Math, in this case) that is $A + $B > 95.
You can also do provisioning for datasources: https://grafana.com/docs/grafana/latest/administration/provisioning/
We're also saving dashboard JSON in Git, and we use both dashboard and datasource provisioning to get consistent dashboards on all our platforms (with a bit of Ansible for platform-dependent variables in datasources, like IP addresses and URLs).
If you use provisioning for dashboards, you can't save your edits in the GUI (that's what you normally expect, since you want to stay consistent with your Git repo), but you can "Save as..." a copy of your work in the GUI, edit it, and then replace your "old" JSON file with the new content. One thing to take care with is the dashboard uid: you can edit it to whatever you want, but keep it unique and consistent in your repo.
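For reference, a datasource provisioning file is just a small YAML dropped into provisioning/datasources/, along these lines (the name and URL are placeholders):

```
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```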
Great idea! Since grafana already has [a bunch of different notification services](https://grafana.com/docs/grafana/latest/alerting/old-alerting/notifications/) it would be very cool to add Matrix to these.
I think it should be pretty feasible to just do through prometheus-alertmanager. Alertmanager already has a Matrix integration that looks pretty decent (haven't tried it).
I work in a company that uses discord for internal communications; so for my own usage, posting notifications to discord is enough (because I bridge those discord rooms to my Matrix account).
Yeah, as far as I know the pie chart and world map plugin were not built into this old version.
You should update (the latest release was a few days ago, something like 8.1.*), or install the plugins via the CLI:
https://grafana.com/grafana/plugins/grafana-piechart-panel/
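The CLI route is a one-liner (restart grafana-server afterwards):

```
grafana-cli plugins install grafana-piechart-panel
```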
I would go the first way (update), because you get some cool new features in version 8!
You can use Grafana's documentation on that:
https://grafana.com/docs/grafana/latest/administration/configure-docker/
Does it all need to be within one pre-built Grafana image? And do you know how to build Docker images?
You will have to tighten up your dashboards so they fit the screen, or use several smaller dashboards with Grafana's playlist functionality: https://grafana.com/docs/grafana/latest/dashboards/playlist/
What about a playlist?
Create a couple of dashboards and rotate between them. You can always duplicate very important stats on all dashboards.
https://grafana.com/docs/grafana/latest/dashboards/playlist/
You can use webhook notifications and write your own piece of middleware that notifies you on local network somehow.
But how are you going to send stuff to your phone from the local network with no internet? Does your boat host its own telco platform of sorts?
Grafana 8 requires plugins to be signed now. You can start here for some information:
https://grafana.com/docs/grafana/latest/plugins/plugin-signatures/
Including details on how to sign your custom plugin or modify your configuration to selectively allow unsigned plugins (though not recommended.)
What you could do is install Grafana Loki and tail Grafana's own log file via Promtail.
Then you can query the log file for 'Successful Log.*' and visualize that.
Here's an example
> default_home_dashboard_path
> Path to the default home dashboard. If this value is empty, then Grafana uses StaticRootPath + “dashboards/home.json”
> static_root_path
> The path to the directory where the front end files (HTML, JS, and CSS) live. Defaults to public, which is why the Grafana binary needs to be executed with the working directory set to the installation path.
https://grafana.com/docs/grafana/latest/administration/configuration/
hmm ok, that must have changed in the last couple of versions... here's the doc:
https://grafana.com/docs/grafana/latest/administration/preferences/change-home-dashboard/
You need admin permissions, or set it in the ini file as default_home_dashboard_path.
You can use this: https://grafana.com/grafana/plugins/cloudspout-button-panel/
Have that hit an API to run your shell command. I'm doing something similar using home assistant and my security cameras, but not through grafana. Might look into this myself.
You could use the grafana kiosk mode to achieve this: https://grafana.com/blog/2019/05/02/grafana-tutorial-how-to-create-kiosks-to-display-dashboards-on-a-tv/
Running an nginx URL redirect in front of this could help you keep the URL pretty and small as well.
See https://grafana.com/docs/grafana-cloud/billing-and-usage/metrics/ for an explanation of how metrics usage is determined. It's different from InfluxDB.
Thank you! It's definitely the GROUP BY. I began playing with the queries and found the first query below to produce an aggregate that doesn't make sense, but the second query (without the GROUP BY) shows the same results as the phone (which is doing some simple daily/weekly/monthly aggregation).
I'm reading through the docs on this, but if someone has a good ELI5 I'd be eager to read it.
SELECT sum("count") FROM "activity_step_count" WHERE ("source" = 'iPhone') AND time >= now() - 7d GROUP BY time(10m) fill(null)
SELECT sum("count") FROM "activity_step_count" WHERE ("source" = 'iPhone') AND time >= now() - 7d
Please forgive the copypasta here, but, lots of people have asked for my setup across a couple different subreddits and I wanted to make sure those that asked got to see the response.
tl;dr BBQ Guru CyberQ -> Linux VM for SNMP -> influxdb -> Grafana
The software I'm using to display the data is called Grafana (https://grafana.com/). For me, it's part of a fairly large home automation setup I have, so if anything seems a bit much for "just monitoring a smoker" just know that it's a VERY small part of a much bigger solution.
In this case, I have a Big Green Egg where I use a BBQ Guru CyberQ (the original one, not the cloud version) for temperature control. I don't believe they make this specific version anymore. But the controller's web server provides an XML file with current state data.
I have some Linux VMs running, which have custom cron jobs that pull data from different sources and make them accessible via SNMP. Using that method I can pull data from the BGE over SNMP. This is always running, even when I'm not actively cooking.
From there, it's a matter of getting the data into influxdb and building the graphs in Grafana, which is standard for Grafana.
I'd be happy to answer any questions or share some of my scripts if anyone's interested.
Use the Grafana annotations HTTP API to write alerts as annotations. Just make sure to summarize, as opposed to dumping 100 copies of the same alert in.
https://grafana.com/docs/grafana/latest/http_api/annotations/
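A sketch of such a call (the host, API key, dashboard/panel IDs, and summary text are all placeholders; note the epoch time is in milliseconds):

```
curl -X POST http://localhost:3000/api/annotations \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"dashboardId": 42, "panelId": 1, "time": 1609459200000, "tags": ["alerts"], "text": "17 disk alerts in the last hour (summarized)"}'
```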
Grafana has a SQLite DB within it; that is the default internal DB where annotations and other items are stored, outside of the time series DB you are using. You can opt to configure an external DB of another type, but all of that is irrelevant to passing alerts into annotations on the Grafana dashboard.
You can also choose tags for an annotation.
Ignore me I've just noticed a Windows Event Log section in the promtail documentation. This should keep me busy today https://grafana.com/docs/loki/latest/clients/promtail/scraping/
I've used Oracle, AppDynamics, Datadog, Lightstep, Splunk, ServiceNow, but there is also Snowflake, Jira, Wavefront, etc.
You can see the premium/enterprise plugins here: https://grafana.com/grafana/plugins?enterprise=1
To avoid confusion: if you plan to go Telegraf + Influx, you won't need Prometheus; if you want to go the Prometheus way, you need node_exporter instead of Telegraf and write to Prometheus. Grafana is used in both cases for visualisation. Look into the grafana.com dashboard library; there are plenty of examples for both stacks.
Personally I prefer the Influx way, because the query language is more intuitive and long-term storage is not an issue.
This is where my understanding starts to wane on how Grafana works. I understand the concept of variables and how they work with the query in the singular sense. I don't know how grafana handles adding multiple selected items into an InfluxQL query.
You might be able to find some help here
https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/
I would suggest trying to get the data you're seeking through a regular InfluxQL query (via the influx CLI or a tool like Chronograf). Once you can confirm you can query the data directly, then work that query into Grafana.
It appears you are very close! Hopefully you can find what you need.
Are you using journal to bring the logs in? It has a 'max_age' option you can use (see https://grafana.com/docs/loki/latest/clients/promtail/configuration/)
The default is 7h. This is basically to protect people from bringing in old logs if that's not what they wanted, but you can increase the max age to whatever you want.
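For reference, a journal scrape block looks roughly like this (the labels are placeholders):

```
scrape_configs:
  - job_name: journal
    journal:
      max_age: 24h
      labels:
        job: systemd-journal
    relabel_configs:
      # Keep the systemd unit name as a queryable label
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
```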
Have a look at the community dashboards-- if you are using some of the standard exporters (Node, WMI, etc.) you might find a good starting point (or inspiration).
Yes, there's nothing stopping you from downloading the Grafana Enterprise binary and running it without a license. It will tell you it's not licensed though which might be annoying.
The Enterprise features won't function so there's not much point in doing it unless you plan to buy a license later on, but even if you did, it's trivial to swap the OSS binary for the Enterprise one!
Unlicensed Enterprise is equivalent to OSS, feature wise.
I have used this plugin and it works fine.
https://grafana.com/grafana/plugins/alexanderzobnin-zabbix-app
You need to install the plugin first, then enable it.
You can use the http api to do it
https://grafana.com/docs/grafana/latest/http_api/dashboard/
It takes a little goofing around to get it working though; the create/update endpoint isn't super straightforward.
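Roughly this shape (host, token, and dashboard body are placeholders; set overwrite to true to update an existing dashboard):

```
curl -X POST http://localhost:3000/api/dashboards/db \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"dashboard": {"id": null, "uid": null, "title": "Generated dashboard", "panels": []}, "folderId": 0, "overwrite": false}'
```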
There's also a table here: https://grafana.com/products/enterprise/#features
Mostly we focus Grafana Enterprise on integrations between Grafana and commercial datasources - if you're paying them, we'd like you to also pay us ;-)
(NB I work at Grafana Labs)
Install Telegraf, cd to the directory where it installs, and edit telegraf.conf (e.g. with vi).
Input something like the example on this page: https://www.howtoforge.com/tutorial/how-to-install-tig-stack-telegraf-influxdb-and-grafana-on-ubuntu-1804/
Look under the Telegraf section.
It looks like there are specific inputs for the collector based on the link you originally posted. Look at the collection configuration details. https://grafana.com/grafana/dashboards/10578
You'll have to read up a bit; I would need to type in the code block with markdown and I don't have the time at the moment. ;)
Few ideas here to give you some inspiration if you're not sure what to chat about: https://grafana.com/blog/2020/09/14/observabilitycon-is-coming.-what-will-you-talk-about/
We use the Azure Monitor plugin to gather data directly from Azure via the according APIs from Azure Monitor and Azure Application Insights. When adding the Azure Monitor data source and clicking "Save and Test", we get the following error message: "1. Application Insights: Bad Gateway: Cannot connect to Application Insights REST API. 2. Azure Log Analytics: Bad Gateway: Cannot connect to Azure Log Analytics REST API."
https://grafana.com/grafana/plugins/grafana-azure-monitor-datasource
Prometheus is one of the common data stores and collection mechanisms. I would have a look into it rather than writing into Influx, but that's just another pattern. For alerts, check out https://grafana.com/docs/grafana/latest/alerting/create-alerts/ which describes how to create an alert based on metric values. You should be able to write a rule similar to mine using that.
In laymans terms, you can use variables in your other queries.
I have a $server variable that basically says:
SHOW TAG VALUES FROM "cpu" WITH KEY = "host"
This will give you a dropdown at the top where you can select a server (so you can make a dashboard that shows detailed information about a host, and swap from host to host). If you have enabled multi-value or the All option, you can show many hosts at the same time.
You can also use this variable in queries:
SELECT mean("usage_user") FROM "cpu" WHERE ("host" =~ /^$server$/) AND $timeFilter GROUP BY time($interval) fill(null)
And you can also use this variable with the repeat option; this means it will create that panel for each unique hostname it finds in the variable.
You can also create a simple "drill down" dashboard with the same variable, let's say you used the repeat option as mentioned above, and it made 3 panels based on the hits in the variable.
You can then create a Data Link that says:
https://grafana.com/d/unique-thing/detailed-dashboard?orgId=1&from=now-30m&to=now&var-server=${server}
This basically forwards you to the detailed dashboard and picks the server you clicked on; it does require you to have the same variable in that dashboard too.
I hope this made some sense.
See if this helps:
https://grafana.com/docs/grafana/latest/http_api/annotations/
Take special note of the times being millisecond epoch.
Grafana Provisioning (https://grafana.com/docs/grafana/latest/administration/provisioning/) is what you're looking for. At my place I host Grafana inside of a container with all of the dashboards stored in version control. When there is a new commit to the repo, a new image is created and deployed using Jenkins. In Prod all users have rights to view but not edit. In Dev, users can edit and customize the dashboard as they see fit, export the json, commit to VC, and submit a PR to Prod.
You can copy the JSON from the settings of a dashboard and use it in the import dashboard section with the necessary changes (you also need to remove the uid value in the JSON before importing).
https://grafana.com/docs/grafana/latest/reference/export_import/#importing-a-dashboard
You can do it one of two ways:
Environment variables https://grafana.com/docs/grafana/latest/administration/configuration/#configure-with-environment-variables
Or by creating a volume that you mount on creation.
I use both in our setup as we need specific plugin versions, so they sit on the cloud disk (that we mount as a volume) and the config is specifically set using environment variables.
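As a combined sketch (the password, plugin, and volume names are placeholders):

```
docker run -d -p 3000:3000 \
  -e "GF_SECURITY_ADMIN_PASSWORD=changeme" \
  -e "GF_INSTALL_PLUGINS=grafana-piechart-panel" \
  -v grafana-data:/var/lib/grafana \
  grafana/grafana
```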
You can! The secret is Grafana template variables - see https://grafana.com/docs/grafana/latest/variables/templates-and-variables/
Also checkout the kubernetes-mixin, which already has a set of dashboards for this usecase: https://github.com/kubernetes-monitoring/kubernetes-mixin
You mentioned you have multiple clusters, so you might want to checkout Cortex or Thanos, they allow you to query metrics from all your Prometheus servers in one place and aggregate across different cluster.
(I'm one of the Cortex authors and I work at Grafana Labs btw)
Yes, but if you're using any non-Grafana Labs maintained backend plugins that aren't signed they may not work - https://grafana.com/docs/grafana/latest/installation/upgrading/#upgrading-to-v7-0
Not exactly what you are looking for, but maybe a horizontal bar chart with 0 in the middle would be a good alternative.
https://grafana.com/grafana/plugins/michaeldmoore-multistat-panel
I've changed the following parameter and set it to false:
check_for_updates = false
This, together with reporting_enabled = false, does stop Grafana from sending those queries.
I'm still running version 6.7.3, and check_for_updates wasn't present in the template configuration file. But it's clearly explained at https://grafana.com/docs/grafana/latest/installation/configuration/ so I guess I should have RTFM in the first place.
Thanks!
You could simply use an env variable to set allow_embedding in grafana config:
GF_SECURITY_ALLOW_EMBEDDING=true