Thank you for highlighting this. That is correct: we do store passwords using bcrypt, and we make use of salts. We've just posted an update on our website.
Please do reach out if I can help answer any questions.
He even stated he used "preliminary" numbers. Michigan allows same day registration and this stuff is easily verifiable.
Go look at the Zeher affidavit in the same case. That guy's a lawyer and has 3 exhibits of letters, because that evidence supports a central claim of his argument. If evidence isn't submitted, neither the defense nor the judge has to consider it relevant.
I didn't see Ramsland submit anything in Constantino, nor do I see Powell's name attached. It's possible I missed her name, but I'm fairly certain she's just a figurehead.
https://www.datadoghq.com/blog/visualize-statsd-metrics-counts-graphing/
So, I know this isn't exactly what you wanted, but this goes into some data analysis theory as to why you might prefer a rate over a count. Basically it boils down to how you would use the data and how it's being reported. My main point here is that using a decimal doesn't indicate nefarious intent.
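For what it's worth, here's a minimal sketch (in Go, with made-up numbers) of how a typical metrics pipeline turns integer counts into fractional rates - the delta between two samples divided by the flush interval:

```go
package main

import "fmt"

func main() {
	// Hypothetical counter samples taken one flush interval apart.
	const flushIntervalSeconds = 10.0
	previousCount := 147.0
	currentCount := 152.0

	// 5 events over a 10-second window -> 0.5 events/second.
	// Decimals fall out of the division; nothing nefarious about it.
	rate := (currentCount - previousCount) / flushIntervalSeconds
	fmt.Printf("rate: %.2f events/sec\n", rate)
}
```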
> Right - why would you ever restart a VM if a process exits? Why did you make this statement?
Please read this comment for context.
If you want to architect an application in a container style but using VMs instead of containers, then when something goes wrong with a VM that's wrapping a single process, you generally need to restart the VM, because you can't be sure of its state.
> If that's a primary use case for Docker, I think I just replaced it with something even more lightweight:
Except that doesn't address filesystem isolation, dependency modularity, network port modularity and remapping, private binding between services, and a long list of other things that Docker provides.
There's a reason that larger companies are leading the adoption of Docker, which is that it provides very powerful capabilities for deploying and operating complex systems at scale. It's easy to look at any single feature of it and think you can do that some other way - bash scripts, iptables, chroot maybe if you're desperate, and so on - but the benefits have a lot to do with the total package, in which all the features work together in standard ways that you don't get by munging bash scripts together.
Hi, this old article from Datadog should give you a good starting point: https://www.datadoghq.com/blog/ec2-monitoring/
I don't see AWS recommending specific alarms for EC2 the way it does for services like Elasticsearch. The key metrics you set alarms on, and their respective thresholds, will come down to your use case. For example, for my application I like to set the threshold for CPU Utilization at 80%.
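As a sketch of what that looks like in practice - assuming the AWS SDK for Go (v1), with the alarm name and instance ID as placeholders - you could create that 80% CPU alarm like this:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	cw := cloudwatch.New(sess)

	// Alarm when average CPU over two consecutive 5-minute periods exceeds 80%.
	_, err := cw.PutMetricAlarm(&cloudwatch.PutMetricAlarmInput{
		AlarmName:          aws.String("high-cpu-example"), // placeholder name
		Namespace:          aws.String("AWS/EC2"),
		MetricName:         aws.String("CPUUtilization"),
		Statistic:          aws.String(cloudwatch.StatisticAverage),
		Period:             aws.Int64(300),
		EvaluationPeriods:  aws.Int64(2),
		Threshold:          aws.Float64(80),
		ComparisonOperator: aws.String(cloudwatch.ComparisonOperatorGreaterThanThreshold),
		Dimensions: []*cloudwatch.Dimension{
			// Placeholder instance ID.
			{Name: aws.String("InstanceId"), Value: aws.String("i-0123456789abcdef0")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```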
>Looking at positions that list ~2 years of experience as a requirement should be the right level because you have more experience than a typical new grad. At Datadog, we don't put weight toward how big your current team/company is -- we care more about the impact of the work you've done. Some of our roles don't list years of experience but qualifications instead, so checking to see if you meet those is a good start! For example:
>
>https://www.datadoghq.com/careers/detail/?gh_jid=1825867
Same goes for ~3 years of experience -- see above!
/u/aww_a_puppy We felt it was key to send out the notifications immediately, while we worked on getting a public notice posted. You can find our security notice online now at https://www.datadoghq.com/blog/2016-07-08-security-notice/
As always, you can expect a post mortem from us.
Great question! Looking at positions that list ~2 years of experience as a requirement should be the right level because you have more experience than a typical new grad. At Datadog, we don't put weight toward how big your current team/company is -- we care more about the impact of the work you've done. Some of our roles don't list years of experience but qualifications instead, so checking to see if you meet those is a good start! For example: https://www.datadoghq.com/careers/detail/?gh_jid=1825867
Show your legal and marketing team this: https://www.datadoghq.com/docker-adoption/
People always look forward to their reports because they give detailed analysis and useful insight on real data.
Having been on the receiving end of a marketing whitewash, I can sympathize with your plight. Hopefully your marketing department can learn that removing technical content from articles intended for a technical audience does nothing but make people not take them seriously. Right now it just reads as if one vendor really understands Docker trends and one does not.
Having different monitoring vendors doing this kind of work is actually useful for people, and I hope they just let you dig in deeper and come out with an updated version with more meat. Good luck!
Sounds like you are not interested in the extra complexity of setting up, configuring, and maintaining a solution, so I'd suggest you have a look at Datadog: https://www.datadoghq.com/
I'm not affiliated in any way, and I don't actually use it because it seems expensive if you have a lot of servers. But for a single server, it seems like a good solution.
Yes, we have anomaly detection and outlier detection for algorithmic alerting. When it comes to Kubernetes and Docker, between our integrations and the built-in service discovery, I believe we have one of the more compelling monitoring solutions.
In any case, check out our recent Kubernetes monitoring guide. The bulk of the guide is not Datadog-specific, apart from part 4, which focuses on how to implement the best practices with a few clicks in Datadog.
EDIT: I added an alternative method for compression to your other posting: http://www.reddit.com/r/golang/comments/2llsvt/compression_example/clwaz3j
Recently on the Go mailing list, Egon made an excellent point regarding Go, labeled point 0.
> 0. If it hurts, then you are doing something wrong.
>
> e.g.
>
> * If you notice that code isn't nice -> cleanup your code / separate concerns.
> * You can't specify type hierarchy -> don't create hierarchies, use interfaces + implementers; depth adds complexity.
> * I'm not able to understand how code works -> don't use that many levels in code or add/remove interfaces.
> * I can't understand how to use unsafe -> don't use unsafe.
> * I can't put the folders the way I like -> use the standard Go way.
> * I can't format code the way I like -> use gofmt and stop worrying about minor things.
> * I can't write a generic code -> duplicate code (its easier to understand) or use interface.
If an interface doesn't make much sense to you, consider that you might not be "getting" it yet. Think about it a bit more; don't blindly try to emulate interfaces, understand why they exist. Take your compression example (I dug up the post): you need to step back for a moment and think about WHY the io.Reader and io.Writer interfaces work the way they do... a lot of it is to save memory (versus something like ioutil.ReadFile, which is memory suicide with multi-GB compressed files). Once you understand the WHY behind an interface, you will understand whether you should be using it or not - but don't blindly flail to follow them. https://www.datadoghq.com/2014/07/crossing-streams-love-letter-gos-io-reader/
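To make the memory point concrete, here's a minimal sketch of streaming decompression through io.Reader/io.Writer (the file names are made up); memory use stays flat no matter how big the file is:

```go
package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
)

func main() {
	in, err := os.Open("huge.log.gz") // hypothetical multi-GB file
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	// gzip.NewReader wraps the file's io.Reader; nothing is decompressed yet.
	gz, err := gzip.NewReader(in)
	if err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	out, err := os.Create("huge.log")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// io.Copy moves fixed-size chunks from Reader to Writer.
	// Contrast with ioutil.ReadFile, which would load everything at once.
	if _, err := io.Copy(out, gz); err != nil {
		log.Fatal(err)
	}
}
```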
Wait, what are you talking about? The link to apply for an internship is here: https://www.datadoghq.com/careers/detail/?gh_jid=3338042. It appears SF is not one of the cities where they have an office.
I would slightly disagree with the "same" or "easier" part. These two articles are worth reading:
https://learnk8s.io/allocatable-resources
https://www.datadoghq.com/blog/engineering/moving-a-jobsystem-to-kubernetes/
You're one of those people, huh? Something comes out and you think businesses started using it immediately? See Datadog's "State of Containers" 2020 report, point 11: Nginx is the most widely used container image. https://www.datadoghq.com/container-report/
I personally don't like nginx either (my work went through 7 sales meetings with them before they'd get serious with us; we switched to another ingress before they bothered to be helpful), but it'll be here for a while.
If you're new to Docker / containers, I would recommend Datadog's annual report on the subject.
https://www.datadoghq.com/container-report/
If you want to do this as a job or to understand the industry, the stats are pretty revealing. Of particular interest is number 11 on their list, where they graph the most-used software inside containers: lots of databases, message brokers (Kafka), and data processing (Elasticsearch).
In the realm of data analytics, have you tried Splunk, Salesforce, or Datadog? They should still be actively recruiting (according to LinkedIn, Google Jobs, and their own sites).
There are indeed many companies that reside entirely on AWS. You're right that they'll usually have dynamic scaling through Docker containers and Kubernetes, which eases the creation and destruction of services. OpenStack and Ansible tend to be widely used here for this purpose.
Any company using Datadog at large scale is also very likely doing this.
I like that you asked about collecting data, a good step in the right direction - I love collecting metrics across automation test repos!
Whatever you build from a quality perspective, no matter what it is, hook up every part of your process to publish metrics to a monitoring service like Datadog.
AWS isn't really a host. If you're coming from shared hosting and expecting the same, you're going to be completely lost. It is not a drop-in replacement.
There's no cPanel unless you install it (and WHM) and pay the license. There's no FTP (partially because you shouldn't be using it). There's no anything unless you set it up. It's designed that way intentionally to provide the most flexibility.
> Some of the webpages I need to track clicks on are redirect pages and there's not enough time for Analytics to load everything before the page is redirected and the view is not counted.
You generally track clicks as events through analytics. Meaning the click is recorded on the page you come from, before any redirects occur.
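If the redirect really is racing the analytics snippet, one way around it (assuming you control the redirect server) is to record the hit server-side before issuing the redirect. Here's a rough Go sketch using the Universal Analytics Measurement Protocol - the tracking ID, event names, and destination are all placeholders:

```go
package main

import (
	"log"
	"net/http"
	"net/url"
)

// Records a click event server-side, then redirects. Because the hit
// is sent by the server, there's no race with a client-side script.
func redirectHandler(w http.ResponseWriter, r *http.Request) {
	v := url.Values{}
	v.Set("v", "1")
	v.Set("tid", "UA-XXXXXXX-1") // placeholder tracking ID
	v.Set("cid", "555")          // anonymous client ID
	v.Set("t", "event")
	v.Set("ec", "redirect") // placeholder event category
	v.Set("ea", "click")    // placeholder event action

	// Fire-and-forget: a lost hit shouldn't delay the redirect.
	go func() {
		resp, err := http.PostForm("https://www.google-analytics.com/collect", v)
		if err != nil {
			log.Println("analytics hit failed:", err)
			return
		}
		resp.Body.Close()
	}()

	http.Redirect(w, r, "https://example.com/destination", http.StatusFound)
}

func main() {
	http.HandleFunc("/go", redirectHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```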
Tracking logs has its advantages, but mostly in terms of performance (monitoring 400 and 500 error codes) and intrusion detection.
If you wanted to track clicks via AWS, you'd use Amazon Pinpoint. But, as you'll quickly notice, it's designed to orchestrate your own funnels, journeys and channels. It's not software that just runs on your server.
If you want to analyze logs, you'd generally start with CloudWatch. From there, use an assortment of various services to run your own evaluations, or potentially a 3rd party service like DataDog.
Again, though, none of that is trivial to set up, especially if you're coming from shared hosting.
In short, I'd recommend registering clicks as events in Google Analytics and reporting that way. With that in place, you're free to move to any hosting environment you wish, including flat-file.
Yes. Your explanation is correct.
The process may not consume 100% CPU; the task might have finished before that, in which case the CPU will be idle. As in your example, the allocation is 2 CPUs, but utilisation can be less if the container does not need any processing done. The times when a container is running short of CPU can be found in "cpu.stat" under the "/sys/fs/cgroup/" dir of the Docker host. A good explanation is available in the Datadog blog: https://www.datadoghq.com/blog/how-to-collect-docker-metrics/ . I was not able to translate those numbers to CPU shares, though.
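As a minimal sketch, assuming cgroup v1 and a hypothetical container path, you can read the throttling counters straight out of cpu.stat:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Hypothetical path; the exact location depends on your cgroup
	// layout and the container ID.
	f, err := os.Open("/sys/fs/cgroup/cpu/docker/<container-id>/cpu.stat")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Lines look like: "nr_periods 200", "nr_throttled 5",
		// "throttled_time 123456789" (nanoseconds throttled).
		fields := strings.Fields(scanner.Text())
		if len(fields) == 2 {
			fmt.Printf("%s = %s\n", fields[0], fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```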
Datadog has a three-article series about exactly this. It's a solid ~15-minute read and applies to the meta idea here, not Datadog specifically. I share it with everyone who lives in daily alert fatigue.
https://www.datadoghq.com/blog/monitoring-101-alerting/
Note: I don't work for DD, just used them at a previous gig. This article I keep bookmarked.
/u/Murray_TAPEDTS I work on our containers integrations and would be happy to answer any questions. Your deployment of one Datadog Agent per host is the recommended approach, and we would not charge for each container as a host. More details on container billing are available in our KB.
Would be curious to get your feedback on our new live containers view as well.
Feel free to DM me. Happy to chat here or to setup a call.
We have the data from the firehose that's coming from the containers and is tagged with the application and so on.
So there's some data from the application level.
However, what I meant was that we haven't yet put in a service broker that would let us get APM or statsd from the apps. Right now it's a lot more infrastructure data than application data.
I had a call with ES just before they changed their name. They said the Silver plan was $7,000/node, or $35,000 per 10 nodes. Developer is in the $20k range, but it's 1 year or 10 incidents, whichever comes first.
As /u/thnetos said, you can buy Marvel for much less on its own. The support subscriptions do come with technical and architecture support, but they're very much priced for large businesses and enterprises.
Fargate ECS/EKS works; you need the Datadog agent deployed as a sidecar container, per https://www.datadoghq.com/blog/aws-fargate-monitoring-with-datadog/
Any issues with this integration in particular? I agree with other comments, it's a bit pricey, but pretty decent for a full APM/monitoring/logging solution.
It's a webinar, which is something companies do. I'm constantly following webinars by Datadog https://www.datadoghq.com/webinars/ and Synopsys https://www.synopsys.com/webinars.html
Their products cost thousands of dollars, but I don't need to buy them to watch the webinars. OP should consider adding [Video] or [Webinar] to the title next time.
DM me if you want my referral link but you shouldn’t have an issue with the visa from DD: https://www.datadoghq.com/careers/detail/?gh_jid=3004247
My team's last intern was supposed to be based in NYC for the summer, coming from Toronto.
> Docker is good for prototypes of software, but is not production ready. For production I would choose K8s or OpenShift. Docker is not ready for production use by its design.
I believe I made my point about Docker & development clearly, but if not: I'm talking about anyone being encouraged to run production on pure Docker. Though honestly, for small projects [which can fit in a small VPS] it works fine. I personally run some low-risk stuff in Docker and feel no shame about that.
> Then in case of catastrophic failure it's developer responsible for fixing it, so it requires sysadmin basic knowledge from developers, and right?
Here is "DevOps" or "Full Stack Developer" badges come in place - knows-enough-of-ops-to-do-simple-server-stuff :) I see a lot of analogy with cars - fixing flat tire is expected to be done by user, fixing engine - expected to be NOT done by user.
And honestly, what do you expect/describe as "catastrophic failure" in the world where
> Trend Number 1 - Nearly 90 percent of Kubernetes users leverage cloud-managed services ( https://www.datadoghq.com/container-report/#1 )
?
Thanks! Yeah, they apparently did so, unless I am mistaken:
https://www.datadoghq.com/blog/real-time-performance-monitoring-with-tracing-without-limits/
https://docs.datadoghq.com/metrics/faq/metrics-without-limits/
I've used this as a reference point to get the type of metrics I need for nginx.
https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#alerting-on-http-response-code
However, I have instrumented this with a variety of tools: Grafana + an nginx plugin, the AppDynamics metrics browser extension, and logs shipped to Splunk.
Happy to go through a specific example depending on which route you take.
k8s and containerd still have a significant overhead over "bare VM". 20% is optimistic, and even companies with dozens of full-time engineers on their k8s stack are celebrating going down to 38% overhead.
k8s will allow you to bin-pack workloads that don't utilize a full VM and run ephemeral jobs on spare / spot capacity, but for cpu-bound autoscaled workloads you should not expect significant gains.
The gains will be in standardizing ops, getting away from puppet/ansible, and eventually some developer efficiency, once you get over the significant learning curve.
The logging module lets you have different destinations for different types of messages; for example, stack traces to debug.log and error messages to the console.
https://www.datadoghq.com/blog/python-logging-best-practices/
> feature

Can you share your experience, then? I don't see what's wrong with Datadog for monitoring: https://www.datadoghq.com/golang-performance-monitoring/
Here we're talking about the language itself; it feels like we're drifting off topic. I'm not saying Java is garbage, I'm just pointing out problems, and the main topic I'd like to discuss is concurrency. Go is better than Java at heavy concurrency, and its GC causes it no problems at all (especially after version 1.13, if I'm not mistaken). Goroutines are far better than the threading model for the web server scenario, so I don't understand why you're implying it will become a problem at a more serious scale?
As far as I know, Java has no "race condition" option when testing, and Maven/Gradle have poor test parallelization and no caching of already-run tests. I use IntelliJ for all programming languages too, but I've generally found Java tooling overcomplicated, especially for deploying and managing build systems compared to Go.
I've heard good things about DataDog but haven't personally used it yet.
Their default MySQL stuff is documented here, and I believe you can add custom metrics for almost anything: https://docs.datadoghq.com/integrations/mysql/
Wonderful resource! Thank you for putting this together.
Here are some additional companies in the data visualization, monitoring and business intelligence space:
Domo is the fully mobile, cloud-based operating system that unifies every component of your business and delivers it all, right on your phone. Domo brings together all your people, data, and systems into one place for a digitally-connected business.
Datadog is the essential monitoring and security platform for cloud applications. We bring together end-to-end traces, metrics, and logs to make your applications, infrastructure, and third-party services entirely observable. These capabilities help businesses secure their systems, avoid downtime, and ensure customers are getting the best user experience.
We're putting the power of machine data analytics in the hands of everyone by unifying all data types, enabling universal access and leveraging cloud economics -- all from a single, cloud-native, continuous intelligence platform delivered and consumed as a true SaaS.
Icinga. Smokeping is nice as a historical dashboard, but it's not going to send alerts.
Here's the question I'd ask you, however: What are you going to do about it? Alerting is only useful if it's actionable. If you're just writing a tool that will send an email or a SMS alert every time your ISP flakes out, you're really just devising a robot to annoy your operations staff.
If you're collecting ammunition with which to gear up for an argument with your carrier, that's something you can do with just ping /t (on Windows, anyway). If, on the other hand, you want to test from outside, I'd recommend checking out Datadog; they have lots of synthetic monitoring options you can engage.
On a less sarcastic note: if the server is Linux-based, then Datadog has a pretty decent eBPF-based packet-filter tool that shows you who is talking to whom. It only works for Linux machines, though. *Correction: they are now running a beta supporting it on Windows 2016+.*
https://www.datadoghq.com/blog/network-performance-monitoring/
Noisy neighbors don’t use CPU credits. You’ll need to monitor the stolen CPU metric. See for example this blog from Datadog https://www.datadoghq.com/blog/understanding-aws-stolen-cpu-and-how-it-affects-your-apps/
Sad to hear, sorry about that. We're a larger company (maybe ~1000 software engineers) and fortunate to not have to worry much about infra costs.
Without knowing much context about your company, and without throwing too much money at the problem: how about renting an AWS EC2 instance and running these tests on a schedule there? The `t2.medium` instances have 2 cores/4GB of RAM and cost like $33/month to run 24/7. If you have an extremely light test suite, the cheapest one is 1 core/1GB RAM for like $8/mo. Datadog offers pretty cheap pricing for light metrics usage (dollars per month), but that still adds cost...
Or, a joke but not really: maybe turn off your screensaver and power settings and just schedule the tests to run on your work computer in the background :D
Everything is piecemeal and they upped their pricing big time.
There are numerous examples on reddit, here and in /r/sysadmin, of billing horror stories. For us, the charge-for-anything/everything model is a big pain, and when you make a metrics error while trying to tamp down non-prod vs. prod, they are not forgiving. Telemetry is wildcarded at the beginning to inflate the initial cost, forcing you to adjust as you go and making you responsible for tuning it all while paying during that process.
The account manager I deal with is also a huge pain. A lot of their staff seem to be lately and I imagine it has a lot to do with recent changes in the org.
There is, but it's not the same play as pure cybersecurity companies like Symantec, Crowdstrike, etc.
They do provide insights and analytics from the data collected: https://www.splunk.com/en_us/software/enterprise-security.html
https://www.datadoghq.com/blog/announcing-security-monitoring/
I personally have used only the app/log monitoring components.
PS: Datadog added the security monitoring feature back in April (Q2). More upsell :)
Can't really say much, although they themselves had admitted this. Or an analyst they hired, anyway.
> While Datadog is strong in supporting cloudnative environments, it’s weaker in more traditional and on-premises deployments.
If you look hard enough, there are plenty of D3.js jobs around. One such example is Datadog. However, the main criterion for getting any software engineering job is... to be good at programming and software engineering. The rest, such as knowledge of libraries, is secondary.
In my opinion, D3.js is literally the best data visualisation library that one can pick up. There is no reason not to pick it up.
As others have pointed out, you can use cmd.Run to call your scripts. That’s the simplest choice.
If you have more specialized needs, you can also use cgo to start the CPython interpreter and run your scripts directly. Datadog’s blog explains why you might want to do this. They use github.com/sbinet/go-python , which I’ve also had good experiences with.
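For the simple case, here's a minimal sketch of the cmd.Run approach (the script name and arguments are hypothetical):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// CombinedOutput runs the command (like Run) and captures
	// stdout+stderr, which is handy for surfacing script errors.
	cmd := exec.Command("python3", "analyze.py", "--input", "data.csv") // hypothetical script
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("script failed: %v\n%s", err, out)
	}
	log.Printf("script output:\n%s", out)
}
```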
https://www.datadoghq.com/blog/monitoring-101-collecting-data/
Old post, but a very good one that’s still totally relevant. I’ll write something longer once I get settled, but we base our intro training partly on that post.
A bigger Engineering team.
Hi - My name's Ryan, and I'm looking for developers who want to be part of a growing Monitoring company. For more information, visit our career page!
You can also send me a DM to help get your resume in front of the right person.
Ryan
If you're willing to spend money, Datadog has been a great solution for my company. Install their APM library to automatically instrument your server routes, and have Winston pipe logs as well. It looks like for a single server you'll pay about $40/month: https://www.datadoghq.com/pricing/#section-apm
You can use PerfMon to capture counters in the future. Request/sec is a good example: https://www.datadoghq.com/blog/iis-metrics/
Otherwise, parsing those InetPub logs is how I'd normally do it. LogParser is an ancient tool but can easily chop up the W3C logs.
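As a rough sketch of chopping up a W3C log yourself in Go - the log path is hypothetical, and the field index assumes the default #Fields layout where time is the second column, so adjust for your logs:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Hypothetical IIS log path; W3C logs are space-delimited with
	// "#"-prefixed header directives that must be skipped.
	f, err := os.Open(`C:\inetpub\logs\LogFiles\W3SVC1\u_ex231101.log`)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	perHour := map[string]int{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip #Fields, #Date, etc.
		}
		fields := strings.Fields(line)
		// Assumes the default layout: fields[0]=date, fields[1]=time.
		if len(fields) < 2 || len(fields[1]) < 8 {
			continue
		}
		hour := fields[1][:2] // "HH" from "HH:MM:SS"
		perHour[hour]++
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	for h, n := range perHour {
		fmt.Printf("%s:00  %d requests\n", h, n)
	}
}
```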
For something like this I think you'd need a different tool - possibly Apache Drill? Check it out; you can use SQL to query system files and processes, I believe. I saw this too: https://www.datadoghq.com/blog/collecting-mysql-statistics-and-metrics/
Have you tried to apply to Datadog? They extensively use all of the tech you mentioned, have a strong track record of mentoring engineers at any stage in their careers, and they're a very international company so I believe they do visa sponsorship, but check with the recruiter upfront to make sure it's not a blocker. Their engineering is based in NY/Boston/Paris. If you're an EU citizen, the Paris office is a good fallback if you don't win the H1B lottery.
So Dice, huh? I'm glad you mentioned them. Here's a great talk about ops from one of their guys. If you really want to learn something, listen to it; you might learn a thing or two.
https://www.datadoghq.com/videos/surviving-blockbuster-game-releases-at-ea/
I wouldn't even say "used by most" - more like "used by some". It can seem like everyone uses Docker, but they really don't. About 25% use Docker, according to this: https://www.datadoghq.com/docker-adoption/
That's "some", not "all" and not "most".
Anything worth doing is worth doing right, which means don't reinvent the wheel. I'd search for a third-party tool or library instead of hacking it up yourself. Datadog is what I'm used to. https://www.datadoghq.com/
Use whatever your organization already uses if you can; otherwise, take a look at Netdata or possibly Datadog. There are dozens of choices out there. The same algorithm applies to this stuff as to the rest of the technology stack: choose something based on the shiny wrapper and anecdotal evidence from blogs, and be ready to switch it out if you have to.
We have tried many monitoring solutions - SaaS vs. on-prem, agent-based vs. agentless, subscription vs. software - over more than 10 years. The market is most certainly evolving towards cloud-based solutions; even the likes of Zabbix are apparently deploying cloud solutions, though they might be behind the curve by then. There are many very good cloud or SaaS solutions available. If you are a large enterprise with 1000's of incidents etc. to monitor, and with a decent IT team, it is hard to beat https://www.datadoghq.com. Can get a bit pricey though.
However, it seems you need something easier that still gives you complete coverage of your stack but caters to medium-sized enterprises. We went through the same questions you are for ourselves and our clients, and ended up with CloudRadar - https://www.cloudradar.io. It does not have all of Datadog's enterprise-level features yet, but we set up full monitoring for 46 instances/devices in under 2 hours. It monitors our Windows and Linux servers, gives full performance insights (CPU, memory, processes, disk, etc.), as well as our routers, switches, etc. It is a combination of agentless and agent-based monitoring, and we had no problems with the setup - it was simple and is a great solution. And the pricing is very attractive.
For monitoring, you could use a service like Datadog to monitor your services in both production and staging environments. Datadog works decently in Windows environments as well. It is a cloud platform that does basically a similar job to PerfMon, but way better. It's very simple to set up and has a free trial period.
In our case, we have a TV with our Datadog dashboard open, which has graphs of all the important performance metrics (CPU, memory, queue sizes, network latencies, etc.) of our production servers. Then we set SLA thresholds for each metric, and we receive an alarm/Slack alert if any metric on our prod servers exceeds its SLA.
Can you share the config snippet for the nginx status module?
Also worth checking out our NGINX monitoring guide.
Thanks for the input. I tried to implement the above queries after reading this article: https://www.datadoghq.com/blog/postgresql-monitoring/. Those queries will be the base for another monitoring tool (Grafana). I just wanted to implement some high-level monitoring; I have many systems that run Postgres, and some have many databases while others have just one. I am trying to get some basic metrics that will "work" in every situation - a "common ground", high-level overview - before we go deep into the details and try to find a solution. I hope this makes sense.
We use an open-source tool called BenchmarkDotNet to profile code.
You need measurements of your app in use to find the real bottlenecks. In the web world, StatsD or Prometheus work well for this. AppDynamics and New Relic are commercial products in a similar space. If you don't have any of these but do have logging, can you get timing data into those logs for later analysis?
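If logging is all you have, here's a minimal Go sketch of getting timing data into the logs (the handler name and route are placeholders):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// timed wraps a handler so every request logs its duration; the logs
// can then be analyzed later to find the real bottlenecks.
func timed(name string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		h(w, r)
		log.Printf("handler=%s path=%s duration_ms=%d",
			name, r.URL.Path, time.Since(start).Milliseconds())
	}
}

func main() {
	http.HandleFunc("/", timed("root", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```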
Take this with a grain of salt since I work there: Datadog does APM https://www.datadoghq.com/apm/. We use Go and gRPC so it was one of the first things we supported.
Also there's backtrace.io which can do some really deep stuff with Go, though I haven't really delved into it too far.
StatsD is a daemon made by Etsy to help monitoring applications. You can read their introductory blog post.
This protocol is also used by services like Datadog.
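The wire format itself is trivial - plain text over UDP, "name:value|type" - which is part of why so many services speak it. A minimal Go sketch (the metric names are made up; 8125 is the conventional StatsD port):

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// StatsD listens on UDP; each write below is one datagram.
	conn, err := net.Dial("udp", "127.0.0.1:8125")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "|c" = counter, "|ms" = timing, "|g" = gauge.
	fmt.Fprint(conn, "myapp.logins:1|c")
	fmt.Fprint(conn, "myapp.request_time:320|ms")
}
```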
I haven't used those functions of Node.js, but I'd start with https://www.datadoghq.com/blog/crossing-streams-love-letter-gos-io-reader/ and http://nathanleclaire.com/blog/2014/07/19/demystifying-golangs-io-dot-reader-and-io-dot-writer-interfaces/
May I suggest putting some monitoring on the box so you can see what's going on? Look at a service like... https://www.datadoghq.com/ as it'll require minimal input from you.
As for the issue, are you sure there's nothing in the MySQL log? When you go to start it what does it spit out into the log? At a guess it sounds like the kernel is killing the process, check messages & syslog and see what comes up?
//edit - Maybe try reducing the memory footprint, or at least make the box bigger. 1.5GB is nothing. If you run free -m what does it come back with?
//edit2 - Just put some monitoring on. Anything else is going to be 100% drunk man methodology.