They have another product called Ansible Tower, which isn't open source.
If I had to bet on it, I would say RH's Satellite will be merged with Tower, with Ansible providing the underlying technology, as it has much broader capabilities.
From this page: "At the portfolio level, Ansible matches Red Hat’s desire to support a multi-tier architecture, provide multi-layer consistency, and deliver multi-vendor support"
"Ansible brings consistency at multiple layers of the architecture "
"Ansible supports heterogeneous IT environments"
"Windows environments"
So that's why I think this is where they're going. Instead of building these capabilities into Satellite, they bought tried and popular technology to do it.
They also hint this is what will happen:
"Red Hat Satellite [...] defined by the Ansible automation workflows."
What Ansible gets out of this: Red Hat's users will have a strong incentive to go with Ansible instead of the alternatives. It will boost their user base.
So basically it's automated through ansible and git. The directions are grossly simplified, but are the general gist of my deployment workflow.
Ansible is a configuration management tool that seems to fit the bill for what you want. It would allow you to push config files and restart services idempotently, reboot servers in rolling batches, let you remove a host from the load balancer before taking these actions, etc.
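To make that concrete, here's a hedged sketch of a rolling-update playbook using current module names (the host groups, backend name, and file paths are made up for illustration):

```yaml
# Hypothetical rolling update: two hosts at a time, each pulled from the
# load balancer, updated, rebooted, and re-enabled before moving on.
- hosts: webservers
  become: yes
  serial: 2                      # rolling batches of two
  tasks:
    - name: Take this host out of the HAProxy backend
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: "{{ item }}"
      loop: "{{ groups['loadbalancers'] }}"

    - name: Push the new config file (idempotent)
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app

    - name: Reboot and wait for the host to come back
      reboot:

    - name: Put this host back into the backend
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app
      delegate_to: "{{ item }}"
      loop: "{{ groups['loadbalancers'] }}"

  handlers:
    - name: restart app
      service:
        name: app
        state: restarted
```

The `serial` keyword is what gives you the rolling behavior; without it Ansible would hit every host in the group at once.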
I think Jenkins is great, and I use it at work as a glorified cron replacement, but it would be a pain to set up the workflow you want, with the delays and rolling reboots and whatnot.
Note: I've only really used Ansible, so I don't have a good foundation of comparison for it against other config management tools like puppet, chef, or salt, so those may be good options for you as well.
ANSIBLE 2.0 HAS ARRIVED - Blog post
Don't want to put a wall of text here, so take a look at the link for changelog.
Ask yourself this: if your server completely crapped out and was unrecoverable for some reason, what would it take to restore it?
Would you have to manually go back and restore everything? Would you even remember exactly what was installed?
Probably not; you're human, after all. I would suggest taking the time to learn how to automate the provisioning of your servers.
At this point, I would set aside some time to learn a tool such as Ansible so that you can simply run a single command and provision your entire server (or servers!).
It will be confusing at first but stick with it, find some tutorials and you will be glad you did. Also, feel free to ask me any questions and I'll try to point you in the right direction.
Haha, yeah it's not so complicated really once you learn the jargon. BTW bower is just as simple, it's basically just a command line tool to install your dependencies such as jquery, bootstrap, etc... it is to webdev what apt-get is to linux (sort of).
To further blow your mind, there is a command line tool called Vagrant that automates your VMs. You start and stop full VMs with vagrant up and vagrant halt. You can also pair it with awesome tools like Ansible that will automate the provisioning (installation and setup of all necessary software and settings) of your VM.
I'm not saying that learning Vagrant/Ansible is trivial by any means, it's a fairly advanced topic but it's not out of the realm of your capabilities. Just slowly learning it bit by bit and it will make your life much easier.
EDIT: One more thing while I'm thinking about it. Anytime you run into a new tool that you aren't sure what it does, just take a few minutes to do some research on it and see what all the fuss is about. It's the only way to learn and keep up. I know that seems obvious but a lot of people don't do it.
Consider http://www.ansible.com/home
It can do a lot of automated tasks on multiple systems from a central computer. Uses SSH so you don't have to make any changes on the client.
Adding onto this comment in regards to the Devops aspect of your job. Automating the process will save you so much heartache (as long as you test out each setting before pushing out to production). Do not blindly enable all of the STIGs or you will hate your life so much trying to troubleshoot issues.
http://www.ansible.com/security-stig
https://github.com/samdoran/ansible-role-rhel6stig
Get used to using some devops/automation tool so that as the STIGs/your security requirements are updated, you can push out the changes with your tool, saving you so much time and effort.
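Applying the linked role can be as simple as the playbook below. This is a sketch: the group name is made up, and whether you can toggle individual STIG rules (and what those variables are called) depends entirely on the role's defaults, so check its documentation before trusting any of it.

```yaml
# Hypothetical playbook applying the RHEL 6 STIG role from the link above.
# Test against a non-production group first, per the advice in this thread.
- hosts: rhel6_test_servers
  become: yes
  roles:
    - role: ansible-role-rhel6stig
```

Running it against a test group first, then diffing behavior before touching production, is exactly the "test each setting" workflow described above.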
Pretty sure ansible can do this using vagrant. So might puppet, but I am less familiar with that. You should be able to kick everything off from your local workstation without installing ansible on each of the 30.
I couldn't speak more highly of Ansible. It has incredibly straightforward syntax that doesn't require your Sysadmin/DevOps to know Ruby or have to deal with JSON files.
However, I've yet to use it on AWS (did a bunch of it on OpenStack for the past year), so I don't know how good its AWS documentation is.
The biggest uses of it I think are automation and scripting. With something like Ansible, you can set up a full multi host PHP platform in about an hour with a playbook and deploy it as many times as you like with a few commands.
Aside from that, the ability to spin up servers on the command line is helpful.
Basically, you use the DO API for anything you'd use GCP or AWS's APIs for.
There are some pretty good tutorials that are going around under the name 'my first 5 minutes on a server'.
This is the original one, and this is one that's fully automated using a configuration tool called ansible, which you might actually be interested in learning more about. (Setting up servers automagically is really nice, much more useful in many ways than doing it by hand each time.)
For me the major thing has just been lack of packages. The basics are mostly there, but you might have to learn to package some rarer things yourself. Other than that it's remarkably nice. Have you ever tried to automate your Debian configuration with something like Ansible? I think of NixOS as a nicer version of that.
There are some extra distinctions between user vs system packages, and you can have different profiles with their own sets. The cheatsheet explains it pretty well. Personally I try things out in my user profile, then install all the keepers systemwide.
Oh one bad thing. The initial installation isn't nearly as smooth. It's more like installing Arch. But once you get past that (possibly after messing it up a couple times), maintaining a working system is super easy. You don't have to worry anymore about breaking your system, because you can always roll back. And you don't forget how to set it up because everything is written out as code.
have everything scripted through Ansible. there are other automation tools you could use also, though I like this one. whatever you decide to do, automate it. don't leave your mission critical processes (code deployment, database backups and restores, installing and updating dependencies, setting up server environment, etc.) to things that could possibly have human error in the process. get it right and lock it down in your automation scripts.
I studied a practical networking education which had internships included. At that point I had been running Ubuntu on my laptop, but the company I got an internship with (small ISP) ran Linux on most servers and so I began to learn Linux sysadmin work. It also included routing on OpenBSD.
Today I am studying for a master of engineering, and as a side job I earn money doing Linux sysadmin work. My job is mostly based around server orchestration using ansible, network planning, and various other coding tasks.
To me Linux is simple. Things are mostly well documented in the open source world, in my opinion. It even extends to the programming language documentation that I'm going through in my studies. The F# list docs are horrible compared to the Ruby enumerable page, even though the information is supposed to be basically the same. One is big corp and crap, the other is small and well written. But that might just be my personal preference.
(And yes, F# is OpenSource today, but it started out differently and that is what my point is based on. Again, I might be vastly biased, so please take this in to consideration. I actually like the language, just not the documentation.)
Oh, and I run Linux (something with apt) on all my machines today, and do not understand how any OS works without a decent package manager!
I would look into Ansible for this. I am a novice at scripting as well and use this for my setups. Once you have a layout it becomes very easy to add apps and config changes.
http://www.ansible.com/webinars-training http://il.luminat.us/blog/2014/04/19/how-i-fully-automated-os-x-with-ansible/
I took this route when I wanted to launch my first django project and I definitely learned a lot.
I'd highly recommend looking into supervisord for managing gunicorn and any other processes you might want to throw in there. I generally use supervisor to manage gunicorn, celery, and elasticsearch when needed.
After I learned some basic configurations for supervisor, gunicorn, and nginx, I started reading the Ansible documentation to automate my deployments.
If you decide you want to look into ansible, this cookiecutter project helped me get started. It has basic configuration files for supervisor, nginx, etc. It does use uwsgi, however. I ended up forking the project a while back and modifying it to support my own needs and now it's my go to project template.
Ansible. Kind of really dig the whole automation and developing thing, so realizing that Ansible uses a modern and sane programming language (Python) seemed like the perfect opportunity to start learning something awesomely geeky.
I've been thinking about creating automated installs/deployments with tools like Docker, Ansible, and Vagrant which could serve not only as machine-readable installation instructions, but also something relatively easy for a user to get started since versions can be locked to a specific OS for quick testing. Does the community have a preference about any specific tool(s)?
Really depends on why you want to use this. Is it because you want to tinker with a complicated OS setup before running it on the PI?
Might I suggest using something like ansible to install and configure your OS instead?
You could first set up a mini debian installation in a VM with it and if everything works, run the same setup against your Raspi via ssh. That way you can separate your testing environment (VM) from your production environment (raspi), too.
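One way to keep that VM-then-Pi split tidy is a single inventory with two groups, so the exact same playbook runs against either. Host names and addresses below are placeholders:

```yaml
# Hypothetical inventory (inventory.yml): the same playbook targets the
# test VM or the Pi just by limiting to a group.
all:
  children:
    testing:
      hosts:
        debian-vm:
          ansible_host: 192.168.56.10
    production:
      hosts:
        raspi:
          ansible_host: 192.168.1.50
```

You would run your playbook with a limit on the testing group first, and only point it at production once the VM run comes out clean.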
Both SaltStack and Ansible are written in Python and support custom modules written in Python that can extend the default set of modules and DSL-driven automation you get out of the box.
I use Salt in my homelab and have a few Python modules for doing custom tasks but mostly write YAML SLS files. I also use Fabric for some simple automation scripts. Fabric is super useful for abstracting away shell execution, connecting to hosts, etc.
For the management: Maybe try dagobah. Luigi looks good here too.
For the project creation: Perhaps create a skeleton with the basics? Scrapers are hard because the implementation is so different between each site that is scraped. Generic scraping is easy, i.e. get all the links. However, when you want specific information, you are kind of stuck digging into the xpaths or css selectors.
Bottom Line: I use ansible on my mac and on remote servers. I can bootstrap a new VPS server with everything I need with one command. It takes about 10 minutes, mostly downloading and installing. You can script it to perform almost every task and it just works. I can't tell you if this will work with Windows, but it's worth a look.
Welcome to the backend party. You must first learn to crawl before you can walk - doing things the hard way is not necessarily bad if you are still learning. That being said, you will never escape the task of setting up and configuring a vanilla server environment. However, there are tools to make your life easier doing so..
I can't speak too much for Windows based servers - but using a remote desktop session and doing configs via a GUI (which is how your post reads) is a very slow method. In an ideal world, if you're doing configs manually it is arguably faster to connect over SSH and edit your config files in a terminal. Again this may not be possible with Windows servers - please correct me if I am wrong.
Going further, there are a plethora of devops tools out there to automate server deployments - Puppet being the first that comes to mind, as well as Chef and Ansible.
It looks like a lot of the requirements to reverse engineer a galaxy server are present in the source. I found that the -s option already allows for an alternative provider.
This script disappears in the 2.0 devel branch, so now I'm looking around for clues to its future. A recent-enough announcement speaks of a reinvestment in galaxy, but it's unclear if that means API changes.
I don't know about a tool built for theme-switching, but you could achieve this using Ansible. It's a tool usually used to provision and orchestrate servers via ssh, not unlike puppet or chef, but simpler, and you can also use it locally. The scripts (called playbooks and roles in Ansible lingo) are pretty simple YAML files where variables (like fonts in your case) and actions are defined, and there are modules which help with deploying config files from templates (template) or modifying files in place in various formats (e.g. lineinfile, replace, ini_file). You have to build your playbooks yourself though, which is a bit of work if you have lots of different config files to change (or, with a bit of luck, someone has already built one and put it on the playbook sharing site, galaxy).
I'd also look into Ansible. http://www.ansible.com/ They have a commercial GUI that can be used by your dev teams to execute build script. Ansible has built in modules for docker and AWS. http://docs.ansible.com/ansible/list_of_cloud_modules.html
Most configuration management/orchestration tools will handle deploying Docker images. Obviously they can report on what is where.
Here are a few articles about Ansible related to docker for example http://www.ansible.com/docker
Docker is pretty much like any other build system, and so most tools for managing build systems will work just fine with it.
Well, in all the devops discussions I've read on reddit and Hacker News, people praise ansible. By the way, judging by those discussions, many people (including me) have had an experience with chef similar to yours.
Well, that depends entirely on what tool you are using for deploys. I mentioned ansible in my post; You may also want to check out fabric:
Ansible: http://www.ansible.com/home Fabric: http://www.fabfile.org/
Fabric is extremely easy to set up for small projects, so I'd recommend that if you need something quick. Otherwise, hard to beat ansible.
http://www.ansible.com/how-ansible-works
> Ansible works by connecting to your nodes and pushing out small programs, called “Ansible Modules” to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules (over SSH by default), and removes them when finished. Your library of modules can reside on any machine, and there are no servers, daemons, or databases required. Typically you’ll work with your favorite terminal program, a text editor, and probably a version control system to keep track of changes to your content.
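A minimal playbook makes the quoted model concrete: each task declares a desired state, and Ansible pushes a module over SSH to enforce it (the nginx example here is mine, not from the quoted page):

```yaml
# Each task is a "resource model of the desired state": Ansible ships the
# apt and service modules to the host, runs them over SSH, and cleans up.
- hosts: all
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Running it twice changes nothing the second time, which is the idempotent, state-driven behavior the quote is describing.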
Enabling SSH key authentication and locking down the server to just that is about the best you can do.
Changing the port on which SSH listens is at best security through obscurity.
Sure, it might stop some bots from port scanning you on the default port - but it will do nothing to stop a motivated attacker from running nmap against your hosts, and either way they're never getting in without your key. So the big takeaway is to enable key-based auth only!
Changing the port also is so annoying in a corporate environment, especially if you have to deal with different clients (is it 2222, 4222, or 8022 today?). If you really want to keep people out, learn how to set up a bastion host and only allow SSH via that using SSH's ProxyJump option.
fail2ban is a fiddly script, it's a better idea to block multiple connections at a kernel level with a firewall. Here's how to protect SSH with UFW:
sudo ufw limit 22/tcp
sudo ufw enable
Now if someone tries to brute force you, their IP will be blocked at a kernel level for a number of minutes, rather than relying on a Python script to go through your logs.
If you really want to get good at system administration, my advice is automation - this job can get boring so the more you automate, the more time you have to do fun stuff instead, so learn Ansible and put as much as you can into playbooks.
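The ufw commands above translate directly into playbook tasks, so every new host gets the same rate limiting without anyone remembering to type them (the "all" target is just the default group; adjust to taste):

```yaml
# The two ufw commands from above, expressed as idempotent tasks using the
# standard ufw module (community.general.ufw in current Ansible).
- hosts: all
  become: yes
  tasks:
    - name: Rate-limit incoming SSH connections
      ufw:
        rule: limit
        port: "22"
        proto: tcp

    - name: Ensure the firewall is enabled
      ufw:
        state: enabled
```

This is exactly the "put as much as you can into playbooks" advice in practice: the firewall policy is now versioned code instead of a command someone ran once.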
On the collectd side, all you need (in each node's main collectd.conf config file) is something along the lines of:
LoadPlugin network
<Plugin network>
    Server "influxdb-node-ip-address" "25826"
</Plugin>
On the InfluxDB node, enable the collectd input as documented here.
I'd recommend using something like Ansible with its templating support to maintain config files across the various VMs and physical machines. Keeping that many files up-to-date and in sync gets old fast.
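For instance, the collectd snippet above could be maintained from one Jinja2 template pushed everywhere. A rough sketch, with made-up group and variable names:

```yaml
# Hypothetical playbook: render collectd.conf from a single template so
# every node points at the same InfluxDB address, changed in one place.
- hosts: collectd_nodes
  become: yes
  vars:
    influxdb_host: 10.0.0.5     # placeholder address
  tasks:
    - name: Deploy collectd.conf from template
      template:
        src: collectd.conf.j2    # contains: Server "{{ influxdb_host }}" "25826"
        dest: /etc/collectd/collectd.conf
      notify: restart collectd

  handlers:
    - name: restart collectd
      service:
        name: collectd
        state: restarted
```

Change the variable, re-run the playbook, and only the hosts whose files actually differ get touched and restarted.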
This is cool. Thank you for sharing.
The thing is, what happens when one of your services starts acting weird? If one of the systems/services/components goes tits up, what do you do? What is your plan to monitor/log the system while its running or what if you need to make a tiny change?
Those are just a few of the problems with docker. The pushing and pulling of docker images is a PITA. There are lots of memory leaks, bugs and other tomfoolery that make running these long term kind of scary. If your container is internet facing, I would seriously harden the rest of the network.
Have you tried Ansible or Puppet? Ansible is really, really awesome at building/configuring/testing/deploying stuff in a repeatable, non-destructive way. It will build/configure/test/deploy whatever you want to almost any system securely, provided SSH and Python are on the host machine.
I can spin up a Digital Ocean VPS with a full Django/Rails/Go/JS app in about 10 minutes, with most of the time being spent on downloading packages. Then I can do the same thing to 50 more servers at the same time, or Raspberry Pis or home machines... if it has SSH and Python, I own it with Ansible.
Not sure what your stack is/what you're looking to automate:
https://terraform.io/ - Metal automation / cloud service automation (I personally haven't used this yet, but it looks awesome); there are more than likely good alternatives.
http://www.ansible.com/ - Configuration management, use it to download and configure wordpress onto a linux box.
http://jenkins-ci.org/ - Make it "click of a button" simple. Use it to automate testing.
There are probably other solutions, we just find this to work well. We also have scripts in place to deploy from staging servers into production. AUTOMATE ALL THE THINGS!
Spawn any Linux distro as a Vagrant box and execute remote commands using Fabric http://www.fabfile.org/ (Python with an SSH wrapper) or Ansible (you do not need Ansible Tower) http://www.ansible.com/get-started (which is also Python).
You can (we do) use core (CLI) ansible for any number of servers. There is no limit and no restrictions. You only need Tower when you want additional capabilities outlined here: Tower Features.
Namely:
As mentioned with the RH acquisition all of this will also become a community open source project and you won't have to pay going forward for these features. The paid product will then be differentiated by official support.
at my dayjob we use Ansible to automate our infrastructure management. if you desire an abstraction layer to sit in front of your Python-related management tasks I would definitely recommend doing it with Ansible.
Generally, you would be better off using provisioning software like ansible to manage your configuration files. This is especially the case since most config files contain "secrets" that you do not want to keep in your source code. If you have never messed with server provisioning before, I would definitely suggest starting off with ansible, as it is easy to use, and requires only SSH.
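One common pattern for the "secrets" part is an encrypted vars file plus a template, so credentials never land in the app repo at all. A sketch, with illustrative names (the group, file paths, and variable names are all made up):

```yaml
# Hypothetical deploy play: secrets live in an ansible-vault encrypted
# file (created with `ansible-vault create secrets.yml`), and the app's
# config is rendered on the server at deploy time.
- hosts: appservers
  become: yes
  vars_files:
    - secrets.yml              # e.g. defines db_password, api_key
  tasks:
    - name: Render app config with secrets injected
      template:
        src: settings.py.j2    # references {{ db_password }}, {{ api_key }}
        dest: /srv/app/settings.py
        owner: app
        mode: "0600"
```

The source repo then ships only the template; the secrets stay in the (encrypted) provisioning repo.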
If you must keep the config file in your source control, you can place it wherever you want during package installation using 'data_files' in setup.py.
https://docs.python.org/2/distutils/setupscript.html#installing-additional-files
jslint plugin for my text editor of choice. It does static analysis to pick up silly mistakes like missing closing bracket that are easy to miss.
Error logging tool like sentry - it tells you when a bug happens in production. You can fix bugs you were previously unaware of.
ansible for deployment - write playbooks and re-run them for deployment. So simple and quick. Huge community too.
User tracking tool like intercom - so you can track where users are clicking on your site.
Thanks for the feedback! We had completely forgotten about sharing it. We just added some social sharing buttons on the left (scrolls with the page).
And yes! We're actually writing up a blog post for the Devops position, specifically regarding Ansible. This is more of my territory as I definitely consider myself a backend developer. I've been using Docker more recently but this totally caught me by surprise. Literally had no idea what it was until we worked on this project. So we're going to dive deeper into the data and see what we can find out, then make a blog post about it. I'll follow up here when it's posted!
ansible and clusterssh are popular and probably in your repos.
We use nrpe for monitoring, and sometimes use it to run arbitrary commands whose output we can query in icinga.
If you enable fact caching, you could probably use hostvars to make the determination in a later task/play. Not sure if there's a benefit to that method, just typing out loud.
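Roughly what I mean (illustrative only - assumes fact caching is turned on in ansible.cfg, e.g. `fact_caching = jsonfile` with a `fact_caching_connection` path):

```yaml
# Play 1 gathers facts, which land in the cache; play 2 reads them back
# through hostvars without ever connecting to the webservers again.
- hosts: webservers
  gather_facts: yes

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Report each webserver's distribution from cached facts
      debug:
        msg: "{{ item }} runs {{ hostvars[item]['ansible_distribution'] }}"
      loop: "{{ groups['webservers'] }}"
```

Whether that beats just gathering facts again depends on fleet size; for a handful of hosts it's mostly a convenience.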
Also, FWIW, this might be a nice use case for the upcoming system tracking in Tower, although I'm not sure of its implementation. http://www.ansible.com/blog/ansible-tower-2.2-preview
If you consider Iceland then you should consider Norway as well imho. Ireland... not sure how much of a tech industry they have, but the country is slowly crawling out of its recession.
IT in Iceland? Well, I have seen some jobs there, especially at data centres! I was debating about setting up a new company and being the outsource for work since a lot of companies might not need a fulltime sysadmin (nor can afford one), but once I get close to actually doing it I will look into it in detail.
If you don't know either then it's worth learning both: Puppet has been around for a while (https://puppetlabs.com/), so knowing it will help you get a job where they use it, and Ansible (http://www.ansible.com/home) is a lot newer, but if companies use Puppet you can't get by knowing just Ansible (although it's still better than nothing).
It's just a way of doing our job in a nice reproducible way, whilst storing our knowledge for when we leave. A great way to admin several hundred boxes and I highly recommend them both. Might be worth setting up a few local vm's and messing about with them.
That looks very Red Hat / CentOS / Windows specific, and is pretty hardware intensive.
For a slight variation that may have a gentler learning curve, run all those VMs on weaker hardware, and have a more Debian / Ubuntu flavor, consider:
but otherwise great list - it pretty much mirrors my work environment (except Debian instead of CentOS everywhere).
I see you solved the problem, but since I originally raised the point about snapshots, I wanted to clear that up.
When you take an image from an existing instance via the AWS management console, at least in some circumstances AWS creates a snapshot automatically as part of that process. Those snapshots end up listed on the snapshot tab, with a description that indicates they were created automatically and which AMI they relate to.
The point is you definitely don't need to manually take a snapshot and then create an AMI from that.
BTW, looking at that script you posted, you might be interested in looking at Ansible (or tell the author of the script to.) It allows much simpler and more powerful automation for these kinds of tasks than bash scripts.
I absolutely don't mind, at all, managing my own server. Which is what is SO DARN appealing at DigitalOcean.
I have a bad tendency to overlook things - sadly. However I am aware of that.
So, with that said, I fear that I will forget to lock down or secure something, and leave my client sites open to threats.
I've already gone though my project sites and protected my SSH by following this tutorial.
But was it really THAT simple?
Is there MUCH more to do, to call my website, "production ready"?
> learning a configuration management tool
I've heard of ansible before and yaml files, ember, node, ruby, puphet, gulp and tons of other "NEW" things that are supposed to ease my workflow, make things better in their own way.
I have been slow to adopt new technologies into my development. Been using PHP for a couple years. Learned git this past summer. Only started using SCSS/Bourbon/Neat last week.
I just don't want to overwhelm myself.
NOTE: I just read over the pricing for ansible and it's frightening, lol. Makes me glad I have a well documented plaintext file outlining my steps, so I don't have to refer to an online guide.
I THINK you're trying to ask how people manage their servers... With tools like Ansible, Puppet, Salt, Chef, etc. I use Ansible, but I've used Puppet in the past. Ansible is simple; It doesn't require client installations. Would highly recommend.
If you want to get started with it, go through the Quickstart video or google some online tutorials.
Does it have to be all on an ISO? If what you're looking for is a way to reliably rebuild systems, you could look at something like Kickstart (via iso, pxe, something else) + Ansible:
You could make a bash script that will install ansible and execute the bundled playbook. It's how Tower gets installed.
I think it's a small price for a great tool. You already install vagrant, vmware, your OS is already "polluted".
I also prefer Ansible because only 1 machine gets "polluted", and it has a lot fewer dependencies (Python, which is pretty much everywhere).
Salt depends on 0mq and has to be installed on master and minions, last time I checked.
To conclude, I think you should use whatever you used in production, but if you're starting from scratch, my votes goes to Ansible.
Ansible, written in Python, actually has basic Windows support. It utilizes PowerShell on the backend with some Python handling the in-between stuff. Might at least be worth checking out =)
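A small taste of that Windows support - the win_* module names are real, but the host group is a placeholder, and Windows hosts connect over WinRM rather than SSH, which needs its own setup:

```yaml
# Minimal Windows sketch: verify connectivity via PowerShell, then
# ensure a service is running, using Ansible's win_* module family.
- hosts: windows
  tasks:
    - name: Check connectivity (PowerShell-based ping)
      win_ping:

    - name: Ensure the print spooler service is running
      win_service:
        name: Spooler
        state: started
```

The same playbook structure you'd use for Linux hosts carries over; only the modules and the transport change.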
Some configuration management software is written in Python as well as libraries that are just generally useful for developers and operations alike.
It's also possible that the products these companies have are written in Python. Being familiar with Python and the Python ecosystem would be helpful if you were responsible for supporting them and helping deploy / troubleshoot them.
If I were interviewing someone for a DevOps position my goto questions would probably be related to deployment and monitoring. Maybe something like "write a script that uses ssh to safely install a new version of some package and restart a service afterwards". Or "write a script that monitors a log file looking for some particular text and alerts based on it".
Good luck!
Could you provide a more specific example of what you're trying to do and which wheels you think you may be reinventing?
You might be able to leverage something like Ansible with a custom callback plugin or module to do this for you, but that could be a bit overkill. Rundeck might do the trick as well, and wouldn't require you to learn the ins and outs of a config management framework.
The docs follow you through installing Ansible and how it all works ( http://docs.ansible.com/intro.html ). You could also watch the quick start video to get a general idea ( http://www.ansible.com/resources )
For Arista, they support ansible; it's very customizable and Arista has custom modules built in. It might be a bit of a learning curve, but you can pay for a GUI; otherwise it's free.
Hopefully Cisco decides to work with the Open Source community, I know some of the Business Units are discussing it.
Use a toolchain that does most of the work. The idea of hand rolling your own Perl/Python code and reinventing the wheel works, but takes a lot of time.
Look at tools like Ansible and Puppet.