Yes and no. Docker has no built-in way to know right now whether a base image contains code for x86 or ARM. The community has been abusing Docker tags to try to differentiate, but there's no widely accepted convention on how to separately tag the x86 and ARM images for a given container. There's also some complexity around getting automated image builds for ARM, especially if you want to build from an x86 system. I ended up buying a Pi just to use as a Jenkins ARM build agent.
Assuming you have an ARM container though, it works great. As on x86, performance is pretty much the same as running a given app without Docker. The 1GB of RAM is arguably the biggest hurdle once you throw a few containers up. I have a 5-node RPi3 Docker Swarm cluster I use for development, and the only real issue I've had is not being able to run my preferred orchestration tool, Rancher - there's some hurdle preventing an ARM build. You can even get GPIO access if you run the containers in privileged mode.
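For reference, a minimal sketch of the privileged-mode approach (the image name is just a placeholder; on a Pi you can often scope it down to the GPIO device instead of full privileged mode):

    # full privileged mode, as mentioned above
    docker run -d --privileged --name gpio-app my/arm-gpio-image
    # narrower alternative, assuming the app only needs /dev/gpiomem
    docker run -d --device /dev/gpiomem --name gpio-app my/arm-gpio-image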
Haha, now we're getting into the fun details. So the Docker management platform I chose to use is called Rancher. It's fairly lightweight, has a beautiful UI, lots of powerful features, and runs in its own container.
Rancher includes a built-in load balancer, but it's not quite smart enough yet to support dynamic L7 routing, so I created docker-rancher-events, which listens to events via the Rancher API and configures the load balancer automatically. It does this using tags defined in the service definition (docker-compose.yml file) and some metadata configured during the Chef run (like the base domain: *.depot.local).
So when I spin up a service named "plex", the event handler registers the new service with the load balancer using "plex.depot.local".
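For illustration, here's a rough sketch of what a service definition with that kind of tag might look like (the lb.domain label is just a hypothetical name for whatever tag the event handler reads, not an official Rancher label):

    plex:
      image: plexinc/pms-docker
      labels:
        # hypothetical tag the event handler uses to register "plex.depot.local"
        lb.domain: plex.depot.local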
All that works great for local requests on the server itself, but the cookbook also configures a DNS server so that any computer on the local network can request *.depot.local; all that traffic gets routed to the server, and its load balancer handles routing to the correct application.
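If you're doing something similar with dnsmasq, the wildcard part is a one-liner (assuming dnsmasq and a server at 192.168.1.10; your cookbook or DNS server of choice may differ):

    # resolve depot.local and every *.depot.local name to the Docker host
    address=/depot.local/192.168.1.10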
Sorry for the wall of text, but I'm pretty proud of getting that all working :)
Maybe Rancher:
You can see some pictures of the UI here:
I personally really liked Rancher OS, and intend to deploy that for containers that don't need access to bare metal (for things like Plex, the idea of Proxmox -> Rancher OS -> plexinc/pms-docker feels one step too far, so I just run it as Proxmox -> plexinc/pms-docker directly).
Rancher is an orchestration tool that can create many stack types running other orchestrators and schedulers. Kubernetes happens to be one of them; it can also do Docker Swarm and Mesos. http://rancher.com/docs/rancher/latest/en/
I've been using google-drive-ocamlfuse and had consistent out-of-memory crashes. Is rclone more stable? What's your experience with it?
Also, re Docker, I've found the linuxserver images to be very consistent and simple to configure for Plex + NZBget + Sonarr + Radarr; my docker-compose.yml is here if you wanna look.
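In case anyone wants a starting point, a trimmed-down sketch of that kind of compose file looks roughly like this (NZBget and Radarr follow the same pattern; the paths, PUID/PGID, and TZ values are placeholders you'd adjust):

    version: "2"
    services:
      plex:
        image: linuxserver/plex
        network_mode: host
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Europe/London
        volumes:
          - /opt/appdata/plex:/config
          - /mnt/media:/data
        restart: unless-stopped
      sonarr:
        image: linuxserver/sonarr
        ports:
          - "8989:8989"
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Europe/London
        volumes:
          - /opt/appdata/sonarr:/config
          - /mnt/media:/data
        restart: unless-stopped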
Also Rancher has taken away some of the configuration pain for me.
Take a look at Rancher. It comes with an integrated HAProxy load balancer. There are catalog services for updating DNS and getting Let's Encrypt certificates.
It's really easy to use and maintain. The downside is that it uses more resources for itself than just going with jwilder's nginx-proxy.
Possibly a good opportunity to create a simple mobile app using the Rancher API, http://rancher.com/docs/rancher/v1.6/en/api/v2-beta/. You could start with the functionality you need and accept PRs to increase the functionality for other use cases.
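As a very rough sketch of what the API side could look like (assuming a Rancher 1.6 server, an environment API key pair, Node 18+ for the global fetch, and the /v2-beta/projects/<id>/services endpoint; the project ID below is a placeholder):

    // List services in one Rancher environment ("project" in API terms).
    const BASE = "http://rancher.example.com:8080/v2-beta";
    const auth = "Basic " + Buffer.from(
      `${process.env.RANCHER_ACCESS_KEY}:${process.env.RANCHER_SECRET_KEY}`
    ).toString("base64");

    async function listServices(projectId: string): Promise<void> {
      const res = await fetch(`${BASE}/projects/${projectId}/services`, {
        headers: { Authorization: auth },
      });
      const body = await res.json();
      for (const svc of body.data) {
        console.log(`${svc.name}: ${svc.state}`);
      }
    }

    listServices("1a5").catch(console.error);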
I tried Portainer and liked it but wanted something with a little bit more power. If you're in a similar boat, check out Rancher. It does support clustered, highly-available setups like you mention, but it also works just fine with one "host", and I'm enjoying the extra flexibility it offers.
To set it up, just do the manual install, then visit http://x.x.x.x:8080 and it will walk you through the setup.
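For anyone who hasn't done it, the manual install for Rancher 1.x boils down to running one container (pin a version tag if you prefer):

    sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server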
On automating running containers, you're asking for Docker Swarm. Docker Swarm is the easiest of the Docker container orchestration technologies out there. Docker Swarm on Rancher (http://rancher.com/rancher/) is even better. Docker Swarm and Rancher both help with networking Docker containers.
Have you looked at Docker Compose? It doesn't work cross-host, but it gives an idea of how containers can communicate with each other using Docker's networking.
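A tiny example of what that looks like (hypothetical services; Compose puts both on a shared default network, so "web" can reach the database at the hostname "db"):

    version: "2"
    services:
      web:
        image: nginx
        ports:
          - "80:80"
        # can reach the other container at db:5432 purely by service name
        depends_on:
          - db
      db:
        image: postgres
        environment:
          - POSTGRES_PASSWORD=example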
For cross-host communication you should look at some of the orchestration tools like Rancher or Kubernetes.
Rancher is pretty quick to get up and going, since you just need UDP ports 500 and 4500 on each host to be accessible for your containers to talk to each other. Not sure about Kubernetes, I've never set up a cluster from scratch, but you can make Kubernetes clusters with Rancher as well.
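If the hosts happen to run ufw, opening those ports (they're for Rancher's IPsec overlay network) is just this (adjust for whatever firewall you use):

    sudo ufw allow 500/udp
    sudo ufw allow 4500/udp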
Email:
Dirt cheap method: rent a VPS in Singapore (DigitalOcean, AWS, etc.), then install Mail-in-a-Box.
More money than time: Google Apps.
Hosting:
It seems so. They appear to be focusing on being your platform for managing k8s everywhere, even the cloud k8s installs. I just reread their announcement:
You'll have a few issues if you're trying to install Docker in an LXC container. Try to install it on a VM containing minimal Debian, Ubuntu, etc., or a container-specific OS like RancherOS.
Hey sorry I'm super late seeing this -- Rancher 1.x actually had support for swarm: http://rancher.com/swarm/
I'm not sure what the support looks like now, since they seem to have doubled down on Kubernetes for 2.0.
I agree - I looked through it but couldn't find what I can set to make something a "stack" with the new k8s changes. I believe "stacks" have been based on docker compose files, and I don't use those. I was hoping there would be some kubernetes label I could use instead.
I also looked at the article on cattle/swarm/kubernetes side-by-side but there's no mention of what stacks look like in Kubernetes.
[EDIT] I also just skimmed through the Kubernetes video from the older version of Rancher, and I never saw the term "Stack" given to any k8s deployments in the UI... Maybe it's just not a thing.
Check out Rancher. I don't think it's as hard as you might think. Also, sorry to yuck your yum, but I tried this once and found it requires a ton of effort to do right, specifically testing. How do you test whether your message actually got to the consumer? The honest answer is you can't, which only highlights that it's the wrong pattern. The data isn't really a stream, so there's no reason to use events. Also, a reducer is a good pattern for ingesting streams, but in this case an email is being sent. There's no stream, and there's definitely not much to ingest.
Maybe you should take a look at http://rancher.com. It's an open-source tool that helps you manage a Kubernetes cluster. It also has the ability to deploy a Cattle cluster (a more lightweight alternative to k8s developed by the Rancher team), which also gives you scheduling, load balancing, and management of Docker containers. My company is using Rancher in production and we are really satisfied with it.
I don't fully understand your requirements for the ws, but maybe you should take a look at combining Redis with Socket.IO to have a ws connection shared between multiple servers.
Also, for load balancing, take a look at sticky sessions to make sure that your requests hit the same server after the initial login request.
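A minimal sketch of the Redis + Socket.IO side (this assumes the socket.io-redis adapter as it existed around Socket.IO 2.x; package names and APIs may have moved since):

    import http = require("http");
    import socketio = require("socket.io");
    import redisAdapter = require("socket.io-redis");

    const server = http.createServer();
    const io = socketio(server);

    // Every Socket.IO node pointed at the same Redis can broadcast to the
    // others' clients, so the ws layer is effectively shared across servers.
    io.adapter(redisAdapter({ host: "localhost", port: 6379 }));

    io.on("connection", (socket) => {
      // reaches clients connected to any node, not just this one
      socket.on("chat", (msg: string) => io.emit("chat", msg));
    });

    server.listen(3000);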
Hope this gives you some directions to look at.
If you want something to manage your containers that doesn't have the learning curve of Kubernetes, you may want to check out Rancher. We use it at my work and, for the most part, love it.
If you don't want to go through the hassle of learning about container management, however, I'd definitely recommend picking one version of Docker to use and baking it into your AMI. Good luck!
Personally, I think a better implementation of this concept is RancherOS, where each system process runs inside a container. Far less overhead, but with similar security measures.
It's also more mature and has been around longer. http://rancher.com/rancher-os/
I can only speak for Linux NFSv4, but it works out of the box with Rancher when no_root_squash is set. So sorry, can't seem to help you there.
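For reference, on the Linux side the relevant bit is just the export options on the NFS server, something like this (the path and subnet are placeholders):

    # /etc/exports
    /srv/nfs  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)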
If you are feeling experimental you can also give Longhorn a try.
So Bytesized Connect is basically a Docker manager, which is pretty cool, and it has a lot of "seedbox friendly" apps available.
Seeing that there was some mention here that this ~may~ go paid later on, people might want to take a look at Rancher as well.
It kind of does a lot of the same things, except you can keep the whole process "in-house" and under your control - no need to use a connection to an outside site.
You will, however, have to do a little searching to find Docker images for the apps available via Bytesized Connect. But not to worry, they're pretty easy to find.
We use Rancher for orchestration... It's incredibly flexible, deploys in HA and plays nicely with Kubernetes or Swarm if you want to layer them on top. It's now hit v1.0, so it's nice and stable, too: http://rancher.com
I use Docker and it's great. Only thing is I find it hard to really nail down a deployment workflow. I've been playing with Tutum, which I love, but is probably going to get pricey for my needs when they start charging. Rancher is also great because they use native Docker features as much as possible, just filling in the gaps when absolutely necessary, and giving you a nice GUI dashboard of everything.
Thanks for the feedback. I agree that Kubernetes has the most momentum and is also the most powerful and flexible. However, it's not a simple product.
We've also started testing Rancher for a nice balance between simplicity and automation. Early days yet, but the progress so far has been quite decent. There's a reasonable list of all the other current and emerging tools / OS's listed here: https://github.com/weihanwang/docker-ecosystem-survey
In regards to the shared hosting for Docker, I agree. Even for isolation between different projects, I'd still want to have the Docker containers isolated from each other. We're working with Odin on the Docker support within Virtuozzo to best optimise this environment. This way there are the advantages of Virtuozzo containers (instant resource scaling, very low overheads and complete isolation) combined with Docker's easy deployment. This is a "nested" container environment, but with complete file and resource isolation from other Virtuozzo containers.