People, please don’t go to this site. They ripped it off of Christopher Noring. Idk what kinda fishy shit the site owners (and OP) are up to, but it’s no bueno.
Here’s the original: https://dev.to/softchris/5-part-docker-series-beginner-to-master-3m1b
I recently switched up my Raspberry Pi + Pi-Hole setup to use Docker Pi-Hole on the Pi. I didn’t really need to do that, but it was a good excuse to use Docker more.
I’d be interested in what others say. I’ve been looking for more stuff like that but have been coming up short.
Helm adds a fair bit of complexity on top of Kubernetes but I think it can also help show you around. Take a look at the helm charts repository for some ideas that will be useful for you. If you know something well like rabbitmq, redis, etc... and you know what running them looks like (ports, storage, etc...) then deploy them using helm and then look at their charts source to get an understanding of helm and kubernetes.
If you see something in the chart that you don't know, look it up, the Kubernetes docs are pretty good at explaining what the different resource types are and how they relate to each other.
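For example, something roughly like this (I'm assuming the Bitnami repo and Redis purely as an illustration - swap in whatever you actually know):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-redis bitnami/redis
helm show values bitnami/redis    # every knob the chart exposes
kubectl get all                   # then inspect what the chart actually created

Comparing the created resources against the chart's templates is a quick way to connect the Helm side to the Kubernetes side.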
I think Red Hat has done a bit too much on top of Kubernetes to call it "just a distribution". They've added multitenancy and a lot of networking and registry stuff on top, plus very-easy-to-deploy aggregated logging and metrics. Check out https://www.openshift.com/container-platform/kubernetes.html for a comparison. I was very surprised to see the add-ons beyond Kubernetes.
Did you check the documentation? Since you are using k8s, you can inject the secrets into the container via files or environment variables. Do NOT put the secrets in the image.
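A minimal sketch of the environment-variable route (all names here are made up):

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  DB_PASSWORD: changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myorg/myapp:1.0
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: DB_PASSWORD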
Fundamentally, you are misunderstanding what docker is.
It is not a virtual machine of any sort. It is a framework for process isolation. You need the base OS because every docker image uses the host's kernel. It is just isolated away from all other software and libraries on the system via namespaces and cgroups. (for Linux anyway, I have no idea how it works on Windows)
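You can see this for yourself - the kernel version reported inside any container matches the host's:

uname -r                            # on the host
docker run --rm alpine uname -r     # inside a container: same kernel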
You could use something like CoreOS, which is a stripped down linux meant for container hosting only.
Looking into it briefly, it appears you can mount your NFS shares to a given host then mount them through to your containers as volumes.
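Untested sketch, but the local volume driver can mount NFS directly (server address and export path are placeholders):

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/media \
  media
docker run --rm -v media:/data alpine ls /data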
It sounds like you just need some kind of "getting started" guide for Docker. You might start with something that focuses on Node.js apps, since this is a Node.js app. Beyond that, this looks really basic.
Here's an example: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
> This specific repo, https://github.com/the1laz/cgateweb, does not get updated frequently, but ideally the docker would check to see if the file's it's using are from the latest, most up-to-date, repo.
That is not a function of Docker, rather it is a function of a CI/CD tool that you might use to build your container image.
Docker images are normally distributed through registries. Docker Hub is the default registry. Accounts include one free private repo, and you can sign up for more. Google has Container Registry, AWS has ECR; these make the most sense if you're using their platforms. It is also possible to host a private registry. I would suggest using Docker Hub if you only need the one private repo.
It is also possible to save an image to a tar file that can be copied and loaded into another Docker instance. But it is easier to push and pull from a registry since most of the tooling assumes it.
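The tar route looks like this:

docker save -o myimage.tar myimage:1.0
# copy the tar to the other machine, then:
docker load -i myimage.tar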
I use traefik for mapping domains to containers. When I add a container to my docker-compose file with the correct labels traefik automatically picks it up, adds a reverse proxy and fetches a cert for the domain.
This guide is similar to what I did.
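As a rough sketch of what those labels look like on a compose service (this assumes Traefik v2, an entrypoint called websecure, and a cert resolver called letsencrypt - your names will differ):

services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"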
Bash on Windows (the Windows kernel implementing Linux kernel system calls) is a distinct feature from Windows containers, although they could theoretically reuse some of it to run Linux containers on Windows. Microsoft is supporting native Windows containers in Windows Server 2016, meaning that Windows applications can run in containers, nothing to do with Linux.
Relevant (unanswered) SE: http://serverfault.com/questions/767994/can-you-run-docker-natively-on-the-new-windows-10-ubuntu-bash-userspace
I mean, this is a lot of work. This isn't a quick "some contract" amount of work. There are loads of factors needed to be considered before going at this project.
Personally I'd recommend using an open source project like: https://pterodactyl.io/
The Docker Lab & Tutorials are a good start. https://www.docker.com/play-with-docker
The point of docker is that you can build a small enclosed environment that has exactly what you need, which can then be replicated wherever you need it. That way the testing environment and the deployment environment are exactly the same, so you never end up with "it works on my computer" (but not on the server, because of configuration differences).
I hope that helps ...
You’re describing the core concepts of what most container orchestration systems provide you. A way to manage, interconnect, and schedule one or more containers in a sensible way. E.g. The concept of a Pod in Kubernetes: https://kubernetes.io/docs/concepts/workloads/pods/pod/
I don't think deploying on Docker vs bare metal makes a difference, but these are some of the considerations you can make in evaluating them as web servers generally: https://www.digitalocean.com/community/tutorials/apache-vs-nginx-practical-considerations
Have you tried the docker tutorial? Last week I also set out to learn docker, and the tutorial really helped explain things. It shows how to set up volumes so that your folder is mounted into the container for development purposes.
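The volume part of it boils down to a bind mount, something like this (paths and image are just examples):

docker run --rm -it -v "$(pwd)":/app -w /app node:18 npm start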
boot2docker is long gone. Docker for Mac uses a LinuxKit VM running in xhyve via HyperKit - there's a basic architecture diagram on this page: https://www.docker.com/docker-mac
I think you're right that you'll need to access the VM to deal with filesystem stuff though. There's some info about accessing the VM here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/10
> one of the speakers said that a ‘container is never going to be as secure as a virtual machine’.
This doesn't make any sense. It's not just "one of the speakers" saying that, it's Jérôme Petazzoni at Docker saying "containers don't contain" and suggesting running Docker inside a virtual machine:
The haugene one is simple. Just add your credentials where it says in the -e (environment) variables for username and password. Be sure to set your provider to NordVPN as well.
I don't know how different docker is on Windows. What cmd error did you get?
You can translate a Docker Compose file to Kubernetes resources quite easily, so that's nice if you have plans to run in k8s.
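The kompose tool does most of the translation for you; assuming your file is called docker-compose.yml, roughly:

kompose convert -f docker-compose.yml
# writes out Deployment/Service manifests you can then kubectl apply -f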
Docker doesn't really help here because the host EC2 instance (server) would still be running, even if the teamspeak software was stopped when the last client left. EC2 charges by the second that the instance is online, regardless of what software is running, so you need to shut down the instance to save money.
EC2 does have an API to allow programmatic control of instances (servers), so you could write a program to periodically poll teamspeak and shut down your instance if no one is online (assuming you're not using the instance for anything else).
Bringing it back up is more difficult because if the server is currently down, there's nothing to poll, and nothing for the first user to connect to. If there's some other way you can tell when to start up (a webpage with a "start my teamspeak instance", an email handler or API call to the game to tell when someone from your list of users is online) then you could run that somewhere (not on your EC2 server, because that will be turned off at this point - perhaps AWS lambda?) to periodically check and boot the EC2 instance & teamspeak when required.
It's probably quite a bit of work to write yourself and get it all working reliably though if it's just for personal use. Have you considered using / trying a smaller, less powerful type of EC2 instance to save money?
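If you do go the DIY route, the shutdown half is a small cron job; a very rough sketch (the instance ID and the "is anyone online" check are placeholders you'd have to fill in yourself):

#!/bin/sh
INSTANCE_ID="i-0123456789abcdef0"        # placeholder
if ! ./teamspeak_has_clients.sh; then    # hypothetical check, e.g. via the TS3 ServerQuery interface
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
fi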
You need a scheduler. K8s, mesos, rancher, dockerswarm.
You can do horizontal scaling, not vertical. Meaning in kubernetes you would assign horizontal-pod-autoscalers (hpas) to watch the cpu usage of a deployment and it would scale up new pods as necessary. The new pods would be based on the original deployment so the resource limits would be exactly the same, hence why it's a horizontal and not vertical scale.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
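The quickest way to try it, assuming a deployment called myapp with CPU requests set:

kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=10
kubectl get hpa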
Don't even attempt to do this with your databases until you fully understand how they work. You will likely destroy all of your data, or you'll wind up in a split brain scenario, etc.
If you are ordering certificates from the production Let's Encrypt service, there is a limit on the number of certificates that can be ordered. For testing, always use the staging issuer.
Ref: https://letsencrypt.org/docs/rate-limits/
I haven’t reviewed the other issues.
Generally for Docker, the advice is to use personal access tokens where possible to avoid your main Docker hub account credentials being held in the clear.
For Docker for Windows/Mac they leverage the credential store on each platform to protect your creds, but (AFAIK) on Docker engine on Linux they don't, so it's important to a) use a personal access token and b) make sure you protect access to that file.
Docker the company has created what they call the 'Universal Control Plane', which is now in beta: https://www.docker.com/universal-control-plane I checked it out, and it seems really great - even better than tutum I think. But we are currently using mesosphere's DCOS in production and it's a great solution, not just for docker. They have a free community edition that takes 5 minutes to deploy to AWS via a cloud formation template. I suggest checking it out ( warning: the default templates use m3.xlarge which are very costly, you can modify the template file to just use t2.micros while testing): https://docs.mesosphere.com/install/awscluster/
CoreOS Container Linux and Fedora Atomic Host existed side by side. RedHat acquired CoreOS and integrated it into its product line => meaning EOL for CoreOS Container Linux. RedHat now offers RHEL CoreOS (RHCOS): https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/architecture/architecture-rhcos
Fedora Atomic Host is EOL already and has since been replaced with Fedora CoreOS: https://getfedora.org/coreos?stream=stable
Seeing how these still exist, maybe that momentum still needs to happen.
at least with k8s the true way seems to be ConfigMaps: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap
we use these (with openshift) for injecting any sort of configuration on top of a base image
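A minimal sketch of that pattern (file name, image and mount path are just examples): create the ConfigMap with kubectl create configmap app-config --from-file=config.yml, then mount it:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myorg/app:1.0
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    configMap:
      name: app-config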
Check out https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ .
Here is a little snippet of a migration job I use:

#######
# Migration job
#######
---
apiVersion: batch/v1
kind: Job
metadata:
  generateName: migration-
  labels:
    job: migration
spec:
  template:
    metadata:
      labels:
        job: migration
    spec:
      containers:
      - name: server
        image: XXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/someCoolContainer:1.0
        command: ["./bin/migrate.js"]
        ...
...
Okay, let's clarify a little. You are trying to use Docker Machine to create a VM. Docker Machine is not Docker. Docker Machine is a tool to create VMs tailored to run Docker containers.
Are you planning on running a Varnish container or do you wish to SSH into the machine to install Varnish on the host?
Why do you require CentOS?
I have a hunch that you would be better served by a tool like Vagrant.
I use openswan to connect two regions in AWS; it works very well. One region is on the 10.0/16 CIDR and the other is 172.16/16. Then I add a route to the route table that tells traffic to go through the openswan / ipsec tunnel if the traffic is going to the other region. This can work with any public cloud or bare metal.
I do it, it's fun. In the past I'd run OpenShift as a single-machine master/slave combo, but I don't think it's as easy to do now. Today I just run minikube on my home server and give the VM as many resources as I can. Reaching minikube from outside the server is the hard part. You can do it with vboxmanage natpf. I use that for kubectl on the local network.
I also use cloudflare Argo tunnels as an ingress controller to expose services to the web, it's kind of immature but really cool.
Also check out DigitalOcean, they just announced managed k8s and are giving away free early access clusters until September https://www.digitalocean.com/products/kubernetes/
Well, if I were you, I would package all your front end stuff (HTML, CSS and JavaScript) into a Docker container with nginx running in it. This Docker container could connect to the other Docker container running your back end stuff - Node + Express.
Please take a look at: https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/ (Nginx with front end stuff)
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ (backend with node and express)
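The front-end container can be as small as this (assuming your built assets end up in dist/):

FROM nginx:alpine
# copy the static build output into nginx's default web root
COPY dist/ /usr/share/nginx/html/
EXPOSE 80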
If you're not on the free tier for amazon, you might be interested in checking Digital Ocean. The $5 or $10 per month offerings are pretty great value compared to what you get at AWS. (I don't work at either company btw, but use both of their services)
Docker Engine is open source.
Docker Desktop is a product that contains Docker Engine and a bunch of other stuff to run it on OSX/Windows. If you'd like to create your own version of it that includes Engine and other components, you can. You're also free to install WSL or another hypervisor, set up a Linux VM, and install Docker Engine inside of it. You can also use Docker Desktop for free for personal use or if your business is under the limits: https://www.docker.com/pricing/faq
Open source comes in two flavours: free speech, and free beer.
Docker Desktop / Docker Community Edition is open source, and free to use. Like free speech.
Registration for a Docker Personal account is part of the "cost" of being able to download a pre-built version of Docker Desktop. https://www.docker.com/blog/updating-product-subscriptions/
Could you do something like docker run -it <container> sh -c "workon myproject" && bash
? Just an idea, untested.
Edit: nvm that probably wouldn't work for setting the environment, but you could maybe do something more like bash && workon myproject
... Not sure. Just throwing some ideas at you.
Edit2: Looks like this question is what you're trying to do. http://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
And some more info: http://stackoverflow.com/questions/7120426/invoke-bash-run-commands-inside-new-shell-then-give-control-back-to-user
Example: bash --init-file <(echo "workon myproject")
Typically the answer is "one process per container". That's a bit of a misnomer because there is often more than one process running - but you should only be starting one. So as a simple example (in a simple environment), WordPress: one container for the WordPress PHP running PHP-FPM. One container for Nginx. One container for MySQL.
In your case, uWSGI is running the flask code. Assuming you're doing something like the doc examples. So 1 nginx. 1 Python app. 1 Tensorflow.
If you want the "official" answer, docker.com says:
>A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
In short, there are a few exceptions when you need multiple things run in the same container, but you should avoid that if at all possible. Otherwise you're basically back to VM's.
You could use docker on coreos https://coreos.com/using-coreos/docker/ and etcd instead of consul, as Registrator works with both. If etcd can write an nginx config and reload nginx when a query changes, then that could replace consul-template. I think then the two systems would be basically the same.
Can you link what you're trying to setup and install? I see a bunch of different products on what I assume is their homepage:
I see a bunch of images out on docker hub too, are you trying these?
> I can't say to Swarm on which nodes exactly I need to deploy those applications

Check Swarm Filters.
But it's not super clear what you want to do. Can you explain in a more specific way what you want/need? Maybe you don't even need Swarm in the end.
Usually I'm using https://ngrok.com/ to forward local ports for presentations.
For a permanent solution I'm using an nginx proxy container:
Don't pre-optimize for k8s when you don't need it now; when the time comes, moving containers should be OK since the images are compatible.
How did you monitor things before docker? Do the same here. See this for an example
Same as first point, also take a look at some loadbalancers like traefik and auto discovery tools and how easy they integrate with docker.
The Veeam agent also requires the Veeam B&R server, which is Windows only, so dockerizing that piece is much more involved and AFAIK requires a Windows-specific container implementation (someone please correct me if I am wrong), so all in all I'm not sure Veeam is actually what you would be looking for. They do have a free version (https://www.veeam.com/virtual-machine-backup-solution-free.html) but I don't know what its limitations are. I don't see an actual link to the video you were referencing, so I'm not sure what they are doing in order to be able to make any replacement recommendations.
https://dev.to/azure/docker-series-fundamentals-microservices-cloud-and-mindset-5f64
Hello, it appears you tried to put a link in a title; since most users can't click these, I have placed it here for you
> So are you using Docker as a development tool or just as deployment one?
Docker is a deployment tool. Trying to use it as part of your development workflow, especially if you're using old-school Java tools like Eclipse and Maven, will lead to headaches.
If you want something to simplify / standardize your development environment, Vagrant is probably a better option.
edit: should have known better than to post "a screwdriver isn't the best tool for hammering nails" to /r/screwdrivers...
I understand Docker, and use it everyday at work. However, OP is trying to install Maven into the container, then (presumably) copy Java source code in, and run the maven compile inside the container. This breaks many of the Docker best practices, like having containers be immutable, and having a container image be something that can be deployed as-is to a staging or prod environment.
It's much better to have your build system (such as Maven, Gradle, or sbt) sit alongside the Docker engine. The build system does the compilation & unit testing then outputs a JAR or WAR. If your build system supports it, it can also build a Docker image for you, but if not, you can just write a Dockerfile that includes "COPY output/foo.jar /srv/whatever". For the concerns OP mentioned (like having IDE run configurations, or being able to right-click > debug instead of setting up a remote debugger session) Docker doesn't need to be in the picture.
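That keeps the Dockerfile trivial - something along these lines (base image tag and jar path are whatever your build actually produces):

FROM eclipse-temurin:17-jre
# copy the jar the build system (Maven/Gradle/sbt) already built
COPY target/foo.jar /srv/foo.jar
ENTRYPOINT ["java", "-jar", "/srv/foo.jar"]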
On a team that wants to standardize their environment setup (which version of Maven, Docker, and other tools are installed) Vagrant is a useful way to do that. And note that Vagrant and Docker aren't in competition - you end up running your Docker containers inside a Vagrant VM.
I guess GitFlow makes sense if you have multiple product versions (1.5.x, 1.6.x, 2.0.x etc) that you need to support and you don't force upgrades on your users, e.g. on premises installed software. But I cringe every time I see it recommended as THE go-to git workflow for every situation.
If you control all the environments that the software is deployed on and you don't really need version numbering (like a SaaS), I'm a happy user of GitLab flow with the environment branches.
Best start: Domain-Driven Design: Tackling Complexity in the Heart of Software https://www.amazon.de/dp/0321125215/ref=cm_sw_r_cp_api_i_CEF1G0BM0DX5YJ60X89Z I also liked (but ymmv): Building Microservices: Designing Fine-Grained Systems https://www.amazon.de/dp/1491950358/ref=cm_sw_r_cp_api_i_N9PWY7RJ4RE0PQ817WZV
I run Private Internet Access and set up my pfSense to push the VPN connection to anything inside an IP range of 10.0.1.50 - 60. So anything I need to lock down, I throw into that IP range and it's secured.
vscode has some good documentation on developing inside a container
they provide a plugin called the Remote Containers extension, which allows you to run VS Code within a container
https://code.visualstudio.com/docs/remote/containers
I would say docker can be extremely useful and can help with a development workflow, but... it is another big thing to learn as someone starting out. I would start with a local Apache/MySQL, or set up a VirtualBox Linux VM, or sign up with DigitalOcean or one of the other cloud providers and spin up a small test VM.
Once you understand how to run your code and deploy it and have some practice scripting etc, then move on to more advanced stuff. Docker will require that understanding going forward.
DO also have good guides on setting up simple WordPress vms. https://www.digitalocean.com/community/tutorials/how-to-use-the-wordpress-one-click-install-on-digitalocean.
Check that the Windows firewall (or Ubuntu for that matter) isn't blocking ports that swarm needs.
True!
I think my initial impression of the article was that it was saying "Docker could not succeed in production without these particular bugs getting fixed."
After a re-read, perhaps the article seems to be saying that "Docker works pretty well, but could be an even more powerful offering if it were enhanced with these particular features, bug fixes (which are on the way)".
I will also point out that my defense of Docker here is partly colored by the ridiculously negative reactions to Docker as a solution over at HN.
It sounds to me like you should use something like Ansible to abstract the platform and just pick a couple of your bullets to do a POC?
You are already on AWS so I would stay that route (I personally prefer AWS anyway). You can contact your AWS account rep (you have one even though you may not know them) and have a Solutions Architect come on site free of charge to talk through some options. It's a part of the well-architected framework initiative:
Have a look at this article http://www.zdnet.com/docker-libcontainer-unifies-linux-container-powers-7000030397/ it explains pretty much everything you need to know about libcontainer :)
TLDR; Libcontainer is much more efficient.
touch is basically a utility to create a file (it does a little more, but only this part is important for your use). You can create the file using whatever program you're comfortable with. If you have hide file extensions turned on, make sure you don't create a file named Gemfile.lock.txt, for example.
As this is a problem you're likely to experience often, I'd recommend installing a bash shell which supports a variety of GNU/Linux commands; the Git for Windows package is the easiest to get running, and you'd then execute your project/docker commands in the "Git Bash" console instead of command prompt. If your uses become too advanced for that, WSL is a good option.
would this work?
https://nodejs.org/en/docs/guides/debugging-getting-started/
you could do it by passing an additional param on the nodejs invocation by overriding the entrypoint that you're currently using.
and then exposing the port for the debugger so that you can debug as normal?
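Untested, but something along these lines (the file name is a placeholder, and it assumes the image lets you override the command like this):

docker run -p 9229:9229 your-image node --inspect=0.0.0.0:9229 server.js
# then attach Chrome DevTools or VS Code to localhost:9229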
You can either load the CA from which you generated that certificate or ignore cert validation (not recommended).
To load the CA you can use: https://nodejs.org/api/cli.html#cli_node_extra_ca_certs_file
To ignore cert validation, set NODE_TLS_REJECT_UNAUTHORIZED=0
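In Docker that might look like this (paths are placeholders):

docker run \
  -v "$(pwd)/my-ca.pem":/etc/ssl/my-ca.pem:ro \
  -e NODE_EXTRA_CA_CERTS=/etc/ssl/my-ca.pem \
  your-node-image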
This is interesting, thanks. I also came across https://www.slideshare.net/Docker/deeper-dive-in-docker-overlay-networks-81012645, which looks worth a read as well.
You’re right, I did not try it because I know how to install a package in a Debian image. You didn’t specify which tag you were using beyond “4.8”, which would be a Debian image. If you want relevant help from random internet strangers, you need to provide relevant details.
So you need to install a terminal editor via Nuget? Type https://www.nuget.org into your web browser and then type “terminal editor” into the search box. One comes up. If your complaint is that Nuget doesn’t have a terminal editor, that was a strange way to phrase it. I’ve never used Nuget, so I have no idea if that terminal editor meets your particular needs or not.
Or just use Debian. Or Alpine. Or Ubuntu.
See http://serverfault.com/questions/767994/can-you-run-docker-natively-on-the-new-windows-10-ubuntu-bash-userspace, like the second or third answer.
The Linux subsystem is its own environment with its own filesystem and programs, so you have to install docker via apt-get, but you can't run the daemon inside the Linux subsystem as of right now, AFAIK. You can run the Docker daemon on the Windows side listening on tcp://0.0.0.0:2375 and point the Linux client at it; I set DOCKER_HOST to that URL in my bashrc on the Linux side.
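i.e. something like this in ~/.bashrc on the WSL side:

export DOCKER_HOST=tcp://localhost:2375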
I've noticed it's kind of slow sometimes. Git bash doesn't work all the time for me (some tty issue) and I don't really like powershell, so I find myself switching between Ubuntu for windows, cmd, powershell and git bash a lot.
Yes, this is exactly what k8s is great for! One note of caution -- make sure you understand that namespace isolation only goes so far by default. It is logical separation, not true isolation on its own. You need network policies in place to enforce true isolation.
Otherwise, any container can contact any other service by name within the cluster.
For example, from a pod running in the app-a namespace, you can connect to the redis service running in the other-app namespace by its internal DNS name redis.other-app (or its long form <service>.<namespace>.svc.cluster.local).
NetworkPolicies are required to prevent this.
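A per-namespace default-deny ingress policy is the usual starting point, roughly like this (it also assumes a CNI plugin that actually enforces policies):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-a
spec:
  podSelector: {}       # applies to every pod in the namespace
  policyTypes:
  - Ingress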
Quick question. What's the best way to set this up in development so I can test HTTPS/CORS etc.? In production, I can set the VIRTUAL_HOST environment variable on each container, but in development, I'm not using subdomains.
Thanks
Edit nvm found it https://letsencrypt.org/docs/certificates-for-localhost/
You have described exactly the use case of a continuous integration server, not docker.
In general, the Dockerfile lives in the same project as the code that is going to be executed, and just installs what it needs (like npm in your case) and then copies in the executable.
The CI server is the one that should pull it and compile it, test it and then package it.
About your questions: all the commands in the Dockerfile are executed before the image is created, so if your code changes you need to create another image (if you have correctly set up a CI server, you can have it create the image the moment a change is pushed to master).
I'd check this out personally: https://atom.io/packages/php-server
If you want to use docker you can map a volume, but I've never tried this using Docker for Windows; I imagine you would have issues with encoding etc.
The real answer for a Docker solution would be to have the container sync/pull your git repo on start/restart to a local/internal volume. This would require you to commit your code to git obviously before testing with Docker.
I'd probably use a combination of local dev with atom+php-server and then smoke test using a docker image that automatically pulls your updates on restart.
Hi,
A great starting point (and one I still go back to from time to time) is this article on Medium. It's a really good summary of tools and articles that get you started using docker.
https://medium.com/@yoginth/what-is-docker-4494f2fc72e6
When you create the container you can mount a directory (or file) that has to stay persistent, meaning it will not be destroyed when you destroy your container.
1) Let Docker store and manage the volume for you; basically, each time you create a container it will write a folder somewhere in your Docker data folder.
Example: docker run --name MyWordpressSite -v "/var/www/html" wordpress
/var/www/html is the directory inside your container.
2) Point the volume to the exact location on the host.
(Recommended)
Example: docker run --name MyWordpressSite -v "/home/user/mywordpressdata:/var/www/html" wordpress
In this example all the data from your wordpress site is stored on your host under the home folder.
This way it's easier to find the data of your application and to create your backup routines.
Also I recommend learning Docker Compose; a single yml file is way easier to read than a bunch of command-line options, and it's easier to duplicate.
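A rough compose equivalent of the above, just as a sketch (passwords and host paths are placeholders):

version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
    volumes:
      - ./mysql-data:/var/lib/mysql
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - ./mywordpressdata:/var/www/html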
This was my first hit on Google, and browsing over the content it looks like a good starting point for you.
https://www.upcloud.com/support/deploy-wordpress-with-docker-compose/
It will put the HTML and database in the directory wordpress-compose.
Also if you want to "enter" your container just run 'docker exec -ti containername /bin/bash' or /bin/sh (depends on the base image)
To find out the container name run 'docker ps'
Cheers
>my engineering superiors/colleagues require VERY VERY conservative approaches to anything on the server/system side that may increase response latency of our app by any quantity of milliseconds. Yes, it's fucking outrageous.
Do you really mean milliseconds there, or is it a typo?
From http://lwn.net/Articles/239625/, section "Performance", the network namespaces which Docker uses don't have any performance penalty themselves, but you're going to have to go through NAT from the default namespace to get to them, so you have whatever latency is added by using iptables.
The reason for my first question is that I don't know the latency which iptables adds, but the worst case is that it's going to be of the order of microseconds. If there is a round trip of 100ms (milliseconds) involved then an extra few microseconds (thousandths of a millisecond) is going to be hidden in the noise of that - it's too small to worry about.
The better question is, why are you using GUI apps in docker containers? This is literally what Snaps (see thunderbird snap) and Flatpaks (see thunderbird flatpak) were designed for.
I think you're doing things the hard way since you are trying to manually take a snapshot of your existing volume.
If you want to continue with the manual way use this:
You need to have at least two physical volumes ready and assign them to a single volume group. Once that's ready then you can create your logical volume and start taking snapshots.
# create two partitions on /dev/ubuntu-vg
fdisk /dev/ubuntu-vg
# mark them as physical volumes
pvcreate /dev/ubuntu-vg[1-2]
# create volume group
vgcreate volume_group /dev/ubuntu-vg1 /dev/ubuntu-vg2
# create logical volume
lvcreate -L 10G -n backup volume_group
# list logical volumes
lvdisplay
# Create ext4 filesystem
mkfs.ext4 /dev/volume_group/backup
Now you're ready to take snapshots. Create a mount point in your directory for your backup volume. Then create a snapshot of your backup volume as such
lvcreate -s -L 10GB -n backup_snapshot /dev/volume_group/backup
For the merge to take place, you need to unmount, deactivate and then reactivate the volume.
Finally, if you ever need to recover your data, simply delete your backup volume and merge your desired snapshot with a new backup volume. Your data should be restored.
Source: https://linuxconfig.org/create-and-restore-manual-logical-volume-snapshots
Now that that's out of the way, a much easier solution would be to use some kind of service to automatically take snapshots for you. I found snapcraft is pretty comprehensive for Ubuntu, and their guide should be a bit easier.
Link: https://snapcraft.io/docs/snapshots
Hope this helps! Just note that I haven't had the chance to play around with it on my PC so just be careful.
Take a look at Rancher. It comes with an integrated HAProxy load-balancer. There are catalog services for updating DNS and getting Let's Encrypt certificates.
It's really easy to use and maintain. The downside is that it uses more resources for itself than just going with Jwilder nginx-proxy.
On automating running containers, you're asking for Docker Swarm. Docker Swarm is the easiest among the docker container orchestration technologies out there. Docker Swarm on Rancher (http://rancher.com/rancher/) is even better. Docker Swarm and Rancher both help with networking docker containers.
Have you looked at docker compose? It doesn't work cross host, but it gives an idea of how containers can communicate to each other using dockers networking.
For cross host communication you should look at some of the orchestration tools like Rancher or kubernetes.
Rancher is pretty quick to get up and going since you just need UDP ports 500 and 4500 on each host to be accessible for your containers to talk to each other. Not sure about Kubernetes, never set up a cluster from scratch, but you can make Kubernetes clusters with Rancher as well.
I'm on mobile, so excuse my brevity. The Arch Linux image uses "pacman" as its package manager. You'll need to install Python, as it is a prerequisite of speedtest-cli. You can try "pacman -S python" (the Python 3 package on Arch is just "python"). See this for more information about pacman. Note that when you do this inside of the container, it will not persist once the container is recreated (as it is not part of the base image). If this is needed on a more consistent basis, you will need to create your own Dockerfile and make those additions (which are as easy as just referencing this image and the above command essentially, as sketched below).
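Roughly (the FROM line is a placeholder for whichever image you're actually running):

FROM the-image-you-are-using   # placeholder
RUN pacman -Sy --noconfirm python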
Hope this helps!
It's not commonplace yet, but hopefully containerising everything like browsers will be the future of IT security. These folks are making a start. I don't know if they are using Docker - I keep meaning to give their OS a try.
Docker creators disagree with you: https://www.docker.com/blog/containers-are-not-vms/
> Docker is not a virtualization technology
It creates an appearance that it is virtualized, but it is not. It is the same kernel as on the host, running processes in isolation with a mounted filesystem. In simple words - if you run a Debian container on a CentOS host, you are not running the Debian kernel; you are still running the CentOS kernel with a filesystem from the Debian image. This is a very important distinction that not many understand. Isolation and containerization are not virtualization, and if it was called that by someone historically, it was wrong. Humanity did a lot wrong in the past and we are probably doing something wrong today. Don't hold on to the past. I don't know much about pre-LXC implementations, so maybe it was even warranted there. In Linux containers it is not, and there's no debate about that.
You cannot run an arm/v8 image on amd64, but you should be able to build images that work on each architecture relatively easily. Check out this post on the official docker blog
thanks for all of that comment, I agree!
> https://www.docker.com/products/docker-desktop
Yes, but do note, even when you get there, it refuses to tell you the latest version number, requiring (for me at least) downloading it and then either starting the install to figure out what was going to be installed, or right-clicking the installer and checking Properties / Details.
Total crap.
Docker's FAQ tells you how to install Docker CLI and Engine without Docker Desktop:
https://www.docker.com/pricing/faq
> Can I just install the Docker CLI instead of using Docker Desktop?
> If you use a Mac, you can install Docker CLI and Engine inside a virtual machine, using VirtualBox or VMware Fusion for example, which may require purchasing a license for VirtualBox or VMware Fusion. On Windows you could install and run Docker CLI and Engine inside WSL2. If you use a Linux machine you can easily use the Docker CLI and Docker Engine, See the documentation on installing Docker Engine for instructions. Docker Desktop is not yet available for Linux.
Yes, performance loss and not enough benefits in return as you say.
However, I don't know exactly how Windows containers manage graphics; I am not sure if you can benefit from the dedicated GPU and draw the screen properly with the container layer in between.
Take a look at this if you want to learn more about windows containers, but they are not designed for gaming or graphic apps as far as I know:
https://www.docker.com/products/windows-containers
> I really do wish this feature of docker got more attention and use. Maybe since Apple is moving away from x86/amd64 we will see more of it used in releases
Yes I totally agree. My feeling is that the main reason is that corporations really don't care so much as they've got all their applications on AWS / Google Cloud, which run mostly on x86/amd64. This is unfortunate for the tinkerers who like to run things locally on their raspberry pi's. If Apple's strategy sees adoption in the cloud then sure, but as long as it's desktop/laptop only I don't see much changing - people don't typically run docker containers on their laptop.
> I am not sure how feasible this is with cloud hosting the CI pipeline for many without getting expensive
To be honest with you I don't even know all the conditions behind Github CI, but I wouldn't be surprised if they gradually wind down the free plan as more people adopt it. Luckily the project I wanted to build wasn't that resource intensive.
> Could you provide a resource or more detail on the manifest idea?
I found this blogpost talking about it: https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/. They first describe the manual "hard" way using docker manifest, but I couldn't get that to work at all. The easy way is the buildx way, but again with no explanation of what to do if your Dockerfile needs to be modified depending on the platform.
Honestly, what I'm missing dearly in these Dockerfiles is the ability to build in easy logic. It would be convenient if there were IF, ELSEIF, ELSE even FOR etc. so the logic can be done at the top level, instead of in a weird embedded way inside a RUN command.
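The closest thing today is exactly that embedded way: buildx provides a TARGETPLATFORM build argument you can branch on inside a RUN. An untested sketch:

FROM alpine
ARG TARGETPLATFORM
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
      echo "arm64-specific steps here"; \
    else \
      echo "steps for everything else"; \
    fi

built with something like: docker buildx build --platform linux/amd64,linux/arm64 -t myorg/myimage --push .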
Not true. Windows containers exist and are officially supported by Microsoft, allowing you to run Windows programs inside docker (more info). You just need to set Docker for Windows' runtime, which you can do in the settings.
Personally I've never used Windows containers as I'm more of a Linux oriented person, but there's support (possibly beta?) for Kubernetes as well.
As of the end of December, WSL 2 GPU support for Docker is in tech preview, so it looks like it's coming. Until then, you'll probably have to test and train on your Ubuntu machine.
You should still be able to develop on your Windows machine and set the Docker client to use your Ubuntu machine as a remote and run it like that, but haven't done anything like that myself so can't really elaborate.
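If you do try the remote route, a Docker context over SSH is probably the least painful way (host and user are placeholders):

docker context create ubuntu-box --docker "host=ssh://you@ubuntu-machine"
docker context use ubuntu-box
docker run --rm hello-world    # now runs on the Ubuntu machine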
Thank you so much!
As for the Github Actions, there's a better article if you're interested -
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
I just checked MQTT out, and wow, I'll probably make a v2 soon
I did want to use an Alexa initially but setting it up for external prompts wasn't very clear. Alexa's documentation is excellent though, might try that again actually.
Why not just use Docker Desktop? It hides all of that from you and makes it look like Docker is running natively on your Mac or Windows computer (but it is using a Linux VM under the covers which you never see and don't need to be concerned with).
Docker hub, not Docker, is sending you that, and it's really not spam for them to send you information about how a service you signed up for is changing.
Also, did you read the Terms of Service when you signed up?
> 20.4 You agree that Docker may provide you with notices, including those regarding changes to the Terms, by email, regular mail, or postings on the Service. By providing Docker your email address, you consent to our using the email address to send you any notices required by law in lieu of communication by postal mail. You may provide us with legal notices at our postal address set forth above or via email to .
This is a zero-effort suggestion on my part since I haven't looked into it for this purpose, but rootless docker (setup script) might be an option in the future; it's not really ready yet though. Basically the docker process and all its containers can run as an unprivileged user. There was a talk about it at last week's DockerCon. I'm not sure how your DLP works or how your dev workstations are set up, but it could help.
Hi,
https://www.docker.com/what-container explains how containers are abstracted from the OS, and the differences between a VM and a container. The alpine container will have alpine libs, the packages you install, and the app you package with it. Containers share the OS kernel.
Another great resource is https://docs.docker.com/
The git repository, along with editors/IDE and browsers, should live on the host. You will want a docker image built with all the necessary SDKs and CLI dev tools (eg npm or lein) for working in your language, to bring in dependencies and start your application in a development mode. I have a Makefile on the host that starts up the container with a mounted volume and connects to a bash/zsh session. From this CLI you will start your app (located in the mounted volume) in development mode.
If you have an IDE that needs access to the SDK (for intellisense and/or debugging), the way to accomplish this is to install sshfs into the container, then mount the root of the container's filesystem onto your host system and access the SDK that way. I've done this successfully with Java SDKs, but be warned there might be issues with native toolchains when using this from a different OS like macOS.
Also a warning about docker for mac: it's a complete CLUSTERFUCK. It's completely unusable with mounted volumes - honestly 100 times slower than standard disk performance. See this thread describing the issue, which has been outstanding for nearly a year. I know it's crazy, the primary function of DFM doesn't actually work. Wasted days of my time figuring it out. CRAZY that docker could call it 1.0 LOL.
> tell it to run on Windows 10 Pro through the Docker client
The important part there is you're running on Windows 10, so the Docker you're using is not native, it's just Docker running inside a virtual machine.
Windows 2016 server supports Native Docker, which is different. It runs without a VM and only supports Windows containers. Windows containers cannot be run on Linux natively and vice versa, you have to use a Virtual machine as a "shim".
The reason for this is by their nature, containers share the underlying OSes kernel. Linux and Windows have very different kernels and it would be very difficult to translate a Linux system call into a Windows system call or the other way around.
docker-machine supports VMware Fusion: https://docs.docker.com/machine/drivers/vm-fusion/
Also, boot2docker is deprecated; you should install the new Docker Toolbox (https://www.docker.com/toolbox), which includes docker-machine.
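Untested, but per that driver page it should be roughly:

docker-machine create --driver vmwarefusion dev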
I put together a demo of some of 1.9's features here:
https://asciinema.org/a/7qdrtjx2wjdd0v7t3vs45g05o
Recreate/contribute here:
https://github.com/ianmiell/shutit-docker-1_9/blob/master/README.md
or @ianmiell
There are two big things blocking this capability:
1) Apple only exposes GPU acceleration for video through the system’s CoreMedia and CoreVideo APIs
2) Docker on Mac runs in a lightweight Linux VM on the hyperkit hypervisor, and hyperkit doesn’t yet support passthrough of GPUs of any kind
Because Apple has officially announced that they’re moving away from Intel CPUs I wouldn’t expect Channels to ever support hardware video encoding on Mac.
Edit: I was only looking at the docker version, it appears the native macOS app supports hardware acceleration when not running as a system service: https://getchannels.com/dvr-server/#macos
Just be sure you check out https://alternativeto.net/software/musicbee/?platform=linux
For 99% of my music-listening time I don't even see a GUI myself, so I'm happy with Mopidy (TuneIn & Spotify & local mp3s). YMMV.
LXD is more like a full OS environment and you can run Docker containers nested inside it if you want. It's definitely not like a lite version of docker, it's more powerful.
You can try LXD in a browser here.
If you’re new to Docker / containers I would recommend Datadog’s annual report on the subject.
https://www.datadoghq.com/container-report/
If you wanted to do it as a job or understand the industry, the stats are pretty revealing. In particular, see number 11 on their list, where they graph the most-used software inside a container: lots of databases, data messaging brokers (Kafka) and data processing (Elasticsearch).
Yes. Your explanation is correct.
The process may not consume 100% CPU, as the task might finish before that, in which case the CPU will be idle. As in your example, the allocation is 2 CPUs but utilisation can be less if the container does not need any processing to be done. The times when a container is running short of CPU can be found from "cpu.stat" under the "/sys/fs/cgroup/" dir of the docker host. A good explanation is available in the Datadog blogs: https://www.datadoghq.com/blog/how-to-collect-docker-metrics/ . I was not able to translate those numbers to CPU shares though.
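For example, with the cgroup v1 layout (the exact path depends on your cgroup version and driver, and <container-id> is a placeholder):

cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.stat
# nr_periods / nr_throttled / throttled_time show how often the container hit its CPU limit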
that's awesome! i don't know how I've never seen lazygit before today as well, I think it's because I spend too much time on npm instead lol. the Internet is full of wonders, amazing work there man! i love that you support compose services as well out of the box, this will definitely help my team's workflow. is it possible to make the docker host configurable? it'll be great if we can make lazydocker connect to different docker machines for monitoring.
do you have any plans porting this to the k8s ecosystem? I can see the k8s devs celebrating if you do, k8s is a beast to monitor. this can save developers a lot of time trying to figure out the k8s commands to do the same thing, when all they need is the same info that lazydocker provides. the UI can remain largely the same, you can replace Docker Images with Deployments instead (as k8s devs usually host their images in a dedicated registry that may or may not be in the k8s host, deployments are more useful information i think). plus lazykube is a good name!
Replying to myself because I did not fully answer your question.
Kubernetes itself consists of several applications, all delivered as statically compiled golang binaries. There is the control-plane installed somewhere and often hidden by public cloud providers (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) and the data-plane (kubelet, kube-proxy, at the least) which are installed on your worker nodes.
You are free to choose the Linux distribution and container runtime on which you install these pieces of software. Kubernetes' requirements to run are
Start small, get your own niche application running.
If the command line is too overwhelming... Install the dashboard and click around - https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
You'll probably need some form of secrets management. There's Docker Swarm Secrets or Kubernetes Secrets as two examples.
With that, you can copy the SSH keys to your machine and as part of the run command use scp or rsync to pull the files down/in sync.
All the magic happens with the new deploy tool kubeadm :-)
The process goes like this:
Spin up a machine with Ubuntu Server 16.04
Apply latest OS updates (apt-get update, apt-get upgrade, reboot)
Install the kubeadm repo
Install the kubeadm packages
run "kubeadm init" to bootstrap the master node
Install network fabric (CNI plugin, I prefer WeaveNet)
Done! "kubectl get nodes" and "kubectl get svc,pods --all-namespaces" should show all components up and running.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/