You sound like you are using the wrong distro.
What about a virtual machine with a boring Debian "Stable" or "Testing"? Or some kind of Linux Container?
So... I don't really want to start a debate, everyone is entitled to an opinion, but maybe I can ease your mind on a couple of points:
LXD is developed by core LXC developers, who have been working at Canonical for a long time, and actually just extends LXC to be used like a hypervisor - essentially it allows you to drop LXC into the place of KVM (or some other virtualisation technology) in something that requires a network accessible API (key example: OpenStack). See here for more
Bzr actually pre-dates git, and Canonical has a wealth of tooling around it. Some Canonical projects now use git. We're certainly not anti-git.
Juju? It's not a configuration management tool per se, indeed you can use other configuration management solutions within Juju charms, but that isn't quite the point. A quote from the juju website:
> Juju is an application and service modelling tool that enables you to quickly model, configure, deploy and manage applications in the cloud with only a few commands. Use it to deploy hundreds of pre-configured services, OpenStack or your own code to any public or private cloud.
... again, Juju has existed for several years now, and occupied a space that no other tool did when it was created.
Not really directly answering you, but I would try something like LXC to create a container with another distro, either NixOS or Arch, and use RDP or X redirection to have GUI apps running natively. Those kinds of containers are like VMs but use a lot fewer resources.
Oh man, some of the issues they point out there are blown way out of proportion. Like the "systemd-nspawn can patch at will any kind of file in a container". No shit, it's a container. Every decent container method does this because without it containers suck (like see https://linuxcontainers.org/lxcfs/introduction/ and https://insights.ubuntu.com/2017/06/15/custom-user-mappings-in-lxd-containers/ for examples). They follow up with "your hosting provider has means and tools to spy, modify, delete any kind of content you store there". No fucking shit, it's a container. Last I checked, root is still root.
Like I agree that the stewardship of the systemd project hasn't been the best and it's super frustrating when they roll shit out that breaks existing software, but overall it's a nice piece of software. My life as a sysadmin is way easier with systemd than it ever was with sysVinit.
That answer is a bit misleading. LXC is a container technology that is primarily supported on Ubuntu, but it can work on other Linux distros. LXD is a container hypervisor meant to be used with LXC.
LXC support on other platforms is wider than LXD's, but neither is exclusive. Both projects are mainly supported by Canonical. Here's the project website:
More information about Linux containers: https://linuxcontainers.org/
Containers are showing up more and more in interesting computing applications these days.
For clarification, each app doesn't have its own container. The entire Android environment gets its own container.
I came here too!
Inside unshare --mount --map-root-user (my favourite thing!) you can chroot without root:

    faux% unshare --mount --map-root-user bash
    root:~/.local/share/lxc# chroot wily/rootfs /bin/bash
    root@astoria:/#
It is, however, a pain to prepare the chroot; my preference is to use lxc, which can easily be set up to work as a limited user without any security issues (unlike, say, Docker, which gives any user root-equivalent control of the machine):
    % lxc-create -n wily -t download -- -d ubuntu -r wily -a amd64
    ...
    % lxc-start -n wily
    ...
    % lxc-attach -n wily
    root#
The project is sponsored by Canonical, so I don't see any reason why it shouldn't be trustworthy. Canonical makes Ubuntu.
Ubuntu.com posted an article about the security of Chrome OS's crostini setup.
https://ubuntu.com/blog/using-lxd-on-your-chromebook
In short, LXD is configured in Chrome OS to be extremely secure in two ways.
The first is that it runs inside a read-only VM. The second is that Crostini will only run unprivileged containers. There is persistent storage, but it's accessed by the container inside the read-only VM.
AFAIK unprivileged containers can't actually touch the system running the container at all, and Chrome OS has been configured to only allow unprivileged containers. Privileged containers are created by root and are run as root, but this option is way less secure.
https://linuxcontainers.org/lxc/getting-started/
Provided you use a stock container from LXD I don't see why it would be insecure at all, given you would need both code that escapes the container, and code that escapes the VM the container is in.
edit: I forgot to mention the part about unprivileged containers.
KVM and LXD. Give some time to learning LXD, because it is going to have really nice integration with Juju and OpenStack.
Some nice features of LXD:
It uses images instead of templates as LXC does.
Unprivileged containers by default (USERNS)!!! <---- this is super important.
Live migration.
A REST API to manage the containers and the server.
Integration with OpenStack.
The mailing list/IRC are really helpful.
It consumes fewer resources than KVM to deploy Linux servers.
Support for different storage backends.
Then give the project's readme a look; it has instructions for a traditional OS-based deployment.
It's refreshing to find a considered answer to the 'why no docker' question. I can definitely appreciate the stance there should be other competing technologies; in that regard there is LXC, and I'd encourage you to give it a look.
> I think some Oracle systems like having access to raw block storage as well - which I don't know is possible in a container situation...

Certainly not without a lot of messing around.
See the lxc-create(1) -B option, which selects the container's backing store; the man page lists dir, lvm, loop, btrfs, zfs, and rbd, among others.
LXC container security, and the difference between privileged and unprivileged LXC containers. Today, LXC containers default to "unprivileged".
This Docker web page indicates that Docker security is similar to LXC's.
But there is one important caveat to be aware of:
> Docker daemon attack surface
>
> Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.
>
> First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory is the / directory on your host; and the container can alter your host filesystem without any restriction.
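To make that concrete, this is the kind of invocation the warning is about (a hedged illustration, not taken from the quoted page):

    docker run -it -v /:/host ubuntu bash
    # inside the container, /host is the host's root filesystem,
    # writable by the container's root user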
Look into LXD. It lets you run what appears to be multiple full Ubuntu installs inside containers on the same machine. You could have different incompatible PPAs installed inside each container.
Taking the technology a bit further, there is no need to completely separate each container either. They could share the same network namespace as the host, for example, as solving PPA incompatibility doesn't require separate network namespaces. That way your machine and nominated containers would share the same IP address.
A dyno is a term for a remotely hosted virtual machine instance. A lot of developers work this way these days (a lightweight desktop, spinning up instances for compiling and testing). I think "dyno" is a Heroku term. A more generic term is "containers."
If you haven't already, make sure you look through the official LXD support site
https://linuxcontainers.org/lxd/introduction/
Especially the User forum where the LXD developers answer questions daily
https://discuss.linuxcontainers.org/
If you search previous questions on the forum you may find some of yours already answered
If you chroot from userspace and try to fire up init, you'll have the host init system fighting with the gentoo one - usually it goes rather poorly.
Also, init usually expects to be PID 1, and you'd need to set up PID namespaces rather than just chroot to achieve that - which is precisely what systems like LXC do.
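A quick illustration of the PID namespace part (a hedged sketch, just to show the mechanism, not a full container setup):

    $ sudo unshare --pid --fork --mount-proc bash -c 'echo "I am PID $$"'
    I am PID 1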
You could use the kernel on your USB and tell it to mount root on the hdd if you wanted to - but why?
The official "getting started" documentation (https://linuxcontainers.org/lxd/getting-started-cli/) is a reasonable starting point :p. The Documentation page on that site is a strong reference, and Simos's blog is great (https://blog.simos.info/).
The default LXD container today is unprivileged.
Root in an unprivileged container is not root in the host.
As /r/LXD is not a support forum, you might want to ask your question on the LXD forum, where the devs and experienced users answer questions every day.
https://discuss.linuxcontainers.org/
There is a good explanation of the difference between Privileged and Unprivileged containers found here:
Right, you should've specified you needed a router in your original post.
You could use lxc containers or virtual machines on (for example) an ubuntu host https://linuxcontainers.org/lxc/getting-started/
You would need to create a container or VM for pfSense, but you should run Nextcloud on the host, since you don't have much in the way of compute resources.
This is a technology I'm really eager to have on Debian. Maybe Docker is the best thing since sliced bread and I'm stupid, but I just can't cope with it; too much hype involved.
EDIT: Try LXD here, really nice https://linuxcontainers.org/lxd/try-it/
> Also, I am a hobbyist web dev, what other alternatives besides Ubuntu are good for this? I would enjoy installing Solus but I am not sure they have all the tools available that I would want to use.
If you use Solus, use LXC (or LXD, if they have the package) for containers in which you install your dev tools, testing webserver, etc. I recommend Ubuntu and Debian for the containers.
Solus doesn't have enough packages at the moment.
> Operating System - Ubuntu
I prefer Debian, which Ubuntu is based on; but the latter adds some complexity.
The Stable distribution is a very reliable way to maintain an internet-facing server (I've maintained 4 internet-facing servers without any downtime for 5 years now, and I'm a self-taught sysadmin).
>Server - nginx
Nice, but Apache has a lot of modules, such as mod_perl, which makes for very scalable apps if you're into Perl. It depends on what language your devs use.
>SQL Backend - Mysql
I strongly suggest PostgreSQL instead: fantastic to work with, outstanding documentation, and amazing community support (as well as professional support).
Regarding sysadmin rights for devs, maybe lxc?
LXC images start at around 2 MB (for a very bare-bones Alpine).
There are tools available to help with building custom images.
I was meant to put LXC/LXD, but they're Linux containers. You can learn more here: https://linuxcontainers.org/lxd/introduction/
I write desktop applications for clients in a niche industry. I've been using Sciter and C for a while, but recently switched over to Rust to do the same thing. I also do web development and devops.
I don't use an IDE, but instead a text editor called NeoVim, and the Telescope plugin. I find it much better than the traditional way of working in an IDE.
The specs are fine for what I need to do. I have a powerful desktop machine that I do most things on.
You should really be using ZFS or BTRFS. For testing purposes I'd use a loop file rather than the directory backend.
Those are images you can launch in LXD. You still need LXD (or LXC) to run them.
How to install it is documented at https://linuxcontainers.org/lxd/getting-started-cli/, but it only mentions the snap package (or installing from source) for Ubuntu.
It is not me but Kabouik who's explaining; the whole long video about it is here: https://www.youtube.com/watch?v=-dgD5jci8Dk
An LXC container is basically like a virtual machine, but not completely. They allow you to install a Linux system (as in all the programs and configuration that together are a Linux distribution) onto another Linux system. For example you could install Ubuntu inside SailfishOS (which for all intents and purposes here is just another Linux distribution). What makes it different from an actual virtual machine is that the kernel is still shared across both systems, so no hardware has to be emulated for it and performance will be close to the host OS.
For a longer generic explanation about LXC itself you can look here: https://linuxcontainers.org/lxc/introduction/
As for how to get it going on SailfishOS I have no idea myself, but you could ask Kabouik in the comments of his video, or find us on discord here https://discord.com/invite/k4NtAGy
I think what you are looking for is lxc (Linux Containers) or the newer lxd (see https://linuxcontainers.org/). They are basically a way to run a complete userspace (including network and process namespacing) on your running kernel without virtualization. These Linux containers are much more lightweight than using Docker or something alike and a good introduction should you wish to go that way.
Unprivileged containers remap container UIDs as they are seen by the host.
Example: Root user on the host is UID 0.
Root user on an unprivileged container is UID 0 inside the container, but seen as UID 100000 on the host.
The remapping in step 3 allows you to tell the host to use a different UID. You could remap UID 0 on the container to UID 0 on the host (unsafe and equivalent to a privileged container), or better map UID 109 on the container to UID 109 on the host, and ensure UID 109 on the host has r/w access to your nfs mount points.
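For instance, with LXD that per-UID passthrough looks something like this (a sketch; "mycontainer" and UID 109 are placeholders, and the host has to delegate the ID in /etc/subuid and /etc/subgid first):

    # /etc/subuid and /etc/subgid: allow root's LXD to map host ID 109
    root:109:1
    # map container UID/GID 109 to host UID/GID 109
    lxc config set mycontainer raw.idmap "both 109 109"
    lxc restart mycontainer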
Hopefully that makes some kind of sense. It's hard to wrap your head around at first.
Unprivileged is important to securing your host (emphasis added).
"LXC containers can be of two kinds:
The former can be thought as old-style containers, they're not safe at all and should only be used in environments where unprivileged containers aren't available and where you would trust your container's user with root access to the host.
The latter has been introduced back in LXC 1.0 (February 2014) and requires a reasonably recent kernel (3.13 or higher). The upside being that we do consider those containers to be root-safe and so, as long as you keep on top of kernel security issues, those containers are safe.
As privileged containers are considered unsafe, we typically will not consider new container escape exploits to be security issues worthy of a CVE and quick fix. We will however try to mitigate those issues so that accidental damage to the host is prevented."
LXD is more like a full OS environment and you can run Docker containers nested inside it if you want. It's definitely not like a lite version of docker, it's more powerful.
You can try LXD in a browser here.
It sounds like you should try LXD. It is lightweight, fast, and distro-agnostic, and you can now use it to run full VMs too. Here are the docs. The lead developer is on Twitter.
1st: LXD is just built on top of LXC. LXD official introduction
3rd: I just know that the server provider webdock.io uses LXD containers for hosting. So when you buy a server from them, it is an LXD container.
No. It's OS level virtualisation (no speed penalty). It is essentially chroot + nicer tools + extra security to protect your host system from the guest OS. i.e.: lxc launch ubuntu:16.04 packettracer will create a directory called packettracer in /var/lib/lxd/containers where it will store the filesystem of ubuntu 16.04. That ubuntu 16.04 will use the already running kernel of your ubuntu 20.04. Oh crap! I've already become your free lxd support person.
Anyway, please don't ask me any more questions about this (this is not the place for that), I won't answer them. There are tutorials - https://linuxcontainers.org/lxd/getting-started-cli/
Try LXD; these containers tend to be more like VMs, unlike Docker containers. As it's said, "It offers a user experience similar to virtual machines but using Linux containers instead." At least I use them regularly for similar purposes. And there is a worthy course on LinuxAcademy, and it's free.
The man page has it very clearly explained:
> Signal the end of options and disables further option processing. Any arguments after the -- are treated as arguments to command.
>
> This option is useful when you want to specify options to command and don't want lxc-execute to interpret them.
If you want to pass options to your command (bash in your example), you have to tell lxc to stop parsing the command line and pass the rest of the line as-is. For example: lxc exec penguin -- bash -c 'echo FOO'. Without the double dash, lxc-execute would try to parse the -c option itself.
> The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.
Source (and more in-depth explanation)
Wikipedia article is also pretty nice.
It's basically like running another copy of Linux in a virtual machine, but with much lower resource usage and overhead. The reasons you'd want to do that are pretty similar to why you'd use VMs: security, isolating processes from each other, and so on. Perhaps even just to try out how a new distribution feels, or to test deployment processes.
There are also some limitations to this approach compared to a VM, such as every container sharing the host's kernel. Also, if there is a security hole in the container software (such as LXD) or the host kernel, then processes may be able to escape the bounds of the container. This is not impossible with VMs either, but it is generally more difficult.
The main site for LXD (and some associated projects like LXC) has more information: https://linuxcontainers.org/
They also have a "Try it" feature where you can play with some of the features.
You should have a look at LXD/LXC. I run all my services in separate LXD containers, and it does most of the things you want, I think, though I don't know about storing configuration files in git.
Rollback and migration especially are extremely easy. It uses ZFS as the storage backend, and you can take regular snapshots and easily roll them back if anything goes wrong. Migration to a new host is also easy, as you can just send the full container to the new host over the network, or manually copy the ZFS image + db to the new host.
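For a rough idea of what that looks like in practice (container and remote names are placeholders):

    lxc snapshot web before-upgrade     # take a snapshot
    lxc restore web before-upgrade      # roll back if something goes wrong
    lxc remote add newhost 192.0.2.10   # register the target LXD server
    lxc copy web newhost:web            # send the container to the new host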
I went to the https://linuxcontainers.org/lxd website and tried LXD. It feels very similar to Docker, but seems to be more about being a VM than a Docker container; i.e. a Docker container is supposed to be a single application, whereas an LXD container is supposed to be a lighter VM that runs all your applications in a logical, isolated unit. Pretty much a VM with less overhead.
I suppose LXC is similar to LXD. Could this be the answer? LXD containers as VMs without committing huge amounts of RAM? LXD can run Docker containers as well, it seems, so that could be a good interim solution while I eventually convert some Docker things to LXD.
Well, I'd recommend 18.04 just for the stable Linux containers. Please see https://linuxcontainers.org/lxd/introduction/
My 18.04 desktop and servers run without any problems so far round the clock.
I'd ask about this on the LXD forum, as the developers and others answer questions there daily:
https://linuxcontainers.org/#navigation
But include your system info, such as kernel version, snapd version, LXD version, etc.
On my Ubuntu 18.04 systems I use the snap and everything works, and this is with the newly released LXD v3.0.0, but I'm not running any Docker in LXD at the moment.
The author makes mistakes in a lot of his points: he compares the Linux kernel to a fully functional system. Some of his points don't even apply to every distro.
> 1 - Separation between base and ports
Depends on the distro, not relevant to the kernel. See, for instance, GuixSD or NixOS.
>2 - Good documentation and consistency
Sooo, the kernel has its own documentation? Like every piece of software on your computer. Most distros have their own docs, such as the Arch Wiki.
>3 - Better portable kernel configuration
Opinionated.
>4 - Advanced security
Security is not limited to Single User Mode and SELinux, and there are alternatives to the latter, such as AppArmor.
>5 - Extensive filesystems
Can’t tell for UFS vs ext3, but I agree about ZFS.
>6 - Fine-grained update control
This does not apply to every distro. For instance, on Gentoo, I can update only one package, I can prevent a specific version of a package from being installed on my system, and I can roll back an upgrade. I can make my own subcomponents and upgrade only one of them. In fact, when you're upgrading a Gentoo box, you're actually upgrading the "@world" subcomponent. I'm sure most of this also applies to GuixSD and NixOS.
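For illustration, the Portage knobs that give you that control (a sketch; the package name and version are just examples):

    # update a single package
    emerge --oneshot --update app-editors/vim
    # block one specific version via /etc/portage/package.mask
    =app-editors/vim-9.0.1000
    # upgrade the @world set
    emerge --update --deep @world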
>7 - Backwards compatibility
True for user-space, not for the kernel.
>8 - Better (easier) customization
Depends on your distro.
> 9 - Jails
See LXC.
>10 - The community
Opinionated.
You are trying to say all of these are platform-agnostic. Just by saying namespaces and cgroups are not Linux and containers aren't Linux, you have proven you are not worth my time to reply to. Take care. I feel bad for your ignorance.
Namespaces: http://man7.org/linux/man-pages/man7/namespaces.7.html
Containers: https://linuxcontainers.org/
Just do yourself a favor and move on.
Yes. That's how I do it. I used to have a NAS VM and store all my backups and data in a virtual disk image with an allocated size of 1.5 TB. So even if I only used 200 GB, it used 1.5 TB on my host, which was annoying. With LXC I now have a NAS container which does the same thing as the VM at a fraction of the overhead (e.g. only 20 MB of RAM usage instead of 200). I store all my data in /nas on my host and just set the config for the NAS container to "mount" /nas as /nas inside the container. So you basically pass a directory through. You still add it as a "disk", but it is not an image, only a directory.
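If this is Proxmox (the "add it as a disk" workflow suggests it), that pass-through is one line in the container's config; with plain LXC it's a bind mount entry (both illustrative; the container ID and paths are placeholders):

    # Proxmox: /etc/pve/lxc/100.conf
    mp0: /nas,mp=/nas
    # plain LXC equivalent in the container config:
    # lxc.mount.entry = /nas nas none bind,create=dir 0 0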
As far as I understand, LXD (which is based on LXC but better) is currently only on Ubuntu 16.04. Although it seems you can use backports (https://linuxcontainers.org/lxd/getting-started-cli/)
I would check out lxd: https://linuxcontainers.org/lxd/
This gives you container density and performance but with the same semantics as VMs, so depending on your workload you can usually pack way more machines onto an instance than with KVM.
Disclosure: I work at Canonical but not on LXD.
Good questions. I don't run separate VMs, I run containers. LXC. Much more lightweight, easy to roll up. I try to keep all my services separate, so that if one needs an update, or crashes/goes down, it doesn't affect any of my other services. Also, I have a few network VLANs for separation as well. For example, my VoIP phones are on one VLAN that talks to my PBX, and nothing else can talk to it, except the trunk port that leaves the outbound network. All my security cameras and NVR are on one VLAN, so nothing can talk to those/interfere. All personal devices on the LAN are separate, all WLAN separate, etc. etc. I just don't like the idea of automation things/IoT being openly accessible to anyone that manages to get on my network, or from the internet, so I restrict and block almost everything on that network except what is needed to run. I can't restrict some Z-Wave stuff or Bluetooth things, so I am trying to learn more about that security.
I use openHAB for HA, with some other vendor specific products as needed. Rules on my firewall deny/allow access based on source and destination. So for example, it will allow my cameras to talk to the NVR, and openHAB (and vice versa), but anything else, even if its ON that subnet/VLAN, is blocked. Again, just for a bit of extra security.
I do not use blue iris, so I can't offer anything for that. For backups, I use crashplan which backs up almost everything to on site storage, including the security videos. Nothing leaves my on site, so if it all goes up in flames... it's all gone. I don't like the cloud though, so that can be considered a flaw in my Backup plan.
Proxmox is the hypervisor (kind of like Hyper-V or VMware) that sits on the hardware and allows me to use the containers and virtualization (KVM).
Hmm, I think the article is kind of wrong then. LXD doesn't implement an alternative to Docker; it manages LXC containers, which are at the Linux kernel level.
https://linuxcontainers.org/lxd/introduction/

> LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
>
> It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
Linux is powerful for all kinds of developers. Part of my work involves consulting for non-developer professionals (scientists and others) on productivity and workflow design for their projects. Check out LXC (available in the Ubuntu Software Center) for setting up extremely lightweight VMs (basically glorified chroots, no HAL) for easy compartmentalization of prerequisite libraries. In your case this allows you to test, for instance, multiple server configs at the same time (different Apache versions, extra modules, etc.), or to segregate the development tree from the test server for your website project.
It also lets you separate your productive environment from the environment necessary for a project to run. I typically like to work in the most recent release of Ubuntu for all my day to day work and then set up LXC containers for every major project with the exact libraries needed to build/test/etc.
This use case is better suited to something like LXD; it's a mix of Docker and LXC that works pretty well for this.
https://linuxcontainers.org/lxd/introduction/
It's made by Canonical; I always wondered why it never got more popular.
Perhaps because it's made primarily for Ubuntu Server, and you have to use snap to install it on other distros. I don't want a snap if I'm already using AppImage.
To clarify, LXC is the userspace interface to the Linux kernel's container features. The name literally comes from "Linux Containers". LXC also provides its own set of commands to manage containers, which all start with "lxc-", e.g. lxc-ls to list containers.
LXD is a toolkit that builds on top of LXC to provide more advanced management capabilities. It provides its own set of commands through the "lxc" executable (this is the confusing part), e.g. lxc list returns a list of all containers. In newer versions, LXD can also manage QEMU virtual machines.
PCT is another toolkit, included with Proxmox VE (you can probably guess what the name stands for). It also builds on top of LXC and provides additional Proxmox-specific functionality through the "pct" executable. The command to list containers is pct list.
So, three different sets of commands that all work with the same underlying technology. On Proxmox, you can use both the LXC and PCT commands; to get a shell in a container, you can use either pct enter or lxc-attach, and so on. In fact, if you look at the source code for pct (which is just a big Perl script), you can see the former uses the latter internally, and the same is true for many other pct commands as well.
I'm saying you need to know how stuff works in order to avoid breaking it.
Here's someone explaining potential issues better than I could.
You could look into Nix, it is trying to achieve what you're looking for. Not sure if you can get away with just the package manager or if you need the full OS.
For LXC you should find plenty easy to follow guides via Google
Well, I agree that it is hard to find good information about that, but lxc/lxd is somewhat niche, sadly.
Did you set up the lxc remote address on the host that you are using it from? (lxc remote add)
The ansible host will need to have lxd installed as well, but I understand that this seems to be the case for you.
Maybe the way you define the container is wrong and you need to change something there, I haven't been using it lately, so I can't check. I would look into the inventory plugin if I were you to see if the definition for the host needs to be different.
https://docs.ansible.com/ansible/latest/collections/community/general/lxd_inventory.html
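For reference, a minimal inventory file for that plugin looks roughly like this (hedged; the file name must end in lxd.yml, and the URL here is an assumed remote LXD listening over HTTPS):

    # my.lxd.yml
    plugin: community.general.lxd
    url: https://my-lxd-host:8443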
I am not certain what to make of your error message.
He is mounting a CIFS share from another system, like a NAS.
His Proxmox server is 10.0.1.10 (from the URL in the browser); however, the 10.0.1.7 address that he enters for "Server" in the Add CIFS dialog is another system on his LAN that has a CIFS share created and available.
This is adding storage that Proxmox will use. At 6:17 he selects...
For content that will be stored on that storage. You'll learn that Proxmox organizes storage by content. This confuses a lot of people who are new to Proxmox. Proxmox is mounting these for use as storage, not sharing them. He probably adds to the confusion by naming the share "PROXMOX". The share name is irrelevant - it could be "ORANGE".
By the way, he also says the containers are Docker containers, which is incorrect. They are Linux Containers (LXC), which are entirely different from Docker containers.
>then specify zp001/lxd when using lxd init
Thanks, I was going to ask that next. Do you know if that sets the zfs.pool_name key, or something else? That's the only key that seems relevant to this setting in the docs, unless I'm missing something.
If you don't trust linuxcontainers.org, then you might not want to use crostini at all - they're the people who wrote and maintain LXC and LXD which are key parts of the Crostini stack :)
More seriously, I'm not sure of the process you're imagining to happen for someone to swap out your container's image surreptitiously - do you use a device where untrusted people regularly have unmonitored physical and admin(/owner)-level access?
If you just want to know your environment hasn't been changed, look at an IDS like Tripwire or AIDE, initialised when the container is in a known state. This will still not fully protect you from the sophisticated container/VM-level attacks you're imagining, but unless your usage is really weird, it'll give you a much more meaningful indicator of whether your environment is secure than trying to figure out who originally supplied your image while ignoring the thousands of changes that might have compromised it since.
My personal opinion is that it might be fun to set an IDS up (and then forget about it until it triggers), but it'll add very little real security. The imagined sneakily-change-the-image attacks are bordering on state-sponsored-hacking scale, and even if someone did want your details, there are far, far easier vectors they'd look at first. Spend your effort on what's likely first.
Stephane Graber (LXD Project Lead) has a great Youtube video explaining LXD "system" container, file systems, Security, different Distro container images etc.
The video below also uses **the online LXD "Try It" system, so you can learn/experiment with LXD** without installing anything.

**5 years of providing root shells to strangers on the internet - Stephane Graber**
I guess I do need to install something additional:
    [~] # sudo ctr --namespace moby c ls
    sudo: ctr: command not found
Note that I am running Docker on a QNAP NAS through Container Station that: "integrates LXC (Linux Container) and Docker lightweight virtualization technologies, allowing you to operate multiple isolated Linux® systems on a QNAP NAS as well as download thousands of apps from all over the world."
True, I was more thinking along Docker and K8S.
Using LXC/LXD, how would you run a Python or Node.js program in a container? I have not found a practical guide. https://linuxcontainers.org/lxc/getting-started/ is definitely too low-level, with way too many steps to run a single program.
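For what it's worth, the short version with LXD looks something like this (a sketch; image, container, and file names are placeholders):

    lxc launch ubuntu:22.04 nodebox             # create and start a container
    lxc exec nodebox -- apt-get update
    lxc exec nodebox -- apt-get install -y nodejs
    lxc file push app.js nodebox/root/app.js    # copy the program in
    lxc exec nodebox -- node /root/app.js       # run it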
Very good post. Thank you.
The only thing I'd add is that it would probably be a good idea to mention Linux containers as well such as LXC/LXD. Their use is on the rise.
LXC is Linux Containers; it's basically a Linux VM, in my case running Ubuntu 18.04. For the most part it's indistinguishable from any other VM running Ubuntu. I'm guessing I could install your utility as if it were 'bare metal'; I was just asking in case it supported LXC/LXD directly from the host server.
If I get some time this week I'll just try a 'bare metal' install and get back to you with the results.
Assuming linux, I think you could put wireguard + a lightweight proxy server in a network namespace, and create a veth tunnel between the global namespace and the wireguard one. You could then configure your browser to use the proxy via the global side of the veth tunnel.
I do something like this with OpenVPN and tinyproxy, scripted using a few tools like unshare (for the namespaces) and lxc-user-nic (so I don't have to be root to set up the tunnel). I expect I'll migrate to WireGuard eventually.
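A rough sketch of the namespace + veth plumbing (not my exact script; names and addresses are made up):

    sudo ip netns add vpn
    sudo ip link add veth0 type veth peer name veth1
    sudo ip link set veth1 netns vpn
    sudo ip addr add 10.200.0.1/24 dev veth0
    sudo ip link set veth0 up
    sudo ip netns exec vpn ip addr add 10.200.0.2/24 dev veth1
    sudo ip netns exec vpn ip link set veth1 up
    sudo ip netns exec vpn ip link set lo up
    # bring up the VPN and a proxy inside the "vpn" namespace, then point the
    # browser at the proxy via 10.200.0.2 from the global namespace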
Besides the fact that Docker doesn't make anything run worse (stuff is still executed on the same kernel, in the same context, just policed into its own "environment"; and even that isn't remotely true, but for lack of a better term let's call it that), this isn't even remotely close to what Docker does.
I haven't used LXD in a while, but at some point I used to be very active in the mailing list.
To answer #2: https://linuxcontainers.org/lxd/docs/master/networks
In reality you can set up LXD networking in different modes, but bridge is the most common one.
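As a quick hedged example of the bridged mode (the network name, subnet, and container name are placeholders):

    lxc network create lxdbr0 ipv4.address=10.10.10.1/24 ipv4.nat=true
    lxc network attach lxdbr0 mycontainer eth0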
With Ubuntu 18.04, you have native LXC (with the higher-level LXD abstraction) available to you. This is what I use for all of my VMs and have been using for years.
Out of curiosity, I looked up what is required by a container running OpenVPN client. Maybe it is useful for you too.
A quick search led me to a Docker-based OpenVPN client implementation which spells out the devices and capabilities needed by the container. LXC has similar functionality for passing specific devices through (lxc-device) and for restricting container capabilities.
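On the LXC side, the device pass-through looks roughly like this (a sketch; the container name is a placeholder, and the config lines assume the TUN device at major 10, minor 200):

    # add the TUN device to a running container
    lxc-device -n myvpn add /dev/net/tun
    # or persistently, in the container's config:
    # lxc.cgroup.devices.allow = c 10:200 rwm
    # lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file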
Proxmox natively supports Linux containers (LXC) rather than Docker containers.
You could glom Docker onto Proxmox and use Portainer if you wanted; however, if you are looking for a KVM hypervisor and Docker container manager integrated into a single web-based UI, then take a look at Cockpit. I believe it is sponsored by Red Hat, but appears to run on all common distros. See... RHEL 8 Beta - Cockpit Web Based Management Interface Overview. It looks fresh!
This is interesting. cgmanager is a red herring; it has nothing to do with the malware other than being used by it.
https://linuxcontainers.org/cgmanager/manpages/man8/cgmanager.8.html
The malware is creating processes in a user namespace while hiding behind standard system process names, from packages you do not have installed, that would not seem out of place at first glance.
So how did you get it? You deleted a bunch of files out of /tmp, but any chance you saw the date they started showing up? You could use date correlation to find files that appeared on your system at the same time this started.
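Something along these lines can help with that date correlation (hedged; adjust the window to when the symptoms started):

    # list files modified within a suspect window, staying on this filesystem
    find / -xdev -newermt "2019-03-01" ! -newermt "2019-03-08" -ls 2>/dev/null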
It is very likely that a flat config file is being referenced to create these processes. When the log files you posted in /tmp show up as you start X, what is in them? Anything pointing to a start line before it goes into its workload?
If you can find that you can use lsof to find what is locking it. I have a few ideas if you could post the log files at all.
If you are curious and want to “figure it out”, you can disconnect the internet and focus on what is spawning these processes. Find that then you can move onto whether you have a hidden user.
If you just want to get it back online, nuke it and reinstall.
The reason you can't mount an SMB or NFS share is probably because you are running an unprivileged container. See https://linuxcontainers.org/lxc/getting-started/ for more info on how to run a privileged container instead. Beware that this lowers security, since the root account in the privileged container is mapped to the root account in the host OS.
General info, getting started: https://linuxcontainers.org/lxd/getting-started-cli/
Advanced guides, e.g. for running graphical apps with sound: https://blog.simos.info/
Step by step guide to install DNS adblocker Pi-hole in an LXD container: https://m-svo.github.io
LXC containers by default run unprivileged, meaning the UID/GID of root in the container is not the same as the UID/GID of root on the host. See the following
The code grading system at my university is using Linux Containers (LXC) to run the code that students submitted. Each submission runs (and compiles in case of other languages) in its own container, which is copied from a proto-container template.
I figured as much, but the first Google result for privileged vs unprivileged LXC containers was pretty fear-inducing:
https://linuxcontainers.org/lxc/security/
>LXC containers can be of two kinds:
>
>Privileged containers
>
>Unprivileged containers
>
>The former can be thought as old-style containers, they're not safe at all and should only be used in environments where unprivileged containers aren't available and where you would trust your container's user with root access to the host.
I don't have a specific threat I'm concerned with, just the fact that the container will be open to the internet, and I assumed it was best practice to be as secure and up to date as possible.
Hi
I'd also recommend looking into ZFS. It's a fantastic filesystem with lots of cool features.
Thank you! We were learning as we went along and the first time we made the website it was horrible haha so we ended up rewriting it. Glad you're finding it easy to use!
And yes, the code execution is done inside an unprivileged Linux container, which took us a while to figure out. We think it's pretty secure, but people are definitely free to try and break it haha.
Take a look at LXD. It works great, is well documented, and is focused on "system" containers, meaning each container is like a VM in a sense. With LXD you can launch containers that are Debian, Fedora, CentOS, Ubuntu, Alpine, etc. As with Docker, they share the host's kernel, so they are very lightweight compared to a VM with all the hardware virtualization.
There are also a lot of example LXD-oriented applications on /r/LXD.
If you are familiar with installing & configuring a linux system then LXD containers will just seem like a continuation of that knowledge for the most part.
https://linuxcontainers.org/lxd/getting-started-cli/
For info on LXD
https://linuxcontainers.org/lxd/getting-started-cli/
Also... on that same site you can ask support-type questions in the Discuss forum:
https://linuxcontainers.org/lxd/getting-started-cli/#navigation
That works (I did it), but why 17.10? It's EOL.
Also the LXD/LXC Forum is a great place to ask questions as the Devs answer questions daily there as well as other experienced users.
A quick Google search uncovered this good article: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375
It's centered on LXC-based Linux workloads, but bind mounts are very convenient. More resources on LXC bind mounts below:
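For reference, a Proxmox bind mount is a one-liner (illustrative; the container ID and paths are placeholders):

    pct set 100 -mp0 /tank/media,mp=/mnt/media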
For a Windows guest VM, you should be able to mount the host's ZFS-SMB share from within the VM, and it should work no problem.
We use both. Containers are of course easier to manage than VMs; for instance, since a container is just a folder on your system, accessing it is much simpler than a VM's storage. Things like clones, snapshots, and backups also become easier.
They are also easier to move around across systems. Again, since it's a folder, you can simply zip it and move it across servers. Most platforms like Flockport or LXD also let you move and manage containers across servers.
A VM provides better isolation with its own kernel; for instance, for multi-tenancy VMs are required, and when you need to test specific kernel features or run an OS other than Linux you need a VM. But for use cases beyond that, especially when you are just running apps, containers make more sense.
> 1) If I want to passthrough a PCIE device to a container, such as a SAS controller or a NIC, does this work in a similar way to a VM hypervisor?
Supposedly it's possible, but I cannot confirm, having never tried it myself:
https://medium.com/@MARatsimbazafy/journey-to-deep-learning-nvidia-gpu-passthrough-to-lxc-container-97d0bc474957
> 2) Are there controllers for managing multiple servers with containers at a time similar to vSphere? I guess I've tried virt-manager with a couple servers at a time, but is there anything more robust someone recommends? (KVM support optional)
Proxmox allows managing multiple servers from a single pane, much like vCenter.
https://pve.proxmox.com/wiki/File:Screen-startpage-with-cluster.png
> 3) Since the containers all share the same underlying kernel, are there additional security concerns I should be aware about?
https://linuxcontainers.org/lxc/security/
You'll want to make sure you're doing everything in unprivileged containers.
> 4) Can I migrate VMs I already have? I found this tool 'lxd-p2c' and built it using go, but I can't really find any decent documentation on how to use it ... does anyone have any experience with it they could share w/ me?
Moving from VMs to containers is tricky. I don't know of any tools to do it; I did manual migrations myself (moving programs to new instances, etc.).
LXD also supports managing local or remote LXD containers.
LXD lets you create containers on LVM, BTRFS, ZFS, and EXT4.
LXD supports snapshots & restore of containers.
LXD lets you run Docker inside LXD.
You might want to consider Linux containers inside of your Vagrant instance. Since Docker's design philosophy is built around immutable containers, trying to emulate an environment modified by Puppet opens up more problems than it solves.
This will get you the flexibility of virtual machines without the overhead. You can setup bridge interfaces inside the vm, and all your containers will look like full OS VMs.
Late to the party - but I'm sure others will be wondering this and come across this thread. This question was asked in the mailing list
My opinion: a broken update has been pushed to stable before, so if you want perfectly stable, go with LTS, as there are fewer changes in each update.
> On Mon, Feb 13, 2017 at 01:16:53AM +1300, Alex Clarke wrote:
> > Bit confused as to the differences between stable (2.0.9) and development (2.8). What's the major differences between the two, functionality wise? Is there benefit to go with development when building a non business critical host?
Stéphane Graber's response was as follows:
https://linuxcontainers.org/lxd/news should give you some idea of what's new.
For LXD 2.0.x, we only backport bug and security fixes so you won't be getting new features when staying on the LTS branch (which is precisely what most production environments want).
Going with the latest feature release (LXD 2.8 right now) will get you things like the LXD network management API, attach of GPU and USB devices, recursive file transfers, configurable syscall filtering, PKI mode, PATCH REST operations and a bunch of extra configuration options.
Both the latest LTS release and the latest stable release are actively supported by upstream LXD, so pick whichever works best for you.
Note that we push new feature releases about once a month and don't support previous ones, so if you go with those, you'll be asked to upgrade to the latest should you ever file a bug report.
If you're using Ubuntu then you should try out containers with LXC/LXD. I used this guide to get started. Once you've got it all set up, spinning up new containers from images is really fast. They also have lower overhead than traditional VMs. And there are official images based on several distros.
I recently built a tiny server on a Mini-ITX Xeon D-1540 platform with 5x 6TB WD Red drives running Ubuntu 16.04. Instead of going the full-virtualization route, which I feel is a bit of a waste of system resources since you have to duplicate a full environment for each guest OS, I run LXD with a number of dedicated containers, one of which is a Plex installation. It works remarkably well. Check it out: https://www.ubuntu.com/cloud/lxd
I was going down the same path as you but I found something better.
Docker is probably the wrong interface for this... You probably want to do development under lxd/lxc and then once you have a stable app image you can freeze the rootfs image and run it under docker.
lxd/lxc brings the benefits of containers to more free-form operating system images which behave more like a VM but without the overhead and hassles. I was able to set it up on my laptop and homelab and start a project on one and migrate to the other with no issues.
kvm inside docker ... sound like the concept of system containers that is implemented by LXD (https://linuxcontainers.org/lxd/) and baseimage-docker (https://phusion.github.io/baseimage-docker/).
The first one (the one behind liblxc) has persistence based on the backend used (directory, zfs/btrfs volumes, lvm, etc.); with the second one, the concept used for persistence is Docker volumes, which is the real persistence concept behind Docker.
You can get an idea of the number and type of available LXD images by going to the online LXD tool:
https://linuxcontainers.org/lxd/try-it/
Once logged into a command prompt, execute:

    lxc image list images: | more
I like(d) to virtualize because I wanted to keep services separate from each other. LXC basically allows you to run these services separately, but at the host level, like this:
    ps aux | grep smb    (running this from the host environment)
    100000  9837  0.0  0.0  336596  6596 ?  Ss  Jul30  0:00 /usr/sbin/smbd -D
    100000  9851  0.0  0.0  328620  2716 ?  S   Jul30  0:00 /usr/sbin/smbd -D
    100000  9925  0.0  0.0  336596  3128 ?  S   Jul30  0:00 /usr/sbin/smbd -D
    100000 39241  0.0  0.0  345064  6720 ?  S   Jul30  0:00 /usr/sbin/smbd -D
    100000 39242  0.0  0.0  345064  7064 ?  S   Jul30  0:00 /usr/sbin/smbd -D
    100000 39243  0.0  0.0  345164  7472 ?  S   Jul30  0:02 /usr/sbin/smbd -D
    100000 39244  0.0  0.0  345064  7012 ?  S   Jul30  0:00 /usr/sbin/smbd -D
You can see the UID is different. It's basically the UID for the container. The kernel allows you to remap UIDs so you will have your own root inside the container. So it is separate, but it runs on the host kernel. This allows you to run all the processes super fast with a lot less overhead.
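That 100000 comes from the host's subordinate ID ranges; a typical allocation looks like this (illustrative):

    # /etc/subuid and /etc/subgid
    root:100000:65536
    # container UID 0 maps to host UID 100000, UID 1 to 100001, and so on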
Also make sure you use the ZFS backend for LXD.
You can play around here before installing it on your own: https://linuxcontainers.org/lxd/try-it/
It's just important to understand what LXD's relationship to LXC is.
At https://linuxcontainers.org/lxd/introduction/ it says:

> Relationship with LXC
>
> LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
>
> It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
LXD introduced a REST API for LXC, a new command-line syntax (simpler & much better IMHO), and remote management/provisioning backed by a database that maintains config & state information (this is how local & remote LXD management of LXC is accomplished with the REST API).
To me the most visible was the CLI change.
To create a container...
Privileged containers:

Old syntax:

    sudo lxc-create -n <name> -t <OS type/template>

New syntax, creating a privileged container named cn1:

    sudo lxc launch images:ubuntu/xenial/amd64 cn1 -c security.privileged=true

Unprivileged containers:

Old syntax:

    lxc-create -n <name> -t <OS type/template>

New syntax, creating/launching an unprivileged CentOS 7 x64 container:

    $ lxc launch images:centos/7/amd64 my_centos_OS

or an Ubuntu xenial x64 container:

    $ lxc launch images:ubuntu/xenial/amd64 my_ubuntu_cn
These differences become more important if you build/use scripts etc to manage/orchestrate the LXC containers.
Neat! I've been playing with LXD on my local server. It's still LXC at the core, but with nicer management, which makes it simpler to use; it can also set up ZFS automatically for storage, which is really nice.
Also check out rinetd for NAT instead of a complex iptables command; you just add one line to a config file and do service rinetd restart.
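The rinetd line in question looks like this (illustrative addresses; the format is bind-address bind-port connect-address connect-port):

    # /etc/rinetd.conf
    0.0.0.0 8080 10.0.3.100 80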
LXD has an OpenStack backend, so it's possible that any management UIs that support the OpenStack APIs (in addition to OpenStack itself) could work. Unfortunately, OpenStack is as overkill as it gets, so it may not fit your needs.
A server needs an IP. You can't share IP addresses. You could have a server in front that offloads to other servers, or you could use Linux containers to host different websites on the same server in different server environments (the same concept as storing websites on different servers). Personally I use LXC to manage my containers.
https://linuxcontainers.org/lxd/introduction/
Project is being developed by Canonical under the Apache 2 license.
(yeah, it's amazing how much info one can find out by following a link to a project's website)
There's kind of two ways to do containers. The first is system containers, think lightweight VMs:
https://linuxcontainers.org/lxc/getting-started/
The 2nd is application containers, think like a big zip file with an application in it that you can interact with:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-getting-started
Then pick something to get started, say, a Wordpress Blog. That needs "wordpress", a database, and say, a load balancer. Learn how to deploy a blog with containers on your laptop. Then on AWS, then on Azure. Rinse and repeat for more complicated things. If you can nail down those sorts of skills you'll be good to go!
Install whatever flavor of Linux you'd like to use, get the system up and running, and then look into Linux Containers. Containers will allow you to have several different instances of Linux running (each with its own IP and filesystem). These different instances can then be used for whatever you want. Something goes wrong? Blow away the broken container and start up a new one.
My home server is set up like this. My "host" acts as my firewall and router, and then each container performs a specific task (for example, one runs Apache, another Postgres, etc.).
I'd go with AWS or Rackspace. I'm not sure what each site may need, but if they all use the same stack (Ruby on Rails, or whatever framework), then go with setting up the different sites as virtual hosts on Apache. If the different sites use different frameworks, then go with Linux containers. They're MUCH more lightweight than VMs.
Hi, Flockport containers are totally free to use and share. It's completely open, there is nothing closed here.
There is no code in the containers but open source applications and web stacks like Wordpress, Nginx, PHP, Ruby, Nodejs etc that can be deployed, optimised, uninstalled and used as per user discretion.
We have configured them so users don't have to and can get to deploy stage instantly.
Flockport is based on LXC which is an open source project supported by Ubuntu.