Notice how 2X nodes are distributed as Docker images. Docker images are easy to deploy to cloud platforms such as Amazon AWS, Microsoft Azure, DigitalOcean and many others.
This means we will likely soon see hordes of 2X nodes coming online from cloud services, many of them running in the same data center, on the same physical machine, or even within the same virtual machine.
Deploying nodes like this does nothing to decentralize the network. It just artificially inflates the 2X node count.
It is sad to see what great lengths Jeff Garzik, Coinbase and the DCG are prepared to go to in order to deceitfully keep up appearances of support. Not to mention the money wasted and the predictable damage to the Bitcoin community, many of whose unsuspecting members are inevitably going to be duped.
It is a despicable display of disrespect to the Bitcoin community and they should be very, very ashamed of themselves.
Docker, along with a few service containers, makes a fast and slick development environment. There are a few examples and tutorials out there for its use. Here's one: https://docs.docker.com/samples/library/php/
Minecraft, CS:GO, Ark, Factorio and L4D2 are what I remember off the top of my head. No special optimizations; most were installed using LinuxGSM and Ansible. Everything was running locally, so ping times weren't a real problem.
I've tried looking on Docker's website and I'm having trouble decoding what they claim their software/IT solutions do. Anyone have a clear explanation of what this is supposed to do?
Edit: So apparently (after further research) Docker is like a box that has all the normal necessary stuff for an application to run on a given set of hardware. At least that's what I'm grasping (correct me if I'm wrong). I'm gonna do a little more research because I'm genuinely curious as to what this does.
Edit 2: Ahhhhhhh, I understand now. Docker Swarm is a type of protocol that can link multiple computers together and use all of their resources together. Say you wanted to render a very large file in AutoCAD, or something in After Effects. You would use something like Docker Swarm to combine the computing resources of all of your computers and distribute the load, making the process not only faster, but more efficient. At least that's what I'm gathering from here.
Someone correct me if I'm wrong, because I've always wondered how those big movie studios can render an entire movie such as The Jungle Book (something that would take years on even the most advanced supercomputer) in a few weeks.
Edit 3: Thanks everyone that replied for the explanations! For the correct explanation, see the replies to my comment.
> That's gigantic.
2 meg is "gigantic", but I'm the one being sarcastic? Is this real life?
It's trivial compared to using something like Docker for deployment but people do it just to overcome the issues of replicating run environments.
This already exists. That's what I use on my servers.
I’m running them under ESXi VMs, the VMs are stored in a FreeNAS share with ZFS snapshots and the game backups go to another NAS array.
If you actually gave a shit about learning, you could have gone to docker.com and learned about how docker provides virtualization containers for server deployment.
But you didn't because in all actuality you don't give a shit about docker.
Hell, Docker spent time on their website selling it to technically inept CEOs for their companies. But despite this, even that was beyond you to read.
So if you didn't care in the first place, why ask what docker is?
> There is a file cache for our games, but some... need servers. I'm working on finding the "self hosted" versions of those.
Take a look at Pterodactyl for managing your game servers. https://pterodactyl.io/
of course, as long as you know how to manage a linux server. :)
Or you can use tools like ServerPilot to offload the management of the server. Installing WordPress is three clicks away.
I wrote about this (using server pilot) on my blog: https://gagah.me/2017/02/14/setting-vps-for-wordpress-without-ounce-linux-knowledge/
Yes, it does (support virtualization), and yes, Docker will (very soon) work on those.
It’s still unclear if it will support x86 containers (through emulation) in addition to arm64.
https://www.docker.com/blog/apple-silicon-m1-chips-and-docker/
it sounds like you have a fundamental misunderstanding about what docker is even trying to do, much less what it does.
it is a suite of tools around managing linux containers. https://www.docker.com/whatisdocker/
"Filled in pixel-by-pixel" is the generally-agreed upon definition of pixel art?
I find that problematic because for some images there's no way to tell the method by which they are generated.
For OP's picture, it's pretty likely that it's a screenshot. But who knows, they could have filled it in, pixel by pixel, from a reference screenshot.
Take this image as another example. I could have easily done that pixel-by-pixel or with the oval tool, as long as anti-aliasing was off. By your metric, the same output image could be classified as both pixel art and not pixel art, depending on which method you think I used.
So basically RunDeck?
You should look into existing solutions; rolling your own is a massive effort. But to answer your question: you don't "need" JavaScript for simple web sites that reload the page on every action. JavaScript is used to make the page more interactive and faster - see how on Reddit, comments are saved instantly and you don't need to wait for a whole page load? That's enabled by JavaScript.
These days, web applications are often single-page applications (SPAs), meaning there are no page reloads and all content is displayed and made interactive through JavaScript. These types of applications are what frameworks such as React, Angular or Aurelia are for. Instead of emitting a full HTML page, the backend becomes a JSON API, which is then also accessible by other applications.
Docker on Windows is more of a hybrid. They use HyperV virtualization to provide a Linux kernel and run Linux containers under that. https://www.docker.com/docker-windows
I neglected to acknowledge that Docker can run on Windows and was discussing only the Linux-hosted side.
Caprover is one of my favorite things ever. It's a fabric for managing server apps and containers, with dead-simple built-in nginx plus Let's Encrypt for one-click SSL, and a shitton of built-in apps like Nextcloud (the only Nextcloud instance I've had that hasn't shit the bed), WordPress, Ghost, AdGuard, k8s, the usual fare. If you decide to stop using Caprover, the apps you deployed with it will still function.
Unlike some tools of this sort, I find it does a good job of helping you understand the underlying infrastructure of what's going on with your tools, too.
It also supports Docker swarms, repository hosting, and all sorts of stuff that's beyond my pay grade!
> but without the massive headaches of setting up and maintaining a graphical Linux desktop
I see you have not touched any desktop Linux distributions in the last ~10 years. They ship with a complete desktop UX; I can speak for Ubuntu and Debian, which have incredibly smooth user interfaces with all the things you need.
> brew
This is a simple package manager. You can use basically any package manager on your Linux box as well, and they can easily be kept separate, but what you really should be looking at is Docker.
> Command/Super instead of Ctrl for system shortcuts (...) since it means out of the box there's zero conflict between terminal control codes and the rest of the OS.
This is nothing new; I have never seen any distro have conflicts with the terminal. Other than the generic keyboard commands (copy, paste, select all), the shortcuts are on either the command or the alt key. This is so old that even Windows 7's shortcuts were fully on the command (Windows) key.
> I don't know of any Linux equivalent to iTerm2
There are, but this just comes down to taste and what you are used to.
They do inform you.
> "However, you understand that your use of the Service necessarily involves providing ServerPilot access to and the ability to modify the contents and operation of your servers"
Point 11 of their terms @ https://serverpilot.io/terms
> I was told this was needed to provide tech support. That's noble.
In all seriousness: SP is spinning up your server and controlling it. How do you think they would provide support for it without access?
By using a managed host or pseudo-managed host (like SP) you are essentially handing over administration of your server to a third-party.
If you are not comfortable with a third party having access, don't rely on a third party to manage your servers; do it all yourself.
I don’t mean to burst your bubble, but there is already something very powerful and open source out there. If you are interested I would join the Discord and help in development if you are able.
It uses Docker, and I use it to host all my game servers.
So far they don’t have very many plugins, which would be something that you could focus on :)
Actually, at this point it's neither the OS nor the hardware.
The Developer Transition Kit had an A12Z that did not have virtualization extensions enabled. That shouldn't have stopped the Docker team from being able to port onto Apple's hypervisor framework (frankly, something I'm surprised they hadn't already done, and should have done some time ago), but it would have prevented testing on the ARM platform. M1 does have those extensions enabled.
Right now, it's mainly upstream dependencies that are holding them back. Go will not support Apple silicon until 1.16 due in February (though I expect Docker can start working on it now -- as far as I can tell everything needed for Go to support Apple Silicon is on master now).
They also call out Electron, which is already shipping beta versions with Apple silicon support, but again, they probably don't want to make a production release on prerelease upstreams, not to mention it's a major version so there are probably breaking changes that need to be worked through.
Finally... they actually need to get machines to test on.
So yeah, I'm as antsy as the rest of you... but these things take time, and Docker is in a bit of a unique position simply because it uses virtualization, which is pretty much the one thing that Rosetta can't help with.
The blog post has more details: https://www.docker.com/blog/apple-silicon-m1-chips-and-docker/
Since this is a publicly-facing server I would aim for a distro that is current on security fixes (and has a history of being current), and is not "bleeding edge" (exposing yourself to new exploits or instabilities)... Debian seems like a good choice. The only non-Arch system (which serves as my mini-server for outside services) in my house runs Debian and I use webmin for managing it.
Edit: Don't use webmin
Digital Ocean and ServerPilot will make your static and WordPress site management process a tad easier. Upgraded plans give you free SSL certs, sftp users, logs, etc.
You can also manage multiple servers from one dashboard.
Then you can use standalone droplets on DO for your two-tier architecture sites.
Joined!
While Steam has a large number of games, I've found this site to be Linux-friendly as well. (Home: Proxmox with LXCs. Work: VMware ESXi VMs.)
I use their Quake and UT server installs for a local VM LAN server at work.
It’s a software that lets you run lightweight applications. Instead of copying your code and running it on a server, you copy the code to run the application and the application code. This gets around errors where “it worked on my machine” because I installed library X and you must be missing that. Since you are also including X and all the code to run your app instead of just your code.
This is not a full summary of docker but if you want to learn more see docker’s documentation on containers https://www.docker.com/resources/what-container
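To make that concrete, here's a minimal sketch of a Dockerfile for a hypothetical Python app (app.py and requirements.txt are made-up names):

```dockerfile
# Start from an image that already contains the Python runtime.
FROM python:3.12-slim
WORKDIR /app
# Install "library X" and friends from a pinned list, so every copy
# of the image carries exactly the same dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application code itself.
COPY . .
CMD ["python", "app.py"]
```

Anyone who runs the resulting image gets your code plus library X baked in, so "it worked on my machine" stops being a thing.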
Yeah, as a software developer, that page was clearly not written for me. The main problem with going to docker.com is that the real answer I need to "What is Docker?" is "It's basically just a tool for managing Linux containers (LXC)." But docker.com can't actually say that on its home page if it wants business types to pony up big bucks.
Tellingly, the "What is Docker?" page doesn't even contain the word "Linux," let alone "Linux containers."
I would say first priority is getting backups/DR squared away and after that getting the old machines virtualized.
Nagios is a good idea for monitoring; there are also other options out there should you go that route. I would not recommend Webmin, as it's had some security issues.
Ultimately, if there are so many places where outdated hardware/software/whatever are going to cause problems for your company in terms of downtime, budget is the only thing that will fix this. I would try to put together some sort of documentation along the lines of $x for new servers/hardware/etc vs $y for downtime.
Virtualization means using your hardware as a host for virtual computers (allowing you to run multiple operating systems as virtual machines). So, think of it as your computer becomes a big foundation that you put tiny houses on -- and you can configure and customize those houses to your liking. You can turn them on when you want, shut them down, wreck them, rebuild them... all while the foundation (and other virtual machines) are safe and secure.
And I'd agree that once you add some more memory (I'd also toss a =<250 in there), you'll probably have a blast getting into it.
ESXi and Hyper-V are good choices, but I'll throw my vote for Proxmox. It won't cost you a dime, it's a solid hypervisor with a pretty solid community and is based on Debian so if you need to drop down to the command line, you'll have a plethora of resources at your disposal.
Also, take a look into LinuxGSM. If the games you want to serve are on the list, it'll help you keep the resources down vs. running a top heavy Windows server.
Have fun, mate. =)
I would recommend trying the pterodactyl.io panel, which runs on GNU/Linux. For beginners I recommend a GNU/Linux distro called Debian. You'll need the panel itself (hosted on a web server; nginx is recommended) and the Pterodactyl Wings daemon (a standalone executable written in Go).
Edit: for a second I thought this was on r/admincraft 😂
Define "bad".
The CPU architecture is slightly different but compatible, so besides the performance difference, it's no biggie. Yet if pods run on the Pi 3, they'll be slower than if they run on the Pi 4. That's bad for CPU-intensive programs.
It's a good educational exercise to work around this (mark the 3B as "low priority", as sketched below, or put special non-CPU-intensive workloads on it), but you don't gain much here in practical terms. Some would thus see the educational value as "wasted time". In reality, you try to keep all nodes identical; it simplifies management a lot.
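For example, a rough sketch of the "low priority" idea, assuming the Pi 3 joined the cluster as a node named pi3b:

```bash
# Taint the node so ordinary pods avoid it; only pods that explicitly
# tolerate the taint get scheduled there.
kubectl taint nodes pi3b hardware=slow-cpu:NoSchedule

# Or just label it and steer light workloads to it via nodeSelector
# or node affinity in the pod spec.
kubectl label nodes pi3b cpu-class=slow
```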
There's more educational value if you add in an even older RPi, or x86_64 nodes. Then the binaries are no longer compatible; see multi-arch images. Again: you'd usually not do this in reality.
2 Pi4 and 1 Pi3b will work. So I guess that counts as "not bad". It's not good either (3 Pi4 would be absolutely ok), but for the sake of having 3 nodes to set up and test a HA setup, it'll do.
I don't use Docker on the Mac on a regular basis, but it looks like Big Sur is officially supported and there's a tech preview out for Big Sur on ARM Macs. Are there game-breaking bugs that aren't readily apparent?
All of these seem very generic, with the exception of the bottom center logo which is (too) strongly reminiscent of the Aperture Laboratories logo.
That said, just throwing this out there....
How about something whimsical? Like the Docker logo. I say whimsical because when I saw your studio name I immediately thought of a brain with a party hat, since it looks like a portmanteau of "celebrate" and "cerebral" (which is what I assume you were going for with the name). Just sayin'. I think you could evolve some distinctive branding around a concept like that, and given that you are a game development studio, I don't think whimsy is entirely inappropriate for a brand.
Performance is great for me. I haven't used a pre-built image from their app directory; I get a blank Ubuntu install and use ServerPilot to set up the LEMP (+ Apache) stack. It's so fast that I host multiple sites on a $10 VPS without issue. ServerPilot is kinda like cPanel in that you use it to set up your sites, databases, SSL certs (free w/ Let's Encrypt) and more. I can send you a code to get some free usage if you want to try it.
You could potentially look at Rundeck for this type of thing. It supports LDAP/AD authentication, with the ability to limit certain jobs to certain groups. You can run ad-hoc commands as if you were on the command line, on the localhost or on a set of pre-defined remote hosts. It also has the ability to store pre-configured jobs that execute a series of commands or scripts in a pretty nicely defined workflow. It integrates nicely with a number of different tools like Jenkins, Puppet, Chef, etc., and most importantly (I think), it has a sense of history: it tracks when each job was executed, by whom, and what the output/results were.
I don't think it's fair to say that Google isn't contributing. I've personally been involved since the beginning (https://www.docker.com/blog/community-collaboration-on-notary-v2/) and have spent a substantial amount of time giving feedback from the perspective of a registry and client maintainer. We're also working on some changes to the image-spec and distribution-spec that (if adopted) could enable the notary v2 prototypes to be portable across real registries.
I'm just not heavily involved in refining the requirements documents, because that's not really interesting to me as an engineer, and time is a limited resource :)
> Just wanted to learn some docker and kubernetes
Windows Docker Desktop has a built-in Kubernetes setting that you can toggle to run K8s:
https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
Kudos!
For those looking for a simpler solution, there are also:
I have been using dokku for more than two years now and it has been absolutely fantastic. There have been some issues here and there, now and then, but considering how much easier it has made my life, and its zero cost (a huge plus compared to the exorbitant prices of Heroku), I recommend it 100%.
Give Pterodactyl Panel a try. It supports almost every game I know and it's very easy to get into. It creates a Docker container for each game, and user management is really easy. It's actively updated too, and more features are on the way.
I mean, this is a lot of work. This isn't a quick "some contract" amount of work. There are loads of factors needed to be considered before going at this project.
Personally I'd recommend using an open source project like: https://pterodactyl.io/
tl;dr: There's nothing to worry about here.
All it is is a Dockerfile, plus all the files needed inside the Docker environment.
The Docker Engine is a way to isolate applications (like PHP, or Python) in their own environments. These environments are called 'containers'. All this does is create a virtual environment and install the necessary software (like Apache, opcache, MongoDB and Redis) inside that environment, so your computer won't be affected.
Only the volumes parts in docker-compose.yml will affect your computer a little bit. Basically, they place files that are used inside the container on your computer, outside of the container. This gives developers/users easy access to configuration files, and other files, without having to go inside the container.
More explanation: https://www.docker.com/resources/what-container
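To illustrate the volumes part, a hypothetical docker-compose.yml snippet (not the actual file from this project) could look like:

```yaml
services:
  web:
    image: php:8.2-apache
    volumes:
      # Host path on the left, container path on the right: the files
      # live on your computer but are visible inside the container.
      - ./src:/var/www/html
```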
The Docker Lab & Tutorials are a good start. https://www.docker.com/play-with-docker
The point of docker is that you can build a small enclosed environment that has exactly what you need, and can then be replicated where you need it. That way the testing environment & the deployment environment are exactly the same so you never end up with the "It works on my computer" ( but not on the server because of configuration differences ).
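In practice that boils down to something like this (myapp is a placeholder name):

```bash
# Build the image once, from the Dockerfile in the current directory...
docker build -t myapp:1.0 .

# ...then run that exact same environment anywhere Docker is installed.
docker run --rm -p 8080:80 myapp:1.0
```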
I hope that helps ...
Connecting Your Server to ServerPilot
Because Amazon Lightsail uses SSH keys, you will have to use the manual installer to connect to ServerPilot.
Source: https://serverpilot.io/community/articles/how-to-create-a-server-on-amazon-lightsail.html
How to Manually Connect a Server to ServerPilot
https://serverpilot.io/community/articles/how-to-manually-connect-a-server-to-serverpilot.html
This is a pretty broad topic, and you're talking about covering several bases here:
With all that being said, I think you'd be served best by the simplest solutions out there (always a fan of simplicity: fewer parts breaking, ease of extension, etc.). With that in mind, I'd suggest Ganglia for your metrics (CPU, network, disk I/O, etc.); Monit, Nagios, or Sensu for service monitoring; and if you really want remote admin, Ajenti if you're looking for something semi-established, and maybe play with Cockpit if you're feeling adventurous (I have zero experience with it, but... it just looks cool).
If you know how to use Linux you can use an old PC and install Ubuntu on it.
Then download the script files from here: https://linuxgsm.com/servers/
Setting up multiple instances is also very easy with Linux GSM.
Just my 2 cents:
As mentioned use a firewall and open the appropriate port. I can HIGHLY recommend pfSense. I'm running it inside a VM as well and it handles everything. Put your MC Server on a different subnet/DMZ than your LAN.
To access your server for administration set up a VPN. I wouldn't recommend opening up SSH to the outside world unless you really know how to secure it. Only asking for trouble.
As a better way to run an MC server, I can recommend Linux Game Server Manager (LGSM). It makes it very easy and handles updates, backups, notifications, etc.
> Intel NUC
Don't expect wondrous performance, since it uses a mobile CPU.
> My question is whether it's better to run a Ubuntu container and have everything setup from scratch or if I should move to MineOS especially if it just makes it a lot easier to work with
The former; MineOS, being based on Debian, is generally pretty outdated for no real advantage. I'd consider an alternate panel (PufferPanel) or LGSM instead.
> should I run Spigot or migrate to Paper?
Paper
Thanks for posting this already, we just published the official blog post for 1.0: https://cloudron.io/blog/2017-06-20-cloudron-1.0.0.html
Let us know what you think; my co-founder and I have had a great journey building this over the last 2 years!
I've used Forge in the past, but $20 a month is way too expensive. At the moment I'm using https://runcloud.io since it has the same features for $10 a month. The only thing I needed to do was create a .env file.
At the moment I'm working on my own server management tool because I think it's more secure to store ssh keys on a European server. It will take a while for me to finish it.
Your licensing needs work. How is 5 instances a "professional" license? I have 9 game servers running on my Proxmox GS, not including Satisfactory. What is the justification there, especially considering Pterodactyl is free, open source, and objectively better?
MacOS Server has been stripped to the bone, Docker won’t be going native for a while, and in general server applications have not been a priority for Apple in at least a decade. I don’t say that this can’t ever change, but folks won’t be quick on the uptake for this use case.
Have you tried the Docker tutorial? Last week I also set out to learn Docker, and the tutorial really helped explain things. It shows how to set up volumes so that your folder is mounted into the container for development purposes.
boot2docker is long gone. Docker for Mac uses a Linuxkit VM running in xhyve via HyperKit - there's a basic architecture diagram on this page: https://www.docker.com/docker-mac
I think you're right that you'll need to access the VM to deal with filesystem stuff though. There's some info about accessing the VM here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/10
Personally, I would ask this question the other way around:
What services would you rather run in a container?
The thing that trips up a lot of people new to this kind of thing is that while containers have their use cases, they're not something that should be deployed willy-nilly. Linux is designed for multitasking; you can have hundreds of services running on a normal host without any problems. Containers provide certain features and conveniences, but they shouldn't be the default for a new service. Check the official Docker site's list of use cases for some examples of why you'd use containers.
For example, there's almost no reason to run a single always-running nginx server serving basic web pages in a container. On the other hand, if you wanted to spin up ten nginx servers to each run minor variants of a customized module to test how stable they were, that's a great container use case.
I wouldn't run most of the stuff you've listed in containers unless you just really want to mess with containers. :) FreeIPA and DNS will probably be much easier to configure as normal anyway.
Docker is a containerization application that allows for packaging and deployment of applications and their dependencies. It's really a Linux thing, but it can run on Windows. We're starting to look at using it for dynamic, on-demand deployment of servers and systems for the data analytics solutions we develop for client projects. If extra servers or nodes are needed, they can be spun up and configured for the application, and then taken down afterward. Docker ensures that the software we deliver will behave the same every time, regardless of the environment the client runs it in.
> email accounts
You don't want your web server acting as a mail server, anyway, for all kinds of reasons.
I always recommend using Google(Apps) or MS Office 365. In fact, Office 365, which essentially runs an Exchange server for you on a per user basis, is one of the areas I have to concede Microsoft still does the best job. The reasons are far too numerous to go into here. I'd recommend research.
>wordpress sending emails from server
Depends on the type of email. Emails sent directly from a dynamically generated IP address (which you have) are likely to get caught in spam traps these days. You'll need the right PHP libraries installed, and a product like sendmail, assuming you're running the LAMP stack.
If you plan on any newsletters or user messages, it's best to use a service like Mailchimp.
> lack of a "control panel"
You'll need to be at least a little cozy with command line (SSH). Products like webmin can provide a gui for many operations that you'd normally use command line for.
> subdomains etc
These are handled in your vhost files. The aforementioned webmin has a gui for managing Apache vhosts.
> etc
Managing your own server definitely requires a learning curve. Keep backups. If you haven't locked yourself out of your own server a couple of times, it's not secure enough. (That's hyperbole; don't try to lock yourself out.)
> do I run into the risk of having to google stuff all night trough for email accounts, databases.
No, you'll be googling problems for much longer than a night, but don't get discouraged. To say that virtualized options like DigitalOcean, Linode or AWS are better for your money is putting it mildly. Once you get the hang of it, you'll wonder how you dealt with so much garbage for so long; you'll never go back.
If you want something in the middle ground, you could go with Cloudways or ServerPilot. They will deploy a LEMP stack to DigitalOcean for you and provide some support on it. Cloudways was doing $30 of free credit for Black Friday/Cyber Monday.
Starting with RC1, PHP 7.0 is available on servers managed by ServerPilot.
https://serverpilot.io/blog/2015/08/20/php-7.0-available-on-all-servers.html
Happy to jump in here as well.
First, learn about what OS you want to use. I run six game servers and a TS3 server off a single HP DL360 Gen 8 that I got secondhand for about $375. It's running headless Ubuntu, and all the game servers are virtual machines (libvirt/virsh). Virtual machines are a STRONG recommendation because:
Now, most CPUs support virtualization these days, so you'll want something with a relatively high core/thread count and a solid amount of RAM to spread around. I'm running 24 threads (dual Xeon CPUs) and 32GB of RAM. That's why I went with an old decommissioned enterprise machine. CPUs and RAM are cheap for previous-generation systems, but energy costs go way up.
Second, Linux Game Server Manager is your absolute BEST friend: https://linuxgsm.com/ It could not possibly be easier to set up a dedicated game server than using LGSM. Plus, they have a really good support Discord.
Also, downstream bandwidth isn't super valuable to you. Your upstream is what's most important.
Feel free to reach out if you need any more guidance. I like setting up my dedicated servers more than I enjoy playing on them so it's fun for me.
I use Cloudron and love it, it's some of the best few dollars I spend every month. There's a pretty decent array of apps available, good documentation on how to create apps of your own that you can then publish, and it comes built in with an email server including aliases and unlimited '+' emails.
You could use one of many purpose built job schedulers. Rundeck is a cool one:
You could also abuse Jenkins. It isn't really designed to be a job scheduler but many people use it like it's cron on steroids.
Or modify your application to use a queue. I'm more familiar with Python and Ruby where Celery and Resque are the favourites.
For PHP it looks like php-resque is good.
For a dedicated server, a graphics card shouldn't matter. Your other specs are fine, obviously depending on how many servers you are going to host in the end. As I've never hosted a game server before, I can't point out much for you. I would use Pterodactyl for the simplicity, and because it splits the servers into separate containers.
I am going to call out Pterodactyl as a free and OSS control panel for unlimited servers. It's not just Minecraft, either. There are guides for a full CentOS install.
One thing to call out is that it doesn't run on OpenVZ VPSes, because the daemon requires Docker.
In the interest of openness: I am on the project team.
That's the idea of VMs, to decouple from the hardware. And containerised apps go one step further and decouple from the original VMs so that the apps can run on different VM environments. See here for more explanation: https://www.docker.com/resources/what-container
Docker itself runs very well on ARM / ARM64. Docker images typically contain compiled code, so the same caveats apply: if the code inside the container is compiled for x86-64 then it will only run on an x86-64 processor, likewise ARM.
Docker Hub supports many architectures, here are the public ARM64 images: https://hub.docker.com/search?architecture=arm64&source=verified&type=image
The number of public images for ARM64 is several orders of magnitude smaller than there are for x86-64, but it's an industry transition that is well underway. The "library" containers for the most important things you need, like Go, Node, Ruby, Python, and basic Linux images like Alpine, Debian, Ubuntu, Red Hat, Fedora, etc. are all available in ARM64 already.
Docker Desktop supports cross-compiling the same Dockerfile to architectures other than the one in your local development machine: https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
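For reference, a cross-platform build looks roughly like this (the image name is a placeholder):

```bash
# Build for both architectures in one go and push the resulting
# multi-arch manifest to a registry (multi-platform builds can't be
# loaded straight into the local image store, hence --push).
docker buildx build --platform linux/amd64,linux/arm64 \
  -t youruser/yourapp:latest --push .
```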
As above, this is still not super easy and has some pitfalls, but the issues are getting worked through as ARM64 adoption continues.
AFAIK the Docker for Windows product uses only Hyper-V. There is (was?) another version of Docker on Windows which could use VirtualBox, but it was less transparent and seamless to use.
https://www.docker.com/docker-windows
The old Docker Toolbox:
If you have good social and communication skills, you'll find tech companies begging to hire you for technical sales and marketing roles. In particular, look at a role called "sales engineer" e.g. like this one https://www.docker.com/careers/sales?job-id=488746
This is going to be overkill, but have you thought about segregating your apps through Docker? I guess I just wanted to share this technology because it's really helpful for hosting, especially in small server setup environments. I use dokku to manage the different web applications running on my VPS and it's been great. dokku doesn't scale very much, but if you're hosting on a single VPS, it makes it really easy to manage multiple web apps.
1) With 16 GB of RAM, I would definitely install ESXi to give you flexibility. You may have some issues with part compatibility, but you might be okay. Check their HCL for what you bought; if you haven't bought it yet, checking first would definitely be advisable. A CPU with hyperthreading would have been good, though.
2) Honestly, you are best to just stick with debian/ubuntu. You want to stick with one of the LTS releases. This needs to be stable and relatively unchanging. If you are virtual you can split each function out to its own server; which would allow you to choose something different for each application.
3) I know less about the security. It sounds like you have a pretty good plan, though. Keeping updated with patches is another key thing, and something that an LTS version of an OS will help you with. With ESXi and two NICs you could create a virtual switch inside, put a firewall appliance on it (install something like pfSense), and bridge the internal and external networks. I do something like this to keep my "lab" isolated from my home network. Remote access is done with an SSH server on a non-standard port.
Actually, after looking your server over again... you don't have ECC memory? Not a requirement, but usually you'd want that in a server.
Also, look into Webmin.
BlueHost is shit, they are at fault.
How to migrate:
Or follow this tutorial: https://serverpilot.io/community/articles/how-to-migrate-a-wordpress-app.html
A Digital Ocean VPS (cheapest is $5 a month) + a free account at Server Pilot to partition your VPS into separate "apps" (or sites) would probably be the easiest way to go.
Take a look at ServerPilot. It sets up a LEMP stack for you and actively updates your VPS. They have a free version that has a lot of the features you want and plays nicely with DigitalOcean Droplets.
Maybe to start with, check out Cockpit Project (http://cockpit-project.org/). If you aren't familiar, it is a server dashboard that does a little bit with VM management. Might be enough to keep you on CentOS.
As for OMV in Proxmox, I don't see what you gain (maybe I have a bad imagination). Proxmox will handle the storage well enough. And if you convert some of your VM and services into LXC containers, you'll keep more of your management in the Proxmox GUI.
You mean things like per-service monitoring, easy per-service throttling and organized logs are useful on the desktop but not on an enterprise server? My mileage is completely different; ever since moving to RHEL/CentOS 7, our whole infrastructure became really simple and, most importantly, predictable.
Even using an old /etc/init.d service file with all the old commands still ends up beneficial under systemd. The moment you run "chkconfig myservice on", systemd auto-generates a unit file, which can then be fine-tuned to get all the benefits of systemd.
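For anyone who hasn't seen one, a minimal hand-tuned unit file looks something like this (names and paths are hypothetical):

```ini
[Unit]
Description=myservice
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure      # automatic restarts on crash
MemoryMax=512M          # easy per-service throttling
# Logs land in the journal automatically: journalctl -u myservice

[Install]
WantedBy=multi-user.target
```

Drop it in /etc/systemd/system/myservice.service, run "systemctl daemon-reload", and you get the monitoring, throttling and organized logs mentioned above for free.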
In your place I would really check out http://cockpit-project.org/, just to see what you can do by default and to get a feeling for how maintainable services are compared to anything else.
I recommend you set up a server using LinuxGSM instead. The server will just run under a created user account, and you can also easily hook into the server console this way.
http://rundeck.org is not bad. It can be run as a single VM acting as master/slave, but it can be scaled to any number of slaves. You can have any machine act as a slave via SSH keys, and then Rundeck will execute whatever you want on them and track the result. It also has error handling (i.e., if the job fails, take "X" action). It's as simple as you want to make it, but it can grow to what you want/need it to be.
Flask is a general purpose web framework. What you seem to want is a specific purpose web application that allows safe and audited execution of scripts. Look into Rundeck for this purpose http://rundeck.org and save yourself reinventing this particular wheel.
Ansible is where it's at. Ansible Tower (the GUI) is free for up to 10 nodes. Other than that, pay up or use something like http://rundeck.org/
You seriously should consider learning proper terminal use if you're doing system administration though. Nothing scary in it. And Ansible is just scripts.
In that case, check this out:
http://bencane.com/2014/07/17/integrating-saltstack-with-other-services-via-salt-api/
I think it's exactly what you're looking for. If you already use Rundeck, you may also be interested in this
http://rundeck.org/news/2014/03/20/Rundeck-and-Salt-at-Salesforce.html
Not really, but we honestly never make that many ad-hoc changes on servers. The biggest problem we had was that we have about 20 different AWS accounts, which means our infrastructure was spread across different VPCs and two different data centers. Since we made it a point not to expose any individual machine to the outside network, it would be very difficult to implement MCollective, because it needs a shared MQ server. What we did instead was implement Rundeck. It was actually incredibly easy to set up, and it's able to parse EC2 tags. Since we had already tagged each machine with its role and its environment, we can do things like run "free -m" on all machines with "tags: production+redis".
Rundeck is pretty cool. People should know about it more.
Have a look at Pterodactyl to run your servers. Very easy setup for lots of different games. It runs each in its own Docker container, so it keeps isolation between them. You can also set limits per server, etc. I'm running it on an Ubuntu server VM on Proxmox. I had started installing some games individually and then came across this. https://pterodactyl.io/
I use https://craftycontrol.com/
It can do what you're after.
I've heard good things about https://pterodactyl.io/ As well.
The other option for remoting into the computer would be TeamViewer.
https://www.docker.com/blog/released-docker-desktop-for-mac-apple-silicon/
I have an M1 MBP. When it was released, Docker was not supported on it at all.
They added support via Rosetta, and now it looks like they released a native version on August 15.
I stand corrected and this is good news for me.
Generally for Docker, the advice is to use personal access tokens where possible to avoid your main Docker hub account credentials being held in the clear.
For Docker for Windows/Mac they leverage the credential store on each platform to protect your creds, but (AFAIK) on Docker engine on Linux they don't, so it's important to a) use a personal access token and b) make sure you protect access to that file.
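On Linux that means something like this (the token is a placeholder; you'd generate one in Docker Hub under Account Settings > Security):

```bash
# Feed the token via stdin so it never shows up in your shell history
# or in `ps` output. It still ends up in ~/.docker/config.json, so
# lock down access to that file.
echo "$DOCKER_HUB_PAT" | docker login -u youruser --password-stdin
```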
As tempting as it is to assume we're all familiar with that problem, you'll get more help if you act as if you're the first one to see it.
The only problem that I know of regarding Docker on Fedora has been that since Fedora switched to a cgroups v2 configuration, Docker hasn't worked, because it didn't support cgroups v2. They released a version that does in December, and I'd expect Docker to work on a default Fedora configuration since then:
https://www.docker.com/blog/introducing-docker-engine-20-10/
(Though, to be clear, I prefer the security model used by podman, so I don't use Docker.)
> LS.io's pull limit was reached
Nope. The limit is applied to the user who downloads the image.
> Rate limits for Docker image pulls are based on the account type of the user requesting the image - not the account type of the image’s owner.
Perfect questions! Kubernetes helps you run containers in production. You might have heard of Docker which runs containers, and which Kubernetes is built upon. You can think of containers as a bundle with everything required to run your app. In this instance, the WordPress container includes Apache, PHP, and all the PHP files that make WordPress run. The nice thing is that for a lot of popular software, the owners have built container images like the one we're using in the example (WordPress Docker Image) so you can launch it into production with a minimal amount of configuration. No need to manually set up Apache, PHP, decide what kind of server to set it up on, etc. The same goes for MySQL.
So in this demo, we are taking both of those images and telling Kubernetes to run them on a cluster of many servers. Kubernetes does the hard work of figuring out where to run your container and restarting in the event of any failures (for example, if a hard drive dies, Kubernetes will move your container to a healthy server).
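If you're curious what "telling Kubernetes" actually looks like, here's a stripped-down sketch of a Deployment for the WordPress half (the real demo presumably also has a Service and the MySQL side):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest   # Apache + PHP + WordPress, prebuilt
          ports:
            - containerPort: 80
```

Kubernetes picks a healthy node to run it on, and reschedules it elsewhere if that node dies.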
While someone running a single WordPress blog might not need it, it's a very cool tool to learn, especially if you want to run WordPress alongside a few other personal blogs/sites/etc. Each might have its own unique webserver and config, but at the end of the day the server running it doesn't care, as all of that is taken care of inside the container.
Basically stuff like Docker. A halfway house between running apps natively on a box and running them in their own VM.
Quoth their website: >Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.
Docker the company has created what they call the 'Universal Control Plane', which is now in beta: https://www.docker.com/universal-control-plane I checked it out, and it seems really great, even better than Tutum I think. But we are currently using Mesosphere's DCOS in production and it's a great solution, not just for Docker. They have a free community edition that takes 5 minutes to deploy to AWS via a CloudFormation template. I suggest checking it out (warning: the default templates use m3.xlarge instances, which are very costly; you can modify the template file to just use t2.micros while testing): https://docs.mesosphere.com/install/awscluster/
I think you should have a look at tools like vagrant or docker.
Vagrant is a VM manager: it allows you to configure your desired environment, and it's built on top of existing virtual machine providers (VirtualBox, VMware, etc.). The problem with it is the required overhead.
Docker is different because it shares the kernel host. Here is a quick description from the website: > The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.
So, docker might be what you are looking for.
My company uses Webmin and we've never found a reason to switch. It's free and pretty easy to set up securely. Also, they have a new theme (I think you need to enable it in settings) that looks modern.
There are a considerable number of different ways to do this, but it depends on what you are permitted to do with the servers in question.
Easiest way (And possibly a godsend when it comes to administration) is install Webmin.
Another way is to use SFTP.
A third way is to log in via SSH to one of the servers and use the scp command. For example, if you log into server1 and wish to copy a file from it to server2, you log into server1 via SSH and use this command...
scp /file/path/filename.file username@server2:/file/destination
Check out Webmin. It's probably the most useful piece of software I've ever installed. You set it up and it runs a simple webserver you can access from anywhere (assuming you know your IP). It will give you great control over your Linux box, including the ability to do a web-based SSH session, HTTP tunnel, lots of fun stuff.
You get what you pay for, and "cheap and reliable" only goes so far. But you can get a Vultr VPS for $30 a year, then use the free tier on ServerPilot to install WordPress.
Since LightSail supports Ubuntu 16.04, you can use ServerPilot on LightSail just like on DigitalOcean.
One thing to watch out for is that LightSail firewalls off port 443 (HTTPS) by default, so be sure to open that port in the LightSail firewall before enabling SSL on your sites.
Seriously. You just have to start a fresh droplet, copy/paste the given code, and bam, they install a strong nginx frontend with Apache behind it. From their control panel, you can add "apps" (well, websites) and databases. It's really a simple web panel that just takes the hassle out of installing everything. I love it.
Oh. And it's free if you don't need the advanced options.
It supposedly has the ability to aggregate stats from other sources, but that feature doesn't seem to be documented (that I can find).
Cockpit is another one to look at. You can add multiple remote servers to the 'dashboard'. I have found it to be lacking, but it will spit out the basics at the very least.
Docker Engine is open source.
Docker Desktop is a product that contains Docker Engine and a bunch of other stuff to run it on OSX/Windows. If you'd like to create your own version of it that includes Engine and other components, you can. You're also free to install WSL or another hypervisor, set up a Linux VM, and install Docker Engine inside of it. You can also use Docker Desktop for free for personal use or if your business is under the limits: https://www.docker.com/pricing/faq