It's a clean, light, and easy distribution.
The Alpine Linux philosophy appeals to me: https://www.alpinelinux.org/about/
It doesn't use "big" tools like systemd. (No pro/anti flame war intended: systemd is a pretty great tool for many things, and I use it on Arch because the maintainers chose to integrate it deeply, but I'd rather not use it here.)
It's easy (for me, with the knowledge I have) to configure a highly optimized system. All distributions can be optimized, but it can be harder for some of them, because they are designed to be fully integrated or use specific tools for specific use cases.
Would you accept that people who are Linux enthusiasts, and have spent thousands of hours on it as a kind of hobby, might know more than someone who has had only brief contact with Ubuntu?
Linux itself doesn't really care what hardware you've got. Any memory model/speed is fine. You need a hard disk, doesn't have to be huge or new, and can even be slightly faulty. Any chipset is fine (in fact almost any instruction set is fine).
Now, whether or not specific features of your motherboard will work with Linux is a different story. But since you are on Win XP it is kind of irrelevant, because XP has the same challenges most Linux distros do (you need to find and install drivers yourself). You've probably already found these for your Win XP install, and you can probably find some for your Linux install too.
Alpine is a lightweight distro designed to run on hardware with limited resources (not necessarily old), although I don't recommend it for beginners. It's just an example of something you could install on a 90s PC with old hardware and tiny memory/disk, and it would be really performant compared to Win XP.
The big advantage, though, is security updates. If you don't understand why this matters, you should look into it.
The advantage for you: you can play some of your Steam games again (provided they have a Linux version).
This uses Alpine Linux, which is not GNU/Linux.
https://www.alpinelinux.org/about/
> Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
> Debian stable is pretty minimal to be honest.
To quantify that, 128MB RAM minimum, 512MB RAM recommended, 2GB disk space. With OP's requirements (HTTP, ssh, telnet) something close to the minimum RAM is almost certainly fine. If you need more bleeding edge tools, Debian Sid isn't that much heavier.
Alpine Linux can be smaller still: maybe 150 MB of disk space and 64 MB of RAM.
The article is largely correct (especially the sequel). Simplicity is not merely one value among others; it is the primary and most important advantage old Linux used to have.
Pre-Poettering Linux used to work pretty well. You had to do everything manually, but everything was easy to do.
After 10 years of "progress", wifi and suspend still don't work on Linux, but now you have no idea where to even start fixing any of it, and if you dare to complain, 10000 redditors laugh in your face.
I'm very happy that some resistance still exists, in the form of projects like musl, Alpine Linux, the suckless project and so on. But they're few and the enemy is many.
Short answer: no. And it is a general problem with binaries on Linux, not just Ada: GNU libc isn't designed to be statically linked. I plan to write a longer post on how to solve the issue; I landed in this pitfall some time ago too. :) But for now, a short version:
Use a Linux distribution that ships a different standard C library, like Alpine Linux. It uses the musl library, which can be statically linked, so you can produce fully static binaries that should work on most Linux distributions (see the sketch after this list). Caveat: be ready for your binary to be a lot bigger than it is now, at least a few MB per needed library, because all the other libraries will also be compiled statically. If you try to do this with any graphical application, you can expect your binary to grow by as much as 1 GB. :) Thus, this way is good only for very simple applications. Also, even this way doesn't guarantee that your binary will work everywhere.
Deliver your application together with GNU libc. This is very unhandy, because you need to ship not only the library but also its dynamic loader, and force users to run your application only through that loader. This should work everywhere, with a better chance than way #1.
The normal way: create native Linux packages for each distribution. At the start it is a bit hard, but once you learn how to do it, everything will work without problems. :) This is the only way with a 100% chance that your application will work on the selected distributions, because you can absolutely forget about creating one binary that works on every Linux; there are too many difficulties to solve with only one binary. Personally, I started a small project to address this concern: the Ada Linux Packages Repository. But it is at a very early stage, so it may take some time before it is useful for everyone.
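To illustrate way #1, here is a minimal sketch of a fully static build on Alpine Linux (the program and file names are just placeholders; on Alpine, gcc links against musl):

```sh
apk add build-base            # gcc, musl-dev, make, etc.

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from a static binary"); return 0; }
EOF

gcc -static -o hello hello.c  # -static pulls musl in statically
./hello                       # no dependency on any system libc
```

The resulting binary can be copied to most other distributions and run as-is, since it no longer needs the host's C library.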
https://www.alpinelinux.org/
Alpine Linux. Runs in like 70 MB of RAM for the full install, 220 MB if you want XFCE. The only problem with Alpine is the lack of packages. Based on musl libc and busybox.
The problem is that an AppImage is a filesystem in user space. It makes it easier to ship the program, but it doesn't solve the incompatibility with the "standard" C library. 😉 You can test this yourself, ideally in an Alpine Linux container:
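A minimal sketch of such a test, assuming you have a glibc-linked binary at hand (the file name is a placeholder):

```sh
# Try running a glibc-linked binary inside a musl-based Alpine container.
docker run --rm -v "$PWD:/mnt" alpine:latest /mnt/your-glibc-binary
# This typically fails with something like "/mnt/your-glibc-binary: not found",
# because the glibc dynamic loader doesn't exist on a musl system.
```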
A bit better could be to use Snap or Flatpak, but both are essentially the same as the "normal way", just a different kind of package. Flatpak at this moment is focused only on GUI applications, and Snap works on a few distros and isn't much liked by the Linux community.
Given that Alpine Linux's latest release also ships cool stuff like CephFS out of the box, I am quite interested to see what this layer on top of it is like.
Alpine? It's used a lot in server-based appliances, so I imagine the hardening is pretty well documented, and a lot of people recommend it, so I think there's a fair user base. Very minimal; it seems to fit all your criteria.
Alpine uses BusyBox instead of the GNU coreutils and is built against musl rather than GNU libc. It proves that there can be a distro with some level of popularity that does not rely on GNU. Linux is still just the kernel, and it isn't superglued to one set of system utilities such as GNU's.
Yes you can; Alpine Linux is a non-GNU/Linux distro.
On the other hand, if you write a POSIX-compliant kernel, you can use the GNU tools to build on it and create a GNU/whateverTheNameOfYourKernel.
Have you tried Alpine Linux? It has the fastest package manager IMHO. It's super fast and resource friendly. The one thing stopping me from using it as my main OS is that it doesn't support AppImages, and I use a few (20+) of those, otherwise...
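For anyone who hasn't tried it, a quick taste of apk, Alpine's package manager (package names here are just examples):

```sh
apk update               # refresh the package indexes
apk add htop             # install a package
apk search -v 'nginx*'   # search, with descriptions
apk del htop             # remove it again
```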
I run it on my laptop, it's also one of the top distros for IoT, embedded systems, and containers. Basically, anywhere where you'd be better off without GNU/bloat.
Here's the download page, it's fully functional. If it's not working right for you there's probably something wrong with your browser.
I think you are confusing a Linux distribution with a Linux installation.
The Linux installation is what actually gets installed onto your PC, and this is what you measure in terms of size, in particular, how much disk space is used, and how much memory is used when it is running.
A Linux distribution is like an online web service: it provides everyone the packages they need to download so they can install that distribution onto their PC. Back in the late 90s, when Linux was first taking off, distributions like Slackware, Red Hat, and Debian were also distributed on CD-ROM, and they tried to put as much onto each CD as they could in order to minimize web access, since lots of people still used dial-up back then. But once installed from CD-ROM, they only took a few tens of megabytes on disk.
If you really want a minimal installation, Guix System or NixOS are probably your best bet, because they both use what are called "purely functional package managers": a very modern approach to dependency management that computes precisely the exact software you need, ensuring that software you do not absolutely need is never installed.
Another option you might want to check out is Alpine Linux, which is designed for containerized and virtualized Linux installations (like Docker) using the smallest amount of memory possible.
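If you want to see just how small Alpine is, here is a quick check using Docker (the image tag is the public default):

```sh
docker pull alpine:latest
docker images alpine:latest          # the base image is only a few MB
docker run --rm alpine:latest cat /etc/os-release
```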
Alpine Linux is what I'd go for. Here is an excerpt from their About section:
>Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
>
>Binary packages are thinned out and split, giving you even more control over what you install, which in turn keeps your environment as small and efficient as possible.
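That package splitting is easy to see in practice: docs and development files live in separate subpackages, so you install only what you need (curl here is just an example):

```sh
apk add curl        # just the tool and its library dependencies
apk add curl-doc    # man pages, only if you want them
apk add curl-dev    # headers, only if you build against libcurl
apk info -s curl    # show the package's installed size
```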
The OS itself requires very, very little RAM; Alpine Linux can run on 130 MB alone.
https://www.alpinelinux.org/about/
RAM is generally needed by applications, and AAA games are generally stupidly compute-heavy applications.
https://static.techspot.com/articles-info/1785/bench/VRAM2-p.webp
6 GB is the norm for the latest generation of AAA games.
Alpine Linux has 32-bit ISOs.
https://www.alpinelinux.org/downloads/
There is also a way to turn an existing Alpine installation into postmarketOS. This probably won't work for x86, as there are only x86_64 packages in the postmarketOS repository.
https://wiki.postmarketos.org/wiki/Existing_Alpine_installation
I've been using GNU/Linux for about 5 years now. I originally discovered Ubuntu and distro-hopped (in VMs) for about 2 or 3 months until I found Arch, so at that point I knew nearly nil about Linux. I dove right in and broke installation after installation until I figured out how the hell it worked. (Same thing I did with Windows: crashed several of my dad's PCs before I knew how to reinstall Windows, hehehe!) In the last year and a half or so I moved to Manjaro out of sheer laziness. Over the years I've searched the Arch forums. Lately I've been experimenting with Alpine Linux as a desktop (I'm a diehard KDE fan), and although it's not quite ready for that, I managed to get a mostly functional setup (a few things, such as wifi, only worked via terminal), and I only managed it with the help of the Arch forums, despite the drastic differences between the two!
Sorry, long post :)
What you've said is very true.
Performance-wise, having bloated binaries has a negligible impact on modern machines, especially desktops. An IPv6-enabled dhcpcd is what, 10 KB bigger than one that's not? It takes like one more microsecond to load. Meh.
In some industries that involve embedded Linux, though, 10% (not an actual number) off every package has a real impact on the size of flash memory. Most routers have a 32 MB flash to hold the whole system. Now it doesn't sound so stupid to debloat your Linux, eh?
In the server domain, where containers (Docker) are involved, speed and size all play a major part. You could have ten web servers all running concurrently on one machine! Alpine is a popular choice as a container base because it can be as small as 8 MB.
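A quick way to see the size difference for yourself (tags are the public defaults; exact sizes vary by release):

```sh
docker pull debian:stable-slim
docker pull alpine:latest
docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}' debian:stable-slim
docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}' alpine:latest
# Alpine typically comes in at a few MB, several times smaller than
# even the slimmed-down Debian base image.
```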
As in, why the hell do some of us care so much about this on desktop? I admit that it's mostly an obsession. Other reasons include not wanting some specific hard dependencies in those popular distros (looking at you, systemd), but I really don't want to start another war here. And less functionality means less chance of something breaking.
Speaking of obsession, why do you guys care whether your system is open source anyway? Why not Windows?
No reason. Same reason.
My server does very light duty running:
I actually just run bare-metal Alpine Linux (a ridiculously lightweight Linux distribution) and have services mostly in Docker containers. VMs seem to be more popular here, but since everything I run works fine under Linux and containers are more lightweight, I haven't bothered.
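A minimal sketch of that kind of setup on a fresh Alpine host (Alpine uses OpenRC as its init system; the nginx container is just an example service):

```sh
apk add docker
rc-update add docker boot    # start the Docker daemon at boot via OpenRC
service docker start
docker run -d --name web -p 8080:80 nginx:alpine
```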
Alpine: it's a specialised distribution aimed at ultra-light size and secure systems. It is often used in Docker containers due to those features. It is managed differently than Debian-based distributions, but you rarely need to fiddle inside containers if you use pre-packaged ones.
Debian works with different levels of updates: Old-Stable, Stable, Testing, and Unstable.
- Old-Stable, you guessed it, is the previous Stable and is maintained for some time after Stable release.
- Stable (Current: 9.5 Stretch) is the one preferred for servers, since it is not bleeding-edge but focused on stability and reliability. A new Stable is released roughly every two years.
- Testing is where future Stable lives, you can have stuff breaking from time to time.
- Unstable (perpetually known as Sid) is where the newest, baddest packages arrive. You don't want that.
- Backports are a special repository where some Testing/Sid packages end up faster than they would through the classical release cycle. They are thoroughly tested, as for Stable, so they're supposedly safe.
All this is set in APT's preferences, in /etc/apt/sources.list. See the Debian docs for how to modify this. Typically, you want the Stable, Security, and (in some cases) Backports branches active for a server.
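A minimal sketch of what that can look like on a Stretch server (mirror URLs are the standard defaults; nginx is just an example package):

```sh
# /etc/apt/sources.list for Debian 9 "Stretch" with Security and Backports:
#   deb http://deb.debian.org/debian stretch main
#   deb http://security.debian.org/debian-security stretch/updates main
#   deb http://deb.debian.org/debian stretch-backports main

apt update
apt install nginx                        # normal Stable package
apt -t stretch-backports install nginx   # opt in to the Backports version
```

Backports are never pulled in by default; you have to select them explicitly with `-t`.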
No, not really; it means you have to be aware of what your application actually needs.
https://www.alpinelinux.org/about/
The above image is basically the smallest of the small, and you can use its built-in package manager to safely install additional tools as needed. The only "gotcha" we had wasn't even related to the image: we thought OpenJDK would be safe to use, when we actually needed OracleJDK for some core image APIs.
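A minimal sketch of that pattern, assuming a self-contained JAR (the image tag, package, and file names are placeholders; `--no-cache` keeps the layer small by not storing apk's index):

```sh
cat > Dockerfile <<'EOF'
FROM alpine:latest
# Install only what the application actually needs.
RUN apk add --no-cache openjdk8-jre
COPY app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
EOF
docker build -t myapp .
```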
The only way I could even come close to agreeing with you is if your organization wasn't using a container orchestration service (Kubernetes, Docker Swarm, or Fargate) and didn't have any in-house automation to cover the basics that those tools/services offer.
I think I meant that it makes use of some BSD packages/ideas more than most Linux distros do (Alpine does this even more so).
> openbsd devs call gnu : Giant, Nasty, Unavoidable.
I can get that. Really, the only thing I've found myself liking a lot from GNU is Emacs. Everything else is just there, and honestly I've yet to run into anything I can really complain about (besides GCC being what it is...).
Absolutely! I use Alpine Linux all the time for Docker containers, no glibc there and it's a fully functional system that works just as well as a glibc-based environment for many use cases.
I have considered Puppy. It's a nice complete system for old PCs, but I excluded it from my choices for some reason, maybe because it contained too much unnecessary stuff for me.
Anyway, I just got confirmation that Alpine Linux runs on a Pentium III CPU. It does not need any special instructions from the CPU, so it's perfect. On top of that, it even uses musl instead of glibc. So with Alpine I go. If it fails, then I'll choose between Puppy, Slackware, and Gentoo.
Thanks for reminding me about Puppy!