Just to make it clear - platinum membership in the Linux Foundation is $500k a year.
This is roughly what ONE senior developer at Google makes a year.
Google has used Linux on its servers from day one - for over 20 years.
LWN.net for general in-depth news, reviews, politics, and big changes within the community. It is very balanced and factual, and therefore not very fast-moving. (See the Debian & systemd news there.)
For Multimedia stuff → http://libregraphicsworld.org/
For reviews e.g. their video editing series: http://www.ocsmag.com/
The SCHED_FIFO scheduling class is a longstanding, POSIX-specified realtime feature. Processes in this class are given the CPU for as long as they want it, subject only to the needs of higher-priority realtime processes. If there are two SCHED_FIFO processes with the same priority contending for the CPU, the process which is currently running will continue to do so until it decides to give the processor up. SCHED_FIFO is thus useful for realtime applications where one wants to know, with great assurance, that the highest-priority process on the system will have full access to the processor for as long as it needs it.
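For the curious, here's a minimal sketch (not from the article) of how a process requests this class on Linux; the priority value is arbitrary and the error handling is illustrative:

    /* Minimal sketch: put the calling process into SCHED_FIFO.
       Requires root or CAP_SYS_NICE. */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };  /* valid range 1..99 */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {  /* 0 = this process */
            perror("sched_setscheduler");
            return 1;
        }
        /* From here on, this process keeps the CPU until it blocks, yields,
           or a higher-priority realtime task becomes runnable. */
        return 0;
    }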
> Seriously, there are big transitions ongoing or soon to come in the Linux user space. Init/systemd discussed here, but later X/Wayland and also filesystems with btrfs (whose features may be increasingly used by other software, and become expected at some point in the future). Big transitions like this are never free and perfectly smooth, let's not kid ourselves. But at some point a majority will want to move forward, and it will happen.
>
> It's perfectly fine not to like any such big change. Any person OK with the previous situation will suffer some instability and changes (learning/retraining has some cost) and even lose some particular feature. This person has the right to be unhappy about the change. But then it is NOT OK IMHO for such a person to feel entitled to continuing support of the old platform, or even worse to coerce people into ongoing support by playing tricks like this GR. It's free software: be happy to benefit from the work of others, but this means at times that things won't go the way you want.
Exactly. They were most likely breached via CVE-2015-7547.
Did I already say you shouldn't use Linux Mint?
Well, here I am saying it again: Don't use Linux Mint! In fact, don't use any of these distributions that **do not have a dedicated security team**. Please, just don't!
This again just shows that maintaining a distribution takes more than just developing your own desktop packages and creating ISOs. It's a matter of providing something people can rely on!
None of these "I make my own Linux distribution because I can" distributions have their own security team.
FYI, the vulnerability was fixed in Red Hat, Debian, Ubuntu, Fedora, and openSUSE the day it was announced! Simply because these distributions have dedicated security teams!
Go ahead and downvote me into oblivion. But I will continue to repeat what I have said multiple times here: Linux Mint is garbage! Don't use it. It's a **FrankenDebian** by design!
I follow HackerNews because everything remotely important in the IT field makes it onto the front page. I also frequent Linux subreddits and sometimes Slashdot (although I quickly get disappointed by the sheer amount of low-quality submissions on it now).
I plan to subscribe to LWN because their writing is excellent.
Edit: I totally forgot to mention the Linux Action Show and TechSNAP, made by the podcast network Jupiter Broadcasting.
Everything is a trade-off, there is no "best" language. On the other hand, here are some trade-offs I see:
`parseInt` does not return an integer, despite the documentation lying to your face and saying that it does. Proof: `parseInt("-0")` does not return the integer 0.
What if someone else starts submitting patches to "polish" or "add new features to" the non-systemd udev? Will they just ignore them altogether if they have nothing to do with systemd?
Also, with Canonical being heavily invested in Upstart, I can see this attitude backfiring greatly. Not to mention Debian being anti-systemd because of how it locks you to Linux while Debian has other kernels available; they've even gone so far as to drop GNOME (in favor of Xfce) as the default, partly because of Poettering's lust for a systemd dependency.
As much as I hate to say it, no Debian or Ubuntu backing means your shit will lose relevance to a massive segment of Linux users, at least if you make it so you can't use udev without pulling in a package they don't want. Fork city if that ever happens.
Every time I read something written by Poettering he comes off as an extreme dick who won't acknowledge something as good unless he had something to do with it. And not in the funny comedic sense like good ol' uncle Linus.
The first mobile company in the list of top contributors is Samsung at 5th. Intel and Red Hat are first and second, and 4.2 even contains a gigantic new desktop GPU driver from AMD.
Yes!
"net.core.default_qdisc = fq_codel"
This is a great step in the right direction in the fight against "bufferbloat". Read more here: LWN (edit: typo)
It explains stuff. Especially for kernel development, there is no better or more updated description of how the kernel is evolving. Their "kernel index" page is the go-to place if you want to know what happened lately in a given subsystem.
Reading an LWN article is infinitely more convenient than reading all the messages on LKML (thousands of messages every day; it's impossible to keep up), and goes way beyond the mere listing of features you'd find in release notes. Moreover, the writing style is very accessible and you can actually learn how something works (before, say, going head-first into the source code).
It is also a place where useful kernel-related discussion can happen, and steer future development; in his talk "How to write a good kernel API" at FOSDEM 2016, Michael Kerrisk invites more kernel developers to write articles for LWN as a complementary activity to discussions on LKML, to reach a wider audience before deciding, say, the behavior of a new syscall.
Last but not least, coverage of kernel-related summits and conferences; take as an example LWN's coverage of this year's LSF/MM summit. In these events kernel developers gather to discuss future directions, and Jonathan Corbet (LWN editor) goes there explicitly to take notes and report on the website, so that everybody can know what's going on.
EDIT: spelling.
I recommend reading this article "XFS: the filesystem of the future?" http://lwn.net/Articles/476263/
XFS was always the better filesystem, designed for scalability, capacity, and performance from the start. Meanwhile, the ext family was never designed for anything modern and was never redesigned; it just got patched, patched, and patched without fixing the root issues. Ext4 doesn't even support basic features like dynamic inode allocation.
One of the reasons people kept using ext4 was that it had much better metadata performance than XFS, which is important for desktops and workstations (e.g. untarring, compiling). But XFS has been improving quite a lot in recent years, and they have fixed the metadata performance issues. So ext4 has lost the advantages that made it attractive, and XFS looks like the overall better filesystem. And now that Red Hat has bet on it, XFS has more development activity than ext4.
> This is simply not true and I see it being repeated over and over again. I did two things: installed linux-image-generic (AFAIR) and enabled "level 5" updates in update manager settings. From that point onwards, each kernel update was displayed in update manager and installed properly.
It's not enabled by default, which renders the whole concept of security updates pointless.
Yes, for God's sake, you can enable all that stuff to get the updates. But the arguments I made on lwn.net still remain valid.
You forgot the best one (from the top post):
> When lots of competitors attack a project on purely political grounds, you have to wonder what THEIR agenda is. At least we know now who belongs to the Open Source Tea Party ;)
Which caused Lennart to wear this shirt for his talk at linux.conf.au.
Patches aren't automatically accepted -- they are reviewed, and the process is fairly complicated. There's a fair deal of quality control involved. An attempt to sneak a backdoor into Linux appears to have actually been made, back in 2003: http://lwn.net/Articles/57135/
It's also true that the Linux kernel doesn't have ~~the best~~ the least controversial track record in terms of security issues. They have been criticized in the past: http://seclists.org/fulldisclosure/2008/Jul/276 .
Phoronix tends to be ad-heavy (it's how they make a living - don't like it, use adblock), a bit heavy on the sensationalism, a bit light on the details, and very quick to print unconfirmed news.
Lots of people see these as inherently negative traits.
In my opinion, as long as you're aware of these things, Phoronix can be a great source for news you'll be unlikely to see elsewhere, or unlikely to see elsewhere first.
Among Linux news outlets, LWN and its intrepid editor Jonathan Corbet undoubtedly carry the most respect. LWN is highly technical, routinely conveys nuanced detail accurately, and provides considered commentary from a perspective steeped in FOSS community culture and convention. But LWN doesn't typically cover the latest whizbang Linux game.
I'm not a fan of his either, but I figure it's worth pointing out something good he's done; his free book What every programmer should know about memory is well-regarded. Here's the pdf version.
More details: http://lwn.net/Articles/637613/rss
>> It seems it was about time for another certificate authority horror story; the Google Online Security Blog duly delivers. "CNNIC responded on the 22nd to explain that they had contracted with MCS Holdings on the basis that MCS would only issue certificates for domains that they had registered. However, rather than keep the private key in a suitable HSM, MCS installed it in a man-in-the-middle proxy. These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons. The employees’ computers normally have to be configured to trust a proxy for it to be able to do this. However, in this case, the presumed proxy was given the full authority of a public CA, which is a serious breach of the CA system."
I felt lost for a long time. That book helped a lot. Some of it is out of date (for example, they updated the first part that loads from asm to arch-dependent C between 2.6 and now), but the best place to start is probably there. It's easy to grab one of the stable or longterm branches and start poking around. Remember, it is the thousand-eyeballs approach, not the thousand-fingers approach. Read the code and lurk the lists.
init/main.c is probably the place to start; you don't need to know the ins and outs of every device driver, but you need to get the core if you want to play around.
Also, don't get too hung up on the kernel itself. Glibc isn't kernel code and is just as important.
He left Red Hat several years ago and hasn't been that active in glibc development since. Further, in 2012 (http://lwn.net/Articles/488778/) the steering committee for glibc also stepped down. glibc is now a community project.
In your `.bashrc`: `bind '"\t":menu-complete'`
It's called cyclic tab completion. You're welcome.
edit: wait, that's not what you wanted. You wanted this:
In your `.inputrc`: `set show-all-if-ambiguous on`
You're still welcome.
>run man-in-the-middle attacks with ease
Yes, this is possible if an attacker obtains the private key.
>decode all the traffic being sent to the devices
Actually, no; ssh uses ephemeral session keys that are exchanged via the Diffie-Hellman protocol, which provides perfect forward secrecy. The private key is only used for authentication, not for key exchange, so a purely passive attacker (or one reading transcripts) with access to the private key will not be able to decrypt the traffic. Some links:
http://lwn.net/Articles/572926/
http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
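To illustrate the idea, here's a toy sketch of a DH exchange (toy-sized numbers for readability; real SSH uses large groups or elliptic curves, and the host's private key only signs the exchange):

    /* Toy Diffie-Hellman exchange; illustration only, not cryptography. */
    #include <stdio.h>
    #include <stdint.h>

    /* (base^exp) mod m by square-and-multiply */
    static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
    {
        uint64_t r = 1;
        base %= m;
        while (exp) {
            if (exp & 1)
                r = r * base % m;
            base = base * base % m;
            exp >>= 1;
        }
        return r;
    }

    int main(void)
    {
        const uint64_t p = 2147483647, g = 16807; /* public prime and generator */
        uint64_t a = 1234567, b = 7654321;        /* ephemeral secrets, never sent */
        uint64_t A = powmod(g, a, p);             /* client -> server */
        uint64_t B = powmod(g, b, p);             /* server -> client */

        /* Both sides compute g^(ab) mod p; an eavesdropper sees only A and B. */
        printf("client: %llu\n", (unsigned long long)powmod(B, a, p));
        printf("server: %llu\n", (unsigned long long)powmod(A, b, p));
        return 0;
    }

Once a and b are discarded at session end, even someone who later steals the host's private key has nothing to decrypt a recorded transcript with; that's the forward secrecy part.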
Then you should be happy with what GNOME is trying to do with a runtime system and an SDK, so that the same binary will work regardless of distro - and then you shunt all apps to an app store. It's not completely ideal. See these two posts:
http://blogs.gnome.org/mclasen/2015/01/21/sandboxed-applications-for-gnome/
and
http://lwn.net/Articles/630216/
We're trying to solve that problem. Distros are what is holding us back now, and we need to make it easier for app programmers to have a relationship with GNU/Linux people without having to go through a distro.
I loved Maemo (open source software) and would still be using it if it had been released on a newer phone.
Jolla is not Maemo; it is closed-source software as far as I've seen. Why is this more compelling than Android/Firefox OS/Ubuntu Phone?
Lots of proprietary software has been and will be layered over Linux; supporting it just because it has absorbed Linux as a low-level building block doesn't seem particularly useful or relevant.
EDIT: This was helpfully linked below - http://lwn.net/Articles/561463/
However, this:

> Parts of the Sailfish Silica QML components were released under a BSD license with the alpha SDK; the native (C++) code parts will follow "soon". Silica is what is used internally as well; there is "no secret magic", as everything uses the Silica API.

isn't really enough.
There either is or is not enough code for someone to release an open-source distribution of Jolla, i.e. a working phone/tablet environment like the significant numbers of Android distros coming out of xda-developers.
While I think it's great that Sailfish is helping Qt development I don't see enough code release for me to buy a phone or get very interested in general.
EDIT2: As someone rightfully points out below, Maemo wasn't completely free either, which reinforces my point.
The community worked for years to meet Nokia halfway and build a free ecosystem only to have Nokia basically walk away as their business model changed.
Qt ended up much stronger for Nokia's involvement so all good, but it's a good warning for anyone getting excited about sailfish.
An even better answer than the FAQ is in the linked LWN article (the subscriber link comes from the Conservancy announcement).
tl;dr: vmkernel includes a module (vmklinux) which contains relatively large parts of the Linux kernel (not just an implementation of the internal APIs, but the Linux code as well; that's why Christoph Hellwig could sue). These are needed by vmkernel to use Linux storage drivers. The plaintiff argues that vmkernel has become a derivative of the Linux kernel by virtue of including large parts of e.g. the Linux SCSI layer - that is, that the vmkernel+vmklinux pair is not the "mere aggregation" which is allowed by the GPL. It's of course a lot more complicated to prove, but that's the basic assertion.
Yep... again (I'm being a drag trying to bring seriousness back to a reddit thread, aren't I? :-), if you look at the situation, here's what seems to have actually happened:
In other words, we're making a mountain out of a mole-hill, and we probably should move on (probably should go to Florian and see if we can help him fight software patents in a more effective way :-)
Nope. This was a big complaint against it when it launched, but the RPi foundation somehow managed to come out of it looking like they were fully open source. The BeagleBone Black was more open, but its GPU isn't fully open sourced either.
Well, it's not exactly what you're asking for, but something really bad (TM) happened to a very popular mirror. Luckily, as far as anyone knows, their Arch repos were unmodified. However, with a distro that has package signing (nearly ANY other distro), such hacks are useless unless you also have that distro's package signing key. This is why package signing is very important.
> People also object to systemd taking over maintenance of udev and then stating that they would eventually drop support for other init systems,
The maintainer of udev was a co-inventor of systemd too. The inventor of udev, Greg KH, is also working on systemd. So there was no takeover; udev was made and maintained by systemd developers.
Besides that, the systemd developers explicitly made sure that people wanting to use udev without systemd could do so: http://lwn.net/Articles/490413/
Yes, there will be a change when kdbus is merged, but nothing more than can be fixed by non-systemd distros making a simple kdbus setup.
This is the key paragraph in the article that people will want to read, since it's the most common question I've seen about kdbus:
> One of the initial questions was, inevitably, why does this functionality need to be in the kernel in the first place? The kernel already provides a number of interprocess communication primitives, and tools like D-Bus have successfully used them for many years. See this message from Greg for a detailed answer. In short, it comes down to performance (fewer context switches to send a message), security (the kernel can ensure that credentials passed with messages are correct), race-free operation, the ability to use buses in early boot, and more. There do seem to be legitimate reasons to want this kind of functionality built into the kernel.
All in all, this is a very nice article and answers a lot of questions that many people have (especially the one above, as I mentioned).
It's nice to see that the discussion is mostly technical and is remaining civilized, hopefully we can see it merged sometime in the next few months; I'm interested to see how it'll all work out.
I believe LWN allows me to share a working link to the LWN article with reddit: http://lwn.net/SubscriberLink/616241/80f49c13eaf6da28/
Consider subscribing to help pay for the next article! LWN is an important part of the Linux community and produces very high-quality content.
This will be normal in a few years due to persistent memory - basically RAM that retains its state without power.
There are some Linux kernel devs already working on it: http://lwn.net/Articles/591779/
XFS has far better multi-threaded performance than the ext family. And seriously, even your browser is multi-threaded these days.
While it excels at large storage pools, it also behaves better at small scale.
This is old (two years) but a good, solid primer on why XFS is a good choice: http://lwn.net/Articles/476263/
They're supposed to release it immediately as per GPL requirements. Somehow they got the idea they can wait 90 to 120 days. I love the reply from Matt here responding to a guy from HTC asking for help.
Back when I used to dabble in Linux kernel development, lwn.net was THE source for information on how to get started with 2.6 kernel development, but it was when I first posted my kernel project to the kernel mailing list that I really developed a respect for the publication.
http://lwn.net/Articles/111247/
The editor not only wrote a blurb about what the patch was for, but then went into details about the implementation and how I was (ab)using the LSM API to add file permissions. Details that mean he actually dug through the kernel module to figure out how it worked???
The level of detail and research put into this one little article is only one piece of the much larger picture. Lwn.net consistently puts this much original work and more into the articles they put together. For this reason, it is completely worth the $50 a year or so for 52 quality-packed weekly editions (and other special features spread throughout).
D-Bus is the most widely used IPC protocol on Linux; it's both powerful and versatile and has libraries available for almost any programming language on Linux. It didn't become this successful by being the lame, half-assed thing Patrick's playground tries to make it look like.
With kdbus things begin to get really exciting, IMO. With kernel integration it will become the first de facto standard IPC protocol for Linux, and the new implementation brings huge advantages.
IPC will become a lot faster, typically 10 to 100 times for generic D-Bus use cases. But it will also become more powerful, with more features and automatic optimizations for transmitting big messages that aren't viable with D-Bus.
It will be near identical to D-Bus to use, and compatibility is maintained through a proxy, which is planned to disappear in time.
If you use something else, you can of course keep using that.
http://lwn.net/Articles/580194/
http://kroah.com/log/blog/2014/01/15/kdbus-details/
kdbus is a huge step forward, and will make it possible for programs to interact with ease in a standardized way that is very powerful.
> I'm a completely non-technical user and I just do update and dist-upgrade, pretty much daily.
Look, this isn't the point.
The problems are manifold:
I'm sorry, but Linux Mint is in every way incredibly unprofessional.
> While the license conditions for the GPL might have affected some projects negatively, I don't think it applies to gcc much. Extremely few people are proficient enough to make changes to the gcc source code, let alone make enough large changes to justify a separate fork.
In this case it might affect GCC negatively. It's not about forking GCC but about the fact that competing, non-copyleft software has caught up. That mailing list thread is part of an ongoing discussion of whether to include a feature in GCC that exports the abstract syntax tree so that software like Emacs could use it for code analysis. Stallman worries that this feature also enables non-copyleft software to use GCC as the compiler backend. Others argue that not including this feature will unnecessarily cripple GCC. See also: http://lwn.net/Articles/629259/
> What in GPLv3 did the *BSD people object to?
The BSDs have always had their problems with the GPL, as they see copyleft as a restriction and not as additional freedom. The deal breaker was GPLv3's anti-Tivoization provision, which effectively prohibits shipping GPLv3 software on hardware that prevents users from installing modified versions of it.
> The PVS-Studio analyzer has a V595 warning that detects errors where a pointer is dereferenced first and then checked for NULL. ... There were almost 200 V595 warnings.
There was an actual real-life kernel vulnerability based on that exact type of bug: http://lwn.net/Articles/342330/
I think that after that, kernel developers raised the white flag and decided to always compile it with -fno-delete-null-pointer-checks, but I might be wrong.
By the way, an especially neat thing about this particular bug is that it doesn't even dereference a possibly invalid pointer before the check, just computes the address. Well, computing an invalid address (like, computing an offset from nullptr) is still undefined behavior, so there you go.
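A hypothetical sketch of the pattern (not the actual kernel code from that CVE): the dereference before the check lets the compiler assume the pointer is non-NULL and delete the check as dead code, which is exactly what -fno-delete-null-pointer-checks prevents.

    #include <stdio.h>

    struct device { int flags; };

    /* Dereference first, check second: after the dereference the compiler
       may assume dev != NULL and remove the check entirely under -O2. */
    static int get_flags(struct device *dev)
    {
        int flags = dev->flags;  /* UB if dev is NULL */
        if (!dev)                /* candidate for deletion as dead code */
            return -1;
        return flags;
    }

    int main(void)
    {
        struct device d = { 7 };
        printf("%d\n", get_flags(&d));
        return 0;
    }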
Kay Sievers plans to use kdbus as a transport for udev. It will then depend on having a user kdbus server running to push messages around. Systemd developers are the first (and currently only) people with one.
It's a bit of a pain having to change the way udev works. On the other hand, when you read the LWN articles, you can see where they're going. Articles one and two.
The push here is for one transport that handles point-to-point as well as multicast messages with reliable ordering and fast delivery. There's also discussion of gigabyte-sized message streams, and I know high throughput is important for the automotive industry. Those guys have already tried a few solutions aiming for that.
Add in the kdbus support for LSM security, memfd and its sealable descriptors, the push across all those projects for deployable applications in containers ... It kind of shows their big picture of hotplug device support being easy to integrate with an application and receiving hotplug notifications by subscription and security policies. Can't do that launching bash scripts on connect, though that doesn't have to go away.
Linus knows this:

> The other problem is the "permission from maintainers" thing: I have an ego the size of a small planet, but I'm not always right, and in that kind of situation it would be a total disaster if everybody had to ask for my permission to create a branch to do some re-architecting work.
Source: lwn.net
Y'all seem to be unaware of NixOS and other highly forbidden, arcane developments in the witch-kitchens of heretics.
Essentially what Michael Niedermayer is doing, yes. People decided to run away from his leadership which in turn made him step up his game.
http://lwn.net/Articles/650816/
Scroll down to the table, the numbers speak for themselves.
> The general principle at work here is that you should only test explicitly against `None` if you really want that.
I'm not entirely sure I agree. This kind of code led directly to the false midnight bug.
Fundamentally, multithreading is about thread contexts which have different relationships with various memory systems.
You need to learn to visualize how various primitives act when executed on multiple thread contexts or cores on multiple processors which have multiple levels of cache on top of system memory.
Read and comprehend https://kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook-e1.pdf and http://lwn.net/Articles/250967/ parts 1-9.
After that, it's relatively easy to map those concepts to the JMM and the java.util.concurrent classes.
Then you just need to learn how to run code in your head simulating a multicontext computer.
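As a concrete starting exercise, here's a C11 sketch (C because those readings are C-centric; the same reasoning maps onto the JMM): run it in your head on two cores and ask what the release/acquire pair guarantees.

    /* Message passing across two thread contexts: the release store on
       'ready' guarantees the earlier plain store to 'payload' is visible
       to whoever observes ready == 1 via the acquire load. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int payload;
    static atomic_int ready;

    static void *producer(void *arg)
    {
        (void)arg;
        payload = 42;                                            /* plain store */
        atomic_store_explicit(&ready, 1, memory_order_release);  /* publish */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                          /* spin until published */
        printf("%d\n", payload);       /* guaranteed to print 42 */
        pthread_join(t, NULL);
        return 0;
    }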
That is not exactly accurate. They wanted to break away from it; there was unrest before Oracle even acquired Sun. http://lwn.net/Articles/303009/ Oracle didn't know what to do with it other than slap their name on it, and the fork happened due to a lack of action and the general distrust of Oracle by the devs in the wake of the Google lawsuit. They all knew it was a matter of time before the project was rejected by Oracle in one way or another, and they were right.
FWIW /u/corbet is the founder and main editor of http://lwn.net/ which is one of the better news sources out there about what's going on in Linux kernel development and software in general. It's probably mainly known by programmers and Linux users, but it's respected and not a small deal. (Disclaimer: I'm a subscriber.)
I learned it, and it helped me understand memory a lot better. Since assembly has a handful of instructions and you can use few registers, you have to know exactly what you are doing. So IMHO yes, it did help me. Two things that helped were Programming from the Ground Up and What Every Programmer Should Know About Memory.
> If you take away the reiteration and embellishment of LKML posts.
I think that the main value of LWN is that it lets people keep up with LKML without the need to read all the LKML discussions. The summaries written by Jonathan Corbet (LWN editor) are an invaluable resource. I will paste here a comment I already wrote on /r/linux on this subject:
> Especially for kernel development, there is no better or more updated description of how the kernel is evolving. Their "kernel index" page is the go-to place if you want to know what happened lately in a given subsystem.
>
> Reading an LWN article is infinitely more convenient than reading all the messages on LKML (thousands of messages every day; it's impossible to keep up), and goes way beyond the mere listing of features you'd find in release notes. Moreover, the writing style is very accessible and you can actually learn how something works (before, say, going head-first into the source code).
>
> It is also a place where useful kernel-related discussion can happen, and steer future development; in his talk "How to write a good kernel API" at FOSDEM 2016, Michael Kerrisk invites more kernel developers to write articles for LWN as a complementary activity to discussions on LKML, to reach a wider audience before deciding, say, the behavior of a new syscall.
>
> Last but not least, coverage of kernel-related summits and conferences; take as an example LWN's coverage of this year's LSF/MM summit. In these events kernel developers gather to discuss future directions, and Jonathan Corbet (LWN editor) goes there explicitly to take notes and report on the website, so that everybody can know what's going on.
> Mistakes may have been made, but your distribution racism is as absurd as it is irrelevant.
There is no such thing as distribution racism. Stop trying to pull the racism card when there isn't any racism involved. That's just ridiculous.
Mint has been criticized multiple times by other developers as well, for example from Ubuntu. And the mere fact that they are mixing their own packages with Ubuntu's or Debian's packages is just plain dumb, because they are violating one of the most important rules of using a Debian-based distribution: don't create a FrankenDebian.
> I just hate seeing crap like this get upvotes because it looks correct.
Well, maybe it's just my experience from almost 20 years of using Linux combined with the fact that I'm a Debian Developer. I don't pull this stuff out of my nose; I know how to properly maintain a distribution, and the way Mint does it is wrong. They withhold kernel and X.Org updates, don't issue security advisories, and mix binary packages from foreign distributions. That's just blatant flub.
> The security team (the people who look at security bugs, patch submissions, private data etc) aren't the same people responsible for hosting these things. It's the webops whose responsibility this falls under.
Linux Mint does not have a security team. I do not see any security advisories issued; I had a look earlier today and couldn't find anything. Look, every other major Linux distribution has security advisories, see: http://lwn.net/Alerts/ Linux Mint doesn't.
Also, since Clement took the website down himself, I don't think their "security team" and website team are different teams, it's just Clement in one person.
Very professional. But yeah, I'm a distribution "racist".
I would like to know as well. I get the impression they develop in C++ on an embedded variety of Linux. My guess is that at modern clock speeds Linux, perhaps with some RT patches, is near enough to real time that hard realtime systems don't matter so much anymore for this sort of stuff. I think they use triple-redundant hardware and voting instead of more expensive and less performant radiation-hardened parts. There have been AMAs, interviews, other discussions, and also historical job listings to look at.
There are indeed users, and in last year's article Gleixner did mention that he saw significant numbers of downloads from known manufacturers. http://lwn.net/Articles/572740/
I never thought that it would go this far.
This makes me think about how dependent we all are on a couple of Linux kernel maintainers and how much we take for granted. The same is of course true for many other open source projects.
Just FYI, this is slightly out-of-date. Since 11.10, there is a /run directory that replaces the old /var/run. Fedora 15 has it too and I presume other distros will make the transition as well.
Kind of like how RMS convinced the gNewSense hackers that dropping all proprietary firmware from the distro wasn't enough, he made them add a history rewriting feature to drop any and all mention of proprietary firmware from their version of e.g. the git log of the Linux kernel - because if a user can find reference to the existence of proprietary software inside of their free OS, it might plant the idea in their head to use it.
EDIT:
Did a bit of fact checking on my own post and found out that I was wrong about one thing, namely the name of the distribution - it wasn't gNewSense, it was Linux-libre. Note that the distribution is not named GNU/Linux-libre, even though it's a FSF project. WTF?
Read http://lwn.net/Articles/593918/ or an article by the developer behind it. The idea is that without sealing you can't directly use the memory as the sender could change it out from under you.
It's also not quite implemented in kdbus, but came ~~aboot~~ about because of kdbus.
Edit: I'm not Canadian.
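A minimal sketch of the sealing mechanism itself (memfd_create(2) and F_ADD_SEALS are the real kernel APIs; the rest is illustrative):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = memfd_create("msg", MFD_ALLOW_SEALING);
        if (fd < 0) { perror("memfd_create"); return 1; }

        write(fd, "hello", 5);

        /* Seal the buffer: nobody, not even the creator, can change it now,
           so a receiver can safely use the pages in place. */
        if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE | F_SEAL_SHRINK | F_SEAL_GROW) < 0)
            perror("F_ADD_SEALS");

        if (write(fd, "x", 1) < 0)
            perror("write after seal");  /* fails with EPERM */
        return 0;
    }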
And then there's a counterexample of a sweeping change that provoked highly heated discussion and later had to be reverted: devfsd (http://lwn.net/Articles/139404/).
HAL also proved to be fragile and was phased out by improvements in udev and DeviceKit (upower, udisks, etc.)
That said, flame wars over software have obviously been going on for a long time. systemd is sort of a unique one, though, because of its huge ambitions that challenge what many perceive to be essential maxims of Linux (cathedral versus bazaar and so on). We've simply never had anything as grandiose as a "basic userspace building block to make a Linux-based OS" before.
Here's the problem:
http://lwn.net/Articles/570485/
It's a violation of the DFSG to distribute the API keys, so Debian isn't going to do it. They are going to ship a broken Chromium package instead.
Use the binary chrome package from Google. That way you can watch Netflix.
It'll make it less likely that the OS is going to run on older hardware.
This is a discussion that's been going on for quite some time now; here is a nice longer read.
What filesystem are you using?
ext4 supports a secure-delete attribute, via `chattr +s SOME_FILE.txt`.
A normal `rm SOME_FILE.txt` will then securely erase that file.
Note that there are some caveats with journaling. See: http://lwn.net/Articles/462437/
Actually, plenty of open source developers get paid. For some projects, the vast majority of the work is professional. LWN regularly does an analysis of who wrote each release. Here's the analysis for Linux 3.0.
> Drepper has been farcically dickish since forever;
it takes a special kind of character for this sentence to be a severe understatement.
> it blows my mind that he was ever allowed to head a significant project.
personality aside, he knows his stuff and is quite a competent developer. His paper on memory (amongst a few others) is also a very useful read, highly regarded for its informational value.
Staying on top of trends is essential. I was always disappointed at how few of my co-workers paid much attention to new technologies, languages, etc. But does "every programmer" need to understand the concept of an 'attack surface', or a 'threat model'? Or to read the whole Google Browser Security Handbook, or know what a CSS image sprite is? Or precisely how DRAM is precharged and activated? Then there are similar articles on Unicode, floating point numbers, etc.
I know the titles are just hyperbole but I worry that it contributes to a condescending attitude from some towards others. "Oh, you don't know about the concept of least privilege? Then you have no business scripting a World of Warcraft UI mod."
The next Linux release (3.3) should also help somewhat. They're changing from buffering a certain number of packets to a certain number of bytes (it's called BQL - Byte Queue Limits).
Here is some benchmarking. Summary:

> The amount of queuing in the NIC is reduced up to 90%, and I haven't yet seen a consistent negative impact in terms of throughput or CPU utilization.
He's not really happy with LibreOffice:
> Shuttleworth has a fairly serious disagreement with how the OpenOffice.org/LibreOffice split came about. He said that Sun made a $100 million "gift" to the community when it opened up the OpenOffice code. But a "radical faction" made the lives of the OpenOffice developers "hell" by refusing to contribute code under the Sun agreement. That eventually led to the split, but furthermore led Oracle to finally decide to stop OpenOffice development and lay off 100 employees. He contends that the pace of development for LibreOffice is not keeping up with what OpenOffice was able to achieve and wonders if OpenOffice would have been better off if the "factionalists" hadn't won.
They basically had a "low-emissions" mode and a "high-emissions" mode in the engine control software. Theoretically, the low-emissions mode is used almost all the time, and the high-emissions mode is there for oddball cases (temperature extremes or partial malfunctions or whatever, I don't know). In actuality, the triggers for the low-emissions mode were tailored precisely to the standardized test procedure that is used for emissions tests, and the high-emissions mode was used almost all the time.
Here's a readable description and a somewhat technical talk by a guy who reverse-engineered the engine control firmware from scratch and worked out the modes. Here's a Reddit thread about the talk/article.
For mesh data, you could do a lot worse than https://github.com/KhronosGroup/glTF
For texture data, you could do a lot worse than the container format used by the PowerVR Texture Tool. (It does handle uncompressed, DXT, ETC and several other formats)
For data written by the game (save games, user configs), you could do a lot worse than using SQLite as a container. Even if you don't use the SQL-ness and just store a single binary blob, at least you can offload handling file integrity under the threat of process/OS crashes mid-write.
For game-specific configuration data, you could do a lot worse than using Lua as a file format. That was its original design goal, after all.
For a read-only container/archive format, you could do a lot worse than using zip files. It's one of the most widely supported archive formats in the world.
> inotify failing to recursively monitor subdirectories
inotify doesn't support this in the first place:
http://lwn.net/Articles/604686/
> Inotify does not provide recursive monitoring. In other words, if we are monitoring the directory mydir, then we will receive notifications for that directory as well as all of its immediate descendants, including subdirectories. However, we will not receive notifications for events inside the subdirectories. But, with some effort, it is possible to perform recursive monitoring by creating watches for each of the subdirectories in a directory tree.
This workaround, however, is fraught with potential bugs and data races (a file detected but deleted again before the new watch is made, hitting the watch count limit, the time it takes to create new watches recursively letting the event queue fill up and lose events, etc.) and is just difficult to do right, so I guess you can't really blame anyone for not volunteering their time for this.
So, talking about incompatibility: every inotify implementation that supports recursion is probably filled with its own unique set of bugs and quirks.
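For reference, here's a minimal sketch of the non-recursive case, watching the `mydir` directory from the quote above; the event mask and error handling are just examples:

    #include <sys/inotify.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        int fd = inotify_init1(0);

        if (fd < 0 || inotify_add_watch(fd, "mydir", IN_CREATE | IN_DELETE) < 0) {
            perror("inotify");
            return 1;
        }
        /* Blocks until something happens directly inside mydir; events in
           mydir/subdir/... never show up unless subdir gets its own watch. */
        ssize_t len = read(fd, buf, sizeof(buf));
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            printf("mask=0x%x name=%s\n", (unsigned)ev->mask, ev->len ? ev->name : "");
            p += sizeof(*ev) + ev->len;
        }
        close(fd);
        return 0;
    }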
So tell me, how would you do the above on a full-disk-encrypted drive? Your attacks won't work.
Here's some literature for you to read:
http://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html
http://lwn.net/Articles/359145/
http://theinvisiblethings.blogspot.com/2009/10/evil-maid-goes-after-truecrypt.html
http://lwn.net/Articles/259157/ http://lwn.net/Articles/301135/
I found these two articles talking about the issue (no famous RMS quote, but I didn't look too hard). A lot of vim plugins use Clang and I'm guessing that this is partially why.
FTFA, here is a summary of the arguments in favor and against such a scheme.
Is the argument about being unable to have the data on multiple volumes still going to be valid? I thought union mounts existed so we could merge directories from different volumes.
Because inotify can't sanely recursively watch a directory without races and potential data/event loss.
http://lwn.net/Articles/604686/
> Inotify does not provide recursive monitoring. In other words, if we are monitoring the directory mydir, then we will receive notifications for that directory as well as all of its immediate descendants, including subdirectories. However, we will not receive notifications for events inside the subdirectories. But, with some effort, it is possible to perform recursive monitoring by creating watches for each of the subdirectories in a directory tree.
These workarounds, however, are also fraught with potential bugs and data races: a file detected but deleted again before the new watch is made, hitting the watch limit, the time taken to create all the new watches recursively allowing the event queue to fill up and lose events, and more.
It's just very difficult to do right; I guess you can't really blame anyone for not volunteering their time for this.
ZFS is both mature and awesome. I use it on a FreeNAS box and it is very stable. Of your specific use cases, I can only directly attest to Plex working well (I haven't personally used the other software you list, but most of them should work straight away).
For reference, I built my NAS based off one of these embedded CPU/mobo combos: http://www.newegg.com/Product/Product.aspx?Item=N82E16813138393 in an old case with some spare RAM. In retrospect I should have spent more for ECC memory (with a supporting CPU+mobo...), but the performance of even this cheap hardware is more than sufficient for home use.
Edit: I remembered that LWN recently did a survey of some popular NAS distributions: http://lwn.net/Articles/631310/
The description was hyperbolic. Anyway, see here and the comments by Richard Moore on this G+ post (I can't seem to link to individual comments...).
Quoting the relevant one:
> +Boudewijn Rempt He did report the issue to qt security, and I explained to him that anyone using Qt in an suid binary was doing it wrong. He seems to think that since we're writing a library we're responsible for all ways it can be misused, which is not the case. Rather than the convoluted attack he found (which was a genuine bug, but not a security issue) there are simple ways to perform attacks. This part of his talk relies on the premise that some distros might install kppp in a configuration (suid) that hasn't been recommended for over a decade. If distros do this then that's a bug in the distro, not in Qt.
> The talk is a rerun of the presentation he gave back in May which I responded to at the time http://lwn.net/Articles/552300/ there are some genuine issues in it, but sadly he chose to overhype things.
I think that this article points out some major areas for potential improvement, but some of this advice is just not good in my experience.
`ls -Z` isn't that much harder than `ls -l`, and `/var/log/audit/audit.log` exists for a reason. I kind of feel that Lennart's goal is "speed at any cost"; while he absolutely has some valid points, even suggesting some of these things may lead to problems ("I removed rsyslog to make my system boot faster because I have no use for logs!" will happen at least once...).
Lots of people are recommending GDB/DDD here. I doubt they have actually tried to use them with the kernel: it's not exactly straightforward.
Go read Linux Device Drivers, EXCELLENT book. I'd recommend some VM software (so you can hang VMs rather than real machines 90% of the time) and a simple-ish editor, rather than IDE: VIM or Emacs. (I use the former).
(Oh, and git. You should definitely acquaint yourself with git if you haven't already. Best starting point for git.)
I think we have something similar in the Czech Republic: http://czfree.net/wiki
info taken from http://lwn.net/Articles/404342/
It is a network run by volunteers across the whole Czech Republic. It started at a time when we had a single company with a de facto monopoly on telecommunication services, and the only available option for home internet was the classic 56k modem, with connections charged by time spent online.
People started connecting their houses together with Wi-Fi, home-made free-space optical links, and various other technologies. Every part of the city had one or more small ISPs, and people learnt a lot about networking technology, Linux, FOSS, etc.
Made me remember this:

> -<odd>.x.x: Linus went crazy, broke absolutely _everything_, and rewrote the kernel to be a microkernel using a special message-passing version of Visual Basic. (timeframe: "we expect that he will be released from the mental institution in a decade or two").
LWN had a short mention of the "overthrow" a month or two ago. The comments section looks rather exhaustive and it might have more details (although I haven't looked myself).
I'm a programmer too. Here are a few things I've found over the years about SpaceX and programming that I really liked.
They did an AMA: https://www.reddit.com/comments/1853ap
They use a custom Linux kernel: http://lwn.net/Articles/540368/
My favorite part: "In his team, they have a full-size Justin Bieber cutout that gets placed facing the team member who broke the build"
> Even if you're not a C or C++ programmer you will need it one day to understand what's going on with a program or open a core dump.
That becomes obvious once you realize that almost all runtimes as well as many compilers of other languages are C or C++ at the core. Many programmers of dynamic languages can get by for a long time without the lack of this knowledge becoming an issue. But when that moment arises, they’re usually in for a lot of painful education.
Apart from GDB/Valgrind, this Drepper article: http://lwn.net/Articles/250967/ deserves mention.
I'm quite familiar with the German language, thank you.
>Basically, in German orthography, <oe> is considered an acceptable substitute for <ö> if the latter is not available.
That's true; the reverse is not - "Johann Wolfgang von Göthe" would be incorrect as well. And names especially are only really acceptable in the spelling that appears on your passport/birth certificate, etc.
See P*oe*ttering's comment on LWN (and yes, that account is confirmed as being him).
I don't know if this will help you, but it certainly helped me. I agree with all you've said; your third point I've heard before, though never put in such a succinct and profound way. Regarding this:
>I still think I suck, it doesn't help that the other guys are wizards as far as I can tell, but instead of feeling ashamed I just do my best and try to improve myself.
I would consider either watching/listening to Jacob Kaplan-Moss's keynote or reading the overview by lwn.net.
The fact that vmkernel was once loaded into the kernel by a module is enough to conclude that it is a derived product of the kernel and, thus, only distributable under the terms of the GPL.
But there is more... vmkernel loads and uses quite a bit of Linux kernel code, sometimes in heavily modified form. The primary purpose of this use appears to be gaining access to device drivers written for Linux, but supporting those drivers requires bringing in a fair amount of core code as well.
>If one downloads the source-release ISO image from the page linked ( https://my.vmware.com/web/vmware/details?downloadGroup=ESXI55U1_OSS&productId=352 ) and untars vmkdrivers-gpl/vmkdrivers-gpl.tgz, one will find these components under vmkdrivers/src_92/vmklinux_92. There is some interesting stuff there. In vmware/linux_rcu.c, for example, is an "adapted" version of an early read-copy-update implementation from Linux. vmware/linux_signal.c contains signal-handling code, vmware/linux_task.c contains process-management code (including an implementation of schedule()), and so on. Of particular interest to this case are linux/lib/radix-tree.c (a copy of the kernel's radix tree implementation) and several files in the vmware directory containing a modified copy of the kernel's SCSI subsystem. Both of these subsystems carry Christoph's copyrights and, thus, give him the standing to pursue an infringement case against VMware.
>The picture that emerges suggests that vmkernel is not just another binary-only kernel module making use of the exported interface. Instead, VMware's developers appear to have taken a substantial amount of kernel code, adapted it heavily, and built it directly into vmkernel itself. It seems plausible that, in a situation like this, the case that vmkernel is a derived product of the Linux kernel would be relatively easy to make.
As you may know, Guile Emacs aims to replace the elisp engine with an interpreter written in Guile. This will enable concurrency and give us a faster implementation of elisp, but it also means that we will be able to write extensions in pretty much any language with Scheme as the host language (I bet you can expect CL; Emacs wizards love their Lisp). I know that the wait is painful, but the elisp interpreter is done and the project is looking closer than ever.
Get up to date by reading "The future of Emacs, Guile, and Emacs Lisp", and read the EmacsWiki piece about the technical details.
>Criticising systemd from a technical, architectural or (as is the case here) social perspective is not hate.
Link to quote: http://lwn.net/Articles/621478/
This, 1000%. I like systemd, yet every time I point out issues that concern me I'm instantly labeled a systemd hater and get torn apart by the systemd fanboys claiming all anti-systemd thoughts are wrong.
"The avalanche has already started, it is too late for the pebbles to vote." ~Kosh
Others already pointed out that Dalvik/ART are not a JRE. However, getting a port of ART/Dalvik to Linux might be possible. The other, big issue is that you don't really "have the kernel". Yes, you can get the source code for the Linux kernel used in Android, but can you build that modified kernel for a desktop machine? Even if you can, you may not want to: several Android kernel features (e.g. binder) would be a serious security issue on a desktop system.
You could run the Android kernel in a VM. But why bother setting all that up? Just download the Android SDK and run the app in that.
With sockets, both processes end up with their own local copy of the data (each process can separately manipulate its local buffer), so at least one copy is always required. Memory consumption also doubles (two separate copies).
With kdbus there is a single memory buffer accessible from both processes (so zero copies), but the source process "seals" the buffer before sending it, so it can't be modified by either process A or process B.
There are two specific use cases where the performance increase of the latter is noticeable: a) the buffers to send are BIG, and b) the same buffer is sent to multiple destinations. In case b), the buffer is shared not just by two processes but by N processes; a multicast protocol (such as D-Bus) can take advantage of this. Case a) is relevant when you need to share big chunks of data between applications: images, videos, audio, ...
kdbus is not the only method of doing fast IPC using a shared-memory model, but it has other advantages. See Fast interprocess communication revisited (2011).
Nice spectacularization. However, it's not exactly as he presents it. See the comments in https://plus.google.com/115606635748721265446/posts/TTJ1j1RMThA (check the ones by Richard Moore) for some comments from the "other side".
Likewise, check http://lwn.net/Articles/552300/ for more.
Also, this change will make Qt abort when a setuid binary is found.
LibreOffice has been trying to rebase their entire codebase onto the new Apache OO code for licensing reasons. Since they based their original fork on code that was LGPL3, they can only offer LO under LGPL3, but they'd like to dual license it as LGPL3/MPL. All of their additions/changes have explicitly been LGPL3/MPL, so as long as they can swap out the old base with the new base, they should be able to eventually reach that goal of being able to offer the whole under both.
Read the thing about gdm; I also said that in my comment, you don't read. That is about gnome-session, which already does things systemd --user does as well, but he wanted the user instance to augment it rather than replace it. Read the mail in full; don't just quote without context so that you can justify your fallacy.
EDIT: and now we know what's coming next: "but CK was broken". That's not the point at all; the point is whether Lennart stepped up to suggest a hard dependency on logind (a systemd component) in gdm (which is a part of GNOME), which he did. He goes on to argue in subsequent mails that GNOME should prioritise Linux for full realisation of its GNOME OS initiative, which might be OK, but then he not only broke it on Linux, he also went on to suggest that it be broken on the BSDs as well, where it worked before systemd.
EDIT2: OK, I'll cherry-pick just like you did:
> gdm will interface with the new CK-replacing code I am working on. http://lwn.net/Articles/441328/
Do note that logind is still in the making, yet he already has plans how to uproot support for anything but systemd.
> The closest integration I expect in gdm. Ideally I'd like to rip out the current CK support completely and replace it entirely by the more low-level systemd specific code. However, that I can only do if the outcome of this discussion is clear.
The outcome was what the GNOME devs agreed on; LP was in charge of CK too, and so it happened, and he asked for it to happen.
> copying data over USB still locks up any desktop (with the default scheduler at least).
LWN ran a detailed article on why that happens. Long story short, try the following and see if it helps (it does for me):
Create /etc/sysctl.d/laggy-usb-copying-fix.conf and add the following to it:

    vm.dirty_background_bytes = 16777216
    vm.dirty_bytes = 50331648
They use Linux exclusively, all their microcontrollers are PowerPC, and they have no hard real-time requirements (proof). So while they are technically working with embedded systems, these aren't ATmega chips with 32 KB of memory. I'd focus more on learning about Linux and the real-time scheduling abilities of the Linux kernel.
Sure, I want to believe you that it's just smoothing. But,
"Did you know that the ECU reports a constant 780 RPM on the tacho when the engine’s idling, regardless of the actual engine speed? [Domke] has proof in the reverse-engineered code!" [source]
"he noted that there is a 12KB block of code that is used to ensure the tachometer always shows 780 RPM when the car is idling. Even though the engine is not that steady, car owners want to see that value hold steady at idle, so car makers effectively lie to satisfy them." [source]
"This code takes away all of that and makes it flat 780." [source]
So I'll take the claim of "780 at idle" literally, until proven otherwise. But only in the case of VW.
The excellent Linux Weekly News has occasional articles on OO vs LO, the latest from last week following up on exactly the CVE debacle.
I highly recommend that read if you're interested in knowing more about the differences between LO and OO.
As a side note, look at Sun and now Oracle for examples on how not to manage an open-source product.
http://lwn.net/Articles/328363/
It just causes longer compilation times due to the slow fsync issue which was never fully solved.
Ext4 might be fine. I just use XFS with Gentoo because it's what I've been doing for ages now and there's no reason to change.
> mdadm makes the same guarantees as BTRFS's implementation
It doesn't. ZFS and btrfs are both aware of which bits are stored where – especially including the information needed to repair them. "Rampant layering violation" and all that.
Linux `md`'s traditional layering model hides mirrors and parity from upper layers, consuming e.g. disks on one end and presenting an assembled striped/mirrored/parity'd block device on the other. That assembled `/dev/mdN` device does not permit access to individual underlying copies, and it performs no checksumming itself. You get the behavior described in `man md`:
> If all blocks read successfully but are found to not be consistent, then this is regarded as a mismatch.
>
> If `check` was used, then no action is taken to handle the mismatch, it is simply recorded. If `repair` was used, then a mismatch will be repaired in the same way that `resync` repairs arrays. For RAID5/RAID6 new parity blocks are written. For RAID1/RAID10, all but one block are overwritten with the content of that one block.
If you put btrfs on top of software RAID, its internal checksums will still be able to detect bitrot, but btrfs can't get more information from md in order to repair the bitrot. Similarly, if you ask `md` for a scrub, it'll make itself consistent in an arbitrary way, without consulting btrfs to see if such a reconstruction has a correct checksum.
As for doubts about btrfs, you're not alone.
Practically speaking, GCC is copyleft (GPL) whereas LLVM is BSD licensed.
That means it's possible for a company to extend LLVM and not release its changes' source code back to the community. In particular, it would be possible to have specific hardware support (for chipsets) or software support (debuggers, IDEs, etc.) added but not have the community be able to see the source code.
GCC has become very successful in the embedded world because people making such changes have to give their source code back to the world, and that has often set up a virtuous circle of improvements - it's also been useful when companies end-of-life products.
There are other concerns too. LLVM is a newer project, written in a newer version of C++ and focusing on modularity and code re-use. GCC was written in C, and has only recently moved to C++.
I am not actually a developer, so don't really have any thoughts or fears about the effects on GNU/Linux beyond observing that the leader of the project (Linus) is a very practical, pragmatic person - so we're unlikely to see anything rash happen.
For more information on GCC/LLVM architecture and licensing and its practical effects, I recommend reading LWN's coverage - it's where I got my information. Here's a brief sample: http://lwn.net/Articles/582242/
A longer sample can be seen here: http://lwn.net/Articles/629259/ - in both cases the comments are worth reading, as LWN users tend to be well informed and fairly close to the issues being covered.
If you're not an LWN subscriber, you should consider subscribing. It's an excellent source of Linux news, and does technical coverage of these kinds of areas very well. I have no relationship with them except as a happy customer.
Also, sorry that you just lost the rest of your day to reading old LWN articles... ;-)
Embedded systems. I use C whenever I have to write to a board. Sure I can use C++ for some embedded systems, but not all of them. There just is no C++ compiler for some processors.
Also, most electronic engineers I know were taught C in college. There's no need for them to learn something which isn't universally supported when what they've always used works fine.
Another camp is people like Linus Torvalds. See here. There are lots of strong opinions both ways.