Tanenbaum wasn't wrong and neither was Torvalds. The fact is that these are complicated matters that you can't make black-and-white.
Microkernels are the future
And they are, the future just hasn't arrived yet. But microkernels see more and more adoption every day. They offer a degree of reliability that is unprecedented. But they also come with a performance penalty that, for a lot of people, is enough of a drawback that they would rather have 'good enough' than 'perfect'.
For software that needs to be 'perfect', microkernels are the way to go, and in fact in the embedded world there are more microkernel varieties to choose from now than ever before. Once performance penalties no longer matter and people start to demand software that does not crash with every change of the weather, I believe microkernels will see another wave of adoption. As far as I'm concerned this can't come soon enough. Userland drivers are so much better than a monolithic kernel.
x86 will die out and RISC architectures will dominate the market
And in fact, in the mobile arena this has already come true. And the way Apple is moving I would not be surprised to see an ARM chip powering an Apple laptop one day.
(5 years from then) everyone will be running a free GNU OS
I think both parties underestimated the strength of the Windows lock-in here. And many people still underestimate the strength of this lock-in; even here on HN the demise of Microsoft is announced with some regularity.
As far as microkernels go, I'd say that the future has arrived. We don't call them microkernels, of course -- we call them hypervisors. But they're fundamentally the same thing.
True, but the things that run inside those hypervisors are usually still the same old monolithic operating systems. I think that once those are also microkernel based, there will be a real shift in perception.
You say potato, I say potato. What's the meaningful difference? Inside even the purest microkernel you're still running "processes" with unified address space subject to awful crashes and memory corruption. The process is itself an abstracted machine, after all. How far down the abstraction hole do we have to go before we reach purity?
The difference between the Linux kernel and a microkernel (such as, for instance, QNX, but there are plenty of others) is that everything is a process, and everything but that tiny kernel runs in userland.
It's the difference between 'potatoes' and 'mashed potatoes' ;)
No, you miss the point. I understand very well what a microkernel is. I'm asking you what the conceptual difference is between running a bunch of "macrokernel" systems inside a hypervisor and running a single microkernel with a bunch of processes. There is none: they are the same technology. The difference is in the label you stick on it. Which is a very poor thing to start an argument about.
(edit: I should clarify "same technology" to mean "same use of address space separation". Microkernels don't need to use virtualization technology like VT-d instructions because their separated modules don't need to think they're running on unadulterated hardware.)
> I'm asking you what the conceptual difference is between running a bunch of "macrokernel" systems inside a hypervisor and running a single microkernel with a bunch of processes. There is none: they are the same technology.
Complexity is the difference.
Hypervisors "won" because they were easier to implement: they only had to add another layer to the stack, rather than fundamentally change the structure of the OS.
The outcome is a more baroque collection of code, though. Worse truly is better.
The difference is the purpose of the system. In the former, the purpose is simply to multiplex the hardware into multiple logical systems performing different tasks. In the latter, the purpose is to build a single unified system. It has more communication between the subsystems, and duplication of work is minimized. Only one process has any FS drivers in it. Another only worries about display. And more importantly, it's more fault tolerant: if the display process goes down, all the other processes are generally built to wait for it to come back up. Whereas you cannot have a monolithic kernel go down and not take an application process, file system process, and network process with it.
I don't buy that at all, it's just semantics. Why can't multiple OS images be a "unified system"? That's what a web app is, after all.
And the fault tolerance argument applies both ways. That's generally the reason behind VM sharing too. One simply separates processes along lines visible to the application (i.e. memcached vs. nginx) or to the hardware (FS process vs. display process).
Potato, potato. This simply isn't something worth arguing over. And it's silly anyway, because there are no microkernels in common use that meet that kind of definition. Find me a consumer device anywhere with a separate "display server", or one in which the filesystem is separated from the block device drivers. They don't exist.
(edit rather than continue the thread: X stopped being a userspace display server when DRM got merged years ago. The kernel is intimately involved in video hardware management on modern systems. I can't speak to RIM products though.)
> x86 will die out and RISC architectures will dominate the market
It wasn't just Tanenbaum who was wrong about that. Billions of dollars were dumped into RISC architectures on the assumption that x86 wouldn't scale. Microsoft committed to an expensive rewrite of Windows (or OS/2) to make it portable. Apple considered x86 and decided to bet on RISC instead.
So this wasn't just some wacky college professor opinion, the industry thought RISC was a sure thing. (Linus of course didn't really care, he just wanted something to run on his 386 clone.)
edit: it bothers me that this debate is always presented without context. Torvalds was a university student busy reinventing a 20-year-old Unix kernel design, and Prof. Tanenbaum was pointing out that he wasn't advancing the state of the art, which is totally correct. The fact that Linux turned out to be really useful and popular is mostly beside the point: the advancement was Torvalds's open source management, not the kernel design.
Every RISC vendor had their own little PC 'consortium' (except perhaps Sun SPARC). They never sold that well, and when the Intel Pentium Pro came out, it beat them on most specs, so the whole idea of a RISC PC died around 1996 (outside of Apple).
> But microkernels see more and more adoption every day. They offer a degree of reliability that is unprecedented. But they also come with a performance penalty that is for a lot of people enough of a drawback that they would rather have 'good enough' than 'perfect'.
> For software that needs to be 'perfect' microkernels are the way to go and in fact in the embedded world there are more microkernel varieties that you can choose from now than ever before.
I'm looking into this space a bit for some personal projects. Would you be able to point me to some examples/good resources on this?
I've used it for years on a very large message switch and in my experience it was rock solid, very easy to develop on and extremely responsive. For hard real time stuff from userland you'd still have to tweak things a bit but even that is possible.
>And they are, the future just hasn't arrived yet. But microkernels see more and more adoption every day. They offer a degree of reliability that is unprecedented. But they also come with a performance penalty that is for a lot of people enough of a drawback that they would rather have 'good enough' than 'perfect'.
Correct. The future of computing is mobile and the weakness of the Linux kernel's monolithic architecture is highlighted by Android's numerous design and implementation issues as well as Android's numerous maintainability, upgrade, reliability and performance problems.
Sounds like an interesting thesis. You have a link to one of these design or implementation "issues" and how it's a reflection of the lack of address space separation and/or IPC design of the linux kernel?
No? Yeah; sounded like a content-free platform flame to me too.
Actually: I'd be curious to hear from some more knowledgeable folks on this. My understanding of the iOS kernel is that it's a microkernel only via historical label: the PVR driver stack, network devices and filesystems live in the same address space and communicate with userspace via single context-switched syscalls. Is that wrong?
Here's one for you, genius. This would not be such a hard problem to solve on a hybrid/microkernel OS. And you wonder why some Android devices don't get updates?
The Android kernel code is more than just the few weird drivers that were in the drivers/staging/android subdirectory in the kernel. In order to get a working Android system, you need the new lock type they have created, as well as hooks in the core system for their security model.
In order to write a driver for hardware to work on Android, you need to properly integrate into this new lock, as well as sometimes the bizarre security model. Oh, and then there's the totally-different framebuffer driver infrastructure as well.
This means that any drivers written for Android hardware platforms, can not get merged into the main kernel tree because they have dependencies on code that only lives in Google's kernel tree, causing it to fail to build in the kernel.org tree.
Because of this, Google has now prevented a large chunk of hardware drivers and platform code from ever getting merged into the main kernel tree. Effectively creating a kernel branch that a number of different vendors are now relying on.
Now branches in the Linux kernel source tree are fine and they happen with every distro release. But this is much worse. Because Google doesn't have their code merged into the mainline, these companies creating drivers and platform code are locked out from ever contributing it back to the kernel community. The kernel community has for years been telling these companies to get their code merged, so that they can take advantage of the security fixes, and handle the rapid API churn automatically. And these companies have listened, as is shown by the larger number of companies contributing to the kernel every release.
But now they are stuck. Companies with Android-specific platform code and drivers cannot contribute upstream, which leaves these companies with a much larger maintenance and development burden.
In Mac OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do remote procedure calls (RPC) between separate tasks. This modular structure results in a more robust and extensible system than a monolithic kernel would allow, without the performance penalty of a pure microkernel.
The Greg KH link is very stale. All that stuff got merged. And you're interpreting it wrong anyway. Android introduced some new driver APIs, they didn't completely change the kernel. Check the .config file on an actual device and count the number of drivers that are absolutely identical to desktop linux.
And how exactly does having a microkernel fix the problem of having a stable driver API? Drivers must be written to some framework. Windows NT derivatives are microkernels too, and they're on, I believe, their third incompatible driver architecture.
And did you actually read that second link? It's drawing a single "kernel environment" with all the standard kernel junk in it. That is not a microkernel.