Despite the drawbacks of its grassroots nature, TOFU goes a looooong way.
With my own machines I can just physically check that the server host key matches what the ssh client sees. Once the TOFU check looks good I'm all set with that host, because I never change the keys.
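For anyone who hasn't done that check by hand, here's a minimal sketch of it, assuming an Ed25519 host key in the default location (the path and hostname are just placeholders):

    # on the server, at the console: print the host key's fingerprint
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

    # on the client: this must match the fingerprint shown on first connect,
    # or the entry already recorded for that host in known_hosts
    ssh-keygen -l -F myhost.example.com -f ~/.ssh/known_hosts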
In a no-frills corporate unix environment it's enough to have the internal servers' public keys listed on an internal website served over HTTPS, so the list is effectively vouched for by a known corporate identity. You only need to check that list once to validate the TOFU step, after which you can trust future connections.
In settings with a huge fleet of machines, or in a very dynamic environment where new machines are rolled out all the time, it's probably easier to use certificates. Of course, certificates come with some extra work and some extra features, so the amount of benefit depends on the case. But at that scale TOFU breaks down badly on multiple levels, so you can't really afford a strong opinion against certificates.
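For reference, the certificate route boils down to signing each host key with an internal CA and telling clients to trust that CA, roughly like this (the key names and domain below are made up for the sketch):

    # on the CA machine: sign the server's host key as a host certificate
    ssh-keygen -s host_ca -I web01 -h -n web01.example.com /etc/ssh/ssh_host_ed25519_key.pub

    # on the server, in sshd_config: advertise the resulting certificate
    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

    # on every client: one known_hosts line replaces all the per-host TOFU entries
    @cert-authority *.example.com ssh-ed25519 AAAA...CA-public-key...

Rolling out a new machine then just means signing its key once; clients never see a TOFU prompt for it.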
I wish web browsers could easily remember server TLS keys too and at least notify me whenever they change, even if they'd still accept the new keys via (more or less) trusted CAs.
Coding agents and LLMs basically tivoize open source.
When AI eventually becomes the primary means of writing code (because hand-programming will be so slow that no company can continue as before), AI becomes your new compiler, one that comes with a price tag: a subscription.
Programmers were held hostage to commercial compilers until free compilers reached a sufficient level of quality, but now it doesn't matter if your disk is full of free/open toolchains when it's not you commanding them but commercial AI agents.
Undoubtedly there will be open-source LLMs eventually, of various levels of quality. But to write a free compiler you need a laptop while to train a free programming LLM you need a lot of money. And you also need money to run it.
Programming has been one of the rare arts that even a poor, lower-class kid can learn on his own with an old, cheap computer salvaged from a scrap bin, raising enough intellectual capital to become a well-paid programmer later. I wonder what the equivalent path will be in the future.
Currently: human output is copyrighted, so companies sign a transfer agreement with employees stating that anything they produce at work belongs to the employer. The employer then owns the copyright, even though the employee, depending on jurisdiction, still owns the moral rights (which matter not much more than squat).
With AI: the company uses AI to produce code that isn't copyrighted at all. The company can take it, like any public-domain piece of software, and incorporate it into their product, and the product as a whole is copyrighted by the company. There aren't even any personal moral rights to worry about (no need to mention "This product is based on work by Claude").
It's absolutely necessary that there's a line of people somewhere who understand the whole path from garbage collection down to assembly instructions. We can't build on abstractions alone as long as we still run things on physical CPUs.
But it's also unequivocally true that once we have enough long-bearded oldtimers (and newtimers) who do understand how writing a Python expression somewhere ends up as a register write elsewhere, all the others just don't have to.
In the old days, all you had was hardware, and to program you had to understand hardware. But those who did program and did understand were the few smart people with access to hardware; everyone else was left out. Now we have high-level languages, scripting languages, AI, and whatever comes next. As long as some people maintain the link to the hardware, the rest can build on top of that.
In the modern world, this is like saying people under 18 shouldn't have the freedom to read and write. We would be decades back in the digital stone age if we had held onto such a preposterous idea in the '80s and '90s. Virtually everything we have now was built by people who were hacking on their computers in elementary school, exercising their freedom of speech by writing code freely at the discretion of their own imagination.
Correction: BeOS was killed. I’ll never get over Microsoft getting in trouble for including a browser in Windows, but not for forcing companies to disallow installing BeOS just when it was getting legs.
I learned recently that Hitachi actually shipped computers that would dual-boot into Windows 98 and BeOS R4, except that Microsoft's license didn't allow for dual-boot, so the option was removed from the bootloader (or, rather, the Microsoft bootloader was defaulted to, instead of the Be bootloader).
It wasn't that hard to boot into Be, but I suppose most users wouldn't bother because all games and applications were on Windows anyway. Ultimately, lack of apps was probably what held it back, although Microsoft's commercial practices definitely played a role in curbing OEMs and app developers.
I studied the MS antitrust case extensively when it was happening, and I agree that the abuse against BeOS was MS's greatest antitrust offense. However, as a fan of BeOS, I see no evidence at all that Be Inc. would have been successful if MS hadn't abused its position. Unfortunately we will never really know what might have been.
Yeah, Be Inc. made no sense at all for its own purposes. The reason it existed is that Apple (yes, that one) had fired one of its executives - Jean-Louis Gassée often abbreviated to "JLG" - and he wanted to show they were wrong.
AIUI the intended exit was either an acquihire (Apple gets JLG back and the Be Inc. "journey" ends once people tidy up) or maybe Apple's software side fully embracing Be Inc. (after all, JLG is sure he's correct about what Apple should do) and absorbing the entire entity, with Be's operating system BeOS becoming the new Apple OS.
That part isn't crazy: it's the early 1990s, affordable CPUs have virtual memory support, the physical size limit is looming, software reliability is worsening, and Apple's 1980s co-operative multitasking operating system is not up to the job. If you understand the big picture it's obvious that you want something closer in principle to a Unix. You could hire somebody to build one (as Microsoft had for "Windows NT"), some people might build one in their bedrooms (Linux), or you could buy one which already exists; that's what Be Inc. set out to be.
In the end Apple decided that if they were going to re-hire an executive they had previously fired, it should be Steve Jobs. The moment they made that decision, Be Inc. was superfluous -- JLG knows Steve isn't going to hire him, Steve hates him -- so the priority now is to help the money get out so that investors will keep talking to JLG. Fortunately the dot-com bubble happened, Be floated on typical bubble-era nonsense about how their system was somehow perfect for the Internet, and that was enough for the big money to get out, leaving the wreck for the poor Be fans who were still buying even after the last dregs were gone.
im pretty grateful to beos for providing a young me with an offramp from MS architecture that got me using cli, understanding api architectures, making it easy to tinker, etc.
I ran OS/2 Warp and was a fan of it... But to say that it was simply "better" than Windows 95 is a bridge too far. It had its strengths (rock solid multitasking) but also plenty of rough edges.
That's if we ignore the fact that it required about 1000 euros' worth of additional hardware, so most folks went with DOS/Windows 3.x instead, and by the time Windows 95 came around it was already too late for adoption.
I am in the same boat: every time I like something, it is a commercial failure. They should really hire me to check whether I like whatever project they have in mind, and if I do, cancel it immediately and save the losses.
Buy whatever you want! Buy what makes you happy and buy two if it makes you happier! Do tell all your friends of your keen finds. But remember to buy some put options with each of your Lovely New Products! Thank me later.
I was a Palm guy and not a Blackberry guy, so I went from a Palm Treo to webOS. After that, though, I went to iPhone. I did consider Windows Phone; the tiles and text orientation were so amazing. I am, however, glad that I never went down that road, not just because Windows Phone died, but also seeing what has happened to Windows more recently.
I was seriously interested in PenPoint, but it was too early for tablet PCs to succeed. Handwriting recognition was nowhere near mature enough yet and unfortunately that became the main issue in that niche. Even Apple pretty much failed with the Newton because of it.
But PenPoint had a lovely UI and, if memory serves, an API much like Apple's Objective-C.
Microsoft had a hand in killing PenPoint, just as they did with BeOS. Jerry Kaplan told the story in his book "Startup".
This post is really bringing me back! I knew talk of BeOS would stir up all us old heads. I think what we're all really nostalgic for is the days of tinkering with computers. When things lacked polish, and people put real effort into making their system nice. I remember corrupting my family computer hard drive trying to get a Linux dual-boot setup. Good times!
Have you seen Genode (1)? An operating system framework with a pretty usable OS built on top. Last I heard, it was getting pretty close to being usable as a daily driver. Lots of cool tech: a microkernel (IIRC), capabilities, sandboxing as a first-class citizen, a GUI system, a POSIX compatibility layer, etc. It's been around for ages and has full-time developers (it's used as the basis for some (all?) of their products).
From the website: "Genode is based on a recursive system structure. Each program runs in a dedicated sandbox and gets granted only those access rights and resources that are needed for its specific purpose. Programs can create and manage sub-sandboxes out of their own resources, thereby forming hierarchies where policies can be applied at each level. The framework provides mechanisms to let programs communicate with each other and trade their resources, but only in strictly-defined manners. Thanks to this rigid regime, the attack surface of security-critical functions can be reduced by orders of magnitude compared to contemporary operating systems."
The true rite of passage for the child hacker. I remember my dad and brother taking a floppy to the Sam's display computer to copy a sys file and restore a Win 3.1 install, back in the pre-internet days.
I think we were on the same track. I absolutely loved the Amiga and was about to jump on board BeOS when it went under. I never got to use BeOS as a daily driver (just ran their demo disk). How did you find it?
From them internets after the x86 version got out, I think. Played enough with what I found around, and I ultimately bought (with real money) the BeOS 5.0 Personal Edition, set it up to dual-boot with my Linux machine, and knew that this was it! It felt like an Amiga, but on soulless PC hardware instead! The exhilaration was unlimited! It booted fast, no old cruft, unorthodox designs, everything a one-in-a-thousand true harbinger customer loves!
Eventually, I think, the setup gradually bit-rotted with no updates and unsupported hardware, so I reluctantly had to go back to Linux. I remember Ubuntu and Gnome 2 starting to look pretty nice (well, for an inferior desktop environment) in the early 2000s.
(Unsurprisingly, years later Gnome came out with Gnome 3 and killed all the good stuff that Gnome 2 had accumulated. I can only wait and see how long Mate desktop survives.)
I still keep a Haiku VM around and boot it every now and then.
I ran BeOS on both the dual PowerPC desktop and later on an x86 laptop. Thanks to its posix-ish environment, I was able to do all my upper division CS projects on it.
Others who had windows or macs had to "telnet" into a remote Unix workstation in an engineering lab to do the same.
I ran it in a dual-boot with a Linux install, but I ended up using Linux more, despite liking BeOS, because of the ecosystem. There was just more software available on Linux, especially lightweight TUI tools.
I'm too young to remember BeOS but I've taken a superficial look at Haiku and I don't get the hype. What made BeOS so special? How is it different from GNU/Linux or BSDs?
^this, plus being able to play 3-4 QuickTime videos at the same time smoked everyone's brains around me. Using Mac OS 8/9 meant the cursor freezing up and having to reboot several times a day. Win95 was even worse.
Yes you could! Windows did have (at some point) "show window contents while dragging" option, but it was quite slow at the time, and I don't remember if it supported showing (overlay) video content while moving or not.
Super responsive—running ten things at once, on a Pentium 90 or PPC. The filesystem metadata was neat as well, and though we have these things today, it was unique in the 90s.
There is absolutely nothing special about BeOS compared with any of the modern alternatives that you list, or Windows and macOS for that matter.
But this was 1995. Linux (or BSD) on the desktop didn’t really exist, Apple’s OS was System 7.5, Microsoft’s was Windows 95. BeOS was a preemptively multitasking, multimedia operating system, with a transactional file system. Nothing else like it existed, at that time.
I am risking the one full-time paid developer of Haiku popping up here and shouting at me, because he's done that a few times before and even written to my editor-in-chief to complain. Sadly for him, my former EIC was a hardcore techie -- it's how I met him, long before either of us worked there -- and he was on my side.
Unix is a 1960s design for minicomputers. Minicomputers are text-only standalone multiuser computers. That is why things like handling serial lines (/dev/tty -- short for TeleTYpe) are buried deep in the core of Unix, but networking and graphics aren't.
There is an absolute tonne of legacy baggage like this in Unix. All Unixes, including Linux kernel 7.0. We do not use minicomputers any more; nobody even makes them. We don't have multiuser computers any more. In fact, we have multi computers per user. Modern servers are just PCs with lots of connections from other computers not from people.
In the early 1980s the Lisa flopped because it was $10K, but the Mac did well because it was $2.5K and had a GUI and no shell. The future, woo, etc.
The Mac was black and white, 1 sound channel, no hard disk, no expansion slots, and in cutting down the Lisa, Apple discarded multitasking.
Enter the Hi-Toro Lorraine. Intended to be the ultimate games console, with a powerful, fully 16-bit Motorola 68000 chip (a minicomputer CPU on a single die), amazing colour graphics, and multichannel stereo sound, but it could plug into a TV.
Commodore bought it, renamed it the Amiga, and tried to develop a fancy new ambitious OS, called Commodore Amiga Operating System: CAOS.
They couldn't get it to work, so it was canned, and a replacement was hastily cobbled together from Tripos, a research OS written in BCPL, plus some new bits. It had a Mac-like windowing GUI, full preemptive multitasking (with no memory protection, because the 68000 couldn't do that), and it fit on a single DD floppy (~880 kB) and into 512 kB (1/2 MB) of RAM.
It was a big hit and set a really high bar for expectations of what an inexpensive home computer could do. It ran rings around the Mac and could emulate a Mac with excellent compatibility.
A decade later a lot of people missed that. PCs and PC OSes were very boring by comparison. Sure, reliable, fairly good multitasking by then, dull grey UIs. Linux was a thing but it was for minicomputer fetishists only, and looked like it came from 20 years before Windows or Mac. (Which in a way it did.)
So a former Apple exec set up a company to make a modern geek's dream machine. Everything had true-colour graphics and stereo sound by now, so that was a given, not a selling point. It had to have a snazzy, very fast, very smooth GUI, excellent multitasking, and screaming CPU performance, because RISC chips were starting to take off. Mainstream computers struggled with more than one CPU, so multiple RISC CPUs were the selling point, along with amazing, blindingly smooth multimedia support, because PCs and Macs could just about play one jerky, grainy little video in a postage-stamp-sized window in 256 grainy, pixelated colours.
The BeBox was to be the mid-1990s geek's dream computer. Part of how they did it was an all-new multitasking single user OS with a very smooth built in GUI desktop, best-in-industry media support, built-in TCP/IP networking. All the cool bits of Windows NT, multitasking as good as Linux but pretty, a desktop better than Windows 95, and it threw all the multiuser stuff in the trash, all the boring server stuff in the trash, because FOSS OSes did that tedious business stuff.
It was beautiful.
It flopped.
The company pivoted to selling its OS on the other PowerPC kit vendor's machines: PowerMacs, with reverse-engineered drivers. It flopped. Classic MacOS was just barely good enough: crap multitasking, crap virtual memory, but loads of first-class leading pro apps. BeOS had almost none.
So Be pivoted again. It ported its shiny new C++ OS to x86. You could buy multiprocessor x86 PCs in the late 1990s. I had one.
It was amazing on PC kit. It booted in under a tenth of the time that Windows sluggishly lurched into life. It could do blinding 3D like spinning solid shapes while movies played on their surfaces, and it did it all in software.
Haiku is an all-FOSS ground-up rewrite, but with the original desktop, which was FOSS. It's a lovely mixture of the Classic MacOS Finder and the Windows 95 Explorer, with the best bits of both but none of the bad bits.
Haiku is lovely. It's got a huge amount of Linux compatibility now. That means lots of apps, fixing the one big killer problem of BeOS.
But it is much bigger and much slower. It's still 10x smaller and 10x faster than any FOSS Unix but the original could boot in 5-10 seconds to the desktop in 1999 on a Pentium 200 from a PATA hard disk. A modern PC with an SSD should load it in half a second, but Haiku still takes 10 seconds or so. Good, sure, but not as impressive as BeOS was 25 years ago.
Somewhat ironically, perhaps a formal, deterministic programming language (in its mathematical kind of abstract beauty) is the outlier in the whole soup. The customers don't know what they need, we don't know what we ought to build, and whatever we build, nobody knows how much of it is the right thing or what it actually does. If the only thing that makes people sigh is the requirement to type it all into a deterministic language, maybe at some point we can just replace that with a fuzzy, vague human description. If that somehow produces enough value to justify the process, we still won't know what we need or what we're actually building, but at least we can be honestly vague about it all the way through.
I was going to say this. I never liked the 256-color VGA look (and comparing now, it does look bland), but the Amiga struck, IMHO, the best balance: good hand-crafted pixel art with colors realistic enough to give the scene sufficient depth and atmosphere.
One thing that often gets dismissed is the value/effort ratio of reviews.
A review must be useful: the time spent on reviewing, re-editing, and re-reviewing must improve the quality enough to warrant it. Even long and strict reviews are worth it if they actually produce nearly bug-free code.
In reality, that's rarely the case. Too often, reviewing descends into the rabbit hole of minutiae, and the time spent reaching a mutual compromise between what the programmer wants to ship and what the reviewer can agree to pass is not worth the effort. The time would be better spent on something else if the process doesn't yield substantial quality improvements. Iterating a review over and over to hone the change into one interpretation of perfection will only bump it into the next 10x bracket in the wall-clock timeline mentioned in this article.
In the adage of "first make it work, then make it correct, and then make it fast", a review only needs to require that the change reaches the first step, or, in other words, to prevent breaking something or the development heading in an obviously wrong direction from the start. If the change works, maybe with caveats but still works, then all is generally fine enough that it can be improved in follow-up commits. For this, the review doesn't need to be thorough on details: a few comments pointing the change in the right direction are often enough. That kind of review is a very efficient use of time.
Overall, in most cases a review should be a very short part of the development process. Most of the time should be spent programming, not in review churn. A review serves as a quick checkpoint that things are still going the right way, but it shouldn't dictate the exact path used to get there.