2026-02-27
Lecture Friday: Wat
Basically programming standup comedy. Not much to learn here other than how weird some parts of some programming languages are.
I disagree that software used to be "serene and pleasant." I remember using MS-DOS for things like desktop publishing and video games, and a lot of things were fairly slow and unreliable. Watch-the-screen-paint-the-UI slow. Crashes are probably a toss up. They used to be more severe, having gone from whole-computer crashes, to application crashes, to tab crashes, to cringe mascots. On the other hand, the world now has so much more software that I experience crashes a dozen or more times a day. We literally joke about "snow days" at work every time Github shits the bed; a near weekly occurrence these days.
But software used to crash enough that I developed a compulsive Ctrl-S habit from word processors and image editors that would crash and lose your work every few hours. I remember the advice to periodically close and restart Photoshop because of how buggy and unstable it was (is?). Overall, I think there's a lot of rose tinting on this assessment. You usually remember games from your childhood as these incredible experiences, but far too often, you go back and play them and many don't hold up to your now refined modern standards. More importantly, they've also all stabilized. Developers aren't updating them anymore, so they're either known to still work or not. Really popular old software may even have community patches you're expected to apply. But it's all been worked out. You know what to expect. It won't stop working tomorrow because someone pushed a broken change through CI.
I agree with the problem: there's too much code, which means too many bugs, too many security vulnerabilities, too much fiddling after launch. Monocultures are inherently a problem. Large codebases are inherently a problem. "Many eyes make bugs shallow" is a myth. A good theory, but it's not backed up empirically. Whose eyes and what they're looking for matters more. Even when the project is doing literally everything it can, the pressure to break in becomes enormous with enough users, and the larger the interface, the more porous to attack it becomes. It's why I don't put OpenSSH directly on the internet.
Even still, I think his sales pitch is a bit weak. The reason operating systems consolidated (or we consolidated on operating systems, if you want) isn't hardware interoperability. It's that running on an operating system is more convenient than running on metal. Preemptive multitasking is just that good. It's one of those core computer technologies that made the computer what it's become, like networking, bitmap displays, floating point arithmetic, and most importantly, backward compatibility. Even if I have my computer set to boot to USB, having to shut down, dig through my stash of sticks, plug it in, and boot again—it's too much. At one point I dual booted Windows and Linux. I got rid of the Windows partition because juggling two whole systems on the same machine was annoying.
In this every-application-is-an-operating-system world, I still want to listen to music while I program and consult various sources of documentation. Then when I build and run that program, I ideally want changes I make in code to be reflected in that program as fast as possible so I can iterate quickly. I also want to be able to use one or more best-in-class debuggers and profilers to quickly figure out where and what the problems are. That leads the development environment back to being a platform, and you invoke jwz's law: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can."
Unix is just a developer environment that grew into an operating system. Browsers are just fancy document readers that grew into an operating system.
If you want the user experience, consider that I might want to check a wiki while I play some video games. I might need to consult my email while working on a spreadsheet. I want to balance my ledger against my bills and bank account. I could go on. Would any of these be as possible if we relied on a single vendor to bundle all these features into their application?
Games are maybe the only completely isolated software experiences left, and even there many people want to stream podcasts, take screenshots, reference the community wiki, chat with friends, stream their session online, or install mods to their experience of it without relying on the developer's support for any of this. It's the classic TV-VCR combo goal-dilution trap: combo units were wildly unpopular no matter how good the TV or VCR half was, because you always knew getting each in a dedicated unit would be better. When we switched to DVD, anyone who did get the combo unit ended up having to get a dedicated box anyway, or throw it all away and upgrade to a three-in-one. The GNSS (a.k.a. GPS) built into modern cars is universally awful because car manufacturers don't face the pressure to be as good as Google or Apple Maps do. You already bought the car.
Do we expect people to buy half a dozen different computers to be able to do these at once? An MP3 player, a wiki reader, a handheld email client, a digital painting tablet? That's all called a phone and it's wildly successful because it's portable, integrated, and so simple you can park a toddler in front of it as a babysitter.
Convenience plays a massive role in purchasing decisions. This is why developers have also embraced this tower. To send you these text files, I don't have to write a font renderer. I don't have to write a video codec. I don't have to write a web server or file system or network driver or remote administration server or resource cache or smooth scrolling interface or vector graphics interpreter or hyperlink loader. Better still, you've already downloaded half that stuff in a browser and I can cobble the rest together with Neocities and PeerTube for free in an afternoon.
These towers of code are as big as they are for various human factors, just one being the allure of convenience. Others include the overhead and imprecision of communicating between people, especially over time. How easy something is to get started with and learn. The challenge of fighting entropy in a long lived software project. The temptation to add a layer of indirection or reuse something it was only kind of designed to handle originally. The fear of not making payroll and trying to find safety in doing whatever users say they want. Egos, politics, inertia, greed, ignorance, sloth; the list goes on.
A lot of factors led us here. SoCs don't seem like they're going to help much. Raspberry Pi and the ESP32 are probably the most popular examples of this. You might argue it's the binary blob firmware that's holding them back. Maybe? The ESP32's Wi-Fi stack is being reverse engineered and open sourced, though. You could then review that code to create your own MAC driver for the chip. Maybe after that the dominant application delivery platform will begin to shift?
Obviously you can level the "it's not good enough" retort that's become popular these days. They're not as powerful as your average desktop computer. Sure, but if they were, you'd then want to run all your existing software on them. That would require Windows, or maybe one of the big two mobile operating systems. If you're running one of those, why would you want to reboot into a game instead of launching from within? Latency is a super niche reason. That's why Windows is still popular despite being widely loathed. It runs almost all the world's consumer software. Every game on Steam has to run on Windows according to the Steam publishing agreement. All the most popular Linux "only" software has been ported to Windows.
That's why Linux still only has single digit adoption. The console wars understood this. VHS and Betamax understood this. Content sells platforms. Nobody buys a platform for its own sake; they buy a platform to access content. Think of it as a little two by two matrix. On one side is their current platform. On the other, the new one. Between them you have forces pushing them from old to new, forces pulling them from old to new, forces repelling them from the new, and forces holding them back on the old. To switch, the first two must overcome the second two. One insanely valuable application can drive the sale of a whole platform, but usually it's the sum of all the potential that drives the sale, because making something so valuable it's worth hundreds or thousands of dollars in switching costs is really hard.
With that in mind, the 3-6% or whatever Linux is currently at is almost exclusively driven by just how incompetent directors at Microsoft are in managing their cash cow, resulting in people who hate Windows enough to switch. There's very little pulling users, lots repelling them, and a mountain of things holding them back. But that mountain's been shrinking as they keep abusing their developers who've been moving to the web, the pushes have been getting stronger with ads and instability, and the repellents have been slowly easing as polished user friendly UIs and Windows compatibility layers have gotten pretty great. All that's missing is the killer app. Something you can only get on Linux and not Windows that's worth switching for.
That's also why people build on popular platforms and end up reinforcing their dominance. Gambling on being so good people would switch for you is risky. Why build on a different platform unless it's likely to pay off spectacularly well if you succeed?
That all aside, we can see if this plays out, because who says the best market is wealthy industrialized nations? With the insanely low cost of solar that continues to fall, lots of developing nations are rapidly expanding their energy capacity. That opens up a wealth of opportunities for low cost computerization like Raspberry Pis, even in remote areas. If there's ever been a time to see if SoC bare metal single purpose systems will take off, it started a couple years ago and will play itself out over the next decade and a half. And it kind of is. Inverters and charge controllers are becoming a huge area of concern because they are these network connected embedded single purpose proprietary systems that also pose significant risk to nations as countries continue to expand their weaponization of supply chains.
I've considered going with a unikernel for my backend. A single binary operating system and application together. Boot the machine directly to the server shipped as a single disk image. But then I ask, why? I reboot this server a couple times a year. All that effort to save maybe 60 seconds a year waiting for the server to boot. This machine sees a normal CPU load of between 0.5% and 1.5%, mostly depending on how much connection spam I'm handling.
Sure, I could spend my time maintaining my own data storage (file system, database, and backups), network stack (TCP/IP, DNS, HTTPS, and firewall), thread scheduler, and remote administration/debugging interface. Or, I could spend my time writing these posts to collect my thoughts and spread my ideas. I'm not cursed to live forever. I can only do so much in the very limited time I have left to live. I'm not seeing an advantage to doing it myself instead of bringing together a pile of open source software. With those, I spend the equivalent of 1/5th of a full time dev (in exchange for not learning to play guitar or something) plus $0.03/hour ($20/month) building and maintaining this thing.
And that's for a target that almost amounts to an SoC. I run on a VPS, which conforms to the VirtIO machine spec running on an x86_64 Skylake chip. But why do I want to write and maintain a SCSI controller? The machine is already so huge for the workload and the existing driver in Linux is good enough. Sure, there are efficiencies that could be gained by writing a controller that exposes a relational database interface directly instead of layering storage controller, file system, and database. It's also thousands of hours of work to build. It's hard to justify. This site is never going to get big. If it does, I can throw money at it and consider the tradeoffs involved in rewriting it to be more performant. Sure, writing it to be able to handle planet scale from the start sounds great, maybe even sounds cost effective, but I'm very unlikely to correctly design such a system from first principles (scaling behaviour always surprises you), and I'd spend all my time designing a platform without content and guarantee I never need to scale.
Learned about her last night, or rather, her contribution to medical research for things like the polio vaccine (a disease a segment of the population seems adamant to bring back from the edge of extinction). Sadly, not voluntarily. Go read her Wikipedia page. The least you can do is learn how much she's helped humanity without her knowledge or consent, thanks to her unfortunate early death.
This talk is wonderful. You won't learn too much, but it's such a cool idea taken to great lengths. It's stuff like this that gets me excited about computers. I hope it brightens your week.
Quick refresher on queueing theory. Really take some time and do the math he's doing yourself, by hand. That's the big skill from this talk; if you just let him do it for you, you won't learn it. Doing it yourself builds the ability to use Little's Law in your own programming and design work. The simple summary: always limit request concurrency. Specifically, he shows how to do this by leveraging TCP's congestion control.
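If you want to try the arithmetic yourself, here's a minimal worked example of Little's Law (L = λW). The numbers are hypothetical, picked only to make the math easy to check by hand:

```python
# Little's Law: L = lambda * W
# L      = average number of requests in the system (concurrency)
# lambda = average arrival rate (requests per second)
# W      = average time a request spends in the system (seconds)

arrival_rate = 200.0   # hypothetical: 200 requests/second
avg_latency = 0.05     # hypothetical: 50 ms per request

# Average number of requests in flight at any moment.
in_flight = arrival_rate * avg_latency
print(in_flight)  # 10.0

# Flipped around to size a concurrency limit: with a cap of 64
# concurrent requests at 50 ms each, the maximum sustainable
# throughput before the queue grows without bound is:
concurrency_limit = 64
max_throughput = concurrency_limit / avg_latency
print(max_throughput)  # 1280.0 requests/second
```

Past that 1280 req/s point, arrivals outpace departures and latency climbs without limit, which is exactly why you cap concurrency rather than queue depth alone.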
Can't agree more. The failure mode I see again and again in asynchronous services is not limiting queue sizes, including the request queue. You never want an infinite queue. Honestly, you usually don't want queues either, you want stacks, because stacks prioritize liveness, not fairness. If someone shows up with a thousand things to do, a stack ensures that everyone who shows up afterward with the odd request still gets priority.
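A tiny sketch of that liveness difference, with a made-up burst workload: one "bulk" client submits a thousand jobs, then five other clients each submit one. Under FIFO the latecomers starve behind the burst; under LIFO they're served immediately:

```python
from collections import deque

# Hypothetical workload: client "bulk" enqueues 1000 jobs,
# then clients "a" through "e" each enqueue one small job.
jobs = [("bulk", i) for i in range(1000)] + [(c, 0) for c in "abcde"]

fifo = deque(jobs)   # queue: fairness by arrival order
lifo = list(jobs)    # stack: liveness for latecomers

# Who gets served in the next five slots?
fifo_served = [fifo.popleft()[0] for _ in range(5)]
lifo_served = [lifo.pop()[0] for _ in range(5)]

print(fifo_served)  # ['bulk', 'bulk', 'bulk', 'bulk', 'bulk']
print(lifo_served)  # ['e', 'd', 'c', 'b', 'a']
```

The stack trades the bulk client's fairness for everyone else's responsiveness, which is usually the right trade when the alternative is every later arrival waiting behind the whole burst.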