Rant time. I'm pissed off with Linux on the Desktop.

I've been using Linux for a long time--since 0.97.1, in August of 1992. I rewrote chunks of the X server to get it to work better with my video card. I wrote drivers for cheesy video cameras, fixed broken system calls, and so forth. I know Linux. I've been running Linux servers professionally since 1995 or so. At Internap, I ran over 700 Linux boxes, including a ton of desktops. My primary desktop system was a Linux box from 1993 until I bought a PowerBook in 2002; I've probably only had a Windows box running at home for a month or so since 1993.

I've been around Apple systems since 1983 or so--I had an Apple //e, and I lusted after their hardware for years, but I ended up buying a faster PC for way less money in 1988, and I didn't look back until early 2000. My wife was complaining that it was hard for her to use my Linux box, because it was perpetually just slightly broken. Things like incomplete kernel upgrades, broken X servers, and flaky copies of Netscape kept her away from her email. I spent all day keeping the computers at work running; I didn't want to spend the rest of my time fixing computers at home. Plus, we'd just had our first child, and Gabe was suffering from a lack of tacky home video footage. So Gabe and I went out to kill two birds with one stone: we bought a graphite-colored iMac DV and a DV camcorder. I added a wireless networking base station and card for the Mac, and it was able to work pretty much anywhere in the house. My wife could read her mail and surf the web, and I could leave broken Linux boxes sitting in the computer room. Everyone was happy.

Sort of. The problem was that the Mac ran OS 9. No matter what Apple people claim, OS 9's core is about on par with Windows for Workgroups, circa 1993. It's awful. It's not a modern OS by any metric: no memory protection, no real multitasking, weird networking, and (of course) no command prompt. It tended to crash a couple of times a week, plus I hated using it, just on general principles. But it was never really broken, because I never wanted to tweak anything on it.

In late 2001, Cyn was griping about an irritating crash of something or other, and wishing for Emacs and ssh while we were out driving, and I remembered that OS X 10.1 was shipping and was supposed to be usable. So we dropped by CompUSA and grabbed a copy, and it was nice. I liked it because it was a real OS (it ships with OpenSSH; that's usually real enough for me). She liked it because little crashes didn't take down the whole system. A few months later, I decided that I needed a laptop and bought a PowerBook G4. I wanted a machine that would let me (a) work (which means mostly SSH, X, and a web browser), (b) run Photoshop, and (c) watch DVDs while traveling. On a PC, I'd have had to dual-boot to do (b) and (c), while the Mac could do all three at the same time without problems. So, since I'd spent over $2,000 on the laptop, I decided that I was actually going to use it, not just let it gather dust, and started turning off my Linux desktops at home and at work.

And, bizarrely, I was happy. I've avoided treating the Mac like a Unix box. I've limited the amount of Unix cruft that I've drizzled through the filesystem, although I have X and XEmacs installed. I do 90% of my file management through the shell, and I use rsync and scp all the time--I'm not glued to the GUI, but I enjoy the working environment. Plus, tons of stuff just works, without my needing to spend hours fiddling with it. The system address book syncs correctly with my cell phone. My calendar on the phone syncs with the computer, which syncs with my wife's at home. Some things, like iTunes, are amazingly right, while others are still a bit flaky, but all in all it's the most usable Unix I've ever seen.

Which brings me, in a roundabout manner, to the point that I was starting with. Under the hood, Linux is quite a bit more capable than OS X. It's faster and cheaper, and it runs on nearly every hardware platform known to man. It's wonderfully flexible for servers. On the desktop, though, it's just too flexible. I built my first Linux desktop box in over a year this weekend, with Debian and KDE 3.1. After fighting the usual fight with Debian's installer, I was able to get X and KDE working after a couple of hours (missing drivers, broken dependencies in sid, nothing that I can't handle, and most of that was Debian-related, not really anything endemic to Linux itself). However, when I was done, I was still left with a hodge-podge of mostly interoperable programs that all worked just a little differently. KDE's web browser and Mozilla have a hard time printing to the same printer. KDE apps seem to understand the multimedia keys on the keyboard, but Mozilla doesn't. Sub-pixel antialiasing is set up wrong, and leaves a colored fringe on letters on the cheap LCD that we're using. There's nothing like iTunes, which is wonderfully simple to use, yet still manages to just work. Instead, I can accomplish the same basic things, but it takes 2-3 times as much work. But, in exchange for this, I can do it in 15 different ways.
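
(For what it's worth, the fringe is fixable, but the fix is a good example of the problem. Assuming the desktop is rendering fonts through fontconfig/Xft--a reasonable guess for KDE 3.1, though your setup may differ--you hand-edit an XML file, something like this sketch of a ~/.fonts.conf, and guess at your panel's sub-pixel order:

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <!-- Guess at the LCD's sub-pixel order; try "bgr" or "none"
           instead if "rgb" still leaves a colored fringe. -->
      <match target="font">
        <edit name="rgba" mode="assign"><const>rgb</const></edit>
      </match>
    </fontconfig>

Then you restart your apps and squint at the screen to see whether you guessed right.)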

That's not really a step in the right direction.

On Friday night, we went out for Chinese food, and I watched the waitress add up our bill on paper with a calculator. I started to wonder why they didn't use a computer--there are tons of things a computer could help with, beyond just adding the numbers up right. One local burger drive-in takes orders on iPaqs with wireless cards and beams the orders back to the kitchen, shaving a minute or two off of each order. So why doesn't the Chinese place do this? Because it's freakishly complex and expensive. What are the odds that their computer would work perfectly, without failing, all year? What happens when (not if) it dies? Can they fix it in-house, or do they need to wait for a consultant to show up? What do they do while it's down?

After a couple of minutes, it seems obvious that paper and a calculator are a better approach for this place, and quite possibly for most non-chain restaurants, because they can't afford the incredible cost of keeping computers working.

I'm not saying that buying computers from Apple would make their lives easier (although it probably would, a little); I'm saying that pretty much everything computer-related right now is too complex and too prone to breaking. Computers are brittle, they can't fix themselves, and once they break, it takes an expert to un-break them. There's no single fix for that, but I've seen a few things that help.

1. Don't be too flexible. Understand the problem that you're working on, find a good model, and then stick to it. My two favorite pieces of software right now, iTunes and the TiVo, both succeed by making it easy to do what you want to do without providing excessive flexibility. Compare that to KDE on Linux--how many ways to burn a CD do we really need?

2. Software breaks, computers break, but there's no reason for them to remain broken. Look at TiVo, or at Internap's reference system--in either case, the system software for each box is at least partially self-repairing. At Internap, you could overwrite system files and libraries, and odds were the box would be repaired and returned to service without anyone ever knowing. Even if the box died completely, we could build a new one and restore the old data exactly within minutes. (The first sketch after this list shows the kind of check-and-restore loop I mean.) Appliances like TiVo need to behave the same way--they need to keep low-level problems from turning into high-level problems that the user can see.

3. Virtualize and separate. Something else that we did at Internap that helped was to separate different services onto different physical servers. That's pretty common at companies that care about reliability: if one server dies, it only takes out one service. You then deploy redundant servers for each service, and things tend to keep working through hardware faults. Software faults still kill you, though. In my ideal world, software would take that even further; I'd love to manage a system made out of Smalltalk-like images, where each logical service was entirely contained within a system image that could be copied around over the network, without any external dependencies. Assembling a network's worth of services would then become an exercise in bolting together components (the second sketch below), and the development side of administration would be mostly creating components.
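
To make the self-repair idea in point 2 concrete, here's a minimal sketch in Python. Everything in it--the manifest format, the paths, the notion of keeping pristine "golden" copies on disk--is a hypothetical stand-in for whatever Internap's reference system actually did; the point is just the loop: verify, notice drift, restore.

    #!/usr/bin/env python
    # Sketch of self-repairing system files: compare each file against
    # a checksum manifest and restore anything that has drifted from a
    # pristine "golden" copy. Paths and manifest format are made up.
    import hashlib
    import os
    import shutil

    MANIFEST = "/var/lib/repair/manifest"  # lines: "<sha1>  <path>"
    GOLDEN = "/var/lib/repair/golden"      # tree of pristine copies

    def sha1(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def repair():
        with open(MANIFEST) as manifest:
            for line in manifest:
                want, path = line.split(None, 1)
                path = path.strip()
                # A file that's missing or altered gets quietly restored.
                if not os.path.exists(path) or sha1(path) != want:
                    shutil.copy2(os.path.join(GOLDEN, path.lstrip("/")), path)

    if __name__ == "__main__":
        repair()  # run it from cron, and low-level drift never becomes visible

Run something like this every few minutes, and overwriting a system library becomes a non-event instead of an outage.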
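
And here's an equally hypothetical sketch of point 3: if every service really were a single self-contained image, deployment would collapse into "copy the file, start it." The image names, hosts, and the remote run-image command below are all invented; scp and ssh are the only real tools in it.

    # Sketch of "bolting together components": each service is one
    # self-contained image file, so deploying it is just copying it to
    # a host and starting it. Hosts, image names, and the remote
    # "run-image" command are all hypothetical.
    import os
    import subprocess

    SERVICES = {
        "mail": {"image": "images/mail.img", "host": "box1"},
        "web":  {"image": "images/web.img",  "host": "box2"},
        "dns":  {"image": "images/dns.img",  "host": "box3"},
    }

    def deploy(name):
        svc = SERVICES[name]
        dest = "/srv/" + os.path.basename(svc["image"])
        # Copy the whole image over the network; nothing else to install,
        # because the image carries all of its own dependencies.
        subprocess.check_call(["scp", svc["image"], svc["host"] + ":" + dest])
        # Start it remotely; moving a service is just editing the table above.
        subprocess.check_call(["ssh", svc["host"], "run-image", dest])

    for name in SERVICES:
        deploy(name)

Administration becomes editing that table, and development becomes building the images.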

I need to practice short, coherent rants.