Painfully slow disk access

A co-worker just pointed this out. On my Powerbook (a G4/550, 768 MB, 40 GB 4200 RPM drive), running ‘find /usr’ takes around 30 seconds every time I run it:

tibook$ time find /usr | wc
   48402   48402 2390367

real    0m30.041s
user    0m0.410s
sys     0m2.620s
tibook$ time find /usr | wc
   48402   48402 2390367

real    0m32.034s
user    0m0.450s
sys     0m2.710s

On the other hand, one of my Linux boxes at home (Athlon 700, 384 MB, old Maxtor 5 GB drive) isn’t any faster as hardware goes, and its first find is actually slower, but it’s able to do repeated finds much more quickly:

debian# time find /usr  | wc
 124088  124108 5869110

real    1m43.631s
user    0m0.680s
sys     0m1.170s
debian# time find /usr  | wc
 124088  124108 5869110

real    0m2.090s
user    0m0.530s
sys     0m0.700s

Notice that repeated finds drop from 103 seconds to 2 seconds on the Linux box, while they stay around 30 seconds on the Mac, even though the Mac has twice the RAM of the PC.

I’m assuming that OS X is restricting the amount of RAM used for disk caching, but it’s really painful in this case.
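
For comparison, it’s easy to watch Linux cache the metadata across runs. Here’s a rough sketch of how to see it (the “cached” column in free’s output is the page cache, and the exact numbers will vary):

debian# free -m                # note the "cached" column
debian# find /usr > /dev/null  # the first run pulls everything off disk
debian# free -m                # "cached" has grown; the next find runs from RAM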

Posted by Scott Laird Wed, 05 May 2004 01:32:21 GMT


Xorp

CNET is running a blurb on Xorp, the eXtensible Open Router Platform. According to the article, Xorp’s authors are hailing it as “the Linux of routing.” Since open-source router platforms are one of my interests, and I’d never really heard of Xorp, I just took a quick look and was pleasantly surprised.

It’s based on the Click modular router from MIT, which I’ve been fascinated with for years. Click is largely a replacement for Linux’s (and potentially other free OSes’) networking stack. It slides in between the hardware network interface drivers and moves the kernel’s native packet handling off to the side. Click’s packet processing is completely programmable; you can write network switches, routers, VPN hardware, mesh routers, programmable network test hardware, or nearly anything else with Click, all without fighting against the kernel’s native packet handling. More importantly, Click is freakishly fast; one of the demos I read about a couple of years ago was handling roughly 1 million packets per second on a normal dual-CPU PC.

The big problem with Click was that it was difficult to configure. You had to understand IP networking at a very low level in order to make heads or tails of it, and even a simple 2-port router took most of a page to configure. Multiport routers or switches were doable, but you wouldn’t want to set them up by hand.
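
For a sense of the flavor, here’s roughly what the simplest useful Click configuration looks like: a one-way repeater that blindly copies packets from one interface to another. FromDevice, Queue, and ToDevice are standard Click elements; the interface names and queue size are placeholders I picked for the sketch.

// Sketch only: shuttle every packet from eth0 out eth1.
// FromDevice pushes packets into the Queue; ToDevice pulls them back out.
FromDevice(eth0) -> Queue(1024) -> ToDevice(eth1);

A real IP router layers classifiers, ARP handling, TTL decrement, and route-lookup elements on top of that, which is how even a two-port configuration ends up filling a page.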

Unless I’m misreading things, this is where Xorp comes in: it’s a full wrapper for Click. It looks like it provides a CLI, dynamic routing support, and a simple configuration language. Xorp’s backend then sets Click up and off it goes, flinging packets around at freakish speed (er, freakish for a PC, that is; I wouldn’t put it up against a Catalyst 6000 or faster).

Xorp 1.0 is due out soon. I haven’t downloaded it yet, or even looked through more than one whitepaper, but I’m looking forward to playing with it. Assuming that someone goes to the trouble of gluing this together with a small Linux distribution (I think it’s Linux-based; their website and whitepaper don’t say, but Click only works on Linux and FreeBSD, and the FreeBSD layer has been broken for a while), it should be an easy way to build cheap mid-speed routers.

Posted by Scott Laird Mon, 19 Apr 2004 19:00:54 GMT


Linus has a G5?

It looks like support for Apple’s PowerMac G5 was added to Linux today. That’s nice, but I’m not really sure why you’d spend the extra money for a G5 over an Opteron if you’re planning on running Linux. The interesting bit was Linus’s reply; apparently there were a couple small bugs that he had to fix after merging, but then he said:

Anyway, with that fixed, it will compile and appears to work on the G5. Thanks. Although I did see it hang when I inserted a USB keyboard (in addition to the X problem). Hmm.

So Linus has a G5 handy. His or OSDL’s?

Posted by Scott Laird Thu, 12 Feb 2004 23:47:35 GMT


Linux on the Desktop

Rant time. I’m pissed off with Linux on the Desktop.

I’ve been using Linux for a long time, since 0.97.1 in August of 1992. I rewrote chunks of the X server to get it to work better with my video card. I wrote drivers for cheesy video cameras, fixed broken system calls, and so forth. I know Linux. I’ve been running Linux servers professionally since 1995 or so. At Internap, I ran over 700 Linux boxes, including a ton of desktops. My primary desktop system was a Linux box from 1993 until I bought a PowerBook in 2002; I’ve probably only had a Windows box running at home for a month or so since 1993.

I’ve been around Apple systems since 1983 or so: I had an Apple //e, and I lusted after their hardware for years, but I ended up buying a faster PC for way less money in 1988, and I didn’t look back until early 2000. My wife was complaining that it was hard for her to use my Linux box, because it was perpetually just slightly broken. Things like incomplete kernel upgrades, broken X servers, and flaky copies of Netscape kept her away from her email. I spent all day keeping the computers at work running; I didn’t want to spend the rest of my time fixing computers at home. Plus, we’d just had our first child, and Gabe was suffering from a lack of tacky home video footage. So Gabe and I decided to kill two birds with one stone: we bought a graphite-colored iMac DV and a DV camcorder. I added a wireless networking base station and card for the Mac, and it was able to work pretty much anywhere in the house. My wife could read her mail and surf the web, and I could leave broken Linux boxes sitting in the computer room. Everyone was happy.

Sort of. The problem was that the Mac ran OS 9. No matter what Apple people claim, OS 9’s core is about on par with Windows for Workgroups, around 1993 or so. It’s awful. It’s not a modern OS by any metric, with no memory protection, no real multitasking, weird networking, and (of course) no command prompt. It tended to crash a couple times per week, plus I hated using it, just on general principles. But, it was never really broken, because I never wanted to tweak anything on it.

In late 2001, Cyn was griping about an irritating crash of something or other, and wishing for Emacs and ssh while we were out driving, and I remembered that OS X 10.1 was shipping and was supposed to be usable. So we dropped by CompUSA and grabbed a copy, and it was nice. I liked it because it was a real OS (it ships with OpenSSH; that’s usually real enough for me). She liked it because little crashes didn’t take down the whole system. A few months later, I decided that I needed a laptop and bought a PowerBook G4. I wanted a machine that would let me (a) work (which means mostly SSH, X, and a web browser), (b) run Photoshop, and (c) watch DVDs while traveling. On a PC, I’d have had to dual-boot to do (b) and (c), while the Mac could do all three at the same time without problems. So, since I’d spent over $2,000 on the laptop, I decided that I was actually going to use it, not just let it gather dust, and started turning off my Linux desktops at home and at work.

And, bizarrely, I was happy. I’ve avoided treating the Mac like a Unix box, and I’ve limited the amount of Unix cruft that I’ve drizzled through the filesystem, although I have X and XEmacs installed. I do 90% of my file management through the shell, and I use rsync and scp all the time. I’m not glued to the GUI, but I enjoy the working environment. Plus, tons of stuff just works, without my needing to spend hours fiddling with it. The system address book syncs correctly with my cell phone. My calendar on the phone syncs with the computer, which syncs with my wife’s at home. Some things, like iTunes, are amazingly right, while others are still a bit flaky, but all in all it’s the most usable Unix I’ve ever seen.

Which brings me, in a roundabout manner, to the point that I was starting with. Under the hood, Linux is quite a bit more capable than OS X. It’s faster and cheaper, and it runs on nearly every hardware platform known to man. It’s wonderfully flexible for servers. On the desktop, though, it’s just too flexible. I built my first Linux desktop box in over a year this weekend, with Debian and KDE 3.1. After fighting the usual fight with Debian’s installer, I was able to get X and KDE working after a couple of hours (missing drivers, broken dependencies in sid, nothing I couldn’t handle, and most of that was Debian-related, not really anything endemic to Linux itself). However, when I was done, I was still left with a hodge-podge of mostly interoperable programs that all worked just a little differently. KDE’s web browser and Mozilla have a hard time printing to the same printer. KDE apps seem to understand the multimedia keys on the keyboard, but Mozilla doesn’t. Sub-pixel antialiasing is set up wrong, and leaves a colored fringe on letters on the cheap LCD that we’re using. There’s nothing like iTunes, which is wonderfully simple to use, yet still manages to just work. Instead, I can accomplish the same basic things, but it takes two to three times as much work. But, in exchange for this, I can do it in 15 different ways.

That’s not really a step in the right direction.

On Friday night, we went out for Chinese food, and I watched the waitress add up our bill on paper with a calculator. I started to wonder why they didn’t use a computer–there are tons of opportunities that a computer could help with, besides just adding the numbers right. One local burger drive-in takes orders on iPaqs with wireless cards, and beams the orders back to the kitchen, shaving a minute or two off of each order. So why doesn’t the Chinese place do this? Because it’s freakishly complex and expensive. What are the odds that their computer would work perfectly without failing all year? What happens when (not if) it dies? Can they fix it in-house, or do they need to wait for a consultant to show up? What do they do when it’s down?

After a couple of minutes, it seems obvious that paper and calculators are a better approach for this place, and quite possibly for most non-chain restaurants, because they can’t afford the incredible cost of keeping their computers working.

I’m not saying that buying computers from Apple would make their lives easier (although it probably would, a little); I’m saying that pretty much everything computer-related right now is too complex and too prone to breaking. Computers are brittle, and once they break, they can’t fix themselves; it takes an expert to un-break them. There’s no single fix for that, but I’ve seen a few things that help.

1. Don’t be too flexible. Understand the problem that you’re working on, find a good model, and then stick to it. My two favorite pieces of software right now, iTunes and the TiVo, both succeed by making it easy to do what you want to do without providing excessive flexibility. Compare that to KDE on Linux: how many ways to burn a CD do we really need?

2. Software breaks, computers break, but there’s no reason for them to remain broken. Look at TiVo, or at Internap’s reference system: in either case, the system software for each box is at least partially self-repairing. At Internap, you could overwrite system files and libraries, and odds were the box would be repaired and returned to service without anyone ever knowing (there’s a sketch of the idea after this list). Even if the box died completely, we could build a new one and restore the old data exactly within minutes. Appliances like TiVo need to behave the same way; they need to keep low-level problems from turning into high-level problems that the user can see.

3. Virtualize and separate. Something else that we did at Internap that helped was to separate different services onto different physical servers. That’s pretty common at companies that care about reliability; if one server dies, it only takes out one service. You then deploy redundant servers for each service, and things tend to keep working through hardware faults. Software faults still kill you, though. In my ideal world, software would take that even further; I’d love to manage a system made out of Smalltalk-like images, where each logical service was entirely contained within a system image that could be copied around over the network, without any external dependencies. Assembling a network’s worth of services would then become an exercise in bolting together components, and the development side of administration would be mostly creating components.
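
To make point 2 concrete: the mechanics don’t have to be fancy. Here’s a rough sketch of the Internap-style self-repair loop, assuming a known-good system image sits on a build server somewhere; the host name, rsync module path, and exclude list are all made up for the example.

#!/bin/sh
# Sketch: re-sync the local system against a golden image.
# "buildserver" and the image path are hypothetical. Anything someone
# overwrote gets put back; per-host state is excluded so it survives.
rsync -a --delete \
    --exclude=/etc/hostname \
    --exclude=/var/ \
    buildserver::images/webserver/ /

Run something like that from cron every few minutes and most low-level damage quietly disappears before anyone ever notices it.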

I need to practice short, coherent rants.

Posted by Scott Laird Wed, 03 Sep 2003 01:41:06 GMT