What’s worse than the sound of one hard drive going “click, click?” Why, two drives going “click, click, click” in the same RAID 5 array, of course.
I’m not very happy with my little Infrant NAS box right now. I think I’ve had it with RAID 5–if I’m going to pile my life onto a disk array, then I really want something that can survive a 2-drive failure without croaking, and that’s basically impossible in a 4-drive enclosure.
I’m seriously considering replacing the Infrant with an OpenSolaris box running ZFS over RAID-Z2 with 6–10 drives; that should live through 2-drive failures, right? Anyone feel the need to talk me out of it?
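To sanity-check the appeal of RAID-Z2, here’s a quick sketch of the capacity/fault-tolerance tradeoff. This assumes equal-sized drives and ignores ZFS metadata overhead, so it’s rough math rather than a real capacity planner:

```python
# Rough capacity/fault-tolerance comparison: RAID 5 vs. RAID-Z2.
# Assumes equal-sized drives; ignores ZFS metadata overhead.

def usable_drives(total_drives, parity_drives):
    """Drives left for data after parity; the array survives
    up to `parity_drives` simultaneous drive failures."""
    return total_drives - parity_drives

# 4-drive RAID 5: one drive's worth of parity, dead on the second failure.
print(usable_drives(4, 1), "data drives, survives 1 failure")

# 6- to 10-drive RAID-Z2: two drives' worth of parity, survives any 2 failures.
for n in range(6, 11):
    print(f"{n}-drive raidz2: {usable_drives(n, 2)} data drives, survives 2 failures")
```

So a 6-drive RAID-Z2 pool gives up the same fraction of raw capacity as a 4-drive RAID 5 (one third), but lives through a second failure.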
Like most of the people I know, I spend at least 60 hours per week staring at text on a monitor. These days, I’m mostly using 24” Dell LCDs–3 at work and 1 at home. They’re nice, but I keep finding myself wishing I had something bigger, with more pixels. Unfortunately, today’s batch of 30” LCDs doesn’t quite cut it–they all require a dual-link DVI connector, and of the 5 devices that I’ve plugged into my LCDs lately, only 1 can drive a 30” screen at full resolution. Half of them won’t even plug in at all. I want something that I can use at a high resolution for computer work, and then fall back to HD or even lower to play games or watch movies.
It looks like Westinghouse is going to announce something cool at CES: a 52” 3840x2160 LCD TV. My math says that the screen will be roughly 45.25”x25.5” at about 84 DPI. That’d make an awesome full-desk monitor, although the DPI’s a bit low and the corners would be difficult to use for anything that you needed to focus clearly on. Since it’s a TV, I can only assume that it’ll have The Mother of All Video Scaling Chips in it, so my Xbox 360 will still work with it, even if it’s only at 720p. Assuming that the computer side is about like the IBM/Viewsonic/Iiyama 3840x2400 displays that were on the market for $7,000 a couple years ago, it’d do around 25 Hz with a single dual-link DVI and 50-60 Hz with 2 dual-link connectors. I could probably live with that. Sure, there are a few shortcomings, but I’m sure I’d find a way to cope if someone felt the need to mail me one.
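For anyone who wants to check my math, the panel dimensions and DPI fall straight out of the diagonal and pixel counts, assuming square pixels and a 16:9 aspect ratio:

```python
import math

# Back-of-envelope check on the 52" 3840x2160 panel numbers.
# Assumes square pixels, so the aspect ratio is just 3840:2160 (16:9).
def panel_dimensions(diagonal_in, h_px, v_px):
    """Return (width_in, height_in, dpi) for a flat panel."""
    aspect = h_px / v_px
    height = diagonal_in / math.sqrt(aspect ** 2 + 1)
    width = height * aspect
    return width, height, h_px / width

w, h, dpi = panel_dimensions(52, 3840, 2160)
print(f'{w:.2f}" x {h:.2f}" at {dpi:.1f} DPI')  # ~45.3" x 25.5" at ~84.7 DPI
```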
This site actually lives on a box under my desk at home and talks to the world via DSL with a static IP address. For the past three years or so, it’s been running on a 700 MHz Athlon with 768 MB of RAM and a 5 GB hard drive (from 1997!). The old box has been working hard to keep up, but there’s just too much going on between mail, Asterisk, multiple blogs, and a zillion other services, and there are a few tools that I’d love to write that it simply couldn’t handle.
So, it’s time for new hardware. I have an Athlon 64 sitting in my home office mostly unused, so I’m going to swap it into service next month, after I finish travelling. The new box will be an Athlon X2 3800+ with 3 GB of RAM and a pair of 250 GB drives. That’s probably an 8x jump in performance over the old hardware. I’m going to try to use it with Xen, so I can spawn sites off onto their own virtual server for better isolation, but I’m not sure how well Asterisk will cope. It’s basically a realtime app that needs direct access to a pair of PCI telephony cards, so it’s not all that easy to virtualize.
So pretty much nothing worked yesterday.
It started with an 8:00 phone call over VoIP that only had one-way audio. Then my 11:00 phone call gave a potential client an “all circuits busy” instead of ringing through to my VoIP phone. I’ve been using Asterisk for almost 18 months, and this is the first time that I’ve ever seen either of these, so I spent a while trying to reproduce the problems and sending support email off to two different VoIP providers.
After that, I finally started in on The Evil Thing–I bought a copy of XP and I was planning on installing it on a spare PC so I could test Typo with IE 6. How hard could it be, right?
I gave up on it at 8:00 PM.
Here’s a short list of things that went wrong:
I couldn’t find an old Windows CD to use to make the upgrade test on my XP CD happy. I should have CDs for ‘98 and 2000 sitting here somewhere, but I couldn’t find either. I had to borrow one to make XP’s installer happy. It would have been easier to download a cracked copy than to use the legitimate version and fight with its copy and licensing protection.
Once I got past the upgrade test, the installer refused to format my hard drive. No matter which options I picked (full disk or small partition, NTFS or FAT, quick format or full format), it would always die within 5 seconds with a “Setup was unable to format the partition” error. The error suggests that I check the power on my external SCSI drive. Since I’m installing onto a completely standard 80 GB internal IDE drive, the error isn’t very helpful. Digging around a bit, bad IDE cables and bad CD drives seem to be the most common causes for this error. Since this is an old box that I put together from spare parts, the system is using old 40-conductor IDE cables; I need to swing by a store and pick up a couple 80-conductor IDE cables. Maybe that will help.
For the fun of it, I tried booting my borrowed XP disk (the one that I was using to pass the upgrade test), and *it* partitioned the drive without any problems. Unfortunately, it refused to take my license key. The nice hologrammed one that came directly from Microsoft. Apparently my key is just good for XP Pro Upgrade CDs that come with SP2 pre-installed or something. Rebooting with my CD put me right back into formatting limbo.
I swear, I should have just downloaded and installed a cracked version–I would have been done early yesterday afternoon.
I need a new keyboard and mouse to use with my Mac, and I'm looking for recommendations. Right now my PowerBook is spending most of its life on my desk at home, plugged into an external monitor, drives, and network, and I'd like to throw a real keyboard into the mix. My PowerBook keyboard is getting old (over 3.5 years now), the down-arrow key is starting to stick, and it makes weird clicking noises sometimes. I'd like to replace the PowerBook soon, but for now I'm more interested in having a keyboard that I can be productive with without accruing a couple grand in debt.
I spent a few minutes at the store yesterday and found three things:
- I don't really like the feel of Apple's current keyboards.
- Microsoft's Optical Desktop Elite has a decent-feeling keyboard, and I like the idea of having a scrollwheel on the keyboard. I have no idea if I'd ever actually use it, though.
- Logitech's LX500 package has an okay keyboard, and it's around $25 for a keyboard and mouse after rebates.
All of the other keyboards that they had on display sucked. Most keyboards are too mushy for me. My personal favorite is from NMB--they used to make a really nice keyboard with real keyswitches that clicked a bit while typing, but not as bad as the beasts that IBM used to make. Unfortunately, I can't find anything like that on the market anymore. Apparently mushy is cheaper, so no one stocks non-mushy keyboards anymore.
So, I'm basically looking for three things:
- Has anyone used either of the keyboards that I listed with a Mac? Did they work okay, or do they need some nasty driver before anything works right?
- Does anyone know where I could find a modern USB keyboard (ideally with a Mac option key instead of a Windows key) that uses real springs instead of a membrane?
- Does anyone have any other keyboards that work well that they'd recommend?
- There will be a VGA cable available for the Xbox 360. It’ll cost $40, but that means that the 360 will be playable with existing computer monitors and projectors. This is something that Microsoft screwed up with the original Xbox design–they worked so hard to convince people that it wasn’t a PC that they left out a number of features that would have been useful, like real USB ports and a VGA connector.
- The Xbox 360/Xbox 360 Core System comparison chart in the middle shows “Play Original Xbox Games” as one of the comparison items. The hard drive is required for Xbox compatibility. This has been rumored for months, but this is the first official statement that I’ve seen. Further down it says that “top-selling” games can be played on the 360, including Halo 2.
The Inquirer reports that Intel’s Tukwila chip is going to have an on-board memory controller, just like all of AMD’s newer chips. Tukwila is a multi-core Itanium, and is due sometime in 2007; the Inquirer suggests that Xeons will probably get on-board memory controllers in the same basic timeframe, simply because this will let Intel use the same controller chips for both Xeon and Itanium systems.
Assuming that the rumor is true (and considering how well AMD’s on-board controller works, I’d be surprised if it’s not), Intel will probably end up putting 4-6 FB-DIMM channels per CPU; since each channel’s good for around 10 GB/sec, a dual-chip system could potentially have 120 GB/sec in memory bandwidth. Even better, it’d be possible to build a high-capacity server with 48 DIMM sockets spread over the 12 channels; with 4 GB DIMMs, that’s 192 GB in a relatively simple box.
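The bandwidth and capacity figures above are simple multiplication; here’s the arithmetic spelled out, assuming the high end of 6 FB-DIMM channels per CPU and 4 DIMM slots per channel:

```python
# Sanity-check the speculative dual-chip FB-DIMM numbers above.
# Assumed: 6 channels per CPU at ~10 GB/sec each, 4 DIMM slots per channel.

channels_per_cpu = 6
gb_per_sec_per_channel = 10
cpus = 2

total_channels = channels_per_cpu * cpus              # 12 channels
bandwidth = total_channels * gb_per_sec_per_channel   # 120 GB/sec aggregate

dimms_per_channel = 4
dimm_size_gb = 4
capacity = total_channels * dimms_per_channel * dimm_size_gb  # 48 DIMMs, 192 GB

print(f"{bandwidth} GB/sec across {total_channels} channels, {capacity} GB max")
```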
This assumes that multi-CPU systems remain common; given the way that multiple core systems are progressing, I’m not sure that there will really be a market for commodity multiple-CPU-chip systems after 2007 or so–if you can get 8 cores on a single chip, why would you pay the complexity cost of adding more chips, except for really high-end stuff? Even today, compare the cost and performance of an Athlon 64 X2 vs a system with 2 single-core Opteron 2xx chips–the Opteron system will have a bit more memory bandwidth, but they’ll have similar performance on a lot of workloads, and an Athlon 64 X2 with a cheap motherboard will be cheaper than most dual-CPU Opteron motherboards, never mind the CPUs.
Dual-CPU systems have been the bread and butter of the PC server world for the last 5-7 years, but I doubt that they have more than another two years to go before they fade into the sunset. Personally, I’d much rather manage a handful of single-chip 8-core clustered, virtualized (where virtual environments can migrate between physical systems under explicit admin control) systems than a smaller number of 2-4 CPU 16-32 core systems.
The Inquirer has a photo of one of Broadcom’s upcoming SAS (Serial Attached SCSI, basically SCSI over the SATA physical layer) cards. The interesting thing about this card is that it’s both a PCI-E and a PCI-X card–you can flip it over and plug it in either way.
This isn’t the first time I’ve seen this sort of thing done–there were ISA/MCA cards on the market for a while in the late 80s–but they’ve always been extremely rare. I doubt Broadcom will have this card on the market for very long, because it’s almost certainly cheaper to make two different models than one model with two different interfaces.
I’m really interested in seeing how SAS hardware is priced, because it could be extremely useful in low-end servers. Unlike old parallel SCSI, SAS is a point-to-point network–no daisy-chaining drives on a single cable. Unlike SATA, though, SAS is designed to support “fan-out” devices, so you can plug multiple drives into a single controller channel. Supposedly it’s possible to plug SATA drives into a SAS controller; if it’s possible to plug SATA drives into a SAS fan-out enclosure, then we’d get the best of both worlds–the ability to buy big, cheap (but slow) SATA drives and the ability to hang a dozen or so drives off of a single server without needing a dozen different cables. I don’t know if any vendors will be aggressive enough with their pricing to make this cost-effective, though.
As mentioned earlier, Intel has been making noises about improving network I/O on PC servers. Today, at IDF, they released a few details on their plans. Apparently the presentation itself was good, but their web documentation is slim on details. Lennert Buytenhek summarized the important details, centering on the threading improvements:
[…] Rather than providing multiple hardware contexts in a processor like Hyper-Threading (HT) Technology from Intel, a single hardware context contains the network stack with multiple software-controlled threads. When a packet thread triggers a memory event a scheduler within the network stack selects an alternate packet thread and loads the CPU execution pipeline. Processing continues in the shadow of a memory access. […] Stall conditions, triggered by requests to slow memory devices, are nearly eliminated.
This isn’t exactly like the IXP2800, but there are some distinct similarities. In essence, it looks like Intel wants to provide the OS with the ability to task-switch on cache misses. I’m not sure that current OSes can switch threads much faster than the CPU can handle a cache miss, so this will be interesting to follow. I suspect that you could switch fast enough if you don’t touch the TLB or most of the CPU mode bits.
Intel also points out that with 10 GbE, just mitigating the effect of cache misses by processing multiple packets in parallel isn’t enough–packets actually arrive faster than the computer can fetch data from main memory–with 64 byte packets at 10 Gbps, a new packet arrives every 51.2 ns, which isn’t even long enough for a single main-memory access. According to Intel, normal packet processing requires 5 main memory reads. Intel’s fix for this is to add the ability to DMA directly into the CPU’s cache, and then add support for offloading memory copies onto the memory controller itself.
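The 51.2 ns figure is just the time a minimum-size packet spends on the wire (ignoring preamble and inter-frame gap), and it’s easy to see why 5 memory reads per packet can’t keep up. The ~60 ns memory latency below is my assumption, not Intel’s number:

```python
# Where the 51.2 ns packet-arrival figure comes from:
# wire time for a minimum-size packet at 10 Gbps
# (ignoring preamble and inter-frame gap).

packet_bytes = 64
link_bps = 10e9  # 10 Gbps

arrival_ns = packet_bytes * 8 / link_bps * 1e9
print(f"{arrival_ns:.1f} ns per packet")

# Against Intel's 5 main-memory reads per packet, at an assumed
# ~60 ns per read, the CPU falls well behind on back-to-back packets.
memory_reads = 5
mem_latency_ns = 60  # assumed round-trip latency to main memory
stall_ns = memory_reads * mem_latency_ns
print(f"~{stall_ns} ns of memory stalls per {arrival_ns:.1f} ns packet slot")
```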
While Intel is aiming at improving network performance, I suspect that other types of processing may see big improvements from the planned changes. Video compression, for instance, can have horrible cache performance; I saw a study a while back that showed P4s running an MPEG-2 codec averaging one instruction every 5 cycles during part of the processing, or way under 10% of what the CPU is capable of. A video codec that could compress several macroblocks at once, switching between them on cache misses, could easily see big speed boosts.
So, I’ve been thinking about the new Mac mini. I could definitely use a couple new computers at home, and I’d be happiest with new Macs. They’d fit in well with my Powerbook and our dying old iMac. The Mac mini is certainly cheaper than older models, but the pricing is kind of deceptive. Yeah, you can get a model for $499, but by the time you bump the hard drive up to 80 GB, add a DVD burner, and add a reasonable amount of (third-party) memory, it’s pushing $1,000 all of a sudden. More specifically:
- Mac mini, 1.43 GHz/80 GB model: $599
- upgrade to Superdrive: $100
- add keyboard: $29 (Apple total: $728)
- 1 GB of Mac mini RAM from Crucial: $226.99
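Adding up the line items above confirms how fast “$599” turns into “pushing $1,000”:

```python
# Totaling the Mac mini configuration from the list above.
prices = {
    "Mac mini 1.43 GHz/80 GB": 599,
    "SuperDrive upgrade": 100,
    "keyboard": 29,
    "1 GB RAM (Crucial)": 226.99,
}

# Everything but the third-party RAM goes through Apple.
apple_total = sum(v for k, v in prices.items() if "Crucial" not in k)
grand_total = sum(prices.values())
print(f"Apple total: ${apple_total}, with RAM: ${grand_total:.2f}")
```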
I’m sure I could get the memory for a few bucks less elsewhere, but I’ve had good luck with Crucial in the past, and I’d rather not monkey around with the RAM if I can avoid it. The initial rumors were that the Mac mini’s RAM wasn’t user-upgradeable; now it looks like it’s just sort of not recommended. At the very least, it doesn’t require any special tools.
So, for $1,000, I can have a Mac with around 3x the CPU power of my aging PowerBook, enough RAM to do a bit of photo editing now and then, and a bit of disk space. I’d reuse the 22” CRT sitting on my desk at home and a Logitech optical mouse that I already own.
The problem is that I can’t afford a new Mac and a new Treo 650. Fortunately, no one seems eager to sell me a GSM Treo 650 any time soon, but sooner or later, Cingular is going to announce pricing, and I’m going to have to decide what I’m going to do about it. If they’d been shipping it 3 months ago, I probably would have ordered right off the bat, but its lack of memory and WiFi makes it look less enticing every month.
Oh, well–I should really wait until taxes are done this year before ordering any new hardware anyway.