The Inquirer reports that Intel’s Tukwila chip is going to have an on-board memory controller, just like all of AMD’s newer chips. Tukwila is a multi-core Itanium, and is due sometime in 2007; the Inquirer suggests that Xeons will probably get on-board memory controllers in the same basic timeframe, simply because this will let Intel use the same controller chips for both Xeon and Itanium systems.

Assuming the rumor is true (and considering how well AMD’s on-board controller works, I’d be surprised if it weren’t), Intel will probably end up putting 4-6 FB-DIMM channels per CPU; since each channel’s good for around 10 GB/sec, a dual-chip system with 6 channels per CPU could potentially have 120 GB/sec of memory bandwidth. Even better, it’d be possible to build a high-capacity server with 48 DIMM sockets spread over the 12 channels (four per channel); with 4 GB DIMMs, that’s 192 GB in a relatively simple box.
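
To make the arithmetic explicit, here’s a quick back-of-the-envelope sketch in Python. The channel count, per-channel bandwidth, and DIMM sizes are the rough guesses from above, not anything Intel has announced:

    # Rough numbers from the guesses above; not published specs.
    channels_per_cpu = 6          # high end of the 4-6 channel guess
    cpus = 2                      # dual-chip system
    gb_per_sec_per_channel = 10   # approximate FB-DIMM channel bandwidth
    dimms_per_channel = 4         # 48 sockets spread over 12 channels
    gb_per_dimm = 4               # 4 GB DIMMs

    channels = channels_per_cpu * cpus                      # 12 channels
    bandwidth = channels * gb_per_sec_per_channel           # 120 GB/sec
    capacity = channels * dimms_per_channel * gb_per_dimm   # 192 GB

    print(f"{channels} channels, {bandwidth} GB/sec, {capacity} GB")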

This assumes that multi-CPU systems remain common; given the way that multi-core systems are progressing, I’m not sure there will really be a market for commodity multiple-CPU-chip systems after 2007 or so. If you can get 8 cores on a single chip, why pay the complexity cost of adding more chips, except for really high-end stuff? Even today, compare the cost and performance of an Athlon 64 X2 against a system with two single-core Opteron 2xx chips: the Opteron system will have a bit more memory bandwidth, but the two will perform similarly on a lot of workloads, and an Athlon 64 X2 with a cheap motherboard will cost less than most dual-CPU Opteron motherboards alone, never mind the CPUs.

Dual-CPU systems have been the bread and butter of the PC server world for the last 5-7 years, but I doubt they have more than another two years to go before they fade into the sunset. Personally, I’d much rather manage a handful of clustered, virtualized single-chip 8-core systems (where virtual environments can migrate between physical systems under explicit admin control) than a smaller number of 2-4 CPU, 16-32 core systems.