Will I/O be the new bottleneck?

Both in anticipation of and following the announcement of IBM’s Cell processor, described by Hannibal here and here, there has been a lot of talk about the plateau in clock frequencies and the ensuing rise of parallelism in chip architectures.

Bill Clementson, in the latest installment in a series on parallel programming in Lisp (see also parts 1, 2, and 3), linked to this bit of hype about the Cell processor from Nicholas Blachford:

The first Cell based desktop computer will be the fastest desktop computer in the industry by a very large margin. Even high end multi-core x86s will not get close. Companies who produce microprocessors or DSPs are going to have a very hard time fighting the power a Cell will deliver. We have never seen a leap in performance like this before and I don’t expect we’ll ever see one again, It’ll send shock-waves through the entire industry and we’ll see big changes as a result.

I’m skeptical about how much of a performance difference we would see from these highly parallel processors in PCs, and especially in web and database servers, where I get the impression that there tends to be a lot of disk access. Sure, the processor with its array of cells will be able to compute much faster than a single-core design, but that doesn’t matter much if the entire system is blocked, waiting for some disk access to complete.
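A quick Amdahl’s-law back-of-the-envelope calculation (my own sketch, with made-up numbers, not measurements of any real workload) illustrates the worry: if most of a request’s wall-clock time is spent blocked on disk, even a dramatic speedup in the compute portion barely moves the total.

```python
# Amdahl's law: overall speedup when only a fraction p of the
# total time is accelerated by a factor s; the remainder
# (e.g. time blocked on disk I/O) runs at its original speed.
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical disk-bound request: 90% of its time blocked on
# disk, 10% computing. Make the compute part 10x faster:
print(overall_speedup(0.10, 10.0))  # ~1.10x overall

# Flip it: a compute-bound workload (90% compute) sees much
# more of the benefit:
print(overall_speedup(0.90, 10.0))  # ~5.26x overall
```

So under these (invented) numbers, a tenfold faster processor buys a disk-bound server about ten percent, which is why the hype above seems beside the point for that class of application.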

It seems to me that the Cell architecture is ideally suited to applications with low I/O latency, such as game systems and signal processing. How will it perform in applications that make heavy use of hard drives? That’s what I’d like to know.