It’s not just about large numbers

Innovation in computing, on both the desktop and in the server room, is usually boiled down to CPU cycles. At the most basic level, people tend to assume that more GHz means a better product. A case can be made for that view when the exact same hardware is put to work on the exact same task. But it is the wrong way to approach the broader issue. Sticking with the same type of CPU for every situation is a bit like insisting on eating every meal with a spoon. A spoon is a great match for soup, and a determined diner could even get through a steak with one given enough effort and force. But it is obviously better to bring out a knife and fork and cut the meat with far less effort. The same is true of servers.

A server is designed to do just that: serve data to networked clients. There are a variety of related tasks that come with the job, but in general a server doesn't need to be nearly as versatile as a desktop computer. Desktops are expected to do everything from running browsers to playing cutting-edge video games, while a server focuses on a handful of related workloads. Yet the processors inside most servers are still designed with general-purpose versatility in mind. This is part of the reason the clock speed comparison is flawed: it works for comparing the exact same processor on the exact same set of tasks, but it falls apart once processors can be tailored to specific needs. It is also why micro data centers are such an important step forward for the industry.
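To make that point concrete, a rough rule of thumb is the classic performance equation: execution time is roughly instruction count times cycles per instruction, divided by clock rate. The short Python sketch below uses purely hypothetical numbers, but it shows how a lower-clocked processor that is well matched to a workload (fewer cycles per instruction on that workload) can still finish ahead of a faster-clocked general-purpose chip.

# Rough performance model: time = instructions * cycles_per_instruction / clock_hz
# All figures below are hypothetical, chosen only to illustrate the idea.

def execution_time(instructions, cpi, clock_hz):
    """Estimate the seconds needed to run a workload on a given processor."""
    return instructions * cpi / clock_hz

workload = 2e9  # a task of roughly 2 billion instructions

# A general-purpose chip: high clock, but poorly matched to this task.
general_purpose = execution_time(workload, cpi=2.0, clock_hz=4.0e9)

# A specialized chip: lower clock, but far fewer cycles per instruction
# on this particular workload.
specialized = execution_time(workload, cpi=0.5, clock_hz=2.5e9)

print(f"General-purpose chip: {general_purpose:.2f} s")  # 1.00 s
print(f"Specialized chip:     {specialized:.2f} s")      # 0.40 s

The exact figures matter less than the shape of the comparison: GHz is only one term in the equation, and tailoring the other term to the workload can more than make up for a slower clock.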