A Supercomputer on Your Desktop
Steve Wildstrom | 01/31/2011
The world's fastest supercomputer, the Tianhe-1A, at the National Supercomputing Center in Tianjin, China, is an exotic beast. But its central components are very similar to those found in your laptop or desktop computer. And that is a key reason why supercomputer-like performance may soon be within reach of everyone, creating new dimensions in personal and business computing.
Most everyday computing doesn't require a whole lot of processing power. Writing documents or emails, for example, is limited by the speed of your typing or thinking. Calculating a large spreadsheet or playing video doesn't tax a modern processor. Where might massive increases in processing power make a difference? In making intelligent use of the vast amounts of data now readily available. In finding useful answers to complicated, often fuzzy questions that human beings struggle with. In understanding natural language for everything from automatic translation to speech recognition.
Business analytics, for example, often requires extracting a relatively small amount of data from a massive database, then analyzing the information along a large number of dimensions. A retailer might want to know as much as possible about which customers are buying which products in which stores, based on multiple characteristics of the products, customers, and locations. This rapidly becomes a task whose computational intensity can easily overwhelm an ordinary computer.
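The pattern described above, slicing the same records along whichever dimensions a question calls for, can be sketched in a few lines of Python. Everything here is illustrative: the field names and the tiny `sales` dataset are assumptions invented for the example, not any retailer's real schema.

```python
from collections import defaultdict

# Hypothetical sales records; every field name is an illustrative
# assumption, not a real retailer's schema.
sales = [
    {"store": "Austin", "product": "laptop", "segment": "student",  "units": 2},
    {"store": "Austin", "product": "laptop", "segment": "business", "units": 5},
    {"store": "Boston", "product": "tablet", "segment": "student",  "units": 3},
    {"store": "Austin", "product": "tablet", "segment": "student",  "units": 1},
]

def rollup(records, dims):
    """Total unit counts along an arbitrary list of dimensions."""
    totals = defaultdict(int)
    for r in records:
        key = tuple(r[d] for d in dims)
        totals[key] += r["units"]
    return dict(totals)

# One of many possible "cuts": units sold by store and customer segment.
by_store_segment = rollup(sales, ["store", "segment"])
print(by_store_segment[("Austin", "student")])  # 3
```

With four records this is instant; with billions of records and dozens of candidate dimensions, the same loop is exactly the kind of work that swamps a single processor.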
The key to creating massive improvements in processing power is a technology with the off-putting name of "general purpose computing on graphics processing units" (GPGPU). Tianhe-1A's processors are off-the-shelf Intel server chips, high-end relatives of the Intel Core processors used in personal computers. Tianhe uses 14,336 of them with six processing units on each chip for a total of 86,016 processing units, or "cores." But that's only the beginning. There are also 7,168 Nvidia Tesla graphics processing units (GPUs), each with 448 cores for a total of 3.2 million.
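The core counts above are just multiplication, and a two-line sketch makes the scale of the CPU-to-GPU gap concrete (the figures are the ones quoted in the article):

```python
# Core counts reported for Tianhe-1A.
cpu_chips, cores_per_cpu = 14_336, 6
gpu_boards, cores_per_gpu = 7_168, 448

cpu_cores = cpu_chips * cores_per_cpu    # 86,016 CPU cores
gpu_cores = gpu_boards * cores_per_gpu   # 3,211,264 GPU cores (~3.2 million)
print(cpu_cores, gpu_cores)
```

The GPU side contributes roughly 37 times as many cores as the CPU side, which is why the graphics chips do the heavy lifting.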
Those GPU cores can't do everything a standard processor can, but they are much cheaper and consume much less electricity. They are particularly effective when a task requires performing the same sequence of operations on large amounts of data. Fortunately, many computationally intense tasks, such as rendering video or generating 3D images, fall into a category computer scientists call "embarrassingly parallel," and are ideally suited to the abilities of GPUs.
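"Embarrassingly parallel" just means every piece of data can be processed independently, with no coordination between tasks. A minimal sketch, using a toy stand-in for a per-pixel image operation (the `brighten` function and pixel values are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for a per-pixel operation: the same arithmetic applied
# independently to every element.
def brighten(pixel):
    return min(pixel + 40, 255)

pixels = [0, 100, 200, 250]

# No task needs any other task's result, so the work can be split
# across as many workers (or GPU cores) as are available.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(brighten, pixels))

print(result)  # [40, 140, 240, 255]
```

A GPU applies exactly this kind of independent operation to millions of pixels at once; the thread pool here only stands in for that idea on a CPU.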
Nvidia got the ball rolling a couple of years ago by developing programming tools that let developers easily use the processors in its graphics cards for processing. Intel, AMD, Apple, and others are backing a standard called OpenCL for computing on graphics processors. Use of GPGPU computing in research has grown rapidly, but now it's coming to a PC near you.
Until recently GPGPU only worked on computers with relatively high-end graphics systems. But the newest versions of processors with built-in graphics—Intel's 2nd Gen Core chips and AMD's Fusion—can handle processing on their graphics units, bringing this ability to just about all new laptops and desktops.
The next step is getting software to take advantage of the new capability, and this is starting to happen. For example, consumer video software from Cyberlink and ArcSoft can use GPUs to speed processing. Adobe is using GPU processing to speed the performance of Flash, which is notorious for its ability to bring the fastest processors to their knees.
More important uses will go well beyond graphics. The ability to extract very specific information from large and complex databases, including the ability to provide precise answers to natural language questions, can require surprising amounts of processing power. The speed of GPGPU computing is already being used for tough analytic problems for business and soon may find its place in consumer services.