Over the years at AccelerEyes, it has surprised me how many people lack a big-picture understanding of the trends affecting the computing industry. To help, I'm going to post a few articles with high-level explanations, and I'm going to do so in a hand-wavy manner. I look forward to the lively comments on my mistakes. But, in general, I think these posts will present a fairly accurate view of the important trends.
Today, I’ll start by talking about CPU processing trends.
Let's start with something we all know: the CPU, or central processing unit, is the main processor in the computer. You probably had to label the CPU on a diagram at some point in grade school, along with the hard drive, keyboard, mouse, monitor, and floppy drive.
Up until 10 years ago or so, CPUs improved in speed mainly by increasing the frequency of their clocks. You probably remember CPUs improving from MHz speeds to 1 GHz to … to 3 GHz. You rarely hear of CPUs running above 4 GHz. There's an important reason for this: if the frequency gets too high, the chip can actually melt from the excessive power/heat. Today you typically see CPUs running at 2 to 3 GHz, which is where they landed over 10 years ago.
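For the curious, a hand-wavy rule of thumb captures why heat killed the frequency race. The dynamic power a chip burns scales roughly as

$$P_{\text{dynamic}} \approx C \cdot V^2 \cdot f$$

where $C$ is the switched capacitance, $V$ is the supply voltage, and $f$ is the clock frequency. Since pushing the frequency higher generally also requires raising the voltage, power grows much faster than linearly with clock speed, and the heat becomes unmanageable.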
There are ways to make CPUs run faster, but they require keeping the chip cool so it doesn't melt. Even at 2-3 GHz, your computer needs a lot of fans and heat-conducting metal apparatuses (called heat sinks) to whisk away the heat from the chip. That's why your computer makes noise; it's the fans keeping your CPU cool. But fans only work up to 2-3 GHz. If you want to go above that, you need liquid-cooled solutions. However, short of really hardcore computer geeks and gamers, no one really wants to put liquid in their computer.
So, in order to keep improving speed over the last 10 years, CPU manufacturers had to find other ways to improve things. They found improvements by adding more CPU cores (each of which is a full CPU) onto the same CPU chip. One core could do one thing (like play a movie) while the other core did something else (like run Microsoft Excel). Since those tasks were split up between different CPU cores, each core did not have to work as hard to get the tasks done, and you could effectively get a faster experience without raising the clock frequency.
This is where parallel computing first entered the technology scene in a big way. On CPUs, this is called “Task Parallelism.” In order for programs to actually use all the CPU resources, the software had to be re-written and re-compiled with special consideration for the fact that the processor architecture had changed.
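To make the task-parallel idea concrete, here's a minimal sketch in C++11. The function names are just stand-ins for the movie/Excel example above, not real workloads:

```cpp
// Minimal sketch of task parallelism: two unrelated tasks run
// concurrently, and the OS is free to put them on different cores.
#include <iostream>
#include <thread>

void play_movie() {            // stand-in for one independent task
    std::cout << "decoding movie frames...\n";
}

void run_spreadsheet() {       // stand-in for a second independent task
    std::cout << "recalculating spreadsheet cells...\n";
}

int main() {
    std::thread t1(play_movie);       // may run on core 0
    std::thread t2(run_spreadsheet);  // may run on core 1
    t1.join();                        // wait for both tasks to finish
    t2.join();
    return 0;
}
```

Each `std::thread` is an independent task, and that explicit split is exactly the rewriting the paragraph above refers to: the program has to be structured so the cores have separate work to do.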
So for the last 10 years, we've gone from single core, to dual core, to quad core. But again, you don't see many CPUs with more than 4 actual cores. That's because with 4 cores on one chip, the size of the chip grows and the power/heat starts to rise again. So CPUs have once again hit physical barriers, and this time manufacturers are having a much harder time figuring out how to deliver substantial performance improvements. Mainly, they rely on making transistors smaller so that the power requirements per core go down and more cores can be added.
Luckily, for most people, regular dual or quad core CPUs are good enough for the tasks that need to be done. If your computer is slow, it likely is not due to the CPU anymore. It is much more likely to be your hard drive (upgrading to an SSD is the best way to speed up your computer these days).
However, for scientists, engineers, and financial analysts (i.e., people who run big simulations), CPUs are still slow. AccelerEyes was founded in 2007, on the cusp of a transformative computing event, when high-end computing professionals realized CPUs were no longer going to improve as fast as their applications needed.
The answer to this problem was found in using other processors to supplement the CPU in getting the computing work done. It started with leveraging the GPU (graphics processing unit) on the video card as a companion to the CPU for computational tasks.
In my next post, I’ll talk more about how GPUs have made a permanent home in the world of computing and how heterogeneous computing is the name of the computing game for the next decade.
What parallelism does your application use?
Notes:
- CPUs have also improved greatly over the years in architecture, caches, branch prediction, and more. But those improvements have not been sufficient to stem the tide of heterogeneous computing.
- CPUs also have ways to offload computations to data-parallel units on the same chip (using SSE/AVX instructions; see the sketch below). Those options are also not significant enough to alter the macro-level trends.
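To make that second note concrete, here's a minimal sketch of data parallelism using AVX intrinsics, assuming an x86 CPU with AVX support and a compiler flag like -mavx (the function is illustrative, not a real API):

```cpp
// Minimal sketch: add two float arrays with AVX,
// eight elements per instruction instead of one.
#include <immintrin.h>

// Assumes n is a multiple of 8 to keep the example short.
void add_arrays(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va   = _mm256_loadu_ps(a + i);  // load 8 floats from a
        __m256 vb   = _mm256_loadu_ps(b + i);  // load 8 floats from b
        __m256 vsum = _mm256_add_ps(va, vb);   // 8 additions at once
        _mm256_storeu_ps(out + i, vsum);       // store 8 results
    }
}
```

One instruction operates on eight floats at a time. That is the "data parallel" flavor of parallelism, as opposed to the task parallelism of multiple cores described above.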
—
Posts in this series:
- CPU Processing Trends for Dummies
- Heterogeneous Computing Trends for Dummies
- Parallel Software Development Trends for Dummies