We live in a day and age where electronic devices are an integral part of our daily routine. What’s inside them that gives them the power to do so many amazing things? On today’s episode of Pocketnow Power User we’re going to talk about the brain that powers your device: the CPU.
The Central Processing Unit, or CPU for short, has been around for quite some time, but has changed significantly over the years. Traditionally the CPU has been the “traffic cop”: the component that handles and routes all the commands flowing through the system to wherever they need to go. It’s backed up by other components: the ALU (arithmetic logic unit) for mathematical computations, storage for the long-term retention of data, RAM in which running applications live, the GPU for outputting what you see on your screen, and numerous other subsystems.
Today, many of these subsystems are built into just a few chips, often called a System on a Chip, or SoC. Regardless of how many subsystems are integrated into an SoC, the CPU still lives at the heart and does most of the leg-work.
The “faster” a CPU is, the quicker you can get things done, but “fast” is a relative term. To find out what it really means we need to dig a little deeper.
All About the CPU
RISC versus CISC
Not all CPUs talk the same lingo. Some are built around RISC, while others use CISC. Your desktop computer probably uses a CISC processor (“Complex Instruction Set Computing”), while your smartphone and tablet probably use RISC processors (“Reduced Instruction Set Computing”).
The definitions of these terms continue to evolve, but historically, RISC processors and the programs that ran on them were much simpler than the complex programs that ran on CISC processors. RISC was used for embedded and specialized hardware (or such was the theory), whereas CISC was used for larger and more powerful systems.
Today we see those lines blurred, and the instruction sets of many RISC-based processors are just as large as their CISC counterparts.
Clock Rate
The clock rate, or “frequency”, is the speed at which a CPU executes instructions. The faster the clock, the more instructions the CPU can execute in any given amount of time.
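To put a rough number on that relationship, here is a back-of-envelope sketch. It assumes a deliberately simplified model of one instruction per clock cycle; real pipelines retire more or fewer depending on the workload, so treat these figures as upper bounds, not benchmarks.

```python
# Estimate the ceiling on instruction throughput from clock rate alone,
# assuming (for simplicity) one instruction per cycle. Real CPUs vary.

def max_instructions(frequency_hz: float, seconds: float) -> float:
    """Upper bound on instructions executed at one instruction per cycle."""
    return frequency_hz * seconds

one_ghz = max_instructions(1.0e9, 1.0)  # a 1 GHz core over one second
two_ghz = max_instructions(2.0e9, 1.0)  # a 2 GHz core over one second

print(f"1 GHz for 1 s: {one_ghz:.0e} instructions")
print(f"2 GHz for 1 s: {two_ghz:.0e} instructions")
# Under this simple model, doubling the clock doubles the work
# the CPU can get through in the same amount of time.
```

In practice the instructions-per-cycle figure differs between chip designs, which is one reason raw frequency comparisons can mislead.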
Just like your heart, the CPU can only operate within a predefined range of frequencies. Run it too fast and things get unstable (and very hot); run it too slowly and your system slows to a crawl. To pump more data through, just like your heart pumping more blood, you either need to go bigger or add more pumps.
To increase CPU speeds, engineers add more transistors. This essentially makes the CPU bigger and able to handle more data in the same amount of time. Gordon E. Moore, co-founder of Intel Corporation, described a trend in a paper he wrote in 1965. He predicted that the capabilities of electronic devices would improve at roughly exponential rates. The concept that transistor counts and densities would double at a fairly predictable rate has guided the semiconductor industry for decades.
Originally this period was approximately 18 months, so you could count on your computer doubling in speed about every year-and-a-half. Today that rate has slowed to about every two to three years.
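The doubling trend is easy to see with a little compound-growth arithmetic. The starting transistor count below is roughly that of the Intel 4004; the two doubling periods are the 18-month and three-year figures mentioned above, used here purely for illustration.

```python
# Illustrating the exponential doubling described by Moore's law.
# Starting count and doubling periods are illustrative, not historical data.

def transistors_after(years: float, start: float, doubling_period_years: float) -> float:
    """Transistor count after `years`, doubling every `doubling_period_years`."""
    return start * 2 ** (years / doubling_period_years)

start = 2_300  # roughly the Intel 4004's transistor count (1971)

# The same decade of progress under the two doubling periods:
fast = transistors_after(10, start, 1.5)  # doubling every 18 months
slow = transistors_after(10, start, 3.0)  # doubling every 3 years

print(f"18-month doubling after 10 years: {fast:,.0f} transistors")
print(f"3-year doubling after 10 years:   {slow:,.0f} transistors")
# A small change in the doubling period compounds into a huge
# difference over a decade -- the nature of exponential growth.
```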
We often equate processing speed to the frequency at which the CPU operates. While frequency (measured in megahertz or gigahertz) is certainly an important factor, there are many other considerations to take into account when talking about speed. This has led chip manufacturers like Intel, AMD, and others to shy away from marketing their chips based primarily on their “speeds”.
Now manufacturers emphasize model numbers over clock speeds.
Cores
A “core” is essentially a single processor. If you want to make things go faster, simply add more processors! Some computers, such as high-end servers, have multiple sockets that accept multiple individual processors. As you can imagine, this takes up quite a bit of space on a motherboard. To provide a similar benefit without doubling the required space, manufacturers began building additional processor cores onto a single chip.
Today we have dual-core, quad-core, and so on.
While this approach certainly increases observed performance, there comes a point where the CPU is no longer the bottleneck, and some other component or subsystem holds the system back.
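One standard way to model this diminishing return is Amdahl's law (not discussed in the episode, but the classic formula for it): if only a fraction of a workload can run in parallel, the serial remainder caps the speedup no matter how many cores you add. The 80% parallel fraction below is an illustrative assumption.

```python
# Amdahl's law: with a fraction p of the work parallelizable,
# the speedup on n cores is 1 / ((1 - p) + p / n).

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.8  # suppose 80% of the work parallelizes; the rest is serial
for n in (1, 2, 4, 8, 1000):
    print(f"{n:>4} cores -> {amdahl_speedup(p, n):.2f}x speedup")
# Even with 1000 cores the speedup never reaches 1/(1-p) = 5x:
# past a point, something other than core count is the bottleneck.
```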
Power Consumption
CPUs consume quite a bit of power, though RISC chips generally consume less than their CISC counterparts. As chips get faster they run hotter and draw more electricity. To mitigate these factors, chips are made smaller and more efficient. “Smaller” here is measured in nanometers: process sizes have progressed from 90nm to 45nm, and we’re now down into the 20s. Chips built using smaller construction techniques require fewer raw materials, consume less power, and give off less heat, all of which are beneficial in portable electronics.
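Why do smaller chips sip less power? A common first-order approximation says dynamic power scales as capacitance times voltage squared times frequency. The numbers below are normalized, made-up values, not measurements of any real chip, but they show why even a modest voltage drop from a die shrink pays off so handsomely.

```python
# First-order model of dynamic CPU power: P ~ C * V^2 * f, where C is the
# switched capacitance, V the supply voltage, and f the clock frequency.
# All values below are illustrative, normalized units.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    return capacitance * voltage ** 2 * frequency

old = dynamic_power(1.0, 1.2, 1.0)  # baseline process
new = dynamic_power(0.7, 1.0, 1.0)  # die shrink: less capacitance, lower voltage

print(f"Power after shrink: {new / old:.0%} of the original")
# Because power scales with the *square* of voltage, lowering V from
# 1.2 to 1.0 alone cuts power substantially, before the smaller
# capacitance is even counted.
```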
What Does it All Mean?
There was a time when you could easily calculate the cost-benefit of upgrading to a faster processor; back then it was almost entirely about megahertz. Today there are many more variables to consider when evaluating the CPU inside a device. Speed is only one part of the equation, and power consumption is now almost equally important.
Because there are so many variables, deciding which is “the best” is no longer black-and-white. It will vary by application and, more importantly, by what’s important to the individual: what’s important to you. You’ll need to weigh each feature individually to come up with what best satisfies your needs.
Now that you know a little more about what goes into a CPU, you’ll be much better prepared to understand why one CPU may be better for you than another, and decide where to put your money.
Thanks for joining us for the first episode of the second season of the Pocketnow Power User! Make sure you tell your friends, and tune in next week when we’ll dig into another component that powers your mobile lifestyle.