NVIDIA’s 192 cores are awesome, but don’t mean all that much
I’ve got four cores in my Nexus 5. My old Nexus 7 has four cores. Some of today’s smartphones have eight cores (though it’s arguable whether or not they can use all eight at the same time). My desktop computer has four hyper-threaded cores, so it looks like I have eight. Now, news out of CES 2014 is that NVIDIA has a 192-core processor: the Tegra K1, and it just might power your next smartphone or tablet.
192 cores: that’s what the headlines are saying, but they’re wrong — sort of. Let’s take a look at what goes into an SoC, and why NVIDIA’s new Tegra K1 might not be all it’s cracked up to be.
At the core of every computer is its CPU. This central processing unit handles all the heavy lifting inside your smartphone or tablet. CPUs are getting more advanced every day, and are starting to do more than what the “classical” definition of a CPU used to cover. Essentially, a CPU handles all the “instructions” that the operating system throws at it.
These instructions can be run in multiple threads, which makes splitting up tasks across multiple cores a very efficient way of running things. Writing apps that work really well in a multithreaded manner isn’t all that easy, though many modern compilers try to offload a lot of what used to be manual labor to an automated process. We’re not yet at the point where a developer can ignore how to write for a multithreaded environment, but we’re getting closer, and compilers do what they can to help out, even today.
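To make that concrete, here’s a minimal sketch (in Python, purely illustrative — the function names are made up for this example) of the basic pattern: split one job into independent chunks that separate cores can chew on at the same time. Python threads show the structure nicely, though for true CPU parallelism in Python you’d reach for processes because of the interpreter’s global lock.

```python
# Minimal sketch of splitting one job across worker threads —
# the kind of task-splitting that multi-core CPUs reward.
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    """The unit of work each worker handles independently."""
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, workers=4):
    # Deal the input out into one chunk per worker...
    chunks = [numbers[i::workers] for i in range(workers)]
    # ...then hand each chunk to its own thread and combine the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(1000))))  # same answer as a plain loop
```

The answer is identical to a sequential loop; the point is that each chunk is independent, which is exactly what lets extra cores help.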
All that makes it sound like the more cores your CPU has, the better off you are and the faster your device will be. To a certain extent, that’s true. So 192 cores should be amazing, right? We’ll get to that in a moment.
RAM (random access memory) and storage are very much intertwined with the basic functionality of the CPU. Information is loaded from storage into RAM, where it’s processed to do something. That something could be anything from posting a tweet to reading an email, taking a picture, or reading an article on Pocketnow. All of these separate processes happen without you ever knowing how. The faster the CPU, and the more cores it has, the quicker your somethings get done. And don’t we all hate it when our somethings take too much time?
There is another component to all this, and that’s the GPU (graphics processing unit). After all, for most of us, we’ve got to be able to see what we’re interacting with to make any of those somethings worthwhile.
The GPU is another very important part of your system, whether that’s on your phone, your tablet, or your computer. It takes all those instructions and turns them into pretty pictures on our screens. Dot by dot.
Well, that’s the end result, but it’s how we get there that’s the interesting part.
Let’s look at a movie, for example. For decades, most films that you see in a theater have been made up of gazillions of individual pictures run through a projector at a rate of 24 frames every second. Some newer movies bump that up to 48 fps! Each frame is a unique and fully constructed picture. When we’re talking about analog film, that’s not bad; but when we make the leap to digital, it’s terribly inefficient! Each frame might be 1% different from the frame before it, and the frame that follows might be another 1% different again. Today we have codecs that exploit this redundancy of visual information to compress our movies into more manageable sizes. Just imagine how big a file you’d need to hold a 2-hour movie (7,200 seconds, which is 172,800 frames at 24 fps) if each frame were 5MB. Doing the math, that’s almost 844 GB — yes, gigabytes!
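That back-of-the-envelope math is easy to check yourself (the 5MB-per-frame figure is just the example number from above):

```python
# Uncompressed movie size, using the article's example figures.
fps = 24                  # frames per second
seconds = 2 * 60 * 60     # a 2-hour movie: 7,200 seconds
frame_mb = 5              # assumed size of one uncompressed frame, in MB

frames = fps * seconds            # 172,800 individual frames
total_mb = frames * frame_mb      # 864,000 MB of raw picture data
total_gb = total_mb / 1024        # ≈ 843.75 GB — "almost 844 GB"

print(frames, total_mb, round(total_gb, 2))
```

That’s the raw, uncompressed figure — which is exactly why codecs that only store the differences between frames matter so much.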
To put that in perspective, a single Blu-ray disc can hold up to 50GB of data — and we haven’t even factored in the audio track or the countless trailers and menus that we’re forced to sit through before we can watch the movie.
When we’re talking about graphics processors, simply displaying information on the screen is only a very small part of what the GPU does — and you can see how much data we’re talking about from the example above. But let’s talk about the real horsepower behind the GPU, shall we? Yes, let’s!
Games aren’t everyone’s cup of tea, nor should they be. However, even if you’re not interested in games, you should be very interested in the gaming capabilities of the GPU that powers your devices. Why? Games push your GPU to the limit, so if a game plays smoothly on your device, just imagine how smoothly everything else should run! There’s another reason: where games are today, apps will be soon.
What do I mean by that? Most apps are significantly less complex than games, right? The evolution of those apps, however, will follow the path forged by those games. Maps, for example, used to be flat images that you could browse around. Then Google added perspective and let you tilt your maps, giving the impression that you were looking at them from a bird’s-eye angle rather than from directly above. Soon three-dimensional buildings and geography were added that you could “fly” around, complete with fairly accurate shadows. Sounds a lot like some games you’ve played, right? And that’s just the app that helps you get to Grandma’s house.
Other apps are taking advantage of perspective, geometry, dynamic lighting, and various 3D effects and animations. Again, this sounds very much like what you’d expect in a game, not a Twitter app. Yet here we are.
NVIDIA Tegra K1
At CES 2014, NVIDIA announced its newest technology: the Tegra K1. The chip will come in two flavors: a 32-bit variety and a 64-bit version, but neither of those is the selling point anyone is talking about. Instead, the company is touting its 192 cores. Yes: one hundred ninety-two cores!
Only it’s not.
Take a look at the headlines surrounding the Tegra K1: they all say the same thing. 192 cores. Over and over again. But those are “graphics cores,” not cores on the CPU. Big difference. What we’re talking about isn’t necessarily going to speed up your device; it’s just going to help your graphics look more real, your games play better, and your in-game frame rates run higher.
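Why can graphics cores scale into the hundreds while CPU cores top out at four or eight? Because graphics work is what programmers call “embarrassingly parallel”: the same tiny computation runs independently on every single pixel. Here’s a toy sketch (Python, purely illustrative — real GPUs run this kind of per-pixel function in hardware as shader programs):

```python
# Toy illustration of data parallelism: the same small function runs
# independently on every pixel, so adding more cores helps almost
# linearly. This is the workload the K1's 192 graphics cores target.
def shade(pixel):
    """Per-pixel work: here, just brighten a grayscale value (0-255)."""
    return min(255, int(pixel * 1.5))

def render(frame):
    # Every call to shade() is independent of every other — a GPU
    # farms these out across all of its graphics cores at once.
    return [shade(p) for p in frame]

frame = [0, 100, 200]    # a tiny three-pixel "frame"
print(render(frame))     # → [0, 150, 255]
```

A CPU core is built to run big, branching, one-after-another instruction streams; a graphics core only needs to run that one small function, millions of times, side by side. That’s why 192 of them fit on a chip — and why they don’t make your apps’ regular code any faster.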
I’m not trying to downplay that at all. Looking at some of the video of what the Tegra K1 can do, it’s nothing short of amazing.
According to NVIDIA reps, both the 32-bit and 64-bit versions of the Tegra K1 have been certified by AT&T and Vodafone (in addition to other carriers), and devices powered by the new chips should start to arrive as early as the first half of 2014.
One thing that’s missing from the SoC, however, is LTE support, which will have to be included via a third-party chip.
Don’t get caught up in the hype about 192 cores. Those are just there to make what comes out on your screen look amazing. It certainly does do that, but at its heart, this is still just a quad-core SoC.
New chips are just one of the things coming out of CES 2014. We’ve got lots more coverage from the show floor, excellent articles by the support staff, and don’t miss Jaime’s daily video updates! As you’ve come to expect, Pocketnow has you covered!