Today, Nvidia formally unveils Tegra 3 (codenamed "Kal-El"). Given the number of leaks within the past few months, though, most of the technical specifications regarding this latest SoC don't really surprise us.
Two months back, Nvidia published two whitepapers that basically spelled out what Tegra 3 would be all about. At the time, the company was adamant about referring to the quad-core design only as Kal-El. But who were they kidding? We all knew it was Tegra 3.
Tegra 3 Highlights:
- 5x performance of Tegra 2
- Better battery life
- 3x faster GPU
What's interesting is that Nvidia is promising better performance and improved battery life (compared to Tegra 2). That's a tall order in the world of notebooks, but it's an even more difficult task when you're dealing with an embedded architecture. The voltage requirements are much tighter, and we're dealing with power consumption an order of magnitude lower than the familiar x86 processors.
However, Kal-El is technically a five-core SoC (Cortex-A9 architecture), because it features a fifth "companion" CPU core that handles low-overhead tasks such as syncing email, playing a ringtone, and keeping applications alive in standby mode. We've already covered the technical CPU side of the discussion on how Nvidia accomplishes this in an earlier post, so it's not entirely new to us. (For those curious, the SoC's cache size is 32/32 KB L1 per core and 1 MB of L2 cache shared among the four main cores. The companion core has access to the full 1 MB of L2 cache when the main cores are idle.)
The end result is that we should finally be able to watch Flash video at sites like Hulu without major stutter, which has always been one of our major complaints concerning Android devices. After all, what good is touting Flash compatibility over Apple's iOS devices when video playback is choppy?
On the GPU end, we should make it clear that Nvidia has basically recycled and supercharged its Ultra Low Power GeForce GPU from its Tegra 2 SoC. The graphics core is still OpenGL ES 2.0 compliant, so it's not a move up in the same way we'd think of DX10 to DX11. Unlike Nvidia's desktop GPUs, both SoCs are based on an architecture that pre-dates the company's unified design.
With Tegra 2, you're looking at four pixel shader cores and four vertex shader cores. This means the SoC operates most efficiently when the ULP GeForce GPU is presented with an even mix of vertex and pixel shader code. Tegra 3 basically doubles the number of pixel shader cores to eight, which means it's going to operate most efficiently when faced with an uneven, pixel-heavy mix of code.
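To see why a fixed shader split favors a matching workload mix, consider a toy throughput model (the workload numbers are illustrative, not Nvidia's): on a non-unified design, a frame is done only when both the pixel and vertex units finish, so the busier side sets the frame time.

```python
# Toy model of a non-unified shader architecture: the slower of the
# two fixed-function pools (pixel vs. vertex) determines frame time.
def frame_time(pixel_work, vertex_work, pixel_units, vertex_units):
    return max(pixel_work / pixel_units, vertex_work / vertex_units)

# An even 50/50 workload keeps Tegra 2's 4+4 split fully busy...
tegra2_even = frame_time(50, 50, 4, 4)   # 12.5 time units
# ...but leaves half of an 8+4 split's pixel units idle (vertex-bound).
tegra3_even = frame_time(50, 50, 8, 4)   # still 12.5 time units

# A pixel-heavy 2:1 workload is where the 8+4 split pulls ahead.
tegra2_heavy = frame_time(67, 33, 4, 4)  # 16.75, pixel-bound
tegra3_heavy = frame_time(67, 33, 8, 4)  # 8.375, both sides near-busy
```

The takeaway: doubling only the pixel shaders helps little on evenly balanced code, but nearly halves frame time on the pixel-heavy workloads most games actually generate.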
While the graphics core hasn't undergone a revolutionary overhaul, you are going to see games optimized for Tegra 3 that enable dynamic lighting, motion blur, and more realistic water and ballistics effects. This is partly because many games continue to be CPU-bound, so the quad-core architecture is really going to help.
However, the increased memory bandwidth also increases the capabilities of the GPU. How much is a matter of debate, but we expect that this will open up a whole new world of mobile gaming.
Nvidia is already advertising the fact that you can hook up a Tegra 3 tablet to a 3D monitor or HDTV and use a PS3, Xbox 360, or Wii controller to play games. That's currently possible with Honeycomb (minus the 3D part), but it's a less than ideal situation because Tegra 2 lacks the horsepower.
Tegra 3 promises to close that gap, which highlights the fact that we're coming to a point of greater convergence between devices. Nvidia may have finally taken the first step in demonstrating that this isn't just a pipe dream, but a real possibility as the technology matures. A tablet that replaces your gaming console? That's something that looks mighty tempting.