Last year was rich in new developments in the world of 3D graphics. There was the first real update of the Nvidia architecture, the GT200. We also witnessed AMD’s return to the big leagues with the RV770 and its radically different approach, as the firm bet everything on its small, efficient chip rather than a larger, more complex design.
On the application programming interface (API) side, the first details of DirectX 11 were revealed and OpenGL 3 was made available. But even with all of that going on, the event that really marked the year was Intel’s official introduction of Larrabee at SIGGRAPH.
What’s all the excitement about? Simply: the fact, as we’ll see in this article, that Larrabee is radically different from any GPU currently available, and as such, it’s intriguing. Enthusiasts are wondering if this is going to be the design that changes the perception of how well Intel and graphics go together.
It also marks Intel’s return to the high-end GPU market after it killed off the i740 some 10 years ago. That retreat coincided with the end of the golden age in PC graphics, when numerous companies were struggling to reach the top of the GPU market before disappearing one after the other, or else refocusing on a less competitive sector. Today, the high-end GPU market boils down to AMD and Nvidia, and the few attempts to change that state of affairs have met with failure. Matrox, with its Parhelia almost seven years ago, XGI with its Volari, and 3DLabs with its Realizm all threw in the towel, and with good reason: modern GPUs are extremely complex and, consequently, require considerable investment and skills that only a few companies can afford.


(btw, great article; I'm sure you explained why it might or might not kick ass)
Also, I own an Intel CPU, but I do NOT believe Intel will get it right (at first).
If you read the article, it's more expensive to produce than the Cell. So really, I don't think that hoping it outpaces the now-old 8800 GT is a lot to expect.
Also, it mentions that this is Intel's stab at DISCRETE graphics, meaning not integrated on a motherboard.
I have to disagree on this, as Matrox, before going business-only, was high-end.
Maybe this will just end up as an add-in card for applications that require parallel processing. Will they be able to interconnect like SLI???
OK, the add-in card is just an idea, but surely, from what I read, for some applications it could be an ideal solution. Wouldn't it work well for decryption, like the Nvidia cards have been used for?
@core i7 ownage, it'll probably be PCI-E, but with an x86 base (quickly reminding everyone how the 1 GHz Snapdragon comes close to the Atom) it won't perform great, unless it's basically an i7 CPU on a card, because then you'd have Tesla-league power.
I don't know if Intel is capable of pulling this off, since there's a lot more to it than simply having a better product on paper; Betamax was better than VHS, but look what happened in the end. It's one of several solutions, some of which are already in mass production.
I see nothing stopping, say, AMD or Nvidia from taking this design and adding their own tweaks, in the meantime using their discrete technology to accelerate applications that are in dire need of a speed hike. Ask anyone who uses video transcoding software: do you care whether it's 15x or 50x faster, whether it's AMD, Intel, or Nvidia, as long as it's available now and at the right price?
Keeping all its options open was a good idea on AMD's part, since competing head-on with the fastest Intel Core i7 architecture isn't going to be the future of desktop computing.
Thanks for this interesting article, Abi-Chahla. There is a lot of information to take in (some of which is beyond my understanding), but it helps put things in perspective.
I Lol'd.
If Intel sticks with it and does produce a quality piece of kit down the line, maybe Nvidia and AMD will take note.
Competition = lower prices; that's the most I can hope for from Intel with this card, to be honest.
Like most people who play games, I doubt this is going to be worth looking at when it's launched, even compared to other cards in the same price range.
The fact is, they're not even using a grain of sand in comparison to the capabilities of the current platforms, in any direction you go. But to think that an instruction set as used and abused as x86 can deliver anything new to the programming angle of the equation... I just don't see it.
Flexible, yes; manageable, definitely; a solution, no. The GPU, in my opinion, is a solution in itself. It is capable of so much more than it is currently used for.
AMD is leveraging its acquisition of ATI to combine CPU and GPU on one chip. This could give it advantages over an Intel CPU + any add-on graphics card. Who knows how any of this will work out, but my guess is that Intel has realized that it cannot afford to be left out in the cold if the AMD Fusion chips prove to be very successful. Therefore, Intel is getting into the graphics market with a very CPU-ish graphics chip to cover itself.
In any case, competition can only be good for the consumer.
P.S. I thought what happened a decade ago with Intel graphics chips was that Intel was hoping to re-use older fabrication equipment to make the chips, but GPUs moved to the latest, smallest processes, so the Intel strategy just did not produce anything competitive on its old fab gear.