If you have an interest in high-end computer graphics, you might have heard that Intel’s upcoming Larrabee (nicknamed LRB) graphics processing unit (GPU) has been shelved as a product, but it is certainly not dead as a project.
The current Larrabee, which I’ll call 1.0, will instead become a development vehicle that will serve to get the software going for whenever LRB 2.0 shows up. What happened? Intel didn’t provide many details, but it apparently concluded that LRB 1.0 could not be competitive in the market.
I’ve heard rumors that Intel is already working on another design based on a low-power X86 core, but that’s just speculation. I also suspect that the software is simply not ready to make LRB a decent GPU.
In the meantime, Larrabee 1.0 will be used internally and externally to develop the graphics drivers and non-graphics software that will be used down the road. It’s a serious bump in the road, but this is not over yet.
Performance & Readiness
Without inside information, it’s hard to know exactly what the state of LRB as a GPU is. From the demo that was shown at IDF, I’m inclined to say that the graphics drivers were not ready, and without good software, we can’t know what the full potential of the hardware is.
Quite frankly, the ray-tracing (RT) demo was cool but not impressive, and RT is not really practical anyway (more RT drama on Beyond 3D). I would have loved to see a DX11 demo, even a basic one.
“I told you so” would be appropriate, but in the end, I recognize that what Intel is trying to do is simply *really hard*. Contrary to what most analysts have believed for the past couple of years, slapping more cores together and piling up theoretical teraflops does not make a GPU.
Sony already tried to turn Cell into a graphics processor and failed miserably, but apparently, not everyone learned from that.
Companies like NVIDIA and AMD have hundreds of people working on the code/algorithms that make their stream processors handle graphics. Plus, these engineers have been doing it for years, if not decades.
The same goes for everyone there working on drivers and other supporting software. Finally, both NVIDIA and AMD have incrementally built their hardware around the fact that current real-time 3D has a particular data flow that can be accelerated in a particular way. A few years ago, they started to make their hardware capable of handling non-graphics tasks, with some success.
Intel, on the other hand, is going in the opposite direction: starting from a cluster of general-purpose cores and using them for graphics, a workload that is still very friendly to special-purpose functions (triangle setup, raster operations, transparency…). Intel will have to learn how to do this well, and it is not easy. You can see why AMD and NVIDIA have an inherent advantage on the graphics side.
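To make the contrast concrete, here is a minimal, purely illustrative sketch of what “rasterization in software” means: a naive edge-function rasterizer for a single triangle, written in Python for readability. This is my own sketch, not anything resembling Intel’s actual code; it only shows the kind of per-pixel work that fixed-function GPU hardware normally does for free and that a Larrabee-style design has to implement (and heavily optimize) in software.

```python
# Illustrative sketch only: a naive software rasterizer for one triangle.
# On a traditional GPU, triangle setup and coverage tests like these are
# handled by fixed-function hardware; on a Larrabee-style design they run
# as (heavily vectorized) software on the general-purpose cores.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: which side of edge A->B the point P falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the (x, y) pixels covered by the triangle v0, v1, v2."""
    xs = (v0[0], v1[0], v2[0])
    ys = (v0[1], v1[1], v2[1])
    # "Triangle setup": clip the screen-space bounding box to the render target.
    x_min, x_max = max(int(min(xs)), 0), min(int(max(xs)), width - 1)
    y_min, y_max = max(int(min(ys)), 0), min(int(max(ys)), height - 1)

    covered = []
    for y in range(y_min, y_max + 1):
        for x in range(x_min, x_max + 1):
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            # The pixel is inside when all three edge tests agree in sign
            # (this accepts either triangle winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

# Example: count the pixels of a small triangle on a 64x64 target.
print(len(rasterize_triangle((5, 5), (40, 10), (20, 35), 64, 64)))
```

Multiply this by depth testing, blending, scheduling and the rest of the pipeline, and you get a sense of the software effort involved in matching hardware that does all of this for free.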
Is the concept of Larrabee fundamentally flawed?
In theory, it’s not. Intel argues that over the long term, it will work because even GPUs are evolving towards being general-purpose compute chips, and it is right.
Real-time graphics is already much more complex than it used to be, and there is no way forward other than a more general approach that gives developers virtually limitless possibilities.
The questions are: “when is the time right?” and “is X86 the best architecture?”. The timing really is an unknown. Also, I’m not going to join those who have spent the past two decades predicting that X86 will fail; they have all been proven wrong.
That said, it seems to me that X86 doesn’t bring much to the table as far as graphics are concerned. But I also understand that Intel has non-graphics plans for this project.
Some developers want general-purpose hardware now; others think a gradual evolution of the current approach is fine. Basically, nobody really knows, and there are just too many variables and players (including Microsoft with DirectX, developers, hardware vendors, console makers…) that have a great deal of influence over the final outcome. This is not only a technical issue, but certainly a political one as well.
Intel’s problem is that everything around real-time graphics gravitates around current GPUs, from how games are designed to APIs like DirectX or OpenGL, which were created to support fixed-pipeline hardware and later updated for programmable shaders.
To be commercially successful, Intel will have to provide “close-enough” performance on a terrain that is definitely not of its choosing. In the meantime, Intel will have to do its best and continue to invest (billions?) in that project.
Intel needs a game-changer
One thing that could turn Intel’s fortunes around would be a successful game console contract (PS4, Xbox 3). In the console space, relative performance against existing products is not as critical. Also, developers don’t mind (to some extent) bypassing APIs like DirectX and programming the hardware directly (down to the metal), which would be a dream for Intel.
Finally, Intel can probably maintain binary backward compatibility forever, which could be good for obvious reasons. In reality, it’s not so easy, and if you think that PS3 programmers complain a lot, wait until they’re told that everything GPU hardware does for them today, they’ll have to program themselves tomorrow.
It might be fun for high-profile developers, but for the smaller guy on an under-staffed team with a tight deadline, it could be one more reason to jump out of the window.
Intel has surely been talking to console makers, but I’m not sure that it can win a contract without solid proof that it can deliver, and this cancellation certainly doesn’t help.
There’s too much at stake for Intel to give up
Some are already saying that Intel is throwing in the towel, but I don’t think that it is, will, or even can, for that matter (at least not so soon). The stakes are just too high. What stakes? Simply the future of Intel as the provider of the most expensive and most important chip in computers.
The future of performance is multi-core; Intel itself has said so, and that’s true by any measure today. However, applications that do very well with many cores tend to be accelerated even further by graphics processors, sometimes by factors in the hundreds. You can imagine that this is a (big) problem for Intel in the long run. Right now, Intel’s top priority is to slow or stem this GPU advance, and/or get into the game and dominate it. Easier said than done.
Intel does not have a whole lot of options to counter the rise of GPUs. It can’t just multiply the number of CPU cores by 4, 8 or 12 to match peak performance.
Even if that were feasible, it would be inefficient: it would increase the cost and the number of defective parts (the bigger the chip, the more likely it is to contain a physical flaw), which would hurt the bottom line. Also, consumers would never pay a huge premium just to accelerate a few applications like video compression and physics computations. Most apps won’t be accelerated by a massive influx of processing cores anyway.
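As a rough illustration of the yield point, here is the classic Poisson approximation for die yield. The defect density used below is an arbitrary number picked for the example, not an Intel figure, and real yield models are more involved.

```python
# Back-of-the-envelope die-yield estimate using the classic Poisson model:
# the expected fraction of defect-free dies falls exponentially with die area.
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Expected fraction of dies with zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Assumed defect density of 0.5 defects/cm^2 (illustrative only).
for area in (1.0, 2.0, 4.0):
    print(f"{area:.1f} cm^2 die -> ~{poisson_yield(area, 0.5):.0%} good dies")
```

Under these illustrative assumptions, doubling the die area doesn’t just double the silicon cost; it sharply cuts the share of sellable chips.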
GPUs have an awesome “killer app”: graphics. Computer graphics is still relatively young and there is plenty of room for further developments and acceleration. A company like NVIDIA sees graphics as being “computationally unbounded”, and it is right, at least for a long while.
What’s even better for GPUs is that operating systems like Vista and Mac OS X now rely on their presence to display fancy user interfaces. Because of that, consumers are ready to pay a premium to get better graphics, and video compression or physics acceleration comes as a “bonus”. Graphics is a great “carrier” for massively parallel processors: it is the only obvious way to produce and sell one.
In my opinion, that is exactly why Intel wanted to build a GPU. Now, Intel is also looking at other options aimed at the super-computing world where hefty margins can justify building and selling such a processor.
In that space, Intel can use the decades-old value proposition that made X86 unbeatable: backward binary compatibility. If Intel can create a “cluster-on-a-chip” that accelerates existing applications with minimal effort, it would generate tremendous interest. I’ve always said that something like LRB would be a formidable general-purpose compute chip.
Simply put, if Intel can’t stop the rising tide of massively parallel chips, it has to ride that wave, and that’s precisely what LRB was supposed to do. That’s why Intel can’t possibly give up now.
Live to fight another day
It is clear that Larrabee 1.0 was not ready for prime time and that things are much harder than some (but not all) at Intel once thought. My opinion is that part of management got carried away and started to parade too early and too loudly. Now it’s time for humility and pragmatism.
The road to building a GPU is long and littered with the remains of companies that were once glorious. That said, none of these companies were anything like Intel, so this challenge is far from over. Intel has plenty of cash, smart people, and manufacturing capabilities that are years ahead of everyone else. Now it’s time to dig in for a multi-year struggle.
Link: Larrabee architecture overview
*Photo courtesy of brightsideofnews.com