AMD has this Heterogeneous System Architecture thing, supposed to let CPUs and GPUs share memory. Perhaps it could be applied to two GPUs sharing memory?
Sharing the memory mainly helps with GPGPU tasks. The CPU doesn't really need to access the GPU memory at any given time, from what I remember, and the GPU only needs it when moving stuff across its own buffer. And even if it does help, it's just one part of the whole thing they'd need to develop.
I do remember when the 4870X2 was launched and it sported its own internal PCIe bridge. It was never used, IIRC. Wasted money in R&D that could've gone to a better GPU or something.
Indeed, but it would be fascinating to see if it could be done and done well. They could perhaps even use different configurations on each chip for more efficiency on certain tasks?
Well, if they still have space in the process node, I really don't see how adding an outside chip is *better*, in terms of all the associated costs, than just including it in the design (integrating it into the GPU). It's kind of the same deal with fixed pipelines: you could have them outside the GPU, but why bother?
Would you think nVidia would put a PPU (as they were called) alongside a GPU nowadays?
And I really don't think they would be more efficient. Once you move stuff off-core (or off-GPU in this case) you will always incur performance penalties. It *might* help with cooling, but that's about it...