Intel's Xe Graphics Architecture to Support Hardware-Accelerated Ray Tracing

Intel published a news byte today outlining its announcements at the FMX graphics trade show taking place in Germany this week. It includes the tasty tidbit that the company's forthcoming data center-specific Xe graphics architecture will support hardware-based ray tracing acceleration.

From the blog:

I’m pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.

As a quick refresher, Xe is Intel's forthcoming range of low- to high-power graphics solutions. These graphics processors will scale from integrated graphics on CPUs up to discrete mid-range, enthusiast, and data center/AI cards. Intel has said it will split these solutions into two distinct architectures, with both integrated and discrete graphics for the consumer (client) market and discrete cards for the data center. The cards will be built on Intel's 10nm process and should arrive in 2020.

Support for ray tracing would bring Intel's graphics cards, at least for the data center, up to par with Nvidia's Turing architecture, which largely paved the path to hardware-based ray tracing in the consumer market. Given that this type of functionality is typically embedded at a foundational level in the microarchitecture, Intel's support for ray tracing in its data center graphics cards strongly implies the desktop variants could support the same functionality, though it is noteworthy that the company is splitting its offerings into two distinct architectures.

Nvidia's Turing offerings also come both with and without ray tracing support, so it is possible that Intel could adopt a similar tiered model that leverages ray tracing as a segmented feature to encourage customers to buy higher-priced models.

Details on Intel's Xe graphics architecture should continue to trickle out over the weeks and months ahead. In the meantime, head over to our Intel Xe Graphics Card feature for the latest details. 


25 comments
  • OctaBotGamer
    Yup, finally Intel is getting into dedicated graphics cards. It will be a brand new thing, but I expect enabling ray tracing will drop the fps to below 10 :tearsofjoy: I also think that'll only last for a very short period and will probably be solved after some time; rest assured, by the time the graphics card launches.
  • digitalgriffin
    This is not surprising. Intel demonstrated real-time ray tracing back in 2007. As their team includes one of the original Larrabee architects, it would make sense for them to include hardware support.

    View: https://www.youtube.com/watch?v=blfxI1cVOzU


    Intel's original approach was small, non-ASIC CISC cores (Larrabee/Knights Corner/Knights Landing); this sacrifices efficiency for a more flexible approach that can be updated.

    But their implementation is no different from NVIDIA's or AMD's: define a bounding box for the ray trace hit test, extend a ray forward from the viewer into point space, then calculate caustics/IOR, ambient/diffuse lighting, single reflections, shadows, etc.
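
    For readers following along, here's a minimal sketch of the bounding-box hit test described above, using the classic "slab" method. The function names and example values are illustrative only, not any vendor's actual implementation.

```python
# Minimal ray/AABB intersection via the slab method: the kind of
# "bounding box hit test" that ray tracing hardware accelerates.

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Return True if a ray intersects an axis-aligned bounding box.

    origin, direction, box_min, box_max are 3-tuples of floats.
    direction need not be normalized; a zero component means the ray
    is parallel to that pair of slabs.
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Parallel to this slab: the ray must start inside it.
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:  # slab intervals no longer overlap
                return False
    return t_far >= 0.0  # reject boxes entirely behind the ray origin


if __name__ == "__main__":
    # Ray from the viewer at the origin looking down +Z, toward a unit box.
    box = ((-0.5, -0.5, 2.0), (0.5, 0.5, 3.0))
    print(ray_aabb_hit((0, 0, 0), (0, 0, 1), *box))  # True: looking at it
    print(ray_aabb_hit((0, 0, 0), (0, 1, 0), *box))  # False: looking away
```

    In a real renderer this test runs an enormous number of times per frame against a bounding volume hierarchy, which is why dedicating fixed-function circuitry to it can pay off.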

    I don't want to say these are elementary calculations. They aren't. But the algorithms have been around for decades now; I used to dive into the POV-Ray source to see how it worked. The mechanisms for speeding it up with ASIC circuits, though, have remained out of the spotlight because of the amount of circuitry demanded by even basic ray hit tests and calculations.

    We've come to a point now where adding those circuits might be more beneficial to image quality than, say, creating more raw FLOPS throughput (for several technical reasons).

    In the future you will see architectures perform some interesting balancing acts. A lot of this depends on which takes greater hold: ray tracing or VR. You can't do both.

    If I were a betting man, and ray tracing does take off, I would say we'd see a new sub-standard of chiplet architecture where chiplets are assigned to rendering viewports on draw calls: one dedicated to the left lens, one to the right, for VR. I've been working on the algorithms to solve the efficiency issues here. At what point, when comparing Z-buffer depths, can you render a background draw with one call and duplicate it across both viewports? A 0.5-pixel calculated distance change in the main FOV, with a 1.5 max delta at the common FOV edges?
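
    As a rough illustration of the viewport-sharing question above, here's a sketch of a pinhole-model parallax check: estimate the screen-space disparity between the two eye projections of a point at a given depth, and share the draw call only when it falls under a threshold. The 0.5 px threshold, IPD, and focal length are illustrative assumptions, not values from any shipping renderer.

```python
# Rough stereo-parallax check: can a background draw be rendered once
# and duplicated into both eye viewports? Pinhole camera model assumed.

def disparity_px(depth_m, ipd_m=0.063, focal_px=1000.0):
    """Approximate pixel disparity between the left/right eye projections
    of a point at depth_m meters: disparity = ipd * focal / depth."""
    return ipd_m * focal_px / depth_m


def can_share_draw(depth_m, threshold_px=0.5):
    """True if the stereo disparity at this depth is small enough that a
    single draw call could plausibly serve both viewports."""
    return disparity_px(depth_m) < threshold_px


if __name__ == "__main__":
    for depth in (2.0, 50.0, 200.0):
        print(f"{depth:6.1f} m -> {disparity_px(depth):5.2f} px disparity,"
              f" share draw: {can_share_draw(depth)}")
```

    At these example values, only geometry farther than roughly 126 m drops under 0.5 px of disparity, which is why only distant background draws would be candidates for sharing.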
  • bit_user
    Quote:
    the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.

    This could actually be taken to mean it's not coming in their first-gen products. The fact that they cite their "roadmap", rather than being more specific, could be very deliberate.

    Also, the term "hardware acceleration" can be taken to mean anything that runs on the GPU itself.

    So, we could actually see first-gen products that implement it in software, like what Nvidia recently enabled on some of its GTX 1xxx-series GPUs, followed by true hardware support in the second generation. This progression also ties in with talking about a "roadmap".