Intel is making a big push into the future of graphics. The company is presenting seven new research papers at Siggraph 2023, an annual graphics conference, one of which tries to address VRAM limitations in modern GPUs with neural rendering.

The paper aims to make real-time path tracing possible with neural rendering. No, Intel isn’t introducing a DLSS 3 rival, but it is looking to leverage AI to render complex scenes. Intel says the “limited amount of onboard memory [on GPUs] can limit practical rendering of complex scenes.” Intel is introducing a neural level-of-detail representation of objects, and it says it can achieve compression rates of 70% to 95% compared to “classic source representations, while also improving quality over previous work.”
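To give a rough sense of what a neural representation means in practice, here is a purely illustrative sketch: a tiny network that maps a 3D position and a level-of-detail value to appearance attributes. The layer sizes, inputs, and outputs are assumptions made for illustration and are not taken from Intel’s paper; the point is only that a small set of trained weights can stand in for much larger mesh and texture data.

```python
import numpy as np

# Hypothetical toy network with two small dense layers. A trained version of
# something like this would replace megabytes of explicit geometry/texture data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 4)), np.zeros(32)   # input: x, y, z, lod
W2, b2 = rng.normal(size=(4, 32)), np.zeros(4)    # output: e.g. r, g, b, density

def query(position, lod):
    """Evaluate the (untrained) representation at a 3D point and LoD level."""
    x = np.concatenate([position, [lod]])
    h = np.maximum(W1 @ x + b1, 0.0)              # ReLU hidden layer
    return W2 @ h + b2

print(query(np.array([0.1, 0.5, -0.2]), lod=2.0))
```

The compression claim follows from this framing: storing the weights of a compact network costs far less memory than storing the full-resolution asset it was trained to reproduce.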

It doesn’t seem dissimilar from Nvidia’s Neural Texture Compression, which Nvidia also introduced through a paper submitted to Siggraph. Intel’s paper, however, looks to tackle complex 3D objects, such as vegetation and hair. It’s applied as a level of detail (LoD) technique for objects, allowing them to look more realistic from farther away. As we’ve seen from games like Redfall recently, VRAM limitations can cause even close objects to show up with muddy textures and little detail as you pass them.

In addition to this technique, Intel is introducing an efficient path-tracing algorithm that it says will, in the future, make complex path tracing possible on mid-range GPUs and even integrated graphics.

Path tracing is essentially the hard way of doing ray tracing, and we’ve already seen it used to great effect in games like Cyberpunk 2077 and Portal RTX. For as impressive as path tracing is, though, it’s extremely demanding. You’d need a flagship GPU like the RTX 4080 or RTX 4090 to even run these games at higher resolutions, and that’s with Nvidia’s tricky DLSS Frame Generation enabled.

Intel’s paper introduces a way to make that process more efficient. It does so with a new algorithm that is “simpler than the state-of-the-art and leads to faster performance,” according to Intel. The company is building upon the GGX mathematical function, which Intel says is “used in every CGI movie and video game.” The algorithm reduces this mathematical distribution to a hemispherical mirror that is “extremely simple to simulate on a computer.”

The idea behind GGX is that surfaces are made up of microfacets that reflect and transmit light in different directions. This is expensive to calculate, so Intel’s algorithm essentially reduces the GGX distribution to a simple-to-calculate slope based on the angle of the camera, making real-time rendering possible.
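For context, the GGX distribution the article refers to is a standard microfacet model whose isotropic form is simple to write down; the hard part, which Intel’s paper targets, is sampling it efficiently inside a path tracer. The snippet below evaluates the well-known GGX normal distribution function only, as a reference point. It is not Intel’s new sampling algorithm, which isn’t reproduced here.

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    """Standard (isotropic) GGX normal distribution function.

    n_dot_h: cosine of the angle between the surface normal and the
             microfacet (half) vector.
    alpha:   roughness parameter (small = mirror-like, 1 = very rough).
    """
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# A smooth surface concentrates microfacet normals around the geometric normal,
# while a rough one spreads them out -- that spread is what a path tracer
# has to sample for every bounce of light.
print(ggx_ndf(1.0, 0.1))   # smooth surface, microfacet aligned with the normal
print(ggx_ndf(0.7, 0.5))   # rougher surface, off-angle microfacet
```

Evaluating this function is cheap; generating random microfacet directions distributed according to it, millions of times per frame, is where a simpler formulation like Intel’s hemispherical-mirror reduction pays off.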

Based on Intel’s internal benchmarks, it delivers up to a 7.5% speedup when rendering path-traced scenes. That may seem like a minor bump, but Intel seems confident that more efficient algorithms could make all the difference. In a blog post, the company says it will demonstrate at Siggraph how real-time path tracing can be “practical even on mid-range and integrated GPUs in the future.”

As for when that future arrives, it’s tough to say. Keep in mind this is a research paper right now, so it might be some time before we see this algorithm widely deployed in games. It would certainly do Intel some favors. Although the company’s Arc graphics cards have become excellent over the past several months, Intel is still focused on mid-range GPUs and integrated graphics, where path tracing isn’t currently possible.

We don’t expect you’ll see these techniques in action anytime soon, though. The good news is that researchers keep finding new ways to push visual quality and performance in real-time rendering, which means these advances should, eventually, show up in games.