Nvidia's next-gen DLSS may leverage AI — tech will be able to generate in-game textures, characters, and objects from scratch (2024)

Jensen Huang of Nvidia gave a sneak peek at what the trillion-dollar GPU company is planning for future iterations of Deep Learning Super Sampling (DLSS). During a Q&A session at Computex 2024 (reported by More Than Moore), Huang fielded a DLSS-related question, saying that in the future we will see in-game textures and objects created purely through AI. Huang also stated that NPCs will be generated through DLSS.

Generating in-game assets with DLSS would help boost gaming performance on RTX GPUs. Shifting work to the tensor cores reduces demand on the shader (CUDA) cores, freeing up resources and boosting frame rates. Huang explained that he sees DLSS generating textures and objects by itself and improving object quality, similar to how DLSS upscales frames today.
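As a back-of-the-envelope sketch (hypothetical numbers, not figures Nvidia has published), the frame-rate effect of moving shader work onto otherwise-idle tensor cores looks like this:

```python
def fps_after_offload(shader_ms, other_ms, offload_fraction):
    """Toy model: estimated FPS after moving a fraction of per-frame shader
    work off the CUDA cores, assuming the offloaded work (now on tensor
    cores) overlaps fully with the rest of the frame."""
    new_frame_ms = shader_ms * (1 - offload_fraction) + other_ms
    return 1000.0 / new_frame_ms

baseline = fps_after_offload(12.0, 4.0, 0.0)    # 16 ms frame -> 62.5 FPS
offloaded = fps_after_offload(12.0, 4.0, 0.25)  # 13 ms frame -> ~76.9 FPS
print(f"{baseline:.1f} -> {offloaded:.1f} FPS")
```

The inputs are invented; the point is only that trimming shader time directly shortens the frame and raises the frame rate.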

We could be relatively close to this next iteration of DLSS technology. Nvidia is already working on a new texture compression technology that uses trained neural networks to significantly boost texture quality while keeping video memory (VRAM) demands similar to those of modern-day games. Traditional texture compression methods are limited to a compression ratio of around 8x, but Nvidia's neural-network-based compression tech can compress textures at ratios up to 16x.
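To put those ratios in perspective, here is a quick illustrative calculation (standard texture sizes, not Nvidia's own data):

```python
def compressed_size_mib(width, height, bytes_per_pixel, ratio):
    """Approximate compressed texture size in MiB (ignores mipmaps)."""
    raw_bytes = width * height * bytes_per_pixel
    return raw_bytes / ratio / (1024 * 1024)

# A 4096x4096 RGBA8 texture is 64 MiB uncompressed.
traditional = compressed_size_mib(4096, 4096, 4, 8)   # ~8x block compression
neural = compressed_size_mib(4096, 4096, 4, 16)       # ~16x neural compression
print(traditional, neural)  # 8.0 MiB vs 4.0 MiB
```

Put another way, doubling the ratio halves the footprint, so at the same VRAM budget a game could store twice as much raw texture data.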

This tech dovetails with Huang's point about enhanced object fidelity through DLSS. In-game objects are essentially textures wrapped around 3D geometry, so better texture compression directly raises object quality.

The more intriguing aspect of Huang's vision for DLSS is in-game asset generation. Nvidia's existing DLSS 3 frame generation tech synthesizes frames between rendered frames to boost performance. Asset generation goes a step further, with in-game assets created entirely from scratch through DLSS. (DLSS would still need to be told where assets belong in the game world and what needs to be rendered, but the assets themselves would be generated from scratch.)
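The shape of that frame generation pipeline can be sketched in a few lines (a toy model of frame interleaving, not Nvidia's actual algorithm; the `generate` callable is a hypothetical stand-in for the neural network):

```python
def with_generated_frames(rendered_frames, generate):
    """Interleave one generated frame between each pair of rendered frames,
    roughly doubling the presented frame rate (toy model only)."""
    out = []
    for prev, nxt in zip(rendered_frames, rendered_frames[1:]):
        out.append(prev)
        out.append(generate(prev, nxt))  # model synthesizes the in-between frame
    out.append(rendered_frames[-1])
    return out

frames = ["F0", "F1", "F2"]
print(with_generated_frames(frames, lambda a, b: f"gen({a},{b})"))
# ['F0', 'gen(F0,F1)', 'F1', 'gen(F1,F2)', 'F2']
```

Asset generation would be a step beyond this: instead of filling in whole frames between two rendered ones, the model would fill in individual objects the engine never authored.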

Huang also discussed DLSS's future role for NPCs. Not only does Huang expect DLSS to generate in-game assets, but he also envisions it generating NPCs. He gave an example of six people existing in a video game: two of the six are real characters, while the other four are generated entirely by AI.

This is a callback to Nvidia ACE, which was demoed in 2023. ACE is an in-game LLM designed to bring NPCs to life, giving them unique dialogue and responses to player interaction. Nvidia believes ACE (or some future form of it) will play a vital role in PC gaming and become an integral part of DLSS.


This isn't the first time we've heard about DLSS's future capabilities. Nvidia has publicly stated that it expects the future of PC gaming to be rendered entirely through AI, replacing classic 3D graphics rendering. In the near term, generating specific in-game assets is a step toward the AI-generated future Nvidia envisions.

Aaron Klotz

Freelance News Writer

Aaron Klotz is a freelance writer for Tom's Hardware US, covering news related to computer hardware such as CPUs and graphics cards.


Comments from the forums

  • Metal Messiah.
    Nvidia's next-gen DLSS may leverage AI

    DLSS has always leveraged AI by the way. So word the title accordingly.

    "Nvidia's next-gen DLSS may leverage AI to generate in-game assets, objects and NPCs from scratch".

    But anyway, this is the actual Q&A snippet. Huang still wasn't very clear on whether this tech will be included in the next-gen version of DLSS, or whether it will be a separate AI tool for gaming.

    If used in DLSS, then we could be looking at a future version 4 or 5 here. *speculation*

    Q: AI has been used in games for a while now, I’m thinking DLSS and now ACE. Do you think it’s possible to apply multimodality AIs to generate frames?
    A: "AI for gaming - we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames - not interpolation, but generation. In the future we’ll even generate textures and objects, and the objects can be of lower quality and we can make them look better.

    We’ll also generate characters in the games - think of a group of six people, two may be real, and the others may be long-term use AIs. The games will be made with AI, they’ll have AI inside, and you’ll even have the PC become AI using G-Assist. You can use the PC as an AI assistant to help you game. GeForce is the biggest gaming brand in the world, we only see it growing, and a lot of them have AI in some capacity. We can’t wait to let more people have it."

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_medium_size.pdf

    Reply

  • Metal Messiah.

    Somewhat related.

    https://nvidianews.nvidia.com/news/new-nvidia-research-creates-interactive-worlds-with-ai
    View: https://www.youtube.com/watch?v=ayPqjPekn7g&t=92s

    Reply

  • CmdrShepard

    All these decades of steady improvements until we reached almost fully photo-realistic rendering in games, all those gigabytes of textures, highly detailed 3D models, accurate mocap and lipsync... and now we're throwing all that out for some fake AI-hallucinated frames?

    Let me be the first to say -- NO THANKS.

    That video above looks horrible to me, and any new games using these AI gimmicks for "reducing load on CUDA cores", which I've been dearly paying for across generations ever since the 8800 GTX, will be on my hard-pass list.

    I am not against the use of AI for improving NPC personas (it would be great for RPGs), but I don't want fake visual crap.

    Reply

  • ivan_vy

    Looks like a fever dream; won't it compromise the creators' vision? Like AI photo-coloring: it looks great, but sometimes it chooses the wrong color.
    I'm more in favor of it for content creation and asset compression, but for rendering ...mmm... I think it needs a few more generations.

    Reply

  • bit_user

    Metal Messiah. said:

    DLSS has always leveraged AI by the way. So word the title accordingly.

    ...to the extent that people use AI and Deep Learning interchangeably, yes. I had the same thought.

    Metal Messiah. said:

    But anyway, this is the actual Q&A snippet. Huang still was not very clear whether this tech will be included in next-gen version of DLSS , or will it be separate AI tool for gaming.

    It sounds to me like something fundamentally different than DLSS.

    Metal Messiah. said:

    Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

    That paper didn't sound terribly practical, IMO. Texture lookups are higher-frequency than the rate at which DLSS interpolates pixels, so I don't know if it's a big win to put a lot more computation in that phase. You also need to make the model small enough that it's not going to generate more memory traffic than it saves by increasing texture compression ratios.

    That gets at a broader concern I have around this AI-generated content, which is the size of the models needed to generate convincing assets. These seem like they'd chew up a lot of memory and hardware bandwidth, if they're being run mid-gameplay (i.e. as opposed to being limited to level loading).

    Either way, I think it's not right around the corner, but maybe something that starts to happen in 3-4 years.

    Reply

  • bit_user

    ivan_vy said:

    looks like a fever dream, won't it compromise the creators' vision?

    Yeah, it will need to provide creators with enough control, but I guess big game publishers are known to be cheap. So, even if it doesn't have quite the degree of control they'd like, I'm not sure that'll keep it from being adopted by some.

    In terms of realism, I believe it will need to be competitive with manually crafted assets.

    Reply

  • Ogotai

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

    Reply

  • valthuer

    Ogotai said:

    so nvidia wants to create more fake stuff, like the fake frames of DLSS 3 ?

    Oh, please. What is real anyway? After all, we're talking about virtual environments, for God's sake.

    You're living in a world with Anisotropic Filtering reducing texture pixel counts, heterogeneous deferred shading reducing lighting pixel counts, Z-culling reducing rendered pixel counts, MSAA reducing rendered pixel counts (over SSAA), TSAA and other shader-based AA techniques reducing pixel counts (over MSAA), anisotropic pixels reducing pixel counts (e.g. Wipeout using variable pixel widths to raise and lower per-frame render loads to maintain 60FPS in varying environments), Variable Rate Shading reducing pixel counts dependent on screen content, screen-space reflections reducing rendered pixel counts by just duplicating rendered pixels, probe reflections reducing rendered pixels by just copying from a texture, and so on.

    Game engine optimisation is all about finding places where you can outright avoid doing work wherever possible. It's 'faking' all the way down.

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

    Reply

  • bit_user

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?

    You're singing my tune!

    I maintain that every pixel at 4k is not precious. Most 4k monitors are too small for that resolution to really add much value to the gaming experience, yet a lot of people are moving that way on the resolution scale (often probably for non-gaming reasons). So it makes sense to use more approximations, interpolations, etc. to fill in those extra details.

    More to the point: the proof of the pudding is in the eating. If the end user finds technologies like DLSS 3 yield a better experience than going without, they'll use them. And what's wrong with that? I use motion interpolation on my TV, in spite of the occasional artifact, because the overall image quality is a lot better.

    Reply

  • thestryker

    valthuer said:

    It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

    DLSS 3 isn't upscaling, it's frame generation, which is where the "fake frames" commentary comes from.

    I do think there's a lot of value to be had in frame generation technologies, but it's being pitched all wrong. With a good implementation it can make games at high detail settings look really good, so long as your minimum frame rate is high enough. It can't make up for poor base performance because of the input lag, but it can make something that already runs at 120 FPS natively even better.

    Reply
