Nvidia CEO Jensen Huang has clarified that he is not a fan of AI slop either, but that DLSS 5 is not aiming to turn games into it. Instead, it’s a tool that developers can choose to use, or not, during development.
Appearing on Lex Fridman’s latest podcast episode, Huang was asked about DLSS 5 and the criticism of it, a week after Nvidia announced the new AI-powered tool at GTC. Huang said he now understood where the criticism stemmed from, just a few days after dismissing all critics of DLSS 5 as “completely wrong.”
“I think their perspective makes sense, and I can see where they’re coming from, because I don’t love AI slop myself. You know, all of the AI-generated content increasingly looks similar, and they’re all beautiful, and I can…so I can…I’m empathetic to what they’re thinking,” Huang stated.
Speaking on the potential for DLSS 5 to turn games into AI slop, Huang reiterated that the tool was fully in the hands of developers and under their control.
“DLSS 5 is 3D-conditioned, it’s 3D-guided. It’s ground truth structure data-guided. The artist determines the geometry, we’re completely truthful [to the geometry in every frame],” he continued. “It’s conditioned by the textures, the artistry of the artist. And so every single frame, it enhances but doesn’t change.”
This aligns with Nvidia’s messaging around DLSS 5 since its announcement, which explicitly stated that the technology can enhance features such as the sheen on a fabric, the interaction of light on hair, and more, without altering the structure and semantics of the original scene. The announcement also stated that developers will have control over different aspects of the implementation, such as intensity, color grading, masking, and more.
This, however, stands in contrast to comments made by another Nvidia employee. In an exchange with YouTube creator Daniel Owen, Jacob Freeman, a GeForce Evangelist at Nvidia, was asked bluntly whether DLSS 5 takes a static 2D rendered frame from a game and uses that as the sole input for the model to enhance. “Yes, DLSS 5 takes a 2D frame plus motion vectors as input,” Freeman replied via email.
Owen then followed up by asking whether DLSS 5 has any access to the underlying 3D geometry data in a game’s engine, or whether it is more akin to taking a screenshot and asking a generative AI model to enhance it. Freeman’s reply here aligned with Nvidia’s previous comments, stating that the model is trained to understand complex scenes, but he concluded by saying that this is all done by “analyzing a single frame,” implying that the model receives no information from the game it is implemented in beyond the currently rendered frame and the one immediately after it.
This could explain why, as Owen notes throughout his video, character models show details, such as additional hair and makeup, that are not present on the original 3D models at all, with DLSS 5 instead inferring those enhancements from a single frame alone. That makes the tool sound far less sophisticated than Huang describes in this interview, but it might explain why he thinks it could be useful for artists who want to give their games wholly different art styles from what was originally intended.
“…in the future you could prompt it, ‘I want it to be a toon shader.’ I want it to look like this, kinda, you know, so you can give it an example. And it would generate in the style of that. All consistent with the artistry, you know, the style, the intent of the artist. All of that is done for the artist, so that they can create something that is more beautiful, but still in the style that they want,” Huang concluded.
In October 2025, the Content Overseas Distribution Association (CODA) sent an open letter to OpenAI requesting that the company cease training its models on copyrighted content CODA represents, in the wake of an overwhelming influx of images and videos generated in the art style of Studio Ghibli films. OpenAI had previously allowed users to prompt its latest Sora 2 model to apply the style to provided images and videos.
