Everything Looks Good Enough (Or "The Limits of Fidelity")
14 Sep 2025

Since the ~~beginning of time~~ invention of video, humanity has thirsted for realism. For recorded video, this usually means higher resolution, richer colors, broader dynamic range, and perhaps higher framerates. For video games, it has meant higher polygon counts in models, higher-fidelity shadows, detailed reflections, and realistic skin and hair.
With decades of work put into so many disciplines - various display technologies, 3D rendering, visual effects, simulation engines, game design - I believe that we’re finally nearing the limits of fidelity, or at least the end of the incessant thirst for it.
What Is Fidelity?
Here, with the term “fidelity” I refer to the idea of perceived realism in visual art. I concede that this is a nebulous concept, fraught with opinions. I’m not referring to realism that could be scientifically verified with blind comparisons between real objects and 3D renders, but rather the subjective sense that a movie or game feels real enough.
Yes, I am aware that not every filmmaker or video game artist aims to make their art indistinguishable from reality. So I’m only talking about art that aims for realism with intent. I hope the spirit of what I’m saying is clear.
With that out of the way…
Things don’t have to look real to look great. In movies, visual effects or fully CG sequences can look interesting in their own right. As a consumer of this media, I like to lean into the finished product and look past its visual flaws. This isn’t always easy or possible; it’s the artists’ responsibility to make it so. Many things, such as uninteresting writing, bland characters, or plot holes, can make me disillusioned and nitpicky. With this mindset, if presented correctly, a video or game can feel “real enough” without being photorealistic, with any further visual polish offering only minuscule marginal value.
What’s interesting is that there is no clear “enough” point that applies to all film and games. It can vary by genre and artistic intent. Sometimes, the art direction alone is enough to carry almost-realistic over the finish line. I will illustrate this with examples later in this essay.
Fidelity in film and games is not created fully within the creative product either. Presentation matters. It’s not only cameras and game engines that carry realism, but also the projectors and screens that are used to view or play them.
A Jaunt Through Display Advancements
Cathode-Ray Tube (CRT) displays were around for a very long time before flat-screen technologies rolled around. We’re talking commercialization in 1922, several decades before plasma and LCD hit the mainstream. Digital displays as we know them today (usually LED-lit LCDs or OLEDs, but also other kinds) have been ubiquitous for less than half that time.
Yet, advancements in modern screens have been incredibly rapid, not only in quality but also in manufacturing processes that make them cheap as dirt. It wasn’t long ago that a 30” or 40” LCD TV was a huge expense. Now it’s easy to come by high-quality 55”-65” class TVs well below US$500, and it’s not uncommon to have several in a household. To be fair, much of that can be chalked up to the built-in advertising that gives manufacturers ongoing revenue, but that’s a different topic.
720p TVs became mainstream in the 2000s, 1080p around 2010, and 4K roughly in the mid-2010s. What’s notable is that 4K adoption in cinema has been slower. A large number of movie theaters remain on 1080p projection, though post-2020 data seems limited in my research. In 2018, still in the early-to-middle era of home 4K adoption, some estimates put the split between 1080p and 4K projection in theaters at roughly 80/20.
4K in film is a thing today, but that can be credited to streaming service requirements. Netflix, Prime Video, and others started requiring 4K deliveries for their original content in the 2010s. It’s a safe bet that that was driven by 4K adoption in households.
Nevertheless, despite the first consumer 4K TV coming out in 2012, and mainstream-ish adoption in households in the late 2010s, it doesn’t look like 5.6K, 8K or higher resolution displays will become common even in the mid-2020s, when I’m writing this.
And it’s not for lack of trying. All the major manufacturers have built and sold 8K TVs in various capacities. They tend to be large, lower volume, and expensive.
Of course they tend to be large; that’s where the extra resolution is somewhat justifiable, as long as there’s any 8K content to watch on them in the first place. I’m sure they have some upscaling built in to tide over the relatively few customers, but still.
While their prices have gone down over the years, the trend hasn’t been as sharp as with 4K. Mainstream adoption seems unlikely.
Limit Of Fidelity 1: Pixel Density
This brings me to the first limit that we have hit - diminishing returns on pixel density.
Apple has a brand name for this - Retina Display. First used in 2010, they continue using it in other forms more than a decade later (see: Liquid Retina, Super Retina). Originally, it was Apple’s cute way of referring to displays with pixels packed so closely together that users couldn’t discern individual pixels at a typical viewing distance for that device type.
Did we hit this limit 15 years ago? Yes, definitely on handhelds, and eventually on laptops. Today, most phones and tablets, and many laptops on the market have Retina-class displays. For these products, I’d argue the main benefit is gorgeous font rendering. Before, there was much greater reliance on font smoothing techniques.
Desktop monitors and TVs are more geared towards film and games. On those devices, it took a little longer. I’m not sure if we’re there yet, or if we’ll ever want to be.
The Steam Hardware & Software Survey shows that even in the mid-2020s, as I write this, over half the user base has a 1080p monitor (around 54%). Standard 1440p is a distant second (around 20%). Standard 4K (3840 x 2160) is a tiny minority (under 5%). See updated data here.
Resolution numbers without screen size don’t tell us the whole story. Larry Jordan has a great explainer on how setting the virtual resolution to half the monitor’s resolution can make any monitor Retina quality.
Take a look at Rodrigo Polo’s calculator for determining the distance at which a screen of a given resolution and size can be perceived as Retina quality. Here are common monitor sizes and the minimum distances at which 1080p and 1440p can be perceived as “Retina”, according to Rodrigo’s calculator:
| Diagonal size | DfR* at 1080p | DfR* at 1440p |
|---|---|---|
| 21” | 32.77” | 24.58” |
| 24” | 37.46” | 28.09” |
| 27” | 42.14” | 31.60” |
| 32” | 49.94” | 37.46” |
* DfR = Distance for “Retina”
Some searching tells me that the ideal viewing distance for a computer monitor is 20 to 30 inches, or up to 40 inches for larger monitors. Meaning, the 1080p to 1440p jump in computer monitors would be noticeable to most people.
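For the curious, here’s a minimal sketch, in Python, of the arc-minute rule that (as far as I can tell) underlies Rodrigo’s calculator and Apple’s original Retina claim: a display reads as “Retina” once each pixel subtends less than one arcminute (1/60 of a degree) at your viewing distance. This is my own reconstruction, not the calculator’s actual code.

```python
import math

def distance_for_retina(diagonal_in: float, width_px: int, height_px: int) -> float:
    """Minimum viewing distance, in inches, beyond which individual pixels
    subtend less than one arcminute and become indiscernible."""
    ppi = math.hypot(width_px, height_px) / diagonal_in  # pixels per inch
    one_arcminute = math.radians(1 / 60)
    return 1.0 / (ppi * math.tan(one_arcminute))

# Reproduces the table above:
for diag in (21, 24, 27, 32):
    print(f'{diag}": {distance_for_retina(diag, 1920, 1080):.2f}" at 1080p, '
          f'{distance_for_retina(diag, 2560, 1440):.2f}" at 1440p')

# A hypothetical 65" living-room TV, for comparison:
print(f'65" 4K TV:    {distance_for_retina(65, 3840, 2160) / 12:.1f} ft')  # ~4.2 ft
print(f'65" 1080p TV: {distance_for_retina(65, 1920, 1080) / 12:.1f} ft')  # ~8.5 ft
```

The monitor numbers fall out of that formula exactly, and the 65-inch figures hint at why 4K matters less at couch distances, which I’ll come back to shortly.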
1440p monitors have been very attainable for many years. Today, I can find basic ones for just over US$100, and others with some bells and whistles for under US$200 on Amazon in the US.
Yet, over half the Steam population using 1080p monitors paints a clear picture: higher PPI is overrated for most people, even when the higher resolution would be noticeable to them.
Game console upgrade cycles are often over 5 years long. Console manufacturers know that each generation has to be aspirational, and has to be marketed as such. When the current generation launched, Sony made it a point to include the 8K buzzword in their PlayStation 5 literature. The box included a gold badge proclaiming “8K”, despite it being obvious to folks with some technical know-how that the included hardware wouldn’t play contemporaneous games in 8K at acceptable graphics settings. The Xbox Series X technically supports 8K output too, but it was never enabled for any use case.
Of course, game consoles are generally paired with TVs, where average viewing distances are different. Even on paper, 4K is less noticeable on a TV at an 8-10 foot viewing distance than on a computer monitor.
In 2024, Sony dropped the 8K label from their updated box design for the PlayStation 5 Pro.
Video Games Today - Pure Artistic Vision
I distinctly remember when, in the early 2010s, in-game screenshots with artistic flair started popping up online. People started creating mods for games like Grand Theft Auto IV to make some elements, such as vehicles, look nearly indistinguishable from reality. Others would set up and take screenshots that could fool someone at first glance. Games like The Elder Scrolls V: Skyrim already had skyboxes that looked like landscape paintings.
I predicted that in-game photography would become a thing. I was certainly not the only one.
Today, it’s commonplace for high budget games with realistic graphics to come with photo mode features built in. There are online communities dedicated to in-game photography, like the game-agnostic subreddit r/PhotoModePhantoms, and game-specific ones like r/ControlPhotography.
Limit Of Fidelity 2: Games Look Better Than Real Life
Ok, this limit of fidelity is admittedly more subjective than the first one. I can’t convince you using numbers. Still, I’ve been convinced of it for years and still stand by it.
Game design technologies have matured to a point where, depending on the production’s goals, artists can express themselves with great creative freedom. Disk space on players’ systems is cheap and plentiful, RAM is voluminous, and graphics processors are powerful enough.
We’re not at unlimited creative freedom, mind you. There’s still progress to be made. More on this later. Nevertheless, in my opinion, games don’t have to look completely photoreal to “compete” with real life. As I mentioned at the top, the artistic intent and execution can give them a great boost.
Best Time For Remakes And Remasters
I think the industry realizes this. The number of big-brand game remakes and remasters appears to have shot up considerably in the last decade. In my opinion, there are two key reasons for this.
- Major game consoles, starting with the Xbox One and PlayStation 4 generation, run on architectures uniform with general-purpose PCs. This has several benefits - game ports between PC and console platforms are easier to develop, and games can benefit from the longevity of running across console generations thanks to the common x86 CPU architecture. Games released prior to this era were more likely to remain “stuck” on older consoles because their architectures were more bespoke.
- As I described earlier, hardware limitations for creating widely appealing game art have all but dissolved away. It’s still time consuming and takes much effort, but now it’s possible to make high-fidelity games that look real enough.
Despite the popular online sentiment that remakes and remasters are unnecessary cash grabs, which I sometimes agree with, this feels like the time to remake and remaster classics for the long term. Opinions incoming…
- The Last of Us Part I finally looks “good enough” to appeal to a wide audience for decades, compared with the PlayStation 3 era original. Admittedly, the original is a good looking game, but sensibilities change with time, and younger gamers of the 2020s may not be as forgiving as me.
- I can say the same about Uncharted: Legacy of Thieves Collection from the same studio, and hope they remake or remaster the original trilogy for modern console and PC audiences.
- I can argue that some games remade in recent years went from completely unappealing to gamers with post-2010s sensibilities, to more than playable. Think of Resident Evil titles, Silent Hill 2, the two Final Fantasy VII revamps, Tony Hawk’s Pro Skater 1+2 and the fan remake of the original Half-Life, Black Mesa.
What remains to be seen is whether these remakes stick around for a long time, or if IP owners revamp these well-received revamps even further in the next decade. I am of the opinion that those would be unnecessary.
Peak Realism Already Realized
On the other hand, we already live in a world where games with timeless high-fidelity graphics have existed for years. I wouldn’t say the majority of realistic games fit this description, but the existence of any such games bodes well for the present and future of this art form.
I played the original PlayStation 4 release of Horizon: Zero Dawn on my PlayStation 5 several years after release. It’s one of the first games I ever played that I clearly remember striking me as “wow, this looks… good enough.” In 2024, this game received a remaster that was widely deemed unnecessary. Looking at video comparisons, they look slightly different in some areas, yet equally appealing to me. Some elements are objectively more modern, such as snow and sand deformation as Aloy walks around. Even so, I would recommend either version to new players.
The aforementioned The Last of Us series is similar. In the graphics and character animation department, Naughty Dog achieved peak “realistic enough” status a while back. This carries over to their other works, Uncharted 4: A Thief’s End and Uncharted: The Lost Legacy.
Sony’s PlayStation Studios have been home to well-produced singleplayer eye candy games for a number of years.
Detroit: Become Human is another example of a game that I can’t imagine ever aging poorly. A big reason is what the game is. It’s not set in a huge open world. Scenes are confined to sets without day-night cycles. As a result, lighting never changes and can be baked in. Effects such as snowfall can push up against hardware limits because players follow predictable, linear paths through most of the setting. And the cyberpunk visual aesthetic oozes style.
Remedy Entertainment has been toying with live action video integrated into their games for a while (see Quantum Break, Control). Their 2023 release, Alan Wake II, is an absolute masterpiece in this regard. I can’t imagine this game ever needing a remaster. As of this writing, it may be the most photoreal game I’ve ever experienced. Human characters may not look indistinguishable from reality (more on this later), but the environments are jaw-dropping. Locations and effects are so realistic that the game successfully weaves live action video right into the middle of gameplay. There are multiple points at which gameplay transitions to live actors and sets, then back to a 3D-rendered world, almost seamlessly.
We achieved realistic computer-generated shiny surfaces a while ago, both in film VFX and in games. Look at driving games: Forza and Gran Turismo titles from the 2010s started looking nearly indistinguishable from reality. Especially in photo mode, these games can genuinely fool people.
Life is Strange games fit this list in a different manner. Despite being far from totally realistic, these games know exactly what they are. They push a distinctive visual aesthetic clearly designed by artists with a vision. Characters and sets look like hybrids between comic books and paintings. This is a series where visual realism takes a backseat to a story with player-driven choices, similar to Detroit: Become Human. So, once again, the realism doesn’t matter, and the games will remain timeless due to their art direction. It’s worth noting that the first game in the series did get a slight remaster in 2022.
Going by its trailer (I haven’t played it yet), Lost Records: Bloom & Rage by Don’t Nod, the original Life is Strange studio, likely checks all the same boxes.
The original Mirror’s Edge, released in 2008, is a rare pre-2010s title that’s worth a mention. Once again, because of scripted settings, linear gameplay, and baked-in lighting, this game is timeless.
Note that none of the games in this list are photorealistic to the point where they’ll fool anybody outside of still images, except perhaps Alan Wake II. Yet I doubt they’ll alienate many players even several years from now, purely because of their art direction.
These are just a select few games that look “good enough”, dare I say, forever.
Armchair Praise For Modern VFX
There is an unjust rhetoric in online film discussions against the use of VFX. Apparently, any use of “see jee eye” is noticeable and actively makes movies look worse, and we should go back to the good old days of location shooting and in-camera effects, relying on filmmakers’ creativity to make movies instead of pressing buttons on a computer to achieve easy-mode filmmaking.
Do any amount of research, though, and it’s easy to learn that this couldn’t be farther from the truth. VFX is everywhere and very often hard to pinpoint.
If you’re out of the loop and can remember the bad VFX you saw in that one movie, this can be hard to believe. That’s because poor VFX is very noticeable and can often look worse than in-camera effects. With the general audience, this gives the VFX industry a bad rap.
Limit Of Fidelity 3: VFX Often Indiscernible
We live in a time when VFX has been perfected to such a degree that it hides in plain sight. A lot of VFX used in films and TV shows is visually indiscernible and used for the subtlest of touch-ups. According to Vulture, over 90% of Hollywood(?) films released in 2022 required some VFX.
Here are some examples of subtle VFX techniques that fly under the radar.
- Sky replacements are considered a basic skill.
- Set extensions to avoid having to build everything physically, including areas the audience won’t see up close. This is one of many things that the marketing department for Barbie famously downplayed and lied about.
- Digital doubles, a.k.a. fully animated humans, are employed fairly often. This video from Vox brilliantly explains the concept using superhero films as examples. These hardworking fake individuals are employed
- in dangerous situations,
- for complicated stunts,
- to bridge missing footage between two shots that need to be spliced together (like in 1917),
- to conjure twins out of a single actor (like in The Social Network),
- to clean up mistakes and save production costs (like Jeremy Renner’s broken arms in Tag, or possibly the gloves in Skyfall).
Tentpole Hollywood movies with giant budgets obviously go all out with this. There are countless examples where CG VFX was employed that nobody in the general audience would bat an eye at, even ones who are (over-)confident in their abilities to tell VFX from reality.
- The copious amounts of fake broken glass in John Wick: Chapter 3 – Parabellum, as broken down (pun intended) by Ian Failes with befores & afters - part 1, part 2.
- The nearly fully animated bridge sequence in Spider-Man: No Way Home, created by Digital Domain, in concert with the on-set filmmakers, and described here, also by Ian Failes with the help of VFX supervisor, Scott Edelstein.
- Some nearly fully animated sets and sequences and significant touch-ups in Ford v Ferrari, a movie that also had a lot of impressive on-set photography.
- Gemini Man’s de-aged Will Smith, so realistic (under varied lighting, too) that I’d bet money people unfamiliar with the actor wouldn’t detect it without being told.
- All these examples from David Fincher’s productions, compiled together by kaptainkristian on YouTube.
If you are somebody who is confident in their ability to pick out use of VFX from a lineup, or just fascinated by all this, please watch this four-part series, “No CGI” is really just invisible CGI. Here are two supercuts to whet your appetite.
Its creator not only covers many more examples, but also calls out studios and filmmakers who shamelessly lie about or downplay the hard work done by VFX artists on their own movies while doing press tours for them.
The only difference between fully animated films and some of these scenes is the artistic intent. Artists at Pixar and Blur Studios want to present an animated look to their 3D animations, while the masterminds behind the above examples go for a photoreal look.
What Remains
Stones Yet Unturned
This is an essay of praise for what many experts have achieved in their field, not one of dictation.
As we have established earlier in this essay, I’m not currently a professional in any of the industries I just wrote about. I am merely a member of the audience, albeit one who observes, researches, and has endless opinions.
There is a cost to diverting resources and professional expertise between fields. We can’t be laying people off just because there isn’t enough need for higher-fidelity screens or rendered graphics. I am not nearly qualified to say what people in the display industry, game development, or visual effects should or shouldn’t work on.
Nevertheless, I can make predictions, and I can air my hopes and dreams!
In Displays
While I think 1440p for monitors and 4K for TVs approach the limits of visual fidelity (at least to the level most people care about), resolution is not the only thing that defines the motion picture experience.
HDR makes a really big difference on bright enough displays. Wider color gamuts are another component of HDR, even without the high brightness, and one I think many folks sleep on. I think there’s plenty of room for improvement in:
- higher brightness
- crisper HDR
- lower power consumption
- lighter TVs, perhaps, so that they are easier to move and mount
- all at lower prices
Mini-LED is a display tech to keep an eye on; Micro-LED, perhaps more so. Both have their pros and cons, but they promise OLED-level color performance at lower prices and higher brightness. At the same time, OLED panels are rapidly getting cheaper, so it’s hard to say which will win out.
It’s highly unlikely that gimmicks such as curved displays (ultrawide monitors excepted), 3D, or VR will ever become mainstream, although niches and off-shoots will always exist. We have seen time and time again that most people don’t want to put hardware on their faces and bodies, or even get up from their seats, to consume this type of entertainment.
At this point, I’m confident that humanity figured out the ideal medium for the presentation of moving pictures over a hundred years ago, and it’s the flat rectangle.
In Graphics Technologies
The world of graphics rendering is vast, and I’m not qualified enough to have specific hopes. Still, I’d like to point out some relevant trends in this industry.
Pending any moonshots or breakthroughs, we seem to be hitting another limit - how much compute we can eke out per watt of power. This is not only my armchair observation. Some people, including Jensen Huang of Nvidia as of 2022, opine that Moore’s Law is dead, meaning compute power won’t increase as quickly as it did in the past.
Over the last half decade, Nvidia and their competitors appear to have directed a large portion of their efforts towards offloading graphics computation to software and machine learning, instead of traditional hardware grunt.
For the less technical readers: in the world of game rendering, this is creative problem-solving to sidestep increasingly tough challenges.
For nearly the entire history of consumer gaming, new graphics cards have improved upon previous generations by making the hardware more powerful and/or more power-hungry. Now, these companies are finding themselves climbing steep slopes to continue this trend. So, instead, they are squeezing out perceptible realism (what this essay is about) by using machine learning for upscaling and sharpening each frame. Even newer technologies fabricate entirely new frames in certain games to make gameplay feel smoother. These techniques rely on different types of computation, and the idea is that there is more room for development there - a less steep slope, for now - compared to pushing out raw hardware grunt like before.
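To put rough numbers on the appeal of upscaling, here’s a back-of-the-envelope sketch (my own arithmetic, not any vendor’s published figures): rendering internally at 1440p and reconstructing a 4K output means natively shading well under half of the final pixels.

```python
# Back-of-the-envelope math: how much native shading work an upscaler skips
# by rendering internally at a lower resolution and reconstructing the frame.
# (Illustrative only; real upscalers vary their internal resolution.)

def pixels(width: int, height: int) -> int:
    return width * height

native_4k = pixels(3840, 2160)       # target output frame
internal_1440p = pixels(2560, 1440)  # a common internal render resolution

print(f"4K output frame:  {native_4k:,} pixels")
print(f"1440p internal:   {internal_1440p:,} pixels")
print(f"Natively shaded:  {internal_1440p / native_4k:.0%} of the output")  # ~44%
```

The remaining detail is inferred from previous frames and motion data rather than shaded from scratch, at least in the temporal upscalers I’m aware of.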
I don’t have strong opinions on this one way or another. Some online people scream “fake frames!” I do think these techniques are here to stay, and for good reason. The deep learning tech that I’ve seen applied in several games so far is good. In normal gameplay conditions, technologies like modern DLSS (2.0 and newer) only improve the experience for nearly all gamers, and are hardly perceptible if they just play the game instead of staring at pixels.
Some companies are experimenting with generative AI that creates endless virtual worlds. I think this application of gen-AI is too niche for mainstream use across the whole games industry. Still, it makes me think of the possibility of more gen-AI usage at render time. The current applications are essentially limited to stretching and interpolating games. Could gen-AI be reined in enough to reliably offload some corner cases to render time? Imagine games not having to package space-consuming 3D models and textures for every soda can and shrub, and instead the user’s computer filling in the details if and when they zoom into them.
In Film
A major advantage that artists and technicians aiming for realism in film VFX have over their counterparts in games is render time. Realism in film CGI will always outpace games because film effects artists can take their sweet time rendering each frame on server farms.
We may have perfected water simulation (see Avatar: The Way of Water, 2022) and shiny car bodies (see Transformers, 2007, and numerous car racing films since). Yet some things continue to be ever so slightly imperfect.
- Skin is incredibly hard to fake. We can chalk this up to the uncanny valley and our lizard brains. Yet, Deadpool & Wolverine had some remarkable fully-CG skin effects, despite character Cassandra Nova’s uncanny valley-triggering superpower. I was amazed they managed to pull off animated skin to this level of photorealism. See this behind the scenes breakdown. And of course, there’s Gemini Man mentioned earlier in this essay.
- Lighting remains very hard to fake, and I’m not sure when we’ll see significant improvement in this area. It remains very hard to simulate crisp shadows with the right amount of falloff on complex textures, like fabric. Basically, if a scene involves flashes (like gunfights) or complex color-changing lighting, it has to be planned out and executed on set, or the VFX department is going to have a hard time. Perhaps the right type of machine learning will bridge this gap one day.
- For some reason, fake fire effects still seem difficult. This is a common pet peeve of Corridor Crew’s “VFX Artists React” YouTube series; they routinely point out how on-screen flames (which look totally fine to me) often don’t compare to real fire captured in-camera. Practical fire in action scenes causes cameras to struggle with exposure. This is a detail often missing in scenes with fake fire. Occasionally, we get films with great effects in other departments, but terrible fire effects, like Black Widow from 2021.
In Games
Games have room to improve in similar ways to film. However, they will always lag behind film in realism. Films can pre-render and freeze every frame before release, but this is not viable for interactive experiences. Games have to be programmed so that home computers and game consoles can calculate dozens of frames every second while responding to player input. So games simply must lag behind in photorealism (at least during gameplay).
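The gap in per-frame budgets is easy to illustrate with a quick sketch. The one-hour offline render time below is a hypothetical figure I picked for illustration, not a measured industry average.

```python
# Frame-time budgets: a 60 fps game versus a hypothetical offline film render.
game_fps = 60
game_budget_ms = 1000 / game_fps                  # ~16.7 ms to produce a frame live
offline_render_hours = 1                          # assumed per-frame render time on a farm
offline_budget_ms = offline_render_hours * 3600 * 1000

print(f"Game frame budget:    {game_budget_ms:.1f} ms")
print(f"Offline frame budget: {offline_budget_ms:,} ms")
print(f"Ratio: roughly {offline_budget_ms / game_budget_ms:,.0f}x more time per frame")
```

Even if that offline estimate is off by an order of magnitude, the asymmetry is enormous.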
Same as in film, skin is very hard to get right. Even games released in the 2010s, like Naughty Dog titles, have impressive skin and hair. Still, I doubt a single game exists that would pass blind tests in this area. Film has already crossed this boundary.
There is one main thing that games fall short on, in my opinion. This aspect is not easily comparable with film. It has to do with the video game medium’s unique quality of there being a “curtain” that the player can peek behind, even if the game designers don’t want them to.
Although many games look better than real life because of the copious amounts of human intent poured into every corner, they still can’t convey a holistic feeling of realism.
Many games do a great job capturing specific experiences from reality very well. Grand Theft Auto titles capture action movie-style shootouts and driving. Games like Battlefield emulate the feeling of being in a war action movie. The Watch Dogs series is a great Hollywood hacking simulator. Yet all these games only capture narrow slices of reality, stopping at boundaries beyond which any additional effort wouldn’t yield meaningful improvements for most players. Games that benefit from detailed alleyways have detailed alleyways (see Sleeping Dogs), but perhaps not detailed rooftops (see Marvel’s Spider-Man). These games have to be designed to discourage players from organically wanting to peek behind the curtain. Sometimes, as in highly detailed open-world experiences, this type of game design is tough, but necessary.
I’m curious if improvements in development tools and increased power in home computers and game consoles will eventually yield wider coverage of reality. Still, to make that happen, there must be economic pressure that pushes studios. Does that even exist?
Wrapping Up
When I started writing this essay, I didn’t intend it to become such a wall of text.
Social media silos can be all doom-and-gloom about the use of CGI and VFX, and about the games industry. As someone who consumes a lot of this media and considers the trends, I disagree. There is a sea of negativity, and it’s not limited to these hobbies. Be it intentional engagement-farming or learned behavior, so many people forget or ignore the plenty of gems we get from both the film and games industries every year.
Every month, there is an abundance of both technical innovation and respectable art in both mediums. I’ve been on record about games having come so far that I’d like to see studios spend more resources on telling more stories while reusing established foundations, instead of focusing so much on new tech development for each sequel.
Let’s salute the work of seldom-lauded individuals from dozens of fields of expertise who have added bits of delight to the world in recent decades.