"Even running diagnostics on that pesky printer that never cooperates. You know which one I'm talking about, all of them!" Actually the story of my life in IT...
You see motion above 20 frames per second, but it depends how far things have moved and where they are. The centre of your vision is the slowest. It works well on some things and terribly on others. Parallax movements seem worst, so on a car or aircraft simulation the windscreen pillars completely break it. But maybe they could apply it to the far field and just composite the cockpit afterwards; after all, they aren't moving.
I am wondering if game engines will have something like localized rendering. Like, if the player is standing still, why not just render whatever parts of the frame are due to change, like moving characters? Either way, I definitely like this technology. It won't give you more info on other players' movement, but just making the screen move more smoothly still helps you aim, because you get more feedback on placing your reticle on the other player.
Is there any reason it looks bad in the video, or is the screen tearing happening on the monitor too? Because if they were pro gamers, they would have noticed that.
Ok, hear me out: what if we combined checkerboard rendering and interlaced rendering at half the res, then upscaled it with FSR 2.1? So basically render a whole frame with 12.5% of the pixels, or even fewer!
Or it would be good on the current Steam Deck since, ya know, it just runs the software in your library. If games just have to support it, then the current Steam Deck could already start benefiting from it. And if it can be added at the compositor or driver level at all, I don't see why it couldn't be updated to add support here.
As one of the top 250 VRChat players, I remember when async was new tech in its buggy years, but new people take for granted HOW MUCH of a massive difference it makes nowadays. I've been with VR since its consumer inception and it's interesting to see how things have shaped up.
I've seen a couple of people go "oh, I need to turn it off in this" and then forget that motion smoothing is not the same thing, and in SteamVR it's quite a hidden option that resets every session.
Technically, the GPU reprojection is "rendering" the frame at 240fps, but the content being fed in to be reprojected is only updated at 30fps. You even show that it is rendering to a texture internally and then rendering a single surface (two triangles) with that texture on it. As long as the content (texture) update rate is above human flicker fusion rate, you might never notice. This falls apart at the edges of moving frames when you whip the mouse around, but if instead of rendering full resolution at 30 fps, you sacrifice 6 of those frames to overscan/oversampling, then your 20% reduction in update rate could be used instead to render 20% more pixels (so you don't have to copy from the edges so aggressively). It would resemble what GyroFlow does to stabilize video, but in 3D games, it would smooth any scene where your camera remains stationary (e.g. viewing 360° photos) and also in low-action scenes. Fast movement in the scene will not update at the same rate that you look around, so while this fixes a lot of motion sickness, I don't think it would help as much with racing (incl. driving, flying simulators) or fighting games. Put more simply, I don't think the static scene (and walking around slowly) was a good representation of the overall effectiveness of this technique. More varied examples (than a small, unmoving sandbox) would be needed.
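A rough, non-GPU sketch of that decoupling (my own illustration with made-up numbers, not anything from the actual demo): the scene renders into a texture at 30 fps, while a cheap reprojection pass presents at 240 Hz using whatever texture is newest plus the current camera pose.

```python
RENDER_HZ, PRESENT_HZ = 30, 240   # hypothetical scene-render and present rates
DURATION_S = 0.05                 # simulate 50 ms

# timestamps at which a new "real" scene frame (the texture) becomes available
scene_frames = [i / RENDER_HZ for i in range(int(DURATION_S * RENDER_HZ) + 1)]

for i in range(int(DURATION_S * PRESENT_HZ)):
    t = i / PRESENT_HZ
    src = max(f for f in scene_frames if f <= t)   # newest finished scene frame
    print(f"present at {t * 1000:6.2f} ms  reprojects frame from "
          f"{src * 1000:5.2f} ms  ({(t - src) * 1000:5.2f} ms stale)")
```

The camera pose used for the warp is always fresh, which is why it feels responsive, but anything baked into the texture (animation, other players) can be up to ~33 ms stale, which matches the caveat about fast scene motion.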
12:40 - I use an LG 60Hz 21:9 LC-Display driven by an RTX 2080 Ti (yeah, I know, not the best combo). Anyway, just because my display can't show more frames per second doesn't mean I can't benefit from higher FPS while gaming. In fact, I disable V-Sync and set my FPS cap to 119.88 FPS. This results in much lower latency and better response in games, and also eliminates tearing for me. I mean, it feels like playing on a 120 Hz display, but without owning a 120 Hz display.
They need to add this to Viking: Battle for Asgard... I've still been trying to find a way to play that game, given that it's locked at 30fps, with a reasonable outcome that doesn't feel like a sluggish mess.
I've used this to fix black bars on Shadow PC inside the Oculus; using this method on desktop is genius. We need to push this to game devs immediately.
From an engine developer perspective I do have a more critical stance on this, as there are a few oversights no one seems to talk about. On the one side, things like DLSS 3 get pixel-peeped to the max and every wrongly predicted pixel is taken as a flaw, but here we treat far more drastic image errors as not dramatic. Also, the sample provided, with its flat colors, does not do justice to the amount of visual artifacts this generates when moving around.

However, the bigger point here is the proper implementation of this technique. This works perfectly fine while the GPU is not loaded at all and sits idly waiting to render those reprojected frames. But as soon as the GPU is under full load and the reprojection would actually be useful, you quickly get massive frame timing issues reprojecting the frames in time, since you can't guarantee the timeslot you get on the GPU. Async compute pipelines do exist but definitely do not execute on time. Modern engines pre-compute a lot of the draw calls and send them out in command lists to reduce the draw call overhead and achieve that performance in the first place. You cannot easily interrupt the GPU at an arbitrary point to do a reprojection and then continue where it left off; the graphics pipeline state the engine carefully created and sorted (so it changes as few times as possible) would be lost. An analogy would be stopping a newspaper press mid-run to pick out a few examples and trying to start it up again like nothing happened.

VR gets away with this because the actual reprojection gets done on the display device (plus some minor reprojection at the end of the frame just before submitting to the headset). So for this to be available in regular desktop games, extra hardware either in the monitor or the GPU would be required. This would take the last frame and the changed viewport transform and perform the reprojection in the background, while the rest of the GPU is computing the new frame.
Thanks to Ridge for sponsoring today's video! Save up to 40% off and get Free Worldwide Shipping until Dec. 22nd at www.ridge.com/LINUS
Science!
You should do a video about Xbox Live 1.0. It's back thanks to Insignia, and from a tech standpoint, reengineering Xbox Live to work again on the original Xbox is an interesting feat.
Why has no one mentioned how async proj could improve actual 144 or 240 fps rates? Because it does...
When was the last time gamers asked for a compromised form of rendering, you ask? ... When they wanted DLSS, of course. DLSS is *exactly* that.
I'd love to see this with ReShade
I am so happy that Philip managed to get the message THIS far out. I do fear that this tech might have issues with particles and moving objects and the like, but when you mentioned that we could use DLSS to ONLY FILL IN THE GAPS, my jaw dropped. That's so genius! I really hope that this is one of those missed-opportunity oversights in gaming, and there isn't some major issue behind it not being adopted yet.
@ANtiKz That's async compute, which is completely different than what they are talking about in the video.
@Nikephor My apologies with the "exact same setup"; I believe I was referring to enabling ASYNC and DLSS, as the other guy stated. Still, ASYNC has been an option for around a year. You can add the setting under the system settings of a game's config in Lutris.
There are obvious caveats with scenes that have a lot of movement in them for this particular implementation, but remember that this kinda stuff has been done on VR games for a while now, there are many ways to improve it beyond what's shown here.
Or you render a slightly higher-res image, crop into the middle, and use the outside pixels as filler. You could even render the outside stuff less accurately, like only every 4th pixel, and then approximate the rest.
I was pretty sure your display had to support async reprojection or something. Async reprojection is better than having low framerates, but not better than having normal high frame rates. VR headsets have different kinds of motion interpolation, I think, and some outright don't support it.
Not mentioned in the video: you can render frames at a slightly higher FOV and resolution than the screen, so that there's some information "behind" the monitor corner.
Won't save you from turning 180 degrees, but it will fix most of the pop-in for a very slight hit on performance.
@Ben Hur That's what foveated rendering is.
You could even render the stuff outside the screen at an even lower resolution, you wouldn't notice that much since you'd only see it while in motion...
@Just Some Dinosaur Careful... thats a [Lvl. 163] PC Master-Supremacist, the bane of mobile, console, and vr gamers.....
@L Y R I L L Brainlet take
@Martin Krauser asmh
Philip is revolutionising the way we think about gaming and game dev just with common sense
@Neurotik51 using technology for VR with conventional monitors? I haven’t heard of that before
what? nothing here is new
2kliksphilip and LTT is a crossover I never knew I needed. Make it happen.
@morfgo Nobody here realizes this comment is referring to Valve ignoring 3kliks, not LTT.
There was a controversy where Valve used a bug fix (map fix? idk) that 3kliks made without crediting him in the CS:GO patch notes.
@morfgo They mentioned his username around 10 times during the video. Isn't this crediting?
they both love counter strike it makes sense
@morfgo did you not watch the video? They talked about him several times.
@morfgo you should delete your comment before you get ratio lol
I actually was thinking about writing an injector to apply this to existing games a few years ago, when I saw the effect on the HoloLens. A few limitations though: camera movement with a static scene can look near perfect; however, if an animated object moves, depth reprojection cannot fix it properly, and you would need motion vectors to guess where objects will go, but that will cause artifacts near object edges.
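For anyone curious what "depth reprojection" means concretely, here's a minimal numpy sketch of the standard math (my own illustration, not the injector's or the demo's code): unproject each pixel with its depth, move it by the camera delta, and project it again. Disocclusion holes and animated objects are exactly the parts this can't fix.

```python
import numpy as np

def reproject_depth(depth, K, T_old_to_new):
    """Map each pixel of the last frame to its position under the new camera.

    depth:        (H, W) view-space depth of the old frame
    K:            (3, 3) pinhole intrinsics
    T_old_to_new: (4, 4) rigid transform, old camera space -> new camera space
    Returns (H, W, 2) target pixel coordinates. No hole filling, no moving objects.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    rays = pix @ np.linalg.inv(K).T          # unproject pixels to z=1 rays
    pts_old = rays * depth[..., None]        # scale by depth -> 3D points

    pts_h = np.concatenate([pts_old, np.ones((H, W, 1))], axis=-1)
    pts_new = pts_h @ T_old_to_new.T         # into the new camera's space

    proj = pts_new[..., :3] @ K.T            # project back to pixels
    return proj[..., :2] / proj[..., 2:3]
```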
@ayaya This isn't true. Game objects' positions and movements aren't random. They have set values and formulas that could be co-opted into the prediction algorithm. The problem is that it would be really costly to check every frame. This would have to be built into the game itself in a very clever way.
@Shin A lower FOV might help with that, and since I already use a lower FOV... it might not be too bad.
I wonder if it can be done with ReShade. Would be fantastic for the steam deck.
@Krille k ... Pardon?
Yeah... this is the reason for video RAM. If you render from a center point, all the unused 10+ GB of RAM on the GPU that modern GPUs leave totally unused can hold old W/A/S/D permutations while working on the new vector permutations. It's not a problem; it should be able to tell the new delta/angle to render using floating-point operations.
"He owns a display" - that's gotta hunt him for ever like the "you're fired" for Colton 😂
Love it 😁
do you really mean hunt? or did you mean to say haunt?
@Faremir Oh hi Mark.
lmaoo, fr!!
This came up a lot in that early adopter monitor firmware video. It's a great display though. Happy to have one myself.
@EndstyleGG Yeah, crazy, right? xD
Just think about that: You can see the difference on a CHclip video! Granted it's 60FPS but it's still compressed video streamed from CHclip. I can only imagine how much of the difference you can see live running it yourself. This makes it even more amazing!
An LTT video at 60fps?! My god, the little animations they put in, like the outro card, look so good 👍
yeah doesnt the dummy know 60fps is better than 8k uploads
THIS IS INSANE. I already use this on Assetto Corsa in VR, so I play at 120Hz but it renders 60fps. Such a light bulb moment at the start. I really hope this catches on, because I've already seen first hand how great it is.
It's huge for low/mid-range setups to make games more responsive, but it's also nice for high-end machines, because you'd completely negate the impact of 1% low frames and feel like you're always at your average.
Plouffe's "He owns a display" gag is always going to crack me up.
Mark Rathgeber
Mark
XD
@Lu It's a joke. In one of the previous videos there was a throwaway comment about him being the display guy and he goes "I'm the display guy here. I do all the reviews of displays, I own a display..."
@Lu but his display is not a regular display... It's Plouffe's!
@Lu He bought the Alienware mini-LED one and he's proud that he was one of the first to get it, and now it's a meme.
@Lu But his display is.... *special*
One thing I wondered about when I first saw that video is if the PERCEIVED improvement is good enough that you could lose a couple more frames in exchange for rendering a bit further outside the actual FOV, but at a really low resolution. Basically like a really wide foveated rendering. It would mean the warp would have a little more wiggle room before things started having to stretch.
I’ve never understood why this hasn’t been done before. I’ve thought it should be done since 2016 when I got my VR headset. Like you said, extremely obvious!
You might be able to hide a lot of the edge warping by basically implementing overscan where the game renders at a resolution that's like 5-10% higher than the display resolution, but crops the view to the display resolution. It should in theory be only a very minor frame rate hit since you're just adding a relatively thin border of extra resolution.
@Batuhan Çokmar Yeah that sounds awesome and a much closer approximation to how we actually see the world.
I’m glad I wasn’t the only one thinking this
Yes, definitely. Surprised they don’t do this.
@Carl O That assumes you'd need the same resolution for the overscan. If the game is rendered at 45° FOV at 1440p, render an overscanned area between 45 and 90° FOV at 360p. You don't need a lot of detail, just something to make valid guesstimates within that motion blur until a proper frame fills up the screen.
The magic combo there would be foveated rendering alongside the async reprojection with overscan. The games it would make sense for will inevitably be a case-by-case thing, but the performance gains would be massive.
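Back-of-the-envelope numbers for the overscan idea discussed above (my own sketch, standard pinhole math, hypothetical 90°/2560 px figures): at full pixel density, widening the FOV gets expensive quickly, which is exactly why a lower-resolution overscan band makes sense.

```python
import math

def overscan_width(width_px, fov_deg, overscan_fov_deg):
    # focal length in pixels implied by the base FOV, then the width a wider
    # FOV needs at the *same* pixel density
    f = width_px / (2 * math.tan(math.radians(fov_deg) / 2))
    return 2 * f * math.tan(math.radians(overscan_fov_deg) / 2)

print(round(overscan_width(2560, 90, 100)))   # ~3051 px, i.e. roughly 19% wider
```

So ~10° of extra horizontal FOV at full density already costs roughly 19% more pixels per row; render that band at a fraction of the resolution and the extra cost shrinks accordingly.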
Welp, I think I already know what this feels like without watching this video. Because I've played games before that had unlocked framerates while having character animations stuck at 30 fps. I think it's pretty much that. Yea, it will feel good, but look a little disappointing.
yeah, my comparison was very laggy Gmod physics when props are trying to summon a kraken, all while you can still walk around it. In some games it will not help, but there are so many that I care about where it would.
Asynchronous reprojection is great in VRChat (in PCVR on my relatively old PC), where I usually get 15 fps and often even far less than that! I don't really mind the black borders that much in that case, especially since the view usually tends to go a bit farther than my FOV, so they usually only appear if things are going _extremely_ slow, like a stutter or any other time I'm getting over 0.25 seconds per frame. So perhaps another way to make the black bars less obvious would be to simply increase the FOV of the rendered frames a little bit so that there is more margin. It would mean lower frame rates, but it might be worth it in any case where the frame rates would be terrible anyway.
Imagine async reprojection on the Steam Deck, where Valve already has the software from SteamVR! Battery savings while feeling like 60fps!
I stumbled upon 2kliksphilip’s channels when I was researching how to make maps in Hammer. So glad you guys have mentioned him in multiple videos now!
Here's an idea to improve this: have the game asynchronously render at like 5% (or even 1%) of the original resolution, at the refresh rate of the monitor, and use those frames to fill the gap that isn't able to render at full resolution yet (assuming the full resolution cannot be rendered as fast as the monitor's refresh rate).
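A toy numpy sketch of that compositing step (my own illustration, hypothetical function names): the full-res reprojection is used wherever it has data, and the cheap low-res frame rendered every refresh fills the holes.

```python
import numpy as np

def fill_gaps(reprojected, hole_mask, lowres_frame):
    """reprojected:  (H, W, 3) full-res warp of the last real frame
    hole_mask:       (H, W) True where the warp had nothing to show
    lowres_frame:    (h, w, 3) cheap frame rendered this refresh (e.g. 1/16 res)
    """
    H, W, _ = reprojected.shape
    h, w, _ = lowres_frame.shape
    # nearest-neighbour upscale of the low-res frame to full resolution
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    upscaled = lowres_frame[ys[:, None], xs[None, :]]

    out = reprojected.copy()
    out[hole_mask] = upscaled[hole_mask]     # only the gaps get the blurry data
    return out
```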
I always had a feeling that tech like this is actually the real future of gaming / VR performance. And not just raw rtx 4090 performance.
and it has already existed on Oculus for a while, so that future is already in the past
BuT yOu NeEd ThAt 4090 tO mInE cRyPtOoOoOoOo
@Catmato Ya, but it's not that special anymore. It's PhysX all over again. Eventually they will stop dicking around with the marketing schemes and it will be a standard feature in all GPUs.
@VLPR part of the magic will be cloud streaming. NVIDIA's Cloud XR and Omniverse stuff will make cloud based XR a reality sooner than most realize.
No way bro, Raytracing is the future. Just ask Hardware Unboxed.
Now that's a public interest video! Raising awareness of this technique will certainly go a long way, especially in open source. I hope the manufacturers don't shy away from it out of fear that it would diminish interest in their high-end GPUs.
High-end?
What r u talking abt, they can make AAA game 8k 240fps without 6slot GPU
FINALLY! I've been saying for years that the retina and brain process visual stimuli asynchronously, allowing us to perceive framerates >100 FPS. This kind of compromise is an _EXCEPTIONAL_ means to fill in the space between frames with something "close enough" so that our eyes and brain can get the smoothed-out experience. Reality is asynchronous. _WE_ are asynchronous... Why shouldn't a computer and its video output take advantage of asynchronicity as well!
We need AI upscaling, AI frame generation and Asynchronous edge detection built into a monitor. Everything would instantly look better with no load on the computer. And then support for component video would make retro gamers very happy.
Can you please test this on low-end hardware? So far I get the feeling it only works on devices that already have some spare performance, because either way the GPU has to render something, even though it's less taxing. Maybe find some hardware that can push only ~40 fps and try it on that. I'd really want to see the effect of it.
I know philip will see this and I know he will feel awesome.
You have come a long way Philip. I am proud to be part of your community since your first tutorial videos.
@Charlie more like 14 lol
kliki boy i love you
Love him. His tutorials laid the foundation for my environment artist gamedev job.
Here's to Philip, love his videos on all 3 of his channels
Sadly this is useful in limited scenarios. Objects moving even relatively fast, particles, volumetrics and anything motion blurred or defocused in any way will almost always make this useless.
That's why resolution upscaling became the main way to achieve better performance. In theory this could still help, but engines would need to isolate moving objects and effects to make it work in more scenarios, and that's hard to automate given the amount of them you usually have on screen. It's the same problem as with the frame interpolation in DLSS3. Artifacts will always be there. And they are noticeable.
I commonly lock my fps to something like 20-30 fps as I play RTS, and it keeps my laptop quiet enough that it's not waking people.
This would be very nice if they would render like 75° while the FOV is 70°, or something like that. If you get just a touch more than is currently on-screen, it would be unnoticeable.
All people complained about were the corners, where the tech made a lot of assumptions about what's there. If they added just a bit of rendering around the edges, I don't think they would notice anything changing at all up to 15.
See, this is what I expect from DLSS 3: no increase in input lag.
I also wonder if you could render more on the sides and then crop the image to give a native-res frame; that would eliminate border artefacts more, like they sometimes do to eliminate SSR border artefacts.
I'm very interested in using a form of overscan where you're viewing a cropped in frame of the whole rendered image, so when you're panning your screen around it doesn't have the issue of stretching, unless you pan outside of the rendered frame.
Well, that's as simple as rendering a little more outside of the screen area.
This is probably my favorite type of video from LTT. Highlighting and explaining interesting technology is fascinating.
it's up there for sure
oh wait, time for another balls to the wall computer build! only the third this week. /s
But for real, they've been doing a great job with not doing what I just said
I would be very curious if a hybrid solution would be possible, such as in a fps game, synchronously drawing the environment, but asynchronously drawing the players? I’m sure there’s some limitations involved with that, but it does sound intriguing
They need this on the Steam Deck ASAP. It would make it last much longer with more demanding titles later on
This is so so incredible! I hope this will be the next-gen image helper in all upcoming and older games!
This kind of thing is something I've actually thought about ever since motion interpolation became common in TVs, plus the fact that I'm familiar with 3D (I don't do real-time 3D rendering, only non-real-time). What I was thinking is that having this motion data, depth, etc. should be good enough to do some kind of in-game motion interpolation, except not really interpolation but extrapolation toward a future frame. Even without taking the control input into account, just creating that extra frame from the previous frame's data should be enough to give that extra visual smoothness (basically you'd end up with roughly the same latency as the original FPS). And since it's already working directly within the game, we should be able to account for the controller input and the AI, physics, etc. to create the fake frames with an actual latency benefit: basically the game engine runs at double the rendering FPS and the extra data is used to generate the fake frames.
For the screen edge problem, the simple fix is to overscan the rendering (or simply zoom the rendered image a bit) so the game has extra data to work with. Tied to this is the main problem with motion interpolation and this frame generation/fake frames thing: disocclusion. Disocclusion is when something that was not in view in the previous frame becomes visible in the current frame. How can the game fill this gap when there is no data to fill it with? Nvidia, I believe, uses AI to fill those gaps, and even with AI it still looks terrible. But as people using DLSS 3 have mentioned, you don't really see it in motion, which is actually good news for a non-AI solution: if people don't notice those defects in motion, then using a non-AI technique to fill the gaps (a simple warp or something) should be good enough in most situations. It also wouldn't need that optical flow accelerator; the reason Nvidia uses optical flow is to get motion data for elements that aren't represented in the game's motion vectors (like shadow movement), but in practice that's not important, since most people probably won't notice a shadow moving with the surface motion (rather than with its own motion) in those in-between fake frames.
For a more advanced application, what I'm thinking of is a hybrid approach where most of the frame is rendered at, say, half the FPS and the other half reuses the previous frame's data to lessen the rendering burden. So unlike motion interpolation or frame generation, this approach would still render the in-between frame, just render less of it: render the disoccluded parts, and maybe decouple the screen-space effects and shadows so they render at the normal FPS instead of half. What the game ends up with is alternating high-cost and low-cost frames.
When I first thought about this, AI wasn't a thing, so I didn't include any AI in the process. Since AI is a thing now, some parts could probably be done better with it, for example the disocclusion problem: rather than rendering the disoccluded part normally, it could be rendered with flat textures as a simple guide for the AI to match that flat look to the surrounding image, which might be the faster way to do it.
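A crude numpy sketch of that "extrapolate forward with the engine's motion vectors" idea (my own illustration, no occlusion-aware splatting): push each pixel along its motion vector and flag the disocclusion holes that would need filling by some other means.

```python
import numpy as np

def extrapolate_frame(prev_frame, motion_px, steps_ahead=1.0):
    """prev_frame: (H, W, 3) last rendered image
    motion_px:     (H, W, 2) per-pixel motion in pixels per frame (the same kind
                   of motion vectors engines already produce for TAA)
    Returns the forward-warped frame and a mask of disoccluded pixels."""
    H, W, _ = prev_frame.shape
    out = np.zeros_like(prev_frame)
    filled = np.zeros((H, W), dtype=bool)

    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    xt = np.clip(np.round(xs + motion_px[..., 0] * steps_ahead), 0, W - 1).astype(int)
    yt = np.clip(np.round(ys + motion_px[..., 1] * steps_ahead), 0, H - 1).astype(int)

    out[yt, xt] = prev_frame[ys, xs]   # scatter; collisions resolved arbitrarily (no depth test)
    filled[yt, xt] = True
    return out, ~filled                # ~filled marks the disocclusion holes
```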
Interpolation for the future is called extrapolation
I'm so happy Phil put a spotlight on this concept, and I'm even happier that a channel like LTT is carrying that torch forwards.
I tried to build something like that demo a few years ago, but I was trying to use motion vectors + depth to reproject my rendered frame, which I never got to work correctly. In my engine I rendered a viewport larger than the screen to handle the issue with the blackness on the edges, and then was going to use tier 2 variable rate shading to lower the render cost of the parts beyond the screen bounds. But VRS was not supported in any way on my build of MonoGame, which is what my engine was built upon, so that was another killer for the project.
I am so glad that Phil popularised the idea, and it's awesome that someone else managed to get something like this working. How he did it in one day I will never know; I spent like 3 weeks on it and still failed to get it working correctly. I should find my old demo and see if I can get it compiling again.
GSYNC for me was a real game changer and a jaw-dropping moment when I first experienced it with the exact same rest of the hardware. It worked so much better than VSYNC ever could, so much smoother especially when FPS dipped only momentarily, with the only downside being that at very low FPS the refresh rate naturally also goes down into the teens, and that leads to super-juddery input lag from hell.
So if GSYNC and async reprojection could be combined, with async taking over at those lower FPS, or at least supporting GSYNC by stopping the syncing of refresh rate to framerate, that might make for a best-of-both-worlds experience. Could be pretty cool, especially for mid-range gaming rigs.
Take this as a compliment: I love how LTT has now transformed more into a Computer Science/Electronics for Beginners channel than just another "Hey, we got a NEW GPU [REVIEW]" channel.
It's why I keep watching them, I got tired of watching reviews of hardware I can't afford/don't really need yet. Though my VR rig is getting very tired.
Well... They had covered everything on that aisle...
Ahhh, so this is how Quest 2s run when linked to a PC using Air Link. I always wondered why I would sometimes see stretching or stacked frames if the connection was weak.
As a huge VR fanatic, seeing the tech that makes standalone VR possible put to use on a flat screen game is amazing!
I asked in a VR subreddit about a year ago why nobody is making Async for computer games and people gave me shit about it like "wouldn't work that way, the idea is stupid, just not possible, etc." so I gave up. Glad I asked the right people
@FreeDooMusic Why would anyone need more than 640K.....
It works in VR because your head moves independently of your gun. Camera-attached stuff like the HUD and weapons can't work with it without layers (which VR supports, though they're usually just used for the HUD, not weapons, since weapons won't react to the lighting correctly during the move without re-rendering them). Notice that in the demo there is no weapon in use by the character.
@InfernosReaper Being an expert doesn't help much either. The problem is nicely described in "Technology Forecasting: The Garden of Forking Paths" by Gwern
@InfernosReaper If a person speaks in absolutes then ... THEY ARE A SITH!!! 😱
@zyxwvutsrqponmlkh Understood now.
But quoting something posted, without saying why, can lead to interpretations as to why. Posting something as not your own words says nothing about the intent behind posting it.
Leave anything up to interpretation, and people can only guess about it. Unfortunately.
Would love this in games. Even if I can run at 90fps and make it feel like 140 that would be amazing.
I hope this guy does something with this tech quickly, before some "company" incorporates this tech into theirs and calls it DLSS 3.0
I can't wait for Nvidia to take the idea and use it exclusively on the 40 series! What a great new feature!
It's all well and good to make the video feed _smoother,_ but the information we're _not_ getting is the most important - smooth up-to-the-millisecond data on the position and movement of _other_ objects (esp. people and projectiles).
This is exactly why VR games can often _feel_ smooth yet the real gameplay be janky because we aren't actually getting smooth interaction with in-game assets.
In multiplayer FPS especially, there is simply no substitute for eliminating as much disruption to real, accurate feedback as possible, and at high levels of play the jankiness of anti-lag, prediction, ping compensation, or actual network latency are all already nearly unbearable on their own. While there may be use cases for reprojection (quite notably VR), for the most part I'd rather not have games introducing error just to make the game _feel_ smoother than the actual gameplay _is._
It's just one more lie to subconsciously deconstruct on the path to intuitive gameplay. And the fact that this _is_ (mostly) just a placebo would have been much clearer in the trials if there was actually something in the scene besides the player camera moving - or even just a gun to fire.
This makes me wonder if you could use a low-latency eye tracker to do foveated rendering in 2D games. That could get a large performance boost with extremely few compromises. But maybe it's only useful in VR due to the FOV-to-screen ratio.
Seems like rendering outside of the FOV further would help eliminate a lot of the problems along the edges of the view when using normal-speed movements and panning.
If the game was rendered with a slight overscan area so the GPU had more information about the objects at the edges, and the image was then cropped back to the display resolution, I bet those noticeable edge smears probably wouldn't even appear.
The biggest issue I can think of... multiplayer and "tick" rate may prevent it from being used to upgrade old games. Some game engines would take significant redesign to make this worthwhile -- smoother is great but it doesn't give you more data than the system is processing. If the game is still limited but now also has to run an extra step, it might not make any difference-- higher level players use muscle memory for finer movements anyways and once you surpass the game engine's tick rate, may prefer low latency to high frame rates
I thought this was VR when I first clicked because VR has had this since 2017 at least. Awesome to see this finally make its way to the flatscreen :D
This is pretty interesting, even if the DLSS/FSR seems like the better alternative.
I can imagine this being used on old games that have a forced framerate, be it 30, 60, or even a weird number like 25 (Nintendo 64 emulation could count).
Just think about it: you could play Red Alert 2 at 30 FPS but have it feel like 60+ FPS instead. It would be amazing!
3kliksphilip is a CS legend; how he isn't paid by Valve to improve their game, I don't know.
Thank you for shouting out 2Kliks on such a huge platform, I know he already has a base, but, obviously nothing compared to your base.
So that's what the black border in VR games is about! Great technology, I love this kind of stuff!
Thanks for driving tech into usability :)
Async reprojection and all the other implementations of it, such as motion smoothing for VR, have always had one major flaw, and that is rendering stationary objects against a moving background. The best examples are driving and flying titles such as MSFS and American Truck Simulator. The cab/cockpit generally doesn't change distance from the camera/player view, so when the scenery goes past the cockpit, the parts of the cockpit exposed to the moving background start rippling at the edges.
This is one of the reasons AR is not used much in VR anymore, and also the reason motion smoothing is avoided as well. And besides, we are talking about two different technologies, DLSS vs. async reprojection: one is designed to fill in frames and the other is an upscaler. Not really an apples-to-apples comparison!
I'm amazed this tech hasn't been implemented in more games.
One thing strikes me though: what about in-game animations (like character movement), or even cutscenes for that matter?
Those aren't tied to user controls like mouse and keyboard. So... they would still be perceived as 10 fps, right?
Please keep updates on this going, need to know when I can make my rx580 into a rx1160.
"...or on a Steam Deck 2?"
Steam deck runs quite a bit of open source software. There's not much stopping someone from building it into Proton.
It's kinda interesting how, as Moore's law comes to a halt, we've been moving away from hardware improvements to just absolutely cranking the software trickery to 11: foveated rendering, async spacewarp, DLSS and such.
Like, in 50 years, computers as we know them today won't be that much more powerful; it'll be the sheer improvement of the code we make them run that'll make them a multi-generation jump from today's probably stone-age way of software engineering.
This would be a huge asset to game developers if it catches on.
I also wonder whether GPUs will spit out frames in 2 layers, one that gets reprojected, and the other that doesn’t.
So is this technology heavier on the CPU? I assume reprojecting an existing 2D frame is less work than rendering a new frame from the 3D scene.
2kliksphilip is an unsung hero; his DLSS coverage is also some of his best content.
@HonoredMule just wait til two pump chump philip comes thru
ouch, dont be so mean
@Elise 3klicksphilip is just more work. Both will be _automatically_ obsolete when 0clicksphilip releases.
2kliksphilip had a good idea, but 3kliksphilip is more advanced in every way!
Personally super excited to see 2kliksphilip's video referred to in an LTT video. A lot of Philip's content is really high quality, especially the videos where he covers DLSS and upscaling, as mentioned earlier. Can't recommend checking it out enough!
When the testers started introducing themselves, in my mind I said "hi, I'm blank and I own a computer". And then there's Nicholas, and you really did put "owns a display" below him, which is why I thought of that sentence in the first place.
I will always have to think of that line when someone lists their qualifications.
This is amazing; this had better become the norm in 2D games!
They could add this to the Steam Deck with just a software update. They already have the ability to let you toggle FSR 1.0 from outside of the game. I don't see why they couldn't add this to their Gamescope compositor.
14:02 - My TV supports interpolating frames, so I can play Nintendo Switch games at 60 FPS instead of 30 FPS. It also has fewer artefacts than DLSS 3.0, because most of those games are cartoon-looking, with clear edges, borders, outlines, etc., which makes interpolating easier.
The main issue with these workarounds is that they depend on the Z-buffer, so they break down pretty quickly whenever you have superimposed objects, like something behind glass, volumetric effects or screen-space effects.
You technically only need the depth buffer for positional reprojection (e.g. stepping side to side). Rotational reprojection (e.g. turning your head while standing still) can be done just fine without depth, and this is how most VR reprojection already works, as well as the electronic image stabilization in phone cameras (they reproject the image to render it from a steadier perspective).
It might sound like a major compromise, but try doing both motions and you'll notice that your perspective changes a lot more from the rotational movement than from the positional one, which is why rotational reprojection is much more important (although having both is ideal).
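For anyone curious why rotation doesn't need depth: for a pure camera rotation, old and new pixel positions are related by a single homography, so the whole frame can be re-warped without per-pixel depth. A minimal numpy sketch under that assumption (toy intrinsics and a hypothetical 2° yaw; the exact mapping direction depends on your rotation convention):

```python
import numpy as np

def rotation_homography(K, R):
    """Pure-rotation reprojection: pixels in the two views are related by
    H = K @ R @ inv(K), with no per-pixel depth required."""
    return K @ R @ np.linalg.inv(K)

def reproject_point(H, x, y):
    """Warp one pixel from the last rendered frame into the rotated view."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Toy intrinsics for a 1920x1080 view (~90 degree horizontal FOV), 2 degree yaw turn.
K = np.array([[960.0,   0.0, 960.0],
              [  0.0, 960.0, 540.0],
              [  0.0,   0.0,   1.0]])
yaw = np.radians(2.0)
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [         0.0, 1.0,         0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
print(reproject_point(rotation_homography(K, R), 960.0, 540.0))  # centre pixel shifts ~34 px
```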
Ya, that sounds like it could be a big issue...
Static objects look great in ASW when panning the camera around. Slow objects are OK-ish. The issue is fighter jets zooming across your screen. Yuck.
Whoa, now I understand why I can still move my head around the last rendered frame when a VR game (e.g. Pavlov) crashes!! Super cool.
...and wouldn't it also be a game changer if car interfaces used this? Imagine a responsive screen in a car :o :D
Undoubtedly a very useful technology, although it would have been a more relevant demo if some objects on screen were actually moving. This method works best when it's mainly the player that is moving and not much else. If there are, for example, enemies running across the screen, the "real" fps needs to be a lot higher to be convincing.
Yeah, this only decreases input lag, which makes it feel faster, but that's all. I tried async timewarp in VR and it felt disgusting, even more unplayable.
"Even running diagnostics on that pesky printer that never cooperates. You know which one I'm talking about, all of them!"
Actually the story of my life in IT...
You can see motion above 20 frames per second, but it depends on how far things have moved and where they are; the centre of your vision is the slowest. It works well on some things and terribly on others. Parallax movement seems worst, so in a car or aircraft simulation the windscreen pillars completely break it. But maybe they could apply it to the far field and just composite the cockpit afterwards; after all, it isn't moving.
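A tiny sketch of that compositing idea, assuming the game could hand over the cockpit as a separate camera-locked RGBA layer (the array names are hypothetical, nothing from the video): reproject only the far field, then alpha-blend the untouched cockpit back on top.

```python
import numpy as np

def composite_cockpit(reprojected_world, cockpit_rgba):
    """Blend a camera-locked cockpit layer over the reprojected far field.
    `reprojected_world` is HxWx3, `cockpit_rgba` is HxWx4, values in [0, 1]."""
    alpha = cockpit_rgba[..., 3:4]  # cockpit coverage per pixel
    return cockpit_rgba[..., :3] * alpha + reprojected_world[..., :3] * (1.0 - alpha)
```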
I'm salivating over this tech. I would've killed for this when I was stuck gaming on a laptop iGPU.
The biggest problem could be solved by rendering some overscan so the edges of the screen aren't right at the edge of the render
What an interesting discovery. I don't think that $600-1500 GPUs drawing 350+ W should be the future of gaming.
I wonder if game engines will get some kind of localized rendering. Like, if the player is standing still, why not just re-render the parts of the frame that are due to change, like moving characters?
Either way, I definitely like this technology. It won't give you more info on other players' movement, but just making the screen move more smoothly still helps you aim, because you get more feedback while placing your reticle on the other player.
I think what I'd like most from this is reducing jitter, rather than having it active 100% of the time.
Love seeing that Radeon on the test bench at the end, instead of a GeForce card. 😉
is there any reason it looks bad in the video, or is the screen tearing happening on the monitor too?
because if they were pro gamers, they would have noticed that
OK, hear me out: what if we combined checkerboard rendering and interlaced rendering at half the res and then upscaled it with FSR 2.1? Basically rendering a whole frame with 12.5% or even fewer of the pixels!
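For what it's worth, the 12.5% figure checks out if "half the res" means half the total pixel count and each trick independently halves what's shaded; a quick back-of-the-envelope (my reading of the comment, not anything from the video):

```python
checkerboard = 0.5  # shade every other pixel in a checker pattern
interlaced = 0.5    # shade every other line per frame
half_res = 0.5      # halve the total pixel count before upscaling

print(checkerboard * interlaced * half_res)  # 0.125 -> 12.5% of pixels actually shaded
```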
Or it would work well on the current Steam Deck since, ya know, it just runs the software in your library. If games just have to support it, then the current Steam Deck could already start benefiting from it. And if it can be added at the compositor or driver level at all, I don't see why it couldn't be updated to add support here.
As one of the top 250 VRChat players, I remember when async was new tech in its buggy years, but new people take for granted HOW MUCH of a massive difference it makes nowadays. I've been with VR since its consumer debut and it's interesting to see how things have shaped up.
I've seen a couple of people go "oh, I need to turn it off in this" and then forget that motion smoothing is not the same thing, and in SteamVR it's quite a hidden option that resets every session.
This feels like a self-own
Could this be game changing for cloud gaming? Lower latency response.
Technically, the GPU reprojection is "rendering" the frame at 240fps, but the content being fed in to be reprojected is only updated at 30fps. You even show that it renders to a texture internally and then draws a single surface (two triangles) with that texture on it. As long as the content (texture) update rate is above the human flicker fusion rate, you might never notice. This falls apart at the edges of the frame when you whip the mouse around, but if instead of rendering full resolution at 30 fps you sacrificed 6 of those frames to overscan/oversampling, that 20% reduction in update rate could be spent rendering 20% more pixels (so you don't have to stretch from the edges so aggressively). It would resemble what GyroFlow does to stabilize video, but in 3D games it would smooth any scene where your camera stays mostly still (e.g. viewing 360° photos) and also low-action scenes. Fast movement in the scene won't update at the same rate that you look around, so while this fixes a lot of motion sickness, I don't think it would help as much with racing (incl. driving and flying simulators) or fighting games. Put more simply, I don't think the static scene (and walking around slowly) was a good representation of the overall effectiveness of this technique; more varied examples than a small, unmoving sandbox would be needed.
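A minimal sketch of that scheduling split, assuming the three callbacks (scene render, reprojection/present, pose read) are placeholders for whatever the engine or compositor actually provides; none of these names come from the video.

```python
import time

CONTENT_HZ = 30    # how often the full scene is re-rendered into a texture
DISPLAY_HZ = 240   # how often that texture is reprojected and presented

def run_loop(render_scene_to_texture, reproject_and_present, read_camera_pose):
    """Expensive rendering runs at CONTENT_HZ; every display refresh re-samples
    the latest texture with the *current* camera pose for low perceived latency."""
    texture, last_content = None, 0.0
    while True:
        now = time.perf_counter()
        if texture is None or now - last_content >= 1.0 / CONTENT_HZ:
            texture = render_scene_to_texture(read_camera_pose())
            last_content = now
        reproject_and_present(texture, read_camera_pose())
        time.sleep(max(0.0, 1.0 / DISPLAY_HZ - (time.perf_counter() - now)))
```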
12:40 - I use an LG 60Hz 21:9 LC-Display driven by an RTX 2080 Ti (yeah, I know, not the best combo). Anyway, just because my display can't show more frames per second doesn't mean I can't benefit from higher FPS while gaming. In fact, I disable V-Sync and set my FPS cap to 119.88 FPS. This results in much lower latency and better response in games, and for me it also eliminates tearing. It feels like playing on a 120 Hz display without owning one.
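Rough numbers on why a cap above the refresh rate can still help, using a deliberately simplified model (no engine-specific frame queuing): the frame the 60 Hz display scans out is at most one render interval old.

```python
def max_frame_age_ms(render_fps):
    """Worst-case staleness of the newest finished frame at scanout time,
    in a very simplified model with no extra buffering."""
    return 1000.0 / render_fps

print(round(max_frame_age_ms(60), 1))      # 16.7 ms between fresh frames at 60 fps
print(round(max_frame_age_ms(119.88), 1))  # 8.3 ms at the ~120 fps cap
```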
3:08 - this is indeed the smoothest 0fps I've ever seen
I feel like this isn't the fancy thing you think it is; once the models are more complex or the textures higher-res, it's probably barely worth it.
They need to add this to Vikings Battle for Asgard... I've still been trying to find a way to play that game, being that it's locked at 30fps, with a reasonable outcome that didn't feel like a sluggish mess
As someone who can't use V-Sync due to input lag delays feeling unplayable, I need this.
I've used this to fix black bars on Shadow PC inside the Oculus; using this method on desktop is genius. We need to push this to game devs immediately.
I see this in Pavlov when loading; the effect at 0 fps is evident there.
You could get rid of the edge effects by rendering at a higher FOV, then zooming in a little bit. It would likely hide the stretching issues.
My PC-to-VR streaming setup is a bit choppy, and the visual artifacts in this match almost exactly the visual artifacts I see during lag spikes.
From an engine developer's perspective I take a more critical stance on this, as there are a few oversights no one seems to talk about:
On the one hand, things like DLSS 3 get pixel-peeped to the max and every wrongly predicted pixel is treated as a flaw, yet here the far more drastic image errors get waved through as not dramatic. Also, the sample scene with its flat colors doesn't do justice to the amount of visual artifacting this generates when moving around.
The bigger point, however, is the proper implementation of this technique. It works perfectly fine while the GPU is barely loaded and sits idly waiting to render those reprojected frames. But as soon as the GPU is under full load, which is exactly when reprojection would be useful, you quickly run into massive frame-timing issues getting the reprojected frames out in time, since you can't guarantee the timeslot you get on the GPU. Async compute pipelines do exist, but they definitely don't guarantee execution in time.
Modern engines pre-compute a lot of their draw calls and send them out in command lists to reduce draw-call overhead; that's how they achieve their performance in the first place. You cannot easily interrupt the GPU at an arbitrary point to do a reprojection and then continue where it left off: the graphics pipeline state, which the engine carefully built and sorted so it changes as rarely as possible, would be lost. An analogy would be stopping a newspaper press mid-run to pull out a few copies and then trying to start it up again as if nothing happened.
VR gets away with this because the actual reprojection is done on the display device (plus some minor reprojection at the end of the frame, just before submitting to the headset).
So for this to be available in regular desktop games, extra hardware would be required in either the monitor or the GPU. It would take the last frame plus the changed viewport transform and perform the reprojection in the background while the rest of the GPU computes the new frame.
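To put rough numbers on that timing concern (illustrative values only, not measurements from the video): at 240 Hz the reprojection has roughly a 4 ms budget, but without preemption it can get stuck behind an already-submitted 33 ms worth of command lists.

```python
REFRESH_HZ = 240        # target reprojection/present rate
MAIN_RENDER_MS = 33.3   # one heavy frame's worth of queued command lists
REPROJECT_MS = 0.5      # assumed cost of the reprojection pass itself

budget_ms = 1000 / REFRESH_HZ  # ~4.2 ms per displayed frame
# Without GPU preemption, the reprojection may only start after the queued
# main-render work finishes, blowing well past the budget:
worst_case_ms = MAIN_RENDER_MS + REPROJECT_MS
print(budget_ms, worst_case_ms, worst_case_ms <= budget_ms)  # ~4.17, 33.8, False
```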
Intel NEEDS this for their Arc GPUs
Never would've thought LTT would cover this, definitely not before Digital Foundry.