May 27th, 2012, 22:34 Posted By: wraggster
The gaming world was a much simpler place back in 2009, when cloud-based gameplay streaming couldn't possibly work - at least not to anywhere near the degree of the claims being made at the time. And yet, despite falling short of the local experience and indeed the scant quoted metrics on performance, OnLive could be played. It was sub-optimal in many ways, but it was playable. It delivered a viable first-gen end product that was ripe for improvement, and just three years later we are seeing workable solutions being introduced that could change everything.

At the recent GPU Technology Conference, NVIDIA set out to do exactly that with the unveiling of its new GeForce GRID, an important innovation that potentially solves a lot of problems on both the client and server side. Until now, cloud gaming is believed to have been achieved by connecting the user to a single PC inside the datacentre with its own discrete GPU. This is hardly power-efficient and it's also very expensive - but it was necessary, because graphics cores could not be virtualised across several users in the way that CPUs have been for some time now. GRID is perhaps the turning point, offering full GPU virtualisation with a power budget of 75W per user. This means that servers can be smaller, cheaper, easier to cool and less power-hungry.

GRID also targets the quality of the client-side experience, in that NVIDIA believes it has made key inroads in tackling latency, the single biggest barrier to cloud gaming's success. So confident is the company in its technology that it has committed to detailed metrics. So let's take a quick look at the gameplay streaming world as viewed by NVIDIA.

"GeForce GRID is all about minimising server and encoding latency, bringing the Cloud closer to console levels of input lag."
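As a rough illustration of why that 75W-per-user figure matters for server density, here's a minimal Python sketch. Only the 75W number is NVIDIA's; the rack power envelope and the per-user draw of a dedicated-GPU setup are assumptions for illustration only.

# Back-of-the-envelope density comparison (Python). Only the 75W-per-user
# GRID figure is NVIDIA's; the rack power envelope and the per-user draw of
# a dedicated-GPU setup are assumptions for illustration.

RACK_POWER_BUDGET_W = 10_000  # assumed power envelope for one server rack

def users_per_rack(watts_per_user):
    """How many concurrent streams fit inside the assumed rack power budget."""
    return int(RACK_POWER_BUDGET_W // watts_per_user)

dedicated_gpu_w = 250  # assumed: one discrete high-end GPU per user
grid_user_w = 75       # quoted: NVIDIA's stated per-user power budget for GRID

print(f"One discrete GPU per user: ~{users_per_rack(dedicated_gpu_w)} streams per rack")
print(f"GeForce GRID:              ~{users_per_rack(grid_user_w)} streams per rack")

Whatever envelope you assume, the ratio is the point: a virtualised 75W-per-user budget packs several times as many streams into the same power and cooling footprint as a one-GPU-per-user setup.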
NVIDIA's outlook on the Cloud. Perhaps we're seeing best-case scenarios being compared to worst-case, as our OnLive measurements only match 'Cloud Gen I' when the service is running poorly - in other cases we see a good 50ms advantage over what is posted here. However, other metrics, like 'console plus TV', do tally with our experiences.
The slide on the left, seeking to put latency into context, is perhaps a bit of a red herring on first reading, certainly for a core gamer - perhaps for any console owner. The difference in response between Modern Warfare 3 and virtually any 30FPS shooter is obvious, and is a defining factor in why it is the top-selling console shooter. However, there does seem to be a perceptual threshold at which latency actively impedes gameplay or simply doesn't "feel" right - hence GTA4 feeling muggy to control, and Killzone 2 attracting so much criticism for its controls (fixed in the sequel). And we'd concur that this threshold does seem to be around the 200ms mark, factoring in input plus display lag.

The slide on the right is where things get interesting. The bottom "console plus TV" metric is actually pretty optimistic for standard 30FPS console gameplay (116ms to 133ms is closer to the console norm based on our measurements), but, on the flipside, the 66ms display lag feels a touch inflated. The "Cloud Gen I" bar tallies with our OnLive experiences - or rather, with OnLive operating generally below par. At its best, and factoring in NVIDIA's 66ms display lag, it would actually be closer to 216ms than the indicated 283ms - just a frame or two away from typical local console lag in a 30FPS game.

The top bar is NVIDIA's stated aim for GeForce GRID - a touch under 150ms, including display latency. That's an ambitious target, and there are elements in the breakdown that don't quite make sense to us, but the key elements look plausible. The big savings appear to be reserved for network traffic and the capture/encode process. The first most likely refers to Gaikai's strategy of more locally placed servers, compared to OnLive's smaller number of larger datacentres. The capture/encode latency is the interesting part - NVIDIA's big idea is to use low-latency, fast-read framebuffers linked directly to an onboard compressor.

Where we are a little unclear is on the 5ms video decode, down from the 15ms of "Cloud Gen I" - a useful 10ms saving. We are aware that Gaikai's association with LG, for direct integration into its Smart TVs, has included an aggressive evaluation of the internals aimed at reducing latency, both in decoding the image data and in scan-out to the screen itself, which may account for this - but it is difficult to believe that this 10ms boost would apply to all devices.

Manufacturer-supplied benchmarks always need to be treated with caution, and there is a suspicion here that a best-case scenario is being compared to a worst-case scenario on some elements. Regardless, it's clear which areas NVIDIA has targeted for latency reduction.
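To make the latency arithmetic above easier to follow, here's a minimal Python sketch that sums a streaming pipeline's stages into an end-to-end figure. Only the numbers quoted above (the 66ms display lag, the 15ms versus 5ms decode, the ~283ms "Cloud Gen I" total and the sub-150ms GRID target) come from the slides; the remaining per-stage splits are assumptions chosen purely so the totals line up.

# Minimal latency-budget sketch (Python). Stage values marked "assumed" are
# illustrative only; the quoted figures come from the article above.

def total_latency_ms(stages):
    """Sum per-stage latencies (in ms) into an end-to-end input-lag figure."""
    return sum(stages.values())

# "Cloud Gen I" as discussed above: roughly 283ms including the display.
cloud_gen1 = {
    "game_and_server": 100,  # assumed split of the remaining budget
    "capture_encode": 30,    # assumed
    "network": 72,           # assumed
    "decode": 15,            # quoted: Cloud Gen I video decode
    "display": 66,           # quoted: NVIDIA's display lag figure
}

# GeForce GRID target: a touch under 150ms including the display.
grid_target = {
    "game_and_server": 45,   # assumed
    "capture_encode": 10,    # assumed: fast-read framebuffer + onboard encoder
    "network": 20,           # assumed: more locally placed, Gaikai-style servers
    "decode": 5,             # quoted: GRID video decode
    "display": 66,           # quoted: same display lag as above
}

print(f"Cloud Gen I:  ~{total_latency_ms(cloud_gen1)}ms")   # ~283ms
print(f"GeForce GRID: ~{total_latency_ms(grid_target)}ms")  # ~146ms

Played with in this way, the numbers make the article's point clearly enough: with decode and display largely fixed, the bulk of the claimed savings has to come from the network and capture/encode stages.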
http://www.eurogamer.net/articles/di...ud-performance