At the conference where Sony revealed the PS4, a service called PlayGo was mentioned, though little was explained about it beyond the fact that it lets you play a game while you are still downloading it. Yesterday, in an interview with Gamasutra, Mark Cerny discussed PlayGo in more detail. Here are some of the key points he spoke of:
“The reason we use dedicated units is it means the overhead as far as games are concerned is very low,” said Cerny. “It also establishes a baseline that we can use in our user experience.”
“For example, by having the hardware dedicated unit for audio, that means we can support audio chat without the games needing to dedicate any significant resources to them. The same thing for compression and decompression of video.” The audio unit also handles decompression of “a very large number” of MP3 streams for in-game audio.
“The concept is you download just a portion of the overall data and start your play session, and you continue your play session as the rest downloads in the background.”
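The idea above can be sketched in a few lines: play begins once an initial portion of the data is present, while the remaining chunks keep arriving in the background. This is only an illustration of the concept; the chunk names, sizes, and the `PlayGoDownload` class are hypothetical, not the real PlayGo format.

```python
import threading
import time

class PlayGoDownload:
    """Minimal sketch: play can start after an initial portion of the
    game data has arrived; the rest keeps downloading in the background."""

    def __init__(self, chunks, playable_after):
        self.chunks = chunks                  # ordered list of chunk ids
        self.playable_after = playable_after  # chunks needed before play starts
        self.downloaded = []
        self.ready = threading.Event()        # set once play can begin

    def _fetch(self, chunk):
        time.sleep(0.01)                      # stand-in for network I/O
        return chunk

    def run(self):
        for chunk in self.chunks:
            self.downloaded.append(self._fetch(chunk))
            if len(self.downloaded) >= self.playable_after:
                self.ready.set()

dl = PlayGoDownload(["intro", "level1", "level2", "level3"], playable_after=2)
t = threading.Thread(target=dl.run)
t.start()
dl.ready.wait()                   # the "play session" can start here
started_with = len(dl.downloaded)
t.join()                          # background download finishes during play
```

After `ready.wait()` returns, the player is in the game with only part of the data on hand, and by the time the thread joins, the whole download has completed.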
However, PlayGo “is two separate linked systems,” Cerny said. The other concerns the Blu-ray drive, to help with the fact that it is, essentially, a bit slow for next-gen games.
“So, what we do as the game accesses the Blu-ray disc, is we take any data that was accessed and we put it on the hard drive. And then if there is idle time, we go ahead and copy the remaining data to the hard drive. And what that means is after an hour or two, the game is on the hard drive, and you have access, you have dramatically quicker loading… And you have the ability to do some truly high-speed streaming.”
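The two-part scheme Cerny describes can be sketched as a read-through cache: any block the game reads from the slow disc is kept on the hard drive, and idle time is used to migrate everything else over. The `DiscCache` class and block names below are illustrative assumptions, not the console's actual implementation.

```python
class DiscCache:
    """Sketch of the Blu-ray-to-HDD caching idea (names are hypothetical)."""

    def __init__(self, disc_blocks):
        self.disc = dict(disc_blocks)   # block id -> data (the slow medium)
        self.hdd = {}                   # blocks already copied to the hard drive

    def read(self, block_id):
        # Read-through: serve from the HDD copy when present,
        # otherwise fetch from disc and keep the copy.
        if block_id not in self.hdd:
            self.hdd[block_id] = self.disc[block_id]
        return self.hdd[block_id]

    def idle_copy(self, budget):
        # During idle time, copy up to `budget` not-yet-cached blocks.
        remaining = (b for b in self.disc if b not in self.hdd)
        for _, block_id in zip(range(budget), remaining):
            self.hdd[block_id] = self.disc[block_id]

cache = DiscCache({"a": b"...", "b": b"...", "c": b"..."})
cache.read("a")            # accessed data lands on the HDD
cache.idle_copy(budget=2)  # idle time migrates the rest
print(sorted(cache.hdd))   # → ['a', 'b', 'c']
```

After enough idle time every block is on the hard drive, so subsequent reads never touch the disc, which is the "dramatically quicker loading" Cerny points to.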
To further help the Blu-ray along, the system also has a unit to support zlib decompression — so developers can confidently compress all of their game data and know the system will decode it on the fly. “As a minimum, our vision is that our games are zlib compressed on media.”
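Python's standard-library zlib module implements the same format Cerny is referring to, so the round trip — compressed data on the disc, inflated on the fly as it is read — is easy to demonstrate. The asset bytes here are a made-up stand-in for real game data.

```python
import zlib

asset = b"vertex data " * 1000      # stand-in for a highly repetitive game asset
packed = zlib.compress(asset, 9)    # what would live on the disc
unpacked = zlib.decompress(packed)  # what the dedicated hardware unit does

assert unpacked == asset            # lossless round trip
```

Repetitive data like this compresses dramatically, which is why shipping everything zlib-compressed both saves disc space and effectively raises the drive's throughput.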
Mark Cerny also discussed how Sony modified the PS4 hardware. The three “major modifications” Sony made to the architecture to support this vision are as follows, in Cerny’s words:
- “First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that’s being passed back and forth between CPU and GPU is small, you don’t have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That’s not very small in today’s terms — it’s larger than the PCIe on most PCs!”
- “Next, to support the case where you want to use the GPU L2 cache simultaneously for both graphics processing and asynchronous compute, we have added a bit in the tags of the cache lines, we call it the ‘volatile’ bit. You can then selectively mark all accesses by compute as ‘volatile,’ and when it’s time for compute to read from system memory, it can invalidate, selectively, the lines it uses in the L2. When it comes time to write back the results, it can write back selectively the lines that it uses. This innovation allows compute to use the GPU L2 cache and perform the required operations without significantly impacting the graphics operations going on at the same time — in other words, it radically reduces the overhead of running compute and graphics together on the GPU.”
- Thirdly, said Cerny, “The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands — the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that’s in the system.”
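The third modification — many compute command sources feeding one GPU through hardware arbitration — can be sketched in software. The scheme below is a deliberately simple fixed-priority scan over 64 queues (matching the number Cerny mentions); the real PS4 arbitration is done in hardware with multiple levels and is far more sophisticated.

```python
from collections import deque

NUM_QUEUES = 64                     # the number of compute sources Cerny cites
queues = [deque() for _ in range(NUM_QUEUES)]

def submit(queue_id, command):
    """A game drops an asynchronous compute command into one of the queues."""
    queues[queue_id].append(command)

def arbitrate():
    """Pick the next command to run: scan the queues in order and take the
    first pending command (a fixed-priority policy, purely illustrative).
    Returns None when every queue is empty."""
    for q in queues:
        if q:
            return q.popleft()
    return None

submit(0, "physics_step")
submit(5, "audio_mix")
order = []
while (cmd := arbitrate()) is not None:
    order.append(cmd)
print(order)  # → ['physics_step', 'audio_mix']
```

The point of having many queues is that independent subsystems (physics, audio, AI) can submit work without coordinating with one another, leaving the "what runs, how it runs, and when it runs" decision to the arbiter.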
To read the full interview, check this link:
Inside the PlayStation 4 With Mark Cerny