A couple of months back, we published an article about game developers looking to the edge and the transformative value it can offer, particularly the faster and more reliable gathering and processing of in-game data that the edge supports.
This article focuses more on the benefits of edge compute for gamers themselves and draws out a host of examples that gamers (and game creators) can relate to.
Despite the mobile games industry being valued at over $50 billion in 2018, downloads and in-app purchases have plateaued in recent years. This is partly due to a lack of innovation and partly due to gamers' expectations of seamless, sophisticated experiences that demand extensive storage (locally and in the cloud) and maximum processing power, expectations gaming companies have not yet been able to meet. Current network, storage, and processing limitations have made it difficult to deliver this kind of sophistication on a mobile or IoT device for online gaming, virtual reality (VR), and augmented reality (AR).
Edge computing, however, promises better gaming experiences by lowering latency and improving accessibility at a more affordable cost to gamers. When workloads run at the edge of the network (instead of being sent to a few centralized locations for processing), data need only travel the minimum necessary distance, reducing associated lag time and enabling more interactive and immersive in-game experiences. Furthermore, edge computing is paving the way for more subscription-based models that could ultimately put some money back in gamers’ pockets by reducing the need for game and hardware investments.
A Better In-Game Experience
Improved Multiplayer Experience
Edge computing expands the opportunity to serve multiplayer gaming, which is both latency sensitive and bandwidth intensive. By matching gamers by their location and then placing game servers close to them, multiplayer latency can drop to single-digit milliseconds, dramatically reducing lag.
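The matchmaking idea above can be sketched in a few lines: measure each player's round-trip time (RTT) to nearby edge regions, assign each player to their lowest-latency region, and form lobbies within a region. The region names and RTT values below are purely illustrative, not any provider's real API.

```python
# Hypothetical sketch: assign each player to the edge region with the
# lowest measured round-trip time (RTT), then form lobbies per region.
# Region names and RTT figures are invented for illustration.

def pick_region(rtt_ms_by_region):
    """Return the region with the lowest measured RTT for this player."""
    return min(rtt_ms_by_region, key=rtt_ms_by_region.get)

def group_by_region(players):
    """Group players into lobbies keyed by their chosen edge region."""
    lobbies = {}
    for name, rtts in players.items():
        lobbies.setdefault(pick_region(rtts), []).append(name)
    return lobbies

players = {
    "alice": {"ams-edge": 8, "fra-edge": 14, "lon-edge": 21},
    "bob":   {"ams-edge": 9, "fra-edge": 30, "lon-edge": 25},
    "carol": {"ams-edge": 40, "fra-edge": 35, "lon-edge": 7},
}

print(group_by_region(players))
# alice and bob share the Amsterdam edge lobby; carol lands in London
```

A real matchmaker would also weigh server load and skill ratings, but proximity-first assignment is what keeps the latency in single digits.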
Hatch, the mobile cloud gaming spin-off from Rovio (the company behind Angry Birds), is a Packet customer (as is Webscale CloudFlow) that benefits from Packet's micro data centers deployed in cities, close to users, and from its unique business model, in which manufacturers and developers can deploy specialized hardware at Packet's edge data centers. This allows Hatch to quickly update and refresh the 90+ games on its monthly subscription platform as the need arises, ensuring its users get superfast access to the latest developments in their mobile games.
On Packet's infrastructure, Hatch runs a low-latency multiplayer game streaming service for users with low-end Android devices. According to Zachary Smith, CEO of Packet, “[Hatch] needs fairly specialized ARM servers in all these markets around the world. They have customized configurations of our server offering, and we put it in eight global markets across Europe, and soon it will be in 20 or 25 markets. It feels like Amazon to them, but they get to run customized hardware in every market in Europe.” Hatch could, in theory, do the same thing in the public cloud, but the costs would make that an inefficient business model. Smith says, “The difference is between putting 100 users per CPU versus putting 10,000 users per CPU”. Smith believes the new model will appeal to the latest generation of developers, who will be driving the next set of innovations in software.
Enabling Better VR/AR
A key advantage of edge compute for VR and AR experiences is its ability to reduce the dizziness associated with high latency and slow frame refresh rates, which make an experience laggy, frustrating, potentially nausea-inducing, and ultimately disorienting.
AR services need an application to analyze the output from a device’s camera and/or a specific location so that a user’s experience when visiting a point of interest can be supplemented. The application needs awareness of a user’s position and the direction they are looking in, provided via the camera view, positioning techniques, or both. Following analysis, the application is then able to offer additional information in real-time to the user. As soon as the user moves, that information needs to be refreshed. Hosting the Augmented Reality service on a Mobile Edge Computing (MEC) platform instead of in the cloud is beneficial because supplementary information relevant to a point of interest is highly localized and frequently irrelevant beyond the particular point of interest. The processing of information from the camera view or user location can also be performed on a MEC server instead of a cloud server to benefit from the lower latency and higher rate of data processing possible at the edge.
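The AR flow just described can be sketched as a simple pose-to-annotation lookup running on the edge server. Everything below, including the point-of-interest records and the 25-metre range, is invented for illustration; it only shows why keeping highly localized data and processing on a nearby MEC node makes sense.

```python
import math

# Minimal sketch of the AR flow described above, with invented data:
# the edge (MEC) server holds highly localized point-of-interest (POI)
# records and answers each pose update with the annotations in range.

POI_DB = [  # (name, x, y) in metres on a local grid -- illustrative only
    ("Fountain", 10.0, 5.0),
    ("Statue", 120.0, 40.0),
]

def annotations_for(pose, radius_m=25.0):
    """Return POIs near the user's reported position (x, y)."""
    x, y = pose
    return [name for name, px, py in POI_DB
            if math.hypot(px - x, py - y) <= radius_m]

# Each time the user moves, the client re-sends its pose and the
# edge server refreshes the overlay.
print(annotations_for((12.0, 8.0)))     # near the fountain
print(annotations_for((200.0, 200.0)))  # nothing in range
```

Because the POI data is only relevant within a small radius, there is little reason to route each pose update to a distant cloud region; a MEC node serving that neighbourhood can answer faster and keep the camera stream local.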
The huge success of the AR game Pokémon Go was largely due to the way it enabled rich user interactions with the real world. Through geotagging and a connection to users' Google data, the app could collect large amounts of data per user, including location, player movement, and Internet connectivity.
The game's worldwide success caught Niantic (Pokémon Go's creator) off guard, however, as the company had only a minimal global presence. Users experienced server crashes, privacy-invading hacks, and various other disruptions, leading to angry venting on the web and a slew of bad publicity. It is not certain, but likely, that the game's servers were hosted on Google Cloud Platform, which could not handle the unexpectedly high volume of users. Edge computing, however, is ideal for these types of games: by moving processing to the edge, closer to the end user, similar apps could offer a superior user experience with lower latency and fewer service disruptions.
Privacy was another significant issue with the first iteration of Pokémon Go. Reports of hacking grew because the game could access critical pieces of user data, including the camera, contacts, location, and Google account. Edge computing can help here as well by keeping processing localized in neighborhood data centers, or on the device itself, rather than sending sensitive data over the network and back to the cloud.
The Evolution of Cloud Gaming / Subscription Services
Cloud gaming looked set to catch on and become the future of video gaming back in 2009, when OnLive, the first cloud game streaming service, launched. At the time, IGN wrote that “this next generation cloud technology could change videogames forever”, ushering in a time in which “you may never need a high-end PC to play the latest games, or perhaps even ever buy a console again”. The service, which at one point received a valuation of $1.8 billion, nevertheless closed down for good only six years later (in April 2015), unloading its patents to Sony along the way.
OnLive was intended to be the simplest “pick-up-and-play” offering on the market, with games running on the company's servers and the video and audio streams compressed for transmission across the Internet into gamers' homes. The service ran into its first set of challenges in 2012, when it closed after running up $40 million in debt and losing many of its employees. It relaunched in 2014 as a monthly subscription service, initially priced at $14.99 and later reduced to $7.95. The company closed its doors for good the following year; the business was simply unsustainable.
However, although the business failed partly because of doubts over its ability to deliver a lag-free experience, low-latency cloud gaming sold via subscription was still a revolutionary idea. The success of other streaming subscription models in this vein, such as Netflix, Hulu, and Spotify, demonstrates the potential for such a model in gaming. Indeed, newer subscription services such as Sony PlayStation Now and Nvidia's GeForce Now are beginning to gather steam in a way that OnLive never did.
Sony PlayStation Now offers “an instant, ever-changing collection of hundreds of PlayStation games – ready to download on PS4 or stream on PS4 or PC”. Last year, Nvidia unveiled a beta version for Windows of its new game streaming service, GeForce Now, which, like OnLive, offers users access to a library of video games in the cloud in exchange for a monthly subscription fee. A high-end PC is not needed to run the gaming client.
Game-streaming services like Sony PlayStation Now and Nvidia's GeForce Now are placing a lot of faith in edge computing to enable their success. Latency can quickly destroy the user experience: a video game must respond to every keystroke. Any command a gamer issues has to travel over the network in each direction and be processed by the data center fast enough that the game feels like it is responding to each keyboard and mouse input in real time. The only way to ensure that kind of latency is to place the compute and processing power of gaming data centers as close as possible to the end user.
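The round trip described above can be put into rough numbers. A common rule of thumb is that the total input-to-response delay (network both ways plus server processing) should fit within roughly one frame at 60 fps (about 16.7 ms) to feel instantaneous; the one-way and processing figures below are illustrative assumptions, not measurements from any provider.

```python
# Back-of-the-envelope latency budget, using illustrative numbers:
# a command crosses the network twice (up and back) and is processed
# by the server; the total should fit within roughly one 60 fps frame
# (~16.7 ms) for the game to feel responsive.

def round_trip_ms(one_way_ms, processing_ms):
    """Total input-to-response delay: network both ways plus server work."""
    return 2 * one_way_ms + processing_ms

FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

# A distant centralized data center vs. a nearby edge site:
central = round_trip_ms(one_way_ms=35, processing_ms=8)  # 78 ms
edge = round_trip_ms(one_way_ms=4, processing_ms=8)      # 16 ms

print(central > FRAME_BUDGET_MS)  # True: the lag is perceptible
print(edge <= FRAME_BUDGET_MS)    # True: fits within one frame
```

The arithmetic makes the point: even with identical server hardware, shaving the one-way distance is the only lever that brings the total under the frame budget.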
In a recent demonstration of the service at AT&T's Spark conference in San Francisco, Nvidia showed a demo game running at a resolution of 1920 by 1080 with only 16 milliseconds of delay between the laptop and AT&T's data center in Santa Clara, using its edge network.
Reduced Hardware Requirements
One of the great benefits of gaming subscription services is the way they reduce the need for regular investments in new systems (e.g., a new PC, Sony PlayStation, or Xbox), along with the corresponding need to frequently upgrade those systems and to purchase games and the components required to run them, such as graphics cards and processors.
At the AT&T Spark demonstration, Paul Bommarito, Vice President of Americas Enterprise Sales for Nvidia, said, “So in the past to get this level of experience, you would need a workstation with a graphics processor costing a few thousand dollars. With GeForce Now and the graphics acceleration taking place in the cloud, you can get that level of beautiful experience on a $200 laptop. I think the best thing is 5G. If you think about that mobility capability of this high-bandwidth, low-latency network, the ability to have this gaming experience anytime, anyplace, anywhere, with GeForce Now on any device, our customers are going to love it.”
The Future of Gaming at the Edge
Edge compute makes online gaming more commercially viable than centralized cloud compute ever could. Because low latency is so essential to the success of immersive mobile cloud gaming, as well as to VR and AR, previous compute frameworks have been unable to match their promise, until now. By placing the gathering and processing of large amounts of information at the edge of the network, as close to the user as possible, providers can start to offer the kind of low latency required to make online gaming an ongoing success.
Improved network performance in areas such as delay and packet jitter directly translates to improvements in application performance, including in areas critical to the success of online gaming, such as motion-to-photon latency and frame loss.
As Matt Caulfield, self-identified “edge computing and distributed systems enthusiast”, recently wrote in a post on Medium, “The lower the latency between a game console or PC gaming rig and the backend server, the lower the lag. The rise of competitive gaming suggests that the massive gaming community is willing to pay a premium for a better experience.”
With a subscription-based edge streaming model, gamers will no longer need to regularly purchase updated hardware or software; instead, they can subscribe to an edge-hosted gaming platform accessible from their existing devices, connecting remotely to a continually evolving library of games while the edge hardware is kept up to date elsewhere.
Perhaps edge computing, with its promise of dramatically low latencies, will reignite the streaming model in games once more. At a panel discussion at AT&T’s Spark conference, Microsoft Azure’s Royeka Jones described edge compute as “the enabler that will allow infinite possibilities around what we can do with technologies.”