The benefits of game streaming are many: instant access, no client patching, no piracy, and no load times. However, video-based streaming, such as OnLive™, does not work well enough. Cost-wise, this is due to the high cost of purchasing and running custom servers in multiple locations. Experience-wise, streamed video tends to drop frames, is very susceptible to network jitter, and will not work on high-latency networks, such as mobile networks.

We are developing an approach to stream textured geometry, trimmed by a new occlusion culling mechanism that is tolerant enough of lag and jitter to work over 4G connections.

Game content size is growing in proportion to what bandwidth and compression can deliver to a client, while server storage keeps getting cheaper per terabyte. Building on this premise, we preprocess levels and store the geometry viewable from any given region in a large database on a generic server. A dedicated server then streams the textured geometry that could become visible within the next second or so to each connected client. Because only potentially visible geometry is sent, there is minimal overdraw and no client work needed for object culling. For VR, the client can optionally render in stereo at no extra bandwidth cost.
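
To make that concrete, here is a minimal C++ sketch of what the dedicated server's streaming loop might look like. Every name below (ViewcellRecord, StreamingServer, and so on) is a hypothetical illustration, not our actual implementation; it assumes the precomputed database maps each viewcell to its visible meshes and to the cells a player could reach within about a second.

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using CellId = uint32_t;
using MeshId = uint64_t;

// Precomputed per-region record: every mesh visible from anywhere
// inside this viewcell, plus the cells reachable within ~1 second.
struct ViewcellRecord {
    std::vector<MeshId> visibleMeshes;
    std::vector<CellId> reachableSoon;
};

class StreamingServer {
public:
    // Called as the client reports its current viewcell. Streams any
    // geometry that could become visible soon and isn't already resident.
    void onClientMoved(CellId current) {
        streamCell(current);
        for (CellId next : db_.at(current).reachableSoon)
            streamCell(next);
    }

private:
    void streamCell(CellId cell) {
        for (MeshId m : db_.at(cell).visibleMeshes) {
            if (sentToClient_.insert(m).second)  // true only if not yet sent
                sendMesh(m);
        }
    }

    void sendMesh(MeshId) { /* serialize and transmit (omitted) */ }

    std::unordered_map<CellId, ViewcellRecord> db_;  // the precomputed database
    std::unordered_set<MeshId> sentToClient_;        // per-client resident set
};
```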

The method we use to preprocess the levels is unique and interesting. It is based on a new approach to precomputing high-precision from-region visibility without the computational and implementation complexity introduced by quadric surfaces. It exploits several types of visibility coherence to encode visibility event packets with the precision and granularity needed for efficient 3D content streaming.
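
The post does not define the packet format, but given the delta sets mentioned further down, one plausible minimal encoding is a pair of ID lists keyed to a viewcell boundary crossing. This struct is purely speculative:

```cpp
#include <cstdint>
#include <vector>

// Speculative sketch of a visibility event packet: a delta against the
// previous viewcell's visible set, emitted when a boundary is crossed.
struct VisibilityEventPacket {
    uint32_t fromCell;                    // viewcell being left
    uint32_t toCell;                      // viewcell being entered
    std::vector<uint64_t> becameVisible;  // geometry IDs newly exposed
    std::vector<uint64_t> becameHidden;   // IDs occluded after the crossing
};
```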

The optimization challenges now shift almost entirely to bandwidth. For example, an artist might apply Catmull-Clark subdivision to a base object and output a massive bag of polygons for the game to render. We can instead store just the base object on the server and apply the subdivision on the client, saving a massive amount of bandwidth. This approach opens the door to the Holy Grail of micro-polygons, but that's an aside.
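
As a sketch of the idea (not our actual pipeline), the client could receive the compact base mesh plus a subdivision level and rebuild the dense mesh locally. The single refinement step is passed in as a callable here, since any Catmull-Clark implementation (Pixar's OpenSubdiv, for instance) could fill that role. Each level quadruples the quad count, so sending the pair (base, levels) instead of the expanded mesh saves roughly a factor of 4^levels:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical wire format: only this compact base mesh plus a
// subdivision level would cross the network.
struct QuadMesh {
    std::vector<Vec3>     vertices;
    std::vector<uint32_t> quads;  // 4 vertex indices per quad
};

// Rebuild the dense mesh on the client by applying one subdivision
// step per level. Any Catmull-Clark implementation can be plugged in.
QuadMesh expandOnClient(QuadMesh base, int levels,
                        const std::function<QuadMesh(const QuadMesh&)>& subdivideOnce) {
    for (int i = 0; i < levels; ++i)
        base = subdivideOnce(base);
    return base;
}

// Catmull-Clark quadruples the quad count at every level, so the fully
// expanded mesh costs ~4^levels times the bandwidth of (base, levels).
std::size_t quadsAfter(std::size_t baseQuads, int levels) {
    return baseQuads << (2 * levels);  // baseQuads * 4^levels
}
```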

Our approach combines visibility metadata with any procedural representation of an object to minimize bandwidth requirements and to reduce client-side object generation time and overdraw. Client-generated procedural textures will also be a boon.
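
For instance, a texture can be synthesized on the client from a few bytes of parameters (a seed and a size) instead of megabytes of image data. The hash-noise function below is purely illustrative of that trade, not a texture generator we actually ship:

```cpp
#include <cstdint>
#include <vector>

// Synthesize a repeatable grayscale noise texture from a seed. Only the
// seed and dimensions need to cross the wire; the pixels are free.
std::vector<uint8_t> proceduralNoiseTexture(uint32_t seed, int w, int h) {
    std::vector<uint8_t> pixels(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Cheap integer hash of (seed, x, y) -> deterministic noise.
            uint32_t n = seed ^ (x * 374761393u) ^ (y * 668265263u);
            n = (n ^ (n >> 13)) * 1274126177u;
            pixels[static_cast<size_t>(y) * w + x] =
                static_cast<uint8_t>(n >> 24);
        }
    }
    return pixels;
}
```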

Additionally, one of the big problems in autonomous navigation is getting a working set small enough for the software to do 3D model matching. For example, if a Google® car (or an Amazon delivery drone) were model matching against all of San Francisco, it would take a while. We can stream just the geometry that is visible from the region the vehicle or aircraft is in, using the visibility event delta sets, giving it a much more reasonable set of data to work on.
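
A sketch of how the vehicle's working set might stay current, reusing the speculative packet shape from earlier: each viewcell boundary crossing applies one delta, so the matcher only ever holds geometry visible from its current region.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Same speculative packet shape sketched above.
struct VisibilityEventPacket {
    uint32_t fromCell, toCell;
    std::vector<uint64_t> becameVisible;
    std::vector<uint64_t> becameHidden;
};

// Apply one delta as the vehicle crosses a viewcell boundary. Afterward,
// workingSet holds only geometry visible from the new region, which is a
// far smaller matching target than the whole city.
void applyDelta(std::unordered_set<uint64_t>& workingSet,
                const VisibilityEventPacket& pkt) {
    for (uint64_t id : pkt.becameVisible) workingSet.insert(id);
    for (uint64_t id : pkt.becameHidden)  workingSet.erase(id);
}
```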

The visibility metadata stream could also enable some unique capabilities for defense and homeland security applications.

Those are the very broad strokes. The details are even more interesting. 

Do something different and interesting. Join the team.
