Basic concepts

With the key design elements established, the next task was to assemble them into a suitable game architecture, one that is relatively easy to scale, maintain and adjust. The application is unusual in that the primary input from the user is their location and orientation in the world. From a computer-programming point of view we are basically using the person as a giant 3D computer mouse! (Perhaps mouse is the wrong word when we’re talking about a person?)

Figure: Example portal with simulated overlay on background.

Location controls which portals can be seen and when they can be activated. Orientation controls what is shown on the graphical display, so that the displayed content remains fixed relative to the world.
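To make the split between the two inputs concrete, the sketch below shows one way this could look in Java. The class names, the activation radius, and the bearing calculation are illustrative assumptions, not code from the actual engine.

```java
// Hypothetical sketch: the user's pose drives both portal visibility
// (via position) and where the overlay is drawn (via orientation).
public class PoseInput {

    /** World-space position and view direction reported by the glasses. */
    public record Pose(double x, double y, double z, double yawDeg, double pitchDeg) {}

    /** A portal anchored at a fixed world position. */
    public record Portal(String id, double x, double y, double z, double activationRadius) {}

    /** Location decides whether a portal is close enough to be seen and activated. */
    static boolean isInRange(Pose pose, Portal portal) {
        double dx = portal.x() - pose.x();
        double dy = portal.y() - pose.y();
        double dz = portal.z() - pose.z();
        return Math.sqrt(dx * dx + dy * dy + dz * dz) <= portal.activationRadius();
    }

    /** Orientation decides where the portal appears on the display, so it
     *  stays fixed relative to the world rather than to the glasses. */
    static double bearingOnDisplayDeg(Pose pose, Portal portal) {
        double worldBearing = Math.toDegrees(Math.atan2(portal.x() - pose.x(), portal.z() - pose.z()));
        // Subtract the head yaw: as the user turns, the overlay shifts the opposite way.
        double relative = worldBearing - pose.yawDeg();
        // Normalise to (-180, 180] so "straight ahead" is 0.
        return relative - 360.0 * Math.floor((relative + 180.0) / 360.0);
    }

    public static void main(String[] args) {
        Pose pose = new Pose(0, 0, 0, 30.0, 0.0);
        Portal portal = new Portal("portal-1", 3.0, 0.0, 4.0, 10.0);
        System.out.println("In range: " + isInRange(pose, portal));
        System.out.println("Bearing on display: " + bearingOnDisplayDeg(pose, portal) + " deg");
    }
}
```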

In this first application, gameplay is linear: once one portal is activated, the next becomes visible. We noted from comments in feedback sessions that it is important to allow a previously activated portal to be reactivated, in case the user missed the content or wants to show it to a person standing near them.
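A minimal sketch of that progression rule, including the reactivation behaviour, might look like the following. The portal names and indexing scheme are invented for illustration.

```java
import java.util.List;

// Hypothetical sketch of the linear progression rule: every portal up to and
// including the next unactivated one is visible, and any previously activated
// portal can be re-entered.
public class PortalProgression {
    private final List<String> portalIds;   // portals in story order
    private int highestActivated = -1;      // index of the last activated portal

    public PortalProgression(List<String> portalIds) {
        this.portalIds = portalIds;
    }

    /** Visible = already activated (so it can be replayed) or the next one in line. */
    public boolean isVisible(int index) {
        return index <= highestActivated + 1 && index < portalIds.size();
    }

    /** Activating a portal unlocks the next; re-activating an earlier one changes nothing. */
    public void activate(int index) {
        if (isVisible(index)) {
            highestActivated = Math.max(highestActivated, index);
        }
    }

    public static void main(String[] args) {
        PortalProgression game = new PortalProgression(List.of("gate", "market", "harbour"));
        game.activate(0);                       // first portal activated
        System.out.println(game.isVisible(0));  // true: can be replayed
        System.out.println(game.isVisible(1));  // true: next portal unlocked
        System.out.println(game.isVisible(2));  // false: not yet reachable
    }
}
```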

Our developers created the core game engine in Java, with key game parameters specified through a configuration file. The configuration file supports rapid development because it allows quick tweaks to the game over a USB connection to the smart glasses, without reinstalling the app.
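The format of that file is not described here, but a plain Java properties file would be one natural choice. The sketch below assumes that format, with made-up keys and a hypothetical file path on the glasses.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative sketch only: a standard Java Properties file is assumed. A file
// pushed over USB could simply be re-read at launch, with no app reinstall.
public class GameConfig {
    private final Properties props = new Properties();

    public GameConfig(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    public double activationRadiusMetres() {
        return Double.parseDouble(props.getProperty("portal.activationRadius", "5.0"));
    }

    public String titleImage(String portalId) {
        // e.g. a line such as "portal.gate.title=gate_title.png" in the file
        return props.getProperty("portal." + portalId + ".title");
    }

    public static void main(String[] args) throws IOException {
        GameConfig config = new GameConfig("/sdcard/portals/game.properties"); // hypothetical path
        System.out.println("Activation radius: " + config.activationRadiusMetres() + " m");
    }
}
```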

Once interactions were enabled, some additional design elements could be built, as described below.

Additional element #1: portal activation

A portal is activated by making its vortex grow more intense the longer the viewer keeps looking at it. This is done by steadily increasing the rate of plasma filament creation and the speed of the vortex. When the vortex reaches a critical intensity, a title is superimposed on it. The title is simply a preconfigured graphical image; the set of images to use is specified in the configuration file.
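As a rough illustration of that ramp-up, the sketch below ties both the filament spawn rate and the vortex speed to a single intensity value that climbs while the portal is gazed at and decays otherwise. All the numbers are placeholder tuning values, not those used in the game.

```java
// Sketch of gaze-driven activation with assumed parameter names and values.
public class VortexActivation {
    private double intensity = 0.0;                        // 0 = idle, 1 = fully activated
    private static final double RAMP_PER_SECOND = 0.25;    // assumed ramp-up while gazed at
    private static final double DECAY_PER_SECOND = 0.5;    // assumed decay when gaze leaves
    private static final double CRITICAL = 1.0;

    /** Called every frame with the frame time and whether the user is looking at the portal. */
    public void update(double dtSeconds, boolean gazedAt) {
        double delta = gazedAt ? RAMP_PER_SECOND : -DECAY_PER_SECOND;
        intensity = Math.min(CRITICAL, Math.max(0.0, intensity + delta * dtSeconds));
    }

    /** Filament spawn rate and vortex speed both scale with intensity. */
    public double filamentsPerSecond() { return 10.0 + 90.0 * intensity; }
    public double vortexSpeed()        { return 1.0 + 3.0 * intensity; }

    /** Once intensity reaches the critical level, the preconfigured title image is shown. */
    public boolean showTitle() { return intensity >= CRITICAL; }

    public static void main(String[] args) {
        VortexActivation vortex = new VortexActivation();
        for (int frame = 0; frame < 300; frame++) {
            vortex.update(1.0 / 60.0, true);   // user keeps gazing for five seconds
        }
        System.out.println("Activated: " + vortex.showTitle());
    }
}
```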

Additional element #2: 3D video with approximate variable viewpoint

After a portal has been activated, the display switches to show 3D video of a scene from the past. We would like our viewer to feel as much a part of the scene as possible. True variable-viewpoint video would require a full 3D model of the video content, filmed from multiple angles, which was beyond the budget of this initial project. Instead we took an approach in which the video action is assumed to occur at a nominal depth, so the entire video scene is modeled as a plane. As the user looks around, the location of this plane relative to the viewer is adjusted. Of course this simplified model becomes less believable the more depth there is in the video. Nevertheless it provides a sense that the scene is external to the glasses, not simply fixed to the glasses' reference frame.
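The sketch below illustrates the fixed-depth approximation: the video is treated as a plane at an assumed nominal depth, and its horizontal offset is recomputed from the head yaw each frame so the scene appears to stay put in the world. The depth value and geometry here are illustrative only.

```java
// Minimal sketch of the fixed-depth approximation: the whole video is a plane
// at an assumed nominal depth, shifted against head rotation so the scene
// feels anchored to the world rather than to the glasses.
public class VideoPlane {
    private static final double NOMINAL_DEPTH_M = 4.0;   // assumed depth of the action
    private final double anchorYawDeg;                    // world yaw the plane was placed at

    public VideoPlane(double anchorYawDeg) {
        this.anchorYawDeg = anchorYawDeg;
    }

    /** Horizontal offset (metres at the nominal depth) of the plane centre as the head turns. */
    public double horizontalOffsetMetres(double headYawDeg) {
        double relativeYaw = Math.toRadians(anchorYawDeg - headYawDeg);
        // Shift of a plane fixed in the world at the nominal depth.
        return NOMINAL_DEPTH_M * Math.tan(relativeYaw);
    }

    public static void main(String[] args) {
        VideoPlane plane = new VideoPlane(0.0);
        // Turning the head 10 degrees to the right slides the plane about 0.7 m the other way.
        System.out.println(plane.horizontalOffsetMetres(10.0));
    }
}
```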