
September 20, 2008

Day Three (at the AGDC), Part Two

As I mentioned last time, there were some really intriguing presentations on the third day of the conference. One in particular was a technology demonstration given by representatives of two companies, Emotiv Systems and 3DV Systems, which are developing innovative ways for players to interface with computers or other entertainment devices.

Randy Breen from Emotiv Systems demonstrated what he called their "Brain-Computer Interface", a device that fits on the head and is based on EEG machines. It basically translates brain waves into actions after a period of training. It's compact (I didn't even notice him wearing it during his talk), lightweight, and wireless, and includes a gyro to detect head movements. It can also detect facial expressions (blinking, smiling, eyebrow movement) and can essentially monitor emotional states. It can even detect cognitive intent to manipulate objects. Wild, but apparently true.

During the demo, he displayed the tool's SDK, which exposed the various detections, and showed an avatar mimicking his behavior -- blinking when he blinked, raising its eyebrows when he did, smiling along with him. He envisions the eventual ability to move lips along with audio to visually represent speech. His last demonstration was of cognitive action: he was able to move a 3D block on screen merely by thinking about it. He then showed how the tool can be trained to perform other cognitive actions, like making the block disappear just by thinking it. Very slick.
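
The detection-to-avatar loop he demonstrated could be sketched roughly like this -- a hedged illustration only, with every name invented for the example (the real Emotiv SDK's API is different):

```python
# Hypothetical sketch: dispatching facial-expression detections (as an
# SDK like Emotiv's might surface them) to avatar animation calls.
# All identifiers here are invented for illustration.

def mirror_expression(event, intensity):
    """Map a detected expression event to an (animation, strength) pair."""
    actions = {
        "blink": "avatar.blink",
        "smile": "avatar.smile",
        "raise_brow": "avatar.raise_brow",
    }
    animation = actions.get(event)
    if animation is None:
        return None  # unrecognized detection: ignore it
    # Clamp the detection's reported intensity into [0, 1] before
    # handing it to the animation system.
    return (animation, max(0.0, min(1.0, intensity)))

print(mirror_expression("smile", 0.8))   # ('avatar.smile', 0.8)
print(mirror_expression("frown", 1.0))   # None -- no mapping defined
```

The same table-driven dispatch would extend naturally to trained cognitive actions (e.g. mapping a "push" intent to moving the on-screen block).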

It looked as though the tool and its software still have a ways to go before they're fast, smooth, and widely applicable. But the demo was nevertheless impressive, and the technology appears to offer a ton of possibilities for games in the future.

Next was Charles Bellfield from 3DV Systems, who demonstrated their camera-based tool for detecting and harnessing player motion. Although I didn't understand much of how it worked, it appears as though a sophisticated camera/detector sits on top of a computer screen or television and sends out light waves, measuring the amount of light that reflects back from the individual and using that information to generate a fairly detailed greyscale image of the person's shape -- parts of the person that are closer appear lighter, providing a real-time, three-dimensional, depth-based representation of the subject. They are also able to specify how far from the camera the light should be detected, making it easy to remove all background and focus entirely on the subject. And, through special techniques that I didn't catch, they're able to make the system track different targets on the subject, such as hands, fingertips, feet, and so on.
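
The depth-keying idea -- closer pixels lighter, everything outside a chosen range cut away -- can be sketched in a few lines. This is purely an illustration of the concept as I understood it, not 3DV's actual algorithm:

```python
# Hedged sketch: convert a per-pixel depth map (smaller = closer) into
# a greyscale image, keeping only pixels within [near, far] so the
# background drops out entirely. Invented for illustration.

def depth_to_grayscale(depth_map, near, far):
    """Map depths to 0-255 brightness; out-of-range pixels become 0."""
    out = []
    for row in depth_map:
        out_row = []
        for d in row:
            if near <= d <= far:
                # Linear ramp: near -> 255 (light), far -> 0 (dark).
                out_row.append(round(255 * (far - d) / (far - near)))
            else:
                out_row.append(0)  # background: outside the detection range
        out.append(out_row)
    return out

frame = [[0.5, 1.0, 3.0],
         [0.9, 2.0, 5.0]]  # depths in meters, say
print(depth_to_grayscale(frame, near=0.5, far=2.0))
# [[255, 170, 0], [187, 0, 0]] -- the 3.0 m and 5.0 m pixels vanish
```

With a real-time stream of such frames, tracking a hand or fingertip reduces to following bright blobs in the foreground mask.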

The demonstrations were very cool: a flight simulator application where the plane is steered by the player's hands mimicking the holding of a yoke (and guns fired by the movement of his thumbs); a kickboxing simulation that responds accurately to the player's punching and kicking motions. Even the player's height becomes important, as in the kickboxing simulation, since the tool can detect height and translate it into the application, providing a more accurate and detailed response.

This system appeared a little closer to being ready for prime time, and could advance the incorporation of player motion into gaming in a fashion similar to the Wiimote, only without the need for a remote at all. All in all, a very cool set of presentations, and I'd really like to see how these tools make it into the marketplace.

The last presentation that I went to was given by Adrian Hon from Six To Start, who discussed the "We Tell Stories" digital fiction project from Penguin Books. This was the project a little while back that presented six alternative stories by six different authors over six weeks, each using a unique form of presentation. "The 21 Steps" by Charles Cumming was a story told through Google Maps; "Slice" by Toby Litt was a slow-motion horror story that was blog- and Twitter-based; "Fairy Tales" by Kevin Brooks was a fairy-tale-maker story with a simplistic, branching narrative similar to a CYOA; "Your Place and Mine" by Nicci French was a psychological thriller/horror story written in real time, one hour per day over five days, similar to improvisational storytelling; "Hard Times" by Matt Mason and Nicholas Felton was more like an essay, and may have succeeded less because of this.

The sixth and last story was "The (Former) General in His Labyrinth" by Mohsin Hamid, a tale about the now-former Pakistani leader written in a CYOA-with-HTML style, a sort-of hybrid CYOA/text adventure/dungeon map, as Hon put it, although I think that's a bit of a stretch. Still, the design is intriguing. The player essentially chooses the path through the story by clicking on different directional arrows; although the movement through the story is tracked visually with a "storymap", the movement is not location-based but rather story node-based. That is to say, movement to one particular node represents a particular choice and reveals a specific portion of the narrative, with the map showing which nodes have been visited, and the story changes based on the nodes that have been visited. What's interesting is that the author designed it such that there is no repetition when re-visiting nodes -- the story changes, even if slightly, when backtracking, and some portions of the storymap are designed as loops that "play" differently when traversed clockwise as opposed to counterclockwise. It sounded like a clever design, and I'll have to try that one for certain.
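
The no-repetition-on-revisit idea could be modeled very simply: each node carries a sequence of passages keyed to its visit count. This is just my guess at one way to structure it, not Six To Start's actual implementation:

```python
# Hedged sketch: a story node whose text varies with how many times
# it has been visited, so backtracking never repeats prose verbatim.
# Structure invented for illustration.

class StoryNode:
    def __init__(self, passages):
        self.passages = passages  # one passage per visit; last one repeats
        self.visits = 0

    def enter(self):
        """Return the passage for this visit, then advance the count."""
        text = self.passages[min(self.visits, len(self.passages) - 1)]
        self.visits += 1
        return text

courtyard = StoryNode([
    "You step into the courtyard for the first time.",
    "The courtyard again -- quieter now; the guards have moved on.",
])
print(courtyard.enter())  # first-visit passage
print(courtyard.enter())  # revisit passage
print(courtyard.enter())  # third visit falls back to the last passage
```

Direction-sensitive loops would follow the same pattern, keying the passage on which neighboring node the player arrived from rather than (or in addition to) the visit count.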

The remainder of Hon's talk was about storytelling in games, and although it covered much of the usual ground, it was refreshing to hear it from the perspective of the literary scene rather than the gaming scene. One point he specifically made was that most readers (as opposed to gamers) have little to no desire to see interactivity in their stories -- or, alternatively, they don't think interactivity automatically makes a story better -- which is interesting in light of the desire of some IF supporters to see the medium reach out to a more reader-oriented audience. He himself believed that stories in games are (or will be) better because they are interactive, but we still have a long way to go -- it's not easy to write a story, as he said, and a good story in a game requires writers with sufficient independence and the trust and respect of the designers. Like Andrew Stern's talk earlier in the day, it triggered a good deal of stimulating discussion from the audience, which was refreshing to see, and the sign of a thoughtful, engaging talk.
