Microsoft’s new technology transforms your room into a video game

In an unprecedented and long-awaited move, Microsoft has patented a new gaming console that blends projector and Xbox/Kinect technology to take the video game environment literally outside the box and into your home.  The patent should help keep Google’s competing Interactive Spaces project at bay, a project that also uses cameras and projection to map locations and movement using blob tracking.  The console, touted as the Xbox 720/Kinect V2, projects a 360-degree game display onto all four of your walls, encompassing you in the game and turning your room into the game environment.  It also tracks the positions of your furniture and adjusts the projection to visually eliminate them from the scene.
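
If you haven’t run into the term, “blob tracking” just means finding the connected clumps of foreground pixels in a camera or depth image and following their centers from frame to frame.  Here’s a minimal sketch of the idea in Java; the frame format and every name in it are my own illustration, not code from Microsoft or Google:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Minimal blob detector: flood-fills connected foreground pixels and returns centroids. */
public class BlobTracker {

    /** A detected blob: centroid position and pixel count. */
    public record Blob(double cx, double cy, int size) {}

    /**
     * Finds blobs in a binary foreground mask (true = person/object, false = background).
     * Uses 4-connected flood fill; blobs smaller than minSize are discarded as noise.
     */
    public static List<Blob> findBlobs(boolean[][] foreground, int minSize) {
        int h = foreground.length, w = foreground[0].length;
        boolean[][] visited = new boolean[h][w];
        List<Blob> blobs = new ArrayList<>();

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!foreground[y][x] || visited[y][x]) continue;

                // Flood-fill one connected component, accumulating its centroid.
                long sumX = 0, sumY = 0;
                int count = 0;
                Deque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[] {x, y});
                visited[y][x] = true;

                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int px = p[0], py = p[1];
                    sumX += px; sumY += py; count++;
                    int[][] neighbors = {{px + 1, py}, {px - 1, py}, {px, py + 1}, {px, py - 1}};
                    for (int[] n : neighbors) {
                        int nx = n[0], ny = n[1];
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h
                                && foreground[ny][nx] && !visited[ny][nx]) {
                            visited[ny][nx] = true;
                            stack.push(new int[] {nx, ny});
                        }
                    }
                }
                if (count >= minSize) {
                    blobs.add(new Blob((double) sumX / count, (double) sumY / count, count));
                }
            }
        }
        return blobs;
    }
}
```

Tracking across frames then amounts to matching each new centroid to the nearest blob from the previous frame.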

Thanks to science, we are one step closer to creating the Holodeck.  I’m so excited that this is happening in my lifetime.  I think it’s something every gamer has dreamed of at least once in his or her childhood.  The project is expected to be in development for another few years, though.  In the meantime, you can start working on your startle response so you don’t wet yourself when Left 4 Dead’s Hunter pops out from behind your bed.

Here’s some more technical context for the “Immersive Display Experience” patent:

A data-holding subsystem holding instructions executable by a logic subsystem is provided. The instructions are configured to output a primary image to a primary display for display by the primary display, and output a peripheral image to an environmental display for projection by the environmental display on an environmental surface of a display environment so that the peripheral image appears as an extension of the primary image.

An interactive computing system configured to provide an immersive display experience within a display environment, the system comprising: a peripheral input configured to receive depth input from a depth camera; a primary display output configured to output a primary image to a primary display device; an environmental display output configured to output a peripheral image to an environmental display; a logic subsystem operatively connectable to the depth camera via the peripheral input, to the primary display via the primary display output, and to the environmental display via the environmental display output; and a data-holding subsystem holding instructions executable by the logic subsystem to: within the display environment, track a user position using the depth input received from the depth camera, and output a peripheral image to the environmental display for projection onto an environmental surface of the display environment so that the peripheral image appears as an extension of the primary image and shields a portion of the user position from light projected from the environmental display.

[0002] An immersive display environment is provided to a human user by projecting a peripheral image onto environmental surfaces around the user. The peripheral images serve as an extension to a primary image displayed on a primary display.


Source: US Patent via WP7’s site.
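
If the claim language above is hard to parse, the “shields a portion of the user position” part boils down to: use the depth camera to find where you’re standing, then black out the projector pixels that would otherwise hit you in the face.  Here’s a rough sketch of that step, with all names hypothetical since the patent publishes no code:

```java
import java.awt.Rectangle;

/** Sketch of the claim's "shield the user from projected light" step. */
public class PeripheralMasker {

    /**
     * Zeroes out the projector pixels that would land on the user.
     *
     * @param peripheralImage ARGB pixels of the peripheral (wall) image, row-major
     * @param width           image width in pixels
     * @param userRegion      the user's position in projector coordinates, e.g. a
     *                        depth-camera blob mapped through calibration
     * @param marginPx        extra border so small tracking errors still shield the eyes
     */
    public static void shieldUser(int[] peripheralImage, int width,
                                  Rectangle userRegion, int marginPx) {
        int height = peripheralImage.length / width;
        int x0 = Math.max(0, userRegion.x - marginPx);
        int y0 = Math.max(0, userRegion.y - marginPx);
        int x1 = Math.min(width, userRegion.x + userRegion.width + marginPx);
        int y1 = Math.min(height, userRegion.y + userRegion.height + marginPx);

        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                peripheralImage[y * width + x] = 0xFF000000; // opaque black: no light
            }
        }
    }
}
```

Re-run every frame against fresh depth data, the black “shadow” would follow the player around the room.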

 


Google Steals Holodeck Idea from Star Trek with “Interactive Spaces” and it’s Fantastic

Remember when you first discovered the Holodeck, and tried to recreate it with cardboard and paper cutouts in your room? Maybe that was just me.  But get ready, because when you read this article about Google’s new software framework that creates interactive experiences in real physical space, you’re going to be blown away.  Called Interactive Spaces, the open source project was released through Google’s blog this past Monday, and you can check out a copy of the post (written by Keith Hughes of the Experience Engineering Team) below:

“There are cameras in the ceiling which are doing blob tracking, in this case the blobs are people walking on the floor. The floor then responds to the blobs by having colored circles appear underneath the feet of someone standing on the floor and then having the circles follow that person around.
 
Interactive Spaces works by having “consumers” of events, like the floor, connect to “producers” of events, like those cameras in the ceiling. Any number of “producers” and “consumers” can be connected to each other, making it possible to create quite complex behavior in the physical space.
 
Interactive Spaces is written in Java, so it can run on any operating system that supports Java, including Linux and OSX and soon Windows.
 
Interactive Spaces provides a collection of libraries for implementing the activities which will run in your interactive space. Implementing an activity can require anything from a few lines in a simple configuration file to you creating the proper interfaces entirely from scratch. The former gets you off the ground very quickly, but limits what your activity can do, while the latter allows you the most power at the cost of more complexity. Interactive Spaces also provides activities’ runtime environment, allowing you to deploy, start, and stop the activities running on multiple computers from a central web application in your local network.
 
Additional languages like Javascript and Python are supported out of the box. Native applications can also be run, which means packages like openFrameworks which use C++ are also supported out of the box. Plans are also underway for supporting the Processing language.”
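
The producer/consumer wiring Hughes describes is essentially a publish/subscribe pattern.  The post doesn’t show Interactive Spaces’ actual API, so the sketch below is just my own minimal Java illustration of the idea: any number of producers (the ceiling cameras) publish events to any number of consumers (the floor):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/** Minimal publish/subscribe bus: producers publish events, consumers react. */
public class SpaceEventBus<E> {
    private final List<Consumer<E>> consumers = new CopyOnWriteArrayList<>();

    /** A consumer (e.g. the interactive floor) registers interest in events. */
    public void subscribe(Consumer<E> consumer) {
        consumers.add(consumer);
    }

    /** A producer (e.g. a ceiling camera) publishes an event to every consumer. */
    public void publish(E event) {
        for (Consumer<E> c : consumers) {
            c.accept(event);
        }
    }

    /** Example event: a tracked person's position on the floor. */
    public record BlobEvent(int id, double x, double y) {}

    public static void main(String[] args) {
        SpaceEventBus<BlobEvent> bus = new SpaceEventBus<>();

        // The floor consumes blob positions and draws a circle under each person.
        bus.subscribe(e -> System.out.printf(
                "floor: draw circle under person %d at (%.1f, %.1f)%n", e.id(), e.x(), e.y()));

        // A ceiling camera produces blob positions as people move.
        bus.publish(new BlobEvent(1, 2.0, 3.5));
        bus.publish(new BlobEvent(1, 2.1, 3.4));
    }
}
```

In a real space, the publish calls would come from the camera pipeline, and the consumer would drive the floor’s projection instead of printing to the console.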

No big deal or anything, I’m just sort of FREAKING OUT right now.  This is incredibly cool.  I also dig the planned support for Processing, the language I’m currently learning, which is fantastic for creating graphics.  Google seems to think Processing’s graphics capabilities will make a great addition to Interactive Spaces by heightening its visual aesthetic.

I’d like to mention Rockwell Group here, as they collaborated with Google on the project’s initial designs.  They’re a company based in New York and Europe whose LAB division “creates narratives and new design opportunities that provide deeper and more valuable experiences for visitors and inhabitants.”  Check out the LAB’s bio below:

“In general, the ambition of the LAB is to explore, experiment, and embed interactive experiences augmented with digital technology in objects, environments and stories. This activity includes in-house design and the creation of interactive environments/objects, scripting software, science and technology consultation, and maintaining networks of technology solution providers. Our toolkit includes working with custom hardware and software for RFID, UPC scanning, video processing, sonar, capacitance, shape memory alloy, LED and lighting technologies, wireless communications, and screen-based dynamically composited animation” (rockwellgroup).

I look forward to seeing what happens with Interactive Spaces while being super jealous that I have neither this awesome toy to play with nor the knowledge to make it myself!