Microsoft’s new technology transforms your room into a video game

In an unprecedented and long-awaited move, Microsoft has patented a new gaming console that blends projector and Xbox/Kinect technology to take the video game environment literally outside the box and into your home.  The patent should also help keep Google’s competing Interactive Spaces project at bay, which likewise uses projection and cameras to map locations and movement via blob tracking.  The console, touted as the Xbox 720/Kinect V2, projects a 360-degree game display onto all four of your walls, surrounding you and turning your room into the game environment.  It tracks the positions of your furniture and adjusts the projection to visually eliminate them from the scene.

Thanks to science, we are one step closer to creating the Holodeck.   I’m so excited that this is happening in my lifetime.  I think it’s something that every gamer has dreamed of at least once in his or her childhood.  The project is expected to be in development for another few years.  In the meantime, you can start working on your startle response so you don’t wet yourself when Left 4 Dead’s Hunter pops out from behind your bed.

Here’s some more technical context for the “Immersive Display Experience” patent (source: US Patent via WP7’s site):

A data-holding subsystem holding instructions executable by a logic subsystem is provided. The instructions are configured to output a primary image to a primary display for display by the primary display, and output a peripheral image to an environmental display for projection by the environmental display on an environmental surface of a display environment so that the peripheral image appears as an extension of the primary image.

An interactive computing system configured to provide an immersive display experience within a display environment, the system comprising: a peripheral input configured to receive depth input from a depth camera; a primary display output configured to output a primary image to a primary display device; an environmental display output configured to output a peripheral image to an environmental display; a logic subsystem operatively connectable to the depth camera via the peripheral input, to the primary display via the primary display output, and to the environmental display via the environmental display output; and a data-holding subsystem holding instructions executable by the logic subsystem to: within the display environment, track a user position using the depth input received from the depth camera, and output a peripheral image to the environmental display for projection onto an environmental surface of the display environment so that the peripheral image appears as an extension of the primary image and shields a portion of the user position from light projected from the environmental display.
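The last part of that claim boils down to: track the user with the depth camera, then black out the portion of the projected peripheral image that would otherwise land on them. Here’s a minimal sketch of that shielding step in Python with NumPy; the function name, the circular shield shape, and the pixel-space coordinates are my own assumptions for illustration, not anything specified in the patent:

```python
import numpy as np

def shield_user(peripheral_image: np.ndarray,
                user_center: tuple,
                radius: int) -> np.ndarray:
    """Zero out a circular region of the peripheral image around the
    tracked user position, so no light is projected onto the user."""
    h, w = peripheral_image.shape[:2]
    ys, xs = np.ogrid[:h, :w]          # row/column coordinate grids
    cx, cy = user_center               # user position in projector pixels
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    shielded = peripheral_image.copy()
    shielded[mask] = 0                 # projector emits no light here
    return shielded

# toy frame: a 4x6 all-white "peripheral image"
frame = np.full((4, 6), 255, dtype=np.uint8)
out = shield_user(frame, user_center=(3, 2), radius=1)
```

In a real system the user position would come from the depth camera each frame, and the mask would presumably follow the user’s full silhouette rather than a circle, but the principle is the same.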

[0002] An immersive display environment is provided to a human user by projecting a peripheral image onto environmental surfaces around the user. The peripheral images serve as an extension to a primary image displayed on a primary display.

[0003] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.



Google Steals Holodeck Idea from Star Trek with “Interactive Spaces” and it’s Fantastic

Remember when you first discovered the Holodeck, and tried to recreate it with cardboard and paper cutouts in your room? Maybe that was just me.  But get ready, because when you read this article about Google’s new software framework that creates interactive experiences in real physical space, you’re going to be blown away.  Called Interactive Spaces, the open source project was released through Google’s blog this past Monday, and you can check out a copy of the post (written by Keith Hughes of the Experience Engineering Team) below:

“There are cameras in the ceiling which are doing blob tracking, in this case the blobs are people walking on the floor. The floor then responds to the blobs by having colored circles appear underneath the feet of someone standing on the floor and then having the circles follow that person around.
Interactive Spaces works by having “consumers” of events, like the floor, connect to “producers” of events, like those cameras in the ceiling. Any number of “producers” and “consumers” can be connected to each other, making it possible to create quite complex behavior in the physical space.
Interactive Spaces is written in Java, so it can run on any operating system that supports Java, including Linux and OSX and soon Windows.
Interactive Spaces provides a collection of libraries for implementing the activities which will run in your interactive space. Implementing an activity can require anything from a few lines in a simple configuration file to you creating the proper interfaces entirely from scratch. The former gets you off the ground very quickly, but limits what your activity can do, while the latter allows you the most power at the cost of more complexity. Interactive Spaces also provides activities’ runtime environment, allowing you to deploy, start, and stop the activities running on multiple computers from a central web application in your local network.
Additional languages like Javascript and Python are supported out of the box. Native applications can also be run, which means packages like openFrameworks which use C++ are also supported out of the box. Plans are also underway for supporting the Processing language.”
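The producer/consumer wiring Hughes describes can be sketched in a few lines. This is a toy Python illustration of the pattern only, not Interactive Spaces’ actual Java API; the class and method names here are made up:

```python
class Producer:
    """An event source, e.g. ceiling cameras doing blob tracking."""
    def __init__(self):
        self.consumers = []

    def connect(self, consumer):
        # any number of consumers can subscribe to one producer
        self.consumers.append(consumer)

    def publish(self, event):
        for consumer in self.consumers:
            consumer.on_event(event)

class FloorDisplay:
    """An event consumer: draws a circle under each tracked blob."""
    def __init__(self):
        self.circles = []

    def on_event(self, event):
        # event carries the (x, y) floor position of a tracked person
        self.circles.append(event)

cameras = Producer()
floor = FloorDisplay()
cameras.connect(floor)
cameras.publish((2.0, 3.5))  # a blob detected at floor position (2.0, 3.5)
```

Because producers and consumers only know about each other through this connect/publish interface, you can chain any number of them together, which is where the "quite complex behavior" in a physical space comes from.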

No big deal or anything, I’m just sort of FREAKING OUT right now.  This is incredibly cool.  I also dig the added support for Processing, the language I’m currently learning, which is fantastic for creating graphics.  Google thinks Processing’s graphics capabilities will make a great addition to Interactive Spaces by heightening its visual aesthetic.

I’d like to mention Rockwell Group here, as they collaborated with Google on the project’s initial designs.  They’re a company based in New York and Europe whose LAB division “creates narratives and new design opportunities that provide deeper and more valuable experiences for visitors and inhabitants.”  Check out the company’s bio below:

“In general, the ambition of the LAB is to explore, experiment, and embed interactive experiences augmented with digital technology in objects, environments and stories. This activity includes in-house design and the creation of interactive environments/objects, scripting software, science and technology consultation, and maintaining networks of technology solution providers. Our toolkit includes working with custom hardware and software for RFID, UPC scanning, video processing, sonar, capacitance, shape memory alloy, LED and lighting technologies, wireless communications, and screen-based dynamically composited animation” (rockwellgroup).

I look forward to seeing what happens with Interactive Spaces while, at the same time, being super jealous that I don’t have this awesome toy to play with nor the knowledge to make it myself!

Try Googling “Zerg Rush” for an Interactive Google Experience…

This morning my brother told me to google “zerg rush,” which, to anyone who doesn’t play Starcraft, sounds like complete gibberish.  Still unsure if it was work-friendly or if Zerg was a word for some kind of freaky porn, I decided to take the risk and do it anyway.  For the ladies in the house who don’t dork out on the regsies: zerg rush is a Starcraft reference for battling insectoids.

When you google ‘zerg rush,’ the O’s from Google’s logo launch an attack on your search results, eating them up unless you have extraordinary index finger strength to click them all away before they do any damage.  Even then, you will still lose; I don’t care how good you are at Starcraft.  When the deed is done, you have the option to submit, share, and compare your score with others.

This is the first interactive search engine I’ve ever seen, and it means exciting things for the future.  What do you gain from launching a completely interactive search engine?  Well, in Google’s case, it does a few things:

1. They’ve pioneered the interactive search engine style, claiming it as theirs.
2. They’ll draw even more people to their site as this goes viral, attracting fringe audiences, gamers, and nerds.
3. They can test how well interaction on Google performs and how many people it draws, so it serves as a prototype for future interactive features.

So it could, and probably will, be a huge moneymaker for them.

Google didn’t really need to test this out, since it’s pretty obvious that adding interactivity to any commercial product will take you straight to the bank.  We can’t help it; we’re a curious species, and we’ve been wanting to push random buttons since we were toddlers (how often did you argue with siblings over who got to push the elevator button?). But I’m glad they did, because they’re giving us a taste of what’s yet to come, and I can’t wait to see more.

If you’re still reading this and you haven’t googled Zerg Rush yet, what’s wrong with you?? Go do it!