Why aren’t we seeing more interactive music videos?
Music videos have always been fertile ground for experimental art because there is an existing script, soundtrack, and tone to work with. The music provides a ready-made creative framework, narrowing the focus to production.
So what hasn’t been successful about interactivity on the music video platform?
I explored these five pieces, each of which takes a unique approach to music video interactivity, to find some answers:
1. Ellie Goulding “Lights”
-User-navigated environment, emphasis on visuals
2. Arcade Fire “The Wilderness Downtown”
-User-stimulated environments, emphasis on bridging the gap between Arcade Fire and the audience
3. Cold War Kids “I’ve Seen Enough”
-User-controlled instrumentation, mild usage of interesting interactive visuals
4. Chairlift “Met Before”
-User-controlled narrative, emphasis on story, no interactive audio/visuals
5. Red Hot Chili Peppers “Look Around”
-User-controlled narrative, engaging content, many options to choose from
While most of these music videos used interactivity in a limited way, RHCP’s “Look Around” really engages the viewer by offering ample things to play around with. Allowing the user to scroll between four videos, each starring one member of RHCP doing something silly, provides enough footage that we don’t lose interest. Hidden in each video are highlighted items you can click to see personal footage of band members just being, well, human.
With the five music videos above, the interactive waters have been tested, and my diagnosis is that we need to make these videos more interesting: more content, more interaction, or more narrative. My favorite pieces were Chairlift’s “Met Before” and RHCP’s “Look Around” because the narrative pulled me in, and after all, isn’t that what makes a music video successful? Great music is what gets you a video, but it’s not what makes a video great.
Video has always been about telling a story, and you can’t engage with your audience without one. Cold War Kids’ “I’ve Seen Enough” impressed me with its creativity, but gave me no reason to stick around for the whole song after I’d worn out the interactive capabilities. Ellie Goulding’s “Lights” was awesome for about 30 seconds, but I lost interest once I realized I’d see nothing new if I kept playing.
Maybe as this technology becomes less fresh, we’ll start seeing some real blockbuster interactive music videos. Until then, I’ll just enjoy this one:
One of the many fascinating projects displayed at San Francisco’s Urban Prototyping Festival this year is “Pulse of the City.” Below are excerpts from their website as well as a wonderful article by Nathan Hurst at WIRED:
“The Pulse of the City team incorporated a lot more than one digital element. A heart-shaped sculpture, bigger than a parking meter, made of cardboard and auto body putty, Pulse linked an EKG board to a pair of copper handles to measure the pulse of anyone holding it. Then, with an Arduino, a midi shield, a handful of LEDs, and an XBee radio, it generated a light and music show, and shared pulse information to the web.
‘We programmed an algorithm that takes your heartbeat and makes a unique tempo, drum beat, and melody,’ said George Zisiadis, who created Pulse of the City with Matt Ligon and Rachel McConnell. ‘It’s the first time people ever have a sense of what they sound like.’
Of course, a portable cardboard structure isn’t quite ready to be a semi-permanent installation on a street corner somewhere. Like all the other projects, Pulse of the City was a prototype. And like the others, it’s open source. ‘We’re not going to travel around the country and install these, but anybody can,’ said Zisiadis, noting that plans for the device would be published on GitHub and Instructables.
In fact, the open-source nature of the projects represents both an opportunity and a risk for their dissemination. While it means that anyone who wishes could follow along and build their own, it doesn’t necessarily mean they will.
‘In terms of spreading, from the outside there’s somewhat of a mentality that open source is kind of magic, and if you put it out there, amazing things happen on their own,’ said Levitas. And while that’s partially true, it still takes outreach to spread the word. Now that the festival is over, GAFFTA plans to meet with each team to discuss how to proceed. Most will begin a crowdfunding campaign of some sort, said Levitas.
‘There’s not really a central node for public design, public technology,’ he said. ‘We hope to become sort of a central resource to that.'” (Nathan Hurst, WIRED)
And from the Pulse of the City UPF site:
“Pulse of the City playfully empowers pedestrians with self-awareness of their heart rates by translating them into unique musical compositions in real-time. It simultaneously streams this heart rate data to the internet for anyone to explore and analyze.
Project by George Zisiadis, Matt Ligon, Rachel McConnell and Rich Trapani.”
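The quoted description — a heartbeat driving a unique tempo, drum beat, and melody — can be sketched in a few lines. The team’s actual algorithm isn’t published here, so the mapping below (clamping BPM to a playable tempo and seeding a short pentatonic melody from the pulse) is purely illustrative:

```python
# Illustrative sketch only: the real Pulse of the City algorithm is not
# published here. This maps a heart rate to a tempo and a short melody
# that is unique (and repeatable) for each pulse reading.
import random

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def pulse_to_music(bpm, bars=2, beats_per_bar=4):
    """Map a heart rate (beats per minute) to a tempo and a short melody."""
    tempo = max(60, min(bpm, 180))   # clamp to a playable tempo range
    rng = random.Random(bpm)         # same pulse -> same melody
    melody = [rng.choice(PENTATONIC) for _ in range(bars * beats_per_bar)]
    return tempo, melody

tempo, melody = pulse_to_music(72)
```

In a setup like the one described, `tempo` and `melody` would then be sent out as MIDI events through the shield to produce the light-and-music show.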
In an unprecedented and long-awaited move, Microsoft has patented a new gaming console that blends projector and Xbox/Kinect technology to take the video game environment literally outside the box and into your home. The patent should serve to keep Google’s competing Interactive Spaces project at bay, a project that also uses projection and cameras to map locations and movement with blob-tracking. The console, touted as the Xbox 720/Kinect V2, projects a 360-degree video game display onto all four of your walls, immersing you in the game and turning your room into the game environment. It tracks furniture positions and adjusts the projection to visually eliminate the furniture from the environment.
Thanks to science, we are one step closer to creating the Holodeck. I’m so excited that this is happening in my lifetime. I think it’s something that every gamer has dreamed of at least once in his or her childhood. The project is estimated to be under construction for another few years. In the meantime, you can start working on your startle response so you don’t wet yourself when Left 4 Dead’s Hunter pops out from behind your bed.
Excerpts from the patent filing:

A data-holding subsystem holding instructions executable by a logic subsystem is provided. The instructions are configured to output a primary image to a primary display for display by the primary display, and output a peripheral image to an environmental display for projection by the environmental display on an environmental surface of a display environment so that the peripheral image appears as an extension of the primary image.
An interactive computing system configured to provide an immersive display experience within a display environment, the system comprising: a peripheral input configured to receive depth input from a depth camera; a primary display output configured to output a primary image to a primary display device; an environmental display output configured to output a peripheral image to an environmental display; a logic subsystem operatively connectable to the depth camera via the peripheral input, to the primary display via the primary display output, and to the environmental display via the environmental display output; and a data-holding subsystem holding instructions executable by the logic subsystem to: within the display environment, track a user position using the depth input received from the depth camera, and output a peripheral image to the environmental display for projection onto an environmental surface of the display environment so that the peripheral image appears as an extension of the primary image and shields a portion of the user position from light projected from the environmental display.
An immersive display environment is provided to a human user by projecting a peripheral image onto environmental surfaces around the user. The peripheral images serve as an extension to a primary image displayed on a primary display.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
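The claim’s final step — shielding the user’s position from the projected light — boils down to masking the peripheral image wherever the depth camera sees a person. The patent describes no code, so everything in this sketch (the grid representation, the function name, the simple blanking strategy) is my own illustrative assumption:

```python
# Illustrative sketch of the "shield the user from projected light" step.
# The representation and blanking strategy are assumptions, not Microsoft's code.

def mask_peripheral_image(peripheral, user_mask):
    """Black out pixels of the peripheral image that fall on the user.

    peripheral: H x W grid of (r, g, b) tuples to be projected on the room.
    user_mask:  H x W grid of booleans, True where the depth camera sees
                the user, already warped into projector coordinates.
    """
    return [
        [(0, 0, 0) if masked else pixel
         for pixel, masked in zip(row, mask_row)]
        for row, mask_row in zip(peripheral, user_mask)
    ]

# Tiny example: a bright frame with the user occupying the center region.
H, W = 4, 6
frame = [[(255, 255, 255)] * W for _ in range(H)]
user = [[1 <= y <= 2 and 2 <= x <= 3 for x in range(W)] for y in range(H)]
out = mask_peripheral_image(frame, user)
```

A real system would update `user_mask` every frame from the Kinect’s depth stream, so the dark region follows the player around the room.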
“A 3D gestural game. Using an IR 3D Camera, we translate gestures of the human body to navigate a virtual landscape.
Start the game, choose a character and try and get the fastest time through the race track.
This project was launched at Skellefteå airport in Northern Sweden as an installation.”
Credits: Interactive Institute Umeå, North Kingdom and Adopticum
Everything was shot in a studio, with three beamers (projectors) projecting onto the floor and two walls.
Directed by: Filip Sterckx
DOP: Pierre Schreuder
3D animation / Editing: Filip Sterckx
Production: Pierre Schreuder, Filip Sterckx
Technical support: Aitor Biedma
Production assistant: Nils Goddeeris
Thanks to: Het Depot, Stake5, Cools multimedia, Tom Brewaeys, Birgit Sterckx, Antoon Verbeeck, Pieter-Jan Boghe
Night Bright is an interactive installation where children physically interact with an imaginary nocturnal ecosystem. As explained by creator Design I/O:
“Night Bright is an interactive installation of nocturnal discovery where children use their bodies to light up the nighttime forest and discover the creatures that inhabit it. Listening to the creatures’ sounds children can locate them in the forest, as they play a nighttime game of hide and seek. Some creatures are curious and will investigate the light, while others are frightened and will hide in the shadows. Using their light, children can grow nocturnal plants and release fireflies from their flowers. The fireflies illuminate the environment and help locate the creatures hiding in the forest.
Night Bright was created for the Bumble children’s cafe in Los Altos, California.
Music for the video documentation courtesy of Diederik Idenburg / MOST Original Soundtracks.” (Design I/O)
Source: All images from Design I/O.
Dreamoc mixes 3D motion graphics with real objects, or no objects at all, to create a stunning holographic display. Marketed commercially, Dreamoc is made by the Denmark-based company RealFiction for retail advertising. Leave it to the Scandinavians! This kind of technology holds great possibilities for the arts and experiential realms as well. I’m eager to find out more about how exactly they use the glass pyramid to create the 3D holographic illusion.
Check out this demo video:
How do we create successful media in the 21st century? Ken Auletta’s book Googled provides some insight. I found it relevant not only because the book is a fantastic peek into Google’s inner workings, but because, as you might imagine, successful 21st-century media necessitates interactivity.
Albie Hecht, founder of Spike TV, former president of Nickelodeon, and current CEO of Worldwide Biggies, uses six criteria for selecting media projects. Referred to at Worldwide Biggies as the “Six Levels of Engagement,” meeting four of the criteria below suggests a promising project, while meeting all six indicates a “hit.” The following content comes directly from Albie Hecht’s words as printed in Googled (Auletta, p. 146).
1. Watch (on any device)
2. Learn (by searching for information about it on the Web)
3. Play (games)
4. Connect (social networks, IM)
5. Collect (microtransactions involving money on the Web)
6. Create (user-generated content)
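Hecht’s rubric is simple enough to express as a checklist: four of six levels suggest a promising project, all six a hit. A quick illustrative scoring function (the level names come from the list above; the “pass”/“promising”/“hit” labels are my own shorthand):

```python
# Score a media project against Albie Hecht's "Six Levels of Engagement".
# Thresholds (4 of 6 = promising, 6 of 6 = hit) follow the passage above;
# the return labels are my own shorthand.
LEVELS = ("watch", "learn", "play", "connect", "collect", "create")

def rate_project(met):
    """`met` is the set of engagement levels the project satisfies."""
    score = sum(1 for level in LEVELS if level in met)
    if score == len(LEVELS):
        return "hit"
    if score >= 4:
        return "promising"
    return "pass"

# A project meeting every level scores as a "hit".
rating = rate_project({"watch", "learn", "play", "connect", "collect", "create"})
```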
It is no coincidence that this passage comes just before Auletta introduces Google’s 2006 acquisition of YouTube for $1.65 billion: YouTube hits all six criteria, even though many cable networks doubted the deal and condemned it as a failed revenue-maker.
To go into detail here, YouTube lets you:
1. Watch on computers, smart phones, tablets, and GoogleTV (but not Roku, a decision made by Google themselves)
2. Learn by easily searching (convenient with Google) for YouTube videos. Also, YouTube videos pop up in search results when searching how-to’s and instructional videos.
3. Play: while to my knowledge YouTube doesn’t [yet] host interactive videos in the same vein as Vimeo’s Old Spice Muscle Music video, you can create playlists, subscribe, watch trailers for games, watch game walk-throughs, and browse videos in the “Gaming” section. There’s also an aspect of playfulness in the user-generated annotations.
4. Connect by sharing videos and playlists, creating video responses to other users and videos, and, of course, the comment section. The comment section needs attention, as it’s filled with racism, discrimination, rudeness, and sometimes just plain evil. Is Google/YouTube responsible for its users’ comments? To some degree. While censoring free speech would violate Google’s policy, the sheer number of hate-speech comments is beyond the scope of its user-managed offensive-content removal strategy.
5. Collect revenue by selling ad space not only on the pages but on the videos themselves, either in commercial form (such as Vevo) or as lower-third pop-ups, which you can close but not prevent.
6. Create user-generated content: well, this one’s easy! Users upload all sorts of home and mobile videos, video blogs, and karaoke recordings that go viral and even skyrocket people to fame. Did YouTube ever think it would be responsible for Justin Bieber? I would like to know.
Lastly, I wanted to apply these criteria to the Draw Something app, but I’ll let readers think about that one for now.