Through luck or foresight, Sony appear to be converging on something of a home technology miracle – but to see their approach clearly we should first step back and take a look at the development of 3D.
Do we believe in 3D yet?
We’ve had 3D video content for a long time; it just wasn’t good enough to become more than a novelty. A few years ago I caught a screening of The Creature from the Black Lagoon in the original old-school anaglyphic (red/blue) 3D. While it was an interesting novelty, it was clearly not a compelling enough experience to beat movies in 2D and colour.
Despite its naysayers, the modern 3D cinema experience has gained so much traction that on any given trip to the multiplex you’re almost certain to find at least one new 3D release, and the box office takings continue to be respectable; the business case for cinemas to upgrade their projectors was conveniently boosted by bundling in an upgrade to digital, killing two birds with one stone. It seems that audiences are prepared to accept the costs (financial, but also the inconvenience of wearing the glasses, not being able to tilt your head, and a slight reduction in brightness) since the result is (usually) sufficiently impressive. The fact that the conversation has moved on to the quality of the 3D (or lack of it, as seen in the hasty post-production 3D of the recent Alice in Wonderland and Clash of the Titans) is surely a good sign for acceptance of the medium. Designer and prescription versions of the glasses also suggest that we are at the next stage of technology adoption.
On other screens, the field is still nascent. Predictably, the first consumer version of autostereoscopic 3D, with its look-no-glasses magic, is due to appear on a small screen (to make the cost bearable) designed for a single fixed-position viewer (as is required by the technology), backed by an experienced player in innovative interfaces: the forthcoming Nintendo 3DS.
In television, active shutter 3D at first seems to be a strange proposition: each viewer must have a pair of active shutter glasses, which will seem expensive in comparison to the well-established polarisation glasses used in cinemas and available for some 3D TVs. On the other hand, the advantage is that many 120Hz televisions are already able to produce active shutter 3D imagery. Despite the perception of being uber-early-adopter territory, 3D televisions are effectively already here.
Then there’s the equally amazing fact that a few months ago Sony rolled out a PS3 upgrade to support 3D, removing another hardware barrier – 3D players are already here, in the form of 38 million PS3 consoles.
Meanwhile, in the console wars
Here’s where things get really interesting. Nintendo, Microsoft and Sony are all pushing for new modes of interaction for the games console. Nintendo took a huge gamble but secured an early lead with the Wii in 2006 (remember how the name first sounded to you and you’ll probably experience a flashback to just how crazy the whole idea seemed at the time).
Microsoft claim to have achieved interface nirvana with the entirely controllerless Kinect – even the oft-cited screens of Minority Report needed a peripheral to operate – although it remains to be seen whether it performs as impressively as it looks, and accuracy is still an open question.
Given the above, Sony’s decision to back what is widely seen as just a more accurate version of the Wii’s system seems a bit baffling. Being a PS3 owner, and curious to understand what Sony is thinking, I recently picked up the Move Starter Pack myself.
The answer became abundantly clear as soon as I tried the demo of Tumble, a very simple stack-em-up knock-em-down game. Your movements of the controller – including depth and rotation, which feels somehow much more impressive than movement in the plane – are mapped to an on-screen version that can pick up each brick (see image at top). It’s an impressive technological trick, but it then immediately demonstrates the next problem to solve: there is no depth perception, and you have to rely on a virtual shadow that indicates exactly which part of the playfield is directly below the object you are holding.
And so it suddenly becomes clear that Sony has brought all the ingredients together for interactive augmented reality. The 3D TVs are already here, the players are already here, and with the Move we suddenly have our 3D controller – which means the hardware for proper augmented reality in the home is pre-installed, just waiting for the right software. The final ingredient is the active shutter glasses, which may seem expensive and clunky when used purely for 3D viewing, but I suspect that image will fall away the moment you can put them on and see yourself holding a lightsabre, interacting directly with a 3D virtual environment.
The fact this only works within a field-of-view that includes your TV screen is a limitation, certainly; and the question of whether or not all this can actually be used to create compelling games or usable interfaces remains to be seen – but we can rely on Nintendo to begin exploring this space intelligently with the 3DS, possibly followed by Apple, since the tablet form factor is the natural successor in autostereoscopic 3D.
Or Sony could just have got here accidentally, in which case I can only hope they read this blog.
Now that the dust has settled and the initial spike of test activity has dropped off, we’re starting to see what the fascinating combination of having both a wide pre-installed base and some very interesting functionality is achieving for Google Buzz.
It turns out that Buzz has all the key ingredients (functionality, convenience, and users) to kick off public location-based discussion, which is a pretty big deal.
Recognising that first impressions count, Google haven’t yet allowed visibility into this brave new world from the browser, although it is possible to use a workaround and get a taste of what this means – see the example screenshot above.
Here’s a bit of theory. There are only two dimensions that really matter – time and space. And the most relevant ends of those spectra are right now and right here. Part of the appeal of Twitter was that you could find out about things happening right now. Google Buzz now takes us the final step of the way.
Just consider what this could look like a few years from now.
The world of evening venues suddenly becomes an efficient market. Buzz will tell you which venues are empty, which are too crowded and where the really interesting people are.
Imagine shoppers operating with a hive mind, homing in collectively on the most compelling local special offers, guided by the invisible hand of Google’s algorithms highlighting only the most relevant buzz – and imagine shops monitoring and reacting to that buzz.
Finally, imagine decades of quiet resentment between neighbours too polite for direct confrontation suddenly exploding into all-out Buzz-enabled flame wars over late night music, post stealing, and territorial hedge issues.
Saying that this is one to watch is an understatement.
In the final post in this series, having introduced Location Based Services and described how HP’s Mscape platform is blazing a trail in this space, it’s time to consider the implications for digital marketing.
It’s extremely challenging to fully grasp the potential of having access to location data on a mobile device. One way to approach the problem is to consider that the most useful sites online have proved to be those that help us to search for things and/or choose from the things that we have found. I segment search/choice-making strategies (implemented either by a user or by an algorithm they choose to run) into four main types: Objective, Curated, Similarity-based and Profile-based.
The Objective strategy is the simplest: we have just one or two well-defined, objective criteria in mind, and any choice that satisfies them will do. Perhaps we just want the cheapest (or most expensive) wine on the menu; the most popular camcorder on Amazon that costs under £250; the mobile contract that gives the most minutes for £25 a month.
When we consider how location based services (LBS) might help this strategy, the most obvious benefit is being able to search for places by distance – so we can find the nearest free WiFi zone, or cashpoint, or public convenience. When combined with other criteria, this can become even more useful – find the cheapest beer within 10 minutes, or the most popular tourist attraction within half an hour.
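To make the ‘cheapest beer within 10 minutes’ idea concrete, here’s a minimal sketch in Python – with entirely made-up venues, coordinates and prices – that filters places by walking distance (great-circle distance via the haversine formula) and then picks the cheapest of what remains:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical venue data: (name, lat, lon, price of a pint in pounds)
pubs = [
    ("The Crown", 51.5155, -0.0922, 3.20),
    ("The Anchor", 51.5074, -0.0877, 2.80),
    ("The George", 51.5033, -0.0870, 3.50),
]

def cheapest_within(here_lat, here_lon, venues, walking_kmh=5.0, max_minutes=10):
    """Return the cheapest venue reachable on foot within max_minutes, or None."""
    max_km = walking_kmh * max_minutes / 60.0
    nearby = [v for v in venues
              if haversine_km(here_lat, here_lon, v[1], v[2]) <= max_km]
    return min(nearby, key=lambda v: v[3]) if nearby else None

print(cheapest_within(51.5079, -0.0878, pubs))
```

A real service would of course use street-network walking times rather than straight-line distance, but the query shape – filter by location, then rank by the objective criterion – stays the same.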
The other benefit of mobile services is having access to time-sensitive information. If you wanted to travel to, say, Baker Street, a service could take into account real-time public transport data, and find the best possible way to get there from where you stand right now.
There’s a further benefit – if usage of these kinds of services reaches critical mass, we will suddenly have much more efficient ‘load balancing’ across all kinds of services, as people can find pubs, buses, facilities etc that are below peak capacity.
When we seek something curated, we use a trusted curator – be it an individual or a group – to help us narrow the field of what to choose. We might listen to a particular DJ to find new music, rely on a few critics and a trusted friend to highlight movies we should seek out, or a community like Digg to direct our attention online.
Although we can already choose places based on a trusted curator – reviews of restaurants being a common example – LBS will still massively improve this type of location-finding. The ease of access to this information while on the move makes it much more appealing, because we can discover interesting places that happen to be close to us that we wouldn’t usually make a specific trip to see, and we don’t have to print out a map in advance.
Possible curated services could include places where a particular celebrity has been spotted, shops that your Facebook friends spend a lot of time in, places that have been mentioned on Boing Boing, great examples of graffiti identified by a Flickr group, or instead finding places to avoid by overlaying geographic crime data.
In the Similarity-based strategy, we narrow the field by looking for something similar to things we have enjoyed before. Musically, this might mean trying new music in a genre we already know we enjoy. Pandora.com takes this further: a team of musicians classifies thousands of songs, allowing the service to stream music to a listener that is similar to a given song or artist they choose. Of course, because similarity matching works in the same way for everyone, this is one of the most useful ways shops can lay out their wares.
When it comes to LBS, passively recorded data of places you tend to visit can be used to find similar locations, and alert you when you happen to be close by to somewhere of potential interest. For example, if you visit modern art galleries whenever you are in a new city, a similarity-matching algorithm could notice this, and any time you are 10 minutes away from one you’ve not yet discovered your device could alert you.
In true permission-marketing style, you might opt-in to a service from Wagamama that will alert you if you are within 10 minutes of one of their restaurants and it’s between 6pm and 8pm. Similarity matching could then take this further by identifying nearby places that serve similar food. If you prefer niche rather than chain clothing shops, similarity matching can identify this and point them out to you. (Indeed, a key factor in the success of chains such as Starbucks or HMV is that people can choose to visit a new one knowing roughly what to expect. LBS has the potential to provide much better information about shops you’ve never heard of, at the very instant you are trying to decide whether to go in or not. As LBS takes off, chains may find themselves under threat).
More advanced similarity-matching algorithms might identify (from automatic time/geo-tagging of your Flickr uploads) that you often take photos of sunsets, and could let you know when there’s a beautiful view just around the corner. They could identify pubs/clubs that are frequented by people like you, or find public transport routes that match your preferred balance between scenery and efficiency. Wherever there is tagged location data, similarity-matching will naturally arise.
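A toy illustration of the gallery example above, with an invented visit log and place names: infer the user’s favourite categories from passively logged visits, then flag nearby, not-yet-visited places in those categories.

```python
from collections import Counter

# Hypothetical visit history: categories of places the user has visited,
# as might be passively logged by a location-aware device.
visit_log = ["cafe", "modern art gallery", "cafe", "modern art gallery",
             "modern art gallery", "bookshop"]

def favourite_categories(log, min_visits=3):
    """Categories visited often enough to count as a preference."""
    counts = Counter(log)
    return {cat for cat, n in counts.items() if n >= min_visits}

def alerts(nearby_places, log):
    """Suggest nearby, unvisited places in the user's favourite categories."""
    favourites = favourite_categories(log)
    return [name for name, category, visited in nearby_places
            if category in favourites and not visited]

# Hypothetical nearby places: (name, category, already visited?)
nearby = [
    ("Arnolfini", "modern art gallery", False),
    ("Corner Cafe", "cafe", True),
    ("Skate Park", "skate spot", False),
]
print(alerts(nearby, visit_log))
```

Real similarity matching would work over richer features than a single category label, but even this crude frequency threshold captures the core loop: observe, infer a preference, alert when location makes it relevant.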
Profile-based choice has only recently become an option at all. Not so long ago, the closest we could get would have been speaking to that one employee at the record shop who knows what you’ve enjoyed in the past and can identify what new music you would love.
This sounds like a curated choice, but there is an important distinction. When we choose something curated, we leverage the fact that we know and understand someone’s way of thinking; in profile matching, we instead use the fact that someone (or an algorithm) understands our own way of thinking.
The modern version of this service is provided by Last.fm, which processes the listening data of thousands of users in order to predict for any one individual what other music they may enjoy. One of the reasons this works so well is that Last.fm can capture the music you listen to through your PC or laptop. This gives it access to a greater quantity and quality of data than if you had to manually tell it your preferences.
Similarly, LBS could track your everyday movements (both real and virtual) and use these to build an accurate picture of your preferences. If sufficiently well programmed, it can then make intelligent recommendations. You watch Sci-Fi films at the cinema and go to Forbidden Planet – you might like this obscure retro collectible shop that’s down the back street you are about to walk past. You go to art shops and once went to Amecon – you’d probably like this manga art exhibition. You often move at a speed consistent with skateboarding – there’s a great skate location just around the corner. You favourited a Banksy book on Amazon – there’s a Banksy on the street across from here. You subscribe to techy RSS feeds – Inamo, a restaurant with a digitally projected table interface, has a table free for two right now and is 2 minutes away.
This is the top of the tip of the iceberg
Those were just a few examples of using LBS to help choose something, but the potential generalises so much further. There are many more possibilities for each of the four strategies outlined, and you can always mix and match these strategies. There will of course be applications beyond choosing, and beyond using just the location data of your own mobile device. Then there’s the possibilities offered by platforms like Mscape, enhancing the real world with parallel virtual worlds. New business models and new marketing opportunities will inevitably emerge.
So what can we do right now?
Despite relatively low penetration of suitable mobile devices, the conditions are already in place for an LBS killer app to emerge – and this would then drive further support. Although several mobile operating systems already have the potential to support LBS (including Symbian, the dominant OS among smartphones), the greatest opportunity right now sits with the iPhone and the G1. This is because they also each operate a single hub which makes downloading applications easy – the iPhone App Store, and Google’s Android Market. Each has its limitations – iPhone apps can’t yet run in the background (crucial for passive LBS), and the Android Market is some way behind in terms of maturity. But these hurdles can be overcome very quickly.
Keep an eye on interesting and well-implemented LBS applications that are coming out right now. Or better still, make one.
In the video “Roku’s Reward” (above), we are shown HP’s vision of what Mscape might be able to do in the future. Apart from a crushingly unimaginative take on the target audience (after the Wii has already established that innovation in games can engage more than just teenage boys), it suggests a few interesting ideas.
Unfortunately the most noticeable one – viewing ‘through’ a device to see a virtual world replace the real one in real-time – is the furthest from fruition, and also arguably the least practical. It certainly seems highly unlikely that their prediction in April 2007 – that such technology was just two years away – will come to pass.
Right now, the Mscape platform allows anyone to program ‘mediascapes’, which are Flash- or HTML-based applications that can take advantage of the GPS data provided by a mobile device. Despite being a fairly hefty compromise on the vision set out in Roku’s Reward, this is still enough to create some very interesting new experiences.
User-created mediascapes can be uploaded to the Mscape site for anyone to download, try, or edit for their own purposes. The site has been running since April 2007, but hasn’t reached a huge audience (the most popular mediascape clocking just 672 downloads at the time of writing), partly because of the relatively narrow hardware requirements.
Another factor that constrains growth is the fact that geolocation applications are, by definition, based on a specific real-world location, or ‘anchored’ in Mscape’s terminology. It is still possible to write ‘portable’ mediascapes, which can be played anywhere, but these can never fully exploit the power of the platform – the ability to lay a virtual world over the real one, and leverage the interactions that emerge.
Playing with mediascapes
Consequently, I began my Mscape experience by travelling to Bristol, where there are many anchored mediascapes that have been written for locations around the city. As can be expected with user generated content in a new medium, the quality and usability of the mediascapes varied widely, but I quickly gained a feel for what was possible, and to my mind it seemed mediascapes tended to do one of two things.
Some mediascapes take an existing idea and play it out in a new way. The introductory game you are recommended to try first is just ‘whack a mole’ – you set up three locations in an open area, and then whack moles you are told are coming up in these zones simply by running over to them. Another example is a treasure hunt, in which clues direct you to certain places in the centre of Bristol, your GPS location confirms that you have found the place and solved the puzzle, and this triggers the mediascape to give you the next clue. In this particular case, the treasure hunt ended at a crepe van and even suggested you treat yourself to a crepe as your reward, immediately demonstrating some of the potential this field has for marketing.
Having run a couple of treasure hunts in the past myself, I was familiar with some fundamental gameplay problems that Mscape was able to easily fix. One problem is that if several teams are playing, everyone can follow whichever team solves a clue first. In Mscape, one can simply set up a number of different clues which each team visits in a different order, a final clue only being unlocked once all the others have been completed. A more significant problem is that a treasure hunt is no fun at all if a team gets stuck on one particular clue, as there is absolutely nothing more they can do. With Mscape, you can time how long a team has been stuck on a clue, and have hints automatically revealed at appropriate intervals.
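The timed-hint mechanic is simple to express in code. Here’s a sketch in Python (not actual Mscape scripting, which is Flash/HTML-based) of a clue that unlocks one hint per elapsed interval since the team started on it:

```python
import time

class Clue:
    """One treasure-hunt clue with hints revealed at timed intervals."""

    def __init__(self, text, hints, hint_interval_s=300):
        self.text = text
        self.hints = hints                    # ordered from gentle to explicit
        self.hint_interval_s = hint_interval_s
        self.started_at = None

    def start(self, now=None):
        """Record when the team began working on this clue."""
        self.started_at = now if now is not None else time.time()

    def revealed_hints(self, now=None):
        """Hints unlocked so far: one per full interval elapsed."""
        now = now if now is not None else time.time()
        elapsed = now - self.started_at
        n = int(elapsed // self.hint_interval_s)
        return self.hints[:n]

clue = Clue("Find the place where time stands still.",
            ["It's near the harbour.", "Look for the stopped clock."],
            hint_interval_s=300)
clue.start(now=0)
print(clue.revealed_hints(now=200))   # no hints yet
print(clue.revealed_hints(now=350))   # first hint unlocked
```

The per-team clue ordering fix is just as simple: give each team the same set of clue locations in a different rotation, and only unlock the final clue once all the others are complete.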
With the help of my friend Richard Loxley’s consummate programming skills, I designed a mediascape treasure hunt set in Regents Park, and ran it for 36 people. It was notably more successful than my previous treasure hunts, particularly in that all the participants had a lot of fun – not just those that found the treasure!
Of course, the other class of things you can do with geolocation technology is things that were never possible before. Among these is my personal favourite of the mediascapes I tested in Bristol, “In 10 seconds”.
“In 10 seconds” is not just a mediascape. It has an introductory short film and a website to be explored first, which is a smart way of building up associations with the location while also establishing back-story. It centres around a ghost story, which requires a particular kind of atmosphere that one does not tend to find in the public park in which the mediascape is located. However, there is another dimension to be considered in the creation of such an experience – time. The mediascape recommends that you play at dusk, and as this is also the time at which the short film was shot this will reinforce the associations. (I myself ended up playing it at midnight, which was also extremely effective!)
One of the things a mediascape creator has a lot of control over is audio, which as Mark Kermode has noted is a key part of creating a chilling experience. “In 10 seconds” uses location-triggered sound to great effect – ghostly voices rush past you, you hear a gate slam shut just after you pass through it, the creak of a playground swing grows louder as you approach even though you can see it is quite still. The sound design of “In 10 seconds” was the finishing touch to a really fascinating experience that could not have been created in any other medium.
Working again with Richard, I produced something similarly new, although not as profound as In 10 Seconds. A full write-up will shortly appear on the Mscape blog, but it essentially involved using the mobile device as a virtual Geiger counter, with the familiar clicking sound of radiation being triggered by location in order to allow teams to track down virtual radiation hotspots – much to the confusion of members of the public! For the finale, we shot video in the same locations the players found themselves in, which could then be displayed on the devices themselves to give the impression of having a window on a parallel world – a simple way of approaching the kind of experience shown in Roku’s Reward. The illusion was heightened by arranging for teams in different locations to see footage from appropriately different angles.
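For the curious, the Geiger-counter effect boils down to mapping your distance from each virtual hotspot to a click rate. A rough Python sketch, with invented coordinates and source strengths (the real mediascape’s logic differed): a constant background rate plus an inverse-square contribution from each hotspot, just as a real counter responds to point sources.

```python
import math

def click_rate(lat, lon, hotspots, background=0.2):
    """Clicks per second at the player's position: background noise plus an
    inverse-square contribution from each virtual hotspot."""
    metres_per_degree = 111_000  # flat-earth approximation, fine at park scale
    rate = background
    for h_lat, h_lon, strength in hotspots:
        dx = (lon - h_lon) * metres_per_degree * math.cos(math.radians(lat))
        dy = (lat - h_lat) * metres_per_degree
        d = max(math.hypot(dx, dy), 1.0)  # clamp to avoid an infinite rate
        rate += strength / d ** 2
    return rate

# Hypothetical hotspots in Regents Park: (lat, lon, strength)
hotspots = [(51.5268, -0.1554, 500.0), (51.5281, -0.1525, 300.0)]

print(click_rate(51.5268, -0.1554, hotspots))  # loud: standing on a hotspot
print(click_rate(51.5200, -0.1400, hotspots))  # near-background: far away
```

Feed the rate into a click-sound scheduler and you get the satisfying crescendo as a team closes in on a hotspot.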
Given all the above, it should hopefully be clear that the possibilities offered by this technology are huge – too huge, in fact, to easily comprehend. I will go into the framework I use to try to grasp how this could be used in marketing in the third and final of these posts.