There have been a few interesting items surfacing on augmented reality recently. It is still very much a futuristic technology, but perhaps not such a distant future after all. Augmented reality means intermingling digital and real objects: either adding digital objects to the real world or, in the inverse sense, combining real-world objects into a virtual digital world.
Here is an interesting example of augmenting a digital virtual world with real-world objects borrowed from Street View. The interface uses the iPhone's inertial sensors to move the view inside a virtual world, but this virtual world mimics a Paris street at the point in time Google's Street View truck went past.
Fig 1 – Low tech high tech virtual reality interface
In this Sixth Sense presentation at TED, Pranav Mistry explores the interchangeability of real and virtual objects. Camera and microphone sensors are used to interpret gestures and interact with digital objects. These digital objects are then re-projected into the real world onto real objects such as paper, books, and even other people.
Fig 3 – Augmented Reality Pranav Mistry
A fascinating question is, “How might an augmented reality interface impact GIS?”
Google’s recent announcement of replacing its digital map model with one of its own creation, along with the introduction of the first Android devices, triggered a flurry of blog postings. One of the more interesting posts speculated about the target of Google’s “less than free” business model. Gurley reasoned plausibly that the target is the local ad revenue market.
Google’s ad revenue business model was, and is, a disruptive change in the IT world. Google appears interested in even larger local ad revenues, harnessed by a massive distribution of Android-enabled GPS cell phones. It is the interplay of core aggregator capability with edge location that brings in the next generation of ad revenue. The immediate ancillary casualties are the personal GPS manufacturers and a few map data vendors.
Local ads may not be as large a market as believed, but if they are, the interplay of the network edge with the network core may be another disruptive change. Apple has an edge play with the iPhone and a core play with media (iTunes & iVideo); Google has Android/Chrome at the edge and Search/Google Maps at the core. Microsoft has Bing Maps/Search at the core and dabbles less successfully in media, but shows little activity at the edge.
Of course, if mobile hardware capability evolves fast enough, Microsoft’s regular OS will soon fit on mobiles, perhaps in time to short-circuit an edge-market capture by Apple and Google. Windows 8 on a cell phone would open the door wide to Silverlight/WPF UI developers. Android’s potential success rests on the comparative lack of CPU/memory on mobile devices, but that is only a temporary state, perhaps two years. However, in two years the world will be a far different place.
By that time augmented reality will be part of the toolkit for ad enhancements:
- Point a phone camera at a store and see all sale prices overlaid on the storefront for items fitting the user’s demographic profile. (products and store pay service)
- Inside a grocery store, scan shelf items through the cell screen, with paid ad enhancements customized to the user’s past buying profile. (products pay store, store pays service)
- Inside a store, point at a product and get a list of price comparisons from all competing stores within 2 miles. (product or user pays service)
- A crowd gamer recognizes other team members (or Facebook friends, or other security personnel) with an augmented reality enhancement when scanning a crowd. (gamer subscribes to service, products pay service for ads targeted to gamer)
And non-commercial, non-ad uses:
- A first responder points a cell phone at a building and brings up the emergency-plan overlay and a list of toxic substances stored on site. (fire district pays service)
- A field utility repair tech points a cell at a transformer and sees an overlay of service history with parts lists, schematics, and so on. (utility pays service)
It just requires edge location made available to core data services, which reflect filtered data back to the edge. The ad revenue owner holds both a core data source and an edge unit location, and sells ads priced on market share of that interplay. Google wants to own the edge and have all ad revenue owners come through it, so the OS is less than free in exchange for a slice of ad revenue.
Back to augmented reality. As Pranav Mistry points out, there is a largely unexplored region between the edge and the core, between reality and virtual reality, which is the home of augmented reality. GIS fits in by storing spatial locations for real-world objects at the network core, available to edge location devices, which can in turn augment local objects with this additional information from the core.
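The core side of that loop is a spatial lookup: given a coordinate reported from an edge device, find the stored object nearest that point within some tolerance. A toy sketch of the idea follows; the object names are invented, and a production service would use a real spatial index (e.g. an R-tree or a spatial database) rather than a linear scan:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean earth radius, spherical approximation
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical core-side object store: (name, lat, lon)
OBJECTS = [
    ("transformer #1138", 48.85850, 2.29450),
    ("hydrant #42",       48.85950, 2.29500),
]

def nearest_object(lat, lon, tolerance_m=25.0):
    """Return the stored object closest to (lat, lon), if within tolerance."""
    best = min(OBJECTS, key=lambda o: haversine_m(lat, lon, o[1], o[2]))
    if haversine_m(lat, lon, best[1], best[2]) <= tolerance_m:
        return best[0]
    return None
```

The tolerance parameter matters in practice: consumer GPS plus compass error easily smears a query point by several meters, so the core must match loosely rather than exactly.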
Just add a GPS to the Sixth Sense camera/mic device, and the outside world at an edge location is merged with any information available at the core. Scan objects from an edge location with the camera, for example, and you have augmented information about any other mobile GPS or location data at the core. Since Android = edge GPS + link to core + gesture interface + camera (still missing a screen projector and mic), no wonder it has potential as a game changer. Google appears more astute in the “organizing the world” arena than Apple, which apparently remains fixated on merely “organizing style.”
Oh, and one more part of the local interface device is still missing: a pointer. NextGen UI for GIS
Fig 5 – Laser Distance Meter Leica LDM
Add a laser ranging pointer to the mobile device and you have a rather specific point-and-click interface to real-world objects.
- The phone location is known thanks to GPS.
- The range device’s bearing is known, thanks to an internal compass and/or inertial sensors.
- The range beam supplies a precise distance, giving a delta position for the object relative to the mobile device.
Send the delta distance and current GPS position back to the core, where a GIS spatial query determines any known object at that spatial location. The object’s attributes are returned to the edge device and projected onto any convenient local object, augmenting the local world with the stored spatial data from the core. After watching Pranav Mistry’s research presentation, none of it seems too far outside of reality.
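The edge-side geometry in the steps above is a forward geodesic: given the phone's GPS fix, the compass bearing, and the laser range, compute where the beam lands. A minimal sketch on a spherical earth (the function name is mine, not from any particular SDK; the spherical approximation is fine over laser-range distances):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius, spherical approximation

def project_target(lat_deg, lon_deg, bearing_deg, distance_m):
    """Project a point distance_m along bearing_deg from a GPS fix.

    Returns the (lat, lon) of the ranged object, suitable for a
    spatial query against the core.
    """
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    d = distance_m / EARTH_RADIUS_M  # angular distance in radians

    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Example: an object ranged 100 m due north of a fix in Paris
lat, lon = project_target(48.8584, 2.2945, 0.0, 100.0)
```

The resulting coordinate, not the phone's own position, is what gets sent to the core for the spatial lookup.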
GIS has an important part to play here because it is the repository of all things spatial.