What’s Next in Big Tech? [Part III]

Science fiction is, believe it or not, sometimes a pretty damn reliable barometer for the future. One could even argue that some technological innovations take shape because the inventor or key player saw or read about them in something science fiction-y.

For anyone who watched the original Star Trek (c. 1960s), two technological marvels of the show were the communicator (which allowed wireless real-time voice communication between two people, and was handheld) and the replicator (a wall-mounted device which would make almost anything appear from thin air with the touch of a few buttons).

Today, we have the smartphone (which exceeds the usefulness of the communicator) and the 3D Printer, or Rapid Prototyper (which is very limited in what it can manufacture, but is light years ahead of traditional prototyping and small-scale rapid manufacturing).

This is the conclusion of a three part series on the future of consumer technology. Part one discussed advanced functionality in consumer technology applications for glass. Part two discussed technology showcased in a Nokia device concept.


… Augmented reality.

Imagine taking the breadth of information and connectivity of today’s internet and squeezing it into an easy-to-use, context- and location-relevant package. Imagine having real depth of information, on virtually anything, available at your fingertips. This is where the concept of informatics has an application directly perceivable by the masses. Now that market forces have brought smartphones (‘smart devices’ would be a better term) and cloud computing to the mass market, the potential for augmented reality to transform the way we work, play, and live is bigger than ever.

What is augmented reality (AR)? Augmented reality is a subtype of constructed reality (reality which is artificially created). Within constructed reality exist three closely related subtypes (a quick code sketch of the distinction follows the list):

  • Virtual reality (we’ve all heard of this one): where reality is completely replaced by a computer-generated simulated environment (a slew of awful movies broached this subject in the 90s)
  • Enhanced reality: where our perception of reality is made more meaningful by additional information that doesn’t incorporate a live image or visual feed (e.g., you visit Parliament Hill and wiki ‘Parliament Hill’ on your smartphone to gain a greater understanding of the history of the buildings / or you scan a QR code to access additional information on a product)
  • Augmented reality: where reality is contextually enhanced in real-time via a live feed or with near-time imagery (think a Head-Up Display, or the old Google Goggles project; Wikitude World Browser for iPhone and Droid / Layar Reality Browser)
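If it helps to see that distinction spelled out mechanically, here’s a toy sketch in Python (the class, field, and function names are mine, purely for illustration; no real AR framework is implied): the two questions that separate the subtypes are whether reality is replaced outright, and whether a live visual feed drives the information.

```python
# Toy classifier for the constructed-reality taxonomy described above.
# All names here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Experience:
    name: str
    replaces_reality: bool       # substitutes a fully simulated environment?
    uses_live_visual_feed: bool  # anchored to what you're looking at right now?


def classify(e: Experience) -> str:
    if e.replaces_reality:
        return "virtual reality"
    if e.uses_live_visual_feed:
        return "augmented reality"
    return "enhanced reality"


examples = [
    Experience("World of Warcraft", replaces_reality=True, uses_live_visual_feed=False),
    Experience("Looking up Parliament Hill on Wikipedia", False, False),
    Experience("Pointing Wikitude at a building", False, True),
]
for e in examples:
    print(f"{e.name}: {classify(e)}")
```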

Virtual reality products have been around for a few decades, which is not to say that the user experience has been exceptionally good (remember the Virtual Boy, that old Nintendo console that used red LED displays and mirrors to simulate a 3D gaming environment… everything was red, and headache-inducing). Immersive computer games such as the MMORPG World of Warcraft and the simulated environment Second Life are technically virtual reality products, though they’re not marketed as such because the visual interface is typically a two-dimensional computer screen.

Enhanced reality products currently exist everywhere. Any device (or product) which can be used to retrieve context-relevant information functions as an enhanced-reality device, even if that wasn’t the initial intent. A great example of a product evolving into an enhanced-reality device is the cellular telephone. A decade ago, people made phone calls with their mobile phones. Mobile browsing was in its infancy, and the user experience was poor compared to today’s mobile browsing experience. Now, it’s quite common for people to use their cellular phones for data functions far more frequently than phone calls. I had a brief Twitter exchange with Mike P Moffatt the other day on this subject. He uses his mobile phone to actually make calls about three times a month. I’m much the same, as are many people who will read this article.

But an enhanced-reality device can also be low-tech. If you carried around a complete set of the Encyclopaedia Britannica with you, and used it to look up information on the places you visited in a context-relevant manner, then that big ol’ set of hardcovers would qualify as an enhanced-reality device. You’d also have pipes the size of my head (and good on you).

Augmented reality devices take the enhanced-reality technology model and bring it to the next level by making the information that can be accessed dependent on visual context. For instance, consider using an AR app on your smartphone: if you’re visiting New York City, open the app and point it at the Empire State Building. You might receive this information:

  • History of the building / Wikipedia entry for its architect
  • Information on tour times / contact information
  • Pictures of people hanging out there (à la Flickr, Instagram or another social-sharing application)

What you wouldn’t get (unless there was a brain-fart in the database):

  • The current weather in Spokane, Washington
  • Traffic conditions on the 401
  • Contact information for Wall Street (Or Bay Street) venture capitalists

In this sense, the AR device functions as a mediator of information, filtering out the last three data snippets (as they’re unrelated, or only tangentially related, to the Empire State Building).
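To make that mediator idea concrete, here’s a minimal sketch of the filtering step (the snippets, tags, and function name below are invented for illustration; a real AR browser would pull its context from image recognition, GPS, and a backing database): keep only the items whose tags overlap with the subject the camera has recognized, and drop everything else.

```python
# Minimal sketch of contextual mediation: filter a pool of information snippets
# down to the ones relevant to the recognized subject. Data is invented for illustration.
subject = {
    "name": "Empire State Building",
    "tags": {"empire state building", "new york", "architecture"},
}

snippets = [
    {"text": "History of the building / architect's Wikipedia entry",
     "tags": {"empire state building", "architecture"}},
    {"text": "Tour times and contact information",
     "tags": {"empire state building", "tourism"}},
    {"text": "Photos people shared from the observation deck",
     "tags": {"empire state building", "new york", "social"}},
    {"text": "Current weather in Spokane, Washington",
     "tags": {"spokane", "weather"}},
    {"text": "Traffic conditions on the 401",
     "tags": {"toronto", "traffic"}},
    {"text": "Contact info for Bay Street venture capitalists",
     "tags": {"toronto", "finance"}},
]


def mediate(subject, snippets):
    """Return only the snippets that share at least one tag with the recognized subject."""
    return [s["text"] for s in snippets if s["tags"] & subject["tags"]]


for text in mediate(subject, snippets):
    print(text)  # the three Empire State Building items; the rest are filtered out
```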

The future applications of augmented reality extend far beyond social apps in mobile phones, though. Remember, the core concept of augmented reality is that it mediates and delivers information based on visual context in order to strengthen perceptual experience. Where else would AR work?

  • as a head-up display projected on the inside of your sunglasses (need directions to that hip new coffee shop? Just follow the yellow line)
  • on a digital guide map of the museum you’re visiting (perhaps using the flexible-display computing device imagined by Corning in the video from part one)
  • as an informatics display projected onto surgical protective glasses (perhaps utilizing advanced scanning devices to detect potential problems with the patient’s innards during a surgery, or displaying vitals so the surgeon never has to break concentration to ask or look at the monitors)
  • the glass pane of a retail storefront could serve as an AR device for products on display, if contextual information was projected onto the glass for people outside the store to read (the glass could also be used as a touch-sensitive screen for potential consumers to input selections, using an OLED panel and translucent substrate)
  • eventually, the human eye could serve as an AR device- if a machine interface with the visual cortex could be developed, essentially feeding information directly (and privately) to a person’s brain

Like enhanced-reality devices, the augmented-reality variety can also be low tech. A tour guide is technically an augmented-reality device (they’re providing contextually-relevant information based on visual cues; though it would be much more tech-sexy if that tour guide was a robot, granted).

What should we take away from all this discussion? Last year, the monthly volume of global internet traffic was roughly 21 exabytes (an exabyte is equal to about 1.07 billion gigabytes). As the volume of information available through globalized communications systems increases (with the internet as the primary infrastructure), it will become increasingly difficult for individuals to find contextually-relevant information without some sort of adaptive filtering system in place. Google already does this by using your IP address (physical location), along with a host of other harvested data points that you have volunteered (knowingly or not, directly or through third-party channels), to filter your search results in an attempt to provide contextual relevance. This isn’t necessarily evil, or even overtly bad: when doing a Google search for hair salons, would you rather be returned results for salons in your city… or in the English-speaking city where the most internet searches for hair salons are performed (likely New York, London, or Paris)?
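For the curious, the core of that location-based filtering can be sketched in a few lines (the salon names, coordinates, and inferred location below are made up, and a real search engine weighs far more signals than raw distance): infer the searcher’s position, then rank candidate results by how close they are.

```python
# Rough sketch of location-aware ranking: sort results by great-circle distance
# from the searcher's inferred location. All data here is invented for illustration.
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


user_location = (45.42, -75.70)  # hypothetically inferred from the searcher's IP: Ottawa

salons = [
    ("Salon on Elgin", 45.417, -75.690),    # Ottawa
    ("Chelsea Hair Co.", 51.487, -0.169),   # London
    ("Madison Ave Cuts", 40.745, -73.988),  # New York
]

for name, lat, lon in sorted(salons, key=lambda s: haversine_km(*user_location, s[1], s[2])):
    print(f"{name}: {haversine_km(*user_location, lat, lon):.0f} km away")
```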

The bottom line: techniques for mediating information quickly and efficiently will become critical, not just convenient, in the very near future.
