What’s Next in Big Tech? [Part III]

Science fiction is, believe it or not, a pretty damn reliable barometer for the future sometimes. One could even argue that some technological innovations take shape because the inventor/key player saw or read about it in something science fiction-y.

For anyone who watched the original Star Trek (c. 1960s), two technological marvels of the show were the communicator (which allowed wireless real-time voice communication between two people, and was handheld) and the replicator (a wall-mounted device which would make almost anything appear from thin air with the touch of a few buttons).

Today, we have the smartphone (which exceeds the usefulness of the communicator) and the 3D Printer, or Rapid Prototyper (which is very limited in what it can manufacture, but is light years ahead of traditional prototyping and small-scale rapid manufacturing).

This is the conclusion of a three part series on the future of consumer technology. Part one discussed advanced functionality in consumer technology applications for glass. Part two discussed technology showcased in a Nokia device concept.

 

… Augmented reality.

Imagine taking the information and connectivity of today’s internet and squeezing it into an easy-to-use, context- or location-relevant package. Imagine having significant depth of information, on virtually anything, available at your fingertips. This is really where the concept of informatics has application directly perceivable by the masses- and now that market forces have brought smartphones (‘smart devices’ would be a better term) and cloud computing to the mass market, the potential for augmented reality to transform the way we work, play, and live is bigger than ever.

What is augmented reality (AR)? Augmented reality is a subtype of constructed reality (reality which is artificially created). Within constructed reality exist three closely-related subtypes:

  • Virtual reality (we’ve all heard of this one): where reality is completely replaced by a computer-generated simulated environment (a slew of awful movies broached this subject in the 90s)
  • Enhanced reality: where our perception of reality is made more meaningful by additional information not incorporating a live image or visual feed (ie, you visit Parliament Hill and wiki ‘Parliament Hill’ on your smartphone to gain a greater understanding of the history of the buildings / or you scan a QR Code to access additional information on a product)
  • Augmented reality: where reality is contextually enhanced in real-time via a live feed or with near-time imagery (think a Head-Up Display, or the old Google Goggles project; Wikitude World Browser for iPhone and Droid / Layar Reality Browser)

Virtual reality products have been around for a few decades- which is not to say that the user experience has been exceptionally good (remember that old Nintendo console, the Virtual Boy, that used red LED displays and mirrors to simulate a 3D gaming environment… everything was red, and headache-ish). Immersive computer games such as the MMORPG World of Warcraft and the simulated environment Second Life are technically virtual reality products, though they’re not marketed as such because the visual interface is typically a two-dimensional computer screen.

Enhanced reality products currently exist everywhere. Any device (or product) which can be used to retrieve context-relevant information functions as an enhanced-reality device, even if that wasn’t the initial intent. A great example of a product evolving into an enhanced-reality device is the cellular telephone. A decade ago, people made phone calls with their mobile phones. Mobile browsing was in its infancy, and the user experience was poor compared to today’s mobile browsing experience. Now, it’s quite common for people to use their cellular phones for data functions far more frequently than phone calls. I had a brief Twitter exchange with Mike P Moffatt the other day on this subject. He uses his mobile phone to actually make calls about three times a month. I’m much the same, as are many people who will read this article.

But, an enhanced-reality device can also be low-tech. If you carried around a complete set of the Encyclopaedia Britannica with you, and used it to look up information on the places you visited in a context-relevant manner, then those big ol’ hardcovers would qualify as an enhanced-reality device. You’d also have pipes the size of my head (and good on you).

Augmented reality devices take the enhanced-reality technology model and bring it to the next level by making the information that can be accessed reliant on visual context. For instance, consider the use of an AR app on your smartphone… If you’re visiting New York City, open the app, and point it at the Empire State Building. You might receive this information:

  • History of the building / Wikipedia entry for its architect
  • Information on tour times / contact information
  • Pictures of people hanging out there (a-la Flickr, Instagram or another social-sharing application)

What you wouldn’t get (unless there was a brain-fart in the database):

  • The current weather in Spokane, Washington
  • Traffic conditions on the 401
  • Contact information for Wall Street (Or Bay Street) venture capitalists

In this sense, the AR device functions as a mediator of information and would filter out the last three data snippets (as they’re unrelated, or only tangentially related to the Empire State Building).
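To make that ‘mediator’ role concrete, here’s a minimal sketch (in Python, with entirely made-up data and names- no real AR API is implied) of how an AR service might filter candidate data snippets against the subject its camera recognizes:

```python
# Illustrative sketch only: an AR service acting as a 'mediator of
# information'. All names and data here are invented for this example.

RECOGNIZED_SUBJECT = "Empire State Building"  # what the camera sees

snippets = [
    {"title": "History of the building / architect's Wikipedia entry",
     "tags": {"Empire State Building", "architecture"}},
    {"title": "Tour times / contact information",
     "tags": {"Empire State Building", "tours"}},
    {"title": "Photos of people hanging out here",
     "tags": {"Empire State Building", "social"}},
    {"title": "Current weather in Spokane, Washington",
     "tags": {"Spokane", "weather"}},
    {"title": "Traffic conditions on the 401",
     "tags": {"Toronto", "traffic"}},
]

def relevant(snippet, subject):
    """Keep only snippets tagged with the subject in view."""
    return subject in snippet["tags"]

for s in snippets:
    if relevant(s, RECOGNIZED_SUBJECT):
        print(s["title"])  # prints the first three; the rest are filtered out
```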

The future applications of augmented reality extend far beyond social apps in mobile phones, though. Remember, the core concept of augmented reality is that it mediates and delivers information based on visual context in order to strengthen perceptual experience. Where else would AR work?

  • as a head-up display projected on the inside of your sunglasses (need directions to that hip new coffee shop? Just follow the yellow line)
  • on a digital guide map of the museum you’re visiting (perhaps using the flexible-display computing device imagined by Corning in the video from part one)
  • as an informatics display projected onto surgical protective glasses (perhaps utilizing advanced scanning devices to detect potential problems with the patient’s innards during a surgery, or displaying vitals so the surgeon never has to break concentration to ask or look at the monitors)
  • the glass pane of a retail storefront could serve as an AR device for products on display, if contextual information was projected onto the glass for people outside the store to read (the glass could also be used as a touch-sensitive screen for potential consumers to input selections, using an OLED panel and translucent substrate)
  • eventually, the human eye could serve as an AR device- if a machine interface with the visual cortex could be developed, essentially feeding information directly (and privately) to a person’s brain

Like enhanced-reality devices, the augmented-reality variety can also be low tech. A tour guide is technically an augmented-reality device (they’re providing contextually-relevant information based on visual cues; though it would be much more tech-sexy if that tour guide was a robot, granted).

What should we take away from all this discussion? Last year, the monthly volume of global internet traffic was roughly 21 exabytes (an exabyte is roughly a billion gigabytes). As the volume of information available through globalized communications systems increases (with the internet as the primary infrastructure), it will become increasingly difficult for individuals to find contextually-relevant information without some sort of adaptive filtering system in place. Google already does this by using your IP address (physical location), along with a host of other harvested data points that you have volunteered (knowingly or not, directly or through third-party channels), to filter your search results in an attempt to provide contextual relevance. This isn’t necessarily evil, or overtly bad- when doing a Google search for hair salons, would you rather be returned results for salons in your city… or in the English-speaking city where the most internet searches for hair salons are performed (likely New York, London, or Paris)?
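As a toy illustration of that kind of adaptive filtering (emphatically not Google’s actual algorithm, which weighs far more signals), here’s a sketch of ranking results by a searcher’s inferred location:

```python
# Illustrative sketch of location-based result ranking: surface results
# closest to the searcher's position, e.g. geolocated from an IP address.
import math

user_location = (43.65, -79.38)  # assumption: searcher geolocated to Toronto

salons = [
    ("Downtown Toronto salon", (43.66, -79.39)),
    ("Mississauga salon", (43.59, -79.64)),
    ("New York salon", (40.75, -73.99)),
]

def distance(a, b):
    """Rough planar distance in degrees; fine for ranking, not navigation."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

for name, loc in sorted(salons, key=lambda s: distance(user_location, s[1])):
    print(name)
# -> local results surface first; distant ones sink to the bottom
```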

The bottom line: techniques for mediating information quickly and efficiently will become critical, not just convenient, in the very near future.

Tax Credits for Early-Stage Investors: It’s About Time.

This was originally a comment on an article in today’s Globe and Mail… but, it wouldn’t fit. So, we have a blog post now.

The Globe and Mail article can be read here.

My comments:

Disclosure: I’m the CEO of a technology startup, so view my comments through that lens. I’m also a proud Canadian who wants to generate wealth, jobs, and opportunities right here- instead of moving to Silicon Valley where I could go to a cocktail party one night and have a $4mm term sheet the next morning. I love my country that much.

This plan is a fantastic idea which should have been implemented five years ago (had it been, the startup landscape here would have been a lot richer while the US startup scene was booming, and we’d be seeing more M&A activity in Canada today coming out of the recession).

To everyone who commented: this plan isn’t a government-sponsored fund, it’s a tax credit. Money here doesn’t directly go to new businesses, it’s used as a credit for investors. That’s a good thing: no new business wants big government involved in the decision-making process. Personally, I’d walk away from a government investment, no matter how big, if it involved a bureaucrat sitting on my board. Also good: the credit itself. Yes, it ‘benefits the rich’. That’s a good thing for startups, because angels and VCs in this country are VERY risk-averse. They don’t like to invest unless they’re sure the company is a home run- and it’s very, very difficult to be sure of this at the concept or early seed stage (read: before the company has significant revenues). Allowing accredited investors to write off 35% of their investment helps them hedge their risk and reduce potential losses. Ie, if an angel takes a flier and invests $1m in a company which goes bust, their net loss is $650,000 instead of the full million. That could make the difference between a yes and a no for an angel investor looking to do a small-cap deal.
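A quick back-of-the-envelope check on that write-off math, using the same numbers as above:

```python
# Sanity check of the write-off example from the paragraph above.
investment = 1_000_000   # angel's investment, in dollars
credit_rate = 0.35       # proposed 35% tax credit

credit = investment * credit_rate
net_loss_if_bust = investment - credit
print(f"Credit: ${credit:,.0f}; net loss if the company goes bust: ${net_loss_if_bust:,.0f}")
# -> Credit: $350,000; net loss if the company goes bust: $650,000
```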

Mark: I appreciate your comment and agree- but as a society, do we really want to encourage grandmothers or retirees to invest in startups? It’s a dangerous game for investors: half of all startups fail, and fail spectacularly- losing money for everyone. Small-cap, early-stage investing is better left to professional investors… not to people who will make a judgement call based on whether or not they liked the entrepreneur or product. Family and friends still commonly invest in startups, but not having a write-off policy helps to ensure that these folks can afford to lose the money if things go south. If we want to encourage mom-and-pop investments, a better way would be to incentivize the OSC to change its rules and allow crowdsourced funding (ie, microinvestments into a private fund which would then be professionally managed and invested in startups).

The trickle-down effect from this policy would be overall beneficial to the economy. By encouraging small-cap private investment in new business ($250,000 – $1mm is where this will have the most effect), job creation is increased. It’s not the government’s place to directly create jobs; that almost always leads to unsustainable gains in the labour market. It’s the government’s job to create the conditions which allow private enterprise to start and expand, thus creating jobs.

This tax credit incentive isn’t new to Canada; it’s been running in BC for a few years and has been very successful. In that province, the trickle-down effect is such that for every dollar of government revenue lost to the tax credit, two dollars are directly generated in corporate taxation. When you factor in gains in other taxation, the return rises to four dollars per dollar. That means the program is self-sustaining (ie, it doesn’t cost taxpayers anything) and actually generates economic activity (read: job creation).

But don’t take my word for it: start Googling. Do your homework, and read up on the concept behind what’s being proposed.

What’s Next in Big Tech? [Part II]

Science fiction is, believe it or not, a pretty damn reliable barometer for the future sometimes. One could even argue that some technological innovations take shape because the inventor/key player saw or read about it in something science fiction-y.

For anyone who watched the original Star Trek (c. 1960s), two technological marvels of the show were the communicator (which allowed wireless real-time voice communication between two people, and was handheld) and the replicator (a wall-mounted device which would make almost anything appear from thin air with the touch of a few buttons).

Today, we have the smartphone (which exceeds the usefulness of the communicator) and the 3D Printer, or Rapid Prototyper (which is very limited in what it can manufacture, but is light years ahead of traditional prototyping and small-scale rapid manufacturing).

This is part two of a three part series on the future of consumer technology. Part one discussed advanced functionality in consumer technology applications for glass. Part three discusses the future of informatics and enhanced/augmented reality devices.

Consider the following video (part of the Nokia research series at http://research.nokia.com):

Nanotechnology

The idea of machines smaller than a human hair (about 60,000 nanometers) seems like enough of a stretch, yet Gianfranco Cerofolini describes the design and application (and economics) of nanoscale devices with a thickness of only a few nanometers in his book, Nanoscale Devices (Springer). The economical manufacture of devices on the molecular level opens up a wide range of applications- and yes, would also enable the technology described in Nokia’s concept video.

Self-Cleaning

The term ‘hydrophobic’ (not to be confused with hydrophobia) refers to water-repellent properties in a material. Of particular interest in tech are materials with oleophobic properties (oil-repellent; to prevent the piss-off that is screen fingerprints and smudges). Oil-repellent coatings are not only desirable for touchscreen interfaces, but are also becoming more necessary as touchscreens proliferate (finger smudges can reduce the sensitivity and accuracy of an untreated touchscreen input). Fingerprint-free phones may not be that far off, as Apple applied for a process patent this past February which may make the coating on the iPhone and iPod Touch permanent (the oleophobic coating on the 3GS wore off with time). Should this feature prove to be a seriously advantageous unique selling point (and anyone used to cleaning their smartphone’s screen six times a day will think it is), Apple could make a pile of money by licensing the technology to competitors. Or, they could refuse in order to preserve a competitive advantage for the iPhone in an industry where their market share is facing threats from the open-source Android platform (the more likely eventuality). But the patent proves that effective oleophobic coatings for digital devices are possible, and to quote a slightly nasty colloquialism: there’s more than one way to skin a cat.

Stretching

On the surface, this seems like perhaps the most realistic point showcased in the video. Stretchable electronic devices should be easy to make, simply by substituting a chassis or frame (substrate) which stretches, instead of the rigid plastic, glass, or metal ones most commonly used… right? Not exactly. This is possible, but it creates severe mechanical stress at the junctures where the electronic components connect to each other (called interconnect points). The stress is so severe that integrated circuit boards built on a stretchable substrate tend to only survive being flexed a handful of times. I’d be a little irritated if my Nokia Morph could only transform into a wristwatch five or six times before it stopped working- wouldn’t you?

Substantial progress is being made in this area of materials research, however. Some of the most advanced work is being carried out at the STELLA project, headquartered in Germany (STELLA focuses on stretchable electronics for medical applications). The most promising developments seem to suggest that materials designed using biomimicry concepts have the most reasonable chance of bypassing the current technical barriers and will enable the production of stretchable electronics in the near future.

The big-picture business question is: would a device such as this sell (at a substantial initial premium to traditional smartphones, naturally)? To the technophiles, absolutely. To fashion junkies? I suspect so. To the customer of the discount-service provider (Virgin, Fido)? Not at first. The opportunity cost for such a device would likely be too high for this market segment to bear early on. However, if we assume that device convergence continues to be a driving trend in the consumer technology space over the next decade (and why would it not?), then it’s a reasonable assumption that this technology would eventually find its way into the discount segment. It took roughly two years before modern smartphones made their way into the discount space; this technology would likely expand at a similar rate (though with a lower-quality offering initially).

What’s Next in Big Tech? [Part I]

Science fiction is, believe it or not, a pretty damn reliable barometer for the future sometimes. One could even argue that some technological innovations take shape because the inventor/key player saw or read about it in something science fiction-y.

For anyone who watched the original Star Trek (c. 1960s), two technological marvels of the show were the communicator (which allowed wireless real-time voice communication between two people, and was handheld) and the replicator (a wall-mounted device which would make almost anything appear from thin air with the touch of a few buttons).

Today, we have the smartphone (which exceeds the usefulness of the communicator) and the 3D Printer, or Rapid Prototyper (which is very limited in what it can manufacture, but is light years ahead of traditional prototyping and small-scale rapid manufacturing).

This is part one of a three part series on the future of consumer technology. Part two discusses technology showcased in a Nokia device concept. Part three discusses the future of informatics and enhanced/augmented reality devices.

Let’s talk about what’s next. Check out this video:

Photovoltaic [Smart] Glass

Corning bollocks’d up here- the industry term for this technology is ‘smart glass’ or ‘electrochromic glass’. Photovoltaic glass refers to a film coating applied to architectural glass in order to capture ambient sunlight for the purpose of generating electricity. The concept for smart glass was first tossed around meaningfully about a decade ago. Anyone who’s ever owned eyeglasses with Transitions™ lenses (the ones that automatically dim in sunlight) is familiar with this concept. The key difference is that in smart glass, the dimming is controlled by an electric current (usually initiated by a switch or button in consumer applications). Commercial integration has been somewhat limited, but the upcoming Boeing 787 will feature windows using smart glass instead of window shades.

Architectural Display Glass / Architectural Surface Glass / Wall Format Display Glass

Designing and manufacturing glass panels suitable for the applications seen in the video is only the first (and easiest) step. Before what was showcased can become a reality, a universal operating system (Microsoft Surface is a viable possibility) and technical standards will have to be created and adopted across industry. Additionally, a technology to actually render an image on the display would need to be developed (using the glass as a protective surface for a composite OLED display would seem the most promising, but there are certain technical hurdles to overcome before that technology can be used in displays of the size imagined in the video).

Automotive Display Glass

Ford and Microsoft brought this one into the mainstream with the Sync™ system nearly five years ago. The UX has consistently improved generation-over-generation since, and many major manufacturers (mostly in the high-end luxury space) have also begun installing their own systems as a standard component. The next evolution in this technology will be user-customizable interfaces and deeper integration with other mobile devices (phones, tablets, etc) and smart home systems (your car pulls into the garage at 7pm, and its onboard information system communicates the number and identity of the passengers, enabling the smart home system to execute certain pre-defined convenience settings- coffee or tea, perhaps). Again, an open-source system or set of industry standards would accelerate the adoption of this technology. Sync™ is a proprietary system, and it doesn’t play well with other in-vehicle information systems.
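To make that garage scenario concrete, here’s a minimal sketch of how such a car-to-home handoff might work. The event schema, names, and preferences below are entirely invented for illustration- no real in-vehicle or smart-home API is implied.

```python
# Hypothetical sketch of the car-to-home handoff described above.
import json

arrival_event = {
    "event": "vehicle_arrived",
    "time": "19:00",
    "passengers": ["alice", "bob"],  # identified via paired phones, say
}

# Pre-defined convenience settings, keyed by occupant.
PREFERENCES = {"alice": "coffee", "bob": "tea"}

def on_vehicle_arrived(raw_event):
    """The smart home reacts to the car's arrival broadcast."""
    event = json.loads(raw_event)
    for person in event["passengers"]:
        drink = PREFERENCES.get(person)
        if drink:
            print(f"Starting the {drink} maker for {person}")

on_vehicle_arrived(json.dumps(arrival_event))
```

The point of the sketch is the architecture: the car only broadcasts an event, and the home decides what to do with it- which is exactly why a shared message standard matters more than any one manufacturer’s proprietary system.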

Large Pane Display Glass

The same basic challenges that exist for the technology showcased in architectural display glass exist here. Technical standards, universal operating system, display technology. Additionally, this technology would require a massive database of aggregated information and a connectivity technology to access it (the former is becoming easily accessible as companies and governments shift information to the cloud and make it publicly accessible via API offerings, while the latter would simply require installation of a WiFi radio or the inclusion of ethernet cabling in the physical infrastructure). To make the nifty little advertising feature showcased on the mobile phone a reality, a closed (or open source) mobile-based platform for market research/brand loyalty involving geolocation would be required. How likely is that to happen in the near future? Well, my firm is currently developing one that’s not too far from going beta, and some fairly sizeable retailers have expressed interest. But sssshhhh, we’re still talking to investors 😉

Flexible Display Glass

I’m a little sceptical that this is actually glass, as we think of it. One of the physical characteristics of glass is that it’s rigid. It’s more likely that this would be a flexible polymer of some sort, but with the hardness and low porosity that characterize glass. The apex of this technology would be in developing underlying electronics that could survive being rolled up multiple times (a key hurdle, currently). It might seem a little far off, but HP is getting pretty close to making a very similar technology commercially viable. Though HP sees this technology ultimately making traditional displays thinner and lighter, it could also be applied to architectural projects to make built-in curved or shaped displays (think motion-picture or ‘moving’ wallpaper in the lobby of your office building, or a stock ticker that runs across the wall itself).

 

The ‘life-enabling’ technologies (marketers and tech analysts tend to paint all new consumer technologies as essential devices which make life easier/better) portrayed in the Corning video won’t be commercially viable for some time (5-10 years), and won’t be mainstream mass-market technologies for even longer. However, even in the recent past these technology concepts would have seemed outlandish and much more fictional than factual. Personally, I can imagine my kitchen countertops serving double duty as an informatics system for communication and convenience in the somewhat near future. Can’t you?

After all, we went from this:

Motorola v60, c. 2003

to this:

Samsung Galaxy S, c. 2010

…in under a decade.

Perceptual Healing

A friend and colleague, Al Mastromartino, has a favourite saying for his students (I also studied under him while at Fleming College):

Perception is more important than reality.

It’s always stuck with me, because that statement embodies something I’ve intuitively known for quite some time. Optics are critical in everything, whether professional or personal. Those who pay little heed to optics are bound to be punished for their lack of attention at some point in their lives. So will those who do pay attention- but hopefully in a less damaging way.

Recently, Nivea got spanked by the public (and the media) over its ‘Re-Civilize Yourself’ campaign. Personally, I don’t see an underlying racist intent- though I do understand how the advert could be interpreted as such. If you remove the African-American man from the layout, and insert a white man in his place, I doubt very much that anyone would consider the ad derogatory or racist (an interesting double-standard, no?).

The critical element here was the race of the subject- and given the history of racial tension that dominated the cultural landscape of the last century in the United States (and still exists in a lesser form today), the reaction of the general public isn’t terribly surprising. Had the same ad run in Canada or Europe, I suspect the reaction to it would have been underwhelming.

Interestingly enough, an African-American man was at the helm for the execution of this campaign (Cliff Carson is the VP at the agency which handles Nivea’s adverts). That pretty much eliminates any notion of racist intent from the campaign- though I suspect a few conversations about racial insensitivity were had during the aftermath. Might be a little more than awkward if it was a Caucasian giving Mr. Carson that lecture.

Sometimes, you just want to be a fly on the wall.

Facebook Pages: Where’s the Value?

Simple answer: everywhere.

Musicians, filmmakers, celebrities, politicians, photographers… Facebook pages help add value to their brands by enabling the brand to be a conversation, instead of a presentation.

But just how well is that concept leveraged by businesses in their communications processes? How does a business or brand use Facebook pages to actually carry out its marketing agenda? Should a Facebook page be used solely as a marketing tool? How can marketing departments use them to collect relevant data and metrics?

The primary functions of the marketing department or team are to communicate value propositions to customers, support the sales team with communications/advertising and brand materials, and gather performance data which will be used to constantly tweak all of the related processes. That’s what marketing does. Talking directly to customers about products or services outside of an institutionalized data-gathering function is not typically the marketing team’s thing (except at small businesses and kick-ass startups, hence my love for them).

Who communicates with customers on a daily basis? Tech support. Customer service. Sales reps. Basically, anyone on the sales team/department. Who tends to manage a company’s Facebook page (or other social asset)? The marketing department/team. It seems like a natural fit, right? I mean, social media is for advertising, right?

I respectfully suggest to you, the business world, that this is an approach destined to make underutilization the norm. Yes, social media platforms can be a great part of advertising and marketing… but the real value is in conversations. Think about what social media platforms were designed to do: enhance communication among friends, family, and even strangers by leveraging a comfortably efficient medium. You can drop mom or dad a message quickly while away at college, or update your long-lost bff from grade school on your summer vacation plans without the inherent time-lag involved in the postal service. Want to send Aunt Martha some pictures from your three-year-old’s birthday party? Drop them on her Facebook wall. Sharing meaningful content in a personalized context is incredibly easy- as is carrying on conversations in real time or near real time with chat functions.

It seems to me that your customers would benefit a lot more if the sales team (specifically, customer service) had control of the company’s Facebook page (and other social media assets). Does this mean that marketing should be left out in the cold? Of course not. But, someone has to hold the reins. Sales and Marketing teams work closely by necessity anyway (at least in companies that aren’t monolithic and dusty), and social media is just another area where both teams can receive shared value from access.

Better yet, experiment with merging your sales and marketing teams if they’re separate. Marketing would benefit from increased awareness of real day-to-day issues in the sales cycle and conversion funnel, and sales would benefit from being plugged more tightly into the big picture. After all, marketers dream… salespeople do.

Marketers, I have a suggestion for your social media strategy. Don’t use it to crowdsource advertising ideas (like Clorox recently did with GreenWorks and the stay-at-home-new-mom demographic). That’s just plain lazy- and a conceptual problem. If you ask people what they want to hear and then give it to them, you’re not communicating anything new or exciting… you might as well hold up a mirror with a little note scrawled on a cocktail napkin underneath that says ‘keep on buying me because you already do!’

TouchPad the Nemesis of the iPad? No.

This post is in response to iPad met its match in the TouchPad, by Brooke Crothers. It’s very rare for me to run across an article whose underlying premise is so flawed that I feel compelled to respond… yet here we are. What’s even more damning is that the piece was written for C|Net, normally a producer of high-quality content (though specializing in technology, not business).

Here are the basic points of Crothers’ article [with my comments]:

  • The sell-off of the TouchPad comes close to rivaling the consumer fixation for the iPad [no, the firesale price point was the fixation]
  • HP had the necessary conditions to make a run at the iPad (vertical technology platform, enormous company resources) [no, they lack a high-quality application development ecosystem- which is currently THE critical value proposition point in this space]
  • Tablets competing with the iPad are generally too expensive for mass adoption at the $499 price point in the US [yes, the price for the iPad is artificially high- it’s not a traditional mark-up price point, but a what-will-the-market-bear price point due to the Cult of Apple marketing phenomenon; Apple’s margin on the iPad is currently around 42%]
  • HP had the resources to use the TouchPad as a loss-leader, and should have [capital resources, yes, should-have, no]

The article, though structurally well-written (as one would expect from a professional journalist), conveniently ignores the big picture. Not to mention the entire point of selling a product in the first place.

Profit.

Crothers suggests that HP could have toppled the iPad as the dominant tablet product on the market, had it allocated some of its marketing budget for the category launch to subsidizing the price of the TouchPad. That’s an oversimplification based on faulty logic, plainly put. How do we define ‘dominance’ anyway? The number of devices out there? Brand loyalty? Sales? The big one is sales. Why else are terms like ‘revenue’, ‘market share’, and ‘earnings call’ so frequently used in business journalism, if sales isn’t the gold standard of success metrics? An extension of sales, naturally, is profit.

How do you turn a profit when you’re selling a product for less than what it costs to manufacture? That’s what Crothers tells us HP should have done when he suggested that the company could have used the TouchPad as a loss-leader. The question is, a loss-leader for what? For a loss-leader to be a winning strategy, it’s imperative that you have something in your arsenal to sell which adds value to the product, and that consumers are likely to buy. In the case of tablets, the common ones are hardware accessories (decorative covers, external speakers, screen protectors, etc) and apps (software applications, for the uninitiated). Loss-leaders do work in the tech space, proven time and again by cellular providers who subsidize handset purchases (Bell, Rogers, Telus, or any of the low-rent brands that sell you a cutting-edge phone at a fraction of the retail price if you sign up for a two or three year contract). This is ultimately profitable for them because the margins on their recurring services over the two or three year term are astronomical. For instance, some number-crunching by TechCrunch recently revealed that it costs AT&T about 1/100,000 of a cent (hard cost of the DCCH control channel, not the opportunity cost of an SMS, which is $0) when you send a text message from your phone, and the a-la carte cost to you is 20 cents. Even with a $10 unlimited monthly texting add-on, you’d have to send a hundred million text messages in a month before you gobbled up AT&T’s profit on execution of that particular service. The scale of execution and margin is similar for the Canadian providers.
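If you want to sanity-check that break-even figure, the arithmetic is straightforward:

```python
# Back-of-the-envelope check of the texting margin described above.
plan_revenue_cents = 10 * 100        # $10 unlimited texting add-on, in cents
cost_per_sms_cents = 1 / 100_000     # ~1/100,000 of a cent per message

break_even_messages = plan_revenue_cents / cost_per_sms_cents
print(f"{break_even_messages:,.0f} messages")  # -> 100,000,000
```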

Here’s the basic problem for HP/TouchPad “dominance”: there’s no value-added proposition in place to back up a subsidized hardware sale. The marketplace for webOS apps is virtually non-existent (a few third-party websites list around 7,000 for the platform, but there isn’t anything ‘official’ out there from HP). Though you can find apps for the platform, most of them are free. Taking a 30% commission on app sales (like Apple does) generates no revenue for HP when the apps are free. Don’t get me wrong, the goodwill of consumers (free apps) and developers (no commissions) toward your brand is important… but when you’re spending nine figures on a product category launch, it should take a back seat to actually bringing home the bacon so you can recoup some of the costs and keep your operating cash fluid (at least in the short-term).

Could HP have taken on the iPad with the TouchPad? Sure. But not without building the applications marketplace and developer community first- and this is something they should have been doing with the Palm Pre, long before the TouchPad was a twinkle in the design team’s eye. It’s a relative impossibility now, with three established application ecosystems out there for mobile (Apple’s App Store, Google’s Android Market, and RIM’s BlackBerry App World). Not many developers will take on a fourth OS… many are even abandoning RIM’s, due to poor support for developers and lower profit potential from the smaller user base.

Personally, I think the demise of the TouchPad is a shame. I had the opportunity to demo it (long before the firesale) on behalf of a client who was negotiating a procurement contract. The UI is clean and efficient, and the hardware snappy and responsive. But without apps, it was really just a glorified eReader to me at quadruple the price. Still, I’m rooting for someone to come along and offer Apple real competition- without some significant ethical changes in Cupertino, I won’t be an Apple customer again.

Privacy: What Is It, and Why Does It Matter?

This article is written from my perspective (one of the oldest of the Gen-Y’ers)- and I’m sure that members of other generations will have different views on privacy based on collective and individualized experience. Gen X’ers and Boomers as a whole tend to be more cognizant of personal privacy issues, while Millennials (as a whole) tend not to care much about personal privacy.

Here’s the underlying influencer of personal privacy: Trust. Trust influences the importance of privacy to the individual, and to the collective mindset of generational, cultural, or sociological groups. When individuals or groups have a high degree of trust for a collector of their personal information, privacy tends to matter less. When individuals or groups have a low degree of trust, privacy tends to matter more.

Trust, of course, can reflect a wide spectrum of factors:

Do I trust the organization to use the information it has collected in a manner that doesn’t run counter to my personal or professional interests?

Do I trust in the organization’s ability to protect my personal information that they have collected and stored?

Do I trust in the organization to conduct careful diligence of the outside organizations with which it shares my information? [with respect to the first question]

The sensitivity of the data involved also factors heavily into trust and privacy. Banking information and account numbers are more sensitive than your favourite colour, for instance. Organizations burn through incredible sums of cash to protect and secure the former, while the latter passes by more or less unnoticed and uncared-about by third parties.

We’ve all heard and read about the privacy issues that large companies like Facebook and Google wrestle with. The media often tends to polarize the issue by painting these companies as big, bad corporations primarily interested in screwing the unsuspecting consumer by using their personal information for ill-gotten financial gain. Perhaps that’s true, to some degree- though I doubt in practice it’s as ominous as it sounds, or the intent overly malicious.

Are Facebook and Google chock-full of faceless evil men and women in suits, ready to make a buck at your expense? Short answer: No. Media organizations have a vested interest in creating content which drives consumer engagement, thereby selling newspapers (or driving traffic to their websites)… which has the effect of increasing advertising revenues (the $1.75 you pay for a newspaper at the news-stand doesn’t even come close to covering expenditures, let alone making media organizations profitable). Shock, anger, and negativity drive engagement far more effectively than happiness and positivity.

That being said, can Facebook and Google manage consumer-related privacy more effectively? Absolutely. Personally, I think a good start would be a more robust system of contacting users and soliciting approval when making product or process changes which affect user privacy. It could be as simple as an email saying ‘This is what we’re doing, if you find it that distasteful, here’s how to close your account with us and move on. No hard feelings.’ (Not in those words, of course). A policy such as this would also have the benefit of forcing companies to invest more in User Experience elements… which would make it that much harder to drop the service in question when the issue of privacy surfaced again.

But what about this privacy thing in the first place? What is it, and why does it matter?

Privacy is the concept that we as individuals don’t want certain things about ourselves to be generally known, for fear that they could be used against us. ‘Certain things’ could be banking information (who wants their identity or money stolen? Not me). It could also be information exposing guilt of a crime or a morally ambiguous action, as tabooed by society (you sold tips about a company you work for, running afoul of insider-trading rules, or had an affair with a married man/woman). Perhaps you have an embarrassing genetic abnormality (a third nipple, or a crescent-shaped mole on your bottom). The consequences of these coming to light have negative ramifications for you of differing severity (read: insider trading has more dire ramifications than a mole on your butt).

Thus the creation of the concept of ‘privacy’- to protect us from those negative ramifications, and by extension, from the power dynamics leveraged by people who would use the sensitive information against us. But here’s where the concept of privacy falls apart: if a person has nothing to hide, then privacy becomes irrelevant. A massive simplification, of course… there will always be information which should be protected, at least for the conceivable future (such as banking information).

But what if privacy was ceded, to some degree, to business and social media applications and technologies? Imagine a world where everyone was on foursquare and Facebook, and didn’t have the option not to be. Imagine that the concept of personal privacy didn’t exist as it does now, and everyone was cognizant of the fact that their location could always be tracked (in the form of foursquare, or a similar service) and much of their social interactivity was laid open for all to see (in the form of Facebook, or a similar platform). Two basic groups of users would emerge: (1) those who didn’t care; either through ignorance or openness, and (2) those who did care, but self-policed their actions. The far more interesting of the two groups is the latter: those who self-police.

All human relationships and actions are driven by carrot-or-stick dynamics. We do or don’t do something because we perceive it will be good for us (and there are many frames of reference here), will be bad for someone else (who is in direct competition with us; ergo will result in a net-positive benefit for us), or will help us to avoid something that is bad for us (again, a net-positive benefit). Privacy actions work in the same way. We hide or reveal information because it will be good for us, bad for someone else, or help us avoid something that is bad for us. Consciously or unconsciously, all consumers/users approach privacy in this manner (with the exception of those totally uninformed of the issue or its potential ramifications). Knowing this, you can adjust your approach to user privacy accordingly.

There’s a question relating to user privacy that I’m asked quite frequently: should we care about user privacy, above and beyond what privacy legislation dictates?

My answer is a simple one: You should only care if your users do, or you think that they will at some point in the future.

You Twitt[erer]!

The backlash against Twitter over its addition of a native photo-sharing component is crap, and must cease immediately. No, I’m not a free-speech hating nutjob… but anyone who is still angry over the move by Twitter [ie TwitPic founder Noah Everett] is missing the big picture, and misdirecting their anger.

Here’s the bottom line: you don’t own Twitter. Twitter owns Twitter. Twitter was not created for you to start a business based on improving its service. Twitter is a business, and part of being a business is listening to your customers (or in this case, users) and improving your products and services based on their feedback. That’s Business 101. Sure, there’s a case to be made for outsourcing beyond your non-core expertise… but we’re talking web development here, not aerospace engineering.

If you’ve started a business based solely on building applications which run on another business’ service, you’re exposing your own business to a pretty serious strategic risk. There’s always a chance that the other business will decide to increase development in-house and make you obsolete in the process. And the other business is perfectly within its rights to do so, because… you don’t own them. Next time, do a little more thinking ahead and a lot less reactionary [insert appropriate expletive here].

Change the Tech, Change the World.

This entry is a response to (or rather, an extension of) an article written by British ex-pat Hermione Way about the US tech scene in Silicon Valley. It can be accessed at TheNextWeb here.

In Ms Way’s words, Silicon Valley is currently inundated with “… Groupon clone after Groupon clone, yawn… yet another social media dashboard, a cloud-based enterprise solution or, worse still, another photo sharing app …”

I couldn’t agree more. There’s a frenzy among VCs and other funding mechanisms to find the next Facebook. Normal due diligence and investor prudence seem to be increasingly tossed aside in favour of pre-money valuations in the range of $4-$5 million. That’s absolutely insane. Though I’m involved in the tech sector through a marketing analytics startup, I can’t shake off the conventional wisdom of my business education. How can anyone possibly value a company without sales, IP, or significant assets at four million dollars? Don’t get me wrong, I do feel that traditional valuation methods are a bit too stodgy for the realities of the internet technologies space… but the current VC culture in the valley is much more like a drunken frat party than a reliable investment vehicle.

Not to say that all funds down there are engaged in ridiculous practices, of course. I’m sure that there are still some partners and principals that haven’t gone loco.

This brings us to the root of the issue: the tech.

Why are talented developers only working on projects that are at best trivial, and at worst absolutely ludicrous? And I make this statement in the context of business opportunity, not usefulness. Sure, the Groupon model has its advantages (though I think Groupon is going to zero on a bullet in the medium-term… they lost $100m last quarter), and photo-sharing apps have a very kitsch consumerist, time-wasting usefulness. Enterprise-level cloud storage systems obviously have a market.

But all of these markets are saturated to the point of being opaque and hard like concrete (ever tried to penetrate concrete?).

If there are ten or twenty VC-backed firms in the space already doing more or less exactly what you want to do… why would you push forward? Personally, I might still push forward if I had some killer IP protection or some kind of USP that the competition couldn’t duplicate without a long development curve… but we’re talking daily deals, storage, and photo-sharing apps here. Those aren’t exactly novel concepts, and the technologies to service them work more or less in the same fashion. The only thing you can compete on is the interface and the user experience. Personally, I don’t want to beat myself over the head trying to outdo two hundred people who are just as smart as me on UX alone. Do you? Yes? Okay, move to Silicon Valley- and purchase a bottle of bourbon in the duty-free at the airport (you’ll need it). No? Read on.

Internet tech gives us, the everyday people, communication conduits through which we can change the world for the better. The internet is no longer a luxury, touching the lives of the majority of people on the planet… even the United Nations recently defined internet access as a basic human right (well, they actually declared disconnecting people from internet access a violation of human rights, but the sentiment is the same at its core: internet connectivity and access to global communications is a fundamental building block of human existence). Why are we so enamoured with building trivial little mobile phone apps that do diddly-squat to improve our lives? The power of internet technologies can provide us with so much more.

I’m not saying that everyone should drop their current dev work and set out to conquer poverty, world peace, and global hunger with a 99 cent Android-powered app (though if you decide to, you have my utmost respect… and I’ll even take you out for a beer or two if you’re in town or vice-versa). Entertainment has its place within the digital ecosystem, and we need these products in order to be highly-functional human beings. But personally, I think there’s much more opportunity for greatness (and financial success) in developing internet technologies which change the way people think or which change the way people do. People who are zoned into tech entrepreneurship refer to these as ‘disruptive’ technologies or firms (though I prefer to think of them as ‘enabling’).

To close, let me pose this question to you future tech entrepreneurs out there: where do you want to be? Developing 99 cent apps for the App Store that make a fart noise, or the $5m/year SaaS application that helps retailers streamline their regional supply chains? Or maybe you’d rather be like my friends at Get Q’d, who are developing applications to eliminate waiting in line-ups (who doesn’t hate waiting in line?).

Let me give you a hint: if you’re after cash and/or personal satisfaction, there’s a lot more of both in the last two choices.