This article is written from my perspective (one of the oldest of the Gen-Y’ers)- and I’m sure that members of other generations will have different views on privacy based on collective and individualized experience. Gen X’ers and Boomers as a whole tend to be more cognizant of personal privacy issues, while Millennials (as a whole) tend not to care much about personal privacy.
Here’s the underlying influencer of personal privacy: Trust. Trust influences the importance of privacy to the individual, and to the collective mindset of generational, cultural, or sociological groups. When individuals or groups have a high degree of trust for a collector of their personal information, privacy tends to matter less. When individuals or groups have a low degree of trust, privacy tends to matter more.
Trust, of course, can reflect a wide spectrum of factors:
Do I trust the organization to use the information it has collected in a manner that doesn’t run counter to my personal or professional interests?
Do I trust the organization’s ability to protect the personal information it has collected and stored?
Do I trust the organization to conduct careful diligence on the outside organizations with which it shares my information? [with respect to the first question]
The sensitivity of the data involved also strongly influences trust and privacy. Banking information and account numbers are more sensitive than your favourite colour, for instance. Organizations burn through incredible sums of cash to protect and secure the former, while the latter passes by third parties more or less unnoticed and uncared about.
We’ve all heard and read in the media about the privacy issues which large companies like Facebook and Google wrestle with. The media often tends to polarize the issue by painting these companies as the big, bad corporation primarily interested in screwing the unsuspecting consumer by using their personal information for ill-gotten financial gain. Perhaps that’s true, to some degree- though I doubt in practice it’s as ominous as it sounds, or the intent overly malicious.
Are Facebook and Google chock-full of faceless evil men and women in suits, ready to make a buck at your expense? Short answer: No. Media organizations have a vested interest in creating content which drives consumer engagement, thereby selling newspapers (or driving traffic to their websites)… which has the effect of increasing advertising revenues (the $1.75 you pay for a newspaper at the news-stand doesn’t even come close to covering expenditures, let alone make media organizations profitable). Shock, anger, and negativity drive engagement far more effectively than happiness and positivity.
That being said, can Facebook and Google manage consumer-related privacy more effectively? Absolutely. Personally, I think a good start would be a more robust system of contacting users and soliciting approval when making product or process changes which affect user privacy. It could be as simple as an email saying ‘This is what we’re doing; if you find it distasteful, here’s how to close your account with us and move on. No hard feelings.’ (Not in those words, of course). A policy such as this would also have the benefit of forcing companies to invest more in User Experience elements… which would make it that much harder to drop the service in question when the issue of privacy surfaced again.
But what about this privacy thing in the first place? What is it, and why does it matter?
Privacy is the concept that we as individuals don’t want certain things about ourselves to be generally known, for fear that they could be used against us. ‘Certain things’ could be banking information (who wants their identity or money stolen? Not me). It could also be information exposing guilt of a crime or a morally ambiguous action, as tabooed by society (you sold tips about a company you work for, leading to an infraction of insider trading rules, or had an affair with a married man/woman). Perhaps you have an embarrassing genetic abnormality (a third nipple, or a crescent-shaped mole on your bottom). These coming to light would have negative ramifications for you of differing severities (read: insider trading has more dire ramifications than a mole on your butt).
Thus the creation of the concept of ‘privacy’- to protect us from those negative ramifications, and by extension, from the power dynamics leveraged by people who would use the sensitive information against us. But here’s where the concept of privacy falls apart: if a person has nothing to hide, then privacy becomes irrelevant. That’s a massive simplification, of course… there will always be information which should be protected, at least for the conceivable future (such as banking information).
But what if privacy was ceded, to some degree, to business and social media applications and technologies? Imagine a world where everyone was on foursquare and Facebook, and didn’t have the option not to be. Imagine that the concept of personal privacy didn’t exist as it does now, and everyone was cognizant of the fact that their location could always be tracked (in the form of foursquare, or a similar service) and much of their social interactivity was laid open for all to see (in the form of Facebook, or a similar platform). Two basic groups of users would emerge: (1) those who didn’t care; either through ignorance or openness, and (2) those who did care, but self-policed their actions. The far more interesting of the two groups is the latter: those who self-police.
All human relationships and actions are driven by carrot-or-stick dynamics. We do or don’t do something because we perceive it will be good for us (and there are many frames of reference here), will be bad for someone else (who is in direct competition with us; ergo will result in a net-positive benefit for us), or will help us to avoid something that is bad for us (again, a net-positive benefit). Privacy actions work in the same way. We hide or reveal information because it will be good for us, bad for someone else, or help us avoid something that is bad for us. Consciously or unconsciously, all consumers/users approach privacy in this manner (with the exception of those totally uninformed of the issue or its potential ramifications). Knowing this, you can adjust your approach to user privacy accordingly.
There’s a question relating to user privacy that I’m asked quite frequently: should we care about user privacy, above and beyond what privacy legislation dictates?
My answer is a simple one: You should only care if your users do, or you think that they will at some point in the future.