Brian Kennish and Casey Oppenheim of Disconnect led this session. They’re a privacy start-up making simple tools to help people manage their data. Brian worked at DoubleClick and Google; Casey worked as a (criminal?) investigator in Manhattan and as a privacy lawyer. History of the company: an article on Facebook leaking its vast private data store prompted the creation of a browser plug-in. Expecting a small audience, they ended up with many users within two weeks. A study of how much data social networking companies collect (lots! wow), and the same for ad companies: “anonymous” may not be so anonymous. (Note: look for Brian’s talk at DefCon.)
Browser extension: disables third-party tracking, depersonalizes your searches, shows blocked services and cookies, and lets you easily unblock services. Privacy icons project: four icons that represent various privacy policies.
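The core of third-party blocking can be sketched roughly as follows. This is a hypothetical simplification, not Disconnect’s actual code; the blocklist entries and domain logic here are purely illustrative:

```python
# Hypothetical sketch of third-party tracker blocking, loosely modeled
# on what an extension like Disconnect does. The blocklist below is
# illustrative, not Disconnect's real service list.

BLOCKLIST = {"doubleclick.net", "facebook.com", "google-analytics.com"}

def registrable_domain(host):
    """Naive eTLD+1 extraction: last two labels of the hostname.
    (Real extensions use the Public Suffix List instead.)"""
    return ".".join(host.split(".")[-2:])

def should_block(page_host, request_host):
    """Block a request if it is third-party (a different registrable
    domain than the page) and targets a known tracking service."""
    page_domain = registrable_domain(page_host)
    request_domain = registrable_domain(request_host)
    is_third_party = page_domain != request_domain
    return is_third_party and request_domain in BLOCKLIST

print(should_block("news.example.com", "ads.doubleclick.net"))  # → True
print(should_block("www.facebook.com", "api.facebook.com"))     # → False (first-party)
```

Note the distinction the sketch makes: the same tracking domain is allowed when you visit its own site (first-party) but blocked when it is embedded elsewhere, which matches how such extensions avoid breaking the services themselves.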
Revenue model: still pending and in the works; users may eventually monetize their own data.
How do we know, and how do we understand, what’s in these TOS agreements? They’re hoping to crowdsource analysis of various policy statements. At some point, the icons will be displayed in browsers. When users understand what’s happening with their data, they’re more interested in privacy.
This session is a “behind the scenes look at Microsoft’s internal privacy program.” See the agenda for more information. Participants: Kim Howell, Reese Solberg, Michelle Bruno.
From Kim: Website questions: is this a new domain? Is there a link to a privacy statement? If a statement exists, does it match the service and cover everything? Then data collection (see above). Send questions to the new site/organization, get information, iterate. More questions: authentication, communication, vendors. Are people creating new accounts? How is email used? How are data access requests handled? Which vendors are involved? Next round of questions: how well do IT, PR, and the lawyers work together? Does the privacy statement match the service? Where’s the plausible deniability? Make sure what’s required is clear and what’s optional. Provide better notice about use of information and data retention. Is HTTPS being used? How easy/obvious is it to obtain informed consent when signing up? Companies often think they can write a privacy statement at the last minute. (Wrong.)
Next iteration: What new data is being collected, and where is it being sent? What other (new) features are coming up? What info is shared? Location: is it always being sent, or only when the app is open? What other info (unique device ID, cell tower info, gender, etc.) is being sent with the location data? What about data retention? If the service changes, the company may need to re-obtain opt-in from application users. Privacy controls? (Example of the data circulating within different departments of the company: “the accounting department loves this data.”) Who needs access, and for what use? Access to raw data or aggregated statistics? Have data handlers been trained? Unique identifiers are not the only way of identifying a person. What’s the intended use of the collected data?
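The “send questions, get information, iterate” loop described above lends itself to a simple structured checklist. A hypothetical sketch (the rounds and questions are paraphrased from these notes; the data structure is my own, not Microsoft’s actual review tooling):

```python
# Hypothetical encoding of an iterative privacy-review checklist,
# paraphrasing the question rounds described in the session notes.

REVIEW_ROUNDS = [
    ("initial", [
        "Is this a new domain?",
        "Does the privacy statement exist and cover everything?",
        "What data is collected?",
    ]),
    ("follow-up", [
        "How is authentication handled?",
        "How is email used?",
        "Which vendors receive data?",
    ]),
    ("deep dive", [
        "Is HTTPS used everywhere?",
        "What is the data-retention policy?",
        "Who needs access: raw data or aggregated statistics?",
    ]),
]

def open_questions(answered):
    """Return the earliest round that still has unanswered questions,
    mirroring the 'send questions, get information, iterate' loop."""
    for round_name, questions in REVIEW_ROUNDS:
        unanswered = [q for q in questions if q not in answered]
        if unanswered:
            return round_name, unanswered
    return None, []

name, todo = open_questions({"Is this a new domain?"})
print(name, len(todo))  # → initial 2
```

The point of encoding the rounds explicitly is that the review never skips ahead: later rounds are only reached once earlier ones are fully answered, which is what “iterate” means in practice here.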
Michelle Bruno, Technical Privacy Manager: see printed case study (not online). Focus areas:
Level setting: focus on use of customer data, customer expectations, opting out
Author guidance: “how to” guides, privacy review checklist, company activities, data sharing, research and betas
Position yourself: pro-business privacy message, culture of privacy as a value-add
Piggyback: identify existing processes that you can take advantage of: spec templates, guidelines, bug tracking, testing, release management…
Analyze and assess: comprehensive data-gathering plan to understand company’s risk
Educate: recruit pro-privacy contacts in each group to help you succeed; spread the word to peers about new processes/resources
Questions: Is there tension between user controls and corporate data collection? Make sure the value matches and is understood by both sides. Look at what the business can put in place to allow better user controls. Microsoft has a federated privacy team; Kim’s team defines what compliance looks like.
Not mentioned in this panel but of some related interest (about Terms, not Privacy Policies): TOSAmend and EFF’s TOSback.
Data-gathering firms and technology companies are aggressively matching people’s TV-viewing behavior with other personal data—in some cases, prescription-drug records obtained from insurers—and using it to help advertisers buy ads targeted to shows watched by certain kinds of people.
How this translates, the article explains, is that these companies are now tracking you at a level of surfing and life-involvement that is highly customizable to your TV. (They don’t have to know your name; they know who you are by your habits.) Let’s say, for example, that you watched five cookie commercials (tracked), then later in the week you bought a package of cookies (tracked from purchase records). These companies will start to build a picture of how many cookie commercials (or ads for anything else you watch) it takes to affect your behavior. Using an example from the article, the U.S. Army tested four different ads for recruitment:
One group, dubbed “family influencers” by Cablevision, saw an ad featuring a daughter discussing with her parents her decision to enlist. Another group, “youth ethnic I,” saw an ad featuring African-American men testing and repairing machinery. A third, “youth ethnic II,” saw soldiers of various ethnicities doing team activities.
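The matching described above amounts to a join between ad-exposure logs and purchase logs keyed by household. Here is a deliberately simplified, hypothetical sketch of that technique; the data, identifiers, and structure are invented for illustration and are not any company’s actual pipeline:

```python
# Hypothetical sketch of matching TV ad exposures to purchases per
# household, illustrating the kind of correlation the article describes.
# All data below is invented for illustration.

from collections import Counter

# (household_id, ad_category) — one row per commercial viewed
exposures = [
    ("hh1", "cookies"), ("hh1", "cookies"), ("hh1", "cookies"),
    ("hh1", "cookies"), ("hh1", "cookies"),
    ("hh2", "cookies"), ("hh2", "cookies"),
]

# (household_id, product_category) — from purchase records
purchases = [("hh1", "cookies")]

def exposures_before_purchase(exposures, purchases):
    """For each purchase, count how many matching ads the household saw.
    Advertisers use counts like this to estimate how many impressions
    it takes to 'convert' a viewer."""
    counts = Counter(exposures)
    return {(hh, cat): counts[(hh, cat)] for hh, cat in purchases}

print(exposures_before_purchase(exposures, purchases))
# → {('hh1', 'cookies'): 5}
```

Note that no name appears anywhere in this sketch; the household ID alone is enough to link viewing to buying, which is exactly why “we don’t collect personally identifiable information” is such a slippery claim.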
Someone will likely claim that there’s no personally identifiable information being exchanged. That claim will be a lie: it can only be made by defining “personally identifiable information” in a very different way than regular people (or government regulators) would. This is really about tracking and compiling the most intimate details of our lives so we can be manipulated into acting a certain way.
Coaching moment: Corporate behavior like this is an example of a slippery slope. There is no real end to the social destruction that could be wrought on our world by corporate visions of a “good society.” I doubt that any one person who works for these companies would wish to be tracked and manipulated in this way. But when that person goes to work for a company that does this, the person is “just doing his job.”
There’s a clear reason why “Do Not Track” legislation is being proposed. This story points out an example of tracking that, I would argue, crosses ethical boundaries. It’s one thing to use voluntarily shared data about people. It’s another to invade their homes and lives for corporate gain.
I might be overreacting. How do you feel about this?