June 10, 2011

Deceptive Assurances of Privacy?

Filed under: code, privacy — wseltzer @ 11:52 am

Earlier this week, Facebook expanded the roll-out of its facial recognition software to tag people in photos uploaded to the social networking site. Many observers and regulators responded with privacy concerns; EFF offered a video showing users how to opt out.

Tim O’Reilly, however, takes a different tack:

Face recognition is here to stay. My question is whether to pretend that it doesn’t exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.

…We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.

O’Reilly’s point — and face-recognition technology — is bigger than Facebook. Even if Facebook swore off the technology tomorrow, it would be out there, and likely used against us unless regulated. Yet we can’t decide on the proper scope of regulation without understanding the technology and its social implications.

By taking these latent capabilities (Riya was demonstrating them years ago; the NSA probably had them decades earlier) and making them visible, Facebook gives us more feedback on the privacy consequences of the tech. If part of that feedback is “ick, creepy” or worse, we should feed that into regulation of the technology’s use everywhere, not just in Facebook’s interface. Merely hiding the feature in the interface, while leaving it active in the background, would be deceptive: it would give us a false assurance of privacy. For all its blundering, Facebook seems to be blundering in the right direction now.

Compare the furor around Dropbox’s disclosure “clarification”. Dropbox had claimed that “All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password,” but recently updated that to the weaker assertion: “Like most online services, we have a small number of employees who must be able to access user data for the reasons stated in our privacy policy (e.g., when legally required to do so).” Dropbox had signaled “encrypted”: absolutely private, when it meant only relatively private. Users who acted on the assurance of complete secrecy were deceived; now those who know the true level of relative secrecy can update their assumptions and adapt behavior more appropriately.
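The gap between those two statements is architectural. The stronger design the first sentence implied is client-side encryption: a key derived from the user’s password encrypts files before upload, so the provider holds only ciphertext it cannot read. A minimal sketch, using the third-party cryptography package (a hypothetical illustration of the technique, not Dropbox’s actual design):

```python
import base64
import os

from cryptography.fernet import Fernet  # pip install cryptography
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

password = b"known only to the user"  # never sent to the server
salt = os.urandom(16)                 # random; stored alongside the ciphertext

# Derive the encryption key from the user's password.
kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
key = base64.urlsafe_b64encode(kdf.derive(password))

# Encrypt locally; only the ciphertext (and the salt) are uploaded.
ciphertext = Fernet(key).encrypt(b"contents of the file to upload")

# The provider stores opaque bytes: without the user's password, neither
# an employee nor a legal demand can recover the plaintext.
```

Under that design, “inaccessible without your account password” would have been literally true. Dropbox’s actual architecture keeps the encryption keys on the provider’s side, which is exactly why its employees can access user data “when legally required to do so.”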

Privacy-invasive technology and the limits of privacy-protection should be visible. Visibility feeds more and better-controlled experiments to help us understand the scope of privacy, publicity, and the space in between (which Woody Hartzog and Fred Stutzman call “obscurity” in a very helpful draft). Then, we should implement privacy rules uniformly to reinforce our social choices.

June 9, 2011

UN Rapporteur on Free Expression on the Internet

Filed under: Chilling Effects, Internet, censorship, open, privacy — wseltzer @ 5:54 pm

“[D]ue to the unique characteristics of the Internet, regulations or restrictions which may be deemed legitimate and proportionate for traditional media are often not so with regard to the Internet.”

This statement of Internet exceptionalism comes not from the fringes of online debate, but from the UN Human Rights Council’s Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. The Rapporteur, Frank La Rue, recently presented a report emphasizing the importance of the rule of law and respect for free expression. Among its conclusions:

  • State-sponsored content blocking or filtering is “frequently in violation of their obligation to guarantee the right to freedom of expression.” Blocking is often overbroad and vague, kept secret (non-transparent), and frequently lacks independent review.
  • Intermediary liability, even with a notice-and-takedown safe harbor, “is subject to abuse by both State and private actors.” Private intermediaries, like states, tend to over-censor and lack transparency, and they are not best placed to make legality determinations. “The Special Rapporteur believes that censorship measures should never be delegated to a private entity, and that no one should be held liable for content on the Internet of which they are not the author.”
  • Disconnecting users cuts off their Internet-based freedom of expression. The report singles out HADOPI, the UK Digital Economy Act, and ACTA for concern, urging states “to repeal or amend existing intellectual copyright laws which permit users to be disconnected from Internet access, and to refrain from adopting such laws.”
  • Anonymity. “The right to privacy is essential for individuals to express themselves freely. Indeed, throughout history, people’s willingness to engage in debate on controversial subjects in the public sphere has always been linked to possibilities for doing so anonymously.” Monitoring, Real-ID requirements, and personal data collection all threaten free expression, “undermin[ing] people’s confidence and security on the Internet, thus impeding the free flow of information and ideas online.”

“The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.” I couldn’t say it better myself.

Editorials against PROTECT-IP

Filed under: Chilling Effects, censorship, copyright, domain names — wseltzer @ 2:40 pm

First the Los Angeles Times and now the New York Times have printed editorials critical of the PROTECT-IP bill.

Both the LAT and NYT support copyright — and announce as much in their opening sentences. That support doesn’t mean we should sacrifice the security and stability of the Internet for legitimate DNS users, or the transparency of the rule of law. As the LAT puts it, “The main problem with the bill is in its effort to render sites invisible as well as unprofitable.” Pulling sites from search results won’t stop people from reaching them, but it will stifle public debate. Copyright must not be used to shut down the engine of free expression for others.
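The DNS experts’ core objection is easy to demonstrate: resolver-level blocking binds only users who keep using the blocking resolver. A minimal sketch, assuming the third-party dnspython package and using example.com as a stand-in for a blocked domain:

```python
import dns.resolver  # pip install dnspython

# Ignore the (hypothetically filtering) system resolver and query a
# different one directly, here Google's public resolver at 8.8.8.8.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

# If that resolver isn't filtering the name, the lookup succeeds no
# matter what the user's ISP resolver has been ordered to block.
for record in resolver.resolve("example.com", "A"):
    print(record.address)
```

Because circumvention is this trivial, filtering mostly inconveniences users who stick with the defaults while pushing everyone else toward unvetted resolvers, which is the security cost the DNS experts warn about.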

Let’s hope these policy criticisms, combined with the technical critiques from a crew of DNS experts, will begin a groundswell against this poorly considered bill.

June 8, 2011

Privacy, Attention, and Political Community

Filed under: privacy — wseltzer @ 2:22 pm

In the ferment of ideas from PLSC and the lead-up to Berkman’s HyperPublic, I wanted to get back to my draft paper on “Privacy, Attention, and Political Community” (PDF).

Privacy scholarship is expanding its concept of what we’re trying to protect when we protect “privacy.” In U.S. legal thought, that trend leads from Warren and Brandeis’s “right to be let alone,” through Prosser’s four privacy torts, to Dan Solove’s 16-part taxonomy of privacy-related problems.

In this thicker privacy soup, I focus on the social aspects, what danah boyd and others refer to as “privacy in public.” It is not paradoxical that we want to exchange more information with more people, yet preserve some control over the scope and timing of those disclosures. Rather, privacy negotiation is part of building political and social community. I use the political liberalism of John Rawls to illuminate the political aspects: social consensus from differing background conceptions depends on a deliberate exchange of information.

We learn to negotiate privacy choices as we see them reflected around us. Yet technological advances challenge our privacy instincts by enabling non-transparent information collection: data aggregators amass and mine detailed long-term profiles from limited shared glimpses; online social networks leak information through continuous feeding of social pathways we might rarely activate offline; cell phones become fine-grained location-tracking devices of interest to governments and private companies, unnoticed until we map them.

I suggest that privacy depends on social feedback and flow-control. We can take responsibility for our privacy choices only when we understand them, and we can understand them best through seeing them operate. Facebook’s newsfeed sparked outrage when it launched by surprise, but as users saw their actions reflected in feeds, they could learn to shape those streams to construct the self-image they wanted to show. Other aspects of interface design can similarly help us to manage our social privacy.

This perspective sits before legal causes of action and remedies, but it suggests that we might call upon regulation in the service of transparency of data collection. Architectures of data collection should make privacy and disclosure visible.

Cross-posted at HyperPublic blog.
