March 5, 2012

ICANN Preview: WHOIS and Privacy

Filed under: ICANN, WHOIS, code, domain names, privacy — wseltzer @ 5:39 pm

Next week, ICANN will meet in San Jose, Costa Rica. While we’ve only just seen the schedule, it’s clear we’ll be hearing a lot about WHOIS. The WHOIS Review Team’s draft final report is out for public comment.

In addition, ICANN just posted a summary of negotiations around the Registrar Accreditation Agreement and Law Enforcement requests. First among those requests from law enforcement:

(a) If ICANN creates a Privacy/Proxy Accreditation
Service, Registrars will accept proxy/privacy registrations only
from accredited providers; (b) “Registrants using privacy/proxy
registration services will have authentic Whois information
immediately published by Registrar when registrant is found to be
violating terms of service”

Now, even the WHOIS Review Team, which was not heavy with privacy advocates (thanks to those who were there!), acknowledged several legitimate uses of privacy or proxy services in domain registration, including companies seeking to hide upcoming mergers or product launches; organizations sharing minority or controversial viewpoints; individuals; and webmasters registering on behalf of clients. The Non-Commercial Stakeholders Group listed others who might be concerned about publishing their identities in domain registration in comments on a .CAT privacy amendment.
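
For context on what “published Whois information” means mechanically: WHOIS is a trivially simple protocol (RFC 3912), a single query over TCP port 43 that returns registrant contact data in plain text. Here is a minimal sketch; the registry server and domain are illustrative choices of mine, not details from the report.

# Minimal WHOIS lookup over TCP port 43 (RFC 3912). The server and domain
# below are illustrative assumptions, not taken from the post or the report.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send one WHOIS request and return the raw text response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# The .com registry WHOIS is "thin": it points to the sponsoring registrar's
# WHOIS server, which publishes contact details. For a proxy registration the
# privacy service's details appear there; drop the proxy and the registrant's
# own name, address, email, and phone become what anyone's query returns.
print(whois_query("example.com"))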

Would the proposed amendments (whose language is apparently agreed-upon but unshown to the broader community) protect these interests? Would they protect the confidentiality of an attorney-client relationship, where the attorney acted as proxy for a client? Will we all have to use ccTLDs (such as .is) whose operators are not bound by these rules? More once we hit the ground in San Jose…

June 10, 2011

Deceptive Assurances of Privacy?

Filed under: code, privacy — wseltzer @ 11:52 am

Earlier this week, Facebook expanded the roll-out of its facial recognition software to tag people in photos uploaded to the social networking site. Many observers and regulators responded with privacy concerns; EFF offered a video showing users how to opt out.

Tim O’Reilly, however, takes a different tack:

Face recognition is here to stay. My question is whether to pretend that it doesn’t exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.

…We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.

O’Reilly’s point (and face-recognition technology) is bigger than Facebook. Even if Facebook swore off the technology tomorrow, it would be out there, and likely used against us unless regulated. Yet we can’t decide on the proper scope of regulation without understanding the technology and its social implications.
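
To make “it would be out there” concrete: the detection step is already a commodity. A rough sketch with a stock OpenCV cascade (assuming the opencv-python package is installed; the image path is a placeholder), with recognition, matching a detected face to an identity, only one library further along.

# Face *detection* with an off-the-shelf OpenCV cascade: a placeholder
# illustration of how commoditized these building blocks have become.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("Detected", len(faces), "face(s)")
# Recognition (matching each detected face to a known identity) is the
# further step Facebook automated; off-the-shelf libraries do that too.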

By taking these latent capabilities (Riya was demonstrating them years ago; the NSA probably had them decades earlier) and making them visible, Facebook gives us more feedback on the privacy consequences of the tech. If part of that feedback is “ick, creepy” or worse, we should feed that into regulation for the technology’s use everywhere, not just in Facebook’s interface. Merely hiding the feature in the interface, while leaving it active in the background, would be deceptive: it would give us a false assurance of privacy. For all its blundering, Facebook seems to be blundering in the right direction now.

Compare the furor around Dropbox’s disclosure “clarification”. Dropbox had claimed that “All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password,” but recently updated that to the weaker assertion: “Like most online services, we have a small number of employees who must be able to access user data for the reasons stated in our privacy policy (e.g., when legally required to do so).” Dropbox had signaled “encrypted”: absolutely private, when it meant only relatively private. Users who acted on the assurance of complete secrecy were deceived; now those who know the true level of relative secrecy can update their assumptions and adapt behavior more appropriately.
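
The gap between “encrypted” and “inaccessible” is the question of who holds the key. When the provider encrypts server-side with its own keys, it (and anyone who can compel it) can decrypt; only client-side encryption, where the key is derived on the user’s machine and never leaves it, matches the absolute reading users took from Dropbox’s original claim. A minimal sketch of the client-side approach, assuming the Python cryptography package; the function and variable names are mine, not any vendor’s.

# Client-side encryption before upload: the service stores only this blob and
# never sees the key, so even a subpoena to the provider yields ciphertext.
# A sketch assuming the 'cryptography' package; not a vetted design.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_for_upload(plaintext: bytes, password: str) -> bytes:
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = kdf.derive(password.encode("utf-8"))   # AES-256 key, derived locally
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext             # only this blob leaves the client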

Privacy-invasive technology and the limits of privacy-protection should be visible. Visibility feeds more and better-controlled experiments to help us understand the scope of privacy, publicity, and the space in between (which Woody Hartzog and Fred Stutzman call “obscurity” in a very helpful draft). Then, we should implement privacy rules uniformly to reinforce our social choices.

June 9, 2011

UN Rapporteur on Free Expression on the Internet

Filed under: Chilling Effects, Internet, censorship, open, privacy — wseltzer @ 5:54 pm

“[D]ue to the unique characteristics of the Internet, regulations or restrictions which may be deemed legitimate and proportionate for traditional media are often not so with regard to the Internet.”

This statement of Internet exceptionalism comes not from the fringes of online debate, but from the UN Human Rights Council’s Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. The Rapporteur, Frank La Rue, recently presented a report emphasizing the importance of rule of law and respect for free expression.

  • State-sponsored content blocking or filtering is “frequently in violation of their obligation to guarantee the right to freedom of expression.” Blocking is often overbroad and vague, secret (non-transparent), and lacking in independent review.
  • Intermediary liability, even with notice-and-takedown safe-harbor, “is subject to abuse by both State and private actors.” Private intermediaries, like states, will tend to over-censor and lack transparency. They’re not best placed to make legality determinations. “The Special Rapporteur believes that censorship measures should never be delegated to a private entity, and that no one should be held liable for content on the Internet of which they are not the author.”
  • Disconnecting users cuts off their Internet-based freedom of expression. The report calls out HADOPI, the UK Digital Economy Bill, and ACTA for concern, urging states “to repeal or amend existing intellectual copyright laws which permit users to be disconnected from Internet access, and to refrain from adopting such laws.”
  • Anonymity. “The right to privacy is essential for individuals to express themselves freely. Indeed, throughout history, people’s willingness to engage in debate on controversial subjects in the public sphere has always been linked to possibilities for doing so anonymously.” Monitoring, Real-ID requirements, and personal data collection all threaten free expression, “undermin[ing] people’s confidence and security on the Internet, thus impeding the free flow of information and ideas online.”

    “The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.” I couldn’t say it better myself.

June 8, 2011

    Privacy, Attention, and Political Community

    Filed under: privacy — wseltzer @ 2:22 pm

In the ferment of ideas from PLSC and the lead-up to Berkman’s HyperPublic, I wanted to get back to my draft paper on “Privacy, Attention, and Political Community” (PDF).

Privacy scholarship is expanding its concept of what we’re trying to protect when we protect “privacy.” In U.S. legal thought, that trend leads from Warren and Brandeis’s “right to be let alone,” through Prosser’s four privacy torts, to Dan Solove’s 16-part taxonomy of privacy-related problems.

    In this thicker privacy soup, I focus on the social aspects, what danah boyd and others refer to as “privacy in public.” It is not paradoxical that we want to exchange more information with more people, yet preserve some control over the scope and timing of those disclosures. Rather, privacy negotiation is part of building political and social community. I use the political liberalism of John Rawls to illuminate the political aspects: social consensus from differing background conceptions depends on a deliberate exchange of information.

    We learn to negotiate privacy choices as we see them reflected around us. Yet technological advances challenge our privacy instincts by enabling non-transparent information collection: data aggregators amass and mine detailed long-term profiles from limited shared glimpses; online social networks leak information through continuous feeding of social pathways we might rarely activate offline; cell phones become fine-grained location-tracking devices of interest to governments and private companies, unnoticed until we map them.

    I suggest that privacy depends on social feedback and flow-control. We can take responsibility for our privacy choices only when we understand them, and we can understand them best through seeing them operate. Facebook’s newsfeed sparked outrage when it launched by surprise, but as users saw their actions reflected in feeds, they could learn to shape those streams to construct the self-image they wanted to show. Other aspects of interface design can similarly help us to manage our social privacy.

    This perspective sits before legal causes of action and remedies, but it suggests that we might call upon regulation in the service of transparency of data-collection. Architectures of data collection should make privacy and disclosure visible.

    Cross-posted at HyperPublic blog.

    December 11, 2009

    The Goldilocks Problem of Privacy in Public

    Filed under: commons, events, musings, networks, politics, privacy — wseltzer @ 8:55 am

One of the very interesting sessions at Supernova featured a pair of speakers on aspects of privacy and publicity: danah boyd on “visibility” and Adam Greenfield on “urban objects.” Together, their talks got me thinking about the functions of privacy: how can we steer a course between too much and too little information-sharing?

danah pointed out the many places where we don’t learn enough. We “see” others on social media but fail to follow through on what we learn. She described a teen whose MySpace page chronicled abuse at her mother’s hands for months before the girl picked up a weapon. After the fact, the media jumped on “murder has a MySpace,” but beforehand, no one had used that public information to help her out of the abuse. In a less dramatic case of short-sighted vision, danah showed Twitter users responding to trending black names after the BET Awards with “what’s happening to the neighborhood?” Despite the possibilities networked media offer, we often fail to look below the surface, to learn about those around us and make connections.

    Adam, showing the possibilities of networked sensors in urban environments, described a consequence of “learning too much.” Neighbors in a small apartment building had been getting along just fine until someone set up a web forum. In the half year thereafter, most of the 6 apartments turned over. People didn’t want to know so much about those with whom they shared an address. Here, we might see what Jeffrey Rosen and Lawrence Lessig have characterized as the problem of “short attention spans.” We learn too much to ignore, but not enough to put the new factoid in context. We don’t pay attention long enough to understand.

How do we get the “just right” level of visibility to and from others? And what is “just right”? danah notes that we participate in networked publics; Helen Nissenbaum talks of contexts. One challenge is tuning our message and understanding to the various publics in which we speak and listen, knowing that what we put on Facebook or MySpace may be seen by many and understood by few. Like danah, Kevin Marks points out the asymmetry of the publics to which we speak and listen.

Another challenge is to find connections among publics and build upon them to engage with those who seem different: Ethan Zuckerman’s xenophilia. The ’Net may have grown past the stage where mere Internet use was conversation-starter enough, but spaces within it take a common interest and create community. Socializing in World of Warcraft or a blog’s comments section can make us more willing to hear our counterparts’ context.

    Finally, our largest public, here in the United States, is our democracy. We need to live peacefully with our neighbors and reach common decisions. Where our time is too limited to bestow attention on all, do we need to deliberately look away? John Rawls, in Political Liberalism, discusses political choices supported by an “overlapping consensus” from people with differing values and comprehensive views of “the good.” I wonder whether this overlapping consensus depends on a degree of privacy and a willingness to look away from differences outside the consensus.

    September 23, 2008

    Won’t someone think of the children’s speech?: Internet Technical Safety Task Force

    Filed under: Berkman, Internet, censorship, markets, privacy — wseltzer @ 10:04 am

I’m at Berkman for the open meeting of the Internet Technical Safety Task Force, a group convened at the urging of state attorneys general to address children’s safety on social networking sites. The day kicked off with statements from the Massachusetts and Connecticut attorneys general, to be followed by presentations from technology companies offering “solutions” and suggestions.

    Live tweeting and identi.ca-ing

    July 4, 2008

    Privacy Falls into YouTube’s Data Tar Pit

    Filed under: Internet, privacy, trade secret — wseltzer @ 3:53 pm

As a big lawsuit grinds forward, its parties engage in discovery, a wide-ranging search for information “reasonably calculated to lead to the discovery of admissible evidence.” (FRCP Rule 26(b)) And so Viacom has calculated that scouring YouTube’s data dumps would help provide evidence for its copyright lawsuit.

    According to a discovery order released Wednesday, Viacom asked for discovery of YouTube source code and of logs of YouTube video viewership; Google refused both. The dispute came before Judge Stanton, in the Southern District of New York, who ordered the video viewing records — but not the source code — disclosed.

    The order shows the difficulty we have protecting personally sensitive information. The court could easily see the economic value of Google’s secret source code for search and video ID, and so it refused to compel disclosure of that “vital asset,” the “product of over a thousand person-years of work.”

But the user privacy concerns proved harder to evaluate. Viacom asked for “all data from the Logging database concerning each time a YouTube video has been viewed on the YouTube website or through embedding on a third-party website,” including users’ viewed videos, login IDs, and IP addresses. Google contended that it should not be forced to release these records because of users’ privacy concerns; the court rejected that argument.

The court erred both in its assessment of the personally identifying nature of these records and in the scope of the harm. It makes no sense to discuss whether an IP address is or is not “personally identifying” without considering the context to which it is connected. It may not be a name, but it is often one search step from one. Moreover, even “anonymized” records often provide sufficiently deep profiles that they can be traced back to individuals, as researchers armed with the AOL and Netflix data releases showed.
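
A toy illustration of why “no names” is not the same as “not identifying” (every record here is invented): join the “anonymized” log against any outside source that ties a quasi-identifier, here an IP address, to a person, and the names come back.

# Toy re-identification by joining on a quasi-identifier (the IP address).
# All records are invented; the point is that the join needs no name.
anonymized_log = [
    {"login_id": "u_84c1", "ip": "203.0.113.7", "video": "cat_piano"},
    {"login_id": "u_84c1", "ip": "203.0.113.7", "video": "daily_show_clip"},
]
outside_source = [  # e.g. a forum signature, mail header, or server log
    {"name": "Alice Example", "ip": "203.0.113.7"},
]

names_by_ip = {row["ip"]: row["name"] for row in outside_source}
for entry in anonymized_log:
    name = names_by_ip.get(entry["ip"])
    if name:
        print(f"{name} (pseudonym {entry['login_id']}) watched {entry['video']}")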

Viewers “gave” their IP address and username information to YouTube for the purpose of watching videos. They might have expected the information to be used within Google, but not have anticipated that it would be shared with a corporation busily prosecuting copyright infringement. Viewers may not be able to quantify economic harm, but if communications are chilled by the disclosure of viewing habits, we’re all harmed socially. The court failed to consider these third-party interests in ordering the disclosure.

    Trade secret wins, privacy loses. Google has said it will not appeal the order.

Is there hope for the end users here, concerned about disclosure of their video viewing habits? First, we see the general privacy problem with “cloud” computing: by conducting our activities at third-party sites, we place a great deal of information about ourselves in their hands. We may do so because Google is indispensable, or because it tells us its motto is “don’t be evil.” But discovery demands show that it’s not enough for Google to follow good precepts.

    Google, like most companies, indicates that it will share data where “We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request.” Its reputation as a good actor is important, but the company is not going to face contempt charges over user privacy.

I worry that this discovery demand is just the first of a wave, as more litigants recognize the data gold mines that online service providers have been gathering: search terms, blog readership and posting habits, video viewing, and browsing might all “lead to the discovery of admissible evidence.” If the privacy barriers are as low as Judge Stanton indicates, won’t others follow Viacom’s lead? A gold mine for litigants becomes a tar pit for online services’ users.

Economic concerns (the cost of producing data in response to a wave of subpoenas) and reputational concerns (the fear that users will abandon a service that leaves their sensitive data vulnerable) may exert some constraint, but they’re unlikely to be enough to match our privacy expectations.

    We need the law to supply protection against unwanted data flows, to declare that personally sensitive information — or the profiles from which identity may be extracted and correlated — deserves consideration at least on par with “economically valuable secrets.” We need better assurance that the data we provide in the course of communicative activities will be kept in context. There is room for that consideration in the “undue burden” discovery standard, but statutory clarification would help both users and their Internet service providers to negotiate privacy expectations better.

Is there a law? In this particular context, there might actually be law on the viewers’ side. The Video Privacy Protection Act, passed after reporters looked into Judge Bork’s video rental records, gives individuals a cause of action against “a video tape service provider who knowingly discloses, to any person, personally identifiable information concerning any consumer of such provider.” (“Video tape” includes similar audio visual materials.) Will any third parties intervene to ask that the discovery order be quashed?

    Further, Bloomberg notes the concerns of Europeans, whose privacy regime is far more user-protective than that of the United States. Is this one case where “harmonization” can work in favor of individual rights?

    September 27, 2007

    Copyright and the University: 2 talks

    Filed under: law, markets, privacy — wseltzer @ 12:57 pm

    I’ll be discussing copyright at Cornell University today, at 3:00 and 7:30 p.m., talking about the university’s role in promoting balanced cultural and technology policy. Join the webcast if you like. If you add questions or comments to the blog, I’ll even try to address them.

    September 26, 2007

Has Common Sense Flown the Coop?: No copyright claims to book prices

    Filed under: ICANN, law, open, privacy — wseltzer @ 4:19 am

    The Crimson has been reporting on the Harvard Coop’s silly claims of “intellectual property” against those who come to the bookstore to compare prices. It’s escalated all the way to calling the cops, who wisely refused to throw students out of the store.

    A terrific clinical student at the Berkman Center helped us to write an op-ed on the limits of copyright, which the Crimson ran today:

    We’re not sure what “intellectual property” right the Coop has in mind, but it’s none that we recognize. Nor is it one that promotes the progress of science and useful arts, as copyright is intended to do. While intellectual property may have become the fashionable threat of late, even in the wake of the Recording Industry Association of America’s mass litigation campaign the catch-phrase—and the law—has its limits.

    Since the Coop’s managers don’t seem to have read the law books on their shelves, we’d like to offer them a little Copyright 101.

    Copyright law protects original works of authorship—the texts and images in those books on the shelves—but not facts or ideas. So while copyright law might prohibit students from dropping by with scanners, it doesn’t stop them from noting what books are on the shelf and how much they cost.

    CrimsonReading.org does students a real service by helping them to compare prices efficiently. Harvard should support them in their information-sharing efforts, rather than endorsing the Coop’s attempts to cut off access to uncopyrightable facts.

    September 6, 2007

    DMCA Truth Is Stranger than Science Fiction

    Filed under: Chilling Effects, law, open, privacy — wseltzer @ 2:14 pm

    Author Denise McCune posts a great account of the workings and failings of the DMCA’s notice-and-takedown procedures.

As Cory Doctorow has also reported on BoingBoing, the VP of the Science Fiction and Fantasy Writers of America sent an error-filled takedown complaint to the text-sharing site Scribd, causing removal of many non-infringing postings, including reading lists suggesting great science fiction and Cory’s own novels, which he has CC-licensed for free redistribution.

    The DMCA safe-harbor is most charitably described as an intricate dance for all parties involved: the copyright claimant, the ISP, and the poster. When the dancers are synchronized, its notice, takedown, and counternotice steps give each party a prescribed sequence by which to notify the others of claims and invite their responses. That’s why the DMCA requires the claimant to identify the copyrighted works, specify alleged infringements with “information reasonably sufficient to permit the service provider to locate the material,” and state good faith belief that the uses are unauthorized. When a copyright claimant misses one of those key elements, he starts stepping on toes.

    The service provider isn’t obliged to respond to deficient notices, but if a notice contains all the right formal elements — even if it’s factually wrong about copyright ownership or copying — the service provider must choose between taking down the material or losing its DMCA safe-harbor and facing potential lawsuits. Posters who believe their material is non-infringing or fairly posted can counter-notify and even file their own lawsuits for misuse of copyright claims, under sec. 512(f).
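
As a rough sketch of that formal-elements check (the field names below are my own shorthand for the elements of 17 U.S.C. § 512(c)(3), not statutory text, and nothing here is legal advice), a provider’s intake logic might look something like this.

# Rough sketch of a formal-completeness check for a 512(c)(3) takedown notice.
# Field names are illustrative, not statutory; real intake needs counsel.
REQUIRED_ELEMENTS = [
    "signature",              # physical or electronic signature of the claimant
    "work_identified",        # identification of the copyrighted work(s)
    "material_location",      # info reasonably sufficient to locate the material
    "contact_information",    # how the provider can reach the complaining party
    "good_faith_statement",   # belief that the use is unauthorized
    "accuracy_statement",     # accuracy and authority, under penalty of perjury
]

def is_facially_complete(notice: dict) -> bool:
    """True if every statutory element is present; says nothing about whether
    the underlying copyright claim is actually correct."""
    return all(notice.get(element) for element in REQUIRED_ELEMENTS)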

    I share McCune’s hope that the brouhaha will help the SFWA to help authors express all their copyright interests, including that of free sharing:

    I hope the SFWA’s lawyers are sitting down with Andrew Burt and explaining how the DMCA actually works, so that actual, legitimate violations of copyright (on Scribd and on other sites) can get dealt with swiftly and promptly and the people who have asked SFWA to be their copyright representative can get infringing uses of their material removed. I’m also glad to see that the SFWA ePiracy Committee has suspended operations until they can investigate further — and, hopefully, come up with an effective process and procedure that benefits both fair and/or transformative use while also protecting the rights of copyright holders to have control over where and how their material is posted — whether that control is a more traditional “nobody gets to use this, period” or a Creative Commons-style authorization of transformative work.
