Next week, Harvard is hosting the Internet & Society Conference 2007: UNIVERSITY – Knowledge Beyond Authority.
The Internet & Society Conference is positioned to generate questions, insight and solutions from diverse perspectives across the landscape of University, with a focus on the role of University as an institution. We seek to establish University as a collective force much like 'Government' or 'Private Enterprise' in its ability to negotiate and compromise for our needs in the digital environment.
I'll be there to discuss some of the issues in my Harvard Crimson op-ed: How should the university respond to copyright claims against its students, in a manner that respects both law and the university's educational mission? If you'd like to, please join the working group or engage with questions online.
David Weinberger proposes that we balance the scales against wrongful DMCA takedowns by adding statutory damages for speech harms. If the law presumes that a copyright holder is harmed up to $150,000 per work infringed, without requiring proof of actual damage, we should apply the same approach, and the same figures, to the measurement problem for lost speech.
We faced the speech-damages question in OPG v. Diebold, which we ultimately settled for $125,000 in damages and attorneys' fees. $5,000 of that was our claim for OPG's hosting disruption; the Swarthmore students who had to pull essential materials from their symposium website couldn't put a dollar figure on that.
Presumed damages could solve that problem and make it easier for those harmed by bad copyright claims to find and pay counsel. As Weinberger says, "Protecting free speech ought to be at least as important as protecting the rights of copyright holders."
ICANN seems to be out to re-prove Hirschman's theories of exit, voice, and loyalty by driving all of its good people to exit rather than giving them meaningful voices. Thomas Roessler, a long-time advocate of individual users' interests on the interim ALAC, now suggests it's Time to Reconsider the structure of ICANN's At-Large, as he feels compelled to promise himself not to get involved with ICANN again.
Roessler and Patrick Vande Walle both express their frustration at interference and infighting in the formation of the European Regional At-Large Organization. Here's Roessler:
To this day, I still occasionally dangle my feet into these waters, though I've again and again promised myself not to do it again.
To say I'm disappointed by what I've seen recently would be an understatement: While I'm happy there is a number of people who, presumably, really want to move things, I'm appalled to see how discussions among both European and North American participants take on an increasingly divisive tone. There isn't much to be seen of a common goal to advocate users' interest in ICANN -- rather, a lot of fighting for table scraps (when there's more than enough work for anybody who wants to gamble some of their time on ICANN and its at-large activities!). ALAC's ICANN staff support seems most interested in staging pretty signing ceremonies and press events, one per ICANN General Meeting.
The result? Artificial and rushed time lines, premature consensus calls, and a lot of bad blood and mistrust among participants who really ought to be working together (and have been able to talk reasonably to each other before they got into fights around ICANN). Also, the ability for ICANN to pretend that there's real end user participation and representation, when there are really very few ways (if any) for ALAC to make a real difference in policy decisions -- even though the committee has some limited power to help shape ICANN's policy agenda.
And Vande Walle, concerned that a push for "diversity" became a stereotyped exclusion of experienced participants:
All this for the sole purpose of pushing on the side those who invested a lot of time over the years into ICANN and ALAC processes. If this is an added value to ICANN and ALAC, I do not know. Frankly, I am skeptic. Time will tell.
From now on, I will watch from the outside. So long, guys.
Hirschman notes that exit and voice are alternative means of expressing dissatisfaction with organizations in decline. The smart organization listens and reverses course; the stupid one just declines further.
ICANN needs these people. They have good ideas about how to respond to the public interest in domain name management. But, controlled by commercial interests who'd rather raise prices on their domain-name monopolies or shield trademarks against potential dilution, ICANN doesn't have the inclination to listen to the individuals who make up the public. It keeps sending us back to play in sandboxes building complex structures upon structures, all to shield the organization from having to hear our voices.
So, as the opportunity costs of attempting to deal with ICANN grow too great, good people exit. ICANN asks for bottom-up development, but when there's no way for the bottom to connect with the top, we get frustrated down here and find better things to do with our time.
Friday's OpenNet Initiative Conference concluded with a debate in the stately halls of the Oxford Union: "This House believes that the Internet is the greatest force for Democratisation in the World." Point of information: the Internet is not just about porn; people frequently get to this blog searching for "Perfect 10," only to learn about legal cases, not nude models. (I suppose there's some discussion of "naked licensing" from time to time.)
In proof that lawyers are perhaps the greatest force against democratisation, the Noes carried (lawyer/lawprofs Palfrey, Zittrain, and Amsterdam against Jimbo "Wikipedia" Wales, Ron "Psiphon" Deibert, and Chairman Bo Aung Din of the Burma PDP). Tobias Escher has the full scoop.
The OpenNet Initiative is holding its first public conference to discuss the current state of play of Internet filtering worldwide. The conference will be hosted by the Oxford Internet Institute and held at St. Anne's College, University of Oxford on May 18, 2007. The conference is free of charge and open to the public.
Join us if you can, in person or online.
Score 8.5 for public access. In Perfect 10 v. Amazon.com and Google, the Ninth Circuit reaffirmed and strengthened Kelly v. Arriba Soft, holding that neither showing image thumbnails nor inline linking/framing in an image search engine constitutes copyright infringement.
In the ongoing battle between adult-content purveyor Perfect 10 and Google, the court reversed the lower court's ruling to hold that Google could not be held directly liable for infringement even if its image search spidered in some unauthorized images. As in Kelly, the court found that search was a transformative fair use. The court further rejected the argument that Google's Adsense program made it a vicarious infringer.
The rub is that the court sent the case back for further factfinding on questions of contributory infringement and Google's safe harbor defense.
Accordingly, we hold that a computer system operator can be held contributorily liable if it has actual knowledge that specific infringing material is available using its system, and can take simple measures to prevent further damage to copyrighted works, yet continues to provide access to infringing works. (citations omitted)
While that standard sounds nice in theory, it gums up the works of search engines in practice. The dozen takedown notices Perfect 10 sent to Google list hundreds of URLs Google must investigate or remove, if the allegation alone is enough to impute knowledge. In return for easier copyright-holder policing, the public gets less access to comprehensive search.
Now, the court did say considerable factfinding remained before Google could be held contributorily liable, including "factual disputes over whether there are reasonable and feasible means for Google to refrain from providing access to infringing images." Moreover, the safe harbor should protect Google, which routinely does "expeditiously" remove alleged infringements, but I've often argued that the safe harbor's practical reach extends beyond what legal liability would actually require.
At the World Wide Web Conference, I attended the Building a Semantic Web in Which Our Data Can Participate panel. A few notes, loosely joined.
Open Street Maps generates and annotates street maps from open sources of data. In the UK and Canada, unlike in the U.S., street map data is protected by Crown Copyright, so folks who want to annotate maps generally can't. Can we compare the range of map-based products available in the US versus the UK and Canada to see whether openness or closure better serves the public with this data? It would cost CAN$400,000 to collect all the maps of Canada from official sources, an audience member says, and even then you wouldn't be allowed to post and annotate them. In the US, $30 buys them all on a CD, in the public domain.
Freebase aims to create a meta-database of free information that can connect multiple sources of information. Jamie Taylor positions free information in Geoffrey Moore's terminology of core versus context. If data is not your core competency, you should open it up: let the community contribute to your costs of maintaining it, and help you find new uses for it. Along the business lifecycle, opening (or modularising) your data can allow you to focus on the core where you have comparative advantage, and force weaker competitors to do the same.
With collaborative databases, questions of the trustworthiness of the data come to the fore. Metadata becomes even more important, particularly metadata about origin, as well as validation by corroboration among multiple datasets. Freebase uses internal foreign keys to trace the source of datasets.
And thinking about the validity of contributed data can make us think about better ways to validate internally sourced data too. Can we trace its origins, compare it to others' measurements? Can we build in the metadata fields that allow us to rate the trustworthiness of elements and collaborate to focus on the weak spots? Defensive programming is good for everyone's data, even our own.
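The corroboration idea above can be sketched in a few lines. This is a toy illustration (all names and datasets are invented, not Freebase's actual schema): each fact carries a provenance field, like Freebase's internal foreign keys, and agreement among independent sources raises our trust in a value while a lone dissenter flags a weak spot to investigate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """A single data point carrying its provenance."""
    subject: str
    attribute: str
    value: str
    source: str  # where this fact came from -- the 'foreign key' back to its origin

def corroboration(facts, subject, attribute):
    """Group candidate values for (subject, attribute) by the distinct
    sources backing each: more independent sources, more trust."""
    support = {}
    for f in facts:
        if f.subject == subject and f.attribute == attribute:
            support.setdefault(f.value, set()).add(f.source)
    return support

# Three contributed datasets, one of which disagrees.
facts = [
    Fact("Oxford", "country", "UK", "dataset_a"),
    Fact("Oxford", "country", "UK", "dataset_b"),
    Fact("Oxford", "country", "US", "dataset_c"),  # the weak spot to investigate
]

support = corroboration(facts, "Oxford", "country")
best = max(support, key=lambda v: len(support[v]))  # value with the most backing
```

Nothing here is specific to collaborative data: pointing the same check at internally sourced records is exactly the "defensive programming for data" suggested above.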
Peter Murray, talking about open access to scientific data, gives the example of PubChem. Before PubChem, each chemical supplier claimed copyright and proprietary interests in its catalogues. Now, a supplier that isn't in PubChem might as well not exist, so the suppliers have opened up, expanding access to chemical information as well as their own markets.
Just found a PDF of presentations.
The Supreme Court was reviewing a claim against Microsoft for sending software abroad to users who installed it on machines which, if they were in the U.S., would apparently infringe an AT&T speech-processing patent. The Court makes much of the distinction between source and object code, and the steps between intangible program and execution. These fuel its determination that Microsoft's "intangible" Windows software cannot be a "component of a patented invention" and therefore that Microsoft is not liable for inducing infringement by exporting a master disk.
Abstract software code is an idea without physical embodiment, and as such, it does not match §271(f)'s categorization: 'components' amenable to 'combination.'... A blueprint may contain precise instructions for the construction and combination of the components of a patented device, but it is not itself a combinable component of that device.
The Court even cites Corley, but to say that Congress can act upon gaps in earlier statutes, not that it necessarily reaches complete solutions to new problems.
So, can we read MS v. AT&T as casting doubt on the assumption that a hex key is a "component or part" of a circumvention "technology, product, service, [or] device" (the key language of the §1201 prohibition on trafficking in circumvention tools)? By itself, this number doesn't circumvent, and there's not even a drop-in program with which one could readily use it to circumvent. And the posters supply no tangible copies of the number, but only a "blueprint" for making your own. At this stage, it's merely abstract information that might be, but isn't being, implemented in circumvention devices.
If it works for Microsoft, why not for the key-posters?
Often, when we're asked for "identification," it's not because the asker needs to know everything about us, but because they need to verify one aspect of identity: that I'm over 21, for example, if I'm trying to buy a drink. But since I don't have an "over 21" card that the bar can verify connects to me, I'm forced to give them my driver's license, from which they can also glean and store other data. Online, it doesn't have to be that way.
Builders of identity-management systems can design in stronger protections for their users' privacy, giving people a separate virtual "card" for every transaction, with only the necessary data included. Ben Laurie has written a good concise overview, Selective Disclosure, explaining how zero-knowledge proofs let us make verifiable assertions without giving away the store.
I claim that for an identity management system to be both useful and privacy preserving, there are three properties assertions must be able to have. They must be:
Verifiable: There’s often no point in making a statement unless the relying party has some way of checking it is true. Note that this isn’t always a requirement - I don’t have to prove my address is mine to Amazon, because it’s up to me where my goods get delivered. But I may have to prove I’m over 18 to get alcohol delivered.
Minimal: This is the privacy preserving bit - I want to tell the relying party the very least he needs to know. I shouldn't have to reveal my date of birth, just prove I’m over 18 somehow.
Unlinkable: If the relying party or parties, or other actors in the system, can, either on their own or in collusion, link together my various assertions, then I’ve blown the minimality requirement out of the water.
While digital signatures are widely used for verification, the same signature on each item is a privacy-busting linkage. With the help of third parties and selective disclosure proofs, however, we can make assertions that are minimal and don't leave a trail. We can create digital one-time cards each time we're asked for a facet of our identities.
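Here is a toy sketch of those one-time cards, under loud assumptions: the `Issuer` class and its methods are invented for illustration, and an HMAC tag is a stand-in for the real cryptography (this is not a zero-knowledge proof, and verification here loops back through the issuer rather than happening locally). What it does show is the minimal/unlinkable structure: each card asserts one attribute and nothing else, and a fresh nonce per card means two merchants comparing notes find no common token to correlate.

```python
import hashlib
import hmac
import secrets

class Issuer:
    """Hypothetical trusted party (think: the DMV) that vouches for a
    single attribute per card. A toy stand-in for real selective-disclosure
    credentials such as blind or group signatures."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # issuer's secret

    def issue_card(self, claim: str) -> dict:
        # Fresh nonce per card: two cards asserting the same claim share
        # no common value, so relying parties can't link them into a profile.
        nonce = secrets.token_hex(16)
        tag = hmac.new(self._key, f"{claim}|{nonce}".encode(),
                       hashlib.sha256).hexdigest()
        return {"claim": claim, "nonce": nonce, "tag": tag}

    def verify_card(self, card: dict) -> bool:
        # In this toy, only the issuer can check the tag (HMAC needs the key);
        # a real system lets the relying party verify without phoning home.
        expected = hmac.new(self._key,
                            f"{card['claim']}|{card['nonce']}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, card["tag"])

dmv = Issuer()
card_for_bar = dmv.issue_card("over_21")        # one card for the bar
card_for_wine_shop = dmv.issue_card("over_21")  # another for the wine shop
# Both verify, yet neither reveals a birth date, and the two cards
# share no token the merchants could use to link them.
```

The design choice worth noticing is that minimality and unlinkability are properties of the card format, not promises of good behavior by the merchant: there is simply nothing extra to glean or store.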
These properties fit well with the legal principle of narrow tailoring. Limiting the identification provided to what is required limits spillover effects and opportunities for misuse ("mission creep"). An ID-check law shouldn't become a source of marketing information; an online purchase needn't be an entry in a growing retailer profile -- unless that's an explicit choice. We might even be more willing to give accurate information in places like online newspaper sign-ins if we knew that information could never be added to or correlated with profile data elsewhere.
The next hard part, of course, is getting those with whom we do business to accept less information where they've been accustomed to getting more by default, but at least if we build the identity technology right, it will be possible.
As universities receive "pre-litigation letters," they should be concerned about the effect of compliance on their educational missions, Charlie Nesson and I write in an op-ed in the Harvard Crimson.
Since its founding, Harvard has been an educational leader. Its 1650 charter broadly conceives its mission to include “the advancement of all good literature, arts, and sciences, [and] the advancement and education of youth in all manner of good literature, arts, and sciences.” From John Harvard’s library through today’s my.harvard.edu, the University has worked to create and spread knowledge, educating citizens within and outside its walls.
Students and faculty use the Internet to gather and share knowledge now more than ever. Law professors at the Berkman Center for Internet & Society, for example, have conducted mock trials in the online environment of Second Life; law students have worked with faculty to offer cybercourses to the public at large. Students can collaborate on “wiki” websites, gather research materials from far-flung countries, and create multi-media projects to enhance their learning.
Yet “new deterrence and education initiatives” from the Recording Industry Association of America (RIAA) threaten access to this vibrant resource. The RIAA has already requested that universities serve as conduits for more than 1,200 “pre-litigation letters.” Seeking to outsource its enforcement costs, the RIAA asks universities to point fingers at their students, to filter their Internet access, and to pass along notices of claimed copyright infringement.
When copyright protection starts requiring the cooperation of uninvolved parties, at the cost of both financial and mission harm, those external costs outweigh its benefits. We need not condone infringement to conclude that 19th- and 20th-century copyright law is poorly suited to promote 21st-century knowledge. The old copyright-business models are inefficient ways to give artists incentives in the new digital environment.