How the technical community fails at multi-stakeholderism

One of the standard arguments that the United States and other developed countries make in opposing changes to Internet governance is that the Internet is already well governed through a multi-stakeholder model by a network of grassroots Internet technical community organisations. These are said to include the IETF (Internet Engineering Task Force), ICANN (the Internet Corporation for Assigned Names and Numbers), the RIRs (Regional Internet Registries) and the W3C (World Wide Web Consortium).

Yet when you look a little closer, none of these organisations actually represent grassroots Internet users, or are even multi-stakeholder by any usual definition of the term. Nor are they capable of legitimately dealing with broader public policy issues that go beyond purely technical standards development and the allocation of Internet resources. As a result, the process by which they reach such decisions is undemocratic, and some of the policy choices embodied in those decisions are unsupportable.

Unfortunately, those organisations often don’t seem to realise this, and will quite happily go about making policy heedless of their own limitations. An example is the failed process by which the W3C’s Tracking Protection Working Group sought to develop a specification for a standard called “Do Not Track” or DNT. The concept behind this standard (which I’ve written about in detail elsewhere) was to specify how a website or advertiser should respond to a notification, expressed by a user (typically through a browser setting), that they do not wish to be tracked online.
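To make the mechanics concrete, here is a minimal sketch in Python of how such a preference travels over HTTP. The “DNT” request header and the “Tk” response header come from the working group’s draft Tracking Preference Expression specification; the endpoint URL and the handler function are hypothetical, for illustration only.

    # Minimal sketch of a DNT exchange, assuming the draft Tracking
    # Preference Expression headers: "DNT" (request) and "Tk" (response).
    # The URL and handler below are hypothetical.
    import urllib.request

    # A browser with the setting enabled attaches "DNT: 1" to each request.
    request = urllib.request.Request(
        "https://ads.example.com/pixel",   # hypothetical ad endpoint
        headers={"DNT": "1"},              # user does not wish to be tracked
    )

    # A compliant server would inspect the header and signal its status:
    def tracking_status(headers):
        if headers.get("DNT") == "1":
            return {"Tk": "N"}             # "N": not tracking this user
        return {"Tk": "T"}                 # "T": tracking

As the sketch suggests, the hard part was never transmitting the signal, but agreeing on what a website or advertiser is obliged to do upon receiving it.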

The W3C is not the only example of this sort of dysfunction. The IETF has (to its credit) acknowledged its own limited inclusiveness (its parent body, the IAB, has 11 white males on its board of 13), ICANN has recently received blistering criticism over its failure to pay attention to the community’s wishes (while drawing in millions from the new generic top-level domain gold rush), and soon-to-be-released research will show how decisions of the RIRs such as APNIC are similarly driven by shallow discussion among a narrow segment of stakeholders (even though this takes place on notionally open mailing lists).

The underlying problem is that the Internet community bodies have been captured by industry, and by a narrow segment of civil society that is beholden to industry (exemplified by the global Internet Society, ISOC). As a result, Internet technical standards are biased in favour of a US-led, free-market model of competition that fails to incorporate broader public interest objectives (this has even been formalised in the OpenStand Declaration). Standards development that involves issues such as consumer privacy and access to knowledge is a political process, and as such, capture by powerful interests is inevitable unless safeguards are put in place.

The industry-led specifications that have resulted from this paradigm speak for themselves. In July this year, industry released a standard for mobile apps to notify users of data collection using short-form notices, rather than lengthy privacy policies. This voluntary standard, although based on a supposedly multi-stakeholder process set up by the US National Telecommunications and Information Administration (NTIA), has been criticised by American consumer groups both for its substance and for the process by which it was developed, which allowed an industry-dominated panel to push through a code that served its commercial interests.

Another example is the United States’ Copyright Alert System (CAS), under which Internet users’ privacy is sacrificed to facilitate the delivery of copyright infringement notices to those who share content online – the system does not take account of “fair use” or other copyright user rights. This follows on from the 2007 Principles for User Generated Content Services, also written by industry, which were adopted by most major content platforms, and from codes agreed by major credit card companies and payment processors in June 2011, and by advertisers in May 2012, to withdraw payment services from websites allegedly selling counterfeit and pirated goods. No consumer representatives (or even elected governments) had any say in the development of these codes. How is this a “multi-stakeholder” model?

True multi-stakeholder processes (as defined at the 2002 Earth Summit, long before the Internet technical organisations appropriated the term) are:

processes which aim to bring together all major stakeholders in a new form of communication, decision-finding (and possibly decision-making) on a particular issue. They are also based on recognition of the importance of achieving equity and accountability in communication between stakeholders, involving equitable representation of three or more stakeholder groups and their views. They are based on democratic principles of transparency and participation, and aim to develop partnerships and strengthened networks between stakeholders.

Although often described (for example by the United States government, and by bodies like ISOC that follow US foreign policy) as “the” multi-stakeholder model of Internet governance, the Internet technical community organisations do not in fact embody these principles very well. While they are typically open to participants from all stakeholder groups, no attempt is made to balance participation so that the voices of weaker stakeholders (such as consumers) are not drowned out by those with the most resources or privilege. Having open mailing lists is not enough, and indeed can mask abuses of the process – after all, it has been revealed that the NSA used IETF processes with the aim of weakening encryption standards.

Continue reading in “Web Consortium’s failure shows the limits of self-regulation” at Digital News Asia.