Seeing (the Internet) Like a State: What high modernism can teach us about Internet governance

In James C. Scott’s classic work, Seeing Like a State (1999), the political scientist and anthropologist examines some of history’s most notorious attempts at social engineering, from agricultural collectivization in the Soviet Union to Mao’s Great Leap Forward to the forced “villagization” campaigns of Ethiopia and Tanzania (just to name a few), all of which were unmitigated disasters that resulted in the deaths of millions and untold human suffering.

Though each campaign was unique, at least in the ways it meted out misery, Scott identifies a common ideology informing all of these sociopolitical arrangements: high modernism.

For Scott, high modernism “is the attempt to design society in accordance with what are believed to be scientific laws.” Where proponents of high modernism go so spectacularly wrong, according to Scott, is in viewing these laws narrowly, reducing them to account for “only those aspects of social life that are of official interest” (Scott 1999). As a result of this administrative myopia, governments ignore the value of local knowledge, or what he calls “metis,” and the ways in which these unquantifiable and “illegible” practices contribute to the “bio-diversity of society”. In this way, Scott argues that high modernist experiments fail precisely because they ignore this “bio-diversity” and the many forms of “metis” that keep it functioning.

Similarly, journalist and urban sociologist Jane Jacobs (who would have turned 100 this year) cites high modernism as the primary ideology behind many of the failed urban planning schemes of the mid-20th century, most notably the construction of insular housing projects that cemented racist housing policies into the architecture of America’s urban centers. In her 1961 work The Death and Life of Great American Cities (Jacobs 1992), Jacobs argues against the notion that geometric and aesthetic order in urban planning necessarily produces social order. Critiquing urban planning’s preoccupation with the aesthetics of the day, Jacobs describes what she calls the micro-sociology of public order in which, “the public peace — the sidewalk and the street peace — of cities…is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves” (Jacobs 1992).

If this sounds vaguely familiar, it is because these “voluntary controls” and standards-setting processes are strikingly similar to those that have governed the Internet over the past thirty years.

In fact, by describing the ways in which high modernism tends to inform ill-conceived efforts at social engineering through infrastructure, Scott and Jacobs offer a warning to those who would seek to impose centralized planning schemes on an inherently decentralized network of networks.

To this end, a recently published report by Bertrand de la Chapelle and Paul Fehlinger of the Internet and Jurisdiction Project, titled Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation, examines how and why nation-states feel increasingly compelled to impose Westphalian notions of territorial sovereignty in cyberspace.

“Confronted with increasing domestic pressure to address cyber issues, governments feel compelled to act on their own, using an extensive interpretation of territoriality criteria,” write de la Chapelle and Fehlinger (2016). And while the precise ways in which various governments act on this “hyper-territoriality” (de la Chapelle and Fehlinger 2016) depend, to a large extent, on the domestic interests and geopolitical goals of a given nation-state, they tend to manifest in one of two ways: (1) through so-called extraterritorial extension (de la Chapelle and Fehlinger 2016) or, conversely, (2) through the attempted fortification of borders in cyberspace, popularly referred to as Internet sovereignty.

Extraterritorial extension occurs when the laws of one nation-state extend beyond its borders and into other jurisdictions via the Internet. More often than not this is the result of global Internet platforms like Facebook and Twitter incorporating in one country and operating globally, exporting the laws and values of these private companies — and their home jurisdictions — to other countries in which they do business. While this would seem an obvious positive externality of a free-market system in which innovation spreads political and economic values and human rights, it can also result in conflict, as in the now-infamous “Innocence of Muslims” case, in which an anti-Islamic movie trailer sparked heated protests across the Muslim world before being taken down by YouTube. In this way, private sector content intermediaries increasingly serve public sector functions, making policy decisions that affect users at home and abroad.

Similarly, questions regarding the extension of US law were central to a recently decided case in which a US federal appellate court ruled that the US government could not compel Microsoft to turn over emails stored on company servers located overseas (Stempel 2016).

Likewise, private sector domain name registries based in the US (e.g. Verisign) extend US law extraterritorially when they shutter the domains of foreign registrants at the behest of governments, a practice that has become one of the primary tools used by regulators to enforce intellectual property rights online (as in the Rojadirecta case) and, in some cases, to take down websites trafficking in politically sensitive information (as in the Wikileaks.org case).

Meanwhile, lawmakers project domestic policies abroad by enacting laws designed specifically to reach across their borders, such as the EU’s General Data Protection Regulation, which requires any company, regardless of where it is based, to delete, return, or otherwise amend data pertaining to EU users upon request.

Finally, extraterritorial extension is enacted through court decisions that go on to affect users beyond the ruling court’s jurisdiction. Here, a recent decision handed down by a French court ordered Google to remove the names of claimants of the so-called “right to be forgotten” not merely from Google searches in France (on the company’s “google.fr” search engine) but globally, on all of Google’s search engines, including “google.com”.

It is in this sense that the Internet and Jurisdiction Project warns of a coming “legal arms race in cyberspace,” arguing that “extraterritorial extension of national jurisdiction is becoming the realpolitik of Internet regulation” (de la Chapelle and Fehlinger 2016).

Of course, as nation-states struggle to come to terms with the borderless and porous nature of the Internet, extraterritorial extension has, in some cases, given way to its opposite: territorial fortification.

Here, many governments left with “a sense of powerlessness… to impose respect for their national laws” (de la Chapelle and Fehlinger 2016) have developed a cyber-protectionist mindset, leading to calls for the protection of so-called “Internet sovereignty”. In this way, Chinese officials have turned to “Internet sovereignty” as a response to the extraterritorial extension of foreign Internet platforms and content entering their borders, serving as a justification for the expansion of the ruling party’s so-called “Great Firewall”.

And yet, these seemingly arcane technical debates over Internet governance and infrastructure do not occur in a vacuum. Rather, they are very much subject to current events and geopolitics. In fact, in the wake of the Snowden revelations, many nation-states have assumed a defensive posture with regard to the protection of their data in cyberspace. Indeed, it would be fair to say that no single event has contributed more to the reterritorialization of cyberspace — and all the consequences it entails — than the Snowden revelations.

It is in this post-Snowden climate of mutual suspicion that governments now find themselves lobbying tech firms to install “backdoors” into their products, granting investigators access to the communications of suspected criminals. Meanwhile, data localization laws have gained greater traction in this post-Snowden environment, as governments, most notably in Brasilia and Moscow, look for ways to protect their citizens’ data from the prying eyes of foreign intelligence services (while preserving its utility for domestic spying).

Moreover, as the domain name system (DNS) becomes a more decentralized and commercialized namespace — especially in light of the opening up of a new set of generic top-level domains (gTLDs) — and as country code top-level domains (ccTLDs) grow more popular for their geographic specificity and the jurisdictional loopholes they provide, many ccTLD operators are turning to presence requirements, which oblige registrants to maintain a physical presence in the operator’s jurisdiction. The irony, of course, is that while these presence requirements bolster the territorial integrity of a given ccTLD’s namespace, they also fuel the growth of commercialized ccTLDs offering cheap — or in some cases free — registrations to users irrespective of geography, a phenomenon that has fueled the growth of what I refer to as the “offshore information economy”: an economy that, like other iterations of offshoring (tax havens, gambling, flags of convenience, etc.), simultaneously reinforces and undermines notions of territorial sovereignty.

And so, as nation-states continue to experiment with legal and regulatory frameworks designed to impose Westphalian order on an inherently decentralized network of networks, regulating “only those aspects of digital life that are of official interest,” the work of Scott and Jacobs serves as a prescient reminder of the risks involved in such sociological — or in this case sociotechnical — experimentation, and of the pitfalls of ignoring the many forms of “metis” cultivated by the networked information society.

Breaking open the black box of Wall Street

As anyone who has written, or is in the midst of writing, a dissertation will tell you, the day-to-day experience of an ABD is a never-ending exercise in guilt management. Want to take a day off? Enjoy spending every waking minute of that day thinking about what you should be doing. But one of the lessons I’ve learned in the relatively short time I’ve spent working on my dissertation is the necessity of productive escapism. Which is to say, I no longer feel bad about reading for pleasure in the evenings.

To this end, I’ve recently been splitting my nights between the seemingly ever-expanding realm of Westeros (for the uninitiated: George R.R. Martin’s A Song of Ice and Fire series) and the canyons of Wall Street (and beyond) as told by Michael Lewis in his excellent new book Flash Boys. While the former may require some mental gymnastics to demonstrate its worth to a student of Internet governance, the latter has proven immensely relevant to my work at the crowded intersection of the politics of technology, information infrastructures, big data, actor-network theory, and ICTs.

Lewis (author of The Blind Side, Moneyball, and Liar’s Poker, just to name a few) is an inveterate storyteller; among the very best in the business at taking complex stories and stripping them to their core without losing any of the nuance and context that keeps you turning the pages. His latest is no different. Here Lewis takes on the world of high-frequency trading (HFT) as he tells the story of Brad Katsuyama, a former Royal Bank of Canada (RBC) manager who dared to ask fairly basic questions about the way the markets work.

Stewart Brand’s Clock of the Long Now

Among the most puzzling questions for Katsuyama was why the price of a given stock he saw at his terminal would vanish in the microsecond it took for his “BUY” order (e.g. “BUY” GOOGLE @ $582/share) to make its way through the “series of tubes” between his computer and the exchanges’ computers, where the trades were completed. As Katsuyama explains, “people are getting screwed because they can’t imagine a microsecond.” It’s in this sense that Katsuyama was attempting an industry-wide paradigm shift, in much the same way that Stewart Brand’s Clock of the Long Now is an effort to shift the way people think about time.

As it turns out, this dizzyingly complex web of tubes, lines, and fiber created the conditions for high-frequency traders to manipulate the fractional spreads in a stock’s price (e.g. 582.00 – 582.01) thanks to direct lines of lightning-fast access to the exchanges. “Someone out there was using the fact that stock market orders arrived at different times at different exchanges to front-run orders from one market to another,” explains Katsuyama, adding, “People think pushing a button is as simple as pushing a button. It’s not. All these things have to happen. There’s a ton of stuff happening.”
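To make the timing concrete, here is a toy sketch in Python. The venue names are real, but every latency figure is invented for illustration (none of these numbers come from the book): a broker’s order lands at the nearest exchange first, and a trader with faster fiber who sees that first fill can beat the remainder of the order to the slower venues.

```python
# Illustrative only: hypothetical one-way latencies, in microseconds.
BROKER_LATENCY_US = {   # broker -> exchange, over ordinary routes
    "BATS":   2000,     # nearest venue: the order lands here first
    "NASDAQ": 2480,
    "NYSE":   2510,
}
HFT_RELAY_US = {        # BATS -> other venue, over the trader's faster fiber
    "NASDAQ": 180,
    "NYSE":   190,
}

first_fill = min(BROKER_LATENCY_US.values())  # moment the fill at BATS becomes visible

for venue, broker_arrival in sorted(BROKER_LATENCY_US.items()):
    if venue in HFT_RELAY_US:
        hft_arrival = first_fill + HFT_RELAY_US[venue]
        if hft_arrival < broker_arrival:
            head_start = broker_arrival - hft_arrival
            print(f"{venue}: front-runner beats the order by {head_start} microseconds")
```

Run it and the gap is obvious: a few hundred microseconds of headroom at each of the slower venues — an eternity for a machine, and invisible to anyone who “can’t imagine a microsecond.”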

Sound familiar? It’s from this perspective that Flash Boys reads like a textbook case study in STS. In getting to the bottom of the HFT industry, Katsuyama and his partners essentially broke open the black box of contemporary Wall Street, and what they found would make waves, on Wall Street and beyond.

To get to the bottom of what was actually happening when people “pushed a button,” Katsuyama and co. followed in the footsteps of actor-network theorists like Bruno Latour, whether they knew it or not. “Ronan [one of Katsuyama’s partners] brought in oversized maps of New Jersey showing the fiber-optic networks built by telecom companies.” If journalism is about “following the money,” Brad Katsuyama’s job, like that of Latour and countless other sociologists of technology, was to “follow the actors” — from the quants who developed the algorithms that served as the mathematical DNA of HFT, to the telecom companies who laid the fiber-optic lines that ran from Wall Street to Chicago to Jersey City and back again, to the lines themselves.

Brad Katsuyama (second from left) and the co-founders of IEX.

But Katsuyama’s other great trait, aside from not being afraid to ask the obvious questions, was convincing others who knew more than he did to help him make sense of what was going on. In fact, his ability to be what others on Wall Street derisively called “RBC nice” actually allowed him access to knowledge countless others simply ignored. It’s this character trait — equal parts dogged stubbornness and simple humility — that I’m currently trying (and struggling) to cultivate as I attempt to break open the black box of the ccTLD world. But I digress….

What makes Flash Boys so interesting is what Katsuyama and his partners decided to do about this inequity. They turned to infrastructure. (I’d be remiss if I didn’t mention that I’m currently working on a chapter for a book AU colleagues Laura DeNardis and Derek Cogburn are editing, tentatively titled The Turn to Infrastructure, about the use of infrastructure-based governance more broadly — a topic I’m also addressing in my dissertation.) As Katsuyama and co. tested hypotheses about what was going on in the so-called “dark pools” of HFT, their solution came in the form of software designed to slow down (by fractions of a microsecond) the time it took trades to travel to the various exchanges so that they would all arrive at exactly the same time, thereby eliminating the ability of those with bigger computers and faster connections to front-run their orders. As one of Katsuyama’s partners, Rob Park, explains, “Allen wrote a program — this one took him a couple of days — that built delays into the orders Brad sent to exchanges that were faster to get to, so that they arrived at exactly the same time as they did at the exchanges that were slower to get to. ‘It was counterintuitive,’ says Park. ‘Because everyone was telling us it was all about faster. We had to go faster. And we were slowing it down.’”
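The staggering logic itself is almost trivially simple — which is part of Lewis’s point. Here is a back-of-the-envelope reconstruction (mine, not RBC’s actual code), reusing the hypothetical latencies from the sketch above: hold each order just long enough that every venue receives it at the moment the slowest venue does.

```python
# Illustrative only: the same hypothetical one-way latencies as above.
LATENCY_US = {
    "BATS":   2000,
    "NASDAQ": 2480,
    "NYSE":   2510,
}

slowest = max(LATENCY_US.values())

# Delay the send to each faster venue so all orders arrive simultaneously.
send_delays = {venue: slowest - latency for venue, latency in LATENCY_US.items()}

for venue, delay in sorted(send_delays.items()):
    arrival = delay + LATENCY_US[venue]
    print(f"{venue}: hold {delay} us before sending; arrives at t+{arrival} us")
```

Every order now lands at t+2510 microseconds, and the few-hundred-microsecond head start from the earlier sketch simply disappears.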

Essentially, Katsuyama’s team constructed speed bumps on the information infrastructure connecting buyers and sellers in the market. The decision then became whether to keep this program proprietary to RBC — a move Katsuyama saw as part of the one-upmanship that led to HFT in the first place — or to go it alone and open their own exchange. They chose the latter — hence Lewis’s so-called “Wall Street revolt” — and when their newly formed exchange, IEX, opened, it completely changed the game.

“IEX represented a choice. IEX also made a point: that this market which had become intentionally and overly complicated might be understood…. The same system that once gave us subprime mortgage collateralized debt obligations no investor could possibly truly understand now gave us stock market trades that occurred at fractions of a penny at unsafe speeds using order types that no investor could possibly truly understand. That is why Brad Katsuyama’s most distinctive trait — his desire to explain things not so he would be understood but so that others would understand — was so seditious. He attacked the newly automated financial system at its core: the money it made from its incomprehensibility.” (Lewis, 2014, p. 233)

Of course, profiting from the incomprehensibility of a given product is as old as commerce itself; a shell game that is the bread and butter of countless industries, from insurance to finance to telecommunications and now big data. This, as I’m finding out, is part of the job of a scholar who aims to interrogate this sociotechnical frontier. It’s about basic questions whose answers are exceedingly difficult (in a world of hyper-litigious secrecy) to discover. The challenge, day-to-day, is staying focused on the importance of this work. As Lewis writes of Katsuyama, “Brad was not by his nature radical. He was simply in possession of radical truths.”

File under: routing around

Information security software firm Global Stealth Inc. has developed Smart DNS Proxy, which allows users around the world to access region-blocked content from services such as Hulu, Spotify, and Netflix. Yet another example of the impact of counter-power in the networked information economy, it also reflects a growing user-centered market for technology used to hack content levees. No doubt, it is a battle that will continue to rage, and it remains to be seen whether Smart DNS and similar DNS-proxy services will withstand legal challenges from content producers.
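In principle, the trick is simple. Here is a minimal, hypothetical sketch of how a smart-DNS service works in general (not Global Stealth’s actual implementation; the domain list and relay addresses are invented): only lookups for geo-fenced services are answered with the address of a relay inside the permitted region, so the service sees a “local” viewer while the rest of the user’s traffic resolves and flows normally.

```python
import socket

# Hypothetical relay addresses (TEST-NET range). A real service would run
# relays with network egress points inside each licensed region.
GEOFENCED_RELAYS = {
    "hulu.com":    "203.0.113.10",
    "netflix.com": "203.0.113.11",
}

def resolve(hostname: str) -> str:
    """Answer geo-fenced domains with a regional relay; resolve the rest normally."""
    for domain, relay_ip in GEOFENCED_RELAYS.items():
        if hostname == domain or hostname.endswith("." + domain):
            return relay_ip  # traffic to this service detours through the relay
    return socket.gethostbyname(hostname)  # everything else: ordinary DNS
```

Because only the DNS answer is rerouted — not the whole connection, as with a VPN — the user’s other traffic is untouched, which is what makes the approach both cheap and hard to detect.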

Stuart McMillen’s graphic adaptation of Neil Postman’s Huxley vs. Orwell discussion.

Could the NSA revelations break the Internet?

At the height of the SOPA/PIPA debates of 2011/12, critics of the proposals argued, rather hyperbolically, that the bills would “break the Internet”. Now, in the midst of perhaps the most significant leak of secret government documents in U.S. history, the notion of a broken Internet is once again rearing its head, only this time those deploying the metaphor are not using it for dramatic effect.

Unlike the failed anti-piracy proposals, the still-unfolding NSA revelations can’t be stopped, and some Internet governance experts are beginning to worry that fallout from the revelations may portend the fragmentation of the global Internet as we know it.

After abruptly canceling a state visit to the White House and delivering an impassioned attack against the NSA’s Orwellian methods at the UN, Brazilian President Dilma Rousseff announced plans to consider alternate modes of connection that would bypass the US. These plans include laying new submarine cables to physically route around the US; building more internet exchange points (IXPs) in Brazil; and attracting large content providers like Google and Facebook to construct data centers in the country so that Brazilian users’ data would stay in Brazil.

This raises several important practical questions about the effect such measures could have on global interoperability and the extent to which they could balkanize the Internet. If nations decide that the only way to shield their data from the prying eyes of the NSA is to reconstruct their Internet infrastructure so as to enact an information embargo of sorts on the US, the result would be a fractured Internet with far less innovation, opportunity, and freedom. As Google executive chairman Eric Schmidt said recently, “the real danger from the publicity about all of this is that other countries will begin to put very serious encryption — we use the term ‘balkanization’ in general — to essentially split the internet and that the internet’s going to be much more country specific. That would be a very bad thing, it would really break the way the internet works, and I think that’s what I worry about.”

Yet Rousseff’s proposal also illustrates some normative concerns about Internet governance that have remained relatively quiet — at least outside of Internet governance circles — until recently. For instance, just what is the actual role of the US in Internet governance? Much has been made of multistakeholder approaches to Internet governance, whether through entities like ICANN or via private commercial enterprises like Verisign and Google. But, as the NSA revelations illustrate, many seem to be in a state of denial about the political realities of Internet governance. Perhaps this reverent devotion to multistakeholderism exposes a pathological naiveté regarding the political economy of Internet governance. As instruments of control, nation-states are inherently antithetical to notions of multistakeholder governance, interoperability, and distributed structures of power — all values central to a functioning Internet.

Google’s Chromecast: Another Blow for Cable

If content is king, cable is experiencing a Stark-sized downfall. Further disrupting an already reeling cable industry, Google yesterday unveiled Chromecast, a simple and incredibly cheap (the memory-stick-sized device retails for $35) alternative to subscription-based cable television.

I’m not sure it’s a complete game changer — as far as I can tell it basically just cuts the cord for those using Roku, web-enabled Blu-ray players, or DVI-to-HDMI adapters. But it’s the meta-story here that’s most interesting to me. In this sense, Chromecast’s simplicity — its “why-didn’t-I-think-of-that” quality — says more about its competitors than it does about Google. It exposes the lack of creativity and risk-taking that, in my opinion, will be cable’s undoing. Something as obvious as this just highlights the industry’s inability, or unwillingness, to adapt. This is not to make light of the difficulty such a paradigm shift represents. It takes some serious cojones to abandon a business model that has been your bread and butter for so long.

As in newspapers, and publishing more broadly, those who act fastest, with a clear sense of the existential threat before them, will be in a better position to win the future. But whether due to the industry’s strong ties to Washington (perhaps they think they can lobby their way out of this) or a culture that preaches digging in and staying the course, products like Chromecast make it clear that cable is intent on going down with the ship.

Unpacking the networked fourth estate

On Wednesday Bradley Manning’s lawyers called Harvard Law professor and Internet scholar Yochai Benkler as an expert witness to testify as to Wikileaks’ place in the new media ecosystem, or what Benkler would more broadly call “the networked information economy” (for more on this I highly recommend reading The Wealth of Networks; it has become one of the foundational texts on the subject, if not the foundational text). Professor Benkler’s testimony (which can be found here thanks to a crowd-funded effort by the Freedom of the Press Foundation) was powerful in terms of countering the government’s case that, by leaking classified material to Wikileaks, Manning consorted with “the enemy.” As Benkler put it in his testimony (and in a March 2013 New Republic piece found here), “if handing materials over to an organization that can be read by anyone with an Internet connection means that you are handing it over to the enemy, that essentially means that any leak to a media organization that can be read by any enemy anywhere in the world becomes automatically [treason].” An especially draconian proposition considering how nebulous the word “enemy” can be.

But Benkler’s testimony was particularly striking from a theoretical and methodological standpoint. Most interesting to me was Benkler’s defense of Wikileaks as a member of “the networked fourth estate”. For Benkler, the networked fourth estate is journalism’s response (albeit a somewhat delayed one) to the distributed development approach implemented most successfully by the software industry in the ’90s and 2000s (for more on this, see his seminal piece Coase’s Penguin). Where once news media ownership was consolidated in the hands of those who could afford printing presses, broadcast towers, and satellites, the Internet has opened this up to a wider set of actors. But just what sort of actor is Wikileaks?

This was the question the prosecution hoped to answer for the court-martial panel by painting Wikileaks as a politically motivated activist network bent on destroying the U.S. To this end, lead prosecutor Major Ashden Fein began by asking Benkler to distinguish what separates “real journalism” from everything else.

Fein: Would you agree there is a difference between a transparency movement and a journalistic enterprise?

Benkler: “Yes. I think in general [a] transparency movement, any movement would be defined by the functions it fulfills. And if its goal is to achieve institutional or social change, then I would call it a movement not an act of journalism. But these two are not mutually exclusive. You can have the same organization commit acts of journalism or acts of movement building and movement participation. The two are not, they’re different, they’re not mutually exclusive.”

Pushing back, Fein asked whether there is a difference between activism and journalism, to which Benkler responded:

Benkler: I think there’s a difference between activism and journalism. Although again there are activists who also perform journalism, and when they perform journalism they’re doing journalism. There are journalists who perform activism. When they’re doing that, they’re activists. It’s not a unique organization or individual identity. It’s a behavior.

Fein: How do you determine when an organization is performing activism over performing journalism?

Benkler: I would define journalism as the gathering of news and information of public concern for purposes of its dissemination to the public. When I observe an organization doing that, I would say it’s engaged in journalism. When I see the effort to actually change an institution, I would say they’re engaged in activism.

Here we see the beginnings of a fundamental disconnect between the fairly narrow, monolithic (and antiquated) view of journalism (“all the news that’s fit to print”) espoused by the government in this case, and Benkler’s broader networked conceptualization in which many of the functions once reserved for traditional media elites (e.g. gatekeeping, the watchdog function) have been redistributed to a wider set of actors with a wider set of values.

Fein: Now, would you also agree that there’s a difference between the ideals of a journalist and the ideals of someone seeking maximum political impact?

Benkler: Not necessarily. Not necessarily. I think journalism has a broad range. There is a relatively narrow idea of more classical journalism. It’s not really classical, it’s mid 20th Century journalism that’s very focused on just being a professional. But there’s certainly politically oriented journalism.

This struck me as a critical distinction that exposes the fallacy of the prosecution’s case, at least as it concerns the legitimacy of Wikileaks. To say that Wikileaks is a purely activist enterprise engaged in muckraking is to apply antiquated thinking to a radically new environment with radically new production models. Indeed, the same networked economy that enables whistleblowers to leak information anonymously also enables Homicide Watch DC to consistently scoop the Washington Post; Global Voices to cover stories that might otherwise go untold; OpenSecrets to monitor campaign contributions; and ProPublica to invest the resources necessary to pursue rich investigative stories.

Nor is Wikileaks alone in the whistleblowing game. Perhaps the most unlikely imitator is Honest Appalachia, a group of transparency advocates from coal-mining country whose mission is to inspire “whistleblowers to make critical information available to an informed citizenry.” As Benkler explained in Wednesday’s testimony, “WikiLeaks may fail in the future because of all these events, but the model of some form of decentralized leaking, that is secure technologically and allows for collaboration among different media in different countries, that’s going to survive and somebody else will build it.”

Responding to questions regarding the credibility of Wikileaks and the accuracy of the reports it obtained, Benkler explained, “what was remarkable was that through dozens of publicly reported releases, thousands of releases, there were no significant reports of Wikileaks having to retract and say, oops, this wasn’t authentic. Dan Rather, I’m sure, would have loved to be able to say the same thing for himself.”

Turning to the methods employed by Wikileaks, Fein addressed the size and scope of the leaks.

Fein: Now, would you agree that mass document leaking is somewhat inconsistent with journalism?

Benkler: No. Why would I agree with that?

Disregarding the prosecution’s amnesia vis-à-vis the Pentagon Papers, “traditional” news organizations have been soliciting mass document leaks for years. In fact, several months after publication of the Afghan War Logs, then-New York Times executive editor Bill Keller revealed plans to develop the paper’s own in-house “drop box for whistleblowers”. It seems that not only is mass document leaking consistent with journalism, it’s also good business.

Following the prosecution’s examination of Benkler, the presiding judge in the case, Colonel Denise Lind, asked Manning’s lawyer David Coombs about the extension of journalist privilege to members of the networked fourth estate — be they individuals, organizations or activist networks. Explaining the delicate balance that must be struck when determining whether privileges may extend to an individual, Benkler cited the Supreme Court’s decision in Branzburg. Here the Court determined that freedom of the press applies to the lonely pamphleteer (the print age’s version of the basement blogger in boxers) just as it does to multinational news agencies.

Aside from explicating the networked fourth estate for a legal and military audience, Benkler’s testimony also provided some very interesting discussion of research methodology. Questioning his methods, prosecutors asked Benkler to justify his use of a primarily descriptive, qualitative textual analysis. As Benkler explains, “there’s a tradeoff between what you can identify in very precise quantitative terms that are usually very thin and don’t give you the texture of the event and what you can do with textured, qualitative analysis. And I tried to use both methods for wherever they are most usable.” From the perspective of someone who uses qualitative methods almost exclusively, Benkler’s defense of his method was impressive.

From here Benkler’s discussion of methodology seemed to move naturally to his findings. Having digested a massive amount of coverage of Wikileaks both before and after publication of the War Logs and Embassy Cables, Benkler explained how he came to see a marked shift in the tenor of coverage of the organization. Despite the fact that several traditional news organizations (The New York Times, The Guardian, Der Spiegel) published the very same documents, “the wrath was reserved purely for WikiLeaks.”
