Facebookistan and Googledom

Excerpt from Consent of the Networked: The Worldwide Struggle for Internet Freedom by Rebecca MacKinnon (2012, Basic Books)

Chapter 10


In May 2010, Hong Kong–based university professor and communications scholar Lokman Tsui decided to delete his Facebook account. In a blog post explaining his decision, he likened Facebook to a country run by an authoritarian, paternalistic government that claims to be acting in its people’s best interest:

Allow me to make a wild analogy, one I believe is not entirely out of left field. Many people know that there is censorship in China. Many people also tell me that 1) the poor Chinese must feel really repressed or 2) they must be okay with it. But if that’s the case, who in their right mind can be okay with censorship? They must be brainwashed.

Ask yourself this: if I decide not to leave Facebook, yet I know they do not care at all about my privacy, what does that mean? How is that different from the people who continue to use the Internet in China day in day out despite the prevalent and prolific practices of censorship? This is not a rhetorical question. Of course I realize Facebook is not the Chinese government, but I do think there are similarities between them, in kind although perhaps not in degree.

Leaving Facebook is so many magnitudes easier—physically, economically, emotionally—than it is for the average Chinese citizen to leave China and start a whole new life in another country. A physical government’s power over the individual far exceeds the power that any Internet company holds over any person. Still, Tsui made an important point. Hundreds of millions of people “inhabit” Facebook’s digital kingdom. Call it Facebookistan.

By mid-2011 Facebook had 700 million users. If it really were a country, it would be the world’s third largest, after India and China. The social network may have started out in Mark Zuckerberg’s Harvard dorm room as a platform for college students to flirt with one another, but it is now a world unto itself: an alternative virtual reality that for many users is now inextricably intertwined with their physical reality— and one that is often celebrated as a platform not only for personal expression but for political liberation.

Facebook’s motto is “Making the world open and connected.” In his best-selling book The Facebook Effect, charting the company’s origins and growth through the end of 2009, journalist David Kirkpatrick describes Zuckerberg’s deep and long-standing belief in what he calls “radical transparency”: the idea that humanity would be better off if everybody were more transparent about who they are and what they do. Anonymous online speech runs directly counter to this vision. Zuckerberg tells Kirkpatrick, “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly. . . . Having two identities for yourself is an example of a lack of integrity.”

This vision of the world—and Facebook’s role in shaping it—is deeply embedded in how Facebook’s top executives, developers, marketers, and programmers think about the service and its purpose. It is their ideology. It is the foundation upon which the laws of Facebookistan are constructed. The terms of service, to which every user must click “agree” to create an account, require that all inhabitants of Facebook use their real names. The sovereign rulers of Facebookistan enforce this “real-ID” policy. When discovered, accounts using pseudonyms or fake identities are punished with account suspension or deactivation. This internal governance system spans physical nations, across democracies and dictatorships. It influences people’s ability to communicate not only through Facebook itself, but also through a rapidly expanding universe of other websites and services that are increasingly integrated with Facebook.


By mid-2010, an estimated 3.4 million Egyptians were on Facebook, making Egypt the top Facebook-using country in the Arab world. In the spring of 2008, the Egyptian government—which had been led for nearly three decades by the same president, Hosni Mubarak—first felt the force of Facebook activism when young people in Cairo used the social networking platform to organize protests involving more than 60,000 people against rising food prices.

Then in June 2010 a young man named Khaled Said was brutally murdered by police in Alexandria, in retaliation, his family believed, for incriminating video of them that he planned to post on the Internet. After photos of his mutilated body in the morgue made it onto the Internet, several activists including a young Google executive named Wael Ghonim set up a Facebook page called “We Are All Khaled Said,” using fake names to protect themselves from potentially meeting the same fate as their hero. The group organized a series of protests called “silent stand against torture” involving first hundreds then thousands of people in cities across Egypt. More than 1,000 people showed up for Said’s funeral. More than 8,000 people attended one protest in Alexandria.

The day before one long-planned Friday of protest—which happened to be Thanksgiving Day back at Facebook’s Palo Alto headquarters— the Khaled Said page hit its peak of activity as more people joined, members traded information, and organizers sent out updated instructions. Then suddenly, without warning, the page disappeared from view. Its creators received a notice from Facebook staff that they had violated terms of service that require administrators of pages to use their real identities—and furthermore, that accounts of people not using their real names, when discovered, would be shut down.

The page’s creators were fortunate to know people working in Silicon Valley and for international human rights groups, who contacted Facebook executives. The Khaled Said page was restored in less than twenty-four hours, but only after administrative rights for the page were handed over to another person willing to verify her true identity with Facebook staff. After the revolution, that person felt safe enough to reveal her name publicly as Nadine Wahab, an Egyptian woman living in Washington, DC. After taking over responsibility for the Khaled Said page in November 2010 when the group’s members were particularly vulnerable to arrest—and the same kind of police brutality that had killed Khaled Said in the first place—Wahab said she found Facebook’s absolutist attitudes toward anonymity exasperating. “These guys are techies,” she told me in December. “I don’t think they understand the implications that their rules and procedures have for activists in places like Egypt.”

If anybody needed a reminder of how dangerous the situation was, just one week after the Khaled Said deactivation incident, a thirty-year-old man named Ahmed Hassan Bassyouni was brought before a military tribunal. They sentenced him to six months in prison. Why? Because he created a Facebook page dedicated to advising people on the application process for joining the Egyptian military. It seemed harmless enough that a local radio station interviewed him about it—something they would not have dared to do if he were running a page, for example, about military corruption. As his lawyer pointed out, the information on Bassyouni’s page was all publicly available from official sources and regularly published in newspapers. Still, he was accused of “spreading military secrets over the Internet without permission.” Unlike Wael Ghonim, who was careful to hide his identity while running the Khaled Said protest group, it had not occurred to Bassyouni that he would get in trouble for what he was doing. He made the mistake of being open about his real identity on Facebook, and paid for it dearly.

“Once these kinds of things happen in a community where brutality is constant,” Wahab told me in December 2010, “Facebook no longer feels like a safe place.” The problem is, “there are no other alternatives now. If you want to organize a movement the only place to do it effectively is on Facebook, because you have to go where all the people are. There needs to be a mechanism that enables us to do this kind of work. Either Facebook is going to get it, or we’re going to be playing cat and mouse.” Fortunately for Egypt’s activists, the story ended well, at least in the short term, with the fall of the Mubarak regime. Wael Ghonim was soon lavishing praise on Mark Zuckerberg for having created the world’s greatest organizing tool for freedom and democracy.


Members of Facebook’s management team are adamant that the real-name requirement is key to protecting users from abusive and criminal behavior. Tim Sparapani, who worked for the American Civil Liberties Union before becoming Facebook’s public policy director, explained it to me this way: “Authenticity allows Facebook to be more permissive in terms of what we can allow people to say and do on the site.” The worst behavior on Facebook, he said, is committed by people known as “trolls” who try to hide behind fake identities to get away with abusive behavior that they would not want associated with them in the real world.

His colleague Dave Willner is known as the “troll slayer.” Officially, the twenty-seven-year-old who goes to work most days in blue jeans and a T-shirt develops policy for Facebook’s “hate and harassment” team. These are the people responsible for enforcing a range of rules and policies meant to protect users from harassment and cyber-bullying. Both human and automated enforcement mechanisms aim to prevent the site from being overrun by spammers and criminals, which, he and his colleagues told me on a visit to Facebook headquarters, is an unending battle.

People want to be free to express themselves, organize whatever they want, and say whatever they want. Yet at the same time, parents want to keep their children safe from criminals, women do not want to be stalked by abusive ex-partners, and religious and ethnic rights groups will not tolerate the site’s becoming a haven for hate speech or the launching pad of lynch mobs. The Simon Wiesenthal Center is unhappy that Facebook, on free speech grounds, refuses to shut down pages dedicated to denying that the Holocaust happened. But Facebook does shut down groups that cross the line from expressing opinions into more aggressive or organized campaigns against Jewish people. Many child protection organizations complain that Facebook has not done nearly enough to keep young people safe online. Such is the problem with governance, online and offline: going too far for some, not doing enough for others.

In any given week, Facebook’s “hate and harassment” team receives two million reports from users who have identified content they believe is abusive, harassing, or hateful and should be taken down. The problem is that the people who make abuse reports are not very “accurate.” Only about 20 percent of these reports are for behavior or content that fit the definition of abusiveness according to Facebook’s terms of service. Meanwhile, a lot of what the team would define as genuinely abusive never gets reported at all.

Thus a big part of the team’s job is to develop processes to identify abusive content and remove it, while not removing other postings or pages that may be edgy and upsetting to some but are not actually against the terms of service. They have developed a system that combines automated software to identify image patterns, keywords, and communication patterns that tend to accompany abusive speech, along with review procedures by flesh-and-blood human staff. Willner focuses on defining policy for the site: guidelines about exactly what people should or shouldn’t be allowed to do under what circumstances, and procedures for how violations are handled. These friendly and intelligent, young, blue jeans–wearing Californians play the roles of lawmakers, judge, jury, and police all at the same time. They operate a kind of private sovereignty in cyberspace.

“Overwhelmingly, most people won’t engage in antisocial behavior if it’s associated with their real-life identity,” Willner told me. His bosses agree. In a 2004 interview with author David Kirkpatrick, Mark Zuckerberg alluded to a kind of social contract between Facebook and the user: “We always thought people would share more if we didn’t let them do whatever they wanted,” he said, “because it gave them some order.” In 2010 Elliot Schrage, vice president for public policy, put it even more directly to the Financial Times: “We believe we are innovators in helping people manage their identities and reputations online, in contrast to the lack of control that exists on the Internet as a whole.”

In seeking to build and maintain a global platform that can be used and trusted by hundreds of millions of people of all ages, cultures, and religions, Facebook has sought to shelter its users from a virtual version of what seventeenth-century English philosopher Thomas Hobbes famously called the “state of nature.” In this primitive state, life is “nasty, brutish, and short,” due to a complete lack of government. People are thus completely free to do whatever they like without any constraints or consequences. This state of complete freedom may be ideal for the strongest and most aggressive people, and maybe occasionally for the cleverest if they band together successfully, but not for most people most of the time. Rational individuals seeking to maximize their own self-interest will seek to replace the “state of nature” with some kind of government. Thus a “social contract” is formed: individuals recognize that it is in their interest to voluntarily relinquish a certain amount of their personal freedoms to a sovereign or government, which in turn sets and enforces rules meant to serve the interests of all members of society, in aggregate. Hobbes concluded that the greater good could be served only by a strong authoritarian sovereign with concentrated powers.

One of several major problems with Facebook’s governance system, however, is that it is not enforced consistently or uniformly, and there has been no clear or straightforward appeals process for people who are not famous or do not have personal connections to members of Facebook’s management team. In June 2010 a Facebook page with more than 800,000 members called “Boycott BP,” created in response to the worst oil spill disaster in the continental United States, was disabled without warning for reasons that remain unclear to the group’s creator, Desmond Perkins. After the takedown was reported on CNN’s iReport and a variety of other news outlets, Facebook restored the page two weeks later. The explanation was terse: “The administrative profile of the BP Boycott page was disabled by our automated systems, therefore removing all the content that had been created by the profile. After a manual review, we determined the profile was removed in error, and it now has been restored along with the page.” Greg Beck, an attorney for Public Citizen, a group supporting the BP boycott, told CNN in a follow-up story that he found Facebook’s explanation frustrating. “Facebook and other social websites have become the public squares of the Internet—places where citizens can congregate as a community to share their opinions and voice their grievances,” he said. “Facebook’s ownership of this democratic forum carries great responsibility.”

Many users—who like most people do not actually read the terms of service—are not even aware that using a fake name is against the rules. Inconsistent enforcement means that some people have gone for years using a fake name without a problem. A username search on Facebook for “Donald Duck” turns up many dozens of users by that name. The same thing happens with a search on the Chinese characters for “cola” (though Facebook is blocked in China, a lot of people in Hong Kong, Taiwan, and Singapore use it in Chinese). The abuse team says they cannot go after everybody and must prioritize the accounts that have unusual patterns of activity or that other users actively report as having violated the terms of service. This means that if a person is not using his or her real name and is engaging in controversial or high-profile activities on Facebook, that person is on particularly shaky ground—ground that Facebook reserves the right to pull out from beneath the user at any time. After all, the user consented to this situation when setting up the account by clicking “agree”—regardless of whether he or she read and understood the legal text being agreed to.

Critics are concerned that Facebook’s core ideology—that all people should be transparent and public about their online identity and social relationships—is the product of a corporate culture based on the life experiences of relatively sheltered and affluent Americans who may be well intentioned but have never experienced genuine social, political, religious, or sexual vulnerability. As Microsoft researcher danah boyd (who officially spells her name in all lowercase) wrote on her personal blog in the spring of 2010, “I think that it’s high time that we take into consideration those whose lives aren’t nearly as privileged as ours, those who aren’t choosing to take the risks that we take, those who can’t afford to.” Activists from Iran are one example: their work would not be as effective if they could not use Facebook, but for them it is also too dangerous to use their real names.

Nobody is forcing anybody to use Facebook. Yet for political activists— or anyone trying to convince a large and diverse audience of anything— abandoning Facebook is easier said than done. In 2010, Americans spent more time on Facebook than on Google. If the largest pool of people your political or social movement most needs to reach is most easily and effectively reachable through Facebook’s vast social network, leaving Facebook is a blow to the movement’s overall impact.

It would be unfair to say that Facebook does not care at all about user opinion. It does in its own authoritarian sort of way, just as the Chinese government needs to care about public opinion if it wants to stay in power. In an online Q&A session on the New York Times “Bits” blog in May 2010, Facebook’s public policy chief Elliot Schrage described how company staff had set up special pages and discussion groups inside the platform where people could comment on privacy policies, design features, and other policies. “Whenever we propose a change to any policies governing the site, we have notified users and solicited feedback,” he said. Given the uproar over changes to Facebook privacy settings in late 2009 and early 2010, “clearly, this is not enough,” he admitted. “We will soon ramp up our efforts to provide better guidance to those confused about how to control sharing and maintain privacy.” Later in the session, in response to further critical questions by New York Times readers, he remarked: “It takes forums like this to get better ideas and insights about your needs.”

Such comments sound eerily similar to those of Chinese Premier Wen Jiabao, congratulating his government in public web chats for caring so much about public opinion. Ultimately, however, just as China is governed by technocrats and functionaries who have no popular mandate other than a general claim that people’s standard of living has improved dramatically over the past thirty years, decisions at Facebook are made by a group of managers who insist they are acting in users’ best interests. Their belief rests mainly on the fact that millions of people continue to join and use Facebook, and that frequent users are spending ever longer periods of time on the site.

This assumption—that everything is fine as long as growth continues, and that the complainers are in the minority—is standard authoritarian fare. It reflects the classic Hobbesian social contract: a bargain between public and sovereign that Hobbes used to justify the need for enlightened monarchy. Hobbes was very much a royalist; Zuckerberg and company may have deployed the tools that people are using around the world in pushing for democracy, but they are no democrats when it comes to balancing the rights and risks their users face.

In May 2010, a group of activists tried to get people to protest Facebook’s power by deleting their accounts on a designated “Quit Facebook Day.” Though 38,146 people pledged to quit, the effort made no meaningful dent in Facebook’s growth from 400 million to 500 million users that year and seemed to have no impact on Facebook’s policies. Reflecting on this failed boycott, danah boyd wrote an essay arguing that activism rather than boycott is likely to be more effective in bringing change to Facebookistan:

Regardless of how the digerati feel about Facebook, millions of average people are deeply wedded to the site. They won’t leave because the cost/benefit ratio is still in their favor. But that doesn’t mean that they aren’t suffering because of decisions being made about them and for them. What’s at stake now is not whether or not Facebook will become passé, but whether or not Facebook will become evil. I think that we owe it to the users to challenge Facebook to live up to a higher standard, regardless of what we as individuals may gain or lose from their choices.

Activism by users, celebrity technologists, human rights organizations, and civil liberties groups has in fact managed to have some impact on the governance of Facebookistan. In response to sustained lobbying by the Electronic Frontier Foundation, the Committee to Protect Journalists, and others, Facebook’s engineers added new encryption and security settings that enable users to better protect themselves against surveillance of as well as unauthorized intrusion into their accounts. After holding a number of conversations with activists and human rights groups, in mid-2011 the company rolled out an easy-to-use appeals process for people to contest the deactivation of their accounts, a process which until then had not existed for people without personal connections to Facebook staff. Facebook also developed a new “community standards” page to explain its terms of service in a simple and accessible way. Yet while that page as well as the official legally binding Terms of Service page were translated into Arabic, as of mid-2011 those key documents explaining Facebook’s real-name policy and other “rules” whose violation could trigger account deactivation and suspension had yet to be translated into the languages of a number of other vulnerable user groups, such as Chinese and Farsi.


After a year in self-imposed exile, Lokman Tsui decided that his departure had done nothing to help change Facebook and rejoined. Though his absence made no difference when it came to keeping up with close friends, he realized that he missed being able to easily stay in contact with the many other friends, former classmates, and colleagues with whom he otherwise had “weak ties.” In a way, nobody else was punished by his exile but himself. Meanwhile, in June 2011 Tsui was getting ready to start a new job as a Hong Kong–based policy adviser for Google. In an e-mail explaining his decision to rejoin, he wrote, “I feel that I would do my new Google policy job a disservice by being disconnected from Facebook. That is, I need to be on there to know what is happening from a professional point of view.” He had also come around to danah boyd’s point of view: that engaging Facebook from the inside as a noisy constituent and customer might in the long run be more effective than flinging criticism from a position of exile. Ironically, Tsui would soon find himself on the other side of a firestorm over how Google governs its users.

In late June 2011, Google began a gradual rollout of its new social networking service, Google Plus. Despite being invitation-only for the first month or so, to give engineers and designers a chance to iron out the bugs, by mid-August it already had more than 20 million users. I joined on June 29 after receiving four invitations from friends who are all considered Internet gurus and experts of one kind or another. Thus the earliest users of Google Plus were for the most part experienced, web-savvy, and articulate people who immediately began to explore the network’s features, discussing them in great detail and comparing them with Facebook’s.

Initial reactions were largely positive. Many welcomed Google Plus’s more sophisticated approach to privacy, giving users much more finely grained control over what they choose to share, with whom, and under what circumstances. A number of early flaws in the privacy control system were quickly fixed by engineers. Another feature called the data liberation front made it possible for users to extract all of their data if they decided to leave the service or wanted to back it up elsewhere for safekeeping. Such features were praised by civil liberties groups and activists, some of whom hoped that more competition would force all companies to improve their practices.

Many of Google Plus’s earliest members also rejoiced at what seemed to be a more flexible approach to identity compared to Facebook’s, citing its official “community standards” page, which said, “To help fight spam and prevent fake profiles, use the name your friends, family or co-workers usually call you.” The Chinese blogger who publishes widely in English under the name Michael Anti—not actually his real Chinese name—happily joined Google Plus after having been kicked off Facebook for violating its real-name policy in late 2010. Many others around the world who have professional reputations associated with long-standing pseudonyms instead of their real names signed up for Google Plus with their pseudonyms. These included an Iranian cyber-dissident known widely in the Iranian blogosphere as Vahid Online, as well as an engineer and former Google employee whose real name is Kirrily Robert but who is much better known online and professionally by her user name, Skud.

The honeymoon did not last long. In mid-July Google moved to deactivate pseudonymous accounts en masse, without warning. This came as a shock, particularly to many people whose pseudonyms are in fact the names by which they are commonly known “in daily life.” Because a large number of these early Google Plus users happened to be bloggers, journalists, technologists, and activists, many protested their deactivations noisily in the blogosphere, in the media, and in their new Google Plus networks full of influential journalists and bloggers, who quickly relayed their stories to broader audiences. A UK-based science writer who publishes articles in the Guardian under the pseudonym GrrlScientist wrote an article about her experience. She quoted a Google spokesperson who explained that to have a Google Plus account, a person must have a Google profile, which in turn requires the use of one’s real name, despite the fact that the legal right to use pseudonyms, even in very official contexts like financial transactions, paying taxes, and filing lawsuits, is well established in many countries. Effectively, Google had joined Facebook in denying people’s right to define their own identity, a right that a large percentage of their users expect to be protected and respected even by government authorities.

As a former Google employee who had been privy to internal company discussions about identity policies prior to the launch of Google Plus, Skud had anticipated this situation and collected a vast number of web links and testimonials to prove that “Skud” is a persistent pseudonym tied to one individual who has taken responsibility for her actions and words over many years. After her Google Plus account predictably was deactivated, she included this evidence as part of her appeal for reinstatement via a formal appeals process Google had set up to handle mistaken deactivations.

Skud’s appeal was denied and she eventually created a new account as “K Robert.” But she was not ready to give up the larger battle to convince Google to change its identity policy. She surveyed people in similar situations and found that although Google’s appeals process encouraged but did not require people to upload copies of a government-issued ID to prove their identity, almost nobody had succeeded in having their account reinstated without uploading their ID. She launched the website my.nameis.me, dedicated to discussing questions of identity and why pseudonymity and anonymity have a necessary place in a free and democratic society. Technology gurus and activists from around the world weighed in, contributing statements and testimonials. Though few mainstream news organizations had written about the human rights implications of Facebook’s real-name policy, the torrent of commentary flowing at the same time from many influential technologists—coming right on the heels of Google Plus’s launch, which was in itself a major news story—brought the public debate about online identity into the mainstream in a way that had not previously been the case.

Then Randi Zuckerberg, Mark Zuckerberg’s sister and Facebook’s head of marketing, provoked a new round of controversy when she defended the need for real-ID rules. “I think anonymity on the Internet has to go away,” she remarked at a conference. “People behave a lot better when they have their real names down. . . . I think people hide behind anonymity and they feel like they can say whatever they want behind closed doors.” Activists countered that such attitudes leave no room for the Internet’s most vulnerable users: from cyber-activists in Iran like Vahid Online (whose account was suspended in early August), to victims of domestic abuse, to people living in small communities where knowledge of their real sexual orientation or political views might incite reprisals or social ostracism of their families. Chinese Google Plus users began to noisily protest the policy in the service’s help forum and elsewhere on the site, pointing out that even many Chinese social networking platforms allow pseudonyms.

One of the most sophisticated and original arguments in favor of pseudonymity on social networks like Facebook and Google Plus was articulated by Tunisian blogger and cyber-activist Slim Amamou in an e-mail exchange that included activists and Google employees. He argued that if the intent of a social network’s identity policy is to create a trusting and safe environment, tying identity to cards issued by governments punishes the most vulnerable members of society by excluding them. In fact, when an online community includes many people who live in countries with repressive regimes, a company’s insistence on real-name policies tied to government-issued ID actually erodes trust between users and the company, and even makes it harder for some users to trust one another. A trustworthy and responsible online identity, he explained, is “related to a public history, which is the definition of a profile in a social network. In other words it’s the profile who creates identity through trust and not the other way around. I repeat, it’s you, Google Plus, who are supposed to generate identities and not simply trust nation states’ administrations for that.”

Such arguments, however, were unsuccessful in budging Google’s top management. In early August, the company reaffirmed its real-ID policy for Google Plus but instituted a new four-day grace period between when users are warned that they are in violation and the deactivation of their account. Privately and off the record, a number of Google employees told me in July and August 2011 that there had been fierce internal debate about how identity should be handled on Google Plus. In the end, the decision was made at the highest levels of the company that in order to make Google Plus a commercially successful platform, real-name identity would need to be enforced. Google Plus simply was not created with dissidents in mind and was not meant to be used for political dissent or by other people who are not comfortable disclosing their real identities. Other Google services including Blogger, YouTube, and Gmail would continue to support pseudonyms, and the company would remain dedicated to protecting the rights of dissidents to use those services.

This was not enough for people who had hoped that Google Plus would be a dissident-friendly Facebook alternative. Many people continued to protest, some deliberately setting up pseudonymous accounts to publicize their deactivation; others led long tactical and strategic discussions about how to engage Google management and convince them to change their policy. These people’s perseverance is important. As social networks become an increasingly influential part of citizens’ political lives, telling political dissidents, human rights activists, and other at-risk individuals that there is no place for them on the world’s most popular and widely used social networks has serious political implications on a global scale. The harm is no less real even when the companies that run those networks genuinely do not mean to cause harm.


In the long run, if social networking services are going to be compatible with democracy, activism, and human rights, their approach to governance must evolve. Right now, for all their many differences, both Google Plus and Facebook share a Hobbesian approach to governance in which people agree to relinquish a certain amount of freedom to a benevolent sovereign who in turn provides security and other services.

Fortunately, Hobbes was by no means the last word in social contract theory. He was followed by John Locke, one of the first political thinkers to set forth a logical argument for why government should be based on “consent of the governed,” the fundamental idea that inspired the English, American, French, and other more recent revolutions. In Locke’s Second Treatise of Government, a document to which Thomas Jefferson turned in drafting the Declaration of Independence, government is legitimate only when it satisfies the fundamental needs of the community. A government that violates the trust of its people loses their “consent”—and therefore deserves to be overthrown.

Locke drew inspiration from a rebellious group of men known as “The Levellers,” an informal alliance of agitators and pamphleteers during the English civil war of the 1640s who believed the monarchy should be abolished and replaced by a civil state based on English common law and a few statutes including the Magna Carta, which over four centuries earlier represented the first effort to place constraints on sovereign power.

The modern sovereign—otherwise known as government—derives its authority from varying forms and degrees of consent, even to some extent beyond the community of parliamentary democracies. It is time for the new digital sovereigns to recognize that their own legitimacy—their social if not legal license to operate—depends on whether they too will sufficiently respect citizens' rights.

The social contract on which modern democracy is based is primarily concerned with the protection of and respect for citizens' property and physical liberty. As citizens, we now use digital networks and platforms—including Facebook and now Google Plus—to defend our physical rights against abuse by whatever physical sovereign power we happen to live under, and to bring about political change. However, our ability to use these platforms effectively depends on several key factors that are controlled most directly by the new digital sovereigns: who knows what about our identities and under what circumstances; our access to information; our ability to transmit and share information publicly and privately; and even whom and what we can know. How our digital sovereigns exert these new powers may or may not be in response to government laws or pressures, and those pressures may be direct or indirect. Either way, the companies controlling our digital networks and platforms represent pivotal points of control over our relationship with the rest of society and with government.

No company will ever be perfect—just as no sovereign will ever be perfect, no matter how well intentioned and virtuous a king, queen, or benevolent dictator might be. But that is the point: right now our social contract with the digital sovereigns is at a primitive, Hobbesian, royalist level. If we are lucky we get a good sovereign, and we pray that his son or chosen successor is not evil. There is a reason most people no longer accept that sort of sovereignty. It is time to upgrade the social contract over the governance of our digital lives to a Lockean level, so that the management of our identities and our access to information can more genuinely reflect the consent of the networked.
