Ideas

How to Fix Twitter and Facebook

Social media has become too important to entrust to anyone. Are there ways to entrust it to everyone?

By Jonathan Zittrain

June 9, 2022

About the author: Jonathan Zittrain is a professor of law and a professor of computer science at Harvard, where he co-leads the Berkman Klein Center for Internet & Society’s Institute for Rebooting Social Media.

Elon Musk’s on-again, off-again takeover bid for Twitter has spurred questions about what will happen if the deal goes through. Is it a vanity play that would allow Musk to surprise the platform’s users with new features on a given day’s whim? Or a business play to turn Twitter into a more assiduous targeted-advertising vehicle? Or a political play whose purpose is to proselytize Musk’s ideological views or, through such singular acts as re-platforming Donald Trump, to influence the outcome of the next presidential election? (Fellow centibillionaire Jeff Bezos asked aloud whether Musk’s interests in maintaining Tesla’s good graces in China could give that country leverage over a Musk-owned Twitter; an interesting question from the man who owns both Amazon and The Washington Post. Bezos concluded not.)

Although it’s fair to ask these questions, it’s hard to see how anyone, possibly including Musk himself, can reliably predict what Musk will do if he becomes Twitter’s sole owner. His various pronouncements have been so vague, and his behavior across his other enterprises so impulsive and mercurial, that it’s a mug’s game to forecast what he’d do were Twitter to be fully in his possession.

Derek Thompson: Elon Musk buying Twitter is weird, chaotic, and a little bit awesome

That very unpredictability reveals issues beyond the evolving soap opera of a high-profile corporate acquisition.
If Musk’s purchase goes through, it would mean that two very significant pieces of the world’s digital speech infrastructure, Twitter and Facebook, are each entirely in a single person’s hands. While Facebook’s parent, Meta, is a public company, when Facebook itself went public a decade ago, founder Mark Zuckerberg retained his power as CEO to fire the board, rather than the other way around. Meta also owns Instagram and WhatsApp. These platforms aren’t just offering run-of-the-mill merchandise and services, as most other businesses do; they carry and shape incalculable quantities of civic speech that can set the agenda for traditional media. For these companies to be held in completely private ownership creates real risks, no matter who the owner is.

The risk of private ownership of the public square is that one person’s views could end up privileged over all others. The risk of public ownership of a public square is that, given social media’s innumerable and inevitable controls over which speech to favor, those in government with oversight could unduly exercise that power over public speech—exactly the situation that the First Amendment was drafted to prevent. Understanding each of these dangers can point us toward a third option.

In the case of private ownership, harnessing a powerful platform to a single person’s preconceived political agenda risks misinforming the public, with no mechanism for an internal check or pushback. And this can occur in ways both direct and subtle—not just using the platform to knowingly tell lies, but also carefully elevating individual purported truths to paint a picture that amounts to a lie. Propaganda can work, and when it does, it serves the interests of its creators rather than of those who believe it and then act upon it. Should someone as willfully eccentric as Elon Musk take over Twitter, that sole private ownership will place in dazzling relief the power of one person to shape millions of people’s perceptions and opinions.

The underlying vulnerability is that our global town squares, in part by historical accident, have either been parochial since their beginnings or been susceptible to becoming so when a public company may be taken private in a leveraged buyout. That a single person could own a center of speech has plenty of historical precedent, which might make an instance like this one seem no big deal. But the newspapers and television stations passed down as inheritances through wealthy families, or bought and sold like sports teams, had some dampers built in that resisted the ideological sway of their owners. The most influential papers separated their business and editorial operations. And broadcast TV stations, which have had to operate in the public interest to maintain government-issued licenses, traditionally ran independent newsrooms.

None of these cultural, professional, and regulatory elements applies to today’s social media. So Elon Musk’s prospective acquisition of Twitter brings to light two long-standing, interlocking problems of online governance: We the public can’t agree on what we want, and we don’t trust anyone to give it to us.
We need methods of governing our civic discourse that aren’t as capricious, opaque, and unaccountable as a private owner’s can be, so that our conversations aren’t at the mercy of whipsawing shifts in the moods and notions of their philosopher-king proprietors, well-meaning or otherwise. At the same time, we don’t want the government to become entangled in our everyday private conversations. The government works best in a speech ecosystem if it can broadly empower everyone—for example, by providing inexpensive municipal broadband—and is otherwise constrained, severely and appropriately, by the First Amendment in what speech it can limit. Hence the sensible court ruling in 2019 that even a small-seeming thing like the president’s act of blocking critics from his Twitter account is subject to a First Amendment test—one likely flunked. A government-hosted Twitter or Facebook would create a circumstance in which every content-moderation decision could justifiably prompt a federal case, with little public consensus on what the right outcome should be. That’s why government-run social media is rightly a nonstarter, even as solo private ownership is so ill-fitting. So what’s the right answer?

Read: Elon Musk already showed us how he’ll run Twitter

There is another form of governance, one easy to overlook because of its informality and very ubiquity, that offers a different path. It takes place whenever a group of people is thrown together and acts in affirmative concert. It occurs as people pass a beer along a row at a baseball game; it can also be seen within a classroom or a jury room. It can manifest at Burning Man, in a stuck elevator, and among communities of worship. It’s in bowling leagues, Rotary clubs, and friendly poker games. It can be as chaotic as a spontaneous protest or as orderly as a self-forming queue at a bus stop. Such cooperation emerges when people don’t expect ready recourse to any outside authority. Instead, they try to work out their problems with one another, or to pursue common opportunities, in ways that get past their fear or distrust of others. Where this type of organization works, often by starting small, groups develop new norms to help them grow without blowing up. We might call this mutual aid or “community governance.”

Of course, it doesn’t always work. People can be lousy to one another when there’s no external consequence for behaving badly. This is especially true online, where identity can be shielded and one bad actor can wreak havoc, with virtually no expense or effort. But believing in community governance is not naive, and cultivating circumstances in which it can flourish is a worthy and plausible project. For example, a natural disaster spurs mutual aid in some circumstances and sparks violence in others. Social media is designed to elicit some social behaviors over others, usually those that result in maximal engagement with the platform, and this has very little to do with whether people find trust in one another, or come away better or worse informed. At their best, online platforms have facilitated life-changing friendships, including for people who might lack a comfortable community in the physical places where they live, work, or study. At the same time, a medium built around short posts amplified to large audiences on the basis of outraged reaction can, unsurprisingly, reward and bring out the meanness in people.
Over the years, we’ve seen community governance working in the online world, from little glimmers to real beacons: LiveJournal allowed anyone to be a diarist; Couchsurfing facilitated house-sharing without any commercial element (at least until, under pressure from Airbnb, it went for-profit); Wikipedia has created small communities of editors one encyclopedic article at a time; and Reddit, at its best, has enabled topical communities to form without lumping all of its 50 million daily users into one unbounded feed.

Community governance thrives through practices and technologies that let small groups form, with frontline content moderation from leaders who themselves are long-term members, know the group’s norms, and can enlist its help in reinforcing them. This way of doing governance also involves tools to help people address privacy concerns by sharing their identities in a safe, partial manner, whether they’re participating in a group for cancer survivors, for HVAC repair people, or for anime enthusiasts.

Read: Wikipedia, the last bastion of shared reality

The same dynamic crops up in private Facebook groups, where the moderators are drawn from among the users, rather than appearing as a nameless, possibly AI-powered cop on the other end of a “Report This Comment” button. This system of governance takes root when participants realize that their problems aren’t customer-service issues, to be dealt with by a corporate overseer, but community issues, which they can work to resolve themselves. When such communities are thriving, they generate for their members the frisson of making and experiencing supportive, selfless contributions—not because they’re compelled or paid to but because they genuinely want to.

Community governance is perfectly compatible with setting enforceable boundaries on people’s behavior—indeed, it depends on it. If a fight breaks out among fans at a baseball game, that’s the end of the mutual aid passing along one another’s beers. Official law-enforcement agencies are needed to investigate and pursue cases of threatening or abusive behavior—especially online, where there are fewer ties of trust between people, and disruptive trolls can pop up and vanish in ways they can’t at a bowling league. Private platforms also bear a moral, if not yet a legal, responsibility to deal with harassment and disinformation. They have not only a duty of care toward individual users but also an obligation to prevent people from using social media to, say, amplify harmful faux medical advice or to abet genocide.

Community governance can help draw difficult lines in such cases, and do so in a way that confers legitimacy on the participants’ decisions. That’s why I proposed, several years ago, in light of Facebook’s declining to evaluate its torrents of political ads for truth before running them, that high-school students, as part of their graded coursework, work together to judge political ads slated to run on social media. The students can explain their reasoning (with any dissents) for what flunks a disinformation test, and have their judgments stand, one ad at a time. A popular platform under less centralized control can also lower the stakes of moderation decisions. For example, deplatforming Donald Trump needn’t be a top-down, all-or-nothing decision but rather an emergent phenomenon among some audiences but not others.
(Limited community governance on this issue is already happening organically on Twitter to an extent, with many of his enthusiasts tweeting the banned former president’s off-site pronouncements into their timelines.) More dispersed, self-governing platforms would avoid the phenomenon of a Person or Topic of the Day creating site-wide pile-ons. Once we escape the false choice between entirely private and government-run, with decisions instantly universal, new possibilities open up.

How might we push our status quo of private corporate ownership toward community governance? We could start by pressing the reigning monarchs to act on their stated values. Mark Zuckerberg has long thought of Facebook, rightly or wrongly, in nation-state terms. “In a lot of ways Facebook is more like a government than a traditional company,” he said in 2009. “We have this large community of people, and more than other technology companies we’re really setting policies.” By 2019, Zuckerberg was mentioning that power as a reason for Facebook to stand back a bit on content moderation: “We should not be the arbiters of truth in deciding what is correct for everyone in the society,” he said. “People already generally think that we have too much power in deciding what content is good.”

Adrienne LaFrance: The largest autocracy on Earth

This was one of the reasons Zuckerberg offered to Congress for why Facebook shouldn’t be refereeing the content of political ads. That line of argument, turned into policy, could be merely a self-serving way to avoid responsibility for content moderation and the loss of revenue from, say, any rejected ads. Yet the Meta founder and CEO has been contemplating novel forms of external governance, albeit fitfully, for more than a decade. Innovations have included asking Facebook users to vote on major new policies as well as setting up an external oversight board to interpret Facebook’s broad content guidelines with binding effect. When he was Twitter’s CEO, Jack Dorsey established an initiative called Bluesky, which he pitched as an effort to create networks that would be compatible with Twitter while operating outside its formal corporate control.

Mark Zuckerberg’s user-voting system failed early on, perhaps predictably, in part because it treated all Facebook users as components of an undifferentiated electorate, rather than as a means of seeding community governance in smaller groups. And Facebook’s external oversight board draws more on concepts of appeals-court review than on those of informal self-governance. But these unusual experiments highlight one useful characteristic of the otherwise worrisome phenomenon of single-person ownership: The owners can afford to innovate in imaginative ways of devolving their power that a more traditional, risk-averse, bottom-line-focused corporate board might never try.

Zuckerberg’s view of Facebook as resembling a nation-state suggests that traditional private governance is a mismatch for that company’s power and reach. In his ventures, Musk has at times seemed to regard himself as a savior, engaging in goals and projects that are avowedly more ambitious than wealth accumulation or corporate success alone. These aims can appear quixotic or willful—as in voyaging to Mars or defending against rogue AIs in some hypothetical future—but the extent to which both Zuckerberg and Musk embrace visionary roles suggests an opening.
If Musk ends up owning Twitter, the noble thing that he and Zuckerberg could each do with their respective social-media platforms would be to thoughtfully devolve that unilateral control.

Can we govern ourselves? Can we trust strangers? These questions go to the heart of a functioning civic society. No answer is preordained, but getting to a good one requires building distributed architectures, online and off, to foster cooperation among the many and to contend with the few who want to wreck it. Simply hoping either that the right person buys Twitter and imposes more enlightened control over its users’ behavior, or that government authorities can successfully regulate billions of everyday exchanges among people, seems much more wishful than the idea of making community governance work where we can.