What It Really Means to “Hold Big Tech Accountable”

Across the globe, lawmakers are belatedly moving to regulate the companies that manage the internet. These efforts vary widely, in terms of both intent and mechanism. Antitrust efforts aim to limit the power of gargantuan companies and improve competition. Regulations targeting specific types of harmful content (child sexual abuse material, terrorist propaganda, hate speech, election interference, and so on) aim to limit acute dangers and moderate violent agitation. Privacy regulation aims to protect individuals in a new age of both intentionally nefarious and unintentionally risky digital surveillance.

There is no doubt that improved regulation is necessary. Although policymakers have long acknowledged the risks of digital technology, they for too long centered the social and economic benefits of cheap, seamless global communications while failing to mitigate the fundamental risks that came along with them. But rectifying those mistakes carries dangers of its own, particularly when it comes to setting rules for speech. Even well-intentioned regulation may quash speech that should be allowed (though perhaps rejected and shouted down), create avenues for political factions to advance narrow partisan or social interests, and change political culture with dangerous, unintended consequences. And, as various state-level proposals in the United States reveal, not all regulatory proposals are well-intentioned.

Having spent a career studying terrorist organizations’ use of the internet and more than five years leading Facebook’s policy work around “dangerous organizations” (defined as terrorist groups, hate organizations, large-scale criminal groups, and the like), I have seen manifestations of online harm up close. Almost a decade before the so-called Islamic State’s abuse of Facebook led the company to hire someone like me, I was tasking cadets in my seminars at the U.S. Military Academy with writing assignments about how hypothetical terrorists might use new social media technologies. Most importantly, I have seen what companies do well in countering harm, what they do poorly, and the trade-offs they face.

While companies and communities of specialists debate those harms and the techniques to counter them, political discourse around these issues is too often superficial. The long overdue moment for social media regulation seems to have arrived with the Digital Services Act in the European Union, a plethora of state-level regulatory proposals in the United States, and renewed interest in Washington, D.C., in adjusting Section 230 of the Communications Decency Act. Especially in the United States, I fear we have not prepared well enough for the trade-offs such efforts imply and the limits of what regulation can reasonably achieve.

As the U.S. moves forward with reforms, well-intentioned policymakers should keep four principles in mind. First, digital harms are an inherently wicked, adversarial problem that will not be resolved perfectly, and perhaps not even satisfactorily, even in a well-designed regulatory environment. This means that regulators must wrestle with the unenviable task of deciding what constitutes good-enough harm mitigation and, thereby, how much digital harm is acceptable. Second, the three core goals of regulation (competitiveness and antitrust, harm mitigation, and privacy) stand in some tension with one another. Rather than pretend that regulation will achieve great success at no cost, a core purpose of regulation should be to assert the prerogative of democratic governments to balance those values rather than leave such judgments to companies. Third, regulation should be constructive, not punitive, which means that there must be ways to measure progress beyond just punishment. Fourth, regulation, including transparency requirements, should operate at the level of “surfaces,” the discrete units that enable user-generated content, if it is to be comprehensive.

These ideas hardly constitute a full-fledged regulatory framework. Taken together, they suggest an extraordinarily difficult and potentially dissatisfying journey for regulators. Real values must be balanced against one another. Adversarial actors will circumvent creative defensive mechanisms incentivized by thoughtful regulation. Reasonable regulatory enforcement will be expensive. But opaque sloganeering about “holding tech companies accountable,” while politically satisfying, does not point clearly to effective policy options. Good policymaking means making difficult trade-offs that fail to fully resolve any problem and is, therefore, less likely to be politically palatable.

For too long, companies have had to face the trade-offs between privacy, harm reduction, and competitiveness alone. Many have publicly stated that they failed to invest sufficiently in resolving acute harms, and the companies have manifestly failed to meet societal expectations. But that does not mean these problems will be easy to resolve, even if regulation incentivizes greater focus. For governments, regulation is an opportunity to address corporate failures and improve corporate efforts. But government regulators are likely to find that replacing the misaligned incentives of corporations with their own judgment does not mean that good solutions will immediately be manifest.

At the same time, trying to solve wicked problems like content governance is politically riskier than taking no action while scowling at social media companies. Inaction is a policy choice of its own, so regulators are already accountable for some of the current morass around content governance. But the political reality is that taking action elicits more response from the media and the public than inaction. For policymakers, taking action means making choices between suboptimal outcomes that will frustrate multiple constituencies. Regulation means responsibility. 

Companies should design products with the expectation that bad actors will abuse them. If there is a “hero-use case,” the paradigmatic usage of a product by a well-meaning user, then there is also a “villain-use case”: core ways that a user with ill intent can abuse the same product. Because there is no way to eliminate this risk of abuse, companies should also mitigate the harms they facilitate. Regulation, in turn, should incentivize companies to build responsibly and mitigate risks widely. But we should be clear-eyed about the limits of these efforts from the vantage points of both companies and regulators. 

Human beings have agency, and some of them use that agency for sinister ends. That presents innumerable risks online, some of which directly manifest in the real world as violence, children being abused, and poor health decisions that result in sickness and death. Human beings are also creative and determined, and some of them use those generally laudable traits for evil. That means they will use the range of available tools to advance nefarious goals and adjust their tactics when those tools are eliminated or made too risky. It is a truism of digital harm: Bad actors are adaptive and resilient. Delete content; they repost it. Delete it again; they play with the details to circumvent enforcement. Delete their account; they make another. Delete their network; they recreate it. Do it again, and maybe they move that network to a less hostile environment online. Regardless of the regulatory incentives, defenders and attackers online will still play a difficult game of cat and mouse.

The inevitability of evil is not an excuse to do nothing about it. This will seem obvious to some and vexing to others, but aggressive regulation of the internet is not going to prevent harm from manifesting online, let alone in the physical world. Regulation can improve the current situation, but outcomes will be imperfect. In policymaking, the inevitability of imperfection is not an argument for inaction. In this case, this means that the regulator’s job is not just to incentivize companies to take more aggressive action; it is also to define when their actions are sufficient, which means determining when they have done enough despite the fact that harms continue to manifest.

Bluntly, defining that standard is going to be politically difficult. It means that companies will be able to comply with law while still inadvertently facilitating some harm. Companies will use their compliance with the regulatory standard to defend themselves when bad things inevitably happen regardless. Of course, regulators can set standards extremely high, but there will be costs to doing so, especially for smaller companies that already strain to comply with new regulatory requirements. 

Internet companies have long sought to balance market incentives with harm mitigation and privacy protection. The goals are not in fundamental tension: companies have a market incentive to safeguard their users and protect their privacy. But even where these principles are not fundamentally opposed, they still clash regularly. Indeed, the corporate incentive to enhance the experience of individual users, and the kinds of changes that might satisfy individual users, is not necessarily the same as the incentive the government has to protect society as a whole. That is especially true of harms that any individual user is unlikely to encounter directly but that the government must guard against writ large.

Similar to corporate policymakers, regulators must triangulate values, although their challenge is to facilitate productive competition while incentivizing platforms to root out harm and protect privacy. Centering competition, safety, and privacy at a societal level is an important shift. So is the basic idea that democratically elected policymakers, rather than profit-seeking private actors, are the right people to make such judgments when the consequences are so far-reaching. The problem is that government policymakers asserting authority via public statement is not the same as institutionalizing those goals in a statute or regulatory declaration. Even with pristine intentions and perfect information, these trade-offs will still be vexing.

Balancing treasured values is always difficult, but it is made even more so when the trade-offs are complex and unclear. The private goal of profit-making is not fundamentally at odds with privacy or social stability, and so corporate incentives often do not fit neatly into the sometimes cartoonish portrayals of social media’s critics. Internet companies do aim to limit abuse and protect privacy as a means of attracting and retaining users. But there are limits to those positive market incentives because trust and safety investments have diminishing returns (especially when measured only by user retention and app usage); product innovation often outpaces the development of governance systems; and popular products widely used for productive purposes often also provide value to actors bent on harm. It is not necessary to present these technologies as inherently, or even predominantly, nefarious in order to make a strong case that regulation is needed.

Additionally, there are sometimes trade-offs between privacy and limiting certain kinds of abuse. A good example comes from the European Union, which has wrestled with this issue in its ePrivacy Directive. Among other provisions, ePrivacy generally prohibits companies from scanning private messages for harms. After much debate, the EU included a carve-out that allows companies to scan for child sexual abuse material. That choice to allow some scanning impinges on privacy but enhances the ability of companies to identify horrible content and root out the most persistent abuse networks. Conversely, the European Union’s decision not to allow a similar carve-out for terrorist propaganda and planning effectively provides digital safe havens for terrorist groups but reduces corporate scanning of private messages in a highly political topic space.
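For readers unfamiliar with what “scanning” means in practice, the simplest version is fingerprint matching against lists of known abusive material. The sketch below is a deliberately simplified illustration, not a description of any company’s system: real deployments rely on perceptual hashing (PhotoDNA-style fingerprints that tolerate small edits) rather than the exact cryptographic hash shown here, and the hash list and function names are invented for this example.

```python
import hashlib

# Much-simplified sketch of hash-based scanning for known abusive material.
# Production systems use perceptual hashes that tolerate small edits; exact
# SHA-256 matching is shown only to illustrate the concept, and the hash list
# below is a placeholder (real lists are maintained by organizations such as NCMEC).
KNOWN_BAD_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def should_flag(attachment: bytes) -> bool:
    """Return True if the attachment matches a known-bad fingerprint."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES

# A match typically routes the content to human review and mandatory reporting
# rather than silent deletion.
if should_flag(b"...attachment bytes..."):
    print("route to review and reporting pipeline")
```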

In the United States, the prospect of social media data being used to prosecute women seeking abortions has increased calls for encrypting messaging applications. Indeed, a primary value of end-to-end encryption is enabling intimate conversations that no one—including the government—should be able to access. At the same time, increased encryption will inevitably provide safe harbor for hate groups, terrorists, and others bent on harm. It is not surprising that the seditious conspiracy charges leveled against the Proud Boys and Oath Keepers rely extensively on data recovered from encrypted, or partially encrypted, messaging applications. 

Some observers will argue that there need not be a trade-off between privacy and abuse prevention because there are techniques to root out violative material that do not involve scanning the content itself. After all, those encrypted chats among Proud Boys and Oath Keepers are not secret any longer; they are evidence in a court of law, and they were surfaced by seizing devices of people in those chats and not via privacy-intrusive scanning of all message traffic. Indeed, the trade-off is not absolute; harm prevention based on metadata analysis and behavioral signals does work. But these techniques require privacy compromises of their own and none fully replace content-scanning mechanisms. Moreover, operational security among dangerous actors will not always be as sloppy as it was among the Proud Boys and the Oath Keepers—and even in that case we do not (and cannot) have a full accounting of encrypted chats that were deleted successfully.

The features of digital platforms are fundamentally dual-use. If a feature—for example, encryption, live video, group hosting abilities, mechanisms to discover and engage users one does not already know in real life, anonymity, algorithmic recommendations—provides protection or utility to one group of people, then that feature can be used by another group as well. The pros of such features may outweigh the cons, but we should not pretend there is no trade-off. There is. 

Likewise, antitrust efforts can cut against efforts to prevent harm. This manifests in all sorts of ways. Bad actors flit across multiple platforms to confuse trust and safety teams, and breaking large platforms into smaller ones will exacerbate that problem. App store standards are easily exploitable gatekeeping mechanisms, but they also help maintain baseline privacy requirements for apps. Limiting the ability of companies to discriminate against out-links fosters competition, but it restricts the ability of platforms to exclude links to services that do little to keep their own platforms secure. Of course, there are ways to finesse regulations and limit the magnitude of these trade-offs. They cannot be eliminated, however, and the ensuing regulatory subtlety is likely to be less emotionally satisfying and more difficult to enforce.

One of the most important questions is whether regulators should apply safety requirements to companies of all sizes or solely to so-called Big Tech. If the purpose is simply to limit acute harm, the requirements must apply across the board, because such harm often manifests on smaller platforms. But such requirements would burden small platforms in the marketplace, and so many regulators favor reducing requirements based on platform size. This is a trade-off, weighing competition against security.

Noting that regulation requires trade-offs is not an argument against regulation. But it is an argument against simplified calls for regulation that suggest fixing social media and the internet is just a matter of “holding Big Tech accountable.” Such rhetoric was perhaps valuable for motivating long-overdue regulatory efforts, but today it obscures the reality that policymakers pursuing good policy have genuinely difficult decisions to make. The polemical security of excoriating Big Tech, a nebulous term, presumably in the hope of generating headlines and easing the passage of regulation constraining tech companies, is increasingly counterproductive. The underlying premise of regulation in this space is that representative officials should make certain hard choices because private actors cannot be trusted to balance public interests appropriately. For that to happen, officials must center the trade-offs, and the public must be educated about those trade-offs and prepared for their inevitability. Framing these problems as a costless exercise in holding Big Tech accountable raises expectations that will not be met. Politicians intent on being policymakers will inevitably confront that tension. Better to lay the groundwork for their hard choices than to set expectations they will never meet.

The purpose of regulation should be constructive, not punitive. Its success should be measured by the positive behavioral changes it produces, not simply by the weight of costs imposed on regulated entities. That sentiment is often lost in the public discussion of social media regulation, where, again, vague calls to hold Big Tech accountable are more common than concrete proposals for how to make social media and the internet better. Of course, incentivizing productive behavior by technology companies does, in part, mean imposing costs for failure. But those disciplinary measures should be corrective, not retaliatory. 

There are many examples of what constructive regulatory outcomes would look like. Many researchers correctly note the importance of data access to measure the impact of digital platforms on society, and of platforms’ efforts to mitigate the negative impacts. This process must cut across platforms of all sizes because harms manifest broadly and because there is no good baseline for measuring whether a platform is meeting expectations without comparison to others. Regulators should require increased transparency regarding general usage, terms of service enforcement, government removal requests, law enforcement requests and proactive platform referrals to law enforcement, and the provision of anonymized raw data feeds to centralized research entities.
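As a concrete, if hypothetical, illustration of the categories just listed, the sketch below shows one way a per-platform disclosure could be structured as data. The class and field names are my own assumptions for illustration, not categories drawn from any existing statute or reporting standard, and the numbers are invented.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions for this essay, not a
# description of any existing regulatory reporting format.
@dataclass
class TransparencyReport:
    platform: str
    reporting_period: str                                            # e.g., "2022-Q3"
    monthly_active_users: int                                        # general usage
    tos_enforcement: dict[str, int] = field(default_factory=dict)    # policy area -> items actioned
    government_removal_requests: int = 0
    law_enforcement_data_requests: int = 0
    proactive_referrals_to_law_enforcement: int = 0
    anonymized_research_feed_provided: bool = False                  # raw data feed to vetted researchers

# Invented example values for a hypothetical platform.
report = TransparencyReport(
    platform="ExamplePlatform",
    reporting_period="2022-Q3",
    monthly_active_users=1_200_000,
    tos_enforcement={"hate_speech": 4_210, "terrorism": 112},
    government_removal_requests=58,
    law_enforcement_data_requests=140,
    proactive_referrals_to_law_enforcement=9,
    anonymized_research_feed_provided=True,
)
print(report)
```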

Regulators should also assess the internal decision-making processes of companies. Many companies have rigorous procedures to assess the privacy implications of new products and features, which is no wonder given stringent privacy laws in Europe and significant privacy-related fines in the United States. Many also have robust trust and safety teams, but for the most part, companies do not require new products to meet a clearly defined safety standard before launch. Regulators should assess success in part by whether companies develop safety reviews and give them the same internal weight as the reviews dedicated to privacy.

Finally, regulators should look to ensure that trust and safety systems are appropriately robust. Companies regularly tout the number of employees or contractors working on trust and safety or the various types of artificial intelligence employed to counter bad behavior. Such information is important, but it often obscures inadequacies in the overall system.

A key risk associated with safety-focused regulation is that it may smother competition. That is why many safety-focused regulatory proposals make progressively more serious demands on companies with more users or more revenue. The fear is that a flat set of requirements will favor large companies with more resources, squashing competition. But a regime of progressively more intense requirements is problematic too: nefarious actors regularly use smaller platforms, and adding new requirements as a function of user growth or revenue creates perverse incentives, not least for a platform hovering just below a regulatory threshold.

At the same time, treating a large, complex platform as a unitary entity is fundamentally flawed. Large platforms may appear to a user as an integrated system, but the various sub-applications and discrete surfaces that compose the platform operate very differently under the hood. They may store data using different schemas and databases and rely on very different internal teams to operate and manage discrete elements. Some sub-applications effectively operate like independent platforms. Those differences mean that a defensive mechanism built for one surface may be totally irrelevant to another, even if they are owned by the same company and are integrated elegantly for the user. Large, complex platforms can veil gaps in their own safety mechanisms by describing the resources and systems they deploy overall, never mind whether those resources are concentrated on specific sub-applications and surfaces while others are left poorly defended.

Regulators should address both issues by structuring oversight and requiring transparency at the level of the surfaces that a platform must defend. For this purpose, a “surface” is any component of a platform that can accept and display user-generated content. On Facebook’s Feed, that includes posts, comments, and profile-level features such as the About section. On Twitter, surfaces include usernames, tweets, and direct messages.

Structuring oversight around surfaces has downsides but also several major benefits. First, it reflects risk. Every sub-application and digital surface is a potential forum for abuse, so companies should describe, generally, their efforts to defend every surface. It is close to meaningless to list all defensive systems employed on a complex application as if that application were a single, unified entity. Doing so papers over the reality that many of those defensive mechanisms will not operate across all surfaces and that those gaps in coverage represent risk. If the purpose of regulation is to incentivize companies to fill gaps and minimize such risk, that oversight must operate more granularly.
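To make the coverage-gap argument concrete, here is a minimal sketch of how a compliance team or regulator might check which surfaces lack any defense at all. The surface names and defensive mechanisms are invented for the purpose; the point is the surface-level accounting, not any real platform’s tooling.

```python
# Hypothetical sketch: the surfaces and defensive mechanisms below are invented
# for illustration and do not describe any real platform.
SURFACES = {"posts", "comments", "profile_about", "usernames", "direct_messages", "live_video"}

# Each defensive mechanism lists the surfaces it actually covers.
DEFENSES = {
    "known_hash_matching": {"posts", "comments"},
    "text_classifier": {"posts", "comments", "profile_about"},
    "user_reporting": {"posts", "comments", "direct_messages"},
}

def uncovered_surfaces(surfaces: set[str], defenses: dict[str, set[str]]) -> set[str]:
    """Return surfaces with no defensive coverage at all: the gaps that
    platform-level reporting tends to obscure."""
    covered = set().union(*defenses.values()) if defenses else set()
    return surfaces - covered

# An impressive platform-level list of defenses still leaves two surfaces
# completely undefended.
print(sorted(uncovered_surfaces(SURFACES, DEFENSES)))  # ['live_video', 'usernames']
```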

Second, the surface approach scales naturally. Smaller platforms will have to describe and explain their processes, but because those processes generally occur on less complex platforms, such disclosures will be less demanding than those for larger platforms. Larger platforms will need to invest significant resources in describing defenses on various surfaces, but once baseline reporting is complete, such disclosures will become simpler. 

Third, it incentivizes companies to build new products thoughtfully and invest in integrated, scalable trust and safety infrastructure. Reporting demands and potential sanctions for failing to defend surfaces will incentivize companies to centralize and fully integrate trust and safety infrastructure. Similarly, companies will have a strong incentive to ensure that new surfaces and applications are built with trust and safety features in mind from launch. This is what regulation should prioritize. The goal of regulation should not be to punish companies: It should be to produce better social outcomes. 

Structuring regulation around surfaces does have risks and costs. It will require building sophisticated, costly oversight mechanisms. It will require defining “surfaces” far more granularly than I have here, which is no small task especially in a world of dynamic, ephemeral content. And it will require significant compliance investments by companies, both larger platforms with many surfaces and smaller ones with fewer resources. Increasingly granular disclosures also create some risk of disclosing information that could be useful to attackers, though this risk can be mitigated by requiring only general information. Moreover, the cost to attackers of probing platform defenses is extraordinarily low, so determined actors are likely to find gaps regardless. 

Structuring regulation around surfaces would also require innovators to slow down and ensure safety mechanisms are built into products before they launch. That is partly the point, but it is also a significant cost in a world where technical innovation is a geopolitical lever. Regulators could reasonably choose not to demand disclosure at this level, effectively choosing to accept increased safety risks and significantly reduced transparency in order to streamline regulatory enforcement and maximize innovation speed. It would be a policy choice with pros and cons. But it is a choice, which means that those regulators, even via inaction, will share some responsibility with platforms for the negative consequences that manifest as a result.

Regulation is necessary, but it is not a panacea. The fundamental, if not absolute, trade-offs between competition, safety, and privacy will not be resolved simply because legislators, rather than private actors, adjudicate among them. Yet regulators should not shirk their responsibility to make a go at it. The broad social consensus is clearly that private entities have failed to appropriately balance these values. From my experience at Facebook, many employees bring a tremendous sense of public responsibility to their efforts to balance these values and defend societal interests. Yet it is entirely fair to conclude that private actors simply cannot do that as well as public ones, especially those that are democratically elected. Policymakers should regulate. 

They must make policy, however, clear-eyed about the difficult trade-offs at play. Smart policymakers should ignore pundits selling fancifully utopian visions of the internet. A network of connected humans is inevitably messy. It can and should be governed better, but the idea that there are simple switches, in the form of limited feature sets or business models, that are singularly responsible for all, or even most, harms associated with the modern digital economy is a fantasy. Such analysis ignores potentially dangerous interactions between all sorts of digital features; facilitates an unwarranted complacency among innovators who build products that do not depend on widely scrutinized features; and ignores the ingenuity, persistence, and agency of bad actors. There are no dragons to slay, only a multitude of lesser monsters. To pretend otherwise is perilous.

The solutions therefore will not be elegant. They will be complex and frustrating, and they will require excruciating trade-offs between important values. Regulators should not run from that reality; it is a key reason they need to make these decisions. A key example is the safety benefit of structuring corporate disclosure around surfaces, rather than platforms or even sub-applications. Doing so would be onerous for platforms, but it creates better incentives than other mechanisms for progressively increasing requirements on larger and more complex platforms, and it is the only mechanism that ensures platforms are really securing all of their vulnerabilities. In general, that’s how regulators should think about success: not by how much punishment they mete out against Big Tech, but by whether tech companies, big and small, adapt to the new incentives in their products and their processes.