By Catherine Powell and Alexandra Dent

The TikTok Trap

In March, the U.S. House of Representatives' Energy and Commerce Committee questioned TikTok CEO Shou Zi Chew in an arduous, five-hour hearing. The hearing came amid nationwide panic about the dangers of TikTok for young people and fear that American data in the hands of a Chinese company could endanger our national security. The tenor of some of the lawmakers' questions aside (e.g., "does TikTok connect to home Wi-Fi?"), the hearing itself captured two important trends surrounding the future of technology and the potential for regulation.

First, there is a pervasive anxiety about the future of technology in the United States, and absent any promise of widespread, substantive data protections, that anxiety has been channeled into a wider moral panic over the state of "youth." Simultaneously, there is an obsession with the perceived geopolitical threat of China, a focus that distracts from, and even undermines, the larger effort to protect our data and privacy.

The response to TikTok has become a prime example of these trends at work. There is no doubt that TikTok poses real risks to young people's mental health and that the app collects large amounts of personal user data that could threaten our national security. However, experts on technology and privacy, including Julia Angwin, whom I hosted for a recent roundtable, have flagged again and again that these risks are not unique to TikTok among social media and tech giants. In May, the U.S. Surgeon General issued an advisory warning about the risks of social media use to youth mental health, urging tech companies to better enforce policies for adolescents and encouraging lawmakers to "strengthen protections to ensure greater safety for children interacting with all social media platforms." But absent any substantial response at the national level, TikTok has become a convenient scapegoat, with panic surrounding the app inciting a hodgepodge of state, local, and even university-level bans and restrictions that many argue miss the mark in addressing mental health and privacy concerns and could violate First Amendment rights.

In her remarks at CFR -- and in a recent piece in the New York Times -- Angwin also notes that the focus on TikTok's threat to national security obscures broader data privacy concerns. Angwin writes, "Banning TikTok won't keep us safe." After all, "[i]f China wants to obtain data about U.S. residents, it can still buy it from one of the many unregulated data brokers that sell granular information about all of us." In our conversation, she pointed out that there is far more documented evidence of algorithmic manipulation and amplification on platforms like Facebook, as well as high-profile examples of employees at U.S. tech companies -- such as Google, Twitter, and Microsoft -- misusing user data or spying on dissidents and others: the very charges leveled against TikTok. As she laid out in another recent piece for the Times, Angwin has instead called for a broader set of reforms, such as "algorithmic choice," in which users play a greater role in curating their social media feeds.

Artificial intelligence's entrance into public discourse makes clear that we have not learned from these mistakes. While early conversations about the emergent technology focused on its potential risks to jobs and its capacity for widespread mis- and disinformation, regulatory efforts have devolved into debates over plagiarism on college campuses, "the death of the college essay," and the "new arms race" between the United States and China. Again, while these impacts certainly warrant concern, and their own regulatory frameworks, the focus on them sidelines the far-reaching employment and misinformation risks that could affect the larger population.

The ongoing Writers Guild of America (WGA) strike, in which part of the screenwriters' demands has focused on protections from the use of generative AI, should be a warning sign of the economic risks of continuing to kick this problem down the road. The strike is already predicted to cost California's economy over $3 billion. More broadly, with a recent Pew Research Center survey finding that nearly one-fifth of U.S. workers have "high-exposure" jobs, it is not difficult to see how, without more regulatory focus on the risks generative AI poses to employment, strikes and shutdowns could spread across the labor force.

This all comes as the European Union (EU) takes a markedly different approach to regulation, having recently adopted the Digital Services Act (DSA), which is designed to hold internet platforms more accountable for their content and to mitigate "systemic risks." The law requires large platforms to file transparency reports, mandates access for external scrutiny, and restricts certain types of targeted advertising. Several of the designated Very Large Online Platforms (VLOPs), which are subject to additional scrutiny, have already struggled in simulated "stress tests" to comply with the new regulations, which took effect August 25. A number of VLOPs, including Facebook and TikTok, reportedly received warnings that their DSA compliance policies needed "more work" following those tests.

Whether the United States follows the EU's lead or develops other regulatory approaches, these issues remain on Congress's agenda. In late July, the Senate Commerce Committee advanced both the Children and Teens' Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act amid pushback from civil liberties groups and privacy advocates. It remains unclear whether there is enough momentum for such regulatory efforts to extend to the broader population, but as European Commissioner for Internal Market Thierry Breton argued while in Silicon Valley in July, "Technology has been 'stress testing' our society, it is now time to turn the tables."

 

Courtesy of the Council on Foreign Relations.

 
