Should the Government Regulate Social Media Advertising?

Facebook, the social media giant, is often in the news for reasons that are unrelated to its stated mission of empowering people “to build community and bring the world closer together.” As laudable as that goal might be, recent news reports reveal that Facebook and other social media outlets have been used as a wedge to drive people apart. Whether the government should do something about the abuse of social media is also a divisive issue.

Social Media Advertising

Facebook’s financial success (it generated advertising revenues of $9.3 billion in a recent quarter) is based in part on its ability to target an advertiser’s message to a narrow demographic of users. Advertisements that appear on your Facebook page are probably different from those that appear when your neighbor opens Facebook, because the ads are selected based on the user’s characteristics and interests. Facebook’s Ad Manager tool lets advertisers target users by age, gender, location, and a number of other characteristics.

Facebook gleans the characteristics and interests of its users by tracking their browsing history. Facebook’s Ad Manager will classify you as a dog lover or an antique collector based on the websites you visit. If you visit a website dedicated to an upcoming festival in Phoenix, your Facebook page might start displaying ads for Phoenix hotels. Many users find that to be a little spooky, if not intrusive, although younger users tend to be inured to living in a digitally transparent world and are less likely to care that their digital footprints are being followed by large corporations.
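To make the mechanics concrete, the sketch below is a deliberately simplified, hypothetical model of how interest-based targeting of this general kind can work: categories drawn from a browsing history are mapped to inferred interests, and ads whose targeting criteria overlap those interests are selected. The category names, thresholds, and data structures are illustrative assumptions, not Facebook’s actual Ad Manager or any real API.

```python
# Illustrative sketch only: a toy model of interest-based ad targeting.
# All category names, thresholds, and data structures are hypothetical.

from collections import Counter

# Hypothetical mapping from visited-site categories to inferred interests.
SITE_CATEGORY_TO_INTEREST = {
    "pet_supplies": "dog lover",
    "antiques": "antique collector",
    "phoenix_festival": "travel: Phoenix",
}

def infer_interests(browsing_history, min_visits=2):
    """Guess a user's interests from the categories of sites they visit."""
    counts = Counter(
        SITE_CATEGORY_TO_INTEREST[cat]
        for cat in browsing_history
        if cat in SITE_CATEGORY_TO_INTEREST
    )
    return {interest for interest, n in counts.items() if n >= min_visits}

def select_ads(user_interests, ad_inventory):
    """Return ads whose targeting criteria overlap the user's inferred interests."""
    return [ad for ad in ad_inventory if ad["target_interests"] & user_interests]

if __name__ == "__main__":
    history = ["antiques", "antiques", "phoenix_festival", "news", "phoenix_festival"]
    ads = [
        {"name": "Phoenix hotel deal", "target_interests": {"travel: Phoenix"}},
        {"name": "Dog food coupon", "target_interests": {"dog lover"}},
    ]
    print(select_ads(infer_interests(history), ads))  # only the Phoenix hotel ad matches
```

The point of the toy example is only that the matching is mechanical: once a profile of inferred traits exists, any advertiser-supplied criterion, benign or not, can be matched against it without human review.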

Political opinions are among the characteristics that Facebook tracks. If you “like” conservative pages more often than you “like” liberal pages, Facebook assumes that you are a conservative and pitches advertising that might appeal to a conservative viewpoint. Facebook’s Ad Manager also gathers the information you supply in your Facebook profile, including your political preference.

Ad Manager is an automated process; Facebook employees do not screen searches for specified characteristics. An investigation by journalists discovered that advertisers could search for white supremacists or people with anti-Semitic attitudes. That embarrassing revelation caused Facebook to remove certain characteristics from the list that advertisers can target and to reaffirm an advertising policy that prohibits “attacking people based on their protected characteristics, including religion, and … discriminating against people based on religion and other attributes.”

Facebook is not alone in its use of automated advertising. Journalists discovered that Google’s keyword advertising can also enable malicious advertising. Like Facebook, Google responded to that revelation by changing the responses it gives to certain keyword searches, citing its policy against derogatory speech.

Russian Advertising on Social Media

Facebook recently admitted to Congress that it sold ads totaling $100,000 to a Russian “troll farm” during the U.S. presidential campaign. The ads targeted voters as part of a campaign to influence election results.

In some cases, social media trolls “spread fake news intended to influence public opinion.” Some of the ads targeted Facebook users “who had expressed interest in subjects … such as the LGBT community, African-American social justice issues, the Second Amendment and immigration.” Facebook acknowledges that 126 million people saw election-related disinformation on Facebook.

The campaign was consistent with a larger “influence campaign” that the intelligence community linked directly to Russian President Vladimir Putin. An intelligence report concluded that “Russia's goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.”

Facebook is not the only social media site that Russians used to influence the presidential election. Twitter disclosed that it discovered about 200 accounts that were “linked to Russian interference in the 2016 election, a further sign that Moscow secretly employed multiple social media platforms to influence American voters.”

The Senate Intelligence Committee recently reported that Russian operatives are still active and determined to “sow chaos” in American elections. That report has caused some observers to ask whether the government should protect American institutions by regulating social media advertising.

Social Media Regulation

The Federal Election Campaign Act prohibits foreign nationals who don’t hold a green card from contributing to American political campaigns, or from making expenditures or disbursements that benefit candidates in an election. Foreign nationals can spend money to speak out about issues, as long as their speech is not “closely tied to the voting process.”

The law is difficult to enforce against individuals residing outside the boundaries of the United States. In many cases, enforcing the law against Americans who facilitated foreign interference with American elections is also difficult, because the line between legal and illegal conduct is not clearly drawn. Advertising that targets a specific group by advocating white nationalism might affect an election by motivating that group to vote for a specific candidate, but if the advertising mentions no candidate, it may constitute permissible speech about an issue rather than a prohibited attempt to influence voting.

At a Senate hearing on November 1, 2017, executives from Facebook, Google, and Twitter were asked about the role their companies played in voter suppression and why they ignored the fact that political advertising was being purchased with rubles. Their answers did not satisfy Sen. Mark R. Warner, a ranking member of the Senate Intelligence Committee. Warner had a successful career in the tech industry before entering politics and is generally sympathetic to the interests of Silicon Valley, but he has shown little patience with social media providers that take a “hands off” approach to election interference.

Free Speech and Regulation of Social Media

Warner has introduced legislation, the Honest Ads Act, that would require internet companies to disclose who purchases online political ads. The bill has been co-sponsored by Senators Amy Klobuchar and John McCain. It would not prohibit the expression of any opinion, but it would add transparency by requiring purchasers to disclose their identities, the price they paid, and the groups they targeted.
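As a rough illustration of how small that disclosure burden is in practice, the sketch below defines a minimal record capturing the three elements mentioned above: who bought the ad, what they paid, and which audience was targeted. The field names and structure are assumptions for illustration only; they are not drawn from the text of the bill.

```python
# Illustrative sketch only: a minimal, hypothetical disclosure record for an
# online political ad. Field names are assumptions, not language from the bill.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PoliticalAdDisclosure:
    purchaser: str                 # identity of the person or entity that paid for the ad
    amount_paid_usd: float         # price paid for the placement
    targeted_groups: List[str] = field(default_factory=list)  # audience segments targeted

example = PoliticalAdDisclosure(
    purchaser="Example Advocacy Group",
    amount_paid_usd=5000.00,
    targeted_groups=["voters aged 18-34 in swing counties"],
)
print(example)
```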

The regulation of speech is always a sensitive and difficult issue. Social media platforms have been likened to the modern version of the town square, where political pamphlets are distributed in the hope of influencing public opinion. Forcing Facebook to decline political advertising could raise serious First Amendment issues, since political speech is at the heart of the First Amendment.

But there is a difference between people who post opinions on their own Facebook pages (which are subject to Facebook’s rules but probably not to government regulation) and paid advertising, which is entitled to the lesser protections that apply to commercial speech. The Honest Ads Act arguably increases the information that is available to voters rather than suppressing speech. For that reason, it would likely withstand a constitutional challenge. Whether elected representatives can withstand the Silicon Valley lobbyists who might oppose the legislation is another question.
