Social Media Manipulation: Catfish and Social Bots
In October, excited social media influencers walked the red carpet of “Palessi,” a posh, up-and-coming store.1 They sampled the wares and, in the first few hours, bought a few thousand dollars’ worth of shoes, with one influencer paying $640 for a single pair.2 In fact, unbeknownst to the partygoers, they were being manipulated by Payless, an affordably priced shoe company, as part of a social experiment.3 Payless used the influencers to prove that its low-priced shoes were comparable to higher-priced brands.4
Although there was no harm, no foul, since the influencers received their money back and got free shoes, Payless’s experiment highlights the malleability of public perception. It also exposes the dangerous practice of “catfishing” – creating a fictitious online identity to deceive a third party.5
Although Payless pretended to be a luxury brand for a benign purpose, sinister characters can harness social media in more detrimental ways. For example, a study conducted on behalf of the military found that, with just $60 and open-source data – like Facebook profiles – soldiers could be catfished into acting contrary to military orders.6
The detrimental effects of catfishing can be amplified by software-controlled profiles known as social bots.7 Instead of creating just one fictitious persona, an actor can use software to generate thousands of them. These fake accounts mimic humans and interact with legitimate users to shape public opinion.8 Social bots have been found to spread misinformation on many important topics, like vaccinations and politics.9
States have been struggling to keep pace with technological developments and to adequately respond to fake social media activity.10 For the first time, the New York Attorney General’s office has found that creating fake social media posts and comments to generate revenue constitutes illegal deception and impersonation.11 California recently passed a bill that will require companies to disclose whether they are using a bot to communicate with the public.12 The bill will go into effect on July 1, 2019.13 However, critics point out that the new bill raises First Amendment problems, since it broadly regulates speech on the internet.14
Companies have been taking matters into their own hands,15 and Americans tend to agree with this approach. A poll by the Knight Foundation found that 46% of Americans believe it is the responsibility of tech companies to regulate misinformation, while only 16% believe it is the government’s responsibility.16 Pinterest has taken an extreme approach to curbing misinformation: it blocks all searches related to vaccinations to prevent fearmongering.17 YouTube announced it would remove videos with “borderline content” that is detrimentally misleading.18 Although letting companies tackle misinformation directly would circumvent the First Amendment issues, it doesn’t completely solve the misinformation problem.
First, who makes the ultimate determination of the factual veracity of information? We don’t live in a binary world; there exists a world of grey between information and misinformation. Second, how can we ensure that tech companies can be trusted to properly regulate information?
Consumers want transparency but tech companies are profit-oriented, and these two goals directly conflict. The Cambridge Analytica scandal in 2018,19 and Facebook’s settlement agreement with the Federal Trade Commission in 2011,20 both illustrate that tech companies will misuse information about their consumers to turn a profit. It’s not hard to imagine tech companies using misinformation as a pretext for regulating content in a manner that further increases profits.
Another approach to combating misinformation that wouldn’t implicate the First Amendment, and that wouldn’t rely on tech companies, is boosting media literacy. After a Stanford University study found that 82% of middle school students were unable to differentiate news content from ads, California passed a bill to encourage media literacy.21 Media literacy refers to the process of critically evaluating messages and information produced by the media.22 Critics question whether placing that responsibility on individuals is reasonable when personal information is increasingly easy to access and the source of a given message isn’t always readily apparent.23
Social media has increased the amount of information on the web, which in turn has allowed devious characters to use misinformation to manipulate the public. Sifting through the wild west of truth and falsehood can be difficult even for critical consumers. Lawmakers eager to keep the law relevant as technology rapidly develops must be wary of crossing the line into censorship or creating new hazards.
Amy Lieu, Payless Fools ‘Influencers’, Fox News (Nov. 30, 2018), https://www.foxnews.com/lifestyle/fake-payless-luxury-shoe-brand-palessi-fooled-buyers-in-chic-pop-up-event. [https://perma.cc/5ZZB-N8TJ]↩
Zachary Heck, “Catfish” Added to the Sea of Litigation, ABA (Aug. 9, 2017), https://www.americanbar.org/groups/young_lawyers/publications/tyl/topics/poplaw/catfish_added_the_sea_litigation/. [https://perma.cc/52PV-WLBF]↩
Issie Lapowsky, NATO Group Catfished Soldiers to Prove a Point About Privacy, Wired (Feb. 18, 2019, 7:00 AM), https://www.wired.com/story/nato-stratcom-catfished-soldiers-social-media/. [https://perma.cc/4UJW-RT2E]↩
Chengcheng Shao et al., The Spread of Low-Credibility Content by Social Bots, Nature Comm. 2 (2018), https://www.nature.com/articles/s41467-018-06930-7.pdf. [https://perma.cc/G3NP-GJ99]↩
Athena Jones, First on CNN: NY Attorney General Targets Fake Social Media Activity, CNN (Jan. 30, 2019, 4:03 PM), https://www.cnn.com/2019/01/30/tech/new-york-attorney-general-social-media/. [https://perma.cc/9799-JWEF]↩
Dave Gershgorn, A California Law Now Means Chatbots Have to Disclose They’re Not Human, Quartz (Oct. 3, 2018), https://qz.com/1409350/a-new-law-means-californias-bots-have-to-disclose-theyre-not-human/. [https://perma.cc/2Y4F-JKQ5]↩
Madeline Lamo, Regulating Bots on Social Media Is Easier Said than Done, Slate (Aug. 9, 2018, 9:07 AM), https://slate.com/technology/2018/08/to-regulate-bots-we-have-to-define-them.html. [https://perma.cc/CT9J-T8AV]↩
Taylor Telford, Pinterest Is Blocking Search Results About Vaccines to Protect Users from Misinformation, Wash. Post (Feb. 21, 2019, 12:14 PM), https://www.washingtonpost.com/business/2019/02/21/pinterest-is-blocking-all-vaccine-related-searches-all-or-nothing-approach-policing-health-misinformation/?noredirect=on&utm_term=.67d84907b2c3. [https://perma.cc/52YL-YDB8]↩
Sam Gill, Should Platforms Be Regulated? A New Survey Says Yes, Knight Foundation (Aug. 15, 2018), https://knightfoundation.org/articles/should-platforms-be-regulated-a-new-survey-says-yes. [https://perma.cc/L8TZ-SCFV]↩
Telford, supra note 12.↩
Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. Times (Apr. 4, 2018), https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html. [https://perma.cc/9B3A-7YKE]↩
Seth Fiegerman, Facebook Could Be in Hot Water with the FTC – Again, CNN (Dec. 19, 2018, 4:49 PM), https://www.cnn.com/2018/12/19/tech/facebook-ftc-consent-decree/index.html. [https://perma.cc/HSE3-Z87C]↩
Susan Minichiello, California Now Has a Law to Bolster Media Literacy in Schools, Press Democrat (Sept. 18, 2018), https://www.pressdemocrat.com/news/8754461-181/california-now-has-a-law?sba=AAS. [https://perma.cc/XD3S-DRBV]↩
Monica Bulger & Patrick Davison, The Promises, Challenges, and Futures of Media Literacy, Data & Soc’y Res. Inst. 7 (Feb. 2018), https://datasociety.net/pubs/oh/DataAndSociety_Media_Literacy_2018.pdf. [https://perma.cc/8BPH-B2GM]↩
Id. at 17.↩