In June 1972, Chile’s democratically elected leader, Salvador Allende, hired British cyberneticist Stafford Beer to bring Chile into the computer age.1 Beer proposed “Project Cyberfolk,” a cybernetic system that would further popular participation and democracy by allowing citizens to communicate their feelings directly to the government.2 Beer built a device that would allow citizens to adjust a pointer on a voltmeter-like dial to indicate moods ranging from extreme unhappiness to complete bliss.3 The device would record a citizen’s happiness—ideally during a live television broadcast featuring some proposed new political policy—and electronically send the data directly to the government for real-time aggregation and review.4 Beer theorized that his system would improve public well-being and bring homeostatic stability between government and constituent.5
Beer’s dream would not be realized.6 Despite his optimism, however, it is easy to see how such a device could be misused by the government or partisan groups.7 In particular, the stark data-asymmetry between constituent and government places all the power of data in the hands of the government, without transparency or accountability to the citizens producing the data. Citizens would be unable to know how their data is processed or aggregated, nor would the government be obligated to reveal what a citizen’s data shows compared to historical trends. The data could even be used to identify and persecute political dissidents based on the views borne out in their data. Beer anticipated these problems and designed safeguards into the system to foster visibility and transparency, insisting that the process remain analog so that each citizen’s meter would stay anonymous.8
Today, political consultants, technologists, and entrepreneurs are all helping American politicians effect even larger data-asymmetries by gathering data on citizens through more advanced tools of monitoring and persuasion, and with even fewer safeguards. With these data services,9 campaigns have access to massive electronic databases containing information gleaned and purchased from public and private sources on nearly every voter in the United States.10 The public data consists, in part, of registered-voter lists and records maintained by the states.11 The private data is more emblematic of “Big Data,” encompassing a galaxy of information purchased from data brokers and revealing a seemingly limitless range of consumer habits, including magazine subscription records, credit histories, and even grocery “club card” purchases.12 With all this data, campaigns can use powerful analytic tools to distill myriad disjointed and seemingly innocuous data points into an individualized voter profile that reveals intimate details about a voter’s life and behavior.13 These profiles allow campaigns to craft messages individually tailored to a voter’s attitudes or ideology, as well as to economize campaign resources by focusing only on “persuadables.”14
Most citizens, as they engage in their roles as consumers and voters, do not appreciate the degree to which their data is freely traded in data markets. As a matter of law, when an individual freely discloses their data to a third party, online or offline, they have no reasonable expectation of privacy in that data15 (barring certain types of data protected by federal statute).16 Very few people are aware that their data is being shipped off and aggregated in data warehouses, where it is organized, stored, and analyzed.17 This is partly due to the passive role users play in micro-targeting practices, which for the most part are surreptitious by design. For example, a practice known as “cookie matching” allows marketers to serve advertising to users based on data aggregated by actors not present at the initial transaction that generated the data.18 Given the surreptitious and unexpected nature of micro-targeting practices like cookie matching,19 voters lack the notice necessary to exercise autonomy over their data held in the private databases of political data companies. With more autonomy, voters can minimize the privacy and democracy harms associated with political data practices, such as the data’s capacity to socially engineer voters in unfair or impermissible ways,20 the potential political chilling effect caused by unaccountable, imperceptible, and pervasive surveillance,21 and the power imbalances perpetuated by unregulated “black box” algorithms.22
All of this matters because political campaigns are increasingly interacting with voters based on data and shaping the nature of that interaction based on what the data reveals.23 Much of this data is proprietary and unregulated,24 which prevents voters from knowing what their would-be elected officials know about them or how those officials have used that data to surreptitiously influence and persuade them. Without knowledge of or autonomy over their data, voters are increasingly at the mercy of a “one-way mirror”25 that scrutinizes intimate details about their lives, judges them on that basis, and surreptitiously influences their behavior. Given the essential role the right to vote plays in ensuring our government serves its people, it is necessary to understand the ways in which a loss of informational autonomy can harm voters when they exercise that essential right, and to explore ways to mitigate that harm.
This Note will argue that, when voters lose informational autonomy, democratic harms can result. Part I will provide additional context about political data practices, describe the organizations that track voter data, and examine where that data comes from. Part II will discuss the theoretical underpinnings of informational autonomy and the various ways Big Data political trackers can harm normative conceptions of privacy and democracy. Part III will discuss challenges to reforming political data practices and explain why regulation is necessary. Part IV will examine various federal and state regulatory reforms and provide a legal basis for federal and state regulation.
* J.D., Fordham University School of Law, 2015; B.A., Political Science, University of California Los Angeles, 2010. I would like to thank Prof. Olivier Sylvain for his guidance and support while advising me on this Note, and to Kate Patton, Max Shapnik, Stephen Dixon, and the rest of the IPLJ staff for their hard work and thorough editing. For their support throughout my academic and professional endeavors, thank you to Prof. Robin Lenhardt, Prof. Zephyr Teachout, Prof. Elizabeth Cooper, Dora Galacatos, Hillary Exter, Bonda Lee-Cunningham, Henry Berger, Jerry Goldfeder, Lawrence Mandelker, Daniel Weiner, John Kowal, Julie Ebenstein and Michelle Rupp. Most importantly, thanks mom and dad for loving me and helping make my dreams come true.
Evgeny Morozov, The Planning Machine: Project Cybersyn and the Origins of the Big Data Nation, The New Yorker (Oct. 13, 2014), http://www.newyorker.com/magazine/2014/10/13/planning-machine.↩
“Project Cyberfolk consisted of a relatively simple technological system that would function within a complex social system with the aim of improving its management …. Beer proposed building several [algedonic] meters and using them to conduct experiments on how technology could further popular participation and democracy.” See Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile 81–92 (2011).↩
Morozov, supra note 1.↩
“Beer commissioned several prototype meters and used them in small group experiments. They were never implemented as the form of real-time, adaptive political communication that Beer imagined.” Medina, supra note 2, at 92.↩
“Despite Beer’s good intentions, it is easy to imagine how a government might abuse such a device or how partisan groups might manipulate them to suit their interests.” Id. at 91.↩
“Beer recognized that the meters, like the telephone voting systems already in existence at the time, brought with them the potential for political oppression … [He] insisted that the devices be analog, not digital, which would make it more difficult to identify individual meters and, by extension, individual users.” Id. ↩
See infra Parts I.A–C.↩
Chris Evans, It’s the Autonomy, Stupid: Political Data-Mining and Voter Privacy in the Information Age, 13 Minn. J.L. Sci. & Tech. 867, 867–68 (2012).↩
See infra Part I.C.↩
“An everyday example of data gathering occurs at supermarkets, which use information they obtain from customer loyalty cards to send consumers targeted coupons and advertisements.” Preston N. Thomas, Little Brother’s Big Book: The Case for a Right of Audit in Private Databases, 18 CommLaw Conspectus 155, 158 (2009); see also Daniel Kreiss, Yes We Can (Profile You): A Brief Primer on Campaigns and Political Data, 64 Stan. L. Rev. Online 70, 71 (2012).↩
Charles Duhigg, How Companies Learn Your Secrets, N.Y. Times (Feb. 16, 2012), http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=all.↩
See Kreiss, supra note 12.↩
Daniel J. Solove, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477, 528 (2006) (explaining the third-party doctrine); see also Evans, supra note 10, at 879 (“The parties to a financial transaction are said to equally own the facts to the transaction.”). ↩
See Evans, supra note 10.↩
Helen Nissenbaum, Privacy As Contextual Integrity, 79 Wash. L. Rev. 119, 121 (2004). ↩
Evans, supra note 10, at 879. ↩
See infra Part I.C. ↩
See infra Part II.C. ↩
See infra Part II.C.3. ↩
See, e.g., Ryan Cooper, How Big Data Sucked the Soul Out of Democratic Politics, The Week (Nov. 6, 2014), http://theweek.com/article/index/271411/how-big-data-sucked-the-soul-out-of-democratic-politics.↩
See Frank Pasquale, The Black Box Society 3, 193 (2015).↩
I borrow the term “one-way mirror” from Frank Pasquale. Id. at 9. ↩