
Stop the Bleeding: Discontinuing Governments’ AI Use


I. INTRODUCTION

As technology develops, it creeps further into our lives, and although these advancements generally benefit society, they can also cause great detriment. Specifically, governments’1 use of Artificial Intelligence (“AI”) in broad decision-making2 processes presents a source of significant harm to populations already marginalized by systemic oppression in the United States. Governments’ AI usage must be restricted to decision-making pertaining only to non-social issues. Doing so would help mitigate further oppression of marginalized populations3 and would also help prevent the government from infringing on citizens’ privacy.

This Essay first defines AI in readily understandable terms and provides background information and examples regarding its general function. Second, the Essay discusses examples of governments’ prolific and expanding AI usage in vital decision-making and how it adversely affects already marginalized populations. Third, the Essay addresses the negative privacy implications of governments’ use of AI, focusing on the need to gather and retain vast amounts of private data for AI systems to function. Fourth, the Essay examines potential solutions for this growing issue. Finally, the Essay concludes by arguing for restricting governmental AI usage to non-social decisions4 and disallowing its application to any decisions associated with social and interpersonal issues.5

II. BACKGROUND

A. ARTIFICIAL INTELLIGENCE DEFINED

For brevity and clarity, this Essay will refer to the “pragmatic” definition of AI6 and avoid delving too deeply into AI’s technical aspects. AI, defined generally, refers to a computerized system that exhibits behavior commonly thought of as requiring intelligence.7 Additionally, AI systems use human-like thought processes that enable them to make their own decisions.8 Programmers9 use datasets to train and teach AI systems; through this training, the AI learns to recognize patterns and similarities within a given dataset and to create derivative outputs.10 While it was once difficult to develop and train one’s own AI software, it is now easy to do so using various (sometimes free) tools on the internet.11 Eventually, and with enough data, AI systems can learn to perform many tasks commonly done by humans.12
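
To make the idea of “training on a dataset” concrete, the sketch below, which assumes Python and the scikit-learn library, shows a model learning to recognize handwritten digits purely from labeled examples rather than hand-written rules; the dataset and classifier are illustrative choices, not a reference to any system discussed in this Essay.

```python
# A minimal sketch of dataset-driven "training": the system infers patterns
# from labeled examples rather than following explicitly programmed rules.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Fit a small neural network on the training examples.
model = MLPClassifier(max_iter=1000, random_state=0).fit(X_train, y_train)

# The trained model now labels digits it has never seen before.
print(model.score(X_test, y_test))  # typically well above 0.9
```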

Presently, governments throughout the U.S. (and the world) use AI systems to make critical decisions.13 These systems are responsible for many vital governmental choices, including selecting restaurants for inspection, predicting where to conduct city-wide rodent control, deciding where to send building and fire inspectors, and determining how the USPS sorts mail.14 While these may seem like innocuous fields in which AI decisions reign supreme, insidious AI usage lurks in the background.15 Among other uses, AI also selects passengers for airport searches, evaluates loan applications, and makes other crucial governmental decisions with far-reaching consequences.16

B. AI’S DISCRIMINATION AND BIAS PROBLEM

An AI system is only as good as the data on which it bases its decisions; often, this data is biased and discriminatory.17 Powered by tainted data, these systems show substantial indifference to problematic information; they incorporate bad data into their decisions, obscuring biases and hiding the discriminatory tendencies embedded within their programming.18 This issue of input bias is inherent in the source data of AI systems employed by governments and corporations alike.19 Soon after programming, these biases become “hardwired” into the respective platforms, and to further complicate matters, those using these AI systems often attempt to shield them from outside scrutiny.20
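
To illustrate how tainted inputs become tainted outputs, consider the following sketch, which assumes Python with NumPy and scikit-learn and uses entirely synthetic data; the variable names and the size of the built-in penalty are invented for illustration and do not describe any actual government or corporate system.

```python
# A hypothetical sketch of "input bias": a model trained on biased historical
# decisions reproduces that bias, even though no one programmed it to discriminate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

qualification = rng.normal(size=n)   # a legitimate, job- or credit-relevant score
group = rng.integers(0, 2, size=n)   # 0 = majority group, 1 = marginalized group

# Historical decisions were biased: group 1 was approved less often
# than group 0 at identical qualification levels.
past_approval = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on those biased outcomes, protected attribute included.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_approval)

# Two identical applicants who differ only in group membership:
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])  # approval probability drops for group 1
```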

Many have proposed potential solutions for the issue of AI discrimination.21 However, most recommendations will meet strong resistance and likely fail to pierce the “shroud of secrecy” surrounding the “black boxes” that envelop AI platforms.22 Some proposals suggest remedying biases by imposing a burden upon entities using AI systems, requiring public disclosures and third-party audits.23 However, the government is often prone to avoiding such scrutiny, and without legislative intervention, these solutions will likely fail.24 Ultimately, the best option may be preventing governments from employing AI systems in decision-making processes, at least to the degree that those processes induce social consequences.25

III. DISCUSSION

A. NEGATIVE CONSEQUENCES OF AI BIASES IN EVERYDAY LIFE

AI’s inherent bias and propensity for discrimination permeate nearly every circumstance in which it operates;26 this section focuses on just two of these instances.

Every day, more employers utilize AI hiring tools that decide which candidates best suit their open positions.27 However, because these tools rely on vast and biased datasets when making decisions, they often penalize protected classes in the process.28 More often than not, these applications systemically disadvantage Black individuals and women, even when they are more than suitable candidates.29 To compound matters, AI’s complexities make it difficult for victims to demonstrate discriminatory intent (and other Title VII elements30), practically rendering current protections against hiring discrimination obsolete.31 While some suggest that imposing transparency requirements upon AI users or forcing companies to confront their algorithms could solve the issue,32 it is unlikely that this will successfully mitigate discriminatory data’s undesirable consequences in the near future, at least until users purge their AI systems of the tainted data.
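
Even without visibility into an algorithm’s internals, discriminatory outcomes can sometimes be surfaced from hiring results alone. The sketch below, assuming Python and entirely invented numbers, compares selection rates across two applicant groups against the EEOC’s “four-fifths” guideline commonly used in disparate-impact analysis; it illustrates the kind of statistical showing a plaintiff might assemble, not the full set of Title VII elements discussed above.

```python
# A hypothetical sketch: measuring adverse impact from outcomes, without
# needing access to the hiring tool's internal logic. Numbers are invented.
hired = {"group_a": 120, "group_b": 45}      # applicants the AI tool selected
applied = {"group_a": 400, "group_b": 300}   # total applicants per group

selection_rates = {g: hired[g] / applied[g] for g in hired}
impact_ratio = min(selection_rates.values()) / max(selection_rates.values())

print(selection_rates)  # {'group_a': 0.3, 'group_b': 0.15}
print(impact_ratio)     # 0.5, below the 0.8 benchmark, flagging potential adverse impact
```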

Another example of AI’s inherent biases and resultant social ramifications is racist profiling on the internet. Since 2000, “weblining” has become a scourge of the internet.33 Companies were (and still are) using AI algorithms and bots to discriminate against minorities and people with disabilities.34 In defense of their racially biased algorithms, companies argued that they were not targeting people based on their race but rather on their internet usage; once again, bad data is the culprit.35 While racism plagues the entire world, countries with more profound “racial cleavages” (like the U.S.) are more likely to collect and operate with racist and biased datasets.36 This racism inherent in the U.S.’s majority populations leads only to skewed AI decision-making outputs and further embeds racism, discrimination, and bias into society’s very framework.37, 38

B. GOVERNMENTS’ AI USAGE

While AI’s discriminatory applications have met some resistance throughout the corporate world,39 governments’ AI use remains a daunting issue.40 For example, recent litigation revolved around the NYPD’s use of AI and algorithms that attempt to predict where crime is likely to occur.41 In several instances, plaintiffs sued New York government agencies to compel disclosure of their AI algorithms and shed light on innate biases and sources of discrimination.42 When governments use AI systems in their decision-making, they effectively allow innately biased and discriminatory processes to dictate their decisions.43 Compounding the issue, AI systems must utilize vast datasets simply to function and make predictions; thus, government agencies must retain citizens’ personal data indefinitely.44

C. PRIVACY CONCERNS AND GOVERNMENTS’ DATA COLLECTION

Privacy concerns and AI technologies go hand in hand, particularly where the government is involved.45 People are often unaware of the “sheer magnitude of data that others possess about them.”46 Worsening matters, AI technologies are developing too quickly for Congress to keep pace, and the prospect of an adequate regulatory scheme remains remote.47 While it may once have been laborious or cumbersome for the government to collect and retain vast amounts of its citizens’ data, this is no longer the case.48 Before the 21st century, the government struggled to collect and retain its citizens’ information efficiently and cost-effectively (at a rate that would be beneficial to AI systems); as technology advanced, that all changed.49

There is presently a massive imbalance between individual privacy and government access to personal data in the U.S.50 Reports show that in recent years the U.S. government, through its surveillance programs, tapped into the servers of leading ISPs in the country to access, extract, and retain data.51 Furthermore, when some began arguing that there is a greater need for governmental data collection and retention than ever before in history, privacy took a backseat to the government’s “insatiable appetite” for private data.52 This infringement was born out of a great source of pressure for government data collection: the fear of homeland terrorist activity.53

Capitulating to their critics, Congress and the Executive Branch eventually expanded the government’s authority to collect, retain, and synthesize citizens’ intimate data, leading to its use for a variety of objectives separate from the original purpose of any such authorizations.54 Because using this data alongside AI systems helps facilitate governmental duties, some argue that retaining it is necessary, but the absence of any legal regime monitoring the government’s data mining intensifies the risk of misuse and rampant privacy infringement.55, 56

D. INSTITUTIONALIZING AI FURTHER ENTRENCHES BIASES

Aside from the issues of privacy infringement, governmental use of AI results in decisions that are not only skewed by racist and prejudiced data but that also further embed systemic biases into an already broken system.57 To this effect, this Essay posits a simple argument: (1) governments are increasing their reliance on AI systems for decision-making,58 (2) most AI systems are replete with deep-rooted racism, discrimination, and bias,59 and (3) incorporating these AI systems into the way government functions and processes decisions only further entrenches those negative attributes.

IV. POTENTIAL SOLUTIONS

Numerous legal professionals and scholars have offered potential solutions for the issue of AI’s innate biases and tainted datasets. However, many proposed solutions either require drastic changes to current legal regimes or are simply unlikely candidates for implementation. This section presents and examines some of these solutions to show that, ultimately, the best available option is preventing governmental AI use for social-issue decision-making altogether.

A. THE CONFRONTATION CLAUSE

The first proposed solution argues that the Confrontation Clause of the Sixth Amendment applies to governments’ dataset transfers from private corporations.60 While this theory may mitigate the harms of governmental privacy infringement, it does not address the underlying issue of biased AI decision-making. Under this theory, any transfer of an individual’s data to the government constitutes a testimonial statement against that person.61 Accordingly, the Confrontation Clause (if expanded, as it has been by some courts) would limit the government’s ability to obtain its citizens’ data.62 This solution posits that these limitations are flexible enough to prevent governmental misuse of personal data while simultaneously allowing data use in emergencies.63 However, this Essay rejects this solution because it does not consider the (potentially more severe) issue of AI’s skewed outputs resulting from inherently flawed datasets.

B. DATA MINING: A NEW FRAMEWORK

A second proposed solution suggests creating an entirely new data mining framework that would oversee the way the government obtains, stores, and utilizes personal data.64 This solution further suggests that any such program must contain audit tools to ensure compliance.65 An essential piece of this framework would be the opportunity for data correction and changes to machine learning that may help prevent “inevitable” mistakes produced by AI systems.66 This Essay rejects this solution as well. The proposal points to a need for “some form of judicial authorization” for data mining systems.67 Still, it fails to consider that, while government data mining is an issue, the data itself is inherently problematic.

C. THE AI DATA TRANSPARENCY MODEL

This third solution suggests an innovative model in which users must train AI systems to ensure compliance with relevant regulations and societal expectations.68 This proposal recommends establishing third-party (objective) auditors who would evaluate any data with which AI functions, effectively confronting the issue at its source.69 These auditors must examine and assess any data accessible to an AI system and verify that its use does not conflict with existing legal rules (mainly discrimination and bias guidelines).70 The driving force underlying this solution is the idea that discriminatory and privacy-infringing datasets “reduce the likelihood that AI systems will produce good outcomes.”71 Accordingly, by facilitating this framework, “the likelihood of adverse outcomes” will decrease.72 Additionally, this proposal requires that AI users “play along” and submit to audits of their datasets before they are exposed to any AI systems.73 This solution, however, does not account for governments’ AI use74 and the challenges involved in imposing any sort of limitations upon governmental conduct.75
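
As a rough illustration of what such a data audit might involve in practice, the sketch below, which assumes Python with pandas and uses an invented table, checks whether favorable outcomes in a training dataset are distributed evenly across a protected attribute before the data ever reaches an AI system; the function name, column names, and threshold are hypothetical choices, not part of the cited proposal.

```python
# A hypothetical pre-training audit: inspect the dataset itself, rather than
# the AI system's outputs, and flag it before it is used for training.
import pandas as pd

def passes_outcome_parity_audit(df: pd.DataFrame, protected: str, outcome: str,
                                threshold: float = 0.8) -> bool:
    """Return False if any group's favorable-outcome rate falls below
    `threshold` times the best-off group's rate."""
    rates = df.groupby(protected)[outcome].mean()
    return bool(rates.min() / rates.max() >= threshold)

# Invented historical records an agency might submit to a third-party auditor.
records = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})
print(passes_outcome_parity_audit(records, "group", "approved"))  # False: flag for review
```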

V. CONCLUSION

Ultimately, legislative limitations upon governments’ AI decision-making systems, relegating them to purely non-social issues, are the only viable option. While AI systems provide opportunities for innovation and efficiency, their decisions are tainted and lack reasonable consideration for fundamental social values.76 This Essay examined AI’s functions, its inherent bias and discrimination problems, and governments’ use of these systems. Many have proposed solutions for reducing undesirable AI output, but none resolve the issue in a readily implementable manner. Therefore, governments should not utilize AI technologies for any decision-making that has social ramifications.77 Continuing to use AI in such a manner would further entrench the biases that continuously plague our country into our governments’ future decisions, perpetuating and institutionalizing discrimination against already marginalized populations ad infinitum.


  1. Including federal, state, and local governments.

  2. See Kristin N. Johnson, Automating the Risk of Bias, 87 GEO. WASH. L. REV. 1214, 1215 (2019) (discussing governments’ usage of AI in decision-making pertaining to the administration of criminal justice, healthcare, employment, financial services, and access to housing and benefits).

  3. Allowing and using biased AI systems to make social decisions will only further entrench these biases. Marginalized communities already suffer enough from immense, prolific, and systemic biases and racism in the U.S. See Kevin E. Jason, Dismantling the Pillars of White Supremacy: Obstacles in Eliminating Disparities and Achieving Racial Justice, 23 CUNY L. REV. 139, 142 (2020); see also Shlomit Yanisky-Ravid & Cynthia Martens, From the Myth of Babel to Google Translate: Confronting Malicious Use of Artificial Intelligence—Copyright and Algorithmic Biases in Online Translation Systems, 43 SEATTLE U. L. REV. 99, 129 (2019) (discussing how current AI training practices result in discriminatory outcomes with regard to Google Translate outputs, and noting that when AI training data is flawed, biased results are inevitable and perpetuate society’s discriminatory attitudes).

  4. Such as where and when to fix potholes, develop infrastructure, etc., including legislation that will draw and define a line between different types of decisions.

  5. Much of this Essay focuses on published journal articles and materials pertaining to the topics at hand, primarily because there has not been extensive litigation regarding AI systems and discrimination. See Shlomit Yanisky-Ravid & Sean K. Hallisey, “Equality and Privacy by Design”: A New Model of Artificial Intelligence Data Transparency via Auditing, Certification, and Safe Harbor Regimes, 46 FORDHAM URB. L.J. 428, 447 (2019).

  6. See Timothy Lau & Alex Biedermann, Assessing AI Output in Legal Decision-Making with Nearest Neighbors, 124 PENN ST. L. REV. 609, 613 (2020) (defining AI and suggesting the best definition for use in the legal context).

  7. See id.

  8. See Shlomit Yanisky-Ravid, Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era—The Human-Like Authors Are Already Here—A New Model, 2017 MICH. ST. L. REV. 659, 661–63 (2017) (explaining that this feature truly separates AI and Automated Decision-Making Systems (ADMS) from existing computer programs and algorithms).

  9. Or anyone developing AI.

  10. Yanisky-Ravid & Hallisey, supra note 5, at 439.

  11. See id.

  12. Id. at 443.

  13. See Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision-Making in the Machine-Learning Era, 105 GEO. L.J. 1147, 1161 (2017) (outlining many governmental uses of AI, including assessing risks of street crime, automating weapons delivery systems, targeting city water pipe replacement [to combat lead contamination], improving weather forecasts, predicting toxicities of medical chemical compounds, determining an individual’s creditworthiness, and predicting abuse in tax returns).

  14. Id. at 1162.

  15. See generally Ari Ezra Waldman, Power, Process, and Automated Decision-Making, 88 FORDHAM L. REV. 613 (2019) (discussing governments’ AI usage in deciding who will receive government benefits, who winds up on watch lists, and who deserves what sort of healthcare, among other important decisions).

  16. Id. These decisions by AI often have significant and dire social ramifications, as discussed in Part III. D.

  17. Id. at 621.

  18. Id. at 618.

  19. See Johnson, supra note 2, at 1239–41 (explaining that users train machine learning systems and AI with datasets that often reflect and contain historical and systemic biases, which the AI then integrates).

  20. Id. at 1240.

  21. Id. at 1242.

  22. See id. at 1242–43 (noting that the way in which AI developers shroud their works is akin to a “black box,” impenetrable to prying eyes and any substantial scrutiny).

  23. Id. at 1243 (positing that this could help prevent misuse by government entities and corporations).

  24. See generally Jenny-Brooke Condon, Illegal Secrets, 91 WASH. U. L. REV. 1099 (2014) (presenting the different scenarios in which governments maintain secrets and operate in avoidance of the public’s watchful eye); see also Frederick A. O. Schwarz, Democracy in the Dark: The Seduction of Government Secrecy (2012) (noting that governments often must balance between secrecy and openness to protect their operations).

  25. See infra Parts IV & V. Further, for any solution to work, it would require a mechanism to ensure government compliance; otherwise, the government could easily circumvent any solution intended to limit or prevent disallowed AI use.

  26. See Waldman, supra note 15; see generally “The Digital ‘Earplug’ Streaming Music Services, Multicultural and Advanced Technology – from Segregation to Integration: The Case of Arabic Music and the Israeli Playlist” in NEW TECHNOLOGY AND INTELLECTUAL PROPERTY (Lior Zemer, Dov Grinbaum and Aviv Gaon, Eds., Srigim Li-On, Israel: Nevo, 2020) (forthcoming) (discussing the biased and discriminatory nature of AI-based, digital music streaming platforms in relation to Israeli and Arabic music).

  27. McKenzie Raub, Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices, 71 ARK. L. REV. 529, 537–38 (2018) (outlining the different ways AI technologies direct hiring processes, including interacting with candidates, screening them for open positions, and analyzing candidates’ tone, diction, and facial movements); Bradfield E. A. Biggers, Curbing Widespread Discrimination by Artificial Intelligence Hiring Tools: An Ex-Ante Solution, B.C. INTELL. PROP. & TECH. F., Jan. 19, 2020, at 1, 3. Biggers also notes that the complexities behind AI make it nearly impossible to detect intentional and unintentional prejudice in these hiring processes. Id. at 4.

  28. Biggers, supra note 27, at 5.

  29. Id. (explaining that these AI systems mine data when making decisions and these datasets contain prejudices against protected classes, skewing and biasing its decision-making).

  30. There are several elements a plaintiff must show to prove a discrimination claim under Title VII of the Civil Rights Act of 1964. See Walsh v. N.Y.C. Hous. Auth., 828 F.3d 70, 75 (2d Cir. 2016).

  31. See Biggers, supra note 27, at 9.

  32. Biggers, supra note 27, at 9–10.

  33. See Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For, 16 DUKE L. & TECH. REV. 18, 29 (2017). This term was used to refer to redlining on the web. Id.

  34. See id. at 30 (for example, these algorithms would steer racial minorities away from specific housing districts based on their race, essentially redlining the district from the internet).

  35. See id. at 29.

  36. Id.

  37. See Waldman, supra note 15 (discussing the idea that AI and computer algorithms are only as good as the data with which they operate).

  38. One can spend countless hours reading about the different ways in which AI and flawed datasets discriminate and disadvantage marginalized populations. This Essay suggests that government should not use these systems for social decisions until they implement a solution for the tainted dataset issue.

  39. Until here, this Essay focused on the issues surrounding AI usage and the discrimination it engenders.

  40. See Edwards & Veale, supra note 33 (discussing the different companies sued for their AI’s discrimination).

  41. See AI and Algorithmic Bias, Seeking Disclosure of the Algorithm, 4C N.Y. PRAC., COM. LITIG. IN NEW YORK STATE COURTS § 79:13 (5th ed.).

  42. See Matter of Brennan Ctr. for Justice at NYU Sch. of Law v. New York State Bd. of Elections, 73 N.Y.S.3d 666 (2018); see also Miller v. NYS Dept. of Financial, 2015 WL 1504301 (N.Y. Sup. 2015).

  43. This Essay will address this issue further in Part III. D.

  44. See Coglianese & Lehr, supra note 13.

  45. See Yanisky-Ravid & Hallisey, supra note 5, at 455 (noting that AI trainers generally must use vast amounts of user or personal data in their calculations, and that AI systems cannot reliably function without massive amounts of data with which to train).

  46. Id.

  47. See id. at 434. The authors note that policymakers seeking to temper AI usage face a tough task. Id. Creating too permissive of a scheme would allow for the continued discrimination found in AI technology, while a more restrictive regime would inhibit any potential benefits. See id.

  48. See Fred H. Cate, Government Data Mining: The Need for a Legal Framework, 43 HARV. C.R.-C.L. L. REV. 435, 436 (2008). The author notes that until recently, the government’s practice of collecting and processing citizens’ data was time-consuming and expensive, and the data often existed in formats that were difficult to process or utilize. Id. at 435. However, this is no longer true, and emerging technologies have enabled the government to “erode the protection for personal privacy previously afforded by practical obscurity.” Id.

  49. See id.

  50. George Gutierrez, The Imbalance of Security & Privacy: What the Snowden Revelations Contribute to the Data Mining Debate, 19 INTELL. PROP. L. BULL. 161, 171 (2015).

  51. Id. at 172.

  52. Cate, supra note 48, at 436.

  53. See id.

  54. See id. The government accesses data from big corporations in the private sector and then uses these vast datasets for many activities, often involving AI systems conducting predictive analysis of activities and relationships. Id.

  55. Id. at 437; see generally Jane Bambauer, Collection Anxiety, 99 CORNELL L. REV. 195, 202 (2014) (arguing that individual privacy should not necessarily supersede the government’s objectives because of the potential benefits of data collection).

  56. Part IV presents potential solutions to this issue.

  57. As outlined in previous sections.

  58. See Coglianese & Lehr, supra note 13.

  59. Edwards & Veale, supra note 33.

  60. Chad Squitieri, Confronting Big Data: Applying the Confrontation Clause to Government Data Collection, 101 VA. L. REV. 2011, 2015 (2015).

  61. Id. at 2024.

  62. Id. at 2050 (explaining that any accused has the right to confront the witnesses against them, and therefore, the data should be revealed to said person).

  63. See id.

  64. Cate, supra note 48, at 485 (explaining that any substantial change can only arise from drastic measures enacted by Congress and presidential administrations).

  65. Id. at 488.

  66. Id.

  67. See id. at 477 (the article suggests that increased oversight of data mining operations would create a high degree of accountability and prevent data misuse. However, it does not consider that governments’ misuse of personal data may not be solely privacy-related, but also a social issue. While the solution proposes an avenue for redress, allowing parties harmed by governments’ private-data usage to recover damages, it does not consider that these harms are not always apparent. When government incorporates flawed data into its decision-making systems, it makes discriminatory choices, often out of view from any sort of audit, and with little chance of recourse for those harmed).

  68. See Yanisky-Ravid & Hallisey, supra note 5, at 473 (the authors note that AI training is of paramount importance because the benefits it provides are significant, but the ways in which it can circumvent established privacy and anti-discrimination regimes are innumerable). The authors admit that this solution will not solve all AI discrimination issues but hope it will incentivize best practices by “encouraging transparent operations… clarify areas of concern… and [provide] guidelines to an industry currently [operating] in a regulatory vacuum.” Id.

  69. Id.

  70. See id. at 475 (this auditing process can ensure reliability and lead to increased trust in AI usage).

  71. Id. at 479.

  72. See id.

  73. Id. at 485. The authors suggest providing entities that comply with their proposed framework with a “safe harbor” that provides a limited liability shield from AI-produced mistakes and offenses. Id. at 486. They argue that this would incentivize AI users to comply with their framework and focus on preventing flawed, biased, and discriminatory data from use within their AI systems. Id.

  74. And focuses more so on private corporations rather than governmental entities.

  75. Especially considering that governments’ inner workings are often kept confidential and inscrutable. See Condon, supra note 24.

  76. See Waldman, supra note 15, at 632.

  77. Until it can establish a proper system. While this may hamper governments’ ability to function in the “swift” manner to which they are accustomed, it is a small price to pay to prevent the further entrenchment of bias in an already discriminatory regime.

Steven W Schlesinger

Steven W Schlesinger is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. He holds a B.A. in Psychology from Touro College. Steven is an ordained Rabbi, founder of an IT company, and occasionally dabbles in blitz chess.