Should a Computer Decide Your Freedom - Fordham Intellectual Property, Media & Entertainment Law Journal

Should a Computer Decide Your Freedom


America is the “Land of the Free,” but critics of our penal system argue that we do not live up to the title.1 Some state governments have begun answering these calls for reform, with California, New Jersey, and Arizona attempting to end the use of cash bail entirely.2 However, the artificial intelligence (AI) tools that replaced cash bail may have made the problem worse.3

The United States bail system allows defendants to remain free while they await trial, with a financial penalty if they fail to appear.4 Judges consider numerous factors when setting the bail amount, with higher amounts set for defendants deemed more likely to skip their court date.5 The most common type of bail is cash bail, an upfront payment.6

While this is often a straightforward process, those who cannot afford to post bail must remain in jail until the date of their trial.7 This can leave an innocent but poor defendant waiting for years behind bars.8 To combat this, as well as to reduce the possibility of human bias, some states replaced the cash bail system with computer algorithms (AI) that determine whether someone is a flight risk.9 Some states require judges to use these risk assessment tools, while others make them optional.10 Essentially, a computer is deciding who goes free and who waits in jail. Unfortunately for those defendants, machines may be more biased than man.11

Artificial intelligence is a computer algorithm designed to make human-like decisions.12 An AI system is exposed to an extremely large number of data points, from which it is trained to find patterns and make predictions.13 Therefore, an AI system is only as unbiased as its data. States have implemented AI as “risk assessment systems” in place of the cash bail system for almost a decade, on the advice of the Pretrial Justice Institute.14 But in 2019, researchers from Harvard, MIT, Princeton, and several other reputable institutions strongly urged states to stop using them.15
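The point that an AI system is only as unbiased as its data can be illustrated with a toy sketch. The numbers, labels, and "model" below are entirely hypothetical and bear no relation to any deployed risk assessment tool; the sketch assumes two neighborhoods whose residents miss court at the same true rate, but whose failures were recorded unevenly in the historical data.

```python
# Hypothetical training records: (neighborhood, recorded_failure_to_appear).
# Both neighborhoods are assumed to have identical true behavior, but past
# enforcement recorded failures in "urban" twice as often as in "rural".
records = (
    [("urban", 1)] * 40 + [("urban", 0)] * 60 +
    [("rural", 1)] * 20 + [("rural", 0)] * 80
)

def train_risk_model(data):
    """'Train' by memorizing the recorded failure rate per neighborhood --
    the same statistical pattern a learning algorithm would extract."""
    counts, fails = {}, {}
    for place, failed in data:
        counts[place] = counts.get(place, 0) + 1
        fails[place] = fails.get(place, 0) + failed
    return {place: fails[place] / counts[place] for place in counts}

model = train_risk_model(records)
print(model["urban"])  # 0.4 -- scored as twice as "risky"
print(model["rural"])  # 0.2 -- despite identical assumed true behavior
```

The model faithfully reproduces the disparity baked into its records: a defendant from the over-policed neighborhood is scored as twice as risky, even though the underlying behavior was assumed identical. No amount of accuracy on the recorded labels can fix data that misrepresents reality.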

While AI was implemented to decrease the racial disparity in pretrial jails, it ended up doing the exact opposite. Kentucky, for example, did not have a large gap between the shares of black and white defendants granted release before the AI was implemented in 2011. Since then, Kentucky has consistently released a higher percentage of white defendants.16 This disparity persisted even after multiple attempts to change the algorithm.17 Despite signs that the system is flawed, Kentucky law allows the AI to release “low-risk” defendants without judge involvement or approval.18

Similar trends of possible bias have been seen in states such as Colorado,19 Ohio,20 and Texas.21 There are various theories explaining why the AI could have these biases, all of which focus on how training data could cause AI to make flawed inferences. One suggestion is that judges are more likely to set little to no bail in rural, white areas, and to set higher bail in urban, racially diverse areas.22 Another theory is that the AI was trained on flawed or incorrect databases, skewing the results.23 It has also been proposed that the data is accurate, but that data reflecting disparities in punishment will lead to disparate results.24 Regardless of the reason(s), it is clear that pretrial risk assessments are problematic.

Last November, California residents voted down Proposition 25, rejecting the state’s 2018 law that would have replaced cash bail with AI pretrial risk assessments.25 However, other states maintain that such tools are a better alternative to cash bail.26 In the past decade, the supreme courts of Iowa, Indiana, and Wisconsin have all upheld their use.27 Matt Henry of The Appeal explained that risk assessment tools are probably here to stay, and “[w]hile better technology has the potential to make risk assessments fairer, that result is far from guaranteed, and it is up to the people who design, implement, and employ these tools to ensure they … safeguard the rights of those at society’s margins.”28

  1. See, e.g., Mugambi Jouet, Criminal Law: Mass Incarceration Paradigm Shift?: Convergence in an Age of Divergence, 109 J. Crim. L. & Criminology 703 (2019) (discussing criticisms of the American penal system).

  2. Tom Simonite, Algorithms Were Supposed to Fix the Bail System. They Haven’t, Wired (Feb. 19, 2020, 8:00 AM), []; Stephanie Wykstra, Bail Reform, Which Could Save Millions of Unconvicted People From Jail, Explained, Vox (Oct. 17, 2018, 7:30 AM), [].

  3. Id.

  4. Alex Traub, How Does Bail Work, and Why Do People Want to Get Rid of It?, N.Y. Times (Jan. 11, 2019), [].

  5. Id.

  6. Id.

  7. Id.

  8. Id.

  9. Rhys Dipshan, Judges May Be Using Risk Assessment Tools Too Much – and Too Little, (July 16, 2020, 7:01 AM), [].

  10. Id.

  11. Tom Simonite, Algorithms Should Have Made Courts More Fair. What Went Wrong?, Wired (Sept. 5, 2019, 7:00 AM), [].

  12. Barclay Ballard, Artificial Intelligence Begins to Show Signs of Human-like Creativity, (June 1, 2020), [].

  13. Shlomit Yanisky-Ravid & Sean K. Hallisey, “Equality and Privacy by Design”: A New Model of Artificial Intelligence Data Transparency via Auditing, Certification, and Safe Harbor Regimes, 46 Fordham Urb. L.J. 428, 439 (2019).

  14. Simonite, supra note 2.

  15. Martha Minow et al., Technical Flaws of Pretrial Risk Assessments Raise Grave Concerns, Berkman Klein Center (2019) [].

  16. Simonite, supra note 11.

  17. Id.

  18. Id.

  19. Chris Blaylock, University Study Finds Colorado Pretrial Risk Assessment Tool is Biased Against African American Defendants, American Bail Agent Coalition (Aug. 31, 2020), [].

  20. Dawn R. Wolfe, Criminal Justice Group Drops Support For Pretrial Risk Assessment Tools as Ohio Justices Seek to Block Their Use, The Appeal (Feb. 12, 2020), [].

  21. Criminal Justice Committee Report & Recommendations, Tex. Jud. Ctr. (Oct. 2016), [].

  22. Simonite, supra note 11.

  23. “Researchers at Dartmouth University found in January 2018 that one widely used tool, COMPAS, incorrectly classified black defendants as being at risk of committing a misdemeanor or felony within 2 years at a rate of 40%, versus 25.4% for white defendants.” Matthew Guariglia & Hayley Tsukayama, Questions Remain About Pretrial Risk Assessment Algorithms: Year in Review 2020, Electronic Frontier Foundation (Jan. 1, 2021), [].

  24. “Decades of research have shown that, for the same conduct, African-American and Latinx people are more likely to be arrested, prosecuted, convicted and sentenced to harsher punishments than their white counterparts.” Minow, supra note 15.

  25. Guariglia & Tsukayama, supra note 23.

  26. Dipshan, supra note 9.

  27. Id.

  28. Id.

Ziva Rubinstein

Ziva Rubinstein is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She is also a member of the Dispute Resolution Society ABA Mediation Team, the Secretary of the Jewish Law Students Association, the co-treasurer of the Latin American Law Students Association, and a Board of Student Affairs Advisor. She holds a Bachelor of Engineering in Biomedical Engineering from Macaulay at CUNY City College.