
Potential Solution for Asserting Defamation in Deepfake Videos

Artificial intelligence is a central fixture of the Fourth Industrial Revolution.[1] Once blunt tools, machines now possess the ability to augment our sense of reality. Notably, image technology has advanced to the point where synthesized depictions are no longer discernible from actual persons or events.[2] A user of advanced AI software can now create aesthetically near-perfect, real-life scenarios depicting nearly identical replications of completely unsuspecting people. A central issue accompanying the technology’s emergence is whether a person has recourse should their likeness be generated and used without their consent. And, on a more foundational level, is the generated image even legally an image of the subject at all?

There is hardly a better example of this phenomenon than the so-called “deepfake” videos that have recently circulated online. Deepfake videos depict people in lifelike scenarios complete with both visual and audio features.[3] The subjects in the short films are entirely machine generated and nearly identically mimic actual persons engaging in activities ranging from slipping in a building lobby to teeing off at a golf course.[4] The simulations are of such quality that industry professionals still struggle to identify their falsity.[5] After viewing several of the short films, I admittedly could not pick out any obvious signs of computer generation. The authenticity of the work made me ask how I would react if I were depicted in an AI-generated film. Or worse, if I were falsely implicated in unacceptable or even illegal behavior?

Currently, several states have passed or proposed legislation to ban the use of the technology outright.[6] Absent prohibition, remedies for AI depictions likely lie with traditional defamation actions.[7] To succeed on a defamation action, the plaintiff must generally prove: (1) a false and defamatory statement concerning another; (2) an unprivileged publication to a third party; (3) fault amounting at least to negligence on the publisher’s part; and (4) either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.[8]

Deepfakes may, however, pose a unique problem under defamation theory. While a creator may share an image markedly similar to another person, small alterations to facial features may make a seemingly objective determination much more subjective.[9] A deepfake creator may claim that the image merely “resembles” a real person and does not constitute a defamatory act toward that individual. Alterations to a person’s facial dimensions may be nearly indiscernible, yet they could be materially damaging to a plaintiff’s assertion that the altered image is an exact copy. For instance, a one-millimeter shift in eye symmetry might go completely unnoticed by a casual observer, yet it could be argued to constitute a substantial alteration of the person’s image.

The software’s sophistication allows for such fine-tuning of images that traditional defamation claims may prove extremely difficult, or even impossible, to prevail on. I would therefore advocate for a national statute creating a specific cause of action for those seeking remedies for unauthorized use of their image. For this approach to work, the law would require two main attributes. First, it would allow plaintiffs to seek injunctions against images that materially resemble their likeness. Materiality would ideally be assessed by whether alterations to critical facial features, such as eye placement, nose symmetry, or mouth width, fall within a set number of millimeters. Any deviation smaller than the millimeter threshold for a given facial feature would be presumed to match the plaintiff, thus allowing the plaintiff to prevail in a suit. Second, any regulation should make social media platforms liable for disseminating doctored videos if the plaintiff prevails in the original action against the content creator. This would give platforms an incentive to police themselves and remove the content from public view, further mitigating any harm to a damaged plaintiff.
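To make the proposal concrete, below is a rough sketch of how such a statutory materiality test might operate in practice. The feature names and millimeter thresholds are purely hypothetical assumptions chosen for illustration; the statute itself would define the actual critical features and their limits.

```python
# Hypothetical sketch of the proposed "materiality" presumption: the statute
# would presume a generated image matches the plaintiff when every critical
# facial measurement deviates from the plaintiff's by less than a per-feature
# millimeter threshold. All feature names and threshold values below are
# illustrative assumptions, not figures drawn from any enacted law.

# Per-feature deviation limits, in millimeters (assumed values).
THRESHOLDS_MM = {
    "eye_separation": 1.0,
    "nose_symmetry": 1.0,
    "mouth_width": 1.0,
}

def presumed_match(plaintiff_mm: dict[str, float],
                   generated_mm: dict[str, float]) -> bool:
    """Return True when every critical feature deviates by less than its
    statutory threshold, triggering the presumption of identity."""
    for feature, limit in THRESHOLDS_MM.items():
        deviation = abs(plaintiff_mm[feature] - generated_mm[feature])
        if deviation >= limit:
            # A single above-threshold alteration defeats the presumption.
            return False
    return True

# Example: a 0.8 mm shift in eye separation stays under the 1 mm limit for
# every feature, so the generated image would be presumed to depict the
# plaintiff even though a casual observer would notice no difference.
plaintiff = {"eye_separation": 62.0, "nose_symmetry": 0.5, "mouth_width": 50.0}
generated = {"eye_separation": 62.8, "nose_symmetry": 0.6, "mouth_width": 50.3}
print(presumed_match(plaintiff, generated))  # True
```

Note that this structure deliberately inverts the deepfake creator’s defense: rather than letting imperceptible alterations establish that the image is not “of” the plaintiff, deviations below the thresholds would be treated as matching the plaintiff as a matter of law.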

The software enabling AI image generation is so novel that it requires a new approach to traditional remedies. Short of banning the software outright, the ideal solution would be to extend liability to both content creators and distribution platforms. Traditional defamation theory is currently unlikely to provide remedies for aggrieved parties whose images are used without their authorization.


Gregory James

Greg James is a JD Candidate in the Class of 2023 at Fordham Law. He is an IPLJ Staff Member. He received a B.S. in Applied Accounting and Finance from Fordham University.