Tricking AI to protect your copyright


Artificial intelligences need training data to produce articles, images, sounds and so on, so they help themselves to existing resources – the work of journalists and artists – without prior consent, trampling on those authors' copyright and the economic rights attached to it.

Some authors have taken legal action:

  • In 2023, the American daily The New York Times and the Authors Guild (over 14,000 American authors and writers) sued OpenAI (developer of ChatGPT), while three artists – Sarah Andersen, Kelly McKernan and Karla Ortiz – sued Stability AI (developer of Stable Diffusion), Midjourney and DeviantArt.
  • And in early 2024, Midjourney was targeted again after a list of 16,000 artists whose works it had allegedly harvested was leaked to the media.

And what if the solution lay upstream, in booby-trapping the works themselves so as to poison the AIs that feed on them?

The GLAZE software, developed by computer science professor Ben Zhao and his team at the University of Chicago, lets artists who download it add alterations, invisible to the human eye, to their illustrations; these throw off the AIs that train on them, so that the images the AI then generates come out blurred and garbled. Released in late 2022/early 2023 and already massively downloaded, the software builds on the same team's earlier techniques for disrupting facial recognition systems.
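
For the technically curious, here is a minimal Python sketch of the general idea behind such image "cloaking": an imperceptible, bounded change to every pixel. It is not GLAZE's actual algorithm – GLAZE optimizes its perturbation against a style-extraction model, for which the random noise below merely stands in – and the file names are placeholders:

    # Toy sketch: add a small, bounded random perturbation to an image so it
    # looks unchanged to a human but no longer matches what a model would
    # learn from the original. GLAZE computes its perturbation by optimizing
    # against a style-extraction model; uniform noise is only a stand-in.
    import numpy as np
    from PIL import Image

    def cloak_image(path_in: str, path_out: str, epsilon: int = 3) -> None:
        """Shift every pixel by at most +/- epsilon (out of 255)."""
        img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
        noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
        cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
        Image.fromarray(cloaked).save(path_out)

    cloak_image("artwork.png", "artwork_cloaked.png")  # placeholder file names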

The NIGHTSHADE program, created by the same team, attacks AI training itself by corrupting the link between users' text prompts and the images returned: ask for a cat and you get a pony…
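
As an illustration of the poisoning principle (not NIGHTSHADE's actual mechanism, which conceals the mismatch inside the pixels of a normal-looking image), here is a toy Python sketch in which every "cat" caption in a scraped training set is silently re-paired with a picture of a pony; all names here are invented:

    # Toy sketch of prompt/image poisoning: in a scraped (caption, image)
    # training set, captions mentioning one concept are silently re-paired
    # with images of another, so a model trained on the data learns the
    # wrong association. NIGHTSHADE hides the mismatch inside the pixels of
    # an ordinary-looking image; the explicit swap below is just the concept.
    def poison_dataset(samples, target_word="cat", decoy_image="pony.jpg"):
        """Re-pair every caption containing target_word with a decoy image."""
        poisoned = []
        for caption, image_path in samples:
            if target_word in caption.lower():
                image_path = decoy_image  # "cat" captions now point at ponies
            poisoned.append((caption, image_path))
        return poisoned

    samples = [("a cat on a sofa", "cat1.jpg"), ("a red car", "car1.jpg")]
    print(poison_dataset(samples))  # [('a cat on a sofa', 'pony.jpg'), ...]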

The KUDURRU software, created by the company Spawning, helps internet platforms that supply images detect mass harvesting of those images from their sites by AIs. Alerted artists can then either block access or send the AIs images other than the ones requested, thereby corrupting their training. An AI is only as reliable as its sources…
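
What such detection might look like in its simplest form can be sketched in a few lines of Python – a crude rate check, not Spawning's actual product, with every threshold and file name invented for the example:

    # Toy sketch of scraper detection on an image platform: count requests
    # per client over a short window and, past a threshold, serve a decoy
    # image that corrupts the scraper's training set instead of the real
    # file. The threshold, window and file names are arbitrary placeholders.
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 100          # humans rarely fetch 100 images a minute
    DECOY = "decoy.jpg"

    hits = defaultdict(list)    # client address -> request timestamps

    def serve_image(client_ip: str, requested: str) -> str:
        now = time.time()
        recent = [t for t in hits[client_ip] if now - t < WINDOW_SECONDS]
        recent.append(now)
        hits[client_ip] = recent
        if len(recent) > MAX_REQUESTS:
            return DECOY        # suspected scraper: feed it wrong data
        return requested        # normal visitors get the real file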

The free ANTIFAKE software, created by doctoral student Zhiyuan Yu, computer scientist and engineer Ning Zhang, and a team from the McKelvey School of Engineering at Washington University in St. Louis, fights voice cybercrime more broadly: it distorts the audio signal of a legitimate recording so that it still sounds natural to a human listener yet is unusable for training a voice AI. This helps combat deepfakes – photo, video or audio montages, usually aimed at celebrities or politicians, that make them appear to say or do anything at all, notably for disinformation purposes. A future version could help artists whose voices are reproduced without authorization to release a "new" track.
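
The underlying idea can again be sketched in Python – a faint overlay of noise, standing in for the adversarial signal ANTIFAKE actually optimizes against real speech models; the file names and noise level are placeholders:

    # Toy sketch of protective audio perturbation: overlay a faint noise on
    # a voice recording so it still sounds natural to people but degrades a
    # voice-cloning model trained on it. ANTIFAKE computes an adversarial
    # signal against real speech models; random noise is only a stand-in.
    # Assumes a 16-bit WAV file; file names are placeholders.
    import numpy as np
    from scipy.io import wavfile

    def protect_voice(path_in: str, path_out: str, level: float = 0.002) -> None:
        rate, audio = wavfile.read(path_in)
        audio = audio.astype(np.float64)
        noise = np.random.randn(*audio.shape) * level * np.max(np.abs(audio))
        protected = np.clip(audio + noise, -32768, 32767).astype(np.int16)
        wavfile.write(path_out, rate, protected)

    protect_voice("voice.wav", "voice_protected.wav")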

The creators of NIGHTSHADE and KUDURRU have made it clear that they are not out to obstruct AI systematically, but to let artists and companies alike protect their intellectual property and organize themselves vis-à-vis AI developers so as to sell them their data and creations on legitimate terms.


Sylvie BOYER, paralegal at Mark & Law


Sources: