A deep fake is a picture or video of someone doing or saying something they didn’t. In the old days pictures and video were considered “proof” because it was usually easy to tell if they had been altered, as with the laughable removal of out-of-favour leaders from Soviet photographs.
With the advent of useful “AI”, making deep fakes has become easy, and that is destroying one of the ways we know the truth. Deep fakes put people into positions they never took and have them say things they never said. The most common use is pornography, but putting words into someone’s mouth is potentially just as bad.
The law will need to be changed to deal with this.
- Making a deep fake of someone without their legal authorization must be both a criminal and a civil offense, with jail time, not just fines, since fines don’t work when someone expects to make more money than the fine will cost.
- Consent must be active: no contracts of adhesion, never buried in a EULA, always an individual, specific contract with compensation.
- No long-term contracts: five years at most; nothing open-ended or perpetual.
- If consent is required for employment, it cannot outlast the employment without a separate contract signed after the person is no longer employed. Some exceptions may be carved out for actors and the like.
- All deep fakes must prominently state, in a way that cannot be missed (no fine print or buried credits), that they are deep fakes: probably a banner at the top or bottom of every part of the video in which they appear. Movies and TV shows may be exempt from the persistent banner, but even they must open and close with a prominent announcement. (A minimal sketch of how such a banner could be stamped onto frames follows this list.)
- Ideally, though it is unlikely in the current environment, a person should receive a payment every time the deep fake is shown, not just a one-time fee, along the lines of the residual and radio-play rules of the late twentieth century. There are technological hurdles to this, but they are not insurmountable.
- This must apply to dead people as well. Either the estate must approve and sign a contract, or the person must have been dead a long time, perhaps fifty years, and the prominent disclosure that what is being seen is a deep fake must still apply.
- Anyone who uses a deep fake must preserve the disclosure that it is a fake.
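To make the banner requirement above concrete, here is a minimal sketch of how a disclosure could be stamped onto a single video frame. It uses Python with the Pillow imaging library; the banner size, colors, and text are illustrative assumptions, not a prescription, and a real system would apply this to every frame and guard against the banner being cropped out.

```python
# Minimal sketch: stamp a "DEEP FAKE" disclosure banner onto one frame.
# Assumes Pillow is installed; sizes, colors, and text are illustrative.
from PIL import Image, ImageDraw

def stamp_disclosure(frame: Image.Image, text: str = "DEEP FAKE") -> Image.Image:
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    width, height = out.size
    banner_height = max(24, height // 12)   # scale the banner with the frame
    # Opaque black bar across the bottom of the frame.
    draw.rectangle([0, height - banner_height, width, height], fill=(0, 0, 0))
    # White disclosure text inside the bar (default bitmap font, for brevity).
    draw.text((10, height - banner_height + 4), text, fill=(255, 255, 255))
    return out

# Usage: stamped = stamp_disclosure(Image.open("frame_0001.png"))
```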
We have been very bad at getting law to keep up with technology, and when it has kept up (as with the DMCA), we have mostly created very bad law. It would be nice, for once, to get ahead of something in a timely and fair manner.
If you think there are other ways the law should be formulated, or if you disagree, say so in the comments, with your reasoning.
This is a donor supported site, so if you value the writing, please DONATE or SUBSCRIBE.
StewartM
Several questions.
1) How will you determine that an image or video is a ‘deep fake’? I fear that as the technology improves, it will become increasingly hard to prove that something is one.
2) How do you propose to deal with internet anonymity? Even once something is proven false, tracing it back to the creator may be difficult (particularly if the creator is tech-savvy). And what if the creator is outside the US, beyond the reach of our law? Then what?
3) Related to #2, how will you deal with distributed media (say, Usenet) that lie outside any one country’s jurisdiction? With centralized services (Google, Facebook, Twitter, etc.) you can go to the provider and get the content blocked, but not with distributed systems.
Mind you, I’m with you in principle. However, I fear these measures won’t stop it.
jemand
We desperately need new law in this area.
A few thoughts. Criminalization can operate on different levels: the distribution of deep fakes, their creation, or possession of the tooling capable of creating them.
Distribution and possession of tooling are likely the easiest things for law enforcement to actually criminalize, as it is easier to attribute the provable act to the person. However, distribution alone is easy to obfuscate, and the tooling is very general: it can be used for many things, good and bad, and is more likely to be used for something other than deep fakes.
Criminalizing the creation of deep fakes alone will do very little to mitigate their harms, simply because of how ubiquitous the tooling will be and how easy it will be to delete evidence of creation and obscure the source path through anonymous distribution.
It would be difficult to criminalize possession of the tooling to create deep fakes, as the same tools are useful for very many permissible purposes, and are also extremely widely distributed at this point.
In sum, given the above I don’t see a clear way to criminalize deep fakes that is likely to be effective, and yet it is critical that we do it. Criminalizing intentional creation and distribution, accompanied by some high-profile examples, might help. Criminalizing profiting off deep fakes would help, but primarily economically, for the acting industry; it won’t do much to protect democracy from the erosion of truth about public candidates, or to protect the targets of revenge porn. Criminalizing the intentional creation of tooling designed to streamline the making of deep fakes might help too: it would encourage open-source communities to create packages that clearly label output as machine generated for human eyes, and to attach more prominent licenses restricting harmful use. This kind of law could definitely go bad, though, and end up used as a weapon against socially powerless groups rather than against the people doing harm.
Secondly, there is the question of what IS a deep fake in the first place. This is easy enough with public figures and politicians, where it’s obvious what the user intended to do. It’s also very clear in cases of revenge porn and other situations where someone known to the victim has fed in a personal photograph as a base or reference layer.
However, I am suspicious about some of the foundational behaviors of these models. The way it is described in the press and by most practitioners is as pure creativity, the making of “something new.” But take a look at some recent work, like this paper: https://arxiv.org/pdf/2301.13188.pdf
It’s clear that with the right prompt you can get out memorized elements of the underlying data, including individual people. Their technique only identifies fully memorized images, and the prompts that surface them are usually names or other clear identifiers. But that is simply the easiest way to find and demonstrate the principle; I by no means believe this sort of “remixed training data” behavior is limited to reproducing full images, or to people surfaced only by name.
It is not at ALL demonstrated to my satisfaction that the “purely synthetic” faces and images created from generic prompts are guaranteed to be unrecognizable remixes sampled from some “face-like dimension space,” never memorized patterns from the training data close enough to real individuals to be recognized as such by the human gaze. So it may be possible to create an image that does the equivalent of “deep fake” harm to an individual, with no intent to do so and no knowledge that it had been done. Even at hundred-million-to-one odds, we will quickly reach the point where hundreds of millions of synthetic images are created daily. Such an image is unlikely to cause harm unless it somehow gets distributed widely enough to reach that individual’s social circle.
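To make the worry concrete: one crude way to screen for this kind of memorization, loosely in the spirit of the nearest-neighbor checks in the paper linked above, is to compare embeddings of generated images against embeddings of the training set and flag anything suspiciously close. Here is a minimal sketch, assuming image embeddings from some perceptual model (CLIP, for instance) have already been computed; the 0.95 threshold is purely illustrative.

```python
# Sketch: flag generated images whose embedding is suspiciously close to
# any training-set embedding, as a crude memorization screen.
# Assumes gen_embs and train_embs are precomputed embeddings, one row per
# image; the similarity threshold is an illustrative guess, not a standard.
import numpy as np

def flag_possible_memorization(gen_embs: np.ndarray,
                               train_embs: np.ndarray,
                               threshold: float = 0.95) -> np.ndarray:
    # Normalize rows so dot products become cosine similarities.
    g = gen_embs / np.linalg.norm(gen_embs, axis=1, keepdims=True)
    t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    sims = g @ t.T                       # (n_generated, n_train) matrix
    # Indices of generated images whose closest training image is too close.
    return np.where(sims.max(axis=1) > threshold)[0]
```

At real scale this brute-force comparison would need approximate nearest-neighbor search, and embedding similarity is at best a proxy for “recognizable to the human gaze,” which is precisely the problem raised above.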
Obviously, there is a great deal of monied power that would very much like NOT to believe the above is how these models function, for obvious reasons: attribution of copyright, usability of output for commercial purposes, and so on. So I expect fights over demonstrations of how the models actually function and over the exact relationship of the training data to the model’s performance. (Not to mention the potentially unethical or illegal methods of data sourcing and use.)
Purple Library Guy
Yes, I think that’s an important thing to do. Really, the question of whether it’s easy to catch people who do it is largely separate from the question of whether it should be illegal. Lots of murders are never solved, either. But part of the function of law is to make clear that the behaviour is proscribed and disapproved of by society: that if you do it, you are a criminal. And in the end, the difficulty of technologically detecting perpetrators of such crimes does not usually mean you can’t catch them at all. You catch them because they brag about it, or because they were hired to do it and there’s some kind of record, et cetera. Deep fakes are, above all, a social crime, and likely to leave some kind of social footprint.
As to the international dimension: first, it’s likely to be less international than you might expect, simply because people will have more motivation to fake people they have some connection to; French people will generally fake French movie stars or French politicians, not Brazilian ones. But yes, it will still be a problem, and that’s what extradition is for. International crime is not something the internet invented.
I think similar things should be done about bots. They are a significant social and political problem, and I can see no reason to allow them; they should be thoroughly illegal.
Thorstein
Laws to protect individuals against slander and libel are already on the books. My concern is that deep fake technology will be most widely used by the police and other oligarchic actors who are above the law.
bruce wilder
Is high-resolution, color-corrected super-realism what convinces us of “the truth”?
I do find deep-fake videos hard to question. I followed a TikTok for a short time that presented in the apparent person of Tom Cruise, with all his tics but with a charmingly quirky personality in place of the famous ego; the latter was the tell, I suppose, that this was not the “real” Tom Cruise, the famous fake, correction, actor. On Twitter, there are quite a few accounts that have recently presented pictures of urban mob scenes falsely labeled “France”. These are not great photos or videos: grainy and anything but steadicam. But if you want to believe . . . .
Think of all the photos of the Loch Ness monster or Big Foot or medieval gargoyles with iPhones. People believe what they want to believe, don’t they? Whether it’s the miraculous divinity of Christ or the innocence of the police beating Rodney King. Does visual “evidence” make much difference?
I don’t doubt the desire to be relieved of doubt or the responsibility to judge. People love the idea of evidence that does not lie: DNA, for example, or a machine algorithm supposedly immune to human irregularities. Who can blame a self-driving Tesla for running over a hapless pedestrian who didn’t know how to get the car’s attention?
The dominant narrative is of “AI” taking over, making most humans redundant, superfluous to the needs of their capitalist masters, who can rely on a bot to write Wikipedia articles and business letters, routine letters that will be read as well as written by bots, eliminating all need for actual communication — the final evolution of customer service from the ninth circle of phone tree hell into . . . what?
If ChatGPT is automating the mass production of b.s., “deep fakes” are decorative accouterment in the endless stream of useless salesmanship.
StewartM
Distribution and possession of tooling are likely the easiest things for law enforcement to actually criminalize, as it is easier to attribute the provable act to the person.
I’m very leery of criminalizing mere possession and distribution of deep fakes, for a number of reasons. One, we don’t need to hand the government even more justification for surveillance of our personal computing devices and internet usage. Two, the person possessing or distributing the material may genuinely not know it is a deep fake. You could criminalize distribution where you can also demonstrate that the distributor knew the material was faked, and that would help.
Making its manufacture illegal is good (and hits all of Ian’s nails), but tracing who did it will be problematic. PLG is correct that we don’t solve all crimes, but I suspect our success rate at solving murders will be much higher than our rate of catching the creators of deep fakes.
bruce wilder
omg
I was just listening to a podcast interview with Peter Turchin, author of a new book, End Times. The commercial sponsor? Intel promoting their AI deep-fake detection technology!
So trustworthy!
Aspirational Rhetoric
I feel like the guy in the movie Network shouting “Turn off your idiot boxes and wake up!” Silly phones act as Digital Pacifiers, and computer fantasies run rampant in the majority of the populace’s psyches.
Orwell said the proles are the only ones who’ll have any sense of reality.
The entire artificial media universe is itself the problem.
People are more siloed than ever, unfortunately.
There is no free-flowing conversation anymore.
People do not know how to relate to each other as human beings. The wholesale digitization of everything is an absolute nightmare.
But the kids don’t know this. They are properly cynical, but they don’t understand what’s being done.
There is, by design, no memory left. There is no wisdom to be passed on, because there is no wisdom produced. The adults are as lost as the kids, but they aren’t even cynical.
More and more modern life is like a Truman Show. It really is. Everything is manufactured and everything is overly manicured, including the human beings.
“They” are winning.
The wholesale commodification of the earth is almost complete.
The ability to surveil the entire world population has been achieved – with most in the “first world” countries willingly succumbing to their enslavement.
There is scant privacy left. In fact, there is barely an idea of what privacy itself is.
There was supposed to be a major debate about gene therapies and how far science should be allowed to go in altering our natural selves.
The debate never happened. These things are simply being pushed out on the masses at large.
Cancer is a disease of civilization.
We are regressing and have been for quite some time.
Purple Library Guy
I think Aspirational Rhetoric is overreacting. People have been talking like this forever. I have conversations with my grandkids all the time; they’re just as capable of it as kids were when I was one (well, other than me: I was a massive reader and could sustain conversation with adults better than most). If anything, the current generation’s entertainment promotes communication better than what came a generation or two ago. Time was, kids just sat unreacting in front of TVs, passively absorbing entertainment. Now two of my grandkids spend a ton of time playing Roblox together (while physically miles apart), and not only does the game require a certain level of mental activity, but their communication during the game is constant. It’s not the theoretical ideal of perfection, I’m sure, but it’s categorically better than sitting in front of the TV like most of my generation did.
There’s enough stuff ACTUALLY going drastically wrong with the world; we don’t need to invent more. If nothing else, it’s a distraction from the real issues, and we don’t have the luxury for that these days.