Synthetic Media: The Real Trouble With Deepfakes

By M. Mitchell Waldrop

Knowable Magazine

The snapshots above look like people you’d know. Your daughter’s best friend from college, maybe? That guy from human resources at work? The emergency-room doctor who took care of your sprained ankle? One of the kids from down the street?

Nope. All of these images are “deepfakes” — the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. They are just one example of what this fast-evolving method can do. (You can create synthetic images yourself at ThisPersonDoesNotExist.com.) Hobbyists, for example, have used the same AI techniques to populate YouTube with a host of startlingly lifelike video spoofs — the kind that show real people such as Barack Obama or Vladimir Putin doing or saying goofy things they never did or said, or that revise famous movie scenes to give actors like Amy Adams or Sharon Stone the face of Nicolas Cage. All the hobbyists need is a PC with a high-end graphics chip, and maybe 48 hours of processing time.

It’s good fun, not to mention jaw-droppingly impressive. And coming down the line are some equally remarkable applications that could make quick work of once-painstaking tasks: filling in gaps and scratches in damaged images or video; turning satellite photos into maps; creating realistic streetscape videos to train autonomous vehicles; giving a natural-sounding voice to those who have lost their own; turning Hollywood actors into their older or younger selves; and much more.

Yet this technology has an obvious — and potentially enormous — dark side. Witness the many denunciations of deepfakes as a menace, Facebook’s decision in January to ban (some) deepfakes outright and Twitter’s announcement a month later that it would follow suit.

“Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of “Truth Decay,” a 2018 RAND report about the diminishing role of facts and data in public discourse. When we see a doctored video that looks utterly real, she says, “it’s really hard for our brains to disentangle whether that’s true or false.” And the internet being what it is, there are any number of online scammers, partisan zealots, state-sponsored hackers and other bad actors eager to take advantage of that fact.

Continue to full article . . .

Picture: Today Testing (for derivative) / CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Social_Media_Marketing_Strategy.jpg
