
Deep Fakes and Disinformation: The Fight Against Deep Learning AI and its Misuse

December 25, 2019 – Jenna Tsui is the editor and co-owner of a website called The Byte Beat. Recently, she asked if she could pitch some ideas for postings on this site and I was instantly intrigued. Her focus of late has been on cybersecurity, the evolution of artificial intelligence, future tech, and environmental science. Sounds like a match made in heaven.

In this contribution, Jenna looks at the rise of deep fakes and what it means for the future of news. Is what we will read and see real or not? Will disinformation make news reporting meaningless? Let me know what you think.


When deep fakes started going viral this year, they were widely regarded as an amusing application for deep learning artificial intelligence (AI). Whether you wanted to see Donald Trump star in Breaking Bad or Harrison Ford replace Alden Ehrenreich in the much-maligned Solo: A Star Wars Story, someone could use AI to superimpose one face on another person's body. But now that the novelty has worn off, the reality of this technological capability has left us wondering whether deep fakes produced using AI are going to be the next source of fake news we have to combat. Let's take a closer look at these AI-generated fakes and how to fight them.

What Makes Digital Fakes So Effective?

Why are digital fakes so effective? This Obama fake video is a good illustration. It features former President Barack Obama, but the words are actually being spoken by comedic actor Jordan Peele. This almost convincing video was produced using a couple of programs and 50 hours of deep learning processing. It shows how easily celebrities and people in positions of power can have their faces and voices stolen with a little processing power and an actor who is a good mimic.

We’re steeped in disinformation. Instead of being able to trust news sources for unbiased and accurate information, today we have to take multiple steps to confirm the source of a piece of information before accepting it as true. “Fake news” has become the headline of the year, and most social media websites now come equipped with fact-checkers to flag things suspected of being false or only partially true. That air of skepticism and distrust is part of why deep fakes are so effective. We know they’re false but want them to be true.

Impossible to Tell the Difference

The scary thing about deep fakes is that while not yet perfect, they can be incredibly convincing. It appears that it is not hard to put words in someone’s mouth.  As deep fake technology advances and the AI used to create it becomes smarter, will you be able to tell what’s real and what’s generated by a deep learning program? It won’t be long before we won’t be able to tell truth from fake, at least, not by looking or listening. “The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” says Tim Hwang, the current Director of the Ethics and Governance of Artificial Intelligence Fund. And without a digital litmus test to determine the veracity of a video clip, even the most skeptical Internet user might find themselves drowning in a sea of deep fakes.

Fighting Back Against Deep Fakes

Industry experts are right to be worried about the potential impact of deep fakes. One that recently emerged mimics President Trump, having him talk about colluding with the Russians. While he never said the words in the video, that didn’t stop it from going viral, with opponents using it to “prove” the President’s guilt.

That’s what makes a digital litmus test so important. It would be a system designed to protect world leaders by analyzing their unique ways of speaking and moving. It would compare authentic video footage to suspect clips and, hopefully, determine whether a video is real or a deep fake designed to sow discord in the ranks. One company, Amsterdam-based Deeptrace, is working to turn deep fake technology on itself, analyzing videos with a discriminator algorithm that spots when real videos take a turn toward the fictional. The Deeptrace software is being fed thousands of fake videos, using deep learning to hone its detection capabilities.
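To make the idea of a detection algorithm concrete, here is a minimal sketch of one simple cue early deepfake detectors actually used: generated faces often blink far less than real people do. This is an illustrative toy, not Deeptrace's actual method; the per-frame "eye openness" signal, thresholds, and function names are all assumptions for the example.

```python
# Illustrative sketch only -- NOT Deeptrace's algorithm. Early deepfake
# detectors exploited the fact that generated faces often blink less
# than real ones. Given a per-frame "eye openness" signal (0 = closed,
# 1 = open), we count blinks and flag clips with implausibly low rates.

def count_blinks(eye_openness, closed_below=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_fake(eye_openness, fps=30, min_blinks_per_minute=6):
    """Flag a clip whose blink rate is implausibly low.

    Humans blink roughly 15-20 times per minute; the threshold here
    is a conservative lower bound to limit false positives.
    """
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute

# A 10-second "real" clip with five blinks vs. a "fake" with none.
real_clip = ([1.0] * 55 + [0.1] * 5) * 5   # 300 frames, 5 blinks
fake_clip = [1.0] * 300                     # 300 frames, 0 blinks
print(looks_fake(real_clip))  # False: about 30 blinks per minute
print(looks_fake(fake_clip))  # True: no blinks at all
```

A real system would combine many such cues (lighting, head pose, lip-sync) inside a learned discriminator rather than one hand-tuned rule, which is exactly why it needs to be trained on thousands of fake videos.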

What the Future Holds

Deep fakes can be a neat way to see your favorite celebrity in a movie they didn’t work on, or to transport yourself into a favorite film. It’s a unique application for deep learning technology and AI, but only if it’s being used as entertainment. Things are likely to get worse before they get better, at least until companies like Deeptrace release reliable deep fake detection algorithms. So for now, it’s a good idea to view everything on the Internet with a healthy degree of skepticism until we have a foolproof way to tell what is fake from what is real.


With the news today, is it better to burn before reading because of the uncertainty cast by deep fakes produced by deep learning AI?
Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...
