Deepfakes present a danger to the future of news media

An example of a deep fake is shown here with actress Jennifer Lawrence (left) as Steve Buscemi (right). Deep fakes have been used for many years.

Image from The Sun

Kimberly Brown, arts & entertainment editor


In May 2018, a video of United States President Donald Trump appeared on the internet, in which Trump offered Belgium advice on climate change. Knowing Trump’s stance on climate change, many people were upset. However, it’s not Trump who deserves their anger.

This video was created by a Belgian political party, Socialistische Partij Anders, otherwise known as the sp.a. The video of Trump addressing Belgium was fake. Released by the sp.a on its Facebook and Twitter pages, the video sparked outrage among Belgian citizens.

“Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools,” a Belgian woman wrote on sp.a’s Facebook page.

Although intended to be a “practical joke” according to The Guardian, the video had a huge impact.

Invented in 2014 by Ian Goodfellow, generative adversarial networks (GANs) generate new data out of existing data sets, including new audio and text from existing media. For example, a GAN can produce a new photo of a celebrity by learning from thousands of existing photos. This is concerning for many who question whether they can trust certain videos.

“The biggest danger is the way news can be altered,” junior Stuti Ramana said. “Anybody can do anything they want with it. Computers are really fast (at creating deepfakes). With (new technology), (they) can (be created) in just a couple of hours.”

Images from thispersondoesnotexist.com
These people may seem familiar. However, none of the people pictured here are real. A computer has generated the photos seen here through a generative adversarial network.

As one would expect, deepfakes threaten to accelerate the spread of false information. In a GAN, two machine learning models work against each other to create believable deepfakes. While one model, the generator, trains on a data set and creates the forgery, the other model, the discriminator, attempts to detect it. The two compete until the discriminator can no longer detect the forgery, producing a realistic deepfake.
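The adversarial loop described above can be sketched in miniature. The toy below is purely illustrative (it is not any real deepfake tool): the "real data" are just numbers drawn near 4.0, the generator is a one-parameter function that learns to mimic them, and the discriminator is a simple logistic classifier that tries to tell real from fake. All names and the training setup here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Discriminator: D(x) = sigmoid(w*x + b), its estimate that x is real.
w, b = 0.1, 0.0
# Generator: G(z) = mu + sigma*z, with random noise z ~ N(0, 1).
mu, sigma = 0.0, 1.0

lr = 0.05
for step in range(2000):
    real = rng.normal(4.0, 0.5)      # one sample of "real" data
    z = rng.normal()
    fake = mu + sigma * z            # one generated ("fake") sample

    # Discriminator update: push D(real) up and D(fake) down.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: nudge fakes toward what D currently calls "real".
    d_fake = sigmoid(w * fake + b)
    grad = (1 - d_fake) * w          # gradient of log D(x) at the fake
    mu += lr * grad
    sigma += lr * grad * z

print(round(mu, 2))  # the generator's mean typically drifts toward 4.0
```

The same tug-of-war, scaled up from single numbers to millions of pixels and deep neural networks, is what makes deepfake video forgeries hard to distinguish from real footage.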

Florida Senator Marco Rubio believes that the use of deepfakes could undermine our election processes and destabilize the U.S.  

“The vast majority of people watching that image on television are going to believe it, and if that happens two days before an election, or a night before an election, it could influence the outcome of your race,” Rubio said in a 2018 interview with The Heritage Foundation.

The use of deepfakes is very concerning because it enables foreign actors to influence our elections, internet and banking system, electrical grid and infrastructure.  

However, others, such as Tim Hwang, director of the Ethics and Governance of AI Initiative at MIT, do not believe that deepfakes are as dangerous as other weapons.


“As dangerous as nuclear bombs? I don’t think so,” Hwang said. “I think that certainly, the demonstrations that we’ve seen are disturbing. I think they’re concerning and they raise a lot of questions, but I’m skeptical they change the game in a way that a lot of people are suggesting.”

For Ramana, the rise of these fake videos is very concerning because it erodes the American people’s trust in the media.

“(Because of deepfakes) I don’t think we can ever go back to trusting (media),” Ramana said. “Honestly, it’s kind of scary. I feel like we won’t know who to trust.”