
The Deeply Disturbing, DeepFake Generation

  • Writer: Kari Thomas
  • Nov 28, 2023
  • 5 min read

Updated: Dec 14, 2024

What is a DeepFake?


A DeepFake is an image, video, or even a voice recording in which the person depicted is not a real person, but generated by AI. Most often, these are created to show celebrities and politicians saying and doing compromising things that they typically would not do on film. “Typically, deepfakes are used to purposefully spread false information or they may have a malicious intent behind their use. They can be designed to harass, intimidate, demean, and undermine people. Deepfakes can also create misinformation and confusion about important issues.” (Nevada Today)


According to Maximilian Schreiner, writing for The Decoder, the technology behind DeepFakes started in 2014, when “Goodfellow publishes a scientific paper with colleagues that introduces a GAN [Generative Adversarial Network] for the first time.” A GAN is a system of two AIs - one that creates an image, video, or voice recording, and one that attempts to “discovery forgery…[then] adapt and improve.” In those early days, the AI was creating images that were blurry, pixelated, and easily spotted as fakes.

Image: Goodfellow et al. 2014
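For the technically curious, the two-network contest described above can be caricatured in a few lines of Python. This is a deliberately tiny, made-up 1-D example - the "generator" just learns to shift random noise toward the real data, and the "discriminator" is a one-weight logistic classifier - so every name and number below is illustrative, not taken from Goodfellow's paper.

```python
import numpy as np

rng = np.random.default_rng(42)

REAL_MEAN = 4.0  # the "real data" distribution is N(4, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator D(x) = sigmoid(w*x + b) scores how "real" a sample looks.
# Generator produces fakes by shifting standard noise by a learned amount g.
w, b, g = 0.0, 0.0, 0.0
lr = 0.05

for _ in range(8000):
    real = rng.normal(REAL_MEAN, 1.0, 64)   # real samples
    fake = rng.normal(0.0, 1.0, 64) + g     # the generator's fakes

    # Discriminator step: learn to score real samples ~1 and fakes ~0
    # (gradient of the usual logistic loss, written out by hand).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: nudge g so the discriminator mistakes fakes for real.
    d_fake = sigmoid(w * fake + b)
    g += lr * np.mean((1 - d_fake) * w)

# By the end, g has drifted toward REAL_MEAN: the fakes now overlap the
# real data, and the discriminator can no longer tell them apart.
```

Each side only improves because the other punishes its mistakes - which is exactly the "adapt and improve" loop that, scaled up to deep networks and images, produces DeepFakes.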


As the years progressed, so did this new technology. By the end of 2015, “Researchers [were] combining GANs with multilayer convolutional neural networks (CNNs) optimized for image recognition…” (Decoder) Basically, this means the CNN-based models could process far more visual information, far faster, than the original GANs could, and when run on a graphics processing card, that time was even shorter with greater results.

Image: Radford et al. 2015
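To make the CNN idea concrete: a convolutional layer slides a small filter across an image and measures how strongly each neighborhood matches the pattern the filter encodes. Here is a minimal hand-rolled sketch (my own toy example, not from the Radford paper) where a two-pixel filter lights up exactly where a dark region meets a bright one.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and record the match at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny "image": dark left half (0.0), bright right half (1.0).
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# A vertical-edge filter: responds where brightness jumps left-to-right.
edge = np.array([[-1.0, 1.0]])

response = conv2d(img, edge)
# response is 1.0 down column 2 (the dark/bright boundary), 0.0 elsewhere
```

Stacking many such learned filters, layer after layer, is what lets a CNN-based discriminator "see" textures and faces instead of raw numbers - and a GPU can compute thousands of these sliding windows in parallel, which is why the 2015 combination was such a leap.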


However, if you ask the U.S. government, DeepFakes were not invented until 2017, by “An anonymous user of the social media platform Reddit, who referred to himself as ‘deepfakes’...” They also acknowledge in this document that - like “Kleenex,” “Xerox,” and “Photoshop” - “the term ‘deepfakes’ appears to have acquired a similar connotation to any synthetic media” through the general population's common use - or misuse - of the terminology. They place many things in this same category of “synthetic media,” including what they call “cheapfakes,” “shallow fakes,” and anything else that “utilize[s] a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.” (Increasing Threat of DeepFake Identities) They cite the first deepfake video as the Obama BuzzFeed video from 2018.



It is easy to see how, in just a few short years, we progressed tremendously in our ability to create realistic and believable AI-generated videos. From the blurry photos with monster-esque faces to Jordan Peele’s face-swap with Obama - the jump in technology was astronomical - and only JUST the beginning. After Jordan Peele introduced this tech to mainstream media with that video, it had an even greater boom - and if you thought that the Obama video was bad and dangerous, that was nothing compared to what was coming. “The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. A staggering 96% were pornographic and 99% of those mapped faces from female celebrities onto porn stars.” (The Guardian)



How to not be Deceived 


I have noticed that these videos are often fairly easy to recognize if you remember to pay close attention to exactly what you are looking at. The first place I always check is around the lips, as to me they are the biggest giveaway - a DeepFake will usually look blurry and pixelated around the lips. According to Nevada Today, MIT suggests several quick checks as well. “This includes paying close attention to specific attributes like facial transformations, glares, blinking, lip movements, natural sounds like coughs or sneezes, and other characteristics like beauty marks and facial hair.”
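That "blurry around the lips" giveaway can even be approximated in code. A common sharpness heuristic - my own illustration here, not something from MIT's guide - is the variance of the image Laplacian: crisp regions produce a spread of strong responses, while blurred or smoothed-over regions produce almost none. A real checker would first crop the mouth region using a face detector; below, two synthetic patches stand in for a sharp crop and a blurry one.

```python
import numpy as np

def laplacian_variance(patch):
    """Sharpness score: variance of the discrete Laplacian of a grayscale patch.

    The Laplacian responds to rapid intensity changes (fine detail),
    so sharp patches score high and blurred patches score near zero.
    """
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(1)

# Stand-in for a sharp mouth crop: lots of pixel-level detail.
sharp = rng.uniform(0.0, 1.0, (32, 32))

# Stand-in for a blurry, smeared-over crop: nearly flat, faint noise.
blurry = np.full((32, 32), 0.5) + rng.uniform(0.0, 0.01, (32, 32))

sharp_score = laplacian_variance(sharp)
blurry_score = laplacian_variance(blurry)
# sharp_score comes out orders of magnitude larger than blurry_score
```

A very low score on the mouth region of an otherwise sharp face is the numeric version of the eyeball test I described - suspicious, though of course not proof on its own.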



At about 4:40 in this video, we see a DeepFake of President Nixon announcing that the Moon Landing went poorly. Again, I find the lips to be the biggest giveaway in this video - however, he also has what I like to call “dead eyes”: he barely looks around, and whenever he does finally look up, his eyes seem too black to be human. He never fully blinks, and his eyebrows never seem to move.


ESET tells us that “More commonly, companies may experience ‘vishing,’ a specific type of phishing that can use DeepFake audio to manipulate employees by making them believe they are following the orders of their employers.” This is actually an issue we have been dealing with for the last few weeks at Starbucks! Corporate was notified and sent out emails explaining how all of us should handle a call that turns out to be one of these. Regardless of the precautions in place, I am still a bit nervous answering the phone these days.



The Punishments for Creating and Posting


The immense harm that posting one of these DeepFakes could do is obvious - but would you believe me if I told you that we currently have next to no legal defenses or actions to take against the people making these videos? In roughly two-thirds of the country, there is not even proposed legislation. The state of Louisiana has proposed a law that prohibits “the crime of ‘unlawful deepfakes involving minors,’ which is the creation, distribution or possession of any sexually explicit material depicting a minor using deepfake technology, and will be penalized by imprisonment at hard labor for less than five and not more than 20 years, a fine of not more…” But this law is merely a proposal right now, not actually in effect. NDTV says that there will be regulations soon - but when is soon? A law did go into effect in Minnesota in late July that “makes it illegal to make and disseminate certain images using deepfake technology…to influence an election or disseminate nonconsensual sexual pornography.”



While many states do in fact have some sort of law at least proposed, the repercussions and distrust that the subject of a video has to endure are not always fixed by a law. In this generation of revenge and blackmail, people believe what they see and often cannot be swayed from their original impressions.



Can they be used for good?


This was the original question for this blog post: can a DeepFake be made with a purpose that does not deceive? Can it be used for good? After a full week of research, my answer is still no. To me, the ideas of “fake” and “deception” are one and the same - synonymous terms that cannot be separated from one another. Making a DeepFake that does not deceive sounds like an impossibility to me.


I am really curious to know what you think… is it possible to make a “fake” that does not “deceive?” Or after all of this evidence, are you in the same boat as me? 


Let me know in the comment section below!


© 2023 by Kari Writes. Proudly created with Wix.com
