Introduction

What are deepfakes?

Deepfaking is the process of using artificial intelligence to create, manipulate, or copy the likeness of an individual in order to produce media that simulates another person. Imagine a computer that can recreate someone’s face and voice and make them say whatever the user wants. These ‘deepfakes’ are generated by using the power of AI to model a persona and convincingly insert it into media.

How do they work?

Deepfake technology, regardless of medium, relies on two AI components working against each other: one generates the imagined person, while the other tries to judge whether the generated face is realistic or unrealistic. The two compete until the machine has a convincing model of the persona. That model can then be inserted into almost any medium you can think of, such as images, video and audio. This powerful tool was once scarce.
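The adversarial loop above can be sketched as a toy in a few lines of Python. This is only an illustration of the generate-then-judge competition, not any real deepfake system; the "facial statistic", the boundary discriminator, and all the numbers are made up for the example. A "generator" learns a single number while a "discriminator" keeps a decision boundary between real and generated samples, and each update exploits the other's current state:

```python
import random

random.seed(0)

REAL_MEAN = 5.0   # the "true" facial statistic the generator must imitate

gen_mean = 0.0    # generator: produces values around a learnable mean
boundary = 2.5    # discriminator: values above the boundary look "real"

for step in range(300):
    real = random.gauss(REAL_MEAN, 0.1)
    fake = random.gauss(gen_mean, 0.1)
    # Discriminator update: keep the boundary between real and fake averages.
    boundary += 0.1 * ((real + fake) / 2 - boundary)
    # Generator update: nudge output toward what the discriminator calls real.
    gen_mean += 0.1 * (boundary - fake)

# After the competition, the generator's output matches the real statistic.
print(abs(gen_mean - REAL_MEAN) < 0.5)  # → True
```

Real systems play the same game with neural networks over millions of pixels, but the dynamic is the same: the discriminator's judgement is the only training signal the generator needs.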

Early history

The Video Rewrite program was created in 1997. It modified the face of the person in a video to match new audio, and Bregler et al. was the first paper to take these concepts and use them to realistically animate a face. By realistic, the original article likely meant “realistic for the time”, and the model was imperfect at generating missing data (like syllables it never saw). But it proved the concept and laid the groundwork for future deepfakes.

In 2001, researchers created another tool that helps computers make deepfakes: the Active Appearance Model. It fits a statistical shape to an image, helping computers track and match faces [3], and may have assisted later deepfake algorithms in locating the face, letting them work effectively and know exactly which parts of the image need changing.

PRACTICAL APPLICATIONS

Movie studios

Years later, large studios began inserting a deceased actor’s face and voice into the final scenes of a movie, so that a character does not change drastically mid-film and confuse viewers. The actor does not have to be dead, though; they may simply have lost their voice.

Roger Ebert was diagnosed with thyroid cancer, which caused him to lose his voice. However, CereProc digitally restored it using a library of his voice recordings, and to Ebert the result succeeded in giving him his voice back. CereProc also created a site where people could type messages to be spoken in George Bush’s voice, along with other celebrities. [4]

In 2015, Val Kilmer, an actor diagnosed with throat cancer, also lost his voice. Thanks to deepfake technology by Sonantic, Kilmer was able to speak effectively in a film, and even surprised his son Jack Kilmer, who said he was ‘moved’ by hearing the recreated voice on screen. [1]

But what if the person in the deepfake has not lost their voice? Perhaps a video needs to be translated into many languages. Wouldn’t it be amazing if everyone could hear the speaker natively?

Translation

David Beckham was involved in a campaign against malaria called Malaria No More, which needed to be translated into multiple languages. With deepfakes, he spread his message in nine languages, making it more personal and driving change. [1]

Soon, the technology became widespread. In 2017, a program for creating deepfakes was made open source. [2] Now deepfakes could easily be created at home, which can be both positive and negative.

SCIENTIFIC APPLICATIONS

Compared to several years ago, it is now easier and cheaper for scientists to use deepfakes in research. For example, deepfakes allow scientists to generate realistic faces and adjust them dynamically. Here are a few examples from the literature:

Measuring eye contact

Vijay et al. wanted to measure how eye contact, smiling and even nodding affect an observer. [2] Barabanschikov and Marinova wanted to manipulate faces on the fly, recreating the Thatcher effect (an illusion based on inverting facial features) and creating chimera images by combining features from multiple people.

Advanced uses

Intel has been researching real-time deepfake detection, using deepfakes to train its models. Its approach includes analysing visible photoplethysmography (vital signs and blood flow) and other signs of life that are visible or audible in the content, which AI deepfakes cannot yet fake. [6]
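The photoplethysmography idea can be roughly illustrated in code. This is a toy sketch, not Intel’s actual method, and every signal parameter is invented for the example: skin in genuine video carries a faint periodic blood-flow component that purely synthetic faces lack, and correlating a brightness signal against the expected heart-rate frequency exposes the difference.

```python
import math
import random

random.seed(2)

def pulse_energy(signal, period):
    # Correlate the signal with sine and cosine waves at the expected
    # heart-rate period; a real pulse concentrates energy there.
    n = len(signal)
    s = sum(v * math.sin(2 * math.pi * t / period) for t, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * t / period) for t, v in enumerate(signal))
    return (s * s + c * c) / n

# Genuine skin: a faint periodic blood-flow component plus camera noise.
real = [0.2 * math.sin(2 * math.pi * t / 25) + random.gauss(0, 0.1)
        for t in range(300)]
# Synthetic face: noise only, no heartbeat.
fake = [random.gauss(0, 0.1) for t in range(300)]

print(pulse_energy(real, 25) > pulse_energy(fake, 25))  # → True
```

A production detector would extract the brightness signal from facial skin regions across video frames and scan a range of plausible heart rates, but the underlying signal-versus-no-signal test is the same.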

Additionally, deepfakes can be used against face recognition algorithms, confusing them so that a specific face cannot be recognised. MyFace MyChoice uses deepfake technology to regenerate a photo of a user from an example image. Since humans and AI recognise faces differently, the face stays recognisable to a person while the AI sees a completely different one, making it difficult for data miners to use facial recognition against users of the service. [6]

RISKS AND IMPACTS

Of course, people also began using the technology for harmful purposes. One such use is generating adult content, either of an actor who already creates such content or of someone who never did. This crosses an ethical line. Privately consuming such content may be an ethical grey area for some people, but sharing it is both a legal and a moral issue, and even unintentional sharing is widely considered unethical, an opinion that appears common in the scientific field.

Financial impacts

Deepfakes have also been used for identity fraud, deceiving ordinary people and corporations alike. For example, by pretending to be a family member in distress, criminals can profit from a simulated ransom threat. Companies have been tricked by fake voices into handing over personal data about users, changing account credentials, and more.

They have even been extorted out of massive amounts of money, for example by impersonating the voices of CEOs. Organisations often rely on key decision makers to authorise fund transfers. Capitalising on this, criminals have used the voices of CEOs and similar executives to trick financial staff into transferring funds, stealing US$243,000 from a British energy firm and US$35 million through the deception of a branch manager in Hong Kong. [5]

Solutions

A push for solutions

Because of the potential harms of deepfakes, people have begun to fear their use. Banning the technology would only push it underground, potentially increasing the harm it could cause. There is hope, however: deepfake detection tools have been developed to prove or disprove the authenticity of media, helping people know what is real and calming some fears.

Forensics

Since a deepfake is only an estimate of a person’s mannerisms (the imagination of a computer combined with content from another person) artefacts are present. Although these are hard for people to spot, especially at low resolution, media forensics teams and AI deepfake detection tools have come a long way and are now highly accurate. It is even possible to identify which AI model created a deepfake: detectors trained on specific deepfake types achieve 97% or higher detection accuracy. [3]
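Identifying which model made a deepfake can be sketched as a nearest-fingerprint classification. This is a simplified illustration, not the method from the cited paper: the model names, the single-number “fingerprints”, and the noise level are all invented. The idea is that each generator leaves a characteristic statistical artefact in its output, and a detector trained per model type matches an observed artefact to the closest known signature:

```python
import random

random.seed(1)

# Hypothetical per-model artefact fingerprints: each generator leaves a
# characteristic bias in residual image statistics (values illustrative).
FINGERPRINTS = {"model_a": 0.8, "model_b": -0.5, "model_c": 0.1}

def fake_image_stat(model):
    # A forged image's residual statistic: fingerprint plus capture noise.
    return FINGERPRINTS[model] + random.gauss(0, 0.05)

def attribute(stat):
    # Nearest-fingerprint classification: pick the model whose known
    # artefact signature best explains the observed residual.
    return min(FINGERPRINTS, key=lambda m: abs(FINGERPRINTS[m] - stat))

sample = fake_image_stat("model_b")
print(attribute(sample))  # → model_b
```

Real detectors learn high-dimensional fingerprints from thousands of examples per generator, which is why training on a specific deepfake type sharpens accuracy so much.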

Education

While detection tools can identify deepfakes automatically, considering the context of a deepfake and the individual’s usual mannerisms also helps people judge what is fake. People can be trained to look for signs of suspicious behaviour in recorded media [5], and this works regardless of how convincing the video and voice are. Sharing a secret phrase with the people you care about most can thwart fake ransom calls, where judging authenticity is otherwise difficult, and requiring a pre-shared PIN for sensitive actions, such as porting the number of a SIM or withdrawing funds, can reduce fraud.
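The pre-shared secret idea can be made concrete with a simple challenge-response check. This is a hypothetical sketch (the phrase and the protocol details are illustrative, not a deployed scheme): the caller proves knowledge of the secret without ever speaking it aloud, which a voice deepfake alone cannot do.

```python
import hashlib
import hmac
import secrets

# Assumed pre-shared phrase, exchanged in person beforehand (illustrative).
SHARED_SECRET = b"blue-pardalote-42"

def make_challenge():
    # A fresh random challenge, so an old answer cannot be replayed.
    return secrets.token_hex(8)

def respond(challenge, secret=SHARED_SECRET):
    # The caller answers with an HMAC of the challenge under the secret.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SHARED_SECRET):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))                  # → True
print(verify(challenge, respond(challenge, b"wrong-guess")))  # → False
```

In everyday use the “HMAC” is simply answering a personal question correctly; the point is the same: authentication rests on shared knowledge, not on how the voice sounds.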

CONCLUSION

Finally, we can educate people about the ethics of using deepfakes, making them less likely to create malicious ones. When publishing a deepfake, it is important to add a note on or near the content stating that it was created with deepfake technology. Examples of parody political deepfakes with correct attribution give hope that deepfake use is still largely positive. Ultimately, deepfakes are a powerful tool with a great deal of potential, and it is up to us to use that tool wisely and encourage others to do the same.

By Aera23 [Insert notices]

References

  1. Lalla, V., Mitrani, A. and Harned, Z. (2022) Artificial Intelligence: Deepfakes in the entertainment industry, WIPO. Available at: https://www.wipo.int/wipo_magazine/en/2022/02/article_0003.html (Accessed: 20 June 2023).
  2. Becker, C. and Laycock, R. (2023) Embracing deepfakes and ai‐generated images in ... - Wiley Online Library, Wiley Online Library. Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/ejn.16052 (Accessed: 20 June 2023).
  3. Multiple authors (2023) Deepfakes: Creation, detection, and impact, ResearchGate. Available at: https://www.researchgate.net/publication/362311842_DeepFakes_Creation_Detection_and_Impact (Accessed: 20 June 2023).
  4. Dave, J. (2020) Audio deepfakes: Can anyone tell if they are fake?, How-To Geek. Available at: https://www.howtogeek.com/682865/audio-deepfakes-can-anyone-tell-if-they-are-fake/ (Accessed: 20 June 2023).
  5. Nadia, S. and Audrey de, R. (2022) How to combat the unethical and costly use of deepfakes, The Conversation. Available at: https://theconversation.com/how-to-combat-the-unethical-and-costly-use-of-deepfakes-184722 (Accessed: 15 July 2023).
  6. Patteson, D. (2023) Real-time deepfake detection: How Intel Labs uses AI to fight misinformation, ZDNET. Available at: https://www.zdnet.com/article/real-time-deepfake-detection-how-intel-labs-uses-ai-to-fight-misinformation/ (Accessed: 31 July 2023).