INTEGRATIVE INSIGHTS
ON EMERGING OPPORTUNITIES

Integrative research means our extensive company research informs every thesis and perspective. The result is deep industry knowledge, expertise, and trend insights that deliver value for our partners and clients.

About the Authors:
Howard Smith
Managing Director
Howard Smith has nearly three decades of experience at First Analysis, working with entrepreneurs as an investor and as an advisor on growth transactions to help build leading technology businesses. He leads the firm's work in the Internet of Things, cybersecurity and internet infrastructure sectors. He also built the firm's historical franchises in call centers and computer telephony. His thought-leading research in these areas has been cited for excellence by the Wall Street Journal and other publications. He supports First Analysis' investments in EdgeIQ, Fortress Information Security, ObservIQ, Stamus Networks, Tracer and VisiQuate. Prior to joining First Analysis in 1994, he was a senior tax consultant with Arthur Andersen & Co. He earned an MBA with honors from the University of Chicago and a bachelor's degree in accounting with highest honors from the University of Illinois at Urbana-Champaign. He is a certified public accountant.
Liam Moran
Associate
Liam Moran is an associate with First Analysis. Prior to joining First Analysis in 2020, he was in the executive development program at Macy's, where he was responsible for the financial modeling surrounding Macy's $3 billion asset-based loan, capital project valuations, and corporate forecasting. Liam graduated from Kenyon College with a bachelor's degree in economics and a concentration in the integrated program in humane studies. He was a four-year member of the Kenyon varsity swimming team.
First Analysis Cybersecurity Team
Howard Smith
Managing Director
Matthew Nicklin
Managing Director
Liam Moran
Associate
First Analysis Quarterly Insights
Cybersecurity
Detection solutions prevent the spread of harmful deepfakes
January 10, 2024
  • Deepfake creation technology has evolved significantly from the rudimentary face swaps that first allowed everyday users to create low-quality deepfakes in the mid-2010s. Since then, deepfake creators, including bad actors, have developed a variety of creation methods, and the technology continues to evolve rapidly.
  • Governments, individuals and corporations are eager to find ways to stop malicious deepfakes, given their sometimes enormous monetary and societal costs. Deepfake detection companies address this need. They essentially reverse engineer the deepfake creation process to identify manipulated content (we illustrate this framing in the sketch following this list).
  • The criteria for choosing among deepfake detection solutions vary based on use case. We discuss use cases in news media, law enforcement and other governmental functions, banking, and general commerce. Each differs in the level and type of deepfake detection it needs.
  • We highlight a sample of large technology companies that offer deepfake detection solutions along with several deepfake detection specialists, including three for which we provide detailed profiles.
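
To make the reverse-engineering idea concrete, the sketch below frames deepfake detection as supervised binary classification: a model is trained on labeled real and fake frames to recognize the subtle artifacts generative models leave behind. This is a minimal illustration only; the tiny PyTorch model, its dimensions and all names are our own assumptions, not any vendor's implementation.

    # Minimal sketch: deepfake detection framed as binary classification.
    # A detector learns artifacts left by generative models (blending seams,
    # frequency irregularities) from labeled real/fake examples.
    # All names and sizes here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class DeepfakeDetector(nn.Module):
        """Small CNN that scores an image frame as real (0) or fake (1)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                 # 224 -> 112
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                 # 112 -> 56
                nn.AdaptiveAvgPool2d(1),         # global average pool
            )
            self.classifier = nn.Linear(32, 1)   # single logit for P(fake)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    model = DeepfakeDetector()
    frame = torch.rand(1, 3, 224, 224)           # stand-in for one video frame
    prob_fake = torch.sigmoid(model(frame)).item()
    print(f"probability frame is a deepfake: {prob_fake:.2f}")

Commercial detectors are of course far more elaborate, typically ensembling many such models and layering in signals such as audio-visual synchronization and metadata analysis; the sketch conveys only the classification framing.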

TABLE OF CONTENTS

Includes discussion of three private companies

Growing rapidly, harmful deepfakes exact high monetary and societal costs

Deepfake creation models continue to grow in complexity, creating more convincing fakes

Combating malicious deepfakes with detection software

Use cases influence buying behavior

Some players in the deepfake detection market

The truth is out there

Cybersecurity index opens wide lead over Nasdaq

Cybersecurity M&A: Notable transactions include Talon Cyber Security and Tessian

Cybersecurity private placements: Notable transactions include SimSpace and Phosphorus

Growing rapidly, harmful deepfakes exact high monetary and societal costs

Deepfakes are synthetic media generated by artificial intelligence (AI), created either entirely anew or by modifying real content, to produce compelling imitations of reality. They take the form of a wide variety of media, such as photos, videos, and audio recordings, and they make it difficult to distinguish fact from fiction. According to SumSub, an identity verification and fraud prevention company, the incidence of deepfakes was 10 times greater in 2023 than in 2022, clear evidence that deepfake creation technology is being used more than ever.

Although most sentiment around deepfakes is negative, the technology can be beneficial. One example is in marketing, where marketers can license actors’ identities and use deepfake technology to generate advertisements swiftly and cost-effectively, without requiring the actors to perform. Another example is using deepfake creation technology to personalize ad content based on individual customer preferences and demographics. Beyond marketing, deepfake technology is increasingly used in entertainment content such as television shows, movies and podcasts, where it can manipulate actors’ appearances and facial expressions to fit production needs.

Of course, deepfake technology is also often used to cause harm, a vivid example being the unauthorized use of people’s likenesses in pornography. In fact, the majority of current deepfake regulation in the United States deals with banning its use for nonconsensual pornography. For purposes of this report, however, the most relevant harmful uses of deepfake technology are influencing geopolitical events and public policy and perpetrating fraud. For example, hackers recently created and published a deepfake video of Ukrainian President Volodymyr Zelenskyy urging Ukrainians to lay down their arms in the conflict with Russia. (This deepfake was quickly identified and removed.) In early 2019, a deepfake video of Ali Bongo, president of Gabon, played a role in sparking an attempted military coup there. Many more examples are found and reported regularly.

In the context of fraud, the Federal Trade Commission reported that imposter scams resulted in $2.6 billion in losses in 2022, affecting over 36,000 victims. A somewhat well-known example is bad actors impersonating grandchildren to urgently ask a grandparent for money. In the corporate world, the CEO of a UK-based energy company received what he thought was a call from the parent company’s CEO requesting that he wire money to a Hungarian supplier. The CEO recognized the voice and transferred the funds, not realizing the voice was generated by AI; the money was lost.

©2024 by First Analysis Corporation.
One South Wacker Drive  ·  Suite 3900  ·  Chicago, IL 60606  ·  312-258-1400