Facebook, Microsoft and a number of US-based universities have joined forces to sponsor a contest promoting R&D to combat deepfakes: images, videos and other content altered through artificial intelligence (AI) to mislead viewers.
Misleading information (or misinformation) is not new – propaganda is a tactic as old as time. However, with advances in AI and the glut of data being generated and consumed daily, discerning the integrity of what we see, hear or read, at scale, is increasingly difficult.
Why is this important?
Trust is the bedrock of a functioning and civil society: democratic elections, news and media, personal reputations, business transactions, politics. In the first known case of phishing and scamming using audio deepfakes, in March 2019 malicious actors created a near-perfect impersonation of a chief executive’s voice – and then used the audio to fool his energy company into transferring €220,000 (about US$243,000) to their bank account.
We need to build the capacity and capability to detect when content – video, audio, image or text – lacks integrity. At the moment we rely on human intuition, but fakes are becoming harder and harder for people to discern (and how do we teach these human skills going forward?).
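To make the idea of automated detection concrete: it is usually framed as binary classification – extract signals from a piece of media, then score the probability it was manipulated. The Python sketch below is a toy illustration only; the feature names and weights are invented for exposition, whereas real detectors learn such features from large labelled datasets (which is exactly what the challenge aims to enable).

```python
import math

# Toy sketch: deepfake detection framed as binary classification.
# The features and weights below are entirely hypothetical -- real
# detectors learn their features from large labelled datasets.
WEIGHTS = {
    "boundary_artifacts": 2.4,   # visible blending seams around a swapped face
    "audio_sync_error": 1.5,     # lip movement drifting from the audio track
    "blink_rate": -1.8,          # natural blinking lowers the fake score
}
BIAS = -0.5

def fake_probability(features: dict) -> float:
    """Logistic model: P(manipulated) from hand-picked media features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A clip with strong blending artifacts and lip-sync drift scores high...
suspicious = fake_probability(
    {"boundary_artifacts": 0.9, "audio_sync_error": 0.8, "blink_rate": 0.1}
)
# ...while a clean clip scores low.
clean = fake_probability(
    {"boundary_artifacts": 0.05, "audio_sync_error": 0.02, "blink_rate": 0.9}
)
```

The hard part, of course, is not the final scoring step but learning features that generalise to manipulation techniques never seen in training – which is why open challenge datasets matter.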
The Deepfake Detection Challenge is a start; it should be followed by major investment and innovation opportunities in this space.
The challenge: https://deepfakedetectionchallenge.ai/
Image by analogicus from Pixabay