Meta Believes Social Media Can Shield Us from Deepfakes

Deepfakes represent one of the most significant threats posed by AI technology. Creating realistic fake images, audio clips, and videos has become alarmingly easy. Below, you’ll find examples featuring deepfakes of Morgan Freeman and Tom Cruise.

While social media is currently a platform that amplifies deepfakes, Adam Mosseri, head of Instagram, believes it can also serve an essential function in debunking them …

How Deepfakes Are Produced

The primary technique used to generate deepfake videos is the generative adversarial network (GAN).

A GAN pits two AI models against each other: one model (the generator) creates fake video clips, while a second model (the discriminator) tries to distinguish those fakes from authentic footage. By running this cycle repeatedly, the generator learns to produce increasingly convincing imitations.
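The adversarial loop described above can be sketched in a few lines of Python. This is a deliberately tiny, one-dimensional toy using NumPy rather than a real video model: the "generator" is a hypothetical linear map and the "discriminator" a logistic classifier, stand-ins for the deep networks used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = w*z + b, maps random noise z to fake "samples"
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), estimates P(x is real)
a, c = 0.1, 0.0

lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # "authentic footage": samples from N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: adjust w, b so the discriminator labels fakes as real
    d_fake = sigmoid(a * fake + c)
    dL_dx = -(1 - d_fake) * a            # gradient of -log D(fake) w.r.t. each fake
    w -= lr * np.mean(dL_dx * z)
    b -= lr * np.mean(dL_dx)

# After training, generated samples should drift toward the real distribution's mean
samples = w * rng.normal(0.0, 1.0, 1000) + b
```

Each round, the discriminator gets slightly better at spotting fakes, which in turn gives the generator a sharper signal about what "real" looks like; that feedback loop is what makes GAN-produced deepfakes improve so quickly.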

More recently, however, diffusion models such as the one behind DALL-E 2 have taken precedence. These models are trained on real images that are progressively corrupted with noise, and they learn to reverse that corruption to reconstruct a clean image. Users can provide text prompts to guide the model toward a desired outcome, which makes it far simpler to use, and with increased usage, these models improve over time.
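The "progressively corrupted with noise" part can be illustrated with a short NumPy sketch of the forward noising process used in DDPM-style diffusion models. The schedule values here are illustrative assumptions, and the neural network that learns to reverse the process is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear beta schedule over T steps (illustrative DDPM-style values)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # how much of the original signal survives at step t

def noisy_sample(x0, t):
    """Forward process: blend the clean image x0 with Gaussian noise at step t."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

x0 = rng.normal(size=(8, 8))        # stand-in for a real training image
x_early, _ = noisy_sample(x0, 10)   # early step: still mostly the original image
x_late, _ = noisy_sample(x0, T - 1) # late step: almost pure noise
```

Training teaches a network to predict the noise added at each step; at generation time the model starts from pure noise and runs the schedule in reverse, with the text prompt steering each denoising step.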

Examples of Deepfake Videos

Consider this well-known deepfake of Morgan Freeman, produced three years ago when the technology was considerably less advanced:

Next, there’s a deepfake of Tom Cruise as Iron Man:

Additionally, Brits may recognize Martin Lewis, a popular financial advisor, in this deepfake promoting a cryptocurrency scam:

According to Meta executive Adam Mosseri, social media could actually improve the situation by helping to identify and flag fake content. Yet, he acknowledges that the current system isn’t foolproof and emphasizes the importance of scrutinizing sources.

As technology has evolved, we’ve increasingly refined our ability to generate realistic images, both static and dynamic. Movies like Jurassic Park amazed me at age ten, but that required a $63 million budget. Four years later, I was even more impressed by GoldenEye for N64 due to its real-time graphics. When we look back at these earlier media, they now seem rudimentary. Regardless of your stance on the technology, generative AI is undeniably producing content that closely resembles actual recordings—and it’s advancing rapidly.

A friend of mine, @lessin, suggested nearly a decade ago that we should evaluate any claim not just based on its content, but also on the credibility of its source. While this concept might have gained traction earlier, it feels especially relevant now as we collectively realize the importance of questioning who is delivering information, rather than just what is being communicated when determining a statement’s legitimacy.

Our responsibility as online platforms is to label AI-generated content as accurately as possible. Nonetheless, some material will inevitably bypass detection, and not every misrepresentation arises from AI. Therefore, we must also provide context surrounding the source so users can judge how much trust to place in the content they encounter.

It’s becoming increasingly crucial for viewers and readers to adopt a discerning perspective when they engage with material claiming to depict reality. My advice is to *always* consider the identity of the speaker.

Image: Shamook

