Like your average millennial, I spend a fair amount of my time scrolling through my social media feeds. I start on Instagram, then Facebook (sometimes) and then work my way to LinkedIn. Call it my daily ritual.
Since my LinkedIn account is usually flooded with work-related information, I didn’t anticipate coming across articles that I would find deeply problematic, or even personal.
Don’t get me wrong, I am huge on strong headlines. My motto for creating impactful headlines that cut through the noise has always been to simplify and exaggerate. It works every time. But whatever the media’s reputation for spinning the truth, being factually correct is the only way to go.
So when I stumbled across the following article, the headline struck a chord with me.
The headline reads ‘That viral video of George Floyd’s death shows why deepfakes are incredibly dangerous’.
Back in May, when the infamous George Floyd video circulated, like many, I stumbled across it on social media. And just like many black people, it made me examine myself and my contribution to my community in a world that struggles to see our value. As the mother of a black boy, I feel obligated to speak out when I see injustice. I no longer want to be reticent when issues of race and diversity come up. With the rise of deepfakes, I fear that what is to come will be even more divisive for society than what we see right now.
Luckily for me, I’ve had more clients who wanted to take care when addressing those issues than clients championing damaging narratives. They knew their position as leaders and the platform it afforded them, and they acknowledged that they didn’t have lived experience to draw from, so they did substantial research and had conversations with those who are impacted before making statements.
What troubled me about this article is its clickbait headline, and how it insinuates, with no proof, that the infamous, disturbing video is a deepfake. The author appears to have chosen a headline that is timely, sensitive and likely to provoke an emotional reaction from the reader. To make things worse, if you read the article, there is no claim that the video is fake, just speculation. It is simply fake news woven into something true to give it that believability factor.
While most people can identify videos that have been tampered with, the quality of deepfake videos has improved over the past year. When synthesised content created using artificial intelligence is at play, the arguments about what is fake news and what is real become even more complicated. If seeing is no longer believing, who is responsible for verifying the news in an era where the media’s livelihood is tied to click-through rates and ad spend?
Technology giants such as Facebook, Twitter and Google, whose platforms are a hotbed for conspiracy theorists and fake news, could play a huge role in the fight against disinformation, but little has been done so far.
According to recent research, Facebook owns the four most downloaded apps of the decade: Facebook, Facebook Messenger, Instagram and WhatsApp. This means that, across all these platforms, Facebook has the power to roll out a plan to minimise or control the spread of fake news, which will soon be fuelled by the development of deepfake videos.
Yet, Mark Zuckerberg has called on governments to play an active role in controlling internet content. He asked for “common rules that all social media sites need to adhere to, enforced by third-party bodies, to control the spread of harmful content”. Although that sounds like a reasonable request, it strikes me as an unwillingness to play his part and lead by example.
Big technology firms made a name for themselves by disrupting the traditional ways of doing almost anything, often in ways beyond governments’ imagination. Asking governments to take control of the fake news and misinformation that these platforms are responsible for spreading therefore feels like a joke, particularly because government is simply not equipped for it. If you recall Mark Zuckerberg’s testimony to Congress back in 2018, lawmakers’ ability to understand how these businesses work is nothing short of laughable.
Ultimately, no regulation can work unless users understand the power they hold. The tech giants have shown that they are not willing to tame the beast they have created, and the media is fighting for survival and therefore too concerned with its profit margins. For me, the answer is tough but clear. We, as consumers, must realise our power and stop rewarding brands and entities that don’t contribute to the safety of our society. And for those who feel we have no power, all you have to do is look at the impact of the #BlackoutTuesday protests on 2nd June, held in response to the death of George Floyd, which led to a 40% dip in UK social media ad spend. We, as a society, can demand that companies behave as responsible businesses.
The amount of deepfake content online is growing at a rapid rate. At the beginning of 2019 there were 7,964 deepfake videos online, according to a report from the startup Deeptrace; just nine months later, that figure had nearly doubled to 14,678. It has no doubt continued to balloon since then.