Are we ready for (extremely realistic) fake audio clips to join ‘Photoshopped’ images and DeepFake videos in the arsenal of electoral manipulators?
The Campaign Tech newsletter (part of Campaigns and Elections) has the following story:
“Remember Lyrebird, the fake audio startup? They’re running a private beta for a new tool called Overdub that lets users upload an audio clip and then amend what’s being said by the speaker by simply editing the transcript of the original audio clip.
Descript (Overdub/Lyrebird’s parent) stresses that they’re committed to only letting users synthesize their own voice.
But we all know that beating bad actors is a game of whack-a-mole. And even if Descript is successful on this front, it’s absurd to think every fake audio vendor will be (or will want to be).
The takeaway: sophisticated off-the-shelf audio manipulation tools are going to become as accessible as photo-editing tools.”
And while the American fear is that this might be used by domestic political opponents, much of the rest of the world will be concerned about overseas disruptors: some want to tilt the balance in favour of a particular candidate, while others simply want to cause chaos.
Such fakes, if uncovered, would fall under Twitter's and Facebook's policies of labelling manipulated media and demoting it in their algorithms. But both companies have said they will not do so when the tweets or posts come from newsworthy figures such as world leaders.