In the latest string of big news stories on facial recognition, it's hardly surprising that London is the next victim of the authorities' attempt to reduce crime and curb antisocial behaviour through surveillance.
There's been anger over the recent announcement that Cardiff's football stadium would be using the invasive technology, and last year it was revealed that the Metropolitan Police had supplied images for facial recognition scans at the King's Cross estate, causing outrage across the nation.

Personally, I'm not too concerned by the news, because if you have nothing to hide then you shouldn't be worried, right? But I can see where the issues lie. In May we found out that the software misidentified members of the public, so-called "false positives", in 96% of scans. If we're going to be subjected to this technology, surely it should at least be accurate in the majority of cases.

Another concern is deepfake technology; if you watched 'The Capture', you'll know what I'm worried about. While they may seem futuristic, deepfakes are already impacting our lives and businesses. Last year a UK business was scammed out of over $240,000 after its CEO fell for a fake voicemail, generated by artificial intelligence, supposedly from the chief executive of the firm's German parent company. And it's not just businesses that are being targeted: according to Forbes, there are now 15,000 deepfake videos on social media, making it even harder to know who to trust in the age of "Fake News".
The Metropolitan Police has announced that it will use live facial recognition cameras operationally for the first time on London's streets. The cameras will be in use for five to six hours at a time, with bespoke lists of suspects wanted for serious and violent crimes drawn up for each deployment.