UK authorities have been monitoring thousands of people using the London Underground.
They were monitoring passengers' movements, behaviour and body language with artificial intelligence surveillance software designed to detect whether they were committing crimes or were in unsafe situations, new documents published by Wired reveal.
The machine learning software was combined with live CCTV footage to try to detect aggressive behaviour and brandished weapons such as knives, as well as to spot people hiding or evading fares.
From October 2022 to the end of September 2023, Transport for London (TfL), which runs the city's tube and bus network, tested 11 algorithms to track people passing through Willesden Green tube station, in the northwest of the city.
The proof of concept is the first time the transport operator has combined AI and live video to create alerts sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 of them delivered to station staff in real time.
Documents obtained by Wired detail how TfL used a wide range of algorithms to track people's behaviour while at the station.
It is the first time full details of the trial have been reported, and it follows a statement from TfL in December saying it would expand its use of artificial intelligence to detect breaches at more stations across the British capital.
In the trial at Willesden Green – a station that had 25,000 visitors a day before the Covid-19 pandemic – the AI system was designed to detect potential safety incidents so that staff could help people in need. But it also targeted criminal and anti-social behaviour.
Three documents obtained by Wired describe how artificial intelligence models were used to detect wheelchairs, vaping, and people accessing unauthorised areas or putting themselves at risk by approaching the edge of train platforms.