Met Police release footage to help detect live-streamed terror attacks

Footage of police firearms officer training has been released as part of the Metropolitan Police’s work with Facebook to improve the detection of live-streamed, Christchurch-style terror attacks.

The force hopes the videos will help the social network develop technology that can identify terror attacks as they unfold, so the company can tip off police early and prevent the footage being broadcast.

The tech giant provided officers at the Met’s firearms training centres with body cameras, and the footage will be shared to help its artificial intelligence identify videos of real-life, first-person gunman incidents more accurately and rapidly.

Facebook came under fire for the spread of a live-streamed video showing the New Zealand mosque shootings in March, which left 51 people dead.

The video was viewed fewer than 200 times during the live broadcast and was watched about 4,000 times in total before being removed.

However, footage of the terror attack was downloaded by users and shared widely online in the aftermath of the far-right attack.

Facebook largely relies on AI to spot terrorist content and remove it as quickly as possible, but in the case of the Christchurch terrorist attack, the company claims it did not have enough first-person footage of violent events for its systems to learn from.

The police hope the technology could help the company notify security services more quickly of unfolding terror attacks, as well as flag potentially violent and extremist content for removal.

Footage of police firearms officer training has been released as part of the Metropolitan Police’s work with Facebook to improve detection of live-streamed Christchurch-style terrorist attacks (Scotland Yard/PA)

Commander Richard Smith, head of the Met’s Counter Terrorism Command, said: “Facebook reached out to the Met as we have worked with them on numerous occasions before to remove online terrorist propaganda.

“The live-streaming of terrorist attacks is an incredibly distressing method of spreading toxic propaganda, so I am encouraged by Facebook’s efforts to prevent such broadcasts.

“Stopping this kind of material being published will potentially prevent the radicalisation of some vulnerable adults and children.

“The footage we are capturing shows our highly skilled firearms officers training to respond with the utmost expertise to a wide range of scenarios, including the kind of attacks we want to stop terrorists broadcasting.”

The videos will also be shared with the Home Office so that they can be passed on to other tech firms to help them develop similar technology.

Officers now routinely attach cameras to their equipment, allowing for a unique “shooter” perspective.

The Met’s firearms team carry out filmed training exercises simulating terrorist incidents and hostage situations in a range of settings, including on public transport and on waterways.

Machine learning technology, or artificial intelligence, requires a large amount of varied imagery to learn to identify terrorist firearms footage.
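As a loose sketch of the kind of supervised learning involved (this is not Facebook’s actual system; the model choice, labels and placeholder data below are all illustrative assumptions), a pretrained image classifier can be fine-tuned in a few lines of Python to label video frames, and the more varied the labelled footage it sees, the better it generalises:

    # Illustrative sketch only -- not Facebook's system. A pretrained image
    # classifier (ResNet-18) is fine-tuned to label frames as benign (0) or
    # a potential firearms incident (1). Random tensors stand in for real,
    # labelled video frames.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    # Start from ImageNet-pretrained weights and swap in a two-class head.
    model = resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Placeholder batch: 8 RGB frames at 224x224 with made-up labels.
    frames = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for step in range(3):  # a few illustrative gradient steps
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss {loss.item():.3f}")

The point of the Met’s footage is to replace placeholder data like this with large volumes of realistic, varied frames, which is what would let such a classifier generalise to real first-person attack video.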

UK police have established the world’s first unit designed to work with online service providers to remove terrorist material online.

The Counter Terrorism Internet Referral Unit, which is based within the Met, supports hundreds of national counter terrorism investigations by probing suspects and their networks.

In May this year, Facebook, along with Amazon, Google, Microsoft and Twitter, agreed a nine-point action plan, known as the Christchurch Call to Action, following a meeting with world leaders in Paris.

Facebook says it has banned more than 200 white supremacist organisations from its platform, and in the last two years has removed more than 26 million pieces of content related to global terrorist groups such as so-called Isis and al Qaida.

Erin Saltman, counter-terrorism policy manager at Facebook, said: “Violent extremist and hate-based content has no place on our platforms, and in the last two years we have removed 26 million pieces of content from global terrorist groups.

“The footage from this partnership with the Met Police will improve our artificial intelligence technology, helping us more quickly identify and remove dangerous content.

“Crucially, we will make this technology available to the wider tech industry so collectively, we can prevent the spread of harmful content.”

Chris Price
For latest tech stories go to TechDigest.tv