Continuous digital innovation and its adoption by industry provide criminals with new opportunities to exploit unforeseen vulnerabilities for personal gain. In equal measure, however, many of these technological advances also enhance the ability of the police and security apparatus to combat all manner of illegal activity. This is demonstrated by the national security, police and counter-crime agencies currently working to leverage the potential benefits of data analytics and artificial intelligence.

Predictive policing is essentially the analysis of accurate data to predict the likelihood of a crime occurring in the future, providing the opportunity for preventative or pre-emptive action to be taken. In the UK, a £4.5m proof-of-concept project called the National Data Analytics Solution (NDAS) has been commissioned. The programme draws on information already held by the police – including incident logs, custody records, and conviction histories – for roughly five million people. Using machine-learning techniques, the aim is to calculate a risk score for each individual based on their likelihood of committing a crime in the future.
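To make the idea concrete, the sketch below shows how such a risk score might be derived in principle. It is a minimal, hypothetical illustration using synthetic data and a simple logistic regression; the feature names, model choice and data are assumptions made for the purpose of explanation and do not reflect the actual NDAS implementation.

```python
# Illustrative sketch only: a minimal risk-scoring model on synthetic data.
# Feature names and model choice are assumptions, not the NDAS approach.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical features loosely analogous to the record types mentioned above:
# counts of prior incidents, custody episodes and convictions.
X = np.column_stack([
    rng.poisson(2.0, n),   # incident count
    rng.poisson(0.5, n),   # custody count
    rng.poisson(0.3, n),   # conviction count
])

# Synthetic label: higher counts make a future offence more likely.
logits = 0.4 * X[:, 0] + 0.8 * X[:, 1] + 1.2 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# The "risk score" here is simply the predicted probability of the positive class.
risk_scores = model.predict_proba(X_test)[:, 1]
print("Example risk scores:", np.round(risk_scores[:5], 3))
```

In a real system the inputs, model and validation process would be far more complex, but the principle is the same: historical records are converted into features, and a model outputs a probability that is treated as a risk score.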

Understandably, such technology is not without controversy, and there are notable ethical concerns around the risk of algorithmic bias and infringement of an individual's civil liberties. Airbus partner RUSI has been chosen by the UK's Centre for Data Ethics and Innovation (CDEI) to conduct research in this field and develop a code of practice for the trialling of predictive analytical technology in policing.
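By way of illustration, the hypothetical check below continues the earlier sketch and compares mean predicted risk and "high risk" flag rates across a synthetic group attribute. A real audit would rely on established fairness tooling, representative data and domain review rather than this simplified comparison; it is included only to show the kind of disparity an algorithmic-bias review would look for.

```python
# Hypothetical fairness check (assumes the risk_scores array from the sketch above).
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=len(risk_scores))  # synthetic binary group attribute

threshold = 0.5
flagged = risk_scores >= threshold  # individuals flagged as "high risk"

for g in (0, 1):
    mask = group == g
    print(
        f"group {g}: mean risk = {risk_scores[mask].mean():.3f}, "
        f"flag rate = {flagged[mask].mean():.3f}"
    )

# Demographic-parity gap: difference in flag rates between the two groups.
parity_gap = abs(flagged[group == 0].mean() - flagged[group == 1].mean())
print(f"demographic parity gap: {parity_gap:.3f}")
```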

For this potentially significant technology to be integrated effectively into the capabilities of the police and security services, a collaborative government-industry approach is required to ensure that the issues raised by the use of machine-learning algorithms in decision-making are addressed and the risk of bias is mitigated.