As the data accumulated by humanity grows exponentially, so does the technology designed to analyse it. Hence, in recent years, artificial intelligence (AI) has emerged at the heart of international debate over its ethical and responsible use.
On 26 March, Mr. Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), virtually visited the OSCE Academy in Bishkek to address the use of AI in ensuring law and order. He delivered an online lecture, ‘Global Perspective on AI-Enabled Crime and Crime Prevention’. The lecture was part of the course ‘Contemporary Security Issues’ taught by Dr. Elena Zhirukhina within the MA Programme in Politics and Security.
During his lecture, Mr. Beridze emphasized that AI is a powerful instrument for meeting the demands of law enforcement work. No human analyst can compete with the productivity of advanced algorithms that process big data to assist in crime prevention and investigation. Mr. Beridze noted that there are currently four main domains of AI usage in the law enforcement context: audio processing, visual processing, resource optimization, and natural language processing. Audio processing, for example, can help with voice profiling to identify suspects. Visual processing improves the performance of surveillance systems that monitor public order. Resource optimization assists in distributing resources more efficiently by identifying crime hot spots. Finally, natural language processing helps with the classification of evidence and the interpretation of foreign languages.
However, AI-powered technology not only opens prospects for improving the quality and speed of law enforcement work; it can also be misused or abused, Mr. Beridze underlined. That is why any application of AI in law enforcement should comply with human rights and with principles such as fairness, accountability, transparency, and explainability. Fairness requires that any application of AI be non-discriminatory in nature. Accountability demands a clear attribution of responsibility for any action or decision influenced by algorithms. Transparency refers to clarifying the processes involved in the creation and use of AI, while explainability requires that conclusions reached by algorithms be interpretable by both their users and the people they affect.
Mr. Beridze also underscored the importance of raising public awareness of the benefits and risks associated with AI. He said that public trust is key to the successful deployment of AI-powered capabilities in crime prevention and criminal justice, and that public trust can only be earned through the human-rights-compliant and responsible use of this powerful technology.
For more information about the latest developments in AI, please visit the UNICRI website (http://www.unicri.it/) and follow Mr. Irakli Beridze on Twitter @Irakli_UN.