AI Compass: Bringing Together Behavioural, Ethical and AI Research

Our cities are growing—and with them, the risk of crowd-related crises in public spaces. Artificial intelligence (AI) has the potential to support police and other crowd managers in their vital work to prevent or manage such crises. But how can we ensure that the AI tools used are privacy-conscious, transparent, traceable, unbiased, and ethical? That’s where AI Compass comes in. 

Crowding is a regular occurrence at busy transit hubs, on shopping streets, at beaches during peak season, and in stadiums, as well as during demonstrations and festivals. Increasingly, such crowding escalates into dangerous incidents that pose serious risks to public safety and can cause significant societal and economic harm.

Proof of Concept: AI in Action

To tackle this pressing issue, AI, combined with sensing technologies, has proven valuable for decision-makers, police, and crowd managers. One example is the AI-based decision support system Crowd Safety Manager (CSM), which has been tested at major events such as the 2022 Vuelta in Brabant and Utrecht, the 2023 Rotterdam Marathon, and Koningsnacht (King's Night) 2023 in The Hague. By providing a shared operational picture and forecasting crowd conditions, CSM has effectively supported situational awareness and proactive decision-making in control rooms.

Barriers to Trust and Adoption

While AI clearly has potential for analyzing large volumes of data, concerns remain, particularly around the role of automation in human-AI collaboration. A recent survey of public servants and police officers in Germany found that 68% of respondents “distrust” or “rather distrust” software in the context of crisis management. Such concerns can easily slow down, or even block, the adoption of AI in crowd management.

The AI Compass Initiative

That’s why 27 organizations joined forces in 2024 to launch AI Compass. This project brings together behavioural science, ethics, and AI research to better understand how human-AI teams can and should be designed. Our goal is to improve the management of crowd-related crises—especially under conditions of time pressure and uncertainty—while safeguarding human oversight and ethical values.

Core Questions We Aim to Answer

We focus on addressing questions such as:

  • How can transparency and accountability be ensured, even when dealing with sensitive or protected data?
  • How can values like liveability, safety, and hospitality be reflected in the recommendations of an AI system?
  • How can we ensure that humans stay in control, even when AI systems automatically collect and analyze information?
  • How do we prevent a “Big Brother” effect?

To address these and other challenges, we have defined five pathways. These pathways will guide the development of truly responsible AI tools for crowd management: tools that enhance public safety while respecting societal values.