If you’ve ever watched Minority Report, you know how unsettling the idea of arresting someone for a crime they haven’t yet committed can be. Today, elements of that fictional world are emerging in real life as police departments around the globe turn to AI to predict where crimes might occur and who might commit them.
This approach, known as predictive policing, uses artificial intelligence and data analytics to sift through crime reports and social data, identifying potential hotspots and individuals deemed at risk of offending. While the promise of enhanced public safety is appealing, the lack of transparency raises serious concerns about privacy and bias. For example, in Pasco County, Florida, algorithm-driven profiling subjected residents to repeated, unwarranted scrutiny, eventually sparking legal challenges centered on protecting constitutional rights.
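To make the mechanics concrete, here is a minimal, hypothetical sketch of what a grid-based hotspot model can look like under the hood: historical incident locations are binned into geographic cells, and cells whose counts exceed a threshold are flagged for extra patrols. The coordinates, cell size, and threshold below are invented for illustration and do not describe any real department’s system.

```python
# A minimal, hypothetical sketch of grid-based "hotspot" scoring.
# The incident data, cell size, and threshold are invented for illustration;
# this is not any department's actual system.
import math
from collections import Counter

# Toy incident records: (latitude, longitude) of past reports.
incidents = [
    (37.335, -121.893), (37.336, -121.894), (37.334, -121.892),
    (37.301, -121.850), (37.302, -121.851),
    (37.250, -121.800),
]

CELL_SIZE = 0.01  # degrees; each cell covers roughly a neighborhood


def to_cell(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to a discrete grid cell."""
    return (math.floor(lat / CELL_SIZE), math.floor(lon / CELL_SIZE))


# Count how many past incidents fall in each cell.
counts = Counter(to_cell(lat, lon) for lat, lon in incidents)

# Flag cells whose historical count meets a threshold as "hotspots".
THRESHOLD = 2
hotspots = {cell: n for cell, n in counts.items() if n >= THRESHOLD}

print(hotspots)
```

Even this toy version makes the core concern visible: the output depends entirely on which incidents were recorded in the first place, so over-policed neighborhoods generate more data and are flagged again, and a biased input history produces biased “hotspots.”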
Similar experiences in Chicago and Los Angeles, where public backlash forced authorities to abandon comparable systems, underscore the risks of relying on opaque AI tools without strict guidelines. When algorithms operate as black boxes, citizens lose the ability to question or understand decisions that could significantly affect their lives.
San Jose, California, offers a refreshing counterpoint. By embedding principles of transparency and equity into how it adopts these tools, the city is taking steps to demystify them and ensure that technology supports, rather than undermines, fair and accountable justice. If you’re concerned about bias or loss of accountability, San Jose’s model is an encouraging sign that reforms are both possible and necessary.
As predictive policing continues to evolve, communities face a tough choice: regulate the technology for greater accountability, reimagine its role in public safety, or back away entirely. Balancing innovative use of AI with strong ethical standards is essential for building trust and protecting individual rights.