AI Tools and Public Trust: You Can Have Both

This article first appeared in IACP Police Chief Magazine, January 2023 Issue.

Child sex trafficking crimes are being solved with the help of advances in artificial intelligence (AI) and machine learning (ML). Designed to support law enforcement in sex trafficking investigations, web-based applications are being used by officers in all 50 U.S. states and Canada. These innovations grew out of an acknowledgement that technology was playing an enabling role in crime but had not yet been harnessed as part of the solution. That reality is changing with the application of data science and AI to combat crime on many fronts.

So, why does the public have strong reservations about police using breakthrough technologies despite their crime-fighting value?

The criminal element continuously exploits new and evolving technologies to perfect their tradecraft, defraud victims on a massive scale, anonymize their identities, perpetrate unspeakable crimes, and launder their ill-gotten proceeds. Combating these complex crimes requires a mix of traditional and modern investigative approaches. Policing organizations must adopt and deploy advanced investigative capabilities to sift through terabytes of data, rapidly identify victims at imminent risk, detect evolving threats, and optimize resource deployments.

AI-powered tools such as facial recognition, predictive analytics, social network analytics, and license plate readers are powerful force multipliers for law enforcement operations in their mission to maintain community safety and security. But in the pursuit of maintaining a competitive edge in fighting crime, law enforcement organizations have not always exercised appropriate due diligence when onboarding and deploying advanced technologies. While intended to serve the public interest, AI-powered policing solutions, when mishandled, can be highly intrusive on individuals’ privacy, perpetuate the over-policing of marginalized communities, and contribute to a growing chasm of mistrust between law enforcement institutions and the citizenry they serve.

In deploying AI-based solutions, police forces will need to balance new capabilities against risks across every stage of the technology lifecycle, including data quality and integrity, data and algorithmic bias, and model drift. The potential for AI to generate unintended negative consequences exists in any industry, but it is especially acute in law enforcement, given the range of high-impact applications and the fragility of public trust.
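As a concrete illustration of one of these risks, the minimal sketch below shows one common way model drift can be quantified: the Population Stability Index (PSI), which compares the distribution of a model's scores at validation time with the distribution observed later in operation. The data, variable names, and review threshold are illustrative assumptions, not a prescription for any particular policing system.

  import numpy as np

  def population_stability_index(baseline, current, bins=10):
      """Quantify drift between a model's baseline and current score distributions."""
      # Bin edges come from the baseline scores so both periods are compared on the same scale.
      edges = np.histogram_bin_edges(baseline, bins=bins)
      base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
      curr_pct = np.histogram(current, bins=edges)[0] / len(current)
      # Clip away empty bins to avoid division by zero and log(0).
      base_pct = np.clip(base_pct, 1e-6, None)
      curr_pct = np.clip(curr_pct, 1e-6, None)
      return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

  # Illustrative use: scores logged when the model was validated vs. scores seen this quarter.
  baseline_scores = np.random.default_rng(0).normal(0.4, 0.10, 5_000)
  current_scores = np.random.default_rng(1).normal(0.5, 0.12, 5_000)
  print(f"PSI = {population_stability_index(baseline_scores, current_scores):.3f}")
  # Rules of thumb vary; many practitioners flag values above roughly 0.1-0.25 for review.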

The Call for Trustworthy AI

This truth underlies the widespread call for trustworthy AI within law enforcement. A 2019 report highlighted the particular challenge of finding the balance between security and privacy.[1] Citing the risk that these systems may result in the violation of fundamental human rights, the report recommended that AI in law enforcement should be characterized by fairness, accountability, transparency, and explainability. 

A 2021 UK report found many benefits in the use of AI for policing but expressed alarm at the proliferation of AI tools potentially being used without proper oversight: “(W)e discovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government, and legislation have not kept up with.”[2]

There are hard costs to AI done badly in the commercial sector, but the potential loss of trust between a community and its police is a special kind of risk. If poor AI results in a bad music recommendation, that’s one thing, but what if it impacts civil liberties? Criminal risk assessment is one of the most controversial tools used by law enforcement. Designed to estimate the likelihood of re-offense and inform sentencing decisions, these assessments carry the risk that historical bias in the data and algorithms opens the door for AI to inform police operations and get it wrong.[3] Predictive policing initiatives employing AI have come under fire following studies that found they reinforced existing patterns of policing and historical bias, leading to over-policing of certain neighborhoods.[4] These data-driven programs have been halted in several U.S. cities amid public outcry, sometimes following years of operation without public knowledge.

These examples highlight why every police board and department should be moving forward on AI governance today. Just as commercial entities are working to implement safe and ethical AI, law enforcement bodies need to lean into the challenge of developing the principles, processes, and tools to operationalize ethical AI. A 2022 publication acknowledged the many reports and white papers offering principles for the responsible use of AI by government, civil society organizations, and the private sector but lamented the absence of a shared framework for thinking about the responsible and ethical use of AI that is specific to policing.[5] The IACP Technology Policy Framework, released in 2014, sets out sound considerations for police leaders’ policy, but it would likely benefit from a review to ensure it reflects the current policing environment and the advancements in technology, including AI, of the past decade.[6]

Case Study: Facial Recognition

In their efforts to combat the horrific crime of online child sexual exploitation, various Canadian police agencies partnered with a private sector entity to leverage facial recognition technologies (FRT) to identify, locate, and rescue at-risk victims. A joint investigation by federal and provincial privacy authorities concluded that the use of this technology was fraught with risks. Privacy commissioners were highly critical of police use of a vendor that had amassed billions of photographs of individuals from the internet without their consent. They further criticized police for their lack of transparency and for “serious and systemic failings” regarding effective controls when deploying novel technologies. 

In response to the federal privacy commissioner’s investigative findings, the Standing Committee on Access to Information, Privacy and Ethics, a committee of the Parliament of Canada, launched a study on the use and impacts of FRT and the growing power of AI. The committee’s study concluded in October 2022 with the tabling of a report in Parliament containing 19 recommendations.[7] Some noteworthy recommendations included

  • creating a regulatory framework around the use of FRT and setting out clear penalties for violations by police;
  • amending procurement policies to require government institutions that acquire FRT or other algorithmic tools to make those acquisitions public;
  • establishing robust policy measures for the use of FRT, which could include immediate and advance public notice and public comment and consultation with marginalized groups and independent oversight mechanisms; and 
  • implementing legislation that defines acceptable uses of FRT or other algorithmic technologies and prohibits other uses, including mass surveillance.

Executive police leaders and policing oversight bodies need not, and must not, wait for legislation and regulatory frameworks to be passed before taking action. Maintaining and nurturing the social contract between law enforcement and the communities they serve is a core tenet of modern policing. And prohibiting the use of advanced technologies is not the answer. Police require access to modern crime-fighting tools to deliver on their mission of public safety. The conversation needs to be about how to effectively govern the use of AI and surveillance technologies. Solutions that enhance accountability, transparency, and trust are much needed.

Secret Weapon: Transparency

What questions should law enforcement policy makers be asking? The ethical deployment of AI requires development of a set of policies, processes, and procedures to measure, monitor, and ensure the trustworthiness of data and models. These are effective guardrails for AI and should incorporate a set of quantifiable measures and metrics on which the trustworthiness of models can be evaluated. 
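To make such quantifiable measures concrete, the minimal sketch below computes one widely used fairness indicator, the demographic parity gap: the difference in positive-outcome rates between groups in a model's outputs. The predictions, group labels, and the suggested review threshold are illustrative assumptions only, not drawn from any deployed system.

  from collections import defaultdict

  def demographic_parity_gap(predictions, groups):
      """Difference between the highest and lowest positive-prediction rates across groups."""
      totals, positives = defaultdict(int), defaultdict(int)
      for pred, group in zip(predictions, groups):
          totals[group] += 1
          positives[group] += int(pred)
      rates = {g: positives[g] / totals[g] for g in totals}
      return max(rates.values()) - min(rates.values()), rates

  # Illustrative use with made-up model outputs (1 = flagged for follow-up).
  preds = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
  groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
  gap, rates = demographic_parity_gap(preds, groups)
  print(rates)                      # per-group positive rates
  print(f"parity gap = {gap:.2f}")  # a policy might require review when the gap exceeds an agreed level, e.g., 0.1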

There are examples of policing institutions taking concrete steps. In February 2022, the Toronto Police Services Board adopted a policy on the use of artificial intelligence technology—the first of its kind in Canada.[8] The policy establishes board governance over the use of new and existing AI-enabled technologies and sets out an assessment and accountability framework. Similarly, in March 2021, the Royal Canadian Mounted Police (RCMP) created the National Technologies Onboarding Program (NTOP) “to centralize and bring more transparency to the processes that govern how the RCMP identifies, evaluates, tracks, and approves the use of new and emerging technologies and investigative tools that involve the collection and use of personal information.”[9] NTOP seeks to establish national assessment standards and to evaluate the impact of advanced investigative technologies on the privacy of individuals.

AI legislation has been proposed in the European Union (EU) and Canada that would impose substantial penalties for breaches of rules governing AI development and use.[10] The EU’s Artificial Intelligence Act places the greatest regulatory burden on high-risk applications and includes provisions specifically pertaining to law enforcement. The passage of these and other new regulatory instruments targeting data and AI is inevitable, but that doesn’t mean organizations should wait to implement robust approaches to AI governance.

Governance Is Not One-Size-Fits-All

AI governance needs to be department specific and case specific. Several practical steps have proven successful, including education, development of a department-specific framework, and establishment of AI governance guardrails.

A possible action plan could include

  1. Holding a police board–level education or awareness workshop (e.g., building shared understanding, assigning actions to the appropriate committee, and creating an opportunity to strategize)
  2. Conducting and monitoring an inventory of AI-enabled law enforcement applications within the jurisdiction (e.g., existing, planned, and embedded projects)
  3. Developing a case-specific framework based on stakeholder-driven questions for establishing trust
  4. Implementing a platform for technology guardrails and transparent monitoring against predetermined metrics, with triggers for governance actions

AI governance brings considerations at both the department and project levels. Effective oversight can best be achieved through the establishment of an AI governance platform for guardrails, metrics, and a repository of due diligence tools. AI is dynamic, so use cases must be monitored over time. The key is to translate important AI technical details into governance visuals and operational thresholds that trigger action.
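One hypothetical way to wire such operational thresholds into governance practice is sketched below: each monitored metric is checked against a predetermined limit, and any breach produces a governance action item for the responsible body. The metric names, limits, and actions are placeholders invented for illustration, not standards from any framework cited in this article.

  from dataclasses import dataclass

  @dataclass
  class Guardrail:
      metric: str    # name of the monitored measure
      limit: float   # predetermined threshold
      action: str    # governance action triggered on a breach

  # Hypothetical guardrails a board might set; metric names, limits, and actions are illustrative only.
  GUARDRAILS = [
      Guardrail("demographic_parity_gap", 0.10, "Refer model to the oversight committee for a fairness review"),
      Guardrail("population_stability_index", 0.25, "Pause deployment pending a data-drift investigation"),
      Guardrail("unresolved_privacy_complaints", 0, "Escalate to the privacy officer and publish a public notice"),
  ]

  def evaluate_guardrails(latest_metrics):
      """Compare the latest monitored metrics against each guardrail and collect triggered actions."""
      return [g.action for g in GUARDRAILS if latest_metrics.get(g.metric, 0) > g.limit]

  # Example monitoring cycle with made-up metric values.
  for action in evaluate_guardrails({"demographic_parity_gap": 0.14, "population_stability_index": 0.08}):
      print("ACTION REQUIRED:", action)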

Conclusion

It isn’t enough to just talk about public trust when it comes to advanced technologies and AI. Law enforcement organizations need to operationalize it. That means outlining a process to define, measure, monitor, and report on aspects such as fairness, bias, explainability, and privacy. The public will expect due diligence and transparency.

Emerging technologies like AI are going to be critical to law enforcement’s ability to fight crime in the modern era. AI governance done early and well provides communities with greater safety and security and lays the foundation for trust. AI governance done poorly, or only once a crisis looms, perpetuates inequality and imposes real costs on communities. It’s time for theoretical discussion to make way for AI governance in practice.

Notes:

[1] UN Interregional Crime and Justice Research Institute, “New Report: Artificial Intelligence and Robotics for Law Enforcement,” news release, March 21, 2019.
[2] UK House of Lords, Justice and Home Affairs Committee, Technology Rules? The Advent of New Technologies in the Justice System, HL 180, Session 2021–2022.
[3] Karen Hao, “AI Is Sending People to Jail—and Getting It Wrong: Using Historical Data to Train Risk Assessment Tools Could Mean That Machines Are Copying the Mistakes of the Past,” MIT Technology Review, January 21, 2019. 
[4] Pranshu Verma, “The Never-Ending Quest to Predict Crime Using AI,” The Washington Post, July 15, 2022.
[5] Kevin Cole, “Joh on Ethical AI in American Policing,” CrimProf Blog, May 17, 2022. 
[6] IACP Technology Policy Framework (2014). 
[7] Pat Kelly, Report of the Standing Committee on Access to Information, Privacy and Ethics: Facial Recognition Technology and the Growing Power of Artificial Intelligence, Canada, 1st Session, 44th Parliament (2022). 
[8] Toronto Police Services Board, “Use of Artificial Intelligence Technology,” Policy P2022-0228-6.3, February 28, 2022. 
[9] Royal Canadian Mounted Police, “Response to the Report by the Office of the Privacy Commissioner into the RCMP’s Use of Clearview AI,” June 10, 2021.
[10] European Union, “The Artificial Intelligence Act,” website; House of Commons, Canada, C-27, Digital Charter Implementation Act, 2022. 

ABOUT THE AUTHORS

Niraj Bhargava

Executive Chairman and CEO, NuEnergy.ai

Niraj Bhargava is the CEO and cofounder of NuEnergy.ai and an expert in AI governance. He has over 30 years of experience in technology, business creation, and leadership.

Joe Oliver

Assistant Commissioner (Ret.), RCMP

Joe Oliver retired as assistant commissioner of the Royal Canadian Mounted Police in 2020 with more than 34 years of experience in policing. He is also a former IACP international vice president.

Mardi Witzel

CPA Ontario Council Member

Mardi Witzel is a board director with 20 years of experience and currently sits on the CPA Ontario Council. She is focused on AI and ESG and works with NuEnergy.ai as an AI governance associate.
