The role of artificial intelligence in law enforcement: Surveillance, ethics, and predictive policing
Synopsis
The introduction and integration of artificial intelligence (AI) into legal infrastructures offers tangible benefits but also carries problematic social consequences. Evidence-based policy has increasingly given way to algorithm-based policy, in which algorithmically informed decision-making has helped mitigate problems such as police bias and surveillance failures while attracting equally intense criticism. The potential subversion of fundamental civil rights and liberties, such as privacy and freedom from discrimination, is often invoked in this context. Integrating AI into governance processes can produce systemic harms, including the exacerbation of existing social inequalities based on race, ethnicity, gender, nationality, and other protected categories. For disciplines like law, which traditionally ground their findings in details and case-specific facts, the loss of transparency in how algorithms reach decisions poses both moral and practical problems. AI-driven decisions are typically opaque, such that even a system's designers may not know what informs a model's output; moreover, such decisions are often final, with no avenue of appeal to a higher authority (Garvie et al., 2016; Ferguson, 2017; Brayne, 2020). The state remains the driving force behind the use of AI in law enforcement, particularly in its role as regulator of the entire process of developing, passing, and enforcing law. How these processes unfold inevitably reflects how states balance their responsibility to guard civil liberties against the need to uphold civil order, and that balance in turn shapes the obligations of both state and private-sector AI developers and manufacturers (Joh, 2016; Lum & Isaac, 2016).