The use of artificial intelligence in the workforce has steadily increased in recent months. Several industries have integrated AI into daily operations to boost productivity.
One of the latest industries to join the AI trend is law enforcement, both in the U.S. and overseas. While AI may be useful for certain tasks, local and international police departments have run into trouble with the technology, raising doubts about its reliability.
At least 10 people have been falsely arrested nationwide, and at least eight have suffered lasting consequences, including job loss, damaged relationships and missed payments on car and home loans.
Among those falsely identified is Trevis Williams, a 6-foot-4 man mistaken for a sexual assailant who had been reported to be 5-foot-6. Williams was arrested in April after his picture was pulled from the NYPD database during an investigation into a different man, who had exposed himself to a woman in an East 17th Street building.
The NYPD used AI to analyze a still from surveillance footage and retrieve database images of people with similar facial contours.
Prosecutors ultimately dismissed the case after Williams’ public defenders used phone records to show that he was driving from Connecticut to Brooklyn at the time of the assault.
Similar situations have been reported in Missouri, Michigan and London, where people have been arrested after being misidentified by facial recognition technology.
Porcha Woodruff was seven months pregnant when Detroit police accused her of carjacking. She was arrested despite a lack of surveillance evidence.
Shaun Thompson, a London resident, was stopped and searched after facial recognition technology mistook him for a wanted man.
These false identifications are a preview of what can go wrong as AI is incorporated into ever larger parts of investigations and arrests.
In most of these cases, officers were overdependent on AI, draining the effort and urgency from human-led investigation. Some officers also grew overzealous about their findings and led cases with emotion rather than logic.
In Williams’ case, a simple check of his phone records would have revealed information that contradicted the facial recognition results.
In Woodruff’s case, closer analysis of the surveillance footage would have shown that no pregnant woman appeared in the carjacking police were investigating.
AI has no place in matters as sensitive as identifying criminal suspects. But since it has already been so heavily implemented, the most that can be recommended is regulation limiting how far officers may rely on AI, paired with oversight that keeps them doing their own investigative work.
Most AI facial identifiers work only when given a clear photo of a possible assailant. Grainy, low-quality surveillance cameras cannot provide the clarity needed for an accurate result, and officers look lazy and desperate when they feed such images straight into the software anyway. For that reason, facial recognition software should be subject to harsher legal scrutiny.
Unfortunately, it has already fostered an environment for botched investigations in which innocent people are accused of others’ crimes. These failures could have been prevented had AI been more thoroughly vetted before entering the workplace.