AI impact statement from dead man raises questions about future of evidence

A recent court case in Arizona saw a murder victim deliver an impact statement from beyond the grave, thanks to Artificial Intelligence (AI).
Chris Pelkey, who was killed in a road rage shooting three years ago, was recreated using AI so that he could address his killer, Gabriel Horcasitas, in court, a move that has sparked debate about the future of AI in legal proceedings.
Pelkey’s family used voice recordings, videos, and pictures to create the AI-generated video, with his sister, Stacey Wales, writing the script. The AI version of Pelkey expressed forgiveness towards Horcasitas, stating, “In another life, we probably could have been friends.”
Judge Todd Lang, who oversaw the case, acknowledged the use of AI and sentenced Horcasitas to ten-and-a-half years in prison. Lang stated: “I loved that AI…I heard the forgiveness. I feel that that was genuine.”
While retired federal judge and Duke Law School professor Paul Grimm was not surprised by the use of AI in this context – noting Arizona courts’ existing use of AI to summarize rulings – some experts have raised concerns about the broader implications.
For example, though he doesn’t question the family’s intentions, Carnegie Mellon University business ethics professor Derek Leben told the BBC he worries about the precedent this case sets. “If we have other people doing this moving forward, are we always going to get fidelity to what the person, the victim in this case, would’ve wanted?” Leben asked.
The use of AI in court raises complex ethical and legal questions. While proponents argue it can give victims a powerful voice, concerns remain about potential manipulation and about the accuracy and reliability of AI-generated evidence. As AI technology advances, the legal system will likely grapple with determining the admissibility and weight of such evidence in future cases.