Researchers are considering ethical questions to prevent human bias from playing too large a role in computer decision-making.
As AI develops, humans need to consider questions around privacy rights.
Law Professor Harry Surden is one of many people across ŷڱƵ Boulder studying the ethics of AI.
When people think about artificial intelligence, or AI, they can be quick to jump to the all-too-common sci-fi depiction of a heartlessly rational computer willing to kill people to fulfill its programming.
Real AI is light-years away from that. Today, AI is still far from accomplishing basic things humans can, like grasping abstract concepts, according to Harry Surden, a University of ŷڱƵ Law School professor and AI expert.
AI is more like advanced pattern matching, Surden said. Still, AI researchers are already grappling with ways to avoid the future pitfalls you might see in the movies.
Surden is one of several people across the ŷڱƵ Boulder campus exploring the future of AI from a holistic perspective. He believes evolving technologies using computer decision-making offer major prospects for breakthroughs in medicine, economics and the law. Humans can help guide the computers to fulfill their intended purpose: to serve us well.
“A lot of people, especially people of low-income means, are underserved by the lawyering community,” he said.
The ability to sort through millions of documents cheaply and quickly to find facts relevant to a lawsuit, for instance, could help those low-income people get better results in the legal system.
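To see the idea in miniature, here is a hypothetical Python sketch, not drawn from Surden's work, that ranks a few made-up documents by how many search terms they contain; large-scale legal document review rests on the same basic kind of pattern matching:

```python
import re

def relevance_score(document: str, terms: set[str]) -> int:
    """Count how many of the query terms appear in the document."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    return len(words & terms)

# Made-up documents and search terms, purely for illustration.
documents = {
    "email_001": "Please review the lease agreement before the deadline.",
    "email_002": "Lunch on Friday? The new place downtown looks good.",
    "email_003": "The agreement was signed without disclosing the defect.",
}
query_terms = {"agreement", "defect", "disclosing"}

# Rank documents from most to least relevant to the query.
for name, text in sorted(documents.items(),
                         key=lambda item: relevance_score(item[1], query_terms),
                         reverse=True):
    print(name, relevance_score(text, query_terms))
```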
Even as those computer systems develop, Surden said, issues like racial bias are already entering the conversation.
“Your systems are built based upon the data that’s put in, and if there are biases in the data due to historical or institutional practices, these biases get reflected in these computational systems in ways that are very subtle and hard to detect,” Surden said.
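As a rough illustration of that point, and a toy example rather than anything Surden describes, the sketch below "learns" nothing more than approval rates from hypothetical, historically skewed loan records. Because the past decisions were biased against one group, the learned system reproduces the skew even though no one ever wrote a biased rule:

```python
from collections import defaultdict

# Made-up historical records: (group, approved). Group B was approved
# far less often for reasons unrelated to creditworthiness.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

# "Training": record the approval rate observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predicted_approval_rate(group: str) -> float:
    approved, total = counts[group]
    return approved / total

# The learned system quietly carries the historical skew forward.
for group in ("A", "B"):
    print(group, predicted_approval_rate(group))  # A: 0.8, B: 0.3
```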
The ethical lines around AI are often developing alongside the technology, making it critical that experts ask the right questions.
Applications already on your phone can read location data or things you have said online. The apps could then, in theory at least, use that data to make other decisions, like which advertisements you will see.
“Things that you can figure out with a high degree of probability, by looking at some related fact, are certain facts, such as disease status or somebody’s sexual orientation that they haven’t publicly revealed,” Surden said.
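One way to make that concrete is Bayes' rule: if a disclosed behavior is much more common among people with an undisclosed trait, observing the behavior sharply raises the inferred probability of that trait. The numbers in this Python sketch are invented purely for illustration:

```python
# Hypothetical figures showing how a seemingly unrelated, disclosed fact
# can shift the probability of an undisclosed one via Bayes' rule.

p_trait = 0.05                   # assumed base rate of the undisclosed trait
p_signal_given_trait = 0.60      # assumed rate of the observed behavior if the trait is present
p_signal_given_no_trait = 0.05   # assumed rate of the behavior otherwise

# Total probability of observing the behavior.
p_signal = (p_signal_given_trait * p_trait
            + p_signal_given_no_trait * (1 - p_trait))

# Posterior probability of the trait given the behavior.
p_trait_given_signal = p_signal_given_trait * p_trait / p_signal
print(round(p_trait_given_signal, 2))  # ~0.39, up from a 0.05 prior
```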
Surden believes these questions and others will lead to growing pains, but overall he is optimistic about how the world will integrate thoughtful artificial intelligence systems in the years to come.