AI: Pattern recognition instead of human values
Technology could make it easier for people to attach emotions to machines
The evolution of artificial intelligence could lead people to develop emotional attachments to non-living things even as we fail to understand how those things arrive at their conclusions, according to a member of Central Michigan University’s faculty.
That creates a risk in using AI for tasks where human values matter, CMU philosophy faculty member Matt Katz said during the recording of a recent episode of The Search Bar. At the end of the day, he said, these systems are machines.
“It’s just pattern recognition,” he said. “It doesn’t feel anything.”
While AI can be trained to identify objects like fish in a tank or tumors in medical scans, the processes by which it interprets these patterns are not fully understood.
That means we cannot know for certain how an AI might reach a conclusion if asked to decide which prisoners are paroled or which job candidates advance to the next hiring round.
Understanding these processes is likely to become harder as the technology grows more sophisticated. The more human these systems seem, the more likely people are to forget that they are pattern-recognition machines.
People could also develop emotional attachments to them, a prospect that becomes even more plausible as machines learn to frame their answers in ways that resemble emotion, Katz said.
For more thoughts on artificial intelligence from Matt Katz, check out the “What are the ethical concerns with AI?” episode of The Search Bar.