The Problem with AI that Acts Like You

Human-like AI models raise questions of bias and our right to personal data

Sahir Dhalla
5 min read · Jan 23, 2023


Photo: Tara Winstead on Pexels

When discussion of artificial intelligence (AI) first gained popularity in the twentieth century, the mathematician, scientist, and philosopher Alan Turing proposed a test for machines that was originally intended to measure intelligence.

The imitation game, later known as the Turing test, evaluates a machine’s ability to exhibit human-like behaviour. It involves three participants: two humans and one machine. Participant A, one of the humans, acts as the judge, communicating with Participants B and C, asking them questions and analyzing their responses to figure out which of the two is the human.
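To make the setup concrete, here is a minimal sketch of the game’s structure in Python. The canned machine replies, the keyboard input for the human respondent, and the single-judge loop are all illustrative assumptions for the sake of the sketch, not part of Turing’s formulation.

```python
import random

# Hypothetical stand-in for the machine participant: returns canned replies.
def machine_respondent(question: str) -> str:
    canned = {
        "What is your favourite colour?": "I am partial to blue, I suppose.",
        "Describe a childhood memory.": "I remember long summer afternoons.",
    }
    return canned.get(question, "That is an interesting question.")

# The human respondent answers via the keyboard.
def human_respondent(question: str) -> str:
    return input(f"(human) {question} > ")

def imitation_game(questions):
    # Participant A (the judge) cannot see which respondent is which,
    # so we shuffle them behind the anonymous labels B and C.
    respondents = [("machine", machine_respondent), ("human", human_respondent)]
    random.shuffle(respondents)
    labels = dict(zip("BC", respondents))

    # The judge poses each question to both participants.
    for question in questions:
        for label, (_, respond) in labels.items():
            print(f"{label}: {respond(question)}")

    # The judge then guesses which participant is the human.
    guess = input("Judge: which participant is human, B or C? > ").strip().upper()
    actual = next(label for label, (kind, _) in labels.items() if kind == "human")
    print("Correct!" if guess == actual else f"Wrong, the human was {actual}.")

if __name__ == "__main__":
    imitation_game([
        "What is your favourite colour?",
        "Describe a childhood memory.",
    ])
```

The point of the structure is the anonymization step: the judge only ever sees the labels B and C, so any success at identification must come from the content of the answers alone.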

While philosophers have criticized the test’s ability to measure intelligence, there is no question that it is a good marker of how adept these machines are at human mimicry. As artificial intelligence has moved from fiction to reality in recent years, and the pace of its development has accelerated, it seems inevitable that machines will eventually be capable of passing the Turing test.

Could AI act like humans?

A few months ago, on June 11, 2022, Blake Lemoine, a former Google engineer, released a transcript of a conversation he had with LaMDA. LaMDA is Google’s machine-learning model that…
