Study discusses the relevance of the Turing Test in today’s world (image: Kohji Asakawa/Pixabay)
In an article published in the journal Intelligent Computing, computer scientist and philosopher Bernardo Nunes Gonçalves claims that machines have already passed the “Turing Test” – in other words, they have shown themselves capable of imitating human cognition – and highlights the need for more robust methods of assessing artificial intelligence.
By José Tadeu Arantes | Agência FAPESP – In 1950, the British mathematician Alan Turing (1912-1954), one of the pioneers of computer science, proposed replacing the question "Can machines think?" with a more practical operational criterion. He devised the so-called "imitation game," in which a human interrogator had to distinguish whether they were interacting with another human or with a machine, based solely on a written conversation. If the machine managed to fool a significant number of evaluators, then it was thinking by any reasonable definition of the word.
The so-called “Turing Test” was not a test in the strict sense of the word, with defined protocols. It was more of a philosophical provocation designed to challenge the mental rigidity of its interlocutors. But today, 75 years later, any user of generative artificial intelligence (GAI) platforms such as the American ChatGPT and the Chinese DeepSeek knows that the machines have passed the test: the consistency of their responses and the sophistication of their expressions actually surpass those of many human interlocutors.
We are already living in one of the futures described by science fiction. An article published in the journal Intelligent Computing by the Science group discusses the relevance of the Turing Test in today’s world, tracing the historical context in which the concept arose, its influence on the development of GAI, and the technical, social and philosophical implications of the new reality.
Its author, Bernardo Nunes Gonçalves, has a PhD in computer modeling (National Laboratory for Scientific Computing, LNCC, 2015) and philosophy (University of São Paulo, USP, 2021), was a fellow (2023-2024) at King’s College, Cambridge in the United Kingdom, and is currently a permanent researcher at the LNCC and an associate researcher at the Center for Artificial Intelligence (C4AI), an Engineering Research Center (ERC) created by FAPESP and IBM at USP.
“Turing argued that human intelligence was largely an unknown and undefined phenomenon and that the best way to evaluate artificial intelligence [AI] would be through observable behavior. His idea challenged the belief in the unique superiority of the human mind and served as a benchmark for the development of artificial intelligence,” says Gonçalves.
Alan Turing in 1936, during a period of study and research at Princeton University in the United States (source: Wikimedia Commons)
The concept has influenced popular culture. In science fiction, Stanley Kubrick’s classic film 2001: A Space Odyssey featured the HAL-9000 supercomputer, an advanced AI capable of passing the Turing Test, raising questions about the autonomy and reliability of machines. In the “real world,” two machines made history: in 1997, IBM’s Deep Blue supercomputer, capable of analyzing up to 200 million moves per second, defeated then-world chess champion Garry Kasparov; and in 2011, IBM’s Watson, using natural language processing and advanced machine learning, beat two of the greatest champions of the Jeopardy! quiz show.
“Turing’s insightful observation was that artificial intelligence, in order to be intelligence, could not depend exclusively on explicit programming, but rather on autonomous learning, similar to the development of human intelligence. This perspective led him to predict that by the end of the 20th century, machines would learn to play the ‘imitation game’ convincingly, and that the idea of ‘thinking machines’ would be natural among the most educated people,” says Gonçalves.
It is worth repeating that the bold way in which Turing used phrases like “thinking machines” was based on the assumption that we do not really know what human intelligence is.
The article argues that current GAI models, based on transformers and deep learning, not only mimic human responses but also learn to improve their performance without relying strictly on prior programming. Their results improve with the amount of training, certain non-pre-programmed functions emerge when the model reaches a critical point, and they are able to sustain long conversations in a coherent and convincing manner for non-experts.
The main innovation of transformers is the attention mechanism, which allows the model to focus on different parts of the input when processing a particular piece of data. This makes them more efficient than previous architectures, which processed data sequentially and therefore more slowly. As for deep learning, it is a type of machine learning, but it differs in that it allows models to learn directly from the data, without the need for human intervention to extract features. Both transformers and deep learning are based on neural networks, which mimic the way human neural circuits work.
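The attention step described above can be sketched in a few lines. This is a minimal, illustrative implementation of scaled dot-product self-attention (the toy dimensions and random inputs are assumptions for the demonstration, not from the article): every token's query is compared against all keys at once, which is what lets transformers process a whole input in parallel instead of sequentially.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core attention step of a transformer: each query attends to
    all keys simultaneously (no sequential pass over the input)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # softmax over the keys turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of all values

# toy example: 3 tokens, each a 4-dimensional embedding
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
print(out.shape)  # (3, 4): one context-aware vector per token
```

Because each output row mixes information from every input token, the model can "focus on different parts of the input" exactly as the paragraph describes.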
“Stuart Shieber [a computer scientist at Harvard University in the United States] has shown that it isn’t possible to create an AI based purely on memorization because the amount of memory needed to cover all possible conversations would be greater than the known universe itself. This suggests that today’s AIs have some level of generalization and reasoning, and aren’t just limited to repeating patterns,” Gonçalves argues.
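Shieber's point is easy to verify with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a modest vocabulary and a short conversation), not figures from the article, but even these make the count of possible conversations dwarf the roughly 10^80 atoms in the observable universe:

```python
# Toy illustration (hypothetical numbers): how many distinct conversations
# would a lookup-table AI have to memorize to pass by rote alone?
vocab_size = 10_000   # assume a modest 10,000-word vocabulary
words_per_reply = 20  # assume each reply is 20 words long
turns = 10            # assume a short 10-turn conversation

# every word slot can be filled independently from the vocabulary
possible_conversations = vocab_size ** (words_per_reply * turns)

atoms_in_universe = 10 ** 80  # commonly cited order-of-magnitude estimate
print(possible_conversations > atoms_in_universe)  # True
```

Even under these deliberately conservative assumptions the table has 10^800 entries, which is why a system that holds coherent open-ended conversations must be generalizing rather than retrieving.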
He also discusses the social consequences of the evolution of artificial intelligence. He points out that Turing not only predicted that machines would replace manual labor but was also provocative in warning that the “masters” themselves could be replaced. This means that automation will not only affect operational functions but also intellectual professions. “To prevent the benefits of AI from being concentrated in the hands of a few, a broader debate is needed on the equitable distribution of the wealth generated by automation. This is in keeping with the vision of Turing, who believed that technology should serve society as a whole, not just the economic interests of an elite,” he says.
Another critical point raised in the article is the unsustainability of the current computing model. The energy consumption of today’s AI systems is gigantic, in contrast to Turing’s vision, which advocated a more natural model inspired by the human brain, with its low energy consumption. According to Gonçalves, AI needs to evolve to be more sustainable and less dependent on intensive computing.
The article concludes by suggesting that as AI becomes more sophisticated, new forms of evaluation will be needed, which could be inspired by the original Turing Test. It suggests: rigorous statistical protocols to prevent AI from simply “learning to cheat” traditional tests; automated adversarial tests that eliminate the need for human judges and make evaluation more objective; and checks based on probabilistic approximations to make machine evaluation practical and efficient. “These methods would help address emerging challenges such as bias in training data, adversarial manipulation and contamination of models with previously known information,” Gonçalves emphasizes.
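One way to picture the "rigorous statistical protocols" the article calls for is a simple significance test on judges' verdicts: if an AI is truly indistinguishable, judges identify it correctly at roughly chance level. The experiment numbers below are hypothetical, and this exact-binomial sketch is only one possible protocol, not the method proposed in the paper:

```python
from math import comb

def binomial_p_value(correct, trials, p=0.5):
    """One-sided exact binomial p-value: the probability of at least
    `correct` right identifications if judges were guessing at rate p."""
    return sum(comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(correct, trials + 1))

# hypothetical experiment: 100 judge verdicts, 54 correct identifications
p_val = binomial_p_value(54, 100)
print(p_val > 0.05)  # True: no statistical evidence judges can tell
```

With 54 correct calls out of 100, the p-value is well above 0.05, so the judges' slight edge is consistent with pure guessing; a protocol like this replaces a single anecdotal "it fooled me" with a quantified claim.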
It is always good to remember that the Turing Test was proposed 75 years ago, when the first computers were just beginning to be designed and built. Alan Turing was at the forefront of this process. The 2014 movie The Imitation Game, directed by Morten Tyldum, tells part of his short life, which was both brilliant and tragic. Among his many achievements, it was he who cracked the code of the Enigma machine, considered impenetrable, which Nazi Germany used to encrypt its communications. This feat saved thousands of lives and contributed significantly to the defeat of Nazi fascism during the Second World War. But it remained unknown for decades because all the work was done in the utmost secrecy.
In 1952, Turing was convicted of "gross indecency" because of his homosexuality, which was then illegal in the United Kingdom. As an alternative to prison, he accepted hormone treatment, which was effectively a form of chemical castration. On June 7th, 1954, aged just 41, he was found dead in his home. The official cause of death was suicide by cyanide poisoning. It was not until 2009 that the British government issued a formal apology for his treatment. And in 2013, after a public campaign, Turing was posthumously granted a royal pardon.
“We’re already living one of the ‘Turing futures,’ where machines are able to mimic human cognition to the point of being indistinguishable in certain interactions. This does not mean that artificial intelligence has reached its full potential. There are still fundamental challenges to be solved, such as computational sustainability, fairness in the distribution of benefits, and the need for more robust evaluation methods. Turing’s vision remains more relevant than ever, not only as a technical criterion but as a starting point for deeper debates about the impact of AI on society and humanity,” concludes Gonçalves.
In addition to the C4AI funding, the study on which the article is based was supported by FAPESP through the postdoctoral fellowship and the research internship abroad fellowship awarded to Gonçalves.
The article “Passed the Turing Test: Living in Turing Futures” is available at: spj.science.org/doi/10.34133/icomputing.0102.
Agência FAPESP licenses its news reports via Creative Commons (CC-BY-NC-ND) so that they can be republished free of charge, and in a simple way, by other digital or printed outlets. Agência FAPESP must be credited as the source of the republished content, and the name of the reporter (if any) must be attributed. Using the HTML button below ensures compliance with these rules, which are detailed in FAPESP's Digital Republishing Policy.