In 2006 I visited Professor Emeritus John McCarthy in his little house on the outskirts of the Stanford University campus. It was almost exactly 50 years since McCarthy coined the term ”artificial intelligence” at a conference on computers and problem solving, and I wanted to ask him how he thought it had gone. Even then, when the subject was still in its infancy, the claims were hardly modest. One of the most prominent participants, Marvin Minsky, said that we should look forward to a future in which we got to serve as the housepets of computers. Most scientists agreed that it was only a matter of time before computers became more intelligent than us and ”took over”.
But what happened next? In one version of history, there followed 50 years of failure, where computers time and again proved unable to solve even simple problems. The human mind could not be captured in computer science algorithms. The dream, or nightmare, of artificial intelligence turned into a quiet fascination with human, natural intelligence and its unique position in the cosmos.
After sitting down in an armchair at Professor McCarthy’s house, I asked him whether he was not, after all, disappointed by developments. He shook his head. The technology, he explained, had developed more quickly than he had dared to hope. When he started, a computer could not play chess. Now computers are the best chess players in the world. Solutions developed for artificial intelligence drive the world’s largest search engines, and the world of computer games offers simulated warriors and civilisations that would have been unthinkable in 1956. In fact, McCarthy said, 50 years after Mendel discovered heredity there were still very few examples of how this simple concept could be put to practical use. But a hundred years on, Craig Venter had sequenced and mapped the entire human genome. By that measure, said McCarthy, AI still had plenty of time.
McCarthy also pointed to the strange phenomenon that as soon as something becomes practically possible, it is no longer called artificial intelligence. Take, for example, computer games. When we fight against the empires of a simulated war game, we do not see the virtual opponent as an artificial intelligence, but as part of a great game. The search results that come from search engines, and which are constantly refined, we see as search technology. The advanced expert systems that help us with everything from tax returns to diagnoses are great tools, but are they not intelligent? When something simply works, it seems we no longer call it intelligence. Today we use techniques developed through research on artificial intelligence every day without thinking about it, without even choosing to recognise that there could be signs of intelligence in searches and computer games.
What lies behind this phenomenon? One explanation could be that intelligence is the last bastion in the battle over man’s unique status in creation. Natural science’s triumph throughout history can be read as an anti-humanist crusade, a step-by-step demotion of the human. From being the centre of the universe, we came to realise that we are at the periphery of the periphery in an ever-expanding cosmos; from being the perfect crown of creation and God’s image we were reduced to the product of a blind and almost careless evolution that seems content with what is merely sufficient. Could our intelligence, too, the ultimate Aristotelian differentia, be recreated by computers? Are not even our thoughts and feelings unique? Is there anything special about being human that cannot be explained scientifically or reduced to engineering solutions? Is there nothing we can do that a computer cannot?
Wherever we begin to study artificial intelligence, this is the question we sooner or later face, and rightly so. It is an important issue. Ultimately it concerns a component of our self-understanding. Are we just advanced systems of algorithms implemented in a biological machine, or is there something more? Something unique? Something truly human? Philosophy has approached the issue from different directions. Perhaps the most interesting answer is found in Wittgenstein’s philosophy, with its emphasis on the body’s impact on cognition. As Gunnar Svensson, probably our most prominent expert on Wittgenstein, noted, intelligence is a concept we apply to things that people do. Not to printed circuit boards. It is a term that describes a series of human actions that can be interpreted in a given way, but it does not refer to some mysterious inner state or a set of calculations. We call someone intelligent when they behave in a certain way, answer our questions, and interact with us as a human being. To say that a computer is intelligent is simply to misunderstand both how we use the word ”computer” and the word ”intelligence”, and should we ever manage to create intelligence, it will only be by creating a human being.
The idea that the early research on artificial intelligence misunderstood the relationship between body and intelligence has given rise to extensive research on robots and how we respond to them. In her latest book, Alone Together, Sherry Turkle studies this phenomenon and points to how we seem programmed to interact with the human-like as if it were intelligent. She warns that our willingness to attribute intelligence to artifacts will be abused, used to create the placebo intelligence of virtual pets or robot nurses for the elderly and the sick. When we give a computer a body, we will be fooled into thinking it is intelligent. Let us call this the Wittgensteinian criticism: the uniqueness of man lies in his natural history, his biology. His intelligence is part of his body, and his cognition cannot be abstracted away from biology and into computer science. In itself this does not mean that we could never create intelligent beings, but doing so would involve much more than simply programming a computer to respond in a certain way; it would involve creating embodied intelligence far beyond today’s robots.
Another critical line finds its power not in biology but in quantum physics. Emeritus Professor Roger Penrose of Oxford has, in the books The Emperor’s New Mind and Shadows of the Mind, developed a complex theory which essentially asserts that consciousness is rooted in quantum-physical phenomena that cannot be duplicated by computers. A few months ago I listened to Penrose as he talked about how ”microtubules” have certain quantum properties that he thought might contain the explanation of both free will and consciousness. It was a fascinating lecture, and I cannot judge the veracity of the research results (they are very controversial), but what struck me was the similarity between Penrose’s ever more complex theory of consciousness and the attempts to rescue geocentric astronomy with ever more sophisticated calculations and epicycles. In both the Wittgensteinian criticism and Penrose’s work there emerges a strong link between intelligence and consciousness. It seems almost impossible to imagine real artificial intelligence without also conceiving of an artificial consciousness. If a computer becomes intelligent, it means not only that it is smart, but also that it moves from being a something to being a someone.
This opens up a chasm of philosophical concerns, having to do with everything from how we can know that there are other minds at all, to what it means to be conscious, and whether we can be aware that we are aware; here much of the early research got stuck. Models of consciousness were debated separately and together, while new logical systems were created to understand how a person reasons. But over the past ten years, the subject has changed. Peter Norvig, a research director at Google, notes in his history of artificial intelligence how the subject suddenly stopped caring about consciousness models and instead began to apply a variety of statistical and probabilistic models to the rapidly growing amounts of data available in today’s Internet society.
An example of this is Google’s translation service. The service includes no models of awareness, nor is it based on any particular semantic understanding of individual words, but only on probabilistic pattern recognition across large, growing amounts of text. The results are sometimes comical, and it is easy to say that the translation technology is not intelligent, but the results are far better than any previous attempt to create automatic translation software. And it is easy to look at the improving curves and understand that it is not impossible that, in our lifetime, we will get near-perfect translations from computers. Sounds incredible? Consider the development of chess. These are traces of intelligence that rest on no assumptions about consciousness at all. Consciousness has, in a sense, become a hypothesis that AI research no longer needs. The increasingly successful recommendation engines on websites like Amazon.com are based on the same thing: pattern recognition across enormous amounts of data. And these data sets grow ever faster, and the results get better, not least because the programs can develop themselves and adapt to new behaviours and new data.
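The kind of pattern recognition such recommendation engines rely on can be sketched, in grossly simplified form, as co-occurrence counting: items bought together often are recommended together. The item names and purchase baskets below are invented for illustration; real systems work on millions of such records and far more refined statistics.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (each list is one customer's basket).
baskets = [
    ["tractatus", "investigations", "emperors_new_mind"],
    ["tractatus", "investigations"],
    ["emperors_new_mind", "shadows_of_the_mind"],
    ["investigations", "shadows_of_the_mind", "tractatus"],
]

# Count how often each ordered pair of items shares a basket.
cooccur = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(item, n=2):
    """Return the n items most often bought together with `item`."""
    scores = {b: c for (a, b), c in cooccur.items() if a == item}
    return [b for b, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n]]

print(recommend("tractatus"))
```

No semantics, no model of the reader’s mind: the program only counts, yet its suggestions track behaviour, which is the essay’s point about traces of intelligence without consciousness.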
These learning programs mimic human behaviour, trying to learn from what we do in order to create better and better patterns of adaptation, and we depend on them getting better and better. From one perspective, the pursuit of artificial intelligence is the only defence we have against a relentless, noisy society where the amount of information doubles every year. This is why the most prominent AI researchers today work for search engines and social networks: we use what intelligence we can recreate to organise and sort the vast amounts of information in our society. In most of the tools we use today: search engines, social networks, mapping tools, even something as simple as a spell checker, there are traces of the research on artificial intelligence, and perhaps traces of intelligence.
This, of course, gives rise to another question: what happens to our own intelligence when it depends on technological crutches? The issue is not new. The emergence of the information society has been shadowed by criticism from thinkers such as Jacques Ellul and Neil Postman and, in our own time, Jaron Lanier and Nicholas Carr. Technology, they say, is eating away at our humanity. The big problem with artificial intelligence is that it destroys true intelligence, undermines our understanding and ability to concentrate, and leaves us without the ability to fathom or cultivate our own depths.
One way to read Carr and Lanier is as the draft of a critique of artificial intelligence that shifts the question from possibility to desirability. Should we develop artificial intelligence if it comes at the expense of our own capacity for real thought and reflection? The question of man’s unique position becomes a question of a kind of self-discipline. We may not ultimately be unique, but we should try to remain so for as long as we can. Technology forces us into the role of users, and degrades us. It is easy to feel some sympathy for this view. In a world where real education has been replaced by a quick search online or on Wikipedia, it feels as if something has been lost.
But, at the same time, this nostalgia has an unpleasant pedigree: it is the same nostalgia that once complained that the Bible was made available in the peasants’ language, the same nostalgia that would rather see literature restricted so as not to distort the thinking of the masses. It is the same nostalgia that warned of man’s lust for knowledge in the myth of Faust, and that could see Prometheus’s fate as deserved punishment for having given people fire. This is not an either/or. It is not either we or the machines that are intelligent. It is not either the book or the computer. They can be combined in countless ways, and knowledge can be communicated and refined in countless new ways with digital tools while, at the same time, it remains entirely possible to enjoy a requiem or a book. The new technology spreads knowledge and makes it more accessible; it puts knowledge-creating tools in the hands of ever more people. To stop technological development thus becomes a kind of knowledge aristocracy masquerading as reflection.
Of course, it is possible to say that the computer has done to culture what the clock did to time: split it up, and turned it from an experience into small mechanical units. That we now think of culture as information, and that the outcome will be a misappropriation not only of culture but also of our humanity. But this is to deprive man of his choice, and to transform him into easy prey for technology’s tendency to expose him and make him available. Technology can transform a magical forest into timber, but it does not happen without us making a conscious choice. Still, the criticism has dimensions that call for reflection. One thing that seems clear is that the discussion of artificial intelligence should slowly give way to a discussion of how intelligent our context is. Perhaps intelligence is not an atomic phenomenon that can be wholly appreciated on an individual basis. Perhaps intelligence is rather an idea about our ability to handle complexity in different contexts, where people and technology constitute a larger whole. There are those who believe that this Gaia-like mind is what we really should be concentrating on: how to build an intelligence that is neither human nor artificial, but planetary.
These days, Watson, an IBM machine, is competing against the human masters of Jeopardy. In the practice rounds it did very well. Whether it wins or not is less interesting; the trend is clear. At the current speed of development, the technology will soon reach a point where, for most practical purposes, machines behave as if, and can be treated as if, they were intelligent. It is just a matter of time. Whether there will be a human remnant, that is, something that machines cannot do but which people can, is an open question. Will machines be able to compose music, write novels and create artwork that is truly original and breaks new ground? Will they be able to love, forgive and hate? We hesitate before these questions, but would do well to ask why. We do not become less special if we can build intelligent computers; would it not rather be the opposite? If we can create thinking machines, it is surely a sign of the truly unique in the human: an instinct for knowledge that does not stop, not even when it tries to understand the very nature of knowledge and consciousness.