
While the Dartmouth workshop of 1956 is the official birthplace of AI, it wasn’t the first time that people had thought about building thinking machines. However, without access to computers, there wasn’t much you could do before 1956 to advance those dreams. Unless, that is, you were an exceptional thinker.
Perhaps the most exceptional mind to think about thinking machines before 1956 was the British mathematician Alan Turing. Time magazine named Turing one of the 100 most important people of the 20th century. He, more than anyone, is responsible for the digital age in which we now live.
During the Second World War, Alan Turing helped build one of the first practical computing devices, the beautifully named Bombe. This was used to crack the German Enigma military codes, a mathematical feat that likely shortened the war by at least two years, saving millions of lives. In 1936, before these codebreaking exploits and before anyone on the planet had actually built an electronic computer, Alan Turing came up with an abstract mathematical model of a computer. This model is simple but immensely powerful. It describes, for example, everything from your smartphone to the fastest supercomputer.
Turing wanted to answer a simple question. What can a machine compute? Can it, for example, prove complex mathematical results like Fermat’s Last Theorem? Or write a beautiful sonnet about falling in love? If we’re going to get machines to think, he reasoned, it would be good to know their limits.
Turing came up with a deceptively simple and tautological-sounding answer to this question: a machine can compute anything that his mathematical model of a computer can. And, by extension, if something cannot be computed by his mathematical model, then making the computer bigger or faster won’t help. Turing’s mathematical model of a computer is now called a “Turing machine”. And, for good measure, Turing also identified a number of problems that a Turing machine, and therefore even today’s fastest supercomputer, cannot solve.
For instance, you’d like to know that the flight control software in an aeroplane will never stop running. But deciding whether an arbitrary program eventually halts – what we now call the “halting problem” – is exactly the sort of question Turing proved impossible to answer in general. This is mind-blowing. Before we physically had the first electronic computer, Alan Turing had worked out the fundamental limits of what it, and indeed every computer that has followed, could possibly compute. It’s like one of the Wright brothers predicting the barrier that the speed of sound would present to faster flight before Orville Wright had even made that first flight over the sand dunes of Kitty Hawk, North Carolina. Now, before you dismiss this book as the history of an impossible dream, note that artificial intelligence was not among the problems Turing proved impossible. His results left open the possibility of AI, the possibility that we might reduce thinking to computation.
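To get a feel for why no computer can solve the halting problem, here is a minimal sketch in Python. Nothing in it comes from Turing’s paper or this book; the names `halts` and `contrarian` are illustrative. Suppose we had a magic function `halts(program, argument)` that always answered correctly; we could then write a program that defeats it.

```python
# A minimal sketch of Turing's halting-problem argument, using a
# hypothetical oracle. Purely illustrative; the names are invented.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually stops.
    Turing proved no correct implementation of this can exist."""
    raise NotImplementedError("no algorithm can decide this in general")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on itself."""
    if halts(program, program):
        while True:   # oracle says it stops, so loop forever
            pass
    # oracle says it loops forever, so stop immediately

# The trap: ask what contrarian(contrarian) does. If the oracle answers
# "halts", contrarian loops forever; if it answers "loops", contrarian
# stops at once. Either way the oracle is wrong -- so no such oracle
# can exist, on any computer, however big or fast.
```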
Turing’s findings about the limits of computers would be enough to earn him a spot in a history book about AI. But his contributions go well beyond identifying these limits. In 1950, Alan Turing wrote what is generally considered the first scientific paper about AI. It begins, “I propose to consider the question, ‘Can machines think?’” He had already provided a good definition of the word “machine” back in 1936. But that still left the problem of what the word “think” means. Turing proposed to side-step this definitional problem with an ingenious idea. He called it the “imitation game”, but it is now more commonly called the “Turing test”. If a person remotely conversing with a machine and a human cannot tell them apart, then might we not say that the machine thinks?
The Turing test has its critics. Should we really be building machines that try to deceive us into thinking that they’re human? What questions distinguish man from machine? And, since a machine cannot experience the world like a human being, is it even a fair test? If the machine fails, does that truly mean it cannot think? Despite such concerns, the Turing test gives you a good idea of what AI researchers like me are trying to do. We’re trying to get computers to do the sorts of things that humans do that we believe require thinking. This covers sensing, reasoning and acting. Making sense of what we see and hear. Reasoning about what we see and hear. Then making plans, and acting on those plans. All of these require intelligence. And so, getting a robot to sense, reason and act in the world requires artificial intelligence.
Alan Turing was thus the genius who helped start the field of AI. Sadly, he died two years before the Dartmouth workshop. He was just 41 years old. A half-eaten apple lay beside his bed and an inquest later determined that cyanide poisoning was the cause of his untimely death. In 2009, the UK prime minister, Gordon Brown, apologised for Turing’s prosecution for “homosexual acts”, which many believe led to him lacing that apple with cyanide.
Of course, Turing isn’t the only exceptional person who thought about AI before computers became commonplace. Some others deserve a mention too. Ada Lovelace is one such person. Ada was the daughter of the poet Lord Byron. And, like Alan Turing, she died tragically young, at just 36.
She worked with Charles Babbage on his mechanical computer, the Analytical Engine. Babbage was a Victorian polymath, mathematician, inventor, mechanical engineer and aspiring politician. You can see half of his brain in the Science Museum in London. Oddly, the other half is six kilometres away, in the Hunterian Museum at the Royal College of Surgeons. Babbage had a simple but important ambition: to reduce the errors in the mathematical tables used for navigation and artillery. And so, he turned to the most cutting-edge technology of Queen Victoria’s era, mechanical gears and the punched cards of the Jacquard loom, to design a programmable computer that could compute such tables without error.
Babbage’s Analytical Engine had many of the parts found in modern-day computers. It had memory in which to store data, a logic unit that could do arithmetic and even a printer on which to produce output. It was a remarkable device that could read in and execute different programs. Babbage memorably described it as being able to “eat its own tail”, like “a locomotive that lays down its own railway”. Unfortunately, the Analytical Engine was never finished. If it had been, it would have been the equivalent of a Turing machine, able to compute anything today’s fastest supercomputer can – just far more slowly. It’s worth imagining how such a mechanical beast might have transformed Victorian Britain.
Ada Lovelace was clearly captivated by the possibility of what Babbage’s marvellous Analytical Engine might do. And she, in turn, captivated the older Charles Babbage. He called her “the Enchantress of Number”, but I suspect it was not just numbers that she enchanted.
To demonstrate the Analytical Engine’s potential, Lovelace wrote the world’s first complex computer program. It was a set of instructions to calculate Bernoulli numbers. The first computer programmer was thus a woman. As were many of the first “computers” – humans who performed complex astronomical and other calculations before one of Turing’s machines took over such arduous tasks.
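Lovelace’s program was laid out as a table of operations for the Engine, not as code in any modern sense. As a rough modern analogue – my sketch, not hers – the same numbers can be computed in a few lines of Python using a standard recurrence for the Bernoulli numbers.

```python
# Computing Bernoulli numbers, the calculation Lovelace programmed for
# the Analytical Engine. This uses the standard recurrence
#   sum_{k=0}^{m} C(m+1, k) * B_k = 0   (for m >= 1),
# solved for B_m at each step. A modern sketch, not Lovelace's method.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]                     # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))            # solve the recurrence for B_m
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```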
However, Lovelace had more ambitious dreams for the Analytical Engine than just calculating Bernoulli numbers. She wrote:
[I]t might act upon other things besides number . . . Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent . . . We may say most aptly that the Analytical Engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.
Holy moly! Where did this come from? Babbage was interested in calculating tables of numbers. But Lovelace somehow looked forwards over a century to a time when computers would manipulate sounds, images, videos and many other things besides numbers. Your smartphone is, at the end of the day, a small computer. And it is so versatile because it manipulates not just numbers but also sounds, images and videos. It is thus part music player, part camera, part video recorder and part game engine.
Despite Lovelace’s idea that computers could do more than just calculate numbers, she was also one of the first critics of artificial intelligence. Indeed, she was quick to dismiss the dream of building machines that would be creative: “The Analytical Engine has no pretensions whatever to originate anything,” she said. “It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths.” Lovelace’s complaint has haunted the field of AI ever since. Computers just do what we tell them to do. They lack that human spark of creativity. It’s a criticism of AI that we will test multiple times during this history.
Excerpted with permission from The Shortest History of AI, Toby Walsh, Pan Macmillan.