As befits such a broad and complex technology, Artificial Intelligence has a complicated history. This article provides a high-level overview of that history, touching on some of the major milestones, but it’s by no means comprehensive. You could write a book about every era of AI’s development, and many people have, but this post should familiarize you with the greatest hits of AI’s journey to date.

AI’s history is one of cycles: sprints of rapid and exciting advancement interspersed with frustrations and setbacks, usually when progress stalled against the limits of that era’s computing power. These dormant periods are often referred to as “AI Winters,” the two most notable stretching through the mid-1970s and from the late 1980s into the early 1990s. As progress slowed, the public’s interest lagged, funding dropped, and AI research fell into hibernation.

But as computers have grown more powerful, interest in AI has always renewed, and today is the most exciting time to be working in the field. However, we’re getting ahead of ourselves. Let’s go back to the beginning of the history of AI.  

The Beginnings

The history of AI, and its very concept, was born in a paper written by Alan Turing entitled Computing Machinery and Intelligence. Published in 1950 in the journal Mind, Turing’s article outlined the “Imitation Game,” what we today call the “Turing Test”: a then-hypothetical challenge in which a computer attempts to convince a person that they are interacting with a human, not a machine. This seminal paper was one of the first to plant the idea of intelligent machines in the public mind.

With the idea raised, work began in earnest to build a machine that could attempt Turing’s challenge. Before the decade was out, early computer scientists had developed the foundational algorithms that would form the basis of modern neural networks. The problem they encountered was computational power: 1950s computers were enormous, expensive to run, and lacked the power to truly put those algorithms to the test.

Progress

1956 saw the Dartmouth Summer Research Project on Artificial Intelligence. Organized by John McCarthy, then an Assistant Professor of Mathematics at Dartmouth College, this small summer workshop gathered bright minds from a breadth of disciplines to advance the new field of artificial intelligence–a term coined by McCarthy for the event. The gathering is largely credited with turning AI from a fringe pursuit into a legitimate field of research.

One of the most exciting programs presented at the Dartmouth workshop was Logic Theorist. Written by Allen Newell, Herbert A. Simon, and Cliff Shaw, Logic Theorist was designed to prove mathematical theorems from Principia Mathematica, the landmark work published between 1910 and 1913 by British philosophers and mathematicians Alfred North Whitehead and Bertrand Russell. The program proved 38 of the 52 theorems it was given, in some cases more elegantly than Whitehead and Russell themselves.

By creating a computer program that could “reason” through problems the way a human might, Newell, Simon, and Shaw had produced what is widely regarded as the first true artificial intelligence program.

In 1958 Frank Rosenblatt, a psychologist and AI pioneer at the Cornell Aeronautical Laboratory, created another early milestone of AI. He called it the Perceptron. The machine–which weighed five tons and occupied an entire room–used 400 interlinked photocells connected to a layer of artificial “neurons.” After dozens of trials, the Perceptron taught itself to distinguish punch cards with holes on the left side from ones with holes on the right–“the first machine which is capable of having an original idea,” in Rosenblatt’s words. His invention laid the foundation for the neural networks of today.
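
Rosenblatt’s learning rule is simple enough to sketch in a few lines of modern code. The snippet below is a toy illustration only: the 4x4 “punch card” encoding, the helper names, and the training loop are invented for this example, and the original Perceptron was of course room-sized analog hardware rather than Python.

```python
# A toy sketch of the perceptron learning rule (illustrative only; the
# 4x4 "punch card" encoding below is invented for the example).
import numpy as np

rng = np.random.default_rng(0)

def make_card():
    """Simulate a punch card as a flattened 4x4 grid with holes punched
    on the left half (label 0) or the right half (label 1)."""
    label = int(rng.integers(0, 2))
    card = np.zeros((4, 4))
    cols = (0, 1) if label == 0 else (2, 3)
    card[rng.integers(0, 4, size=4), rng.choice(cols, size=4)] = 1.0
    return card.ravel(), label

# Start from zero weights, then apply Rosenblatt's rule: nudge the weights
# a little whenever the prediction is wrong.
w = np.zeros(16)
b = 0.0
lr = 0.1

for _ in range(200):                      # a few hundred example cards
    x, y = make_card()
    y_hat = int(w @ x + b > 0)            # threshold "neuron"
    w += lr * (y - y_hat) * x             # update only on mistakes
    b += lr * (y - y_hat)

# After training, the perceptron should separate left-holed from right-holed cards.
correct = sum(int(w @ x + b > 0) == y for x, y in (make_card() for _ in range(100)))
print(f"accuracy on new cards: {correct}%")
```

The core idea survives in today’s neural networks: start from arbitrary weights and adjust them a little after every mistake until the examples are separated.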

Unfortunately, the momentum of the 50s wouldn’t last, and as advancement in AI sputtered and stalled against the hard limits of the era’s computing power, the 1970s heralded the first AI Winter.

Progress would resume in fits and starts throughout the 80s and 90s. Over the intervening decades, however, Moore’s Law–the observation that the number of transistors on a chip, and with it affordable computing power, doubles roughly every two years–had been tirelessly working away. By the late 1990s, researchers finally had the muscle required to kickstart their work.

New Frontiers

The year 1997 thrust AI back into the spotlight when Deep Blue, a chess-playing supercomputer built by IBM, beat reigning world champion Garry Kasparov in a six-game match. Kasparov drew on hard-earned wells of skill, experience, and intuition. Deep Blue, however, could evaluate some 200 million chess positions per second and simply search its way to the strongest move.

Deep Blue’s methods may seem inelegant, but they proved effective. Compared to modern AI, IBM’s supercomputer was rudimentary–today you could run a chess program on a laptop that would wipe the board with Deep Blue. But this victory represented a huge and very public accomplishment for the field. 
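
What made Deep Blue’s approach work was exhaustive look-ahead: enumerate the possible moves, the replies to each, and so on, score the resulting positions, and choose the move with the best guaranteed outcome. That family of techniques is the classic minimax search, usually combined with alpha-beta pruning to skip branches that cannot matter. The sketch below is a heavily simplified stand-in, not Deep Blue’s actual code; to stay short and runnable it plays a toy stick-taking game instead of chess, and the chess-specific evaluation and custom hardware are left out entirely.

```python
# Minimax search with alpha-beta pruning, the classic game-tree technique
# Deep Blue's search was built on. The "game" here is a toy: players take
# 1-3 sticks in turn, and whoever takes the last stick wins.

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def minimax(sticks, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return (score, best_move) from the maximizing player's point of view."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return (-1, None) if maximizing else (1, None)

    best_move = None
    if maximizing:
        best = float("-inf")
        for move in legal_moves(sticks):
            score, _ = minimax(sticks - move, False, alpha, beta)
            if score > best:
                best, best_move = score, move
            alpha = max(alpha, best)
            if alpha >= beta:             # prune branches that cannot change the result
                break
        return best, best_move
    else:
        best = float("inf")
        for move in legal_moves(sticks):
            score, _ = minimax(sticks - move, True, alpha, beta)
            if score < best:
                best, best_move = score, move
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best, best_move

score, move = minimax(sticks=10, maximizing=True)
print(f"from 10 sticks, take {move} ({'winning' if score > 0 else 'losing'} position)")
```

Deep Blue applied the same look-ahead-and-score idea to chess, with a hand-tuned evaluation function and purpose-built hardware doing the heavy lifting.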

The late 90s also saw convolutional neural networks come into their own; today CNNs are key to many computer vision systems. The best-known early example was described in the 1998 paper Gradient-Based Learning Applied to Document Recognition by Yann LeCun and his collaborators, which outlined LeNet, a CNN that learned to recognize handwritten digits.
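
A network in that family is easy to express with today’s deep-learning libraries. What follows is a minimal PyTorch sketch loosely in the spirit of LeNet, not the original 1998 implementation; the layer sizes and the 28x28 input are illustrative choices.

```python
# A minimal convolutional network loosely in the spirit of LeNet.
# Modern PyTorch sketch with illustrative layer sizes, not the 1998 original.
import torch
from torch import nn

class TinyLeNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 28x28 grayscale digit -> 6 maps of 24x24
            nn.Tanh(),
            nn.AvgPool2d(2),                  # downsample to 12x12
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16 maps of 8x8
            nn.Tanh(),
            nn.AvgPool2d(2),                  # -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120),
            nn.Tanh(),
            nn.Linear(120, num_classes),      # one score per digit class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A single forward pass on a dummy batch of eight 28x28 "handwritten digits".
model = TinyLeNet()
dummy = torch.randn(8, 1, 28, 28)
print(model(dummy).shape)                     # torch.Size([8, 10])
```

The convolutional layers slide small filters across the image to pick out local strokes and curves, which is why this architecture suits handwriting so well.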

Netflix supplied another landmark with the Netflix Prize, a public contest launched in 2006 that offered $1 million to whichever team could best improve its recommendation system: given a user’s history of film ratings, predict how they would rate films they hadn’t yet seen. In 2009 the team BellKor’s Pragmatic Chaos took the prize with an algorithm roughly 10% more accurate than the one Netflix was already using.
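
The winning entry was a large ensemble of models, but one workhorse technique the contest popularized was matrix factorization: learn a small vector of latent “taste” factors for each user and each film, and predict a rating as the dot product of the two. The sketch below is a minimal, self-contained illustration with random data standing in for real viewing histories; none of it is taken from the prize-winning code.

```python
# A minimal matrix-factorization recommender of the kind the Netflix Prize
# popularized. Random (user, film, rating) triples stand in for real data.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_films, k = 100, 50, 8              # k latent "taste" factors

ratings = [(rng.integers(n_users), rng.integers(n_films), rng.integers(1, 6))
           for _ in range(2000)]

U = 0.1 * rng.standard_normal((n_users, k))   # user factors
F = 0.1 * rng.standard_normal((n_films, k))   # film factors
lr, reg = 0.01, 0.05

for epoch in range(20):
    for u, f, r in ratings:
        err = r - U[u] @ F[f]                    # error on one observed rating
        U[u] += lr * (err * F[f] - reg * U[u])   # gradient step on user factors
        F[f] += lr * (err * U[u] - reg * F[f])   # ...and on film factors

# Recommend: score every film for user 0 and surface the highest predictions.
scores = U[0] @ F.T
print("top picks for user 0:", np.argsort(scores)[::-1][:5])
```

With real ratings in place of the random triples, the same few lines capture the basic collaborative-filtering idea: people with similar tastes end up with similar factor vectors, so their ratings help predict each other’s.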

That same year also saw the release of ImageNet, one of the first truly massive labeled datasets and an early milestone of Big Data, which has supercharged the advancement of AI by providing immense pools of data to draw from.

Another highly public victory came in 2011, when Watson, another supercomputer developed by IBM, beat champions Ken Jennings and Brad Rutter in an exhibition match on the quiz show Jeopardy! (netting a $1 million prize in the process). Both Deep Blue and Watson were products of IBM’s Grand Challenge program, which focused on special, highly public projects designed to excite interest in emerging technology.

Later that year marked one of AI’s first forays directly into the hands of the public: the launch of Siri on Apple devices. In the 1950s, the most basic forms of AI required a computer the size of a house. Today, you carry vastly more capable artificial intelligence in your pocket, on a slim device just a few inches across. Millions of people interact with Siri, and now a host of other virtual assistants, every day. AI had hit the main stage.

In 2014 a program was widely reported to have passed the test Turing had set out more than half a century earlier. Eugene Goostman, an AI chatbot created by a small team of Russian and Ukrainian developers, convinced 33% of the judges at a competition marking the 60th anniversary of Turing’s death that it was a 13-year-old boy from Ukraine, not a machine. The claim remains contested, but Turing’s predictions had at least begun to come true.

Three years later, DeepMind’s AlphaGo program once again beat a reigning human world champion, this time at the Chinese board game Go, considered far more complex than chess. Trained extensively on both human games and games played against itself, AlphaGo beat Ke Jie, the world’s top-ranked player, in three consecutive games (afterward AlphaGo was awarded the highest professional ranking by the Chinese Weiqi Association).

Present Day

That brief lesson in the history of AI brings us to today, a very exciting time. Aside from Moore’s Law, the rise of Big Data has probably been the most significant factor in AI’s advancement to date. Today we have a beautiful pairing of vast stores of information to feed our AI and computers powerful enough to process all of it. 

Just how big is Big Data? The rate at which humanity generates data is growing exponentially: by some estimates, ninety percent of the digital bits and bytes in existence today were created in just the last two years, and by 2020 we were expected to have generated some 44 zettabytes of digital information–forty times more bytes than there are stars in the observable universe. The more data we have to work with, the more AI can do.

Today AI is advancing faster than at any point in its history and influencing our world in increasingly profound ways. You may not realize it, but AI probably touches your life dozens of times a day–in the way we move through our cities, the way we work, the information we consume, how we care for our health, and many other aspects of everyday life. The Xtract AI team strives to be a leader in the field, finding new ways that AI can improve human lives (like helping us recycle better and fight Covid-19) and keeping the momentum going.