Artificial Intelligence and Its Comprehensive History


Image by Claude AI UK

Artificial intelligence, or AI, has been a prominent term throughout the past decade, and in recent years the focus on it has intensified, particularly during the COVID-19 pandemic. With its growing popularity, people have gained some understanding of what AI can do, given how intricately our lives are intertwined with this technology. Regardless of your profession, you are bound to encounter its effects on your industry now, and likely even more so in the future. More precisely, this revolutionary field of computer science has fundamentally altered how human lives unfold, how work is conducted and, perhaps most significantly, how we perceive and conceptualize information.


Image by NASA on Unsplash

We all likely have a glimpse of what AI can do; most of us have experienced its utility, and many are using various tools in its realm to make our tasks easier and faster. But where does the concept come from? Who were its pioneers and implementers? In this article, SEAMEO STEM-ED will take you through its decades-long journey and provide you with a comprehensive history of this marvelous invention.

The Birth of AI (1950s - 1960s)

John von Neumann with the Institute of Advanced Studies (IAS) computer, around 1951. Von Neumann persuaded IAS to expand from doing theoretical studies to building a real computer, with meteorology calculations as a key test of its scientific value. Courtesy Computer History Museum, Object ID 500004275 © Alan Richards via the Shelby White and Leon Levy Archives Center, Institute for Advanced Study


If we examine human history, technology and innovation have emerged to make our lives and jobs easier, accelerated by hardships such as natural disasters, economic setbacks, political unrest, social inequalities, and wars, and have in turn shaped what comes after. In the 1950s and the decade that followed, in the aftermath of World War II, people became adept at putting machines and computational apparatuses to use. Two figures are considered the founding fathers of the technology behind AI (though the term had not yet been coined): John von Neumann, a child prodigy who became one of the century's most distinguished mathematicians, and Alan Turing, a logician who devised the first systematic method for decrypting messages from the Enigma machine. These two exceptional minds formalized the idea of a machine capable of computing and executing the tasks for which it was programmed. Von Neumann laid the foundation for the fundamental architecture of computer design, while Turing, in 1950, raised the question of whether a machine could be intelligent in what is now known as the "Turing Test" (sometimes called the "Imitation Game"), which probes the boundary between human and machine through a teletype dialogue.

'Always inventing, inventing, inventing': McCarthy at work in his artificial intelligence laboratory at Stanford (AP). Image on Independent

In 1956, the conference considered the birthplace of the term "AI" took place at Dartmouth College, hosted by John McCarthy, a computer scientist closely associated with the term and recognized as one of the "founding fathers of AI." The event included key figures such as Marvin Minsky, a mathematician and computer scientist likewise counted among the fathers of AI; Nathaniel Rochester, chief architect of IBM's first scientific computer and of the prototype of its first commercial computer; and Claude Shannon, a mathematician known as the "father of information theory." This eight-week workshop laid a profound foundation for AI and its research program. The proposal for the conference included the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Around the same time, Herbert Simon (a prolific political scientist who would receive the Nobel Prize in Economics in 1978) and Allen Newell (a prominent computer scientist and cognitive psychologist) introduced the "Logic Theorist," the first program deliberately engineered to perform automated reasoning, which has been described as "the first artificial intelligence program," while McCarthy convinced the participants to adopt the term "Artificial Intelligence" to describe the field.


Left: Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky). Right: A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955)

The participants in this conference included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. Anecdotally, the conference fell short of McCarthy's expectations: participants came and went rather than staying for the intended duration, and they failed to agree on standard methods for the field.

The AI Winter (1970s-1980s)


Image by Alexandre Debiève on Unsplash

Despite the period often called "the golden years" of early enthusiasm, in which algorithms demonstrated impressive computational capabilities through ELIZA (the first chatbot, created by Joseph Weizenbaum and released in 1966), which simulated conversation between humans and machines, and WABOT-1, the first intelligent humanoid robot, built in Japan, AI research stagnated as funding dried up. As a consequence, it lost the public's interest.

The Boom of AI (1980s)


Image by Google DeepMind on Pexels

After its winter, AI bounced back with the "expert system," a program that emulates the decision-making ability of a human expert. In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University. In the same year, the commercial landscape witnessed a groundbreaking moment with the introduction of XCON (eXpert CONfigurer), the pioneering expert system. Designed to streamline the ordering of computer systems, XCON automated component selection based on each customer's specific needs. The following year, in 1981, the Japanese government embarked on the ambitious Fifth Generation Computer project, allocating a substantial $850 million (equivalent to over $2 billion today). The project's lofty goal was to develop computers with the unprecedented ability to translate, engage in human-like conversation, and reason at a human level.
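To give a feel for how expert systems of this era worked, here is a minimal, hypothetical sketch (not XCON's actual rules, which numbered in the thousands): expert knowledge is encoded as if-then rules, and the system fires every rule whose condition matches the customer's order.

```python
# Toy rule-based configurer in the spirit of 1980s expert systems.
# Each rule pairs a condition on the order with components to add.
# The rules and component names here are illustrative inventions.
RULES = [
    (lambda order: order["users"] > 10, ["extra_memory_board"]),
    (lambda order: order["needs_storage"], ["disk_drive", "disk_controller"]),
    (lambda order: True, ["cpu", "power_supply"]),  # always required
]

def configure(order):
    """Fire every rule whose condition holds and collect its components."""
    components = []
    for condition, parts in RULES:
        if condition(order):
            components.extend(parts)
    return components

print(configure({"users": 25, "needs_storage": True}))
# → ['extra_memory_board', 'disk_drive', 'disk_controller', 'cpu', 'power_supply']
```

Real systems such as XCON chained thousands of such rules and could revise earlier choices, but the core idea, replacing a human expert's checklist with explicit machine-readable rules, is the same.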


Image by AAAI on AAAI Website

However, by 1984, the AAAI warned of another potential "AI winter," as funding and interest were once again declining, posing significant challenges for future research. In spite of these concerns, 1985 brought a notable milestone with the demonstration of AARON, an autonomous drawing program, at the AAAI conference, showcasing continued advances in the field. In 1986, Ernst Dickmanns (a German aeronautics engineer) and his team at Bundeswehr University Munich built and demonstrated the first driverless car, also known as a robot car. It could navigate roads at speeds of up to 55 mph, provided the roads were free of obstacles and other drivers. The following year, 1987, marked the commercial launch of Alacrity by Alactrious Inc. Alacrity was the pioneering strategic managerial advisory system, built on a sophisticated expert system with over 3,000 rules, setting a precedent for AI-assisted strategic decision-making.

The Second AI Winter (1980s to early 1990s)


Photo by ThisisEngineering RAEng on Unsplash

As the AAAI had warned, funding halts and pullbacks from both public and private organizations recurred, driven by unmet expectations of the technology and its high cost relative to low returns. One example that encapsulates this is the conclusion of the Fifth Generation project, the decade-long initiative by Japan's Ministry of International Trade and Industry (MITI) to create computers using massively parallel computing and logic programming. Another is the collapse of the market around LISP (list processing), the second-oldest high-level programming language, as cheaper and more accessible machines, including those from IBM and Apple, became available.

Age of Machine Learning and Neural Networks (1990s-2000s)


Photo by Pietro Jeng on Unsplash

The 1990s witnessed the rise of machine learning, a subfield of AI that focuses on creating algorithms capable of learning and improving from data. Neural networks, inspired by the structure of the human brain, gained popularity. Machine learning techniques, such as decision trees, support vector machines, and neural networks, were employed to build AI systems capable of recognizing patterns and making predictions.
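As a loose illustration of the core idea behind this era (a hypothetical example, not drawn from the article), consider the perceptron, the simplest neural-network unit: rather than being programmed with explicit rules, it adjusts its weights from labeled examples until its predictions match the data. Here it learns the logical AND pattern:

```python
# Minimal perceptron: learn weights from labeled examples (logical AND).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge weights toward the target whenever the prediction is wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: output is 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Modern neural networks stack millions of such units and train them with more sophisticated update rules, but the principle of improving from data rather than hand-written rules is the same shift the 1990s set in motion.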

The 1990s and 2000s saw many milestones for AI. In 1997, the reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer. This highly publicized man-versus-machine contest demonstrated a leap into the unknown potential of the technology. In the same year, speech-recognition software developed by Dragon Systems was released for Windows, a significant step into the realm of spoken-language interpretation.

Big Data and Deep Learning (2010s-Present)


Image by Google DeepMind on Pexels 

We are currently in the age of "big data," characterized by the ability to amass amounts of information far too vast for any individual to process. Integrating artificial intelligence in this context has proven highly advantageous across industries such as technology, banking, marketing, and entertainment. It has become evident that, even when algorithmic improvements are limited, the sheer volume of big data and the power of large-scale computing enable artificial intelligence to learn effectively through brute force.

To highlight some remarkable events: in 2011, IBM's Watson showcased its natural-language understanding and problem-solving abilities by winning the quiz show Jeopardy!. Google introduced "Google Now," a predictive information feature, in 2012. The chatbot "Eugene Goostman" was claimed to have passed a Turing test in 2014. In 2018, IBM's "Project Debater" held its own in debates with seasoned opponents. Additionally, Google's AI program "Duplex" demonstrated its virtual-assistant capabilities by seamlessly booking a hairdresser appointment over the phone, leaving the recipient unaware they were talking to a machine.

The Future of AI


Image by Google DeepMind on Pexels

What does the future hold? Anticipation for the next significant breakthrough is palpable, with experts, policymakers, and businesses actively speculating on forthcoming innovations. Seamless dialogue with machines has become increasingly viable thanks to advances such as ChatGPT, Bing, Bard, and numerous others. Transportation, a critical aspect of human life, has become a focal point of societal discourse: the deployment of assisted-driving and autonomous vehicles on the streets of various countries has ignited both optimism and skepticism.

Image by Tara Winstead on Pexels

Within the business sector, cutting-edge machine-learning algorithms and personalized task execution are rapidly reshaping how businesses operate. In education, the adoption of personalized learning, adaptive learning platforms, and automated assessment methods is fundamentally altering traditional approaches. In science, transformative advances span a broad spectrum of domains, including robotics, precision medicine, collaborative research, and automated laboratory processes.

All these applications are current examples, offering a glimpse of what is in progress. The technology's potential and capabilities may exceed our current expectations. Nevertheless, ensuring the responsible regulation of AI will be imperative to keep the future of these machines in the hands of their creators.

Image by Harvard on Harvard Website


SEAMEO STEM-ED has long emphasized the importance of science and technology. As part of our mission to enhance Southeast Asian education systems, AI will be more inclusively integrated into our programs and initiatives. Dr. Kritsachai Somsaman, our Director, provides a comprehensive introduction for teachers and educators to better understand AI and its usage in the article titled "Embracing the Challenge: Navigating the Use of AI in Education," published on The Head Foundation website.

Read full article: