The History Of AI

Hugh O'Neal
October 26, 2022

Artificial intelligence (AI) is the buzzword in technology right now, and for good reason. Over the past few years, innovations and achievements that previously existed only in science fiction have gradually become reality.

Experts see artificial intelligence as a production factor that creates new sources of growth and changes the way industries work. This article traces the milestones in the history of AI and charts its path.

How has artificial intelligence evolved?

The history of artificial intelligence dates back to antiquity, when philosophers pondered the idea that artificial beings, mechanical humans, and other automatons could exist in some form.

Artificial intelligence became increasingly tangible throughout the 1700s and beyond, thanks to early thinkers. Philosophers speculated about how human thinking might be artificially mechanized and carried out by intelligent non-human machines. The thought processes that fueled interest in AI began when classical philosophers, mathematicians, and logicians considered the mechanical manipulation of symbols, which eventually led in the 1940s to the invention of early digital computers such as the Atanasoff-Berry Computer (ABC). That invention inspired the idea of creating an "electronic brain."

It took almost another decade for AI pioneers to shape the field we have today. Alan Turing, a mathematician, proposed a test to measure a machine's ability to reproduce human actions to a degree indistinguishable from human behavior. Later, the field of AI research was founded at a summer conference at Dartmouth College in the mid-1950s, where John McCarthy, a computer and cognitive scientist, coined the term "artificial intelligence."

Since the 1950s, many scientists, programmers, logicians, and theorists have helped solidify the modern understanding of artificial intelligence. Each new decade brought innovations and discoveries that changed our fundamental understanding of the field and turned AI from an unattainable fantasy into a tangible reality for current and future generations.

AI in 1940–1960: The birth of AI amid the rise of cybernetics

The period between 1940 and 1960 was marked by a combination of technological advances (accelerated by World War II) and a desire to understand how to bring together the functioning of machines and organic beings.

In the early 1950s, John von Neumann and Alan Turing did not coin the term AI, but they were the founding fathers of the technology behind it. They led the transition from computers based on 19th-century decimal logic (dealing with values from 0 to 9) to machines based on binary logic (which rely on Boolean algebra to manipulate strings of 0s and 1s).

The term "AI" can be attributed to John McCarthy of the Massachusetts Institute of Technology. Marvin Minsky (MIT) defined it as "the creation of computer programs that perform tasks which are currently performed more satisfactorily by humans" because they require high-level mental processes.

  • 1955: The term "artificial intelligence" was used in a proposal for a "two-month study of artificial intelligence involving ten people," submitted on August 31, 1955.
  • 1957: Frank Rosenblatt developed the perceptron, an early artificial neural network that enabled pattern recognition with a two-layer learning network (a minimal sketch follows this list).
  • 1958: John McCarthy developed the Lisp programming language, which became the most popular programming language used in artificial intelligence research.
  • 1959: Arthur Samuel coined the term "machine learning" when describing a program that learned to play checkers better than the person who wrote it.
  • 1966: Researchers concentrated on developing algorithms for solving mathematical problems. Joseph Weizenbaum created ELIZA, the first chatbot.
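
To make Rosenblatt's idea concrete, here is a minimal sketch of a perceptron in Python. The NumPy implementation, the learning rate, and the AND-function example are modern illustrative choices, not Rosenblatt's original hardware design:

```python
import numpy as np

def predict(weights, bias, x):
    # Fire (output 1) when the weighted sum of the inputs crosses the threshold.
    return 1 if np.dot(weights, x) + bias > 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    # Start from zero weights and nudge them toward each misclassified example.
    weights = np.zeros(len(samples[0]))
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights += lr * error * np.array(x)    # the perceptron update rule
            bias += lr * error
    return weights, bias

# Learn the logical AND function, a simple linearly separable pattern.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # expected output: [0, 0, 0, 1]
```

Because a single-layer perceptron draws only a straight decision boundary, it can learn AND but not, say, XOR; that limitation later fueled criticism of the approach.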

Artificial intelligence innovation grew rapidly in the '50s and '60s. New programming languages, robots and automata, research studies, and films depicting artificially intelligent beings grew in popularity, strongly emphasizing the importance of AI in the second half of the 20th century.

AI in the 1970s: The release of Star Wars

Like the 1960s, the 1970s brought accelerated progress, especially in robots and automata. However, artificial intelligence also faced challenges in the 1970s, such as declining government support for AI research.

  • 1970: WABOT-1, the first anthropomorphic robot, was built in Japan at Waseda University. Its features included movable limbs and the ability to see and to converse.
  • 1977: George Lucas's Star Wars was released. The film features C-3PO, a humanoid robot designed as a protocol droid who is "fluent in over six million forms of communication." Also featured is C-3PO's companion R2-D2, a small astromech droid incapable of human speech (the opposite of C-3PO); instead, R2-D2 communicates using electronic beeps.
  • 1979: The Stanford Cart became one of the earliest examples of an autonomous vehicle. It successfully crossed a chair-filled room without human intervention in about five hours.

The period from 1974 to 1980 was the first AI winter, a stretch during which computer scientists faced a severe shortage of government funding for AI research. It slowed the pace of technology development.

AI in 1980–1990: Expert systems

Expert systems drove massive development in AI at the beginning of this period, but the craze faded again in the late 1980s and early 1990s. Programming such knowledge required enormous effort, and with 200 to 300 rules, a "black box" effect often set in: it was unclear how the machine reasoned. It is worth recalling that in the 1990s, the term "artificial intelligence" became almost taboo.

  • 1980: Waseda University in Japan built WABOT-2, a humanoid musician robot capable of communicating with people, reading musical scores, and playing melodies of medium difficulty on an electronic organ.
  • 1981: Japan's Ministry of International Trade and Industry allocated $850 million to the Fifth Generation Computer project, which aimed to develop computers that could translate languages, carry on conversations, interpret images, and reason like humans.
  • 1984: Electric Dreams, a film about a love triangle between a man, a woman, and a personal computer, was released. Directed by Steve Barron, it imagined computers becoming indistinguishable from humans.
  • 1986: A Mercedes-Benz van equipped with cameras and sensors, developed under Ernst Dickmanns, became one of the first self-driving cars.
  • 1988: Rollo Carpenter developed the Jabberwacky chatbot to "simulate natural human conversation in an interesting, entertaining and humorous manner."
  • 1997: Deep Blue became the first computer chess program to defeat a reigning world champion, Garry Kasparov.
  • 1998: Dave Hampton and Caleb Chung created Furby, the first domestic or pet robot.

The period from 1987 to 1993 was the second AI winter. Investors and governments again stopped funding AI research because of the high costs and meager results. Even a successful expert system such as XCON proved too expensive to maintain.

2000–2010: Rapid development of AI

The new millennium dawned, and once Y2K fears subsided, AI continued its upward trajectory. As expected, more AI-powered creations appeared, along with creative media (movies in particular) about the concept of AI and where it might lead.

  • 2001: Steven Spielberg released A.I. Artificial Intelligence, a film that revolves around David, a childlike android programmed to love.
  • 2002: AI entered the home in the form of the Roomba, an autonomous robot vacuum cleaner.
  • 2004: NASA's robotic exploration rovers Spirit and Opportunity navigated the surface of Mars without human intervention.
  • 2006: AI entered the business world. Companies such as Facebook, Twitter, and Netflix began using AI.
  • 2009: Google secretly developed a self-driving car. By 2014, it had passed the Nevada self-driving test.

The boom in AI owes much to easier access to vast amounts of data. To use image classification algorithms (the canonical cat-recognition example), you previously had to collect and label the samples yourself; now a simple Google search can turn up millions of pictures.

AI from the 2010s to the present

The current decade has been significant for innovation in artificial intelligence. Since 2010, artificial intelligence has become part of our daily lives. We use smartphones with voice assistants and computers with «intelligent» features most of us take for granted. AI is no longer a pipe dream.

  • 2011: IBM's Watson won the quiz show Jeopardy!, which features difficult questions and wordplay. Watson proved that it could understand natural language and quickly work through tricky questions.
  • 2011: Apple introduced Siri, a virtual assistant for the Apple iOS operating system. Siri uses a natural-language user interface to observe, infer, respond, and make recommendations to its human user.
  • 2012: Google launched Google Now, an Android app feature that could predict and surface information the user was likely to need.
  • 2014: Amazon introduced Alexa, a home assistant that evolved into smart speakers acting as personal assistants.
  • 2016: Hanson Robotics created Sophia, a humanoid robot. She was granted status as the first "robot citizen."
  • 2018: Google developed BERT, the first "unsupervised, deeply bidirectional language representation" that can be adapted to a variety of natural language tasks through transfer learning (see the sketch after this list).
  • 2020: Baidu released its LinearFold artificial intelligence algorithm to scientific, medical, and healthcare teams developing a vaccine during the early stages of the SARS-CoV-2 (COVID-19) pandemic.
  • 2022: Development of the metaverse, a persistent digital environment where users work and play together, gathered pace. The concept has been a hot topic since Mark Zuckerberg announced plans to build it by merging virtual reality technology with the social underpinnings of his Facebook platform.
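
For readers curious what transfer learning with BERT looks like in practice, here is a minimal sketch using the Hugging Face transformers library (an illustrative assumption; Google's original release shipped with TensorFlow). It loads the public bert-base-uncased checkpoint and extracts the contextual token representations on which a small task-specific layer could be trained:

```python
# Requires: pip install torch transformers
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run it through the pre-trained bidirectional encoder.
inputs = tokenizer("The history of AI is long.", return_tensors="pt")
outputs = model(**inputs)

# One 768-dimensional contextual vector per token; training a small
# classifier head on these vectors is the essence of transfer learning.
print(outputs.last_hidden_state.shape)  # torch.Size([1, num_tokens, 768])
```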

AI has now evolved to a remarkable level. Deep learning, big data, and data science are all the rage right now. Companies like Google, Facebook, IBM, and Amazon use AI to create unique devices and services. We can only guess at what lies ahead.

The future of AI

Soon, AI may look like something we can't do without. The future has already arrived: it's difficult to remember the last time we called a company and spoke directly to a person. These days, the machines even call us! One can imagine interacting with an expert system in fluid dialogue, or holding a conversation in several languages translated in real time.

We can also expect self-driving cars on the road within the next twenty years (and that is a conservative forecast). In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities across all tasks, like the sentient robots we are used to seeing in movies.

It seems unlikely this will be achieved within the next 50 years. Even if it were possible, ethical concerns would be a serious barrier to implementation. When that time comes (or better yet, before it comes), we will need to talk seriously about machine policy and ethics (ironically, both distinctly human subjects). In the meantime, AI will keep improving and steadily working its way into society.