Brief History of AI


AI's Origins

The origins of Artificial Intelligence (AI) are usually traced to the theories and ideas of several ancient Greek philosophers and scientists, although an argument can be made that the Egyptians staked that claim first with advancements made around 800 B.C. In the ancient Egyptian city of Napata, a statue of the great Amun was constructed that could move its arm up and down and even speak to onlookers. Though the statue was not actually intelligent, its apparent intelligence made a real impression on the Egyptians of the time.

The first concrete advancement toward artificial intelligence came from Aristotle in the 4th century B.C. with syllogistic logic, the first formal deductive reasoning system (for example: all men are mortal; Socrates is a man; therefore Socrates is mortal). Others followed, such as Euclid's Elements, a 13-book collection presenting a formal model of reasoning, geometry, and number theory. The Muslim scholar al-Khwārizmī helped establish algebra (the word derives from the al-jabr in the title of his treatise), and the term "algorithm" comes from his own name. In the 13th century, Ramon Llull invented the zairja, a machine which attempted to generate ideas mechanically rather than mathematically. The 15th century brought the first modern measuring machine, the clock.

The 17th-19th Centuries

In the 17th century, Descartes opened the door to more advanced thinking when he proposed that animals were, in essence, nothing more than complex machines. Thomas Hobbes wrote the famous Leviathan, in which he argued that reason "is nothing but reckoning." This idea that human reasoning can be reduced to algebraic calculation was carried further by Gottfried Leibniz, who coined the term characteristica universalis for a universal language of reasoning. Leibniz also built a calculating machine based on Pascal's eight-digit calculator, the Pascaline; this upgraded machine could multiply and divide through repeated addition. Modern mathematical reasoning advanced with George Boole's The Laws of Thought, which introduced binary (Boolean) algebra.

Early 20th Century

The 20th century produced a revolution in formal logic with Bertrand Russell and Alfred North Whitehead's Principia Mathematica, which was followed by Gödel's incompleteness proof, Alan Turing's Turing machine, and Church's lambda calculus. Around this time, the term "robot" was coined by Karel Čapek in his play Rossum's Universal Robots, which portrayed robots as mechanical slaves created to work for humans. The robots are treated poorly because they are seen as mere machines, until a troubled scientist incorporates emotions into their programming; the robots then revolt, kill off nearly all the humans, and take over the world.

The true driving force behind AI came in the 1940s with the creation of the electronic computer, and advancements in computer theory and computer science led to advancements in AI as well. Since machines could now manipulate numbers and symbols, that manipulation was thought to capture, in some form, the basics of human thought. Walter Pitts and Warren McCulloch began work on neural networks, attempting to give a mathematical description of the human brain. In 1955, the Logic Theorist was created by Allen Newell and Herbert Simon; it went on to prove 38 of the first 52 theorems of Principia Mathematica. Norbert Wiener also put forward feedback theory, which held that all intelligent behavior derives from feedback mechanisms (for example, a thermostat reads the room temperature as 65 degrees, recognizes that it should be set at 72 degrees, and turns up the heat).
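Wiener's feedback loop is simple enough to sketch in code. The toy simulation below is illustrative only; the Thermostat class and its crude room physics are invented for this example, but the loop is the one he described: measure the output, compare it to a setpoint, act, and feed the result back in.

```python
# A minimal sketch of a feedback mechanism in Wiener's sense: the controller
# compares a measured value against a setpoint and acts to reduce the error.
# The Thermostat class and the toy room physics below are invented examples.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # desired temperature, in degrees

    def control(self, measured: float) -> str:
        # Feedback: the system's own output (room temperature) is measured
        # and fed back in to decide the next action.
        return "heat_on" if measured < self.setpoint else "heat_off"

def simulate(hours: int = 8) -> None:
    temperature = 65.0              # the room starts at 65 degrees
    thermostat = Thermostat(72.0)   # it should be set at 72 degrees
    for hour in range(hours):
        action = thermostat.control(temperature)
        # Toy physics: heating warms the room; otherwise it cools slightly.
        temperature += 2.0 if action == "heat_on" else -0.5
        print(f"hour {hour}: {temperature:.1f} degrees, {action}")

if __name__ == "__main__":
    simulate()
```

This line of work in feedback and symbol manipulation led up to the summer of 1956.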

The Summer of Artificial Intelligence

We all know 1967 as the Summer of Love, but the summer of 1956 could just as well be called the Summer of Artificial Intelligence. At Dartmouth, John McCarthy (known as the father of AI) called together noted scientists of the day, such as Marvin Minsky (a protégé of Pitts and McCulloch), Ray Solomonoff, Herbert Simon, Allen Newell, Bell Labs' Claude Shannon, and IBM's Nathaniel Rochester, for a month-long conference. McCarthy called it "the Dartmouth Summer Research Project on Artificial Intelligence," and the name stuck. He defined artificial intelligence as "the science and engineering of making intelligent machines."


1956-1979

The 20 years that followed brought many advancements in AI, such as McCarthy's LISP programming language, IBM's 701 general-purpose electronic computer, and Newell, Shaw, and Simon's General Problem Solver (GPS). Artificial intelligence research facilities were set up at MIT, Carnegie Mellon, and Princeton. Other programs cropped up, like Daniel Bobrow's STUDENT, which could solve high-school-level algebra word problems. Joseph Weizenbaum's ELIZA could carry on seemingly full conversations by matching patterns in the user's input and reflecting it back as questions, occasionally re-using earlier sentences.
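Weizenbaum's actual program ran a large script of decomposition and reassembly rules; the sketch below is only a toy illustration of the reflect-and-ask-back trick, with two invented rules and a tiny pronoun table.

```python
import re

# A toy sketch of ELIZA's core trick: match a pattern in the user's input,
# swap first-person words for second-person ones, and echo the content back
# as a question. The two rules here are invented; the real script had many.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    # "my exams" becomes "your exams", and so on.
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # fallback when no rule matches

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about your exams?
```

Thin as this is, the trick sustained surprisingly convincing exchanges, which is exactly what made ELIZA famous.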

In the 1960s, the American federal government started pushing harder for the development of AI, with the Department of Defense backing several programs in order to stay ahead of Soviet technology. The U.S. also started to market robotics commercially to various manufacturers. Expert systems rose in popularity with the creation of Edward Feigenbaum and Robert K. Lindsay's DENDRAL. DENDRAL had the ability to map the complex structures of organic chemicals, but like many AI inventions, its results began to tangle once too many factors were built into the program. The same predicament befell SHRDLU, a program that let the user ask questions and give commands in English to a computer-simulated robot arm. It went over well initially, but it worked only in toy domains and narrow areas of expertise. Researchers came to realize that the English language rests on far too much implicit information, now called commonsense knowledge, for a program of this kind to serve as a fully "common sense" problem solver.

With the shortcomings of many of these programs, the funding that had been plentiful at the beginning of the decade proved short-lived and started to dry up into the 1970s. This led to what is known as an AI Winter, a period in which popularity and funding decline at the same time across the AI community. The first one lasted from the early 1970s until about 1980.

The 1980s

In 1980, momentum started to swing back toward AI's supporters with the re-invention of expert systems. An expert system is set up in a particular domain and programmed to answer questions and identify solutions, backed by rules supplied by that domain's experts. The Japanese government was impressed with its trial runs and started funding the work, and the technology was also adopted by corporations like the Campbell Soup Company and General Motors. Carnegie Mellon developed an expert system called XCON in the 1980s, which became a large success. Fruition sprouted, but the fruit fell spoiled to the earth again when the second AI winter hit in 1987 with the collapse of the Lisp machine market and other AI hardware. Round three came in 1993, with expert systems losing much of their support.
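The if-then heart of such a system can be sketched as a toy forward-chaining rule engine. The rules below are invented for illustration and bear no relation to DENDRAL's or XCON's actual knowledge bases.

```python
# A toy forward-chaining rule engine in the spirit of 1980s expert systems:
# facts are asserted, and if-then rules (invented examples standing in for
# a real domain expert's) fire until nothing new can be concluded.

RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

def forward_chain(facts: set[str]) -> set[str]:
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire a rule when all its conditions hold and it adds a new fact.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fever", "has_rash"})))
# -> ['has_fever', 'has_rash', 'recommend_specialist', 'suspect_measles']
```

Commercial systems like XCON ran thousands of such rules, and the fragility described above showed up as those rule bases grew.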

1993-Present Day

From 1993 until the turn of the century, AI reached some incredible landmarks with the creation of intelligent agents, which use their surrounding environment to solve problems in the most efficient and effective manner. In 1997, a computer (named Deep Blue) beat a world chess champion for the first time. In 1995, the VaMP car drove an entire 158 km track without any help from human intelligence. In 1999, humanoid robots began to gain popularity, along with the ability to walk around freely. Since then, AI has played a big role in certain commercial markets and throughout the World Wide Web, while the more advanced AI projects, like fully capturing commonsense knowledge, have taken a back seat to more lucrative industries.