
Cognitive Science Artificial Intelligence: Simulating the Human Mind to Achieve Goals

Samantha Luber
University of Michigan

Ann Arbor, U.S.A.

E-mail: [email protected]

Abstract: This paper provides a general overview of the interdisciplinary study of cognitive science, specifically the area of the field involving artificial intelligence. In addition, the paper will elaborate on current research for cognitive science artificial intelligence, highlight the importance of this research by providing specific examples of its applications in present society, and briefly discuss future research opportunities for the overlapping fields of cognitive science and artificial intelligence.

Keywords: Artificial intelligence, cognitive science

I. AN OVERVIEW OF COGNITIVE SCIENCE ARTIFICIAL INTELLIGENCE

Since ancient times, people have conducted countless experiments in attempts to better understand the human mind. These efforts eventually led to the development of psychology. In the mid-1950s, cognitive science emerged as an extension of psychology; it is concerned with how information is stored and transferred in the human mind. It is an interdisciplinary science, linking psychology, linguistics, anthropology, philosophy, neuroscience, sociology, and the learning sciences [1]. A useful tool for cognitive researchers, artificial intelligence is the branch of computer science concerned with creating simulations that model human cognition. In addition to serving as a research tool, artificial intelligence also has a scientific aspect of its own, focusing on the study of cognitive behavior in machines [2]. Developed to encapsulate both early cognitive science and intelligence simulated by machines, modern cognitive science artificial intelligence focuses on how humans, animals, and machines store information associated with perception, language, reasoning, and emotion.

A. Artificial Intelligence in Cognitive Science

The central principle of cognitive science is that a complete understanding of the mind cannot be obtained without analyzing the mind on multiple levels. In other words, numerous techniques must be used to fully evaluate and understand a process of the mind. Artificial intelligence is a powerful approach that allows researchers in cognitive science to study behavior through computational modeling of the human mind [2]. Approaches to simulating how the mind is structured range from creating and observing artificial neurons to representing the mind as a high-level collection of rules, symbols, and plans [3].

B. Cognitive Science in Artificial Intelligence

In addition to simulating intelligence to model and study the human mind, artificial intelligence involves the study of cognitive phenomena in machines and attempts to implement aspects of human intelligence in computer programs. These programs can be used to address a variety of complex problems, with the goal of doing so more efficiently than a human. New theories in cognitive science often lead to improved artificial intelligence agents that better simulate the human thought process [2]. Achievements in cognitive science help improve artificial simulation of the human mind; in turn, more accurate artificial intelligence provides better models of the human mind for cognitive science researchers to use. Although the goals of cognitive science and artificial intelligence differ, collaboration between the two fields is essential to the success of both. Cognitive science artificial intelligence refers to the interdisciplinary study that overlaps these areas in an attempt to achieve both cognitive science and artificial intelligence goals.

II. CURRENT RESEARCH IN COGNITIVE SCIENCE ARTIFICIAL INTELLIGENCE

A fundamental goal of cognitive science artificial intelligence is to use the power of computers to understand and supplement human thinking. In artificial intelligence, an intelligent agent refers to a computer-simulated entity that interacts with its environment and works to achieve goals, both simple and complex [3]. By observing which problems an intelligent agent can solve and how the computer program solves them, researchers in the cognitive science field aim to develop theories about how the brain learns and constructs logical rules, how intelligence arises within the brain, which pieces of information humans remember or forget, and what kinds of resources the human mind uses [2].
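As a rough illustration of the agent abstraction described above, the following Python sketch shows a minimal perceive-decide-act loop. It is not drawn from any of the cited systems; the Environment interface (observe/apply) and the goal test are hypothetical placeholders.

    # A minimal sketch of the intelligent-agent abstraction: an entity that
    # repeatedly perceives its environment, chooses an action, and acts,
    # until its goal is satisfied. The Environment object and the goal test
    # are hypothetical placeholders, not part of any system cited here.

    class SimpleAgent:
        def __init__(self, policy, goal_test):
            self.policy = policy          # maps a percept to an action
            self.goal_test = goal_test    # returns True when the goal is met

        def run(self, environment, max_steps=100):
            for _ in range(max_steps):
                percept = environment.observe()
                if self.goal_test(percept):
                    return True           # goal achieved
                action = self.policy(percept)
                environment.apply(action) # act on the environment
            return False                  # goal not reached within the budget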

In addition to gaining better insight into the nature of the human mind, the ultimate goal of cognitive science artificial intelligence is to eventually develop human-level machine intelligence. At this level, the intelligent agent would be indistinguishable from a human, a challenge known as the Turing test [3]. Because intelligent agents often face situations with incomplete information, encoding data for all possible situations is a limited approach to simulating human intelligence [4]. In other words, because there are infinitely many situations that can arise in the real world, it is impossible to design an intelligent agent pre-programmed with solutions to all of the problems it may face. Instead, an intelligent agent must be equipped with the ability to make decisions based on the information it has and to re-evaluate its past solutions to improve future decisions. Consequently, a more fundamental understanding of how the human mind learns and solves problems is necessary to design an intelligent agent with the same intelligence. Currently, numerous research projects are making progress toward both of these goals: simulating human intelligence to study the human mind, and using simulated intelligence to solve complex problems.
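One way to make this point concrete is an agent that keeps a running estimate of how well each action has worked in each situation it has encountered and revises that estimate after every outcome. The Python sketch below illustrates that idea only; it is not one of the cited architectures, and all names and parameters are invented.

    import random
    from collections import defaultdict

    # Illustrative sketch: an agent that cannot be pre-programmed for every
    # situation, so it stores the outcomes of past decisions and re-evaluates
    # them to improve future choices. All names here are hypothetical.

    class LearningAgent:
        def __init__(self, actions, learning_rate=0.1, exploration=0.1):
            self.actions = actions
            self.learning_rate = learning_rate
            self.exploration = exploration
            # value[(situation, action)] = estimated quality of that decision
            self.value = defaultdict(float)

        def decide(self, situation):
            # Occasionally explore; otherwise pick the best-valued known action.
            if random.random() < self.exploration:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.value[(situation, a)])

        def learn(self, situation, action, outcome):
            # Re-evaluate a past decision: nudge its value toward the outcome.
            old = self.value[(situation, action)]
            self.value[(situation, action)] = old + self.learning_rate * (outcome - old)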

A. Simulating Theory of Mind

A central topic in cognitive science and psychology, theory of mind refers to “one’s ability to infer and understand the beliefs, desires, and intentions of others, given the knowledge one has available” [5]. To investigate the various theories that explain how theory of mind operates at the cognitive level, Dr. Hiatt and Dr. Trafton use the ACT-R cognitive architecture to simulate how accurately children can predict the actions of others as they age, a prime example of using artificial intelligence to study the human mind. ACT-R consists of modules associated with different areas of the brain, buffers that each hold a single symbolic item, and a pattern matcher that determines the actions to be taken based on the contents of the buffers. Furthermore, this core cognitive architecture can interface with the environment via visual, aural, motor, and vocal modules and can learn new facts and rules through reinforcement learning; based on these capabilities, ACT-R is a suitable system for simulating the mind of a growing child [5]. Based on the idea that children learn and mature as they grow, Dr. Hiatt and Dr. Trafton include a maturation parameter associated with the age of the simulated child. A higher level of maturity corresponds to a more advanced ability in the child to select among its inferred beliefs about the beliefs and actions of others [5]. By simulating the theory of mind development of numerous children, Dr. Hiatt and Dr. Trafton found evidence supporting the main existing theories of how theory of mind develops.
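The sketch below is a highly simplified Python illustration of the buffer-and-pattern-matcher cycle described above. It is not the actual ACT-R implementation; the toy rules, buffer contents, and maturity threshold are invented purely for illustration.

    # Highly simplified sketch of the buffer/pattern-matcher cycle described
    # above. This is not the real ACT-R implementation; the rules and the
    # maturation threshold are invented purely for illustration.

    def pattern_match(buffers, rules):
        """Return the first rule whose conditions match the buffer contents."""
        for rule in rules:
            if all(buffers.get(slot) == value for slot, value in rule["conditions"].items()):
                return rule
        return None

    # Buffers each hold one symbolic item (here, simple strings).
    buffers = {"goal": "predict-other", "imaginal": "belief-of-other"}

    # Toy production rules; a maturity parameter gates the more advanced rule,
    # echoing the maturation parameter used when simulating a growing child.
    maturity = 0.8
    rules = [
        {"conditions": {"goal": "predict-other", "imaginal": "belief-of-other"},
         "action": "use-other's-belief" if maturity > 0.5 else "use-own-belief"},
        {"conditions": {"goal": "predict-other"},
         "action": "use-own-belief"},
    ]

    selected = pattern_match(buffers, rules)
    print(selected["action"])  # which prediction strategy fires this cycle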

B. The SOAR Project

Dr. Laird, a professor of computer science at the University of Michigan, developed the SOAR system, a cognitive architecture with the goal of simulating the human brain, as an alternative to the traditional and restrictive approach of hard-coding data. The SOAR system stores information retrieved from the intelligent agent’s environment in working memory [6]. Because immediate sensory data is sometimes insufficient for decision-making in the real world, storing previous situations is useful for differentiating between situations that would otherwise appear identical to the intelligent agent at a specific instant [7]. In short, maintaining memory of past events “makes it possible to not only make correct decisions but to learn the correct decision” [7]. When the intelligent agent reaches an impasse, it can search its memory for a solution to the problem. If the problem is new, the agent remembers its actions in case the problem is encountered again [6]. In summary, the SOAR cognitive architecture relies on maintaining information about the decisions and outcomes of past experiences to improve future decisions when simulating human behavior. The SOAR system is a useful tool for using simulated human intelligence to solve complex problems.
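The following Python sketch illustrates the general idea of impasse-driven memory use described above. It is not the real SOAR implementation; the MemoryGuidedAgent class and its fallback solver are hypothetical.

    # Illustrative sketch (not the real SOAR implementation) of the idea
    # described above: when current knowledge does not determine an action
    # (an impasse), consult memory of past situations; once a solution is
    # found, store it so the same impasse is resolved directly next time.

    class MemoryGuidedAgent:
        def __init__(self, solve_from_scratch):
            self.episodic_memory = {}                     # situation -> remembered solution
            self.solve_from_scratch = solve_from_scratch  # costly fallback solver

        def act(self, situation, known_rules):
            if situation in known_rules:                  # ordinary decision, no impasse
                return known_rules[situation]
            # Impasse: immediate knowledge is insufficient.
            if situation in self.episodic_memory:         # have we solved this before?
                return self.episodic_memory[situation]
            solution = self.solve_from_scratch(situation)
            self.episodic_memory[situation] = solution    # remember for next time
            return solution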

C. Simulating Creativity

In his papers on the simulation of human-level intelligence in the decision process, Dr. Zadeh emphasizes the importance of imitating creativity in intelligent agents. Although knowledge of past experiences is a useful tool in decision-making, Dr. Zadeh acknowledges that “creativity is a gifted ability of human beings in thinking, inference, problem solving, and product development” [8]. In his formal definition of this unique ability, creativity is divided into three categories: abstract, concrete, and artistic. Most relevant to engineering applications, concrete creativity involves generating new, innovative solutions in an environment constrained by goals and available conditions [8]. Aiming to equip intelligent agents with the creative ability of the human mind, Zadeh outlines an approach for implementing the creative process in a computer program. The ability of an intelligent agent to create new approaches to solving problems is vital for modeling human-level intelligence.

D. Simulating Rationality

The multi-agent recursive simulation technology for N-th order rational agents (MARS-NORA) is a procedure developed by Dr. Mussavi Rizi and Dr. Latek for rationally choosing a course of action for multiple artificial intelligence agents in a dynamic environment. Similar to how a human weighs the pros and cons of a decision, MARS-NORA requires agents to derive the probability distribution of the utility gained from each possible course of action [9]. MARS-NORA has two algorithms for determining the optimal course of action once all possible courses of action have been evaluated: myopic planning and non-myopic planning. In myopic planning, the zero-order agent chooses a random action. Each successive agent then chooses its optimal course of action based on the actions of agents of lower order, resulting overall in the on-average optimal action of the multi-agent [9]. Because the actions of lower-order agents constrain the actions of higher-order agents, myopic planning is not suitable for situations in which the multi-agent acts asynchronously with other multi-agents. Myopic planning also fails if the multi-agent wishes to derive multiple optimal courses of action or takes inconsistent amounts of time to complete each action; in these cases, non-myopic planning can be used [9]. Because asynchronous multi-agents’ actions influence each other, the non-myopic planning algorithm considers three situations. First, when one multi-agent has both a higher order of rationality and a longer planning horizon than the other, the stronger agent selects its optimal course of action while the weaker agent accepts a short-term loss and returns to a synchronous state with the stronger agent [9]. In the second situation, a multi-agent has a higher order of rationality than its opposing multi-agent but a shorter planning horizon; the agent with the shorter planning horizon is “locked” into its path of actions and will not make optimal decisions. The third situation involves multi-agents with roughly equal orders of rationality and planning-horizon lengths; in this case, the agents have similar cognitive abilities and can cooperate to optimize their actions [9]. With two algorithms for deriving an optimal course of action for multi-agents, MARS-NORA allows agents to behave rationally by following the human decision process during action selection.
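The Python sketch below is a schematic rendering of the myopic-planning idea, in which each order of rationality best-responds to the choice made at the order below it. It is not the actual MARS-NORA algorithm; the action set and utility function are invented.

    import random

    # Schematic sketch of the myopic-planning idea described above: a
    # zero-order agent acts at random, and each higher-order agent best-
    # responds to the choice made at the level below it. This is not the
    # actual MARS-NORA algorithm; the utility function and action set are
    # invented for illustration.

    def myopic_plan(actions, utility, max_order):
        """Return the action chosen at each order of rationality."""
        choices = [random.choice(actions)]            # order 0: random action
        for order in range(1, max_order + 1):
            lower_choice = choices[-1]
            # Best response to the lower-order agent's action.
            best = max(actions, key=lambda a: utility(a, lower_choice))
            choices.append(best)
        return choices

    # Toy example: actions are numbers, utility rewards matching the lower order.
    actions = [0, 1, 2]
    utility = lambda a, other: -abs(a - other)
    print(myopic_plan(actions, utility, max_order=3))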

E. Achieving Top-down Goals

The ICARUS architecture is a cognitive architecture comparable to SOAR. The architecture supports top-level goals by guiding the agent’s behavior to accomplish its tasks while maintaining reactivity. However, because ICARUS does not support adding, deleting, or reordering top-level goals, its ability to manage multiple top-level goals is limited, especially since a human’s goals are often changed and reprioritized [10]. Dr. Choi at Stanford University addresses this limitation in his extension of the ICARUS architecture. In his revised goal management system, each general goal includes a goal description and relevance conditions, which are used to prioritize goals based on the current state of the agent [10]. The new system receives information about its surroundings during each “cycle.” Once information about the environment is retrieved, the goal management system can add, remove, or re-prioritize goals based on the agent’s “belief state” through a goal nomination process. Top-level goals are prioritized based on initial priority and relevance to the current state of the environment [10]. Furthermore, when selecting actions to achieve goals, Dr. Choi’s extension retrieves the agent’s skills relevant to the current goal and generates a plan to accomplish the goal, utilizing non-primitive skills first [10]. This design more realistically mirrors human behavior, in which goals change as the surrounding environment changes. By modifying the architecture to better resemble a human’s goal management process, Dr. Choi’s extension of the ICARUS cognitive architecture provides an effective system for guiding an artificial intelligence agent’s behavior to achieve top-level goals in a dynamic environment.
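As a rough sketch of the goal-nomination cycle described above (not the actual ICARUS extension), the Python fragment below nominates the goals whose relevance conditions hold in the current belief state and activates the highest-priority one; the Goal structure and the example beliefs are invented.

    # Rough sketch of the goal-nomination idea described above, not the
    # actual ICARUS extension: on each cycle, goals whose relevance
    # conditions hold in the current belief state are nominated, and the
    # active goal is the relevant goal with the highest priority.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Goal:
        description: str
        priority: float
        relevant: Callable[[Dict[str, bool]], bool]  # relevance condition

    def nominate(goals: List[Goal], beliefs: Dict[str, bool]) -> Goal:
        candidates = [g for g in goals if g.relevant(beliefs)]
        if not candidates:
            return None
        return max(candidates, key=lambda g: g.priority)

    goals = [
        Goal("recharge battery", priority=0.9, relevant=lambda b: b.get("battery_low", False)),
        Goal("deliver package", priority=0.5, relevant=lambda b: b.get("has_package", False)),
    ]

    beliefs = {"battery_low": False, "has_package": True}
    active = nominate(goals, beliefs)
    print(active.description)   # "deliver package" under the current beliefs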

As seen in these current research projects, cognitive science artificial intelligence can be used to supplement research in cognitive science and vice versa. Furthermore, these works contribute to achieving improved human-level intelligence simulations in the cognitive science artificial intelligence field. Although no artificial intelligence has come close to achieving the goal of human-level intelligence, intelligent agents are consistently being re-evaluated and improved.

III. APPLICATIONS AND THE IMPORTANCE OF COGNITIVE SCIENCE ARTIFICIAL INTELLIGENCE

As seen in the goals of the previously mentioned researchers, cognitive science artificial intelligence research has numerous important real-world applications. In our society, engineers and architects constantly face tasks, such as constructing a highway or designing a traffic light, that require optimizing a design despite physical and financial limitations. In the traffic light example, for instance, an engineer must weigh the benefits of stronger materials against their cost, or compute statistics over the large amount of traffic data available for an intersection to determine light timing. With the ability to weigh large amounts of information and many design considerations in a short period of time, advanced artificial intelligence can be developed to solve these types of complex logic problems [2]. In this way, the use of artificial intelligence as a tool for engineers could make the design process faster, more efficient, and more accurate.
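As a toy illustration of the kind of tradeoff described above (not taken from the paper), the short Python fragment below picks the cheapest material whose strength still meets a requirement; the candidate materials and numbers are invented.

    # Toy illustration (not from the paper) of a design tradeoff: pick the
    # cheapest material whose strength still meets the requirement.
    # The candidate materials and numbers are invented.

    materials = [
        {"name": "steel A", "strength": 250, "cost": 120},
        {"name": "steel B", "strength": 400, "cost": 200},
        {"name": "alloy C", "strength": 320, "cost": 150},
    ]

    required_strength = 300
    feasible = [m for m in materials if m["strength"] >= required_strength]
    best = min(feasible, key=lambda m: m["cost"])
    print(best["name"])   # cheapest material that satisfies the constraint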

In addition, the creation of a human-level intelligent agent provides a “better mirror” of the human mind, one that is easier for cognitive science researchers to study than the human brain itself. By studying realistic simulations of human cognition, theories can be drawn about human nature and cognitive limitations. Furthermore, researchers can pursue specific cognitive science goals, such as understanding how intelligence develops in the brain or how damage to different parts of the brain affects cognition [2]. Progress in these areas can powerfully shape how the human mind is understood and has the potential to lead to improvements in present society. For instance, a better understanding of how the human mind learns and retains information could lead to improved learning methods in schools that accelerate human progress. In the same way, improved theories of how different areas of the brain affect behavior can help in developing medical solutions for victims of brain trauma [3]. The useful applications of cognitive science artificial intelligence continue to grow as research in the field continues.

IV. THE FUTURE OF COGNITIVE SCIENCE ARTIFICIAL INTELLIGENCE


Although there have been many breakthroughs in the cognitive science artificial intelligence field, researchers are continually working to improve intelligent agents. The human mind has the impressive capability of performing numerous mental and physical tasks with little mental strain [2]. Computer-simulated intelligence, on the other hand, is limited by the speed and capacity of the hardware that performs its computations. The development of advanced nanotechnology to increase hardware speed and memory will reduce this constraint on simulating human-level intelligence [11]. Furthermore, while theories from cognitive science artificial intelligence have fostered an improved understanding of the human mind, advances in psychology and cognitive science that further explain human behavior can in turn be used to improve intelligent agents [3]. Finally, there are ethical and organizational concerns about the coevolution of humans and intelligent systems, an issue acknowledged more by the public than by researchers in the field; these issues may one day have to be addressed [11]. For instance, society would have to decide what restrictions to place on how a human-simulating robot can behave.

Because intelligent agents are still far from achieving artificial intelligence goals, such as passing the Turing test, or cognitive science goals, such as achieving human-level intelligence or improving the present understanding of the human mind, there are still many opportunities for research achievements in the cognitive science artificial intelligence field. This room for growth shows great potential for developing technology that advances the progress of mankind.

V. CONCLUSIONS

Artificial intelligence, by simulating the human mind, is an extremely useful tool for cognitive science research at both the fundamental and the high level. In the same way, cognitive science theories provide useful insight into human cognition that can be encoded into artificial intelligence. Cognitive science artificial intelligence is a powerful research area that addresses the goals of both cognitive science and artificial intelligence. As research in the field continues, improved intelligent agents will be developed with the ability to simulate human-level intelligence, serving the ultimate cognitive science goal of fully understanding the human mind and addressing important, complex problems of mankind through artificial intelligence.

REFERENCES

[1] Thagard, P. (2009). Cognitive Science. The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.).
[2] Simon, H. (2010). Cognitive Science: Relationship of AI to Psychology and Neuroscience. AAAI.
[3] Wang, Y. (2008). Proceedings of the Seventh IEEE International Conference on Cognitive Informatics: ICCI 2008: August 14-16, 2008, Stanford University, California, USA. [Piscataway, N.J.]: IEEE Xplore.
[4] Bickhard, M., and Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science. Elsevier Science Publishers.
[5] Hiatt, L. M., and Trafton, J. G. (2010). A Cognitive Model of Theory of Mind. 10th International Conference on Cognitive Modeling: ICCM 2010: August 5-8, 2010, Philadelphia, PA, USA.
[6] Lehman, J. F., Laird, J., and Rosenbloom, P. (2006). A Gentle Introduction to SOAR, an Architecture for Human Cognition: 2006 Update.
[7] Laird, J. E., and Wang, Y. (2007). The Importance of Action History in Decision Making and Reinforcement Learning. Proceedings of the Eighth International Conference on Cognitive Modeling. Ann Arbor, MI.
[8] Zadeh, L. (2008). On Cognitive Foundations of Creativity and the Cognitive Process of Creation. Proceedings of the Seventh IEEE International Conference on Cognitive Informatics: ICCI 2008: August 14-16, 2008, Stanford University, California, USA. [Piscataway, N.J.]: IEEE Xplore.
[9] Latek, M., and Mussavi Rizi, S. M. (2010). Plan, Replan and Plan to Replan: Algorithms for Robust Courses of Action under Strategic Uncertainty. BRIMS 2010: March 21-24, 2010, Charleston, SC, USA.
[10] Choi, D. (2010). Nomination and Prioritization of Goals in a Cognitive Architecture. 10th International Conference on Cognitive Modeling: ICCM 2010: August 5-8, 2010, Philadelphia, PA, USA.
[11] Jacobstein, N. (2005). The Prospects for AI. IT Conversations. <http://itc.conversationsnetwork.org/shows/detail713.html>.