
On Basic Principles of Intelligent Systems Design

Victor Korotkikh and Galina Korotkikh
Faculty of Business and Informatics
Central Queensland University
Mackay, Queensland, 4740, Australia

[email protected], [email protected]

Abstract

In this paper we present results of computational experiments that suggest the possibility of a general optimality condition of complex systems: a system demonstrates its optimal performance for a problem when the structural complexity of the system is in a certain relationship with the structural complexity of the problem. The optimality condition could be used as a basic principle in the design of intelligent systems that optimize their performance in a dynamic environment.

1. Introduction

One of the key desired properties of an intelligent system is the ability to optimize its performance in a dynamic environment. However, despite significant efforts, basic principles that might determine the optimal performance of a complex system have still not been found. Moreover, it is not even clear what concepts might underlie principles of intelligent systems design [1].

To contribute in this direction we use an irreducible description of complex systems in terms of self-organization processes of prime integer relations [2]. These processes determine nonlocal correlation structures of complex systems and may characterize their dynamics in a strong scale-covariant form. By using this description, a concept of structural complexity is introduced to measure the complexity of a system by such correlation structures [2]. Significantly, because the self-organization processes are based on the integers and controlled by arithmetic only, they can describe complex systems by information that requires no further explanation. This gives the possibility to develop an irreducible theory of complex systems [3].

In the paper we present results of computational experiments [4] that suggest a possible optimality condition of complex systems as a basic principle of intelligent systems design. The optimality condition presents the structural complexity of a system as a key to its optimization. According to the optimality condition, the optimal result can be obtained as long as the structural complexity of the system is properly related to the structural complexity of the problem. A connection between the majorization principle in quantum algorithms [5] and the optimality condition is also discussed.

2. Optimization algorithm as a complex system

We consider an optimization algorithm A, as a complex system, of N ≥ 2 computational agents minimizing the average distance in benchmark traveling salesman problems (TSP) [6]. All agents start in the same city, and at each step each agent visits a next city by using one of two strategies: the random strategy, i.e., visit a next city at random, or the greedy strategy, i.e., visit the next closest city. The state of the agents solving a TSP problem with n > 2 cities at step j = 1,...,n-1 can be described by a binary sequence s_j = s_1j ... s_Nj, where s_ij = +1 if agent i = 1,...,N uses the random strategy and s_ij = -1 if the agent uses the greedy strategy. The dynamics of the agents is realized as they choose their strategies step by step and can be encoded by an N × (n-1) binary strategy matrix S = {s_ij, i = 1,...,N, j = 1,...,n-1}. We introduce a control parameter v to monotonically change the structural complexity of the algorithm A and investigate whether its performance could behave in a regular manner as a result.
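As an illustration, the following Python sketch shows how one step of such an agent population could be simulated and encoded; the function and variable names are hypothetical, since the paper does not give an implementation.

```python
import random

def step_agents(dist, tours, strategies):
    """Advance every agent by one city and record its strategy choice.

    dist       -- n x n distance matrix of the TSP instance
    tours      -- list of per-agent city sequences built so far
    strategies -- list of +1 (random) / -1 (greedy) choices, one per agent
    Returns one column of the strategy matrix S.
    """
    column = []
    for tour, strategy in zip(tours, strategies):
        current = tour[-1]
        unvisited = [c for c in range(len(dist)) if c not in tour]
        if strategy == +1:                       # random strategy
            next_city = random.choice(unvisited)
        else:                                    # greedy strategy
            next_city = min(unvisited, key=lambda c: dist[current][c])
        tour.append(next_city)
        column.append(strategy)
    return column
```

Repeating this for j = 1,...,n-1 steps and stacking the returned columns gives the N × (n-1) strategy matrix S.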


Let D_ij be the distance traveled by agent i = 1,...,N after j = 1,...,n-2 steps, and let [D_j^-, D_j^+] be the interval containing all the distances traveled by the agents after step j. The control parameter v specifies a threshold point D_j(v) dividing the distance interval into two parts: a successful part [D_j^-, D_j(v)] and an unsuccessful part (D_j(v), D_j^+]. If the distance D_ij traveled by agent i = 1,...,N after j = 1,...,n-2 steps belongs to the interval [D_j^-, D_j(v)], then the agent's last strategy is considered successful. If the distance D_ij belongs to the interval (D_j(v), D_j^+], then the agent's last strategy is considered unsuccessful.

By using the control parameter v we aim to mimic the period-doubling route to chaos [7], so that as a result the structural complexity of the algorithm A could vary monotonically. For this purpose each agent, in choosing the next strategy, follows an optimal if-then rule, which relies on the Prouhet-Thue-Morse (PTM) sequence +1 -1 -1 +1 -1 +1 +1 -1 ... and has the following description [2]:

1. If the last strategy is successful, continue with the same strategy.
2. If the last strategy is unsuccessful, consult the PTM generator which strategy to use next.

The change of the parameter v from 0 to 1 transforms the successful interval for j = 1,...,n-2 from the whole distance interval [D_j^-, D_j^+], when v = 0, to the single point [D_j^-, D_j^-], when v = 1. Due to the optimal rule the transformation defines a flow in the space of strategy matrices. By using the parameter v we try to control the flow to satisfy the condition of monotonicity of the structural complexity.

Extensive computational experiments have been conducted [4] by using the problems of a class P = {eil76, eil101, st70, rat195, lin105, kroC100, kroB100, kroA100, kroD100, d198, kroA150, pr107, u159, pr144, pr152, pr226, pr136, pr76, ts225} selected from the benchmark traveling salesman problems [6]. Let D_i(p,v) be the distance traveled by agent i = 1,...,N for a problem p from P and a value v of the control parameter. The performance of the algorithm A is characterized by the average distance

\[ D(p,v) = \frac{1}{N} \sum_{i=1}^{N} D_i(p,v), \]

which has to be minimized. In the experiments the algorithm A is applied to each problem p of the class P with the number of agents N = 2000 and values v_k = kδ, k = 0, 1,...,20 of the control parameter v, where δ = 0.05.
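A minimal sketch of the if-then rule is given below, assuming a per-agent pointer into the PTM sequence and a linear threshold D_j(v) interpolating between D_j^+ (at v = 0) and D_j^- (at v = 1); both assumptions are ours, since the paper does not spell out these details.

```python
def ptm_sequence(length):
    """Prouhet-Thue-Morse sequence over {+1, -1}: +1 -1 -1 +1 -1 +1 +1 -1 ..."""
    return [+1 if bin(k).count("1") % 2 == 0 else -1 for k in range(length)]

def next_strategy(last_strategy, distance, d_min, d_max, v, ptm, pointer):
    """Choose one agent's next strategy by the optimal if-then rule.

    The last strategy counts as successful if the traveled distance lies in
    [d_min, d_min + (1 - v) * (d_max - d_min)]; this threshold formula is an
    assumption chosen so that v = 0 makes the whole interval successful and
    v = 1 shrinks it to the single point d_min, as described in the text.
    Returns the new strategy and the (possibly advanced) PTM pointer.
    """
    threshold = d_min + (1.0 - v) * (d_max - d_min)
    if distance <= threshold:
        return last_strategy, pointer          # rule 1: keep a successful strategy
    new_strategy = ptm[pointer % len(ptm)]     # rule 2: consult the PTM generator
    return new_strategy, pointer + 1
```

Applying next_strategy to every agent with its own distance D_ij and the common bounds D_j^- and D_j^+ yields the strategies used at the next step.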

3. The optimality condition of the algorithm

Remarkably, it has been found that for each problem p from P the performance D(p,v) of the optimization algorithm A behaves as a concave function of the control parameter v with a single optimum at v*(p). We use this computational result to investigate whether the performance optima could be characterized in terms of an optimality condition relating the structural complexities of the algorithm A and the problem. In particular, we consider the performance optima {v*(p), for p from P} and approximate the structural complexities of the algorithm A and the problem as follows. We apply the algorithm A with the value v*(p) of the control parameter to a problem p from P and obtain a strategy matrix S(v*(p)). By using the strategy matrix S(v*(p)) we compute the variance-covariance matrix V(S(v*(p))) and its quadratic trace

\[ \operatorname{Tr} V^2(S(v^*(p))) = \sum_{i=1}^{N} \lambda_i^2, \]

where λ_i, i = 1,...,N are the eigenvalues of the variance-covariance matrix V(S(v*(p))). We approximate the structural complexity C(A(p)) of the algorithm A in the solution of the problem p by this quadratic trace

\[ C(A(p)) = \sum_{i=1}^{N} \lambda_i^2, \]

and the structural complexity C(p) of the problem p by the quadratic trace

\[ C(p) = \sum_{i=1}^{n} \lambda_i^2 \]

of the normalized distance matrix M(p) = {d_ij/d_max, i,j = 1,...,n} of the problem p, where λ_i, i = 1,...,n are the eigenvalues of the normalized distance matrix M(p), d_ij is the distance between cities i = 1,...,n and j = 1,...,n, and d_max is the maximum of the distances.

A relationship between the structural complexity C(A(p)) of the algorithm A and the structural complexity C(p) of the problem is sought to identify an optimality condition. For this purpose, we consider the points with coordinates {x = C(p), y = C(A(p)), for p from P}.
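For concreteness, a sketch of how these quadratic traces could be computed with NumPy is given below; the function names and the use of numpy.cov over the rows of the strategy matrix are our assumptions, since the paper does not detail the computation.

```python
import numpy as np

def quadratic_trace(matrix):
    """Sum of squared eigenvalues of a square matrix, i.e. Tr(M^2)."""
    eigenvalues = np.linalg.eigvals(matrix)
    return float(np.sum(eigenvalues ** 2).real)

def algorithm_complexity(strategy_matrix):
    """Approximate C(A(p)) by the quadratic trace of the N x N
    variance-covariance matrix of the agents' strategy rows."""
    v = np.cov(strategy_matrix)          # N x (n-1) rows -> N x N covariance
    return quadratic_trace(v)

def problem_complexity(dist):
    """Approximate C(p) by the quadratic trace of the distance matrix
    normalized by its maximum entry."""
    m = np.asarray(dist, dtype=float)
    return quadratic_trace(m / m.max())
```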


The result of the analysis suggests a possible linear relationship between the structural complexities. The regression line is calculated as

\[ C(A(p)) = a\,C(p) + b, \qquad (1) \]

where a and b denote the fitted regression coefficients. The coefficient of determination of 0.71 tells us that 71 percent of the variation in the structural complexity of the algorithm A is explained by the regression line. Therefore, within the accuracy of the linear regression, it becomes possible to formulate an optimality condition of the algorithm A: if the algorithm A shows its optimal performance for a problem p, then the structural complexity C(A(p)) of the algorithm A is in the linear relationship (1) with the structural complexity C(p) of the problem p.
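A sketch of the corresponding analysis step, fitting the line and computing the coefficient of determination with NumPy, might look as follows; the function name is hypothetical and the complexity values themselves come from the experiments in [4] and are not reproduced here.

```python
import numpy as np

def fit_optimality_line(problem_complexities, algorithm_complexities):
    """Fit C(A(p)) = a * C(p) + b by least squares and report R^2."""
    x = np.asarray(problem_complexities, dtype=float)
    y = np.asarray(algorithm_complexities, dtype=float)
    a, b = np.polyfit(x, y, 1)                     # slope and intercept
    residuals = y - (a * x + b)
    r_squared = 1.0 - residuals.var() / y.var()    # coefficient of determination
    return a, b, r_squared
```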

In the optimality condition the structural complexity combines dynamics and structure. Namely, according to the optimality condition, the performance is optimal when the dynamical properties of the algorithm A, expressed in terms of the structural complexity, are in the linear relationship (1) with the structural properties of the distance network, also expressed in terms of the structural complexity.

4. The structural complexity in the optimality condition and optimal quantum algorithms

Despite their different origins, complex systems have much in common and are investigated to establish whether they satisfy universal laws. The description of complex systems in terms of self-organization processes of prime integer relations points out that such universal laws may originate not from forces in spacetime, but through arithmetic. Many notions of complexity have been introduced in the search to bring the universal laws into theory and practice. The structural complexity of a system is measured by the nonlocal correlation structures and the relationships between the parts they provide. In particular, as self-organization processes of prime integer relations progress from a level to the higher level, the system becomes more complex, because its parts at the level are combined to make up more complex parts at the higher level. Therefore, the higher the level the self-organization processes progress to, the greater is the structural complexity of the corresponding complex system. Existing concepts of complexity do not explain in general how the performance of a complex system may depend on its complexity. The computational experiments indicate that the concept of structural complexity could make a difference.

In the search to identify a mathematical structure underlying optimal quantum algorithms, the majorization principle emerges as a necessary condition for efficiency in quantum computational processes [5]. We find a connection between the majorization principle in quantum algorithms and the optimality condition. According to the majorization principle, in an optimal quantum algorithm the probability distribution associated with the quantum state has to be step-by-step majorized until it is maximally ordered. This means that an optimal quantum algorithm works in such a way that the probability distribution ρ_{k+1} at step k+1 majorizes the probability distribution ρ_k at step k, i.e., ρ_k ≺ ρ_{k+1}. There are special conditions in place for the probability distribution ρ_{k+1} to majorize the probability distribution ρ_k, with the intuitive meaning that the distribution ρ_k is more disordered than ρ_{k+1} [5].

The algorithm A revealing the optimality condition operates within a description having features recognizable in quantum mechanics. It is shown that self-organization processes of prime integer relations specify a new type of correlations that make no reference to the distances between the parts, local times and physical signals [3]. The space and non-signaling aspects of the correlations are familiar from explanations of quantum correlations through entanglement. The time aspect of the nonlocal correlations might suggest new items for the agenda. Moreover, the algorithm A uses a similar principle, but based on the structural complexity. The algorithm is made dependent on the control parameter v and works in such a way that for a problem p the structural complexity C(A(p,v_{k+1})) of the algorithm A with the value v_{k+1} of the parameter majorizes its structural complexity C(A(p,v_k)) with the value v_k, i.e., C(A(p,v_k)) < C(A(p,v_{k+1})), as long as v_k < v_{k+1}. The concavity of the algorithm's performance suggests efficient means to find optimal solutions.
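As a side illustration of the majorization relation used above, a small sketch (a hypothetical helper, not from the paper) that checks whether one probability distribution majorizes another:

```python
import numpy as np

def majorizes(q, p, tol=1e-12):
    """Return True if distribution q majorizes p (p ≺ q).

    Both are probability vectors of the same length; q majorizes p when every
    partial sum of q's entries sorted in decreasing order is at least the
    corresponding partial sum of p's sorted entries.
    """
    p_sorted = np.sort(np.asarray(p, dtype=float))[::-1]
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    return bool(np.all(np.cumsum(q_sorted) >= np.cumsum(p_sorted) - tol))

# Example: the uniform distribution is majorized by any other distribution
# over the same outcomes.
print(majorizes([0.7, 0.2, 0.1], [1/3, 1/3, 1/3]))   # True
```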

5. Conclusions

We have presented results of computational experiments that suggest a possible optimality condition of complex systems as a basic principle of intelligent systems design: a complex system demonstrates its optimal performance for a problem when the structural complexity of the system is in a certain relationship with the structural complexity of the problem.

The optimality condition presents the structural complexity of a system as a key to its optimization. From this perspective, the optimization of a system should be primarily concerned with the control of the structural complexity of the system to match the structural complexity of the problem. The ability to evaluate the structural complexity of the problem before the actual computations would be especially helpful. In this case the optimization of the system could be reduced to the tuning of its structural complexity to match the already known structural complexity of the problem.

The optimality condition might open a way to new powerful computational capabilities of intelligent systems. The experiments indicate that the performance of a complex system may behave as a concave function of the structural complexity. Once the structural complexity could be controlled as a single entity, the optimization of a complex system would be potentially reduced to a one-dimensional concave optimization, irrespective of the number of variables involved in its description.
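If the performance is indeed a concave (unimodal) function of a single structural-complexity control such as v, the optimum can be located with a standard one-dimensional search. The sketch below uses ternary search with a placeholder objective, purely as an illustration of this reduction; the names ternary_search and performance are not from the paper.

```python
def ternary_search(objective, lo=0.0, hi=1.0, tol=1e-4):
    """Locate the optimum of a unimodal (e.g. concave) objective on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if objective(m1) < objective(m2):   # optimum lies to the right of m1
            lo = m1
        else:                               # optimum lies to the left of m2
            hi = m2
    return (lo + hi) / 2.0

# Hypothetical usage: 'performance' would run the algorithm A on problem p
# for a given v and return the quantity to be maximized.
# v_star = ternary_search(lambda v: performance(p, v))
```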

6. References

[1] J. Fromm, “Ten Questions about Emergence”, arXiv:nlin.AO/0509049, and references therein.

[2] V. Korotkikh, A Mathematical Structure for Emergent Computation, Kluwer Academic Publishers, Dordrecht, 1999.

[3] V. Korotkikh, and G. Korotkikh, “On an Irreducible Theory of Complex Systems”, arXiv:nlin.AO/0606023.

[4] V. Korotkikh, G. Korotkikh, and D. Bond, “On Optimality Condition of Complex Systems: Computational Evidence”, arXiv:cs.CC/0504092.

[5] R. Orus, J. Latorre, and M.A. Martin-Delgado, “Systematic Analysis of Majorization in Quantum Algorithms”, arXiv:quant-ph/0212094.

[6] G. Reinelt, “TSPLIB”, version 1.2, (accessed 28/11/2000).

[7] M. Feigenbaum, “Universal Behaviour in Nonlinear Systems”, Los Alamos Science, vol. 1, 1980, 1.
