Bibliography
[1] Abbas H. and Fahmy M. (1994). Neural Networks for Maximum
Likelihood Clustering. Signal Processing. 36(1), 111-126.
[2] Abe Shigeo (2001). Pattern Classification: Neurofuzzy methods and their comparison. ISBN 1-85233-352-9
[3] Ackley D. H., Hinton G. E. and Sejnowski T.J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science. 9(1), 147-169.
[4] Alavala R. Chennakesava (2008). Fuzzy Logic and Neural Networks : Basic Concepts and Application. Daryaganj, Delhi, IND: New Age International, pp. 121-128.
[5] Al-Mubaid H. and Ghaffari N. (2006). Identifying the most significant genes from gene expression profiles for sample classification. In Proc.
IEEE Conference on Granular Computing, pp. 655-658.
[6] Anders Holst (1997). The use of Bayesian Neural Network Model for Classification Tasks, Studies of Artificial Neural Systems.
[7] Angeline P. (1998). Using Selection to Improve Particle Swarm Optimization. In International Conference on Evolutionary Computation,
Piscataway, New Jersey, IEEE Service Center, USA. pp. 84-89.
[8] Antoniou G. (1997). Nonmonotonic Reasoning, MIT Press.
[9] Athitsos V. and Sclaroff S. (2005). Boosting nearest neighbor classifiers for multiclass recognition, CVPR '05 of IEEE Computer Society, Washington, DC, USA.
[10] Athitsos V., Alon J. and Sclaroff S. (2005). Efficient nearest neighbor classification using a cascade of approximate similarity measures, IEEE
Computer Society (CVPR '05), Washington DC, USA, pp. 486-493.
[11] Back T., Fogel D.B. and Michalewicz Z. (1997). Handbook of Evolutionary Computation. Oxford University Press and Institute of
Physics, New York.
[12] Milos Hauskrecht, Richard Pelikan, Michal Valko and James Lyons-Weiler (2007). Feature Selection and Dimensionality Reduction in Genomics and Proteomics. In: Fundamentals of Data Mining in Genomics and Proteomics, 149-172, DOI: 10.1007/978-0-387-47509-7_7.
[13] Back T., Hoffmeister F., and Schwefel H. (1991). A survey of evolution
strategies. Belew, Richard K. and Lashon B. Booker (editors),
Proceedings of the fourth International Conference on Genetic
Algorithms; 1991 July 13; University of California, San Diego. San
Mateo: Morgan Kaufmann, ISBN: 1-55860-208-9.
[14] Baek S., Jeon B., Lee D. and Sung K. (1998). Fast Clustering Algorithm
for Vector Quantization. Electronics Letters, 34(2), 151-152.
[15] Baldi P., Brunak S. (1998). Bioinformatics: The Machine Learning
Approach. MIT Press, Cambridge, MA.
[16] Caro G. D. and Dorigo M. (1998). AntNet: Distributed stigmergetic control for communications networks. Journal of Artificial Intelligence Research (JAIR), 6, 791-812.
[17] Barto A.G., Sutton R. S. and Anderson C. W. (1983). Neuronlike
adaptive elements that can solve difficult learning control problems.
IEEE Trans on Systems, Man and Cybernetics. 13(5), 834-847.
[18] Basak J., De R.K. and Pal S.K. (2000). Unsupervised feature selection
using a neuro-fuzzy approach, IEEE Transactions on Neural Networks.
11(2), 366-375.
[19] Beale R. and Jackson T. (1990). Neural Computing: An Introduction. Department of Computer Science, University of York. IOP Publishing: 1990. Bristol, England. ISBN 0-85274-263-2.
[20] Belue L.M., Bauer K.W. (1995). Determining input features for multilayered perceptron, Neurocomputing. 7(2), 111-121.
[21] Bellman R. (1961). Adaptive Control Processes: A Guided Tour. Princeton University Press, New Jersey.
[22] Beyer K., Goldstein J., Ramakrishnan R. and Shaft U. (1999). ‘When is
nearest neighbor meaningful?’, Proc. of the Seventh International
Conference on Database Theory, Jerusalem, Israel, pp. 217-235.
[23] Bhattacharyya C., Grate L.R., Rizki A., Radisky D., Molina F.J., Jordan
M.I., Bissell M.J., Mian I.S. (2003). Simultaneous classification and
relevant feature identification in high-dimensional spaces: Application
to molecular profiling data. Signal Process. 83, 729-743.
[24] Blake C.L., Merz C.J. (2001). UCI Repository of machine learning databases, http://www.ics.uci.edu.
[25] Blum L., Langley P. (1997). Selection of relevant features and examples
in machine learning, Artif. Intell. 97(1), 245-271.
[26] Boero G., Cavalli E. (1996). Forecasting the exchange rate: A
comparison between econometric and neural network models. AFIR II,
981.
[27] Bramlette M. (1991). Initialisation, Mutation and Selection Method in
Genetic Algorithms for Function Optimization. In Proceedings of the
Fourth International Conference in Genetic Algorithms, Morgan
Kaufmann. pp 100-107.
[28] Brill F.Z., Brown D.E., Martin W.N. (1992). Fast genetic selection of features for neural network classifiers, IEEE Trans. Neural Netw. 3(2),
324-328.
[29] Broomhead D. S. and Lowe D. (1988). Multivariable functional interpolation and adaptive networks. Complex Systems. 2(3), 321-355.
[30] Camargo L. S. and Yoneyama T. (2001). Specification of Training Sets
and the Number of Hidden Neurons for Multilayer Perceptrons. Neural Computation. 13, 2673-2680.
[31] Campbell W.M., Assaleh K.T. and Broun C.C. (2002). “Speaker
recognition with polynomial classifiers”, IEEE Trans. Speech and Audio Processing. 10(4), 205-212.
[32] Castellano G., Fanelli A. and Pelillo M. (1997). An iterative pruning
algorithm for feedforward neural networks. IEEE Transactions on Neural Networks. 8, 519-531.
[33] Ce Zhu, Lai-Man Po (1998). Minimax partial distortion competitive
learning for optimal codebook design, IEEE Transactions on Image
Processing, 7(10), 1400-1409.
[34] Chakraborty D. and Pal N.R. (2008). Selecting Useful Groups of
Features in a Connectionist Framework, IEEE Transactions on Neural
Networks. 19, 381-386.
[35] Chakraborty D. and Pal N.R. (2001). Integrated feature analysis and fuzzy rule-based system identification in a neuro-fuzzy paradigm, IEEE Trans. Syst. Man Cybern. B, Cybern. 31(3), 391-400.
[36] Chakraborty D. and Pal N.R. (2004). A neuro-fuzzy scheme for simultaneous feature selection and fuzzy rule-based classification. IEEE
Trans. Neural Netw. 15(1), 110-123.
[37] Chow C. K. and Liu C. N. (1968). Approximating discrete probability
distributions with dependency trees. IEEE Trans. Information Theory 14,
462-467.
[38] Chow T.W.S. and Huang D. (2005). Estimating optimal feature subsets using efficient estimation of high-dimensional mutual information, IEEE
Transactions on Neural Networks. 16(1), 213-224.
[39] Coello Coello C. and Lechuga M. (2002). MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization. In Congress on Evolutionary Computation, Piscataway, New Jersey, IEEE Service Center, USA. 2, 1051-1056.
[40] Coleman G. and Andrews H. (1979). Image Segmentation by Clustering. In Proceedings of IEEE. 67, 773-785.
[41] Cover T. and Hart P. (1967). Nearest neighbor pattern classification, IEEE Transactions on Information Theory, 13(1), 21-27.
[42] Cybenko G. (1989). Approximation by superpositions of a sigmoidal function, Math. Contr. Signals Syst. 2, 303-314.
[43] Daniel T. Larose (2005). Discovering Knowledge in Data: An Introduction to Data Mining, Wiley.
[44] Dash M., Liu H. and Yao, J. (1997). Dimensionality reduction for unsupervised data. In Proceedings of 19th IEEE International Conference on Tools with AI. ICTAI.
[45] David Hand, Heikki Mannila, and Padhraic Smyth (2001). Principles of
Data Mining, MIT Press, Cambridge, MA.
[46] Davis L. (Ed.) (1991). Handbook of Genetic Algorithms. Van Nostrand
Reinhold, New York.
[47] De R., Pal N.R. and Pal S.K. (1997). Feature analysis: Neural network and fuzzy set theoretic approaches, Pattern Recognit. 30(10), 1579-1590.
[48] Debrup Chakraborty and Pal N.R. (2008). Selecting Useful Groups of Features in a Connectionist Framework. IEEE Transactions on Neural
Networks. 19(3), 381-396.
[49] Derrig R.A. and Ostaszewski K. (1995). Fuzzy techniques of pattern recognition in risk and claim classification. J. Risk Insurance. 62, 447-
482.
[50] Devroye L., Gyorfi L. and Lugosi G. (1996). A Probabilistic Theory of Pattern Recognition. Springer-Verlag.
[51] Dorigo M. and Di Caro G. (1999). The Ant Colony Optimization Meta-
Heuristic. New Methods in Optimization, D. Come, M. Dorigo and F. Glover, Eds., McGraw-Hill.
[52] Dorigo M., Maniezzo V. and Colorni A. (1991). Positive Feedback as a Search Strategy. Technical Report no. 91-016, Dipartimento di Elettronica, Politecnico di Milano, Italy.
[53] Dorigo M. (1992). Optimization, Learning and Natural Algorithms (in
Italian), PhD thesis. Dipartimento di Elettronica, Politecnico di Milano,
Italy.
[54] Duda R.O., Hart P.E. and Stork D.G. (2001). Pattern classification, John
Wiley and Sons (Asia) Pte. Ltd..
[55] Fodor I.K. (2002). A Survey of Dimension Reduction Techniques, LLNL Technical Report.
[56] Engelbrecht P. (2001). A new pruning heuristic based on variance
analysis of sensitivity information. IEEE Trans. Neural Netw. 12(6),
1386-1399.
[57] Ergezinger S. and Thomsen E. (1995). An accelerated learning
algorithm for multilayer perceptrons: Optimization layer by layer, IEEE
Transaction on Neural Networks. 6(1), 31-42.
[58] Farlow S.J. (1994). The GMDH algorithm. In: Farlow S.J. (ed.), Self-organizing methods in modeling: GMDH type algorithms. New York: Marcel Dekker, pp. 1-24.
[59] Fogel D.B. (1994). An introduction to simulated evolutionary
optimisation. IEEE Trans. on Neural Networks, 5(1), 3-14.
[60] Friedman J. (1994). 'Flexible metric nearest neighbor classification', Technical Report 113, Stanford University Statistics Department, Stanford.
[61] Frigui H. and Krishnapuram R. (1999). A Robust Competitive
Clustering Algorithm with Applications in Computer Vision. IEEE
Transactions on Pattern Analysis and Machine Intelligence. 21(5), 450- 465.
[62] Fukunaga K. (1989). Statistical Pattern Recognition. New York: Academic.
[63] Ganong W.F. (1973). Review of Medical Physiology, Lange Medical Publications, Los Altos, CA.
[64] Goldberg D. E. (1989). Genetic Algorithms in Search, Optimization and
Machine Learning, Addison Wesley.
[65] Gordon A.D. (1999). Classification, 2nd Edition, Chapman and
Hall/CRC. Print ISBN: 978-1-58488-013-4. eBook ISBN: 978-1-58488-
853-6
[66] Graupe Daniel (2007). Principles of Artificial Neural Networks (2nd
Edition). River Edge, NJ, USA: World Scientific, pp. 1.
http://site.ebrary.com/lib/momp/Doc?id=10188769
[67] Gray P., Hart W., Painton L., Phillips C., Trahan M. and Wagner J.
(2004). A Survey of Global Optimization Methods, Sandia National Laboratories, 1997, http://www.cs.sandia.gov/opt/survey.
[68] Dorigo M. and Gambardella L. (1997). Ant colony system: A
cooperative learning approach to the traveling salesman problem. IEEE
Transactions on Evolutionary Computation. 1(1), 53-66.
[69] Guyon Isabelle, Elisseeff André (2003). An Introduction to Variable and Feature Selection, Journal of Machine Learning Research. 3, 1157-1182.
[70] Hamerly G. and Elkan C. (2002). Alternatives to the K-means
Algorithm that Find Better Clusterings. In Proceedings of the ACM
Conference on Information and Knowledge Management (CIKM-2002), pp. 600-607.
[71] Hamerly G. (2003). Learning Structure and Concepts in Data using Data
Clustering, PhD Thesis. University of California, San Diego.
[72] Haykin S. (1994). Neural Networks, A Comprehensive Foundation,
Macmillan Publ. Co., Englewood Cliffs, NJ.
[73] Hertz J., Krogh A., and Palmer R. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley.
[74] Hinton G.E., Dayan P. and Revow M. (1997). Modeling the manifolds of images of handwritten digits, IEEE Trans. Neural Networks, 8(1), 65-74.
[75] Ho T.K. (1998). The random subspace method for constructing decision
forests, IEEE Transactions on Pattern Analysis and Machine Intelligence.
20(8), 832-844.
[76] Hopfield J.J. (1982). Neural networks and physical systems with
emergent collective computational abilities, Proc. of the National
Academy of Sciences of USA. 79(8), 2554-2558.
[77] Hornik K. (1991). Approximation capabilities of multilayer feedforward networks, Neural Networks, 4, 251-257.
[78] Hornik K., Stinchcombe M. and White H. (1989). Multilayer
feedforward networks are universal approximators, Neural Networks, 2,
359-366.
[79] Pedrycz W. and Gomide F. (1998). An Introduction to Fuzzy Sets:
Analysis and Design, MIT Press.
[80] Ivakhnenko A.G. (1971). Polynomial theory of complex systems, IEEE Trans. Syst. Man Cybern. SMC-1, pp. 364-378.
[81] Ivakhnenko A.G., Madala H.R. (1994). Inductive learning algorithm for complex systems modeling. Boca Raton: CRC Inc.
[82] Jain A. and Dubes R. (1988). Algorithms for Clustering Data. Prentice Hall, New Jersey, USA.
[83] Jain A., Duin R. and Mao J. (2000). Statistical Pattern Recognition : A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1), 4-37.
[84] Jain A., Murty M. and Flynn P. (1999). Data Clustering: A Review, ACM Computing Surveys. 31(3), 264-323.
[85] Janikow C., and Michalewicz Z. (1991). An Experimental Comparison of Binary and Floating Point Representations in Genetic Algorithm. In Proceedings of the Fourth International Conference in Genetic
Algorithms, Morgan Kaufmann. pp. 31-36.
[86] Jolliffe I.T. (1986). Principal Component Analysis. New York: Springer-Verlag.
[87] Judd D., Mckinley P. and Jain A. (1998). Large-scale Parallel Data Clustering. IEEE Transactions on Pattern Analysis and Machine
Intelligence. 20(8), 871- 876.
[88] Kamber M., Han J. (2006). Data mining: Concepts and techniques, 2nd ed. CA: Morgan Kaufmann Publisher. San Francisco.
[89] Kartalopoulos S. V. (1996). Understanding Neural Networks and Fuzzy Logic. Basic Concepts and Applications, IEEE Press, pp. 153-160.
[90] Kaukoranta T., Franti P. and Nevalainen O. (1998). A New Iterative Algorithm for VQ Codebook Generation. International Conference on Image Processing, pp. 589-593.
[91] Kennedy J. and Eberhart R. (1995). Particle Swarm Optimisation, in
Proceedings of IEEE International Conference on Neural Networks, Perth, Australia. 4, 1942-1948.
[92] Kennedy J. and Eberhart R. (2001). Swarm Intelligence. Morgan Kaufmann.
[93] Kennedy J. and Mendes R. (2002). Population Structures and Particle
Swarm Performance, In Proceedings of the IEEE Congress on Evolutionary Computation, Hawaii, USA.
[94] Kennedy J. and Spears W. (1998). Matching Algorithms to Problems:
An Experimental Test of the Particle Swarm and Some Genetic
Algorithms on the Multimodal Problem Generator, In IEEE
International Conference on Evolutionary Computation, Anchorage, Alaska, USA.
[95] Kennedy J. (1999). Small Worlds and Mega-Minds : Effects of Neighborhood Topology on Particle Swarm Performance, In
Proceedings of the Congress on Evolutionary Computation, pp. 1931-
1938.
[96] Kil-Sung Kim, Sung-Kwun Oh, and Hyun-Ki Kim (2005). Pattern
Classification Using Polynomial Neural Networks for Two Classes' Problem, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.2676.
[97] Kirkpatrick S., Gelatt Jr. C.D., and Vecchi M.P. (1983). Optimization by
simulated annealing, Science, 220(4598), 671-680.
[98] Kohavi R., John G.H. (1997). Wrappers for feature subset selection,
Artificial Intelligence, pp. 273-324.
[99] Kohonen T. (2001). Self-Organizing Maps, Springer Series in Information Sciences, Volume 30, Springer, Berlin, Heidelberg, 3rd edition.
[100] Kohonen T. (1982). Self-organized formation of topologically correct
feature maps. Biological Cybernetics, 43:59-69.
[101] Kohonen T. (1990). The Self-Organizing Map, Proceedings of the IEEE. 78:1464-1480.
[102] Kon M. and Plaskota L. (2000). Information complexity of neural
networks, Neural Networks 13: 365-375.
[103] Konar A. (1999). Artificial Intelligence and Soft Computing: Behavioral
and Cognitive Modeling of the Human Brain, CRC Press.
[104] Konar A. and Das S. (2007). Analysis of Biological Data: A Soft Computing Approach. Bandyopadhyay, Sanghamitra (Editor), River Edge, NJ, USA: World Scientific, pp. 21-52. http://site.ebrary.com
[105] Kononenko I. (1989). Bayesian neural networks. Biological Cybernetics
61: 361-370.
[106] Kosala, R. and Blockeel, H. (2000). Web Mining Research: A Survey. ACM SIGKDD Explorations. 2(1), 1-15.
[107] Kosko, B. (1991). Neural Networks and Fuzzy Systems: A Dynamical
Systems Approach to Machine Intelligence, Prentice-Hall.
[108] Kotsiantis S.B. (2007). Supervised Machine Learning: A Review of Classification Techniques, Informatica, 31(3), 249-268.
[109] Koza, J. R. (1992). Genetic Programming. Cambridge, Mass.: The MIT
Press.
[110] Kreßel U. and Schürmann J. (1997). Pattern classification techniques based on function approximation, in Handbook of Character Recognition and Document Image Analysis, H. Bunke and P.S.P. Wang (Editors), World Scientific, Singapore, pp. 49-78.
[111] Krink T. and Løvbjerg M. (2002). The Life-Cycle Model: Combining Particle Swarm Optimisation, Genetic Algorithms and Hill-Climbers. In Proceedings of Parallel Problem Solving from Nature VII, pp. 621-630.
[112] Ladislav Zjavka (2010). Generalization Of Patterns By Identification
With Polynomial Neural Network, Journal of ELECTRICAL ENGINEERING. 61(2), 120-124.
[113] Lansner A. and Ekeberg O. (1987). An associative network solving the
4-Bit ADDER problem. In Caudill M. and Butler C. (eds.), IEEE First
Annual International Conference on Neural Networks, volume 2, pp.
549-556. IEEE, New York. Conf. proc., San Diego, California, June 21-24, 1987.
[114] Lansner A. and Ekeberg O. (1989). A one-layer feedback, artificial neural network with a Bayesian learning rule, International Journal of
Neural Systems. 1: 77-87.
[115] Larose, Daniel T. (2005). Discovering Knowledge in Data: An Introduction to Data Mining. Hoboken, NJ, USA: John Wiley & Sons, Incorporated. http://site.ebrary.com/lib/momp/Doc?id=10114096&ppg=20
[116] Lee C. and Antonsson E. (2000). Dynamic Partitional Clustering Using
Evolution Strategies. In The Third Asia-Pacific Conference on
Simulated Evolution and Learning.
[117] Lengelle R. and Denoeux T. (1996). Training MLP’s layer by layer
using an objective function for internal representations. Neural
Networks. 9(1), 83-87.
[118] Leung Y., Zhang J. and Xu Z. (2000). Clustering by Scale-Space
Filtering. IEEE Transactions on Pattern Analysis and Machine
Intelligence. 22(12), 1396-1410.
[119] Lim, T.S., Loh, W.Y. and Shih, Y.S. (2000). A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms, Machine Learning, 40, 203-228.
[120] Linde Y., Buzo A., and Gray R. M. (1980). An algorithm for vector
quantizer design. IEEE Trans. Communications, 28(1): 84-95.
[121] Lippmann, R.P., (1989). Review of neural networks for speech
recognition, Neural Comput. 1, 1-38.
[122] Liu, C.L. and Sako, H. (2006). Class-specific feature polynomial classifier for pattern classification and its application to handwritten numeral recognition, Pattern Recognition, 39, 669-681.
[123] Liu, H. and Yu, L. (2005). Towards integrating feature selection
algorithms for classification and clustering, IEEE Trans. Knowl. Data
Eng. 17(4), 491-502.
[124] Løvbjerg, M. (2002). Improving Particle Swarm Optimization by Hybridization of Stochastic Search Heuristics and Self-Organized Criticality, Master's Thesis. Department of Computer Science, University of Aarhus, Denmark.
[125] Bullnheimer B., Hartl R. and Strauss C. (1999). An improved ant system
algorithm for the vehicle routing problem. In Annals of Operations
Research, 89, 319-328.
[126] MacKay D.J.C. (1992a). Bayesian interpolation. Neural Comput. 4(3), 415-447.
[127] MacKay D.J.C. (1992b). A practical Bayesian framework for backpropagation networks, Neural Comput. 4(3), 448-472.
[128] MacKay D.J.C. (1992c). The evidence framework applied to classification networks, Neural Comput. 4(5), 698-714.
[129] Mahamed G. H. Omran (2004). Particle Swarm Optimization Methods for Pattern Recognition and Image Processing, PhD Thesis, University of Pretoria.
[130] Michalewicz Z. (1996). Genetic Algorithms + Data Structures =
Evolution Programs (3rd edition). Berlin, Germany: Springer-Verlag.
[131] Michalewicz Z. (1996). Genetic Algorithms + Data Structures =
Evolution Programs, third edition. Springer-Verlag, Berlin.
[132] Michalski R.S., Carbonell J.G. and Kubat, M. (1998). Machine Learning
and Data Mining: Methods and Applications. John Wiley and Sons, New York.
[133] Michie D. (1961). Trial and error. In Barnett S. A. and McLaren A. (Eds.), Penguin Science Survey, 129-145. Harmondsworth, UK: Penguin.
[134] Michie D., Spiegelhalter D.J. and Taylor C.C. (Eds.) (1994). Machine Learning, Neural and Statistical Classification. England: Ellis Horwood Limited, pp. 1-5.
[135] Minsky M. L. and Papert S.A. (1969). Perceptrons, Cambridge, MA,
MIT Press.
[136] Misra B.B., Dehuri S., Dash P.K., Panda G. (2008a). A reduced and comprehensible polynomial neural network for classification, Pattern Recognition Letters. 29, 1705-1712.
[137] Misra B.B., Dehuri S., Dash P.K. and Panda G. (2008b). Reduced Polynomial Neural Swarm Net for Classification Task in Data Mining,
IEEE Congress on Evolutionary Computation (CEC 2008).
[138] Misra B.B., Satapathy S.C., Biswal B.N., Dash, P.K., Panda, G. (2006a). Pattern classification using polynomial neural networks, IEEE Int. Conf.
on Cybernetics & Intelligent Systems (CIS).
[139] Misra B.B., Satapathy S.C., Hanoon N., Dash P.K. (2006b). Particle swarm optimized polynomials for data classification, Proc. of the IEEE Int. Conf. on Intelligent Systems Design and Application.
[140] Mitchell T. M. (1997). Machine Learning, McGraw Hill.
[141] Mitra P., Murthy C.A., Pal S.K. (2002). Unsupervised feature selection
using feature similarity. IEEE Transactions on Pattern Analysis and
Machine Intelligence. 24(3), 301-312.
[142] Muller J., Lemke F., Ivakhnenko A.G. (1998). GMDH algorithms for
complex systems modeling. Math and Computer Modeling of
Dynamical Systems. 4, 275-315.
[143] Muni D.P., Pal N.R. and Das J. (2006). Genetic programming for
simultaneous feature selection and classifier design, IEEE Transactions
on Systems, Man, and Cybernetics - Part B. 36(1), 106-117.
[144] Nakanishi H., Turksen I.B., Sugeno M. (1992). A review and
comparison of six reasoning methods. Fuzzy Sets and Systems. 57, 257-294.
[145] Neal R.M. (1996). Bayesian learning for neural networks, in Lecture
Notes in Statistics. Springer-Verlag (118), Berlin, Germany.
[146] Nikolaev N. L. and Iba H. (1999). Automated discovery of polynomials
by inductive genetic programming. In: Żytkow J., Rauch J. (eds.)
Principles of Data Mining and Knowledge Discovery (PKDD’99).
Springer, Berlin, pp.456-462.
[147] Nowlan S. J. (1991). Soft Competitive Adaptation: Neural Network
Learning Algorithms based on Fitting Statistical Mixtures, PhD thesis,
Pittsburgh, PA, School of Computer Science, Carnegie Mellon University.
[148] Oh S.K., Pedrycz W. (2002). The design of self-organizing Polynomial Neural Networks. Information Sciences. 141, 237-258.
[149] Oh S.K., Pedrycz W. and Park B.J. (2003). Polynomial neural networks architecture: analysis and design, Computers and Electrical Engineering, 29, 703-725.
[150] Pachepsky Ya, Rawls W., Gimenez D., Watt J.P.C. (1998). Use of soil penetration resistance and group method of data handling to improve
soil water retention estimates. Soil Tillage Res. 49, 117-126.
[151] Pal N.R. (1999). Soft computing for feature analysis, Fuzzy Sets and
Systems. 103(2), 201-221.
[152] Pal N.R. (2002). Fuzzy logic approaches to structure preserving
dimensionality reduction. IEEE Transactions on Fuzzy Systems. 10(3), 277-286.
[153] Pal N.R. and Chintalapudi K.K. (1997). A connectionist system for
feature selection, Neural Parallel Sci. Comput. 5(3), 359-381.
[154] Pal N.R., Eluri V.K. and Mandal G.K. (2002). Fuzzy logic approaches
to structure preserving dimensionality reduction, IEEE Transactions on
Fuzzy Systems. 10(3), 277-286.
[155] Pal S. K. and Mitra, S. (1999). Neuro-Fuzzy Pattern Recognition:
Methods in Soft Computing, John Wiley & Sons, Inc.
[156] Pao Y.H. (1989). Adaptive Pattern Recognition and Neural Networks,
Addison-Wesley, Reading, MA.
[157] Parekh R. and Yang J., and Honavar V. (2000). Constructive Neural Network Learning Algorithms for Pattern Classification. IEEE
Transactions on Neural Networks. 11(2), pp. 436-451.
[158] Patrikar A. and Provence J. (1992). Pattern classification using polynomial networks, Electronics Letters, vol. 28, no. 12, pp. 1109-1110.
[159] Pedrycz W. (1996). Fuzzy Sets Engineering, CRC Press, pp. 73-106.
[161] Pena J.M., Lozano J.A., Larranaga P. and Inza I. (2001). Dimensionality reduction in unsupervised learning of conditional Gaussian networks, IEEE Transactions on Pattern Analysis and Machine Intelligence. 23(6), 590-603.
[162] Peng J., Heisterkamp D.R. and Dai H.K. (2001). LDA/SVM driven nearest neighbor classification, CVPR '01 of IEEE Computer Society, Los Alamitos, CA, USA.
[163] Peter Cabena, Pablo Hadjinian, Rolf Stadler, Jaap Verhees, and Alessandro Zanasi (1998). Discovering Data Mining: From Concept to Implementation, Prentice Hall, Upper Saddle River, NJ.
[164] Poggio T. and Girosi F. (1990). Networks for approximation and learning, Proc. of the IEEE, 78(9): 1481-1497.
[165] Quinlan J.R. (1987). Generating production rules from decision trees, in: Proc. Internat. Joint Conf. on Artificial Intelligence, San Francisco, CA: Morgan Kaufmann, pp. 304-307.
[166] Quinlan J.R. (1993). C4.5: Programs for Machine Learning. Morgan
Kaufmann, San Francisco, CA.
[167] Raudys S.J., Jain A.K. (1991). Small sample size effects in statistical pattern recognition: Recommendations for Practitioners. IEEE Transactions on Pattern Analysis and Machine Intelligence. 13(2), 252-
264.
[168] Raudys S.J., Pikelis V. (1980). On dimensionality sample size classification error and complexity of classification algorithms in pattern recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2, 243-251.
[169] Raymer M.L., Punch W.F., Goodman E.D., Kuhn L.A., Jain A.K. (2000). Dimensionality reduction using genetic algorithms. IEEE Trans.
Evol. Comput. 4(2), 164-171.
[170] Rezaee M.R., Goedhart B., Lelieveldt B.P.F., Reiber J.H.C. (1999). Fuzzy feature selection, Pattern Recognition. 32, 2011-2019.
[171] Richard M. D. and Lippmann R. (1991). Neural network classifiers estimate Bayesian a posteriori probabilities, Neural Comput. 3, 461-483.
[172] Ritter H. and Kohonen T. (1989). Self-organizing semantic maps. Biological Cybernetics, 61:241-254.
[173] Robert J. and Howlett L.C.J. (2001). Radial Basis Function Networks 2: New Advances in Design.
[174] Ruiz R., Riquelme J.C. and Aguilar-Ruiz J.S. (2006). Incremental wrapper-based gene selection from microarray data for cancer classification, Pattern Recognition. 39, 2383-2392.
[175] Romero O. and Abelló A. (2009). A survey of multidimensional modeling methodologies, International Journal of Data Warehousing and Mining, 5(2), 1-23.
[176] Rosenblatt F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review
65: 386-408.
[177] Roy A. (2000). On connectionism, rule extraction, and brain-like learning. IEEE Transactions on Fuzzy Systems, 8(2): 222-227.
[178] Rubanov N.S. (2000). The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network, IEEE Transactions on Neural Networks, 11(2), 295-305.
[179] Ruck D.W., Rogers S.K., and Kabrisky M. (1990). Feature selection using a multilayered perceptron. J. Neural Netw. Comput. 40-48.
[180] Rudin W. (1976). Principles of Mathematical Analysis, McGraw-Hill.
Giles C.L. and Maxwell T. (1987). Learning, invariance, and generalization in high-order neural networks, Appl. Opt. 26(23), 4972-4978.
[181] Rumelhart D. E. and McClelland J.L. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, MA.
[182] http://flutuante.wordpress.com/2009/08/02/nerves-or-herbs/
[183] Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986). Learning
internal representations by error propagation. In: Rumelhart D.E.,
McClelland J.L. et al. (eds.) Parallel Distributed Processing :
Explorations in the Microstructure of Cognition. MIT Press, Cambridge,
MA, Vol. 1, pp. 318-362.
[184] Salman A. (1999). Linkage Crossover Operator for Genetic Algorithms, Ph.D. Dissertation, Syracuse University, USA.
[185] Saxena A. and Dubey A. (2010). An Analytical Review of Classification Methodologies. APEEJAY Journal of Management & Technology. ISSN 0974-3294. 5(1), 27-37.
[186] Saxena A., Pal N.R. and Vora M. (2010). Evolutionary methods for
unsupervised feature selection using Sammon’s stress function, Springer
Journal on Fuzzy Information and Engineering. 2(3), 229-247.
[187] Saxena A., Patre D. and Dubey A. (2011). Investigating a Novel GA-Based Feature Selection Method Using Improved KNN Classifiers. Accepted in International Journal of Information and Communication Technology (IJICT). 3(1).
[188] Saxena A., Panda G. and Dash P.K. (2005). Lossless Data Compression
using Neural Networks, Journal of Institution of Engineers (India), pp.
65-69.
[189] Setiono R. (1997). Neural network feature selector. IEEE Trans. Neural Netw. 8(3), 654-662.
[190] Setiono R. and Loew W.K. (2000). FERNN: An algorithm for fast extraction of rules from neural networks. Applied Intelligence. 12, 15-25.
[191] Setiono R. and Liu H. (1997). Neural-network feature selector, IEEE
Transactions on Neural Networks, Vol. 8, No.3, pp.654-662.
[192] Shapiro S. (1990). Encyclopedia of Artificial Intelligence. Shapiro S. (editor), Wiley, New York, NY.
[193] Shi Y. and Eberhart R. (1998). A Modified Particle Swarm Optimizer.
In Proceedings of the IEEE International Conference on Evolutionary
Computation, Piscataway, New Jersey, pp. 69-73.
[194] Shi Y. and Eberhart R. (2001). Fuzzy Adaptive Particle Swarm
Optimization. In Proceedings Congress on Evolutionary Computation,
Seoul, S. Korea, 2001.
[195] Shi Y. and Eberhart R. (1998). Parameter Selection in Particle Swarm
Optimization. Evolutionary Programming VII: Proceedings of EP 98, pp.
591-600.
[196] Schürmann J. (1996). Pattern Classification: A Unified View of Statistical and Neural Approaches. Wiley Interscience, New York.
[197] Siddique M.N.H. and Tokhi M.O. (2001). Training Neural Networks: Backpropagation vs. Genetic Algorithms, IEEE International Joint
Conference on Neural Networks. 4, 2673-2678.
[198] Siedlecki W. and Sklansky J. (1989). A note on genetic algorithms for large-scale feature selection. Pattern Recognition Letters. 10(5), 335-347.
[199] Sil J. and Konar A. (2001). Reasoning using a probabilistic predicate transition net model. Int. J. of Modeling and Simulations. 21(2), 155-168.
[200] Sivakumar Ganesan (2004). Soft Computing Techniques: Theory and Application for Pattern Classification.
[201] Sivakumar A. and Kannan K. (2009). A novel feature selection technique for number classification problem using PNN: A plausible scheme for boiler flue gas analysis. Sensors and Actuators B: Chemical. 139(2), 280-286.
[202] Song Y., Huang J., Zhou D., Zha H. and Lee G.C. (2007a). IKNN:
Informative K-Nearest Neighbor Pattern Classification, Lecture Notes in
Computer Science, Springer, Berlin/Heidelberg, pp. 248-264.
[203] Song Y., Huang J., Zhou D., Zha H. and Lee G.C. (2007b). IKNN:
Informative K-Nearest Neighbor Pattern Classification. Proceedings of
the 11th European conference on Principles and Practice of Knowledge
Discovery in Databases, Warsaw, Poland.
[204] Steppe J.M. (1996). Integrated feature and architecture selection. IEEE
Trans. Neural Netw. 7(4), 1007-1014.
[205] Sykacek P. (2000). On input selection with reversible jump Markov
chain Monte Carlo sampling in Advances in Neural Information
Processing Systems. S.A. Solla, T.K. Leen and K.-R. Muller (Editors),
MIT Press. 12, 638-644.
[206] Tank D.W. and Hopfield J.J. (1986). Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit. IEEE Trans. on Circuits and Systems. 33, 533-541.
[207] Teodorescu H.N., Kandel A. and Jain L. C. (Editors) (1999). Fuzzy and
Neuro-Fuzzy Systems in Medicine, CRC Press, London.
[208] Tettamanzi and Tomassini (2001). Soft Computing. Springer-Verlag.
[209] The Gartner Group, www.gartner.com.
[210] Trunk G.V. (1979). A problem of dimensionality: A simple example. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1(3), 306-307.
[211] Tsoi A.C. and Pearson A.R. (1991). Comparison of three classification techniques, CART, C4.5, and multilayer perceptrons. Advances in Neural Information Processing Systems. 3, 963-969.
[212] Tsuchiya N., Ozawa S. and Abe S. (2000a). Training three-layered neural network classifiers by solving inequalities. In Proceedings of the International Joint Conference on Neural Networks, Como, Italy. 3, 555-560.
[213] Tsuchiya N., Ozawa S. and Abe S. (2000b). Fast Training of Three
Layered Neural Network Classifiers by Solving Inequalities.
Transactions of the Institute of Systems, Control and Information
Engineers, Japan. 13(6), 276-283.
[214] Turi R.H. (2001). Clustering-Based Colour Image Segmentation, Ph. D.
Thesis. Monash University, Australia.
[215] Gambardella, L. and Dorigo, M. (1997). HAS-SOP: Hybrid ant system
for the sequential ordering problem. Technical Report IDSIA 11-97,
IDSIA, Lugano, Switzerland.
[216] Van den Bergh, F. (2002). An Analysis of Particle Swarm Optimizers, Ph.D. Thesis. Department of Computer Science, University of Pretoria,
South Africa.
[217] Vivarelli F. and Williams C. (2001). Comparing Bayesian neural network
algorithms for classifying segmented outdoor images. Neural Networks
14, 427-437.
[218] van Milligen B.Ph., Tribaldos V., Jimenez J.A. and Santa Cruz C. (1998). Comments on "An accelerated learning algorithm for multilayer perceptron: Optimizing layer-by-layer". IEEE Transactions on Neural Networks. 9(2), 339-341.
[219] Wang G.J. and Chen C.C. (1996). A fast multilayer neural network
training algorithm based on the layer-by-layer optimizing procedures.
IEEE transactions on Neural Networks. 7(3), 768-775.
[220] Wasserman, P. D. (1989). Neural Computing, Theory and Practice. Van Nostrand Reinhold.
[221] Wei H.L. and Billings S.A. (2007). Feature subset selection and ranking
for data dimensionality reduction, IEEE Transactions on Pattern
Analysis and Machine Intelligence, 29(1), 162-166.
[222] Weigend A.S., Rumelhart D.E. and Huberman B.A. (1991).
Generalization by weight-elimination with application to forecasting. In:
R.P. Lippmann, J. Moody and D. S. Touretzky (eds.), Advances in
Neural Information Processing Systems 3, San Mateo, CA: Morgan
Kaufmann.
[223] Weiss H. (2003). Genetic Algorithms and Optimum Robot Design,
Institute of Robotics and Mechatronics,
http://www.robotic.dlr.de/Holger.Weiss.
[224] Whitley D. and Rana S. (1998). Search, Binary Representations, and
Counting Optima. In Proceedings of a Workshop on Evolutionary
Algorithms, Sponsored by the Institute for Mathematics and its
Applications.
[225] Widrow B. and Hoff M.E. (1960). Adaptive switching circuits. IRE WESCON Convention Record, Institute of Radio Engineers, New York. 4(5), 96-104.
[226] Widrow B. and Winter R. (1988). Neural nets for adaptive filtering and adaptive pattern recognition. Computer. 21, 25-39.
[227] Wilson E.O. (1975). Sociobiology: The New Synthesis. Belknap Press, Cambridge, MA.
[228] Yager R. R. and Zadeh L. A. (1994). Fuzzy Sets, Neural Networks and
Soft Computing, Van Nostrand Reinhold: New York.
[229] Yam J. and Chow W. (2001). Feedforward Networks Training Speed Enhancement by Optimal Initialization of the Synaptic Coefficients. IEEE Transactions on Neural Networks. 12, 430-434.
[230] Yan S.C., Xu D., Zhang B., Zhang H.J., Yang Q. and Lin S. (2007). Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence. 29(1), 40-51.
[231] Yao X. (1999). Evolving artificial neural networks. Proc. IEEE. 87(9),
1423-1447.
[232] Yu G.Z., Shao S.H., Luo B. and Zeng X.H. (2009). A hybrid method for high-utility item sets mining in large high-dimensional data. International Journal of Data Warehousing and Mining. 5(1), 57-73.
[233] Yumin Chen, Duoqian Miao and Ruizhi Wang (2010). A rough set approach to feature selection based on ant colony optimization. Pattern Recognition Letters. 31, 226-233.
[234] Zhang P., Verma B. and Kumar K. (2005). Neural vs. statistical classifier in conjunction with genetic algorithm based feature selection. Pattern Recognition Letters. 26(7), 909-919.
[235] Zhang H., Berg A.C., Maire M. and Malik J. (2006). SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. CVPR '06, IEEE Computer Society, Los Alamitos, CA, USA, pp. 2126-2136.
[236] Zhou Z. (2004). Rule Extraction: Using Neural Networks or For Neural Networks? Journal of Computer Science and Technology. 19(2), 249-253.
[237] Zurada J.M., Malinowski A. and Usui S. (1997). Perturbation method for detecting redundant inputs of perceptron networks. Neurocomputing. 14, 177-193.
[238] Duda R.O. and Hart P.E. (1973). Pattern Classification and Scene Analysis. New York: Wiley-Interscience.
[239] Breiman L., Friedman J.H., Olshen R.A. and Stone C.J. (1984). Classification and Regression Trees. Wadsworth International, Monterey, CA.
[240] Vapnik V. N. and Chervonenkis A.Y. (1971). Theory Probab. Its Appl.
(USSR) 16, 264.
[241] Xiang Zhigang (1997). Color Image Quantization by Minimizing the Maximum Intercluster Distance. ACM Transactions on Graphics. 16(3).
[242] Jolliffe I.T. (1986). Second edition (2002). Principal component analysis.
Springer-Verlag.
[243] Hart P. (1968). The condensed nearest neighbor rule. IEEE Trans. Inform. Theory. 14, 515-516.
[244] Fulong Chen, Hui Lin, Liming Jiang, Zhiqiang Wei, Qing Zhao and
Shilai Cheng (2007). The Comparison between ENVISAT ASAR and
ALOS PALSAR Differential SAR Interferometry in Large Scale Man-made Linear Objects Deformation Monitoring. Asian Association on
Remote Sensing, http://www.a-a-r-s.org/acrs/proceedings2009.php
[245] Shortliffe E. (1976). Computer-based Medical Consultations: MYCIN.
Elsevier, New York.
[246] Duda R.O., Hart P.E., Konolige K. and Reboh R. (1979). A computer-based consultant for mineral exploration. Technical report SRI Project 6415, SRI International, Menlo Park, CA.
[247] Davis R. and Lenat D. B. (1982). Knowledge-Based Systems in
Artificial Intelligence. McGraw-Hill, New York.
[248] Michalski R.S. and Chilausky R.L. (1980). Learning by being told and learning from examples: An experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. Policy Analysis and Information Systems. 4, 125-160.
[249] Bruner J.S., Goodnow J.J., and Austin G.A. (1956). A Study of Thinking. John Wiley, New York.
[250] Mitchell T.M. (1977). Version spaces: A candidate elimination approach
to rule learning. In Proceedings of the Fifth International Joint
Conference on Artificial Intelligence, William Kaufmann, Los Altos,
CA. Conf. proc., Cambridge, Massachusetts, August 22-25. 305-310.
[251] Alippi C. and Braione P. (2006). IEEE Transactions on Systems, Man, and Cybernetics (TSMC). 36(5), 649-655.
[252] Rivest R.L. (1987). Learning decision lists. Machine Learning. 2, 229-
246.
[253] Quinlan J. R. (1983). Learning efficient classification procedures and
their application to chess end games. In Michalski R. S., Carbonell J. G.,
and Mitchell T. M. (eds.), Machine Learning: An Artificial Intelligence
Approach, chapter 15. Tioga Publishing Company, Palo Alto, CA.
[254] Breiman L., Friedman J.H., Olshen R.A. and Stone C.J. (1984). Classification and Regression Trees. Wadsworth, Belmont, CA.
[255] Young-Min Kim, Seong-Yong Koo, Jong Gwan Lim and Dong-Soo Kwon (2010). A Robust Online Touch Recognition for Dynamic Human-robot Interaction. IEEE Transactions on Consumer Electronics. 56(3), pp. 1979-1987.
[256] Parzen E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics. 33, 1065-1076.
[257] McLachlan G.J. and Basford K. E. (1988). Mixture Models: Inference
and Applications to Clustering. Marcel Dekker, New York.
[258] Dempster A.P., Laird N.M., and Rubin D.B. (1977). Maximum
likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society B. 39, 1-38.
[259] Good I.J. (1950). Probability and the Weighing of Evidence. Charles Griffin, London.
[260] Pearl J. (1988). Probabilistic Reasoning in Intelligent Systems:
Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA.
[261] Van der Walt C.M. and Barnard E. (2007). Measures for the
characterisation of pattern-recognition data sets, in Proceedings of the
18th Annual Symposium of the Pattern Recognition Association of
South Africa (PRASA), Pietermaritzburg, South Africa, pp. 75-78.
[262] Back T. and Fogel D.B. (1999). Glossary. In: Evolutionary Computation 1: Basic Algorithms and Operators. Morgan Kaufmann, San Francisco, CA / Orlando, FL, USA.
[263] Silva C.A., Sousa J.M.C., Runkler T. and Palm R. (2005). Soft computing optimization methods applied to logistic processes. International Journal of Approximate Reasoning. 40, 280-301.
[264] Sinha N.K. and Gupta M.M. (Editors) (1996). Intelligent Control Systems. IEEE Press.
fuzzy PID controller, Automatica. 36 (2000), 673-684.
[265] Zadeh L.A. (1993). Fuzzy logic, neural networks and soft computing. Microprocessing and Microprogramming. 38(1-5), 13.
[266] Roy R., Furuhashi T. and Chawdhry P.K. (Editors) (1999). Advances in Soft Computing - Engineering Design and Manufacturing. Springer-Verlag, London, 301-314.
[267] Gupta M.M. and Musilek P. (2000). Fuzzy Neural Networks in
Cognitive Modeling, International Journal of General Systems. 29(1), 7-28.
[268] Michel R. and Middendorf M. (1998). An island based ant system with
lookahead for the shortest common supersequence problem. In
Proceedings of the Fifth International Conference on Parallel Problem
Solving from Nature. 1498, 692-708.
[269] Yen G.G. and Lu H. (2000). Hierarchical genetic algorithm based
neural network design, In: IEEE Symposium on Combinations of
Evolutionary Computation and Neural Networks, pp. 168-175.
[270] Holmstrom L., Koistinen P., Laaksonen J. and Oja E. (1997). Neural and statistical classifiers - taxonomy and two case studies. IEEE Transactions on Neural Networks. 8(1), 5-17.
[271] Carpenter G.A., Grossberg S., Markuzon N., Reynolds J.H. and Rosen D.B. (1992). Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks. 3(5), 698-713.
[272] Moody J. and Darken Ch. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation. 1, 281-294.
[273] Zadeh L.A. (1992). Fuzzy Logic, Neural Networks and Soft Computing. One-page course announcement of CS 294-4, Spring 1993, University of California at Berkeley.
[274] Holland J. (1975). Adaptation in Natural and Artificial Systems. The
University of Michigan Press.
[275] Altenberg L. (1994). Evolving better representations through selective
genome growth, Proceedings of the 1st IEEE Conference on
Evolutionary Computation. 1, 182-187.
[276] Bishop C.M. (1995). Neural Networks for Pattern Recognition, Oxford University Press.
[277] Saxena A. and Kothari M. (2007). Unsupervised approach for structure
preserving dimensionality reduction, In the Proc. of 6th International
Conference on Advances in Pattern Recognition (ICAPR07), World
Scientific Publishing Co. Pte. Ltd., Singapore, pp. 315-318.
[278] Bonabeau E., Dorigo M. and Theraulaz G. (1999). From Natural to Artificial Swarm Intelligence. Oxford University Press, New York, USA.
[279] Fletcher R. (2000). Practical Methods of Optimization, second edition.
John Wiley & Sons.