
    An Introduction to Neural Networks:

    The Perceptron


The human brain is essentially a large and unimaginably complex Neural Network. We can also think of the brain as an organized series of interconnected subsections of Neural Networks. We will look at how nature has implemented the Neural Network, and then look at the workings of the most common artificial Neural Network, the Perceptron.

    Neurons


The Neural Network of a mature human brain contains about 100 billion nerve cells called neurons. These neurons are the fundamental part of the Neural Network. Neurons form complex networks of interconnections, called synapses, with each other. A typical neuron can interconnect with up to 10,000 other neurons, with the average neuron interconnecting with about 1,000 other neurons.

    Synapse of Interconnecting Neurons


For more detailed information on the synaptic interconnections between neurons at the microscopic level, there are interesting animations to be found at:

Chemical Synapse

The Mind Project

Brain Basics - Firing of Neurons

    The Biological Neural Network


Although the mechanism of the synapse itself is compelling, the focus of this article is the Neural Network itself, and particularly the Perceptron. The Perceptron is a simple and common configuration of an artificial Neural Network. We will start with a brief history of Artificial Intelligence and the Perceptron itself.

    Artificial Intelligence - Mimicking the Human Brain


With the advent of electronic computers in the 1940s, people began to think about the possibility of artificial brains, or what is commonly known as Artificial Intelligence. In the beginning, some thought that the logic gate, the building block of digital computers, could serve as an artificial neuron, but this idea was quickly rejected. In 1949, Donald Hebb proposed an artificial neuron that more closely mimicked the biological neuron, where each neuron would have numerous interconnections with other neurons. Each of these interconnections would have a 'weight' multiplier associated with it. Learning would be achieved by changing the weight multipliers of each of the interconnections. In 1957, Frank Rosenblatt implemented a Hebb neuron, which he called a 'Perceptron'.

In 1974, Paul Werbos first described, in his PhD thesis, the process of training artificial neural networks through the "Backpropagation of Errors". Just as Frank Rosenblatt developed the ideas of Donald Hebb, in 1986 David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams took the idea of Paul Werbos and developed a practical Backpropagation algorithm, which led to a renaissance in the field of artificial neural network research.

Where that renaissance has led is to a new 'Perceptron', the multi-layered Perceptron. The multi-layered Perceptron of today is now synonymous with the term 'Perceptron', and has also become synonymous with the term 'Neural Network' itself.

    The Feed-Forward Multi-Layered Perceptron


What makes the modern Feed-Forward Multi-Layered Perceptron so powerful is that it essentially teaches itself by using the Backpropagation learning algorithm. We will look into how the Multi-Layered Perceptron works, and the process by which it teaches itself using Backpropagation.

    Neural Networks Using the Multi-Layered Perceptron

    NASA: A Prediction of Plant Growth in Space


Obviously, anything done in space must be done as efficiently as possible. To optimize plant growth, NASA created this Perceptron, taught with actual data, to simulate different growth environments.

    Mayo Clinic: A Tumor Classifier


The above perceptron is self-explanatory. A perceptron need not be complex to be useful.

    An Early Commercial Use: The Original Palm Pilot


Although it may seem awkward now, since most cell phones have a full keyboard, the early Palm Pilot used a stylus, or electronic pen, to enter characters freehand. It used a perceptron to learn to read a particular user's handwriting. An unforeseen popular use of the Palm Pilot was for anthropologists to enter script from ancient languages that they transcribed from ancient stone and clay tablets.


    Papnet: Assisted Screening of Pap Smears

Papnet is a commercial neural network-based computer program for assisted screening of Pap (cervical) smears. A Pap smear test examines cells taken from the uterine cervix for signs of precancerous and cancerous changes. A properly taken and analysed Pap smear can detect very early precancerous changes. These precancerous cells can then be eliminated, usually in a relatively simple office or outpatient procedure.

    Type These Characters: The anti-NeuralNet Application


You may have seen the kind of prompt shown above when logging on to some web site. Its purpose is to disguise a sequence of letters so a Neural Net cannot read it. With this readable-only-by-a-human safeguard, web sites are protected against other computers entering these sites via exhaustive attempts at passwords.

    The Individual Nodes of the Multi-Layered Perceptron

Since the modern Perceptron is a Neural Network in itself, to understand it we need to go back to its basic building block, the artificial neuron. As was stated, the original perceptron served as the artificial neuron. We will call what serves today as the artificial neuron the Threshold Logic Unit, or TLU.


The original Hebb neuron would sum all of its inputs. Each input in turn was the product of an external input times that external input's corresponding weight multiplier. The Threshold Logic Unit, or TLU, adds a significant feature to the original Hebb neuron: the Activation Function. The Activation Function takes as input the sum of what is now called the Input Function, which is essentially the Hebb neuron, and scales it to a value between 0 and 1.
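In symbols (our own shorthand, matching the terms above), a TLU with external inputs x1 through xn and weight multipliers w1 through wn computes:

$$\text{net} = \sum_{i=1}^{n} x_i w_i \qquad\qquad y = f(\text{net})$$

where net is the Input Function's sum and f is the Activation Function.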

The selection of the mathematical function that implements the Activation Function is a pivotal design decision. Not only does it control the mapping of the Input Function's sum to a value between 0 and 1, its selection directly affects the development of the Perceptron's ability to teach itself, as we will see shortly. A common Activation Function that we will use is the sigmoid function.
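The sigmoid function is defined as:

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

It maps any real input, however large or small, smoothly onto a value between 0 and 1.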

With the sigmoid Activation Function, each TLU will now output a value between 0 and 1. This 0-1 valued output, coupled with a weight multiplier with a value between -1 and +1, will keep values within the network at a manageable level. With the Activation Function, the TLU more closely mimics the operation of the neuron.
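To make the TLU concrete, here is a minimal Python sketch; the function names and example values are ours, chosen only for illustration:

```python
import math

def sigmoid(x):
    # Sigmoid Activation Function: scales any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tlu(inputs, weights):
    # Input Function: sum each external input times its weight multiplier.
    net = sum(x * w for x, w in zip(inputs, weights))
    # Activation Function: scale the sum to a value between 0 and 1.
    return sigmoid(net)

# A TLU with two inputs and illustrative weights:
print(tlu([0.5, 0.8], [0.4, -0.6]))  # about 0.43, a value between 0 and 1
```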

    Teaching the Perceptron Using Backpropagation

Let's start with a very simple perceptron with 2 Inputs and 2 Outputs, as shown below. As our perceptron receives Input 1 and Input 2, it responds with the values of its outputs, Output 1 and Output 2.

The process of teaching this perceptron consists of giving it a series of input pairs, and then comparing the actual output values that were generated with the desired output values that correspond to each pair of inputs. Based upon the difference between the actual output values and the desired output values, adjustments are made.

The only things that are changed during training are the weight multipliers. Consequently, the process of teaching the perceptron is a matter of changing the weight multipliers until the actual outputs are as close as possible to the desired outputs. As we stated earlier, the perceptron uses the process of Backpropagation to change its weight multipliers, and thus teach itself.

    Mathematics of Learning via Minimizing Error

Locating Output 1 in our simple perceptron, we can see that it has 2 inputs. Those inputs to Output 1 are the outputs from each of the TLUs in the Hidden Layer. The Hidden Layer outputs are each multiplied by their corresponding weight multipliers, wo11 and wo21. The weights are identified by their source and destination TLUs. For example, the ID 'wo21' stands for the weight, to an output, from Hidden node 2 to Output node 1.

Looking again at the weight multipliers of the 2 inputs to Output 1, wo11 and wo21, we can make a 3-dimensional graph where the x-axis corresponds to the value of the wo11 weight multiplier and the y-axis corresponds to the value of the wo21 weight multiplier. The meaning of the height, or z-axis, will follow.


Although weight multipliers can be negative, we will consider only the possible positive values for these weights, which are between 0 and 1. For a given value of the 2 coordinates (wo11, wo21), there is an associated amount of difference between the desired value of Output 1 and the actual value. This difference we will call the delta, and it is the value of the height in the z-axis.

At the ideal values of wo11 and wo21, the delta is zero, so the height is zero. The farther any pair of wo11 and wo21 values are from the ideal values, the more the height of the delta (the size of the error) increases. The result is that this 3D graph forms a bowl or funnel shape, with the ideal values of wo11 and wo21 at the bottom.

So any time we find ourselves at some point (wo11, wo21) on the graph that is not the ideal point, we will want to slide downhill toward the bottom of our virtual bowl. Differential Calculus gives us this ability with the Gradient. The mathematical function for the Gradient is:
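$$\nabla f = \frac{\partial f}{\partial w_1}\,\hat{e}_1 + \frac{\partial f}{\partial w_2}\,\hat{e}_2 + \cdots + \frac{\partial f}{\partial w_n}\,\hat{e}_n$$

(This is the standard form for a function f of n variables; in our bowl, the two variables are wo11 and wo21.) The Gradient points in the direction of steepest ascent, so moving against it slides us downhill.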

Yikes! Fortunately, we don't have to worry about the particulars. We just need to know that mathematically, once we have identified a non-ideal pair of values for wo11 and wo21 in our virtual bowl, we have a means of determining the direction to go to get closer to the ideal values.

Now we can get a feel for the theoretical "Backpropagation of Errors" process that Paul Werbos described in 1974. We will now go into the steps of the actual process that was finally implemented by a team in 1986. Obviously, it was not a trivial task.

In short, the team of 1986 constructed a generalization of the complicated mathematics for a generic perceptron, and then simplified that mathematical process down into simple parts. This mathematical simplification was essentially doing on a very large scale what we do on a small scale when we simplify a fraction to lowest terms.

To imagine the level of complexity of the original model, consider that in our simple perceptron, Output 1 has only 2 inputs. These 2 inputs form a 3-dimensional graph. To model up to n inputs, mathematicians had to imagine a virtual 'bowl' in n+1 dimensional hyperspace. Then they had to describe an n+1 dimensional gradient.

After all this complexity, the first major simplification was to eliminate the n+1 dimensional hyperspace. Differential Calculus is still involved, but only in 2 dimensions with 1 independent variable. So the adjustment to each incoming weight multiplier could be considered independently.

    Taking another look at our perceptron:


We can now mathematically adjust our weights to Output 1 with the following equations:
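In one standard formulation (the symbols are ours: L is the Learning rate constant described below, d1 and y1 are the desired and actual values of Output 1, net1 is the Input Function's sum at Output 1, f' is the derivative of the Activation function, and h1 and h2 are the Hidden Layer outputs):

$$\Delta w_{o11} = L \cdot (d_1 - y_1) \cdot f'(\text{net}_1) \cdot h_1$$

$$\Delta w_{o21} = L \cdot (d_1 - y_1) \cdot f'(\text{net}_1) \cdot h_2$$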


The Learning rate constant is a fractional multiplier used to limit the size of any particular adjustment. With computers, the perceptron can go through the same learning sequence over and over again, so each adjustment can be small. If adjustments are too large, the adjustments might overcompensate and the weights would just oscillate back and forth.

Whew! Things are now much better than they were with partial derivatives in hyperspace, but there is still the matter of the derivative of the sigmoid function, which is our TLU's Activation function. As stated earlier, using the sigmoid function was a pivotal design decision. By looking at its derivative, we can see why:
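$$y = \sigma(x) = \frac{1}{1 + e^{-x}} \qquad\Longrightarrow\qquad \frac{dy}{dx} = \frac{e^{-x}}{(1 + e^{-x})^2} = \sigma(x)\,\bigl(1 - \sigma(x)\bigr) = y\,(1 - y)$$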


Consequently, the derivative of the sigmoid function, or Activation function, becomes a simple algebraic expression of the value of the function itself. Since the value of the Activation function, which we call y, has to be computed anyway to determine the output value of any TLU, the derivative term becomes trivial.

With this simple derivative term, once we compute y, the adjustment to wo21 becomes the simple algebraic expression:
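$$\Delta w_{o21} = L \cdot (d_1 - y_1) \cdot y_1 (1 - y_1) \cdot h_2$$

Putting the pieces together, here is a minimal Python sketch of this output-weight adjustment; the names and numbers are illustrative, not from a real network:

```python
L = 0.5  # Learning rate constant (an illustrative value)

def adjust_output_weight(w, hidden_out, actual, desired):
    # Simplified delta rule for one output weight:
    # L * (desired - actual) * y(1 - y) * hidden-layer output,
    # where y(1 - y) is the sigmoid's derivative written in terms of y.
    error_term = (desired - actual) * actual * (1.0 - actual)
    return w + L * error_term * hidden_out

# One adjustment of wo21, using made-up values:
wo21 = 0.3   # current weight from Hidden node 2 to Output node 1
h2   = 0.7   # output of Hidden node 2
y1   = 0.6   # actual value of Output 1
wo21 = adjust_output_weight(wo21, h2, y1, desired=0.9)
print(wo21)  # 0.3252: a small step toward the desired output
```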


    ONTO PAGE 2:

    THE PROCESS OF BACKPROPAGATION

    Copyleft 2010 - Feel free to use for educational purposes

    by Cary Rhode, Math instructor at Lower Columbia College, Longview, WA, USA

    and all around Great Guy [email protected]
