1) Mamdani fuzzy models: max-min or max-product composition.
The overall output fuzzy set is the union of the individual rule outputs: μ1 ∪ μ2 ∪ …
2) Sugeno fuzzy models, also called Takagi-Sugeno-Kang (TSK): the consequent part of a rule is a polynomial function of the inputs.
Defuzzification (weighted average):

z* = (w1 z1 + w2 z2) / (w1 + w2)

z* = [w1 (p1 x + q1 y + r1) + w2 (p2 x + q2 y + r2)] / (w1 + w2)
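The weighted-average defuzzification above can be written as a minimal sketch; the firing strengths and consequent coefficients below are illustrative placeholders, not values from the notes:

```python
# Weighted-average defuzzification for a first-order TSK model.
# Each rule contributes z_i = p*x + q*y + r, weighted by its firing
# strength w. Rule weights and coefficients here are made-up examples.

def tsk_output(x, y, rules):
    """rules: list of (w, p, q, r) tuples; returns z* = sum(w*z) / sum(w)."""
    num = sum(w * (p * x + q * y + r) for w, p, q, r in rules)
    den = sum(w for w, _, _, _ in rules)
    return num / den

rules = [(0.7, 1.0, 2.0, 0.5),   # rule 1: z1 = x + 2y + 0.5
         (0.3, 0.5, 1.0, 1.0)]   # rule 2: z2 = 0.5x + y + 1
z_star = tsk_output(2.0, 1.0, rules)   # -> 4.05
```

A zero-order TSK model is the special case p = q = 0, so each rule contributes a constant r.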
Type 1: TSK model (1st order)

z1 = p1 x^1 + q1 y^1 + r1
z1 = p1 x + q1 y + r1

Type 0 (or 0th-order TSK):

z1 = p1 x^0 + q1 y^0 + r1
z1 = p1 + q1 + r1

(the consequent part is just a number)

z1 = C1
1st-order TSK models are commonly used in modeling (forecasting) applications.
Neuro-fuzzy (NF) models are fuzzy models, but they differ from conventional fuzzy systems: they can use machine-learning algorithms to update their parameters.
3) Tsukamoto fuzzy models
Premise parts: same as above.
Consequent parts: monotonic functions (each rule's output z_i is the value at which the monotonic consequent MF equals the firing strength w_i).
Defuzzification (output):

z* = (w1 z1 + w2 z2) / (w1 + w2)
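A minimal sketch of Tsukamoto inference, assuming each consequent is a monotonically increasing linear MF on [lo, hi] so that it can be inverted in closed form (all numbers are illustrative, not from the notes):

```python
# Tsukamoto inference sketch. Each rule has a monotonic consequent MF
# mu(z) = (z - lo) / (hi - lo) on [lo, hi]; the rule output is the z at
# which mu(z) equals the firing strength w, i.e. the inverse:
#     z_i = lo + w_i * (hi - lo)
# Firing strengths and MF ranges below are illustrative placeholders.

def tsukamoto_output(rules):
    """rules: list of (w, lo, hi); returns the weighted-average output z*."""
    num = sum(w * (lo + w * (hi - lo)) for w, lo, hi in rules)
    den = sum(w for w, _, _ in rules)
    return num / den

rules = [(0.8, 0.0, 10.0),   # rule 1 fires strongly -> z1 = 8.0
         (0.2, 5.0, 15.0)]   # rule 2 fires weakly  -> z2 = 7.0
z_star = tsukamoto_output(rules)   # -> 7.8
```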
(Read by yourself for more info โ Section 4.5, Book 2)
Chapter 4: System Training
The difference between a fuzzy system and a neuro-fuzzy system is that we can implement the fuzzy system like a neural network, and then train the system parameters.
We can use machine-learning (training) algorithms to optimize the membership-function parameters, the TSK consequent-part parameters, and the system's reasoning structure. Parameters can be linear or nonlinear.
Linear: z = 3x + 5y + 2
Non-linear: z* = 2x^2 + 3y^3 + x + 2
4.1 Least Squares Estimator (LSE)
For linear parameter optimization:

y = θ1 f1(u) + θ2 f2(u) + … + θn fn(u)

Parameters = {θ1, θ2, …, θn}
Output = y
Input vector = u
(because u = {u1, u2, …, un})
Applied to the first-order TSK model, the defuzzified output is linear in the consequent parameters:

z* = (w1 z1 + w2 z2) / (w1 + w2)

z* = [w1 (a1 x + b1 y + c1) + w2 (a2 x + b2 y + c2)] / (w1 + w2)

Linear parameters: a1, b1, c1, a2, b2, c2

The firing strengths come from the membership functions, e.g. a Gaussian MF:

μ_A2(x) = e^(−(x − c)^2 / σ^2) ;  w2 = e^(−(x0 − c)^2 / σ^2)

Nonlinear: the MF (membership function) parameters (c, σ)
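As a sketch of why MF parameters are nonlinear: the center c and width σ of a Gaussian MF sit inside an exponential, so the firing strength is not a linear function of them (the numeric values below are illustrative):

```python
import math

# Gaussian membership function: mu(x) = exp(-(x - c)^2 / sigma^2).
# The parameters c (center) and sigma (width) enter nonlinearly, unlike
# the TSK consequent coefficients a, b, c, which enter linearly.
def gaussian_mf(x, c, sigma):
    return math.exp(-((x - c) ** 2) / sigma ** 2)

w = gaussian_mf(2.0, c=2.0, sigma=1.0)   # x at the center -> membership 1.0
```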
y = θ1 f1(u) + θ2 f2(u) + … + θn fn(u)

u = {x1, x2, …, xp}^T
θ = {θ1, θ2, …, θn}^T

Training data:

{u1, y1}, {u2, y2}, …, {um, ym}

General representation:

{ui ; yi} ;  i = 1, 2, …, m
f1(u1) θ1 + f2(u1) θ2 + … + fn(u1) θn = y1
f1(u2) θ1 + f2(u2) θ2 + … + fn(u2) θn = y2
⋮
f1(um) θ1 + f2(um) θ2 + … + fn(um) θn = ym

Matrix representation:

[ f1(u1)  f2(u1)  …  fn(u1) ] [ θ1 ]   [ y1 ]
[ f1(u2)  f2(u2)  …  fn(u2) ] [ θ2 ] = [ y2 ]
[   ⋮       ⋮     ⋱    ⋮    ] [ ⋮  ]   [ ⋮  ]
[ f1(um)  f2(um)  …  fn(um) ] [ θn ]   [ ym ]
θ^T = {θ1, θ2, …, θn}
Summary of notation:
• Vectors: lowercase (column representation, typically)
• Matrix: uppercase, e.g. A
• Scalar: plain lowercase

ai^T = {f1(ui), f2(ui), …, fn(ui)}  (the i-th row of A)
Training pairs: {ui ; yi}
A θ = y

If A is square and non-singular (det A ≠ 0):

A^−1 A θ = A^−1 y
θ = A^−1 y

In practice, however, m ≫ n, where
m = # of training data points
n = # of linear parameters to be optimized
"In general, the number of training data points should be 5 times the number of linear parameters to be optimized."
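The setup A θ = y can be sketched by building the design matrix A from basis functions and training inputs; the basis functions and data values below are illustrative, not from the notes:

```python
# Build the design matrix A with A[i][j] = f_j(u_i): basis function f_j
# evaluated at training input u_i. Basis and data are illustrative.

def build_design_matrix(inputs, basis):
    """inputs: list of training inputs u_i; basis: list of functions f_j."""
    return [[f(u) for f in basis] for u in inputs]

basis = [lambda u: 1.0, lambda u: u]     # f1(u) = 1, f2(u) = u  (a line)
inputs = [1.0, 2.0, 3.0]                 # m = 3 data points, n = 2 parameters
A = build_design_matrix(inputs, basis)   # 3x2 matrix (m rows, n columns)
```

Here m = 3 and n = 2, so A is rectangular and has no inverse; this is why the pseudo-inverse solution derived below is needed.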
• Noise in experiments: unavoidable (always present).

A θ + e = y

Error vector:

e = y − A θ
Objective function:

E(θ) = (y1 − a1^T θ)^2 + (y2 − a2^T θ)^2 + … + (ym − am^T θ)^2

E(θ) = Σ_{i=1}^{m} (yi − ai^T θ)^2

ei = yi − ai^T θ ;  where i = 1, 2, 3, …, m

E(θ) = e1^2 + e2^2 + … + em^2 = Σ_{i=1}^{m} ei^2 = e^T e
Consider:

e^T e = (y − Aθ)^T (y − Aθ)
      = [y^T − (Aθ)^T] (y − Aθ)
      = [y^T − θ^T A^T] (y − Aθ)
      = y^T y − y^T Aθ − θ^T A^T y + θ^T A^T A θ
      = y^T y − y^T Aθ − y^T Aθ + θ^T A^T A θ      (θ^T A^T y is a scalar, equal to its transpose y^T Aθ)
      = y^T y − 2 y^T Aθ + θ^T A^T A θ
θ = {θ1, θ2, …, θn}^T

∂E(θ)/∂θ = ∂(y^T y)/∂θ − 2 (y^T A)^T + [(A^T A) + (A^T A)^T] θ

Let:

∂E(θ)/∂θ = 0  at  θ = θ̂
Consider:

∂(y^T A x)/∂x = A^T y

0 = 0 − 2 A^T y + [A^T A + (A^T A)^T] θ̂

−2 A^T y + 2 A^T A θ̂ = 0      (A^T A is symmetric)

A^T A θ̂ = A^T y

(A^T A)^−1 (A^T A) θ̂ = (A^T A)^−1 A^T y

θ̂ = (A^T A)^−1 A^T y

(informally: θ̂ = A^T y / (A^T A))
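The closed form θ̂ = (A^T A)^−1 A^T y can be sketched in pure Python for the two-parameter line-fitting case, where A^T A is 2×2 and can be inverted by hand (the data values are illustrative):

```python
# Least-squares estimate theta_hat = (A^T A)^{-1} A^T y for the model
# y = t0 + t1*x (two linear parameters), i.e. A has rows [1, x_i].
# Solved via the 2x2 normal equations; data values are illustrative.

def lse_line(xs, ys):
    """Fit y = t0 + t1*x by the normal equations; returns (t0, t1)."""
    m = len(xs)
    s_x = sum(xs);  s_xx = sum(x * x for x in xs)        # A^T A entries
    s_y = sum(ys);  s_xy = sum(x * y for x, y in zip(xs, ys))  # A^T y
    det = m * s_xx - s_x * s_x                           # det(A^T A)
    t0 = (s_xx * s_y - s_x * s_xy) / det
    t1 = (m * s_xy - s_x * s_y) / det
    return t0, t1

t0, t1 = lse_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])   # exact line y = 1 + 2x
```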
Example 3.1 (Jang's Book)
m = 7

Experiment | Force (Newtons) | Length of Spring (inches)
-----------|-----------------|--------------------------
1          | 1.1             | 1.5
2          | 1.9             | 2.1
3          | 3.2             | 2.5
4          | 4.4             | 3.3
5          | 5.9             | 4.1
6          | 7.4             | 4.6
7          | 9.2             | 5.0

Model: L = k0 + k1 F

{ k0 + 1.1 k1 = 1.5
  k0 + 1.9 k1 = 2.1
  ⋮
  k0 + 9.2 k1 = 5.0
A θ̂ = y − e

[ 1  1.1 ]            [ y1 ]   [ e1 ]
[ 1  1.9 ] [ k0 ]  =  [ y2 ] − [ e2 ]
[ ⋮   ⋮  ] [ k1 ]     [ ⋮  ]   [ ⋮  ]
[ 1  9.2 ]            [ y7 ]   [ e7 ]

(the second column of A holds the forces F_i; the measured lengths form y)

θ̂ = [ k0 ] = (A^T A)^−1 A^T y
    [ k1 ]
Use MATLAB (the inv function and the \ operator), or compute manually (since A^T A is a 2×2 matrix) via:

[ a  b ]^−1       1      [  d  −b ]
[ c  d ]     = ———————   [ −c   a ]
               ad − bc
θ̂ = [ k0 ] = [ 1.20 ]
    [ k1 ]   [ 0.44 ]

L = 1.20 + 0.44 F
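The fit can be checked numerically; a pure-Python run of the normal equations on the spring data reproduces k0 ≈ 1.20, k1 ≈ 0.44:

```python
# Verify Example 3.1: fit L = k0 + k1*F to the spring data by least squares.
F = [1.1, 1.9, 3.2, 4.4, 5.9, 7.4, 9.2]   # force (N)
L = [1.5, 2.1, 2.5, 3.3, 4.1, 4.6, 5.0]   # spring length (in)

m = len(F)
s_f = sum(F);  s_ff = sum(f * f for f in F)               # A^T A entries
s_l = sum(L);  s_fl = sum(f * l for f, l in zip(F, L))    # A^T y entries
det = m * s_ff - s_f * s_f                                # det(A^T A)
k0 = (s_ff * s_l - s_f * s_fl) / det
k1 = (m * s_fl - s_f * s_l) / det
print(round(k0, 2), round(k1, 2))   # 1.2 0.44
```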