
Page 1: Lecture 5: Mathematics of Networks (Cont)

Lecture 5:

Mathematics of Networks (Cont)

CS 790g: Complex Networks

Slides are modified from Networks: Theory and Application by Lada Adamic

Page 2: Lecture 5: Mathematics of Networks (Cont)

Characterizing networks: How far apart are things?


Page 3: Lecture 5: Mathematics of Networks (Cont)

Network metrics: paths

A path is any sequence of vertices such that every consecutive pair of vertices in the sequence is connected by an edge in the network. For directed networks, each edge must be traversed in its correct direction.

A path can visit the same vertex or edge more than once; self-avoiding paths do not intersect themselves.

The path length r is the number of edges on the path, also called hops.

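As a quick illustration (not from the original slides), here is a minimal networkx sketch on an assumed five-vertex graph; it lists the self-avoiding paths between two vertices and their lengths in hops:

```python
import networkx as nx

# Hypothetical example graph; the edge list is assumed purely for illustration.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])

# Self-avoiding (simple) paths from A to E, with length counted in hops (edges).
for p in nx.all_simple_paths(G, "A", "E"):
    print(p, "length =", len(p) - 1)
```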

Page 4: Lecture 5: Mathematics of Networks (Cont)

Network metrics: paths


Page 5: Lecture 5: Mathematics of Networks (Cont)

Network metrics: shortest paths

[Figure: example network with vertices A, B, C, D, E, annotated with shortest-path (geodesic) distances of 1, 2, 2, 3, and 3 hops from a reference vertex.]
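A minimal sketch of the same idea, assuming an illustrative edge list rather than the exact network in the figure:

```python
import networkx as nx

# Assumed example graph echoing the A-E figure above.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")])

print(nx.shortest_path(G, "A", "E"))      # one geodesic, e.g. ['A', 'B', 'C', 'E']
print(nx.shortest_path_length(G, "A"))    # hop distance from A to every other vertex
```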

Page 6: Lecture 5: Mathematics of Networks (Cont)

Structural metrics: Average path length

1 ≤ L ≤ D ≤ N-1, where L is the average shortest-path length, D is the diameter (the longest shortest path), and N is the number of vertices.
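A brief sketch of these quantities with networkx, on an assumed connected example graph (both L and D are only defined when the network is connected):

```python
import networkx as nx

# Assumed connected example graph.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")])

L = nx.average_shortest_path_length(G)   # mean geodesic distance over all pairs
D = nx.diameter(G)                       # longest geodesic distance
N = G.number_of_nodes()
print(L, D, N - 1)                       # satisfies 1 <= L <= D <= N-1
```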

Page 7: Lecture 5: Mathematics of Networks (Cont)

Eulerian path: Euler's Seven Bridges of Königsberg

One of the first problems in graph theory: is there a route that crosses each bridge only once and returns to the starting point?

Source: http://en.wikipedia.org/wiki/Seven_Bridges_of_Königsberg. Images 1-3 – GNU FDL v1.2: Bogdan, Booyabazooka, and Riojajar, Wikipedia; http://commons.wikimedia.org/wiki/Commons:GNU_Free_Documentation_License

Page 8: Lecture 5: Mathematics of Networks (Cont)

Eulerian and Hamiltonian paths

Eulerian path: traverse each edge exactly once.

Hamiltonian path: visit each vertex exactly once (a Hamiltonian path is self-avoiding).

For an Eulerian path, if the starting point and end point are the same, the route is only possible if no node has an odd degree, since the path must enter and leave each shore. If the path does not need to return to the starting point, the graph can have 0 or 2 nodes with an odd degree.
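A small sketch of the degree-parity test on an assumed example graph (the Königsberg multigraph itself has four land masses of odd degree, so neither condition holds):

```python
import networkx as nx

# Assumed example graph, chosen only to illustrate the parity check.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])

odd = [v for v, d in G.degree() if d % 2 == 1]
print("Eulerian circuit possible:", len(odd) == 0)        # return to the starting point
print("Open Eulerian path possible:", len(odd) in (0, 2))
```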

Page 9: Lecture 5: Mathematics of Networks (Cont)

Characterizing networks: Is everything connected?


Page 10: Lecture 5: Mathematics of Networks (Cont)

Network metrics: components

If there is a path from every vertex in a network to every other, the network is connected; otherwise, it is disconnected.

Component: a maximal subset of vertices such that there exists at least one path from each member of the subset to every other member, and no vertex outside the subset is connected to any vertex in the subset.

A singleton vertex that is not connected to any other forms a size-one component.

Every vertex belongs to exactly one component.

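A minimal sketch with networkx, using an assumed example graph with two small components plus a singleton:

```python
import networkx as nx

# Assumed example: two components plus an isolated (singleton) vertex.
G = nx.Graph([("A", "B"), ("B", "C"), ("D", "E")])
G.add_node("F")   # a vertex with no edges forms a size-one component

print(nx.is_connected(G))                               # False
print([sorted(c) for c in nx.connected_components(G)])  # [['A', 'B', 'C'], ['D', 'E'], ['F']]
```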

Page 11: Lecture 5: Mathematics of Networks (Cont)

components in directed networks

Strongly connected components: each node within the component can be reached from every other node in the component by following directed links.

Weakly connected components: every node can be reached from every other node by following links in either direction.

[Figure: directed network with vertices A-H. Weakly connected components: {A, B, C, D, E, G, H} and {F}. Strongly connected components: {B, C, D, E}, {G, H}, {A}, and {F}.]
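A sketch of both notions with networkx, on a hypothetical directed graph (the edge list is assumed and is not necessarily the network drawn on the slide):

```python
import networkx as nx

# Hypothetical directed graph on vertices A-H, assumed for illustration.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "B"),
                ("E", "G"), ("G", "H"), ("H", "G")])
G.add_node("F")   # isolated vertex

print([sorted(c) for c in nx.weakly_connected_components(G)])
# e.g. [['A', 'B', 'C', 'D', 'E', 'G', 'H'], ['F']]
print([sorted(c) for c in nx.strongly_connected_components(G)])
# e.g. [['B', 'C', 'D', 'E'], ['G', 'H'], ['A'], ['F']]
```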

Page 12: Lecture 5: Mathematics of Networks (Cont)

components in directed networks

Every strongly connected component of more than one vertex has at least one cycle.

Out-component: the set of all vertices that are reachable via directed paths starting at a specific vertex v. The out-components of all members of a strongly connected component are identical.

In-component: the set of all vertices from which there is a directed path to a vertex v. The in-components of all members of a strongly connected component are identical.

[Figure: directed network with vertices A-H illustrating in- and out-components.]
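A minimal sketch of in- and out-components via reachability, on an assumed directed example graph:

```python
import networkx as nx

# Assumed directed example; descendants/ancestors give the out- and in-components.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "B"), ("E", "G")])

v = "B"
out_component = {v} | nx.descendants(G, v)   # all vertices reachable from v
in_component = {v} | nx.ancestors(G, v)      # all vertices with a directed path to v
print(sorted(out_component))                 # ['B', 'C', 'D', 'E', 'G']
print(sorted(in_component))                  # ['A', 'B', 'C', 'D', 'E']
```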

Page 13: Lecture 5: Mathematics of Networks (Cont)

network metrics: size of giant component

If the largest component encompasses a significant fraction of the graph, it is called the giant component.

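A sketch of measuring that fraction, using an assumed random graph as the example:

```python
import networkx as nx

# Assumed Erdos-Renyi graph; with mean degree 5 the largest component spans
# nearly all vertices, i.e. it is a giant component.
G = nx.gnp_random_graph(1000, 0.005, seed=42)

largest = max(nx.connected_components(G), key=len)
print(len(largest) / G.number_of_nodes())   # fraction of the graph in the largest component
```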

Page 14: Lecture 5: Mathematics of Networks (Cont)

bowtie model of the web

The Web is a directed graph: webpages link to other webpages. The connected components tell us what set of pages can be reached from any other just by surfing, with no 'jumping' around by typing in a URL or using a search engine.

Broder et al. 1999 – crawl of over 200 million pages and 1.5 billion links:
SCC – 27.5%
IN and OUT – 21.5%
Tendrils and tubes – 21.5%
Disconnected – 8%


Page 15: Lecture 5: Mathematics of Networks (Cont)

Independent paths

Edge-independent paths: paths that share no common edge. Vertex-independent paths: paths that share no common vertex except their start and end vertices. Vertex independence implies edge independence.

Independent paths are also called disjoint paths. These sets of paths are not necessarily unique.

Connectivity of a pair of vertices: the maximal number of independent paths between them. Used to identify bottlenecks and resiliency to failures.

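A minimal sketch of pairwise connectivity with networkx, on an assumed example graph with two independent routes between s and t:

```python
import networkx as nx

# Assumed example graph.
G = nx.Graph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")])

print(nx.node_connectivity(G, "s", "t"))   # max number of vertex-independent s-t paths (2)
print(nx.edge_connectivity(G, "s", "t"))   # max number of edge-independent s-t paths (2)
```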

Page 16: Lecture 5: Mathematics of Networks (Cont)

Cut sets and maximum flow

A minimum cut set is the smallest cut set that will disconnect a specified pair of vertices. It need not be unique.

Menger's theorem: if there is no cut set of size less than n between a pair of vertices, then there are at least n independent paths between the same vertices. This implies that the size of the minimum cut set equals the maximum number of independent paths (for both edge and vertex independence).

The maximum flow between a pair of vertices is the number of edge-independent paths times the edge capacity.

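A small sketch of the equivalence on an assumed unit-capacity example:

```python
import networkx as nx

# Assumed directed unit-capacity example: with capacity 1 on every edge, the maximum
# s-t flow equals the number of edge-independent s-t paths, which by Menger's theorem
# equals the size of the minimum edge cut.
G = nx.DiGraph()
G.add_edges_from([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")], capacity=1)

print(nx.maximum_flow_value(G, "s", "t"))   # 2
print(nx.minimum_edge_cut(G, "s", "t"))     # a smallest edge set separating s and t, size 2
```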

Page 17: Lecture 5: Mathematics of Networks (Cont)

Graph Laplacian


Page 18: Lecture 5: Mathematics of Networks (Cont)

Eigenvalues and eigenvectors have their origins in physics, in particular in problems where motion is involved, although their uses extend from solutions to stress and strain problems to differential equations and quantum mechanics.

Eigenvectors are vectors that point in directions where there is no rotation. Eigenvalues are the change in length of the eigenvector from the original length.

The basic equation in eigenvalue problems is:

Ax = λx

Eigenvalues and eigenvectors

Slides from Fred K. Duennebier

Page 19: Lecture 5: Mathematics of Networks (Cont)

In words, this deceptively simple equation says that for the square matrix A, there is a vector x such that the product Ax equals a SCALAR, λ, multiplied by x. The multiplication of vector x by a scalar constant is the same as stretching or shrinking the coordinates by a constant value.

Ax = λx (E.01)

Eigenvalues and eigenvectors
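A minimal numerical check of this relation with numpy; the matrix here is just an assumed example:

```python
import numpy as np

# Assumed example matrix; numpy returns eigenvalues and eigenvectors of a square matrix.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, x in zip(eigenvalues, eigenvectors.T):   # eigenvectors are the columns
    print(lam, np.allclose(A @ x, lam * x))       # True for each eigenpair
```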

Page 20: Lecture 5: Mathematics of Networks (Cont)

Ax = λx. The vector x is called an eigenvector and the scalar λ is called an eigenvalue.

Do all matrices have real eigenvalues?

No, they must be square and the determinant of A - λI must equal zero. This is easy to show:

Ax - λx = 0, so (A - λI)x = 0 (E.02)

This can only be true if det(A - λI) = |A - λI| = 0 (E.03)

Are eigenvectors unique?

No, if x is an eigenvector, then βx is also an eigenvector and λ is an eigenvalue:

A(βx) = βAx = βλx = λ(βx) (E.04)

Page 21: Lecture 5: Mathematics of Networks (Cont)

How do you calculate eigenvectors and eigenvalues?

Expand equation (E.03), det(A - λI) = |A - λI| = 0, for a 2x2 matrix:

A - λI = [a11 a12 ; a21 a22] - λ[1 0 ; 0 1] = [a11-λ a12 ; a21 a22-λ]

det(A - λI) = (a11 - λ)(a22 - λ) - a12·a21 = 0

0 = (a11·a22 - a12·a21) - (a11 + a22)λ + λ² (E.05)

For a 2-dimensional problem such as this, the equation above is a simple quadratic equation with two solutions for λ. In fact, there is generally one eigenvalue for each dimension, but some may be zero, and some complex.

Page 22: Lecture 5: Mathematics of Networks (Cont)

The solution to E.05, 0 = (a11·a22 - a12·a21) - (a11 + a22)λ + λ², is:

λ = [ (a11 + a22) ± sqrt( (a11 + a22)² - 4(a11·a22 - a12·a21) ) ] / 2 (E.06)

This "characteristic equation" does not involve x, and the resulting values of λ can be used to solve for x.

Consider the following example:

A = [1 2 ; 2 4] (E.07)

Eqn. E.07 doesn't work here because a11·a22 - a12·a21 = 0, so we use E.06:

Page 23: Lecture 5: Mathematics of Networks (Cont)

0 = (a11·a22 - a12·a21) - (a11 + a22)λ + λ²

0 = (1·4 - 2·2) - (1 + 4)λ + λ² = -5λ + λ²

We see that one solution to this equation is λ = 0, and dividing both sides of the above equation by λ yields λ = 5.

Thus we have our two eigenvalues, and the eigenvectors for the first eigenvalue, λ = 0, come from Ax = λx, i.e. (A - λI)x = 0:

([1 2 ; 2 4] - [0 0 ; 0 0]) [x ; y] = [1 2 ; 2 4] [x ; y] = [1x + 2y ; 2x + 4y] = [0 ; 0]

These equations are multiples of x = -2y, so the smallest whole number values that fit are x = 2, y = -1.

Page 24: Lecture 5: Mathematics of Networks (Cont)

For the other eigenvalue, λ = 5:

([1 2 ; 2 4] - [5 0 ; 0 5]) [x ; y] = [-4 2 ; 2 -1] [x ; y] = [-4x + 2y ; 2x - 1y] = [0 ; 0]

-4x + 2y = 0 and 2x - y = 0, so x = 1, y = 2.

This example is rather special; A^-1 does not exist, the two rows of A - λI are dependent, and thus one of the eigenvalues is zero. (Zero is a legitimate eigenvalue!)

EXAMPLE: A more common case is A = [1.05 .05 ; .05 1], used in the strain exercise. Find the eigenvectors and eigenvalues for this A, and then calculate [V,D] = eig(A).

The procedure is:

1) Compute the determinant of A - λI
2) Find the roots of the polynomial given by |A - λI| = 0
3) Solve the system of equations (A - λI)x = 0
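A quick numerical check of the worked example above (eigenvalues 0 and 5, eigenvectors proportional to [2, -1] and [1, 2]):

```python
import numpy as np

# Numerical check of the worked example.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # [0. 5.] (ordering may differ; eigenvectors are returned normalized)
print(np.allclose(A @ np.array([2, -1]), 0 * np.array([2, -1])))   # True
print(np.allclose(A @ np.array([1, 2]), 5 * np.array([1, 2])))     # True
```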

Page 25: Lecture 5: Mathematics of Networks (Cont)

What good are such things?

Consider the matrix:

A = [.8 .3 ; .2 .7]

What is A^100?

We can get A^100 by multiplying matrices many, many times:

A^2 = [.70 .45 ; .30 .55]

A^3 = [.65 .525 ; .35 .475]

A^100 = [.600 .600 ; .400 .400]

Or we could find the eigenvalues of A and obtain A^100 very quickly using eigenvalues.
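A sketch of both routes in numpy, assuming the eigendecomposition A = X·diag(λ)·X^-1, so that A^100 = X·diag(λ^100)·X^-1:

```python
import numpy as np

# Two ways to get A^100: repeated multiplication, and the eigendecomposition.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

brute = np.linalg.matrix_power(A, 100)

vals, X = np.linalg.eig(A)
fast = X @ np.diag(vals ** 100) @ np.linalg.inv(X)

print(np.allclose(brute, fast))   # True
print(brute.round(3))             # approximately [[0.6, 0.6], [0.4, 0.4]]
```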

Page 26: Lecture 5: Mathematics of Networks (Cont)

For now, I'll just tell you that there are two eigenvectors for A:

x1 = [.6 ; .4] and A·x1 = [.8 .3 ; .2 .7][.6 ; .4] = [.6 ; .4] = x1 (λ1 = 1)

x2 = [1 ; -1] and A·x2 = [.8 .3 ; .2 .7][1 ; -1] = [.5 ; -.5] (λ2 = 0.5)

The eigenvectors are x1 = [.6 ; .4] and x2 = [1 ; -1], and the eigenvalues are λ1 = 1 and λ2 = 0.5.

Note that if we multiply x1 by A, we get x1. If we multiply x1 by A again, we STILL get x1. Thus x1 doesn't change as we multiply it by A^n.
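A small sketch of this behavior under repeated application of A:

```python
import numpy as np

# Applying A repeatedly: x1 (eigenvalue 1) is unchanged, while x2 (eigenvalue 0.5)
# shrinks by a factor of 0.5 at every step.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])
x1 = np.array([0.6, 0.4])
x2 = np.array([1.0, -1.0])

print(np.linalg.matrix_power(A, 5) @ x1)   # [0.6, 0.4]  (unchanged)
print(np.linalg.matrix_power(A, 5) @ x2)   # [0.03125, -0.03125]  (= 0.5**5 * x2)
```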

Page 27: Lecture 5: Mathematics of Networks (Cont)

What about x2? When we multiply A by x2, we get x2/2, and if we multiply x2 by A^2, we get x2/4. This number gets very small fast.

Note that when A is squared the eigenvectors stay the same, but the eigenvalues are squared!

Back to our original problem, we note that for A^100 the eigenvectors will be the same, and the eigenvalues are λ1 = 1 and λ2 = (0.5)^100, which is effectively zero.

Each eigenvector is multiplied by its eigenvalue whenever A is applied,