Computational Techniques 2008-2009



    which, by our assumption, is true iff  when . Hence the columns of  are independent.

    (Proof)

    Case 1: Assume the columns of  are independent; then the columns of  are independent.

    Case 2: Assume the columns of  are independent; then the columns of  are independent.

    Case 3: Assume the columns of  are independent; then the columns of  are independent.

    So any elementary column operations on  leave the rank unchanged. Equivalently, any elementary row operation on  leaves the rank unchanged. Hence any elementary row operation on  does not change the linear dependence or independence of the rows or columns.

    (e) The singular values are given by the square roots of the eigenvalues of either A^T A (2x2) or A A^T (3x3). Use the smaller (2x2) matrix.

    Characteristic equation:

    Hence and
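    The same calculation can be checked numerically. Below is a minimal sketch, assuming a hypothetical 3x2 matrix A in place of the exam's (unreproduced) matrix; it forms the smaller Gram matrix A^T A and compares against NumPy's built-in SVD.

        import numpy as np

        # Hypothetical 3x2 matrix standing in for the exam's A.
        A = np.array([[1.0, 2.0],
                      [0.0, 1.0],
                      [1.0, 0.0]])

        # Singular values = square roots of the eigenvalues of the smaller
        # Gram matrix A^T A (2x2) rather than A A^T (3x3).
        eigvals = np.linalg.eigvalsh(A.T @ A)
        sigma = np.sqrt(np.sort(eigvals)[::-1])    # descending order

        print(sigma)
        print(np.linalg.svd(A, compute_uv=False))  # should agree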

    2.(a) (i) Elimination:

    steps:

    After reducing the system, we end up with an equation: . Hence the system is incompatible with .

    (ii)  from Gaussian Elimination. Rank-nullity theorem: rank + nullity = number of columns. So, using the rank-nullity theorem twice:

    Hence:
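    As a numerical cross-check of these rank arguments, the sketch below uses NumPy on a hypothetical 3x3 system (the exam's actual coefficients are not reproduced here): the system is incompatible exactly when appending the right-hand side raises the rank, and the nullity follows from the rank-nullity theorem.

        import numpy as np

        # Hypothetical coefficient matrix and right-hand side.
        A = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0],
                      [1.0, 0.0, 1.0]])
        b = np.array([1.0, 3.0, 2.0])

        rank_A  = np.linalg.matrix_rank(A)
        rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

        # Incompatible system: elimination yields a row 0 = c with c != 0,
        # equivalently rank([A|b]) > rank(A).
        print("consistent:", rank_A == rank_Ab)

        # Rank-nullity: nullity(A) = number of columns - rank(A).
        print("nullity:", A.shape[1] - rank_A)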


    (iii) Elimination:

    steps:

    (b) (i) A matrix M is symmetric iff M^T = M.

    (A^T A)^T = A^T (A^T)^T    using (XY)^T = Y^T X^T
              = A^T A          using (X^T)^T = X

    Hence A^T A is symmetric.

    A matrix M is positive semidefinite iff x^T M x ≥ 0 for all x.

    x^T (A^T A) x = (A x)^T (A x) = ||A x||^2 ≥ 0, as the dot product of a vector with itself is nonnegative.

    Hence A^T A is positive semidefinite.
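    A quick numerical sanity check of both properties, as a sketch with an arbitrary (hypothetical) matrix A, since any real A gives a symmetric, positive semidefinite A^T A:

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 3))   # hypothetical A
        G = A.T @ A

        # Symmetry: G equals its transpose.
        print(np.allclose(G, G.T))

        # Positive semidefiniteness: x^T G x = ||A x||^2 >= 0 for any x.
        x = rng.standard_normal(3)
        print(x @ G @ x, np.linalg.norm(A @ x) ** 2)   # equal, nonnegative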

    (ii) Let y = A x, where A is m x n and x is in R^n. Now x^T (A^T A) x = (A x)^T (A x) = y^T y = ||y||^2. But ||y||^2 = 0 iff y = 0, and we know A x = 0 only when x = 0, since the columns of A are independent. Hence by substitution:

    x^T (A^T A) x = 0 is true iff A x = 0, i.e. when x = 0. In other words, A^T A is positive definite.

    (iii) The Cholesky factorisation factors a positive semidefinite matrix into L L^T, with L lower triangular. In part (ii) we showed that A^T A is positive definite, and so it has a Cholesky factorisation A^T A = L L^T. So when we solve the normal equations A^T A x = A^T b, we first find the Cholesky factorisation of A^T A to get the system L L^T x = A^T b, which can be broken into two triangular systems:

    L y = A^T b and L^T x = y.

    Solving this system gives the value of x that minimises ||A x - b|| (the least squares problem).
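    A sketch of this procedure in NumPy/SciPy, using a small hypothetical overdetermined system in place of the exam's data, and checking against the library least-squares solver:

        import numpy as np
        from scipy.linalg import cholesky, solve_triangular

        # Hypothetical overdetermined system A x ~= b.
        A = np.array([[1.0, 1.0],
                      [1.0, 2.0],
                      [1.0, 3.0]])
        b = np.array([1.0, 2.0, 2.0])

        # Normal equations (A^T A) x = A^T b, with A^T A = L L^T.
        G = A.T @ A
        L = cholesky(G, lower=True)

        # Two triangular solves: L y = A^T b, then L^T x = y.
        y = solve_triangular(L, A.T @ b, lower=True)
        x = solve_triangular(L.T, y, lower=False)

        print(x)
        print(np.linalg.lstsq(A, b, rcond=None)[0])   # same least-squares solution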

    3.(a) Condition number: κ(A) = ||A|| ||A^(-1)||.

    Hence

    Hence

    For :

    and
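    As a numerical illustration (the exam's matrix is not reproduced, so the nearly singular 2x2 matrix below is a hypothetical stand-in), the condition number can be computed directly from its definition and compared with NumPy's built-in:

        import numpy as np

        # Hypothetical, nearly singular matrix: expect a large condition number.
        A = np.array([[1.0, 1.0],
                      [1.0, 1.0001]])

        # kappa(A) = ||A|| * ||A^-1||, here in the 2-norm.
        kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
        print(kappa, np.linalg.cond(A, 2))   # the two agree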


    (b) Rate of convergence is defined as -log10(ρ(M)), where ρ(M) is the largest-modulus eigenvalue (spectral radius) of the iteration matrix M. The eigenvalues of M are found by solving the characteristic polynomial det(M - λI) = 0. So solve:

    When we expand the 3x3 determinant down the third column, we get three terms, each with a 2x2 determinant. Two of these terms are zero. We are left with a 2x2 determinant which simplifies to: . So . Hence the largest (modulus) eigenvalue is . Hence the rate of convergence is .
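    The same quantities are easy to compute numerically. The sketch below uses a hypothetical 3x3 iteration matrix M (the exam's matrix is not reproduced) and the convention that the rate of convergence is -log10 of the spectral radius:

        import numpy as np

        # Hypothetical iteration matrix standing in for the exam's M.
        M = np.array([[0.0, 0.5, 0.0],
                      [0.2, 0.0, 0.1],
                      [0.0, 0.3, 0.0]])

        rho = max(abs(np.linalg.eigvals(M)))   # spectral radius = largest |eigenvalue|
        rate = -np.log10(rho)                  # one common definition of the rate
        print(rho, rate)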

    (c) Iterative refinement is good for improving the accuracy of numerical solutions to A x = b that can suffer from rounding errors in floating-point arithmetic. In general the computed solution x̂ is only an approximation to the exact solution x.

    The residual vector is defined as r = b - A x̂, which measures the error in x̂. The better x̂ is, the closer A x̂ is to b in the range space of A, and the smaller ||r|| is.

    Iterative refinement works by taking an approximation x̂ and finding a better approximation x̂_new = x̂ + d, where the correction d is found by solving:

    A d = r. This works because A(x̂ + d) = A x̂ + A d = A x̂ + r = A x̂ + (b - A x̂) = b. The process can then be repeated to refine the solution iteratively until convergence. The process is applied when A is not ill-conditioned, i.e. κ(A) is relatively small. When this is true, smaller residuals do imply a smaller error in the solution. However, when the condition number becomes larger, a smaller residual doesn't necessarily imply a better solution, and so iterative refinement doesn't work.
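    A minimal sketch of the refinement loop, on a small hypothetical well-conditioned system (in practice the correction step would reuse the existing LU factorisation of A and the residual would be computed in higher precision; those details are omitted here):

        import numpy as np

        def iterative_refinement(A, b, x, steps=3):
            # Repeatedly solve A d = r for the correction d and update x.
            for _ in range(steps):
                r = b - A @ x               # residual of the current approximation
                d = np.linalg.solve(A, r)   # correction from A d = r
                x = x + d
            return x

        # Hypothetical system with a deliberately perturbed starting guess.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x0 = np.linalg.solve(A, b) + 1e-3
        print(iterative_refinement(A, b, x0))
        print(np.linalg.solve(A, b))        # reference solution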

    4.(a) (i) For extreme points to exist, we require the gradient to vanish, ∇f = 0, i.e. when

    So any extreme points are of the form: . Hessian:

    when

    Now we look at the sub-determinants (leading principal minors) to see whether the Hessian is positive definite (all sub-determinants positive) or negative definite (sub-determinants alternating in sign, starting negative).

    For a minimum:  and  (for a positive definite Hessian).
    For a maximum:  and  (for a negative definite Hessian).

    For either a maximum or a minimum we need both  and , which is impossible. There are no minimum/maximum points.
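    The stationary-point/Hessian test can be reproduced symbolically. The sketch below uses SymPy with a hypothetical two-variable function (the exam's function is not reproduced) whose only stationary point is a saddle, matching the "no minimum/maximum" conclusion:

        import sympy as sp

        x, y = sp.symbols('x y', real=True)
        f = x**2 - y**2 + x*y        # hypothetical objective

        grad = [sp.diff(f, v) for v in (x, y)]
        stationary = sp.solve(grad, [x, y], dict=True)   # points where grad f = 0

        H = sp.hessian(f, (x, y))
        # 2x2 test: H[0,0] > 0 and det(H) > 0 -> minimum;
        #           H[0,0] < 0 and det(H) > 0 -> maximum; det(H) < 0 -> saddle.
        for pt in stationary:
            Hp = H.subs(pt)
            print(pt, Hp[0, 0], Hp.det())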

    (ii) Let  (without the ). For extreme points to exist, we require the gradient to vanish, i.e. when

    so when . Hessian:


    which is positive definite since  and . Hence there is a minimum at . In other words, , i.e.

    but hence

    (b) We need to solve simultaneously:

    (i) (ii)

    Solving (i)

    But from (ii), we have , so substituting this in for  we get , so

    . So if we choose , we get  and .

    Answer check:

    (i)

    Hence  and  are conjugate with respect to .

    (ii)

    Hence and are orthogonal
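    The answer check translates directly into two inner products, sketched here with hypothetical stand-ins for the matrix and the two direction vectors (the exam's values are not reproduced); the chosen pair happens to be both conjugate and orthogonal:

        import numpy as np

        A  = np.array([[3.0, 1.0],
                       [1.0, 3.0]])        # hypothetical symmetric matrix
        d1 = np.array([1.0, 1.0])
        d2 = np.array([1.0, -1.0])

        print(d1 @ A @ d2)   # 0  => d1 and d2 are conjugate with respect to A
        print(d1 @ d2)       # 0  => d1 and d2 are orthogonal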

    (c)  has dimension . Let

    where  is ,  is  (a column vector),  is  (a row vector) and  is  (a scalar). Verification:
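    The block partition (and its verification) can be sketched numerically with np.block, using hypothetical blocks B (n x n), c (n x 1 column), r (1 x n row) and s (1 x 1 scalar) in place of the exam's symbols:

        import numpy as np

        n = 3
        B = np.arange(9.0).reshape(n, n)   # hypothetical n x n block
        c = np.ones((n, 1))                # hypothetical column
        r = 2.0 * np.ones((1, n))          # hypothetical row
        s = np.array([[5.0]])              # hypothetical scalar block

        M = np.block([[B, c],
                      [r, s]])
        print(M.shape)   # (4, 4): the blocks assemble into the full matrix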