Image Editing in the Gradient Domain
Shai Avidan, Tel Aviv University
Slide Credits (partial list)
• Rick Szeliski
• Steve Seitz
• Alyosha Efros
• Yacov Hel-Or
• Marc Levoy
• Bill Freeman
• Fredo Durand
• Sylvain Paris
Image Composition
[Figure: source images composited into a target image]
Basics
• Images as scalar fields
– $I: \mathbb{R}^2 \to \mathbb{R}$
Vector Field
• A vector function $G: \mathbb{R}^2 \to \mathbb{R}^2$
• Each point (x,y) is associated with a vector (u,v)
G(x,y)=[ u(x,y) , v(x,y) ]
Gradient Field
• Partial derivatives of the scalar field
• Direction
 – Maximum rate of change of the scalar field
• Magnitude
 – Rate of change
• Not all vector fields are gradients of an image.
 – Only if they are curl-free (a.k.a. conservative)
 – What's the difference between a 1D and a 2D gradient field?
$\nabla I(x,y) = \left( \dfrac{\partial I}{\partial x},\ \dfrac{\partial I}{\partial y} \right)$
Continuous vs. Discrete
[Figure: image I(x,y) and its derivatives Ix, Iy]
Continuous case → derivative
Discrete case → Finite differences
$\dfrac{\partial I}{\partial x} \to [-1\ 1\ 0] * I, \qquad \dfrac{\partial I}{\partial y} \to [-1\ 1\ 0]^T * I$
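To make the finite differences concrete, here is a minimal Matlab sketch (not from the slides); any grayscale image works, cameraman.tif is assumed to be available via the Image Processing Toolbox, and the last forward difference is zero-padded:

I  = double(imread('cameraman.tif')) / 255;  % load any grayscale image, scale to [0,1]
Ix = [diff(I, 1, 2), zeros(size(I,1), 1)];   % I(x+1,y) - I(x,y), zero at the right edge
Iy = [diff(I, 1, 1); zeros(1, size(I,2))];   % I(x,y+1) - I(x,y), zero at the bottom edge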
Interpolation
$S$: a closed subset of $\mathbb{R}^2$
$\Omega$: a closed subset of $S$, with boundary $\partial\Omega$
$f$: unknown scalar function over $\Omega$
$f^*$: known scalar function over $S \setminus \Omega$
Intuition – hole filling
• 1D: [plot: a signal with a hole over the interval [x1, x2]]
• 2D: [image with a masked-out region]
Membrane Interpolation
Solve the following minimization problem:
$\min_f \iint_\Omega \|\nabla f\|^2$
subject to the Dirichlet boundary conditions $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Variational Methods to the Rescue!
Calculus:
When we want to minimize g(x) over the space of real values x, we differentiate and set g′(x) = 0.
What’s the derivative of a function?
Variational methods: express your problem as an energy minimization over a space of functions.
Derivative Definition
1D derivative: $f'(x) = \lim_{\varepsilon \to 0} \dfrac{f(x+\varepsilon) - f(x)}{\varepsilon}$
Multidimensional derivative for some direction vector w
$D_w f(\vec{x}) = \lim_{\varepsilon \to 0} \dfrac{f(\vec{x}+\varepsilon \vec{w}) - f(\vec{x})}{\varepsilon}$
We want to minimize $\int_{x_1}^{x_2} f'(x)^2\, dx$ with $f(x_1) = a$ and $f(x_2) = b$.
Assume we have a solution f, and try to define some notion of a 1D derivative with respect to a 1D parameter ε in a given direction of function space:
For a perturbation function η(x) that also respects the boundary conditions (i.e. η(x₁) = η(x₂) = 0) and a scalar ε, the integral
$\int_{x_1}^{x_2} \big( f'(x) + \varepsilon\, \eta'(x) \big)^2\, dx$ should be bigger than $\int_{x_1}^{x_2} f'(x)^2\, dx$ alone.
Calculus of Variations
Let's open the parentheses: $\int_{x_1}^{x_2} \big( f'(x)^2 + 2\varepsilon\, f'(x)\, \eta'(x) + \varepsilon^2\, \eta'(x)^2 \big)\, dx$
The third term is always positive and is negligible as ε goes to zero.
So differentiate the rest with respect to ε and set to zero: $\int_{x_1}^{x_2} 2\, \eta'(x)\, f'(x)\, dx = 0$
Integrate by parts: $\int_{x_1}^{x_2} \eta'(x)\, f'(x)\, dx = \big[ \eta(x)\, f'(x) \big]_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta(x)\, f''(x)\, dx$
where $\big[ f(x)\, g(x) \big]_a^b = f(b)\, g(b) - f(a)\, g(a)$.
And since η(x₁) = η(x₂) = 0, the expression in the square brackets is equal to zero.
And we are left with: $\int_{x_1}^{x_2} \eta(x)\, f''(x)\, dx = 0$
But since this must be true for every η, it holds that f''(x) = 0 everywhere.
Intuition
The minimum of $\int f'^2$ is the squared slope integrated over the interval. Locally, if the second derivative were not zero, the first derivative would be varying, which is bad, since we want $\int f'^2$ to be minimized.
Recap:
• Start with the functional we need to minimize
• Introduce the perturbation function
• Use the calculus of variations
• Set the derivative to zero
• Integrate by parts
• Obtain the solution
Euler-Lagrange Equation
A fundamental equation of the calculus of variations: if J is defined by an integral of the form
Equation (1): $J = \int_{x_1}^{x_2} F(x, f, f_x)\, dx$
then J has a stationary value if the following differential equation is satisfied:
Equation (2): $\dfrac{\partial F}{\partial f} - \dfrac{d}{dx}\, \dfrac{\partial F}{\partial f_x} = 0$
Recall, we want to solve the following minimization problem:
$\min_f \iint_\Omega \|\nabla f\|^2$
subject to the Dirichlet boundary conditions $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.
Membrane Interpolation
In our case: $F = \|\nabla f\|^2 = f_x^2 + f_y^2$
Then equation (2) becomes: $\dfrac{\partial F}{\partial f} - \dfrac{d}{dx}\dfrac{\partial F}{\partial f_x} - \dfrac{d}{dy}\dfrac{\partial F}{\partial f_y} = 0$
where:
$\dfrac{\partial F}{\partial f} = \dfrac{\partial (f_x^2 + f_y^2)}{\partial f} = 0$
$\dfrac{d}{dx}\dfrac{\partial F}{\partial f_x} = \dfrac{d}{dx}\dfrac{\partial (f_x^2 + f_y^2)}{\partial f_x} = \dfrac{d}{dx}\, 2 f_x = 2\,\dfrac{\partial^2 f}{\partial x^2}$
$\dfrac{d}{dy}\dfrac{\partial F}{\partial f_y} = \dfrac{d}{dy}\dfrac{\partial (f_x^2 + f_y^2)}{\partial f_y} = \dfrac{d}{dy}\, 2 f_y = 2\,\dfrac{\partial^2 f}{\partial y^2}$
and we get the Laplacian: $\dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} = \Delta f = 0$
Smooth image completion
Euler-Lagrange: $\arg\min_f \iint_\Omega \|\nabla f\|^2$ s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$
The minimum is achieved when:
$\Delta f = 0$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$
Discrete Approximation (Membrane interpolation)
$\Delta f = 0$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$
$\Delta f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}$
$\dfrac{\partial f}{\partial x} \approx f_{x+1,y} - f_{x,y}, \qquad \dfrac{\partial^2 f}{\partial x^2} \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y}$
$\Delta f(x,y) \approx f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y} = 0$
Discrete Approximation
In matrix form, each row of the sparse system applies the 5-point stencil (−4 on the diagonal, 1 for each of the four neighbors):
$\begin{pmatrix} \ddots & & \\ 1\ \cdots\ 1\ \ {-4}\ \ 1\ \cdots\ 1 & & \\ & & \ddots \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = b$
Each f_{x,y} is an unknown variable x_i; there are N unknowns (the pixel values).
This reduces to a sparse linear system of equations: the gradient constraints give A_x I = 0 and A_y I = 0, and the boundary conditions give A_boundary I = boundary. Stacking them all yields A x = b.
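As a concrete illustration (a minimal Matlab sketch, not the slides' code), one can assemble the 5-point Laplacian with Kronecker products, pin the border ring of pixels to known data f*, and solve; the grid size n and the harmonic test function are arbitrary choices:

n  = 64;
[X, Y] = meshgrid(linspace(0,1,n));
fstar  = X.^2 - Y.^2;                            % harmonic test data playing the role of f*
known  = false(n); known([1 n],:) = true; known(:,[1 n]) = true;   % border ring is known
e  = ones(n,1);
L1 = spdiags([e -2*e e], -1:1, n, n);            % 1-D second difference
A  = kron(speye(n), L1) + kron(L1, speye(n));    % each interior row is the 5-point stencil
b  = zeros(n*n, 1);
idx = find(known(:));                            % boundary rows become f_i = fstar_i
A(idx,:) = sparse(1:numel(idx), idx.', 1, numel(idx), n*n);
b(idx)   = fstar(idx);
f = reshape(A \ b, n, n);                        % Delta f = 0 inside, f = f* on the boundary

Since fstar is harmonic, the recovered f matches it up to solver tolerance, which makes this a handy sanity check.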
What’s in the picture?
What’s in the picture?
What’s in the picture?
Editing in Gradient Domain
• Given vector field G=(u(x,y),v(x,y)) (pasted gradient) in a bounded region Ω. Find the values of f in Ω that optimize:
$\min_f \iint_\Omega \|\nabla f - G\|^2 \quad \text{with } f|_{\partial\Omega} = f^*|_{\partial\Omega}$
[Figure: region Ω with unknown f inside the known image f*, guided by the vector field G = (u, v)]
Intuition - What if G is null?
• 1D: [plot]
• 2D: [image]
$\min_f \iint_\Omega \|\nabla f\|^2 \quad \text{with } f|_{\partial\Omega} = f^*|_{\partial\Omega}$
With G = 0 this reduces to membrane interpolation.
What if G is not null?
• 1D case
Seamlessly paste the source signal onto the target:
– Add a linear function so that the boundary condition is respected.
– The gradient error is equally distributed over Ω in order to respect the boundary condition.
2D case
From Perez et al. 2003
Poisson Equation
In our case: $F = \|\nabla f - G\|^2 = (f_x - G_x)^2 + (f_y - G_y)^2$
$\dfrac{\partial F}{\partial f} - \dfrac{d}{dx}\dfrac{\partial F}{\partial f_x} - \dfrac{d}{dy}\dfrac{\partial F}{\partial f_y} = 0$
where:
$\dfrac{d}{dx}\dfrac{\partial F}{\partial f_x} = \dfrac{d}{dx}\, 2(f_x - G_x) = 2\left(\dfrac{\partial^2 f}{\partial x^2} - \dfrac{\partial G_x}{\partial x}\right)$
$\dfrac{d}{dy}\dfrac{\partial F}{\partial f_y} = \dfrac{d}{dy}\, 2(f_y - G_y) = 2\left(\dfrac{\partial^2 f}{\partial y^2} - \dfrac{\partial G_y}{\partial y}\right)$
and we get:
$\dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} - \dfrac{\partial G_x}{\partial x} - \dfrac{\partial G_y}{\partial y} = 0 \quad\Longrightarrow\quad \Delta f = \dfrac{\partial G_x}{\partial x} + \dfrac{\partial G_y}{\partial y} = \operatorname{div} G$
Discrete Approximation (Poisson Cloning)
$\Delta f = \operatorname{div} G$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$
$\Delta f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}, \qquad \dfrac{\partial f}{\partial x} \approx f_{x+1,y} - f_{x,y}, \qquad \dfrac{\partial^2 f}{\partial x^2} \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y}$
$\Delta f(x,y) \approx f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y}$
$\operatorname{div} G = \dfrac{\partial G_x}{\partial x} + \dfrac{\partial G_y}{\partial y} \approx G_x(x,y) - G_x(x-1,y) + G_y(x,y) - G_y(x,y-1)$
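A small Matlab sketch of the backward-difference divergence above (toy fields u, v stand in for a real guidance field; columns index x, rows index y, with zero outside the image):

u = randn(64);  v = randn(64);                             % toy guidance field G = (u, v)
divG = [u(:,1), diff(u,1,2)] + [v(1,:); diff(v,1,1)];      % Gx(x,y)-Gx(x-1,y) + Gy(x,y)-Gy(x,y-1)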
Alternative Derivation (discrete notation)
• Let $D_x$ be a Toeplitz matrix implementing the derivative kernel, $\frac{\partial}{\partial x} = [-1\ 1\ 0]\,*$, so that $\frac{\partial I}{\partial x} = D_x I$:
$D_x = \begin{pmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \end{pmatrix}$
• Solve $\min_I \left\| \begin{pmatrix} D_x \\ D_y \end{pmatrix} I - \begin{pmatrix} u \\ v \end{pmatrix} \right\|^2$
• Normal equations: $(D_x^T D_x + D_y^T D_y)\, I = D_x^T u + D_y^T v$
• Note: $D_x^T$ corresponds to the flipped kernel, $[-1\ 1\ 0] \Rightarrow [0\ 1\ {-1}]$
Numerical Solution
• Discretize the Laplacian:
$\nabla^2 \equiv D_x^T D_x + D_y^T D_y$, which corresponds to convolving with $[-1\ 2\ -1]$ horizontally plus $[-1\ 2\ -1]^T$ vertically, i.e. with the 5-point stencil
$\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix} *$
• As a matrix, this is a sparse Toeplitz matrix: each row couples a pixel to itself (diagonal entry) and to its four neighbors (off-diagonal entries).
Comments:
– A is sparse.
– A is symmetric and invertible.
– If Ω is rectangular, A is a Toeplitz matrix.
– The size of A is ∼N×N.
– It is impractical to form or store A.
– It is impractical to invert A.
$(D_x^T D_x + D_y^T D_y)\, I = D_x^T u + D_y^T v \quad\Longleftrightarrow\quad A I = b$
Iterative Solution: Conjugate Gradient
• Solves a linear system Ax = b (in our case x = I)
• A is square, symmetric, positive semi-definite
• Advantages:
 – Fast!
 – No need to store A, only to compute Ax
 – In our case Ax can be computed with a single convolution
 – Can deal with constraints
Conjugate Gradient as a minimization problem
• Minimizes the quadratic form $f(x) = \tfrac{1}{2} x^T A x - b^T x$
• And since A is symmetric, the gradient is $\nabla f(x) = Ax - b$, so the minimum is at $Ax = b$
Steepest Descent Method
• Pick gradient direction r(i)
• Find optimum along this direction x(i)+αr(i)
[Figure: gradient direction; energy along the gradient]
Behavior of gradient descent
• Zigzags or goes straight, depending on whether we're lucky
 – Ends up taking multiple steps in the same direction
Conjugate gradient
• For each step i: – Take the residual d(i)=b-Ax(i) (= -gradient)
– Make it A-orthogonal to the previous ones
– Find minimum along this direction
• Needs at most N iterations.
• Matlab command:
x=cgs(A,b)
A can be a function handle afun
such that afun(x) returns A*x
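The function handle makes the "Ax by a single convolution" point concrete. A minimal Matlab sketch (my assumptions, not the slides' code: zero Dirichlet padding via conv2 'same', the positive-definite −Δ sign convention, and a toy right-hand side):

n    = 64;
lap  = [0 -1 0; -1 4 -1; 0 -1 0];              % minus-Laplacian stencil (positive definite)
afun = @(x) reshape(conv2(reshape(x,n,n), lap, 'same'), n*n, 1);   % A*x as one convolution
b    = randn(n*n, 1);                          % toy right-hand side
f    = cgs(afun, b, 1e-8, 500);                % solves the system without forming or storing A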
Solving the Poisson equation with boundary conditions
• Define a circumscribing square Π = Ω ∪ Ω*
 – Let Ω ⊂ Π denote the edited image area.
 – Let Ω* = Π − Ω denote the surrounding area.
• The requirements can be expressed as a linear set of equations:
$(D_x^T D_x + D_y^T D_y)\, I \big|_{\Omega} = (D_x^T D_x + D_y^T D_y)\, S \big|_{\Omega}$ (match the Laplacian of the source S inside Ω)
$I \big|_{\Omega^*} = T \big|_{\Omega^*}$ (keep the target T outside Ω)
• Stacking both blocks gives A I = b, solved with x=cgs(A,b); inside Ω, A I is computed by convolving with the Laplacian kernel $k = \partial_x * \partial_x^T + \partial_y * \partial_y^T$.
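Putting the pieces together, a compact Matlab sketch of the whole pipeline (a hypothetical helper, poisson_clone.m, not the authors' code; it assumes same-size grayscale S and T, and a logical mask M for Ω that does not touch the image border):

function I = poisson_clone(S, T, M)
    [h, w] = size(T);  N = h * w;
    e  = ones(h,1);  Lh = spdiags([e -2*e e], -1:1, h, h);   % 1-D second difference (rows)
    e  = ones(w,1);  Lw = spdiags([e -2*e e], -1:1, w, w);   % 1-D second difference (cols)
    A  = kron(speye(w), Lh) + kron(Lw, speye(h));            % 2-D Laplacian on the pixel grid
    b  = A * S(:);                                           % Laplacian of the source = div of its gradients
    out = find(~M(:));                                       % pixels outside Omega keep the target
    A(out,:) = sparse(1:numel(out), out.', 1, numel(out), N);
    b(out)   = T(out);
    I = reshape(A \ b, h, w);                                % or cgs(A, b) for large images
end

Calling I = poisson_clone(S, T, M) then reproduces the "paste in gradient domain" behavior illustrated next.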
Image stitching
Gradient Domain Composition
[Figure: Cut & Paste vs. Paste in Gradient Domain]
Another example
Transparent Cloning
$\nabla I_\Omega = \dfrac{\nabla S_\Omega + \nabla T_\Omega}{2} \qquad \text{or} \qquad \nabla I_\Omega = \max\!\big( \nabla S_\Omega,\ \nabla T_\Omega \big)$
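One plausible reading of the max rule is a per-pixel choice by gradient magnitude, as in Pérez et al.'s mixed gradients; a toy Matlab sketch with stand-in gradient fields:

Sx = randn(64); Sy = randn(64);                % stand-in source gradients
Tx = randn(64); Ty = randn(64);                % stand-in target gradients
useS = (Sx.^2 + Sy.^2) > (Tx.^2 + Ty.^2);      % where the source gradient dominates
Gx = Sx .* useS + Tx .* ~useS;                 % mixed guidance field, fed to the Poisson solve
Gy = Sy .* useS + Ty .* ~useS;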
Another example
Changing local illumination
Defect concealment
High Dynamic Range Compression
Short exposure: the inside is dark
High Dynamic Range Compression
Long exposure: the outside is saturated
Manipulate gradients
α is set to 0.1 of the average gradient magnitude
β is set between 0.8 and 0.9
where the attenuated gradient is $G(x,y) = \nabla H(x,y)\, \Phi(x,y)$, with $\Phi(x,y) = \dfrac{\alpha}{\|\nabla H(x,y)\|} \left( \dfrac{\|\nabla H(x,y)\|}{\alpha} \right)^{\beta}$ (the attenuation function of Fattal et al. 2002).
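A rough single-scale Matlab sketch of this attenuation (hedged: the formula follows Fattal et al. 2002, which these slides appear to summarize; the stand-in log-luminance H and β = 0.85 are arbitrary choices):

H   = log(1 + 100 * rand(64));                 % stand-in log-luminance image
Hx  = [diff(H,1,2), zeros(64,1)];              % forward differences
Hy  = [diff(H,1,1); zeros(1,64)];
mag = hypot(Hx, Hy) + eps;                     % gradient magnitude, guarded against zero
alpha = 0.1 * mean(mag(:));  beta = 0.85;
Phi = (alpha ./ mag) .* (mag ./ alpha).^beta;  % shrinks gradients larger than alpha
Gx  = Hx .* Phi;  Gy = Hy .* Phi;              % attenuated field; then solve Delta f = div G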
High Dynamic Range Compression
[Figures: short exposure, long exposure, desired image, software tone mapping]
Shadow Removal
Color2Grey Algorithm
Optimization:
$\min \sum_i \sum_{j=i-\mu}^{i+\mu} \big( (g_i - g_j) - \delta_{ij} \big)^2$
If δij = ΔL, then the ideal image is g; otherwise it is selectively modulated by ΔCij.
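As a sanity check on this objective (a toy Matlab sketch, not the authors' code): with a full neighborhood (µ = N) and antisymmetric targets δij, one minimizer has the closed form g_i = (1/N) Σ_j δij, determined up to an additive constant:

N = 5;
d = randn(N);  d = d - d';                     % toy antisymmetric target differences
g = sum(d, 2) / N;                             % zero-mean minimizer of sum((g_i - g_j) - d_ij)^2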
Results
[Figures: Original vs. Photoshop Grey vs. Color2Grey, and Color2Grey+Color, on several examples]